Disk Read/Write Error

Questions about SNMP, Power, System, Logs, disk, & RAID.
GEoMaNTiK
Know my way around
Posts: 183
Joined: Fri Apr 25, 2008 12:00 am
Location: Melbourne

Re: Disk Read/Write Error

Post by GEoMaNTiK » Fri Nov 27, 2009 4:32 am

QNAPJason wrote:Hi GEoMaNTiK,
Our tech support will provide assistance to you shortly. Please check your PM.

thanks

Jason


Hi Jason,

Yes, Tech Support were in touch and have organised a replacement for my problem TS-509.

Once again, I have to say that QNAP has the BEST support/service I've seen, simply Grade A+.

Thanks for your help Jason.
QNAP TS-119P+ 3.4.3 build 0520T (WD 500GB 2.5" Scorpio Blue)
QNAP TS-119 3.4.3 build 0520T (WD 640GB 3.5" AV-GP Green Power)
Lian-Li Q08B NAS Case w/Dual Core ATOM 1.8GHz (WD 1.5TB 3.5" Caviar Green x4 and WD 1.0TB 3.5" Caviar Green x2) Microsoft Server 2008 R2
Lian-Li PC-V354B Case w/Intel Dual Core i3-530 (Hitachi 2TB 3.5" Deskstar x7) Microsoft Server 2008 R2

QNAP NMP-1000 1.1.9 build 0107T x2
Popcorn Hour A200 (WD 320GB 2.5" Green Power) [Streaming Media via TS-509]
VortexBox 1.7 (HDD: WD 2TB Caviar Green) streaming to Logitech Squeezebox Touch & Boom

thanatos74
Starting out
Posts: 46
Joined: Wed Jan 21, 2009 5:46 pm
Location: Munich

Re: Disk Read/Write Error

Post by thanatos74 » Fri Dec 04, 2009 4:36 am

I also have/had that problem on my TS-509.
I'm running the latest firmware, 3.1.2 Build 1014T, on the device.
I have 5 x Seagate 1.5 TB ST31500341AS disks (all of them with firmware version CC1H) configured as RAID 5.

Until yesterday everything went just fine.
Because I wanted to get rid of the encryption on my existing RAID 5, I deleted the RAID and created a new one (also RAID 5).
With the previous configuration the device ran for about a year without any problems (never touch a running system :( ).
I started the RAID creation process yesterday evening. When I came back this morning, the setup had finished, but the newly created RAID was already degraded because disk 1 had failed... I/O error.
Sad to say, I was too angry to look at dmesg before just turning off the device... :roll:

First I tried to reactivate the disk in the 509 - without any success.
Needless to say, SMART reported no problems with that particular disk.
My next step was to remove the disk from the NAS and connect it to another PC running Seagate's SeaTools. All the SeaTools checks finished without any errors or warnings, and SeaTools also confirmed that the disk's SMART status is OK.
So far, the disk itself seemed to have no problems.
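As an aside: an overall SMART status of "GOOD" only means no attribute has crossed its vendor failure threshold; the raw attribute counters (reallocated sectors, pending sectors, interface CRC errors) usually tell you more. A rough sketch of pulling those rows out of `smartctl -A /dev/sdX` output - the sample rows and their values below are invented, laid out in the usual smartmontools column format:

```shell
# SMART "GOOD" only says no attribute crossed its failure threshold;
# the raw counters are more telling. The rows below are made-up samples
# in the smartmontools layout -- on a real disk run: smartctl -A /dev/sdX
cat > smart.txt <<'EOF'
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       12
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       3
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
EOF
# print attribute name and raw value for the three interesting IDs
awk '$1==5 || $1==197 || $1==199 { print $2, $NF }' smart.txt
```

A nonzero pending-sector count on a disk the NAS keeps ejecting would point at the disk; all zeros (as several posters report) points back at the enclosure or controller.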

Next, I used SeaTools again to erase the disk (I didn't wait until the whole disk was erased - I only started the process, waited a few minutes and interrupted it again).

When I put the drive back into the 509, it was accepted again without any errors!!! :D
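The partial erase probably works because zeroing just the first part of the drive wipes the partition table and RAID metadata, so the NAS treats it as a fresh disk. A small demonstration of the idea on a scratch file (disk.img is a stand-in name; on a PC you would point dd at the actual device, after triple-checking the drive letter):

```shell
# Simulate the "erase only the beginning" trick on a scratch file.
# disk.img stands in for the real drive -- aim dd at /dev/sdX instead
# (and be very sure which letter it is) to do this for real.
dd if=/dev/urandom of=disk.img bs=1M count=4 2>/dev/null   # fake disk contents
dd if=/dev/zero of=disk.img bs=1M count=1 conv=notrunc 2>/dev/null
# the first MiB is now all zero bytes; the remaining 3 MiB are untouched
head -c 1048576 disk.img | tr -d '\0' | wc -c              # prints 0
```

`conv=notrunc` is what keeps the rest of the "disk" intact while only the start gets overwritten - the same effect as interrupting a full erase after a few minutes.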

Right now the RAID rebuild is running again - this will take some more hours and hopefully complete successfully.

I'll come back to you guys when I definitely know the end of the story...

_thanatos
~ Two hours of trial and error can save ten minutes of RTFM ~

loonlife
New here
Posts: 8
Joined: Thu Dec 03, 2009 3:20 pm

Re: Disk Read/Write Error

Post by loonlife » Fri Dec 04, 2009 6:16 pm

Hi all,

I have exactly the same problem with my TS-509 Pro and 5 x Samsung HD154UI (1.5 TB) configured as RAID 5. Everything was OK until yesterday night, when the green light of disk 1 switched to a flashing red. The config menu says that disk 1 has a "Disk Read/Write Error", but S.M.A.R.T. told me disk 1 is "GOOD".
All attempts to reactivate the disk failed, so I connected disk 1 to my PC and analyzed it with the Samsung disk tools. After ~6h of testing the Samsung software told me "Disk is OK".
So I low-level formatted the disk with the Samsung tools and put it back into the NAS. The NAS accepted the disk and immediately started to resync.

Well, maybe it's a workaround to resync the RAID every few weeks (whenever the error occurs) with a freshly formatted disk, but I don't think that's the idea of the inventor.

bye

thanatos74
Starting out
Posts: 46
Joined: Wed Jan 21, 2009 5:46 pm
Location: Munich

Re: Disk Read/Write Error

Post by thanatos74 » Sat Dec 05, 2009 7:50 am

thanatos74 wrote:Also have/had that problem on my TS 509.
[...]
_thanatos


OK, here's the news.
The restarted rebuild went very well and without any problems, but some hours later drive 1 dropped out of the RAID again...

At that precise moment there was heavy I/O, because I was copying about 1 TB of data from a single drive in the device to the RAID 5.
About 800 GB had been copied when the error occurred.

Here is the complete dmesg output, grabbed some minutes later:

Code:

[/] # dmesg
, 0, 0].
Retried request is finished...MV_Request: Cdb[28, 0, 0,53, 14,84, 0, 0, 80, 0, 0, 0].
appRequest.cgi[27480]: segfault at 089ef1c8 eip 0804cc59 esp bfd01ddc error 4
md0: bitmap file is out of date (0 < 31451) -- forcing full recovery
md0: bitmap file is out of date, doing full recovery
md0: bitmap initialized from disk: read 11/11 pages, set 357317 bits
created bitmap (175 pages) for device md0
Interrupt Error: 0x40000080 orgIntStatus: 0x40000081 completeSlot=0x2.
Toggle CMD register start stop bit at port 0x0.
Abort error requests....
MV_Request: Cdb[28, 0,25,6a, cc,84, 0, 0, 78, 0, 0, 0].
Abort error requests....
MV_Request: Cdb[2a, 0,25,6a, d2,fc, 0, 1,  0, 0, 0, 0].
Abort error requests....
MV_Request: Cdb[2a, 0,25,6a, d4,fc, 0, 1,  0, 0, 0, 0].
Abort error requests....
MV_Request: Cdb[2a, 0,25,6a, cc,fc, 0, 1,  0, 0, 0, 0].
Abort error requests....
MV_Request: Cdb[2a, 0,25,6a, cf,fc, 0, 1,  0, 0, 0, 0].
Abort error requests....
MV_Request: Cdb[2a, 0,25,6a, ce,fc, 0, 1,  0, 0, 0, 0].
Abort error requests....
MV_Request: Cdb[2a, 0,25,6a, d3,fc, 0, 1,  0, 0, 0, 0].
Abort error requests....
MV_Request: Cdb[2a, 0,25,6a, d0,fc, 0, 1,  0, 0, 0, 0].
Abort error requests....
MV_Request: Cdb[2a, 0,25,6a, d1,fc, 0, 1,  0, 0, 0, 0].
Abort error requests....
MV_Request: Cdb[2a, 0,25,6a, cd,fc, 0, 1,  0, 0, 0, 0].
Device_IssueReadLogExt on device 0x0.
Read Log Ext is finished on device 0x0.
Retry request...MV_Request: Cdb[2a, 0,25,6a, cd,fc, 0, 1,  0, 0, 0, 0].
Retried request is finished...MV_Request: Cdb[2a, 0,25,6a, cd,fc, 0, 1,  0, 0, 0, 0].
Retry request...MV_Request: Cdb[2a, 0,25,6a, d1,fc, 0, 1,  0, 0, 0, 0].
Retried request is finished...MV_Request: Cdb[2a, 0,25,6a, d1,fc, 0, 1,  0, 0, 0, 0].
Retry request...MV_Request: Cdb[2a, 0,25,6a, d0,fc, 0, 1,  0, 0, 0, 0].
Retried request is finished...MV_Request: Cdb[2a, 0,25,6a, d0,fc, 0, 1,  0, 0, 0, 0].
Retry request...MV_Request: Cdb[2a, 0,25,6a, d3,fc, 0, 1,  0, 0, 0, 0].
Retried request is finished...MV_Request: Cdb[2a, 0,25,6a, d3,fc, 0, 1,  0, 0, 0, 0].
Retry request...MV_Request: Cdb[2a, 0,25,6a, ce,fc, 0, 1,  0, 0, 0, 0].
Retried request is finished...MV_Request: Cdb[2a, 0,25,6a, ce,fc, 0, 1,  0, 0, 0, 0].
Retry request...MV_Request: Cdb[2a, 0,25,6a, cf,fc, 0, 1,  0, 0, 0, 0].
Retried request is finished...MV_Request: Cdb[2a, 0,25,6a, cf,fc, 0, 1,  0, 0, 0, 0].
Retry request...MV_Request: Cdb[2a, 0,25,6a, cc,fc, 0, 1,  0, 0, 0, 0].
Retried request is finished...MV_Request: Cdb[2a, 0,25,6a, cc,fc, 0, 1,  0, 0, 0, 0].
Retry request...MV_Request: Cdb[2a, 0,25,6a, d4,fc, 0, 1,  0, 0, 0, 0].
Retried request is finished...MV_Request: Cdb[2a, 0,25,6a, d4,fc, 0, 1,  0, 0, 0, 0].
Retry request...MV_Request: Cdb[2a, 0,25,6a, d2,fc, 0, 1,  0, 0, 0, 0].
Retried request is finished...MV_Request: Cdb[2a, 0,25,6a, d2,fc, 0, 1,  0, 0, 0, 0].
Retry request...MV_Request: Cdb[28, 0,25,6a, cc,84, 0, 0, 78, 0, 0, 0].
Retried request is finished...MV_Request: Cdb[28, 0,25,6a, cc,84, 0, 0, 78, 0, 0, 0].
Port_Monitor: Running_Slot=0x3d50850b.
MV_Request: Cdb[2a, 0,2e,2f, 54,84, 0, 1,  0, 0, 0, 0].
MV_Request: Cdb[2a, 0,2e,2f, 59,84, 0, 1,  0, 0, 0, 0].
MV_Request: Cdb[2a, 0,2e,2f, 5a,84, 0, 0, 40, 0, 0, 0].
MV_Request: Cdb[2a, 0,2e,2f, 50,84, 0, 1,  0, 0, 0, 0].
MV_Request: Cdb[2a, 0,2e,2f, 51,84, 0, 1,  0, 0, 0, 0].
MV_Request: Cdb[28, 0,2e,2f, 50,44, 0, 0, 40, 0, 0, 0].
MV_Request: Cdb[2a, 0,2e,2f, 57,84, 0, 1,  0, 0, 0, 0].
MV_Request: Cdb[2a, 0, 0, 0, 89,df, 0, 0,  8, 0, 0, 0].
MV_Request: Cdb[2a, 0,2e,2f, 52,84, 0, 1,  0, 0, 0, 0].
MV_Request: Cdb[2a, 0,2e,2f, 53,84, 0, 1,  0, 0, 0, 0].
MV_Request: Cdb[2a, 0,2e,2f, 56,84, 0, 1,  0, 0, 0, 0].
MV_Request: Cdb[2a, 0,2e,2f, 58,84, 0, 1,  0, 0, 0, 0].
MV_Request: Cdb[2a, 0,2e,2f, 55,84, 0, 1,  0, 0, 0, 0].
ENABLE_WRITE_CACHE (current: enabled).
__MV__ reset handler f7c6dc80.
sd 2:0:0:0: Device offlined - not ready after error recovery
sd 2:0:0:0: Device offlined - not ready after error recovery
sd 2:0:0:0: Device offlined - not ready after error recovery
sd 2:0:0:0: Device offlined - not ready after error recovery
sd 2:0:0:0: Device offlined - not ready after error recovery
sd 2:0:0:0: Device offlined - not ready after error recovery
sd 2:0:0:0: Device offlined - not ready after error recovery
sd 2:0:0:0: [sda] Result: hostbyte=DID_OK driverbyte=DRIVER_TIMEOUT,SUGGEST_OK
end_request: I/O error, dev sda, sector 774853764
raid5: Disk failure on sda3, disabling device. Operation continuing on 3 devices
sd 2:0:0:0: [sda] Result: hostbyte=DID_OK driverbyte=DRIVER_TIMEOUT,SUGGEST_OK
end_request: I/O error, dev sda, sector 774854020
sd 2:0:0:0: [sda] Result: hostbyte=DID_OK driverbyte=DRIVER_TIMEOUT,SUGGEST_OK
end_request: I/O error, dev sda, sector 774854276
sd 2:0:0:0: [sda] Result: hostbyte=DID_OK driverbyte=DRIVER_TIMEOUT,SUGGEST_OK
end_request: I/O error, dev sda, sector 774854532
sd 2:0:0:0: [sda] Result: hostbyte=DID_OK driverbyte=DRIVER_TIMEOUT,SUGGEST_OK
end_request: I/O error, dev sda, sector 774854788
sd 2:0:0:0: [sda] Result: hostbyte=DID_OK driverbyte=DRIVER_TIMEOUT,SUGGEST_OK
end_request: I/O error, dev sda, sector 774855044
sd 2:0:0:0: [sda] Result: hostbyte=DID_OK driverbyte=DRIVER_TIMEOUT,SUGGEST_OK
end_request: I/O error, dev sda, sector 774855300
sd 2:0:0:0: rejecting I/O to offline device
sd 2:0:0:0: rejecting I/O to offline device
md: super_written gets error=-5, uptodate=0
sd 2:0:0:0: rejecting I/O to offline device
sd 2:0:0:0: rejecting I/O to offline device
md: super_written gets error=-5, uptodate=0
raid1: Disk failure on sda1, disabling device.
        Operation continuing on 4 devices
sd 2:0:0:0: rejecting I/O to offline device
sd 2:0:0:0: rejecting I/O to offline device
sd 2:0:0:0: rejecting I/O to offline device
md: super_written gets error=-5, uptodate=0
sd 2:0:0:0: rejecting I/O to offline device
RAID1 conf printout:
 --- wd:4 rd:5
 disk 0, wo:1, o:0, dev:sda1
 disk 1, wo:0, o:1, dev:sdb1
 disk 2, wo:0, o:1, dev:sdc1
 disk 3, wo:0, o:1, dev:sdd1
 disk 4, wo:0, o:1, dev:sde1
RAID1 conf printout:
 --- wd:4 rd:5
 disk 1, wo:0, o:1, dev:sdb1
 disk 2, wo:0, o:1, dev:sdc1
 disk 3, wo:0, o:1, dev:sdd1
 disk 4, wo:0, o:1, dev:sde1
RAID5 conf printout:
 --- rd:4 wd:3
 disk 0, o:0, dev:sda3
 disk 1, o:1, dev:sdb3
 disk 2, o:1, dev:sdc3
 disk 3, o:1, dev:sdd3
RAID5 conf printout:
 --- rd:4 wd:3
 disk 1, o:1, dev:sdb3
 disk 2, o:1, dev:sdc3
 disk 3, o:1, dev:sdd3
sd 2:0:0:0: rejecting I/O to offline device
sd 2:0:0:0: rejecting I/O to offline device
active port 0 :139
active port 1 :445
active port 2 :20
sd 2:0:0:0: rejecting I/O to offline device
sd 2:0:0:0: rejecting I/O to offline device
raid1: Disk failure on sda2, disabling device.
        Operation continuing on 1 devices
RAID1 conf printout:
 --- wd:1 rd:2
 disk 0, wo:1, o:0, dev:sda2
 disk 1, wo:0, o:1, dev:sdb2
RAID1 conf printout:
 --- wd:1 rd:2
 disk 1, wo:0, o:1, dev:sdb2
RAID1 conf printout:
 --- wd:1 rd:2
 disk 0, wo:1, o:1, dev:sde2
 disk 1, wo:0, o:1, dev:sdb2
RAID1 conf printout:
 --- wd:1 rd:2
 disk 0, wo:1, o:1, dev:sde2
 disk 1, wo:0, o:1, dev:sdb2
md: recovery of RAID array md5
md: minimum _guaranteed_  speed: 2000000 KB/sec/disk.
md: using maximum available idle IO bandwidth (but not more than 2000000 KB/sec) for recovery.
md: using 128k window, over a total of 530048 blocks.
md: unbind<sda2>
md: export_rdev(sda2)
md: unbind<sda1>
md: export_rdev(sda1)
raid1: Disk failure on sda4, disabling device.
        Operation continuing on 3 devices
md: unbind<sda4>
md: export_rdev(sda4)
md: unbind<sda3>
md: export_rdev(sda3)
active port 0 :139
active port 1 :445
active port 2 :20
md: md5: recovery done.
RAID1 conf printout:
 --- wd:2 rd:2
 disk 0, wo:0, o:1, dev:sde2
 disk 1, wo:0, o:1, dev:sdb2
[/] # dmesg


Again I removed the disk from the NAS, formatted it on my PC and reinserted it - and immediately the RAID rebuild started again.
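A quick way to see which device a log like the one above is actually blaming is to tally the `end_request` I/O errors per disk. A sketch using a few lines copied from the output (dmesg.log is just a hypothetical scratch file name; on the NAS you would save the real output with `dmesg > dmesg.log` first):

```shell
# Tally I/O errors per device from saved dmesg output. The sample lines
# are copied from the log above; dmesg.log is a scratch file here.
cat > dmesg.log <<'EOF'
end_request: I/O error, dev sda, sector 774853764
end_request: I/O error, dev sda, sector 774854020
end_request: I/O error, dev sda, sector 774854276
end_request: I/O error, dev sdb, sector 1060096
EOF
# field 5 is the device name with a trailing comma; strip it and count
awk '/end_request: I\/O error/ { d=$5; sub(/,$/, "", d); n[d]++ }
     END { for (d in n) print d, n[d] }' dmesg.log | sort
```

If only one slot ever shows up in the tally, the drive (or that slot's port) is suspect; errors spread across several devices point more toward the controller or firmware.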

The bad thing is that I'm really feeling very uneasy about my NAS now... I think I will never trust that device completely again :(
Why are disks dropping out of the RAID?
When will this happen next (I'm quite sure it WILL happen again)?
When will there be a final solution from QNAP for this issue?

And where should I store my important data now??
One of the reasons I bought myself a prebuilt NAS system (and did not assemble one myself) was that I expected those systems to run stably and reliably out of the box... obviously I was wrong.
Rebuilding the RAID every few days can't be the solution, that's clear!

At the moment I'm quite disappointed.

_thanatos

GEoMaNTiK
Know my way around
Posts: 183
Joined: Fri Apr 25, 2008 12:00 am
Location: Melbourne

Re: Disk Read/Write Error

Post by GEoMaNTiK » Sat Dec 05, 2009 8:26 am

thanatos74 wrote:And where should I store my important data now??
[...]
_thanatos


Out of interest mate, when did you purchase your 509, and has it already been replaced with a newer model?

thanatos74
Starting out
Posts: 46
Joined: Wed Jan 21, 2009 5:46 pm
Location: Munich

Re: Disk Read/Write Error

Post by thanatos74 » Sat Dec 05, 2009 4:00 pm

I bought my device in January this year and no, it has not been replaced.
Did I miss something here? Until a few days ago I had no problems, so from my point of view there was no need to replace it.

In the meantime I have also opened a ticket with QNAP technical support...

woodhouse
New here
Posts: 3
Joined: Sat Dec 05, 2009 6:01 pm

Re: Disk Read/Write Error

Post by woodhouse » Sat Dec 05, 2009 6:22 pm

Hi folks,

I can't believe the same thing is happening to other guys in the same period as it happened to me. Yesterday morning my disk 3 dropped out of the RAID without any reason. I tried the same stuff as most of the other posters did, without any success.

I run my RAID with five Samsung RAID Edition HE103UJ (1 TB) disks configured as RAID 5. I bought the box in August '09 and everything was OK until yesterday. After the crash I checked the disk by connecting it to my computer via eSATA - and all checks finished without any errors. I bought a completely new disk (same brand, same type) and started to rebuild the RAID array. After the rebuild finished, everything was OK for about ~3 hours, then - and I couldn't believe it - the BRAND NEW DISK dropped out of array slot THREE (again).

What's going on with this box? I paid a lot of money for an 'out of the box' solution and all I expect is 100% stability. I don't give a f*** about all the features and gimmicks and QPKGs and toy stuff. The first thing I expect for my money is a RELIABLE RAID; if the camera surveillance stuff or the multimedia stuff is buggy, hey, who cares - but the RAID is buggy? That's like buying a car that doesn't drive... but hey, the route guidance works awesome!

I can't trust my box anymore, and I'm toying with the idea of buying another NAS and using my 509 to back up the new one...

Is there some kind of freakin' counter that runs all 509s straight to ** at the same time?
Please fix the firmware... fast! Thank you very much

thanatos74
Starting out
Posts: 46
Joined: Wed Jan 21, 2009 5:46 pm
Location: Munich

Re: Disk Read/Write Error

Post by thanatos74 » Sat Dec 05, 2009 6:37 pm

Very curious...

Woodhouse, I agree with you - the most important thing about a NAS is reliability.
A friend of mine also owns a brand-new TS-509 - he bought it just 2 or 3 weeks ago (along with 5 Samsung disks).
At the same time my device started to make "RAID trouble", so did his!!!
It was really the same night that both of our devices dropped disks out of the RAID (and no, there is no connection whatsoever between those two devices).

Coincidence?

thanatos74
Starting out
Posts: 46
Joined: Wed Jan 21, 2009 5:46 pm
Location: Munich

Re: Disk Read/Write Error

Post by thanatos74 » Sun Dec 06, 2009 4:41 pm

Things got even worse yesterday.
During the RAID rebuild, the system started to reboot itself and even crashed twice - when the crash happened, the system responded to nothing. Even the pushbutton on the front had no effect at all.
What also happened was that ALL of the drives "disappeared" and only showed up again after rebooting.

Well, I'm running out of ideas now.
The last thing I will try today is to use "System Reset" to bring the system back to its delivery state. After that I will reinstall the latest firmware. Hope that helps.
If not, it really seems that I'm facing some kind of hardware problem.
Maybe QNAP can help?

barjantoz
Starting out
Posts: 11
Joined: Mon Aug 10, 2009 1:13 pm

Re: Disk Read/Write Error

Post by barjantoz » Sun Dec 06, 2009 7:01 pm

Not sure if this is related. My TS-439 Pro had been running on RAID 5 without any problem for 129 days (non-stop) before it completely shut down by itself a few hours ago. Now it's stuck at "System Booting".

thanatos74
Starting out
Posts: 46
Joined: Wed Jan 21, 2009 5:46 pm
Location: Munich

Re: Disk Read/Write Error

Post by thanatos74 » Sun Dec 06, 2009 7:10 pm

:!: :?: :!: :?: :!: :?: :!: :?: :!: :?:

I give up now...

After completely resetting the system to its defaults, formatting each of the 5 disks on a separate computer and reinstalling the firmware (I even tried 3.2.0 Build 1107T Beta), nothing has changed for the better.
I created a new RAID 5. The RAID and the ext4 filesystem were created without any errors.
After that the NAS started to resync the RAID. This ran for some minutes, when suddenly disks 1 to 4 started to get ejected, one by one!!
It started with disk 1 and went on until disk 4 - for some reason, disk 5 is still there.
output of dmesg:

Code:

[~] # dmesg
read error for block 59648
raid1: sdb: unrecoverable I/O read error for block 59776
scsi 2:0:5:0: rejecting I/O to dead device
end_request: I/O error, dev sdb, sector 1060096
md: super_written gets error=-5, uptodate=0
scsi 2:0:5:0: rejecting I/O to dead device
end_request: I/O error, dev sdb, sector 1060096
md: super_written gets error=-5, uptodate=0
RAID1 conf printout:
 --- wd:1 rd:2
 disk 0, wo:1, o:1, dev:sde2
 disk 1, wo:0, o:1, dev:sdb2
RAID1 conf printout:
 --- wd:1 rd:2
 disk 0, wo:1, o:1, dev:sde2
 disk 1, wo:0, o:1, dev:sdb2
md: recovery of RAID array md5
md: minimum _guaranteed_  speed: 200000 KB/sec/disk.
md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
md: using 128k window, over a total of 530048 blocks.
md: resuming recovery of md5 from checkpoint.
raid1: sdb: unrecoverable I/O read error for block 59904
md: md5: recovery done.
raid1: sdb: unrecoverable I/O read error for block 60032
raid1: sdb: unrecoverable I/O read error for block 60160
raid1: sdb: unrecoverable I/O read error for block 60288
scsi 2:0:5:0: rejecting I/O to dead device
end_request: I/O error, dev sdb, sector 1060096
md: super_written gets error=-5, uptodate=0
scsi 2:0:5:0: rejecting I/O to dead device
end_request: I/O error, dev sdb, sector 1060096
md: super_written gets error=-5, uptodate=0
RAID1 conf printout:
 --- wd:1 rd:2
 disk 0, wo:1, o:1, dev:sde2
 disk 1, wo:0, o:1, dev:sdb2
RAID1 conf printout:
 --- wd:1 rd:2
 disk 0, wo:1, o:1, dev:sde2
 disk 1, wo:0, o:1, dev:sdb2
md: recovery of RAID array md5
md: minimum _guaranteed_  speed: 200000 KB/sec/disk.
md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
md: using 128k window, over a total of 530048 blocks.
md: resuming recovery of md5 from checkpoint.
raid1: sdb: unrecoverable I/O read error for block 60416
raid1: sdb: unrecoverable I/O read error for block 60544
md: md5: recovery done.
raid1: sdb: unrecoverable I/O read error for block 60672
raid1: sdb: unrecoverable I/O read error for block 60800
scsi 2:0:5:0: rejecting I/O to dead device
end_request: I/O error, dev sdb, sector 1060096
md: super_written gets error=-5, uptodate=0
scsi 2:0:5:0: rejecting I/O to dead device
end_request: I/O error, dev sdb, sector 1060096
md: super_written gets error=-5, uptodate=0
RAID1 conf printout:
 --- wd:1 rd:2
 disk 0, wo:1, o:1, dev:sde2
 disk 1, wo:0, o:1, dev:sdb2
RAID1 conf printout:
 --- wd:1 rd:2
 disk 0, wo:1, o:1, dev:sde2
 disk 1, wo:0, o:1, dev:sdb2
md: recovery of RAID array md5
md: minimum _guaranteed_  speed: 200000 KB/sec/disk.
md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
md: using 128k window, over a total of 530048 blocks.
md: resuming recovery of md5 from checkpoint.
raid1: sdb: unrecoverable I/O read error for block 60928
raid1: sdb: unrecoverable I/O read error for block 61056
raid1: sdb: unrecoverable I/O read error for block 61184
raid1: sdb: unrecoverable I/O read error for block 61312
raid1: sdb: unrecoverable I/O read error for block 61440
raid1: sdb: unrecoverable I/O read error for block 61568
raid1: sdb: unrecoverable I/O read error for block 61696
raid1: sdb: unrecoverable I/O read error for block 61824
raid1: sdb: unrecoverable I/O read error for block 61952
raid1: sdb: unrecoverable I/O read error for block 62080
raid1: sdb: unrecoverable I/O read error for block 62208
raid1: sdb: unrecoverable I/O read error for block 62336
raid1: sdb: unrecoverable I/O read error for block 62464
raid1: sdb: unrecoverable I/O read error for block 62592
raid1: sdb: unrecoverable I/O read error for block 62720
raid1: sdb: unrecoverable I/O read error for block 62848
raid1: sdb: unrecoverable I/O read error for block 62976
raid1: sdb: unrecoverable I/O read error for block 63104
raid1: sdb: unrecoverable I/O read error for block 63232
raid1: sdb: unrecoverable I/O read error for block 63360
raid1: sdb: unrecoverable I/O read error for block 63488
raid1: sdb: unrecoverable I/O read error for block 63616
raid1: sdb: unrecoverable I/O read error for block 63744
raid1: sdb: unrecoverable I/O read error for block 63872
raid1: sdb: unrecoverable I/O read error for block 64000
raid1: sdb: unrecoverable I/O read error for block 64128
raid1: sdb: unrecoverable I/O read error for block 64256
raid1: sdb: unrecoverable I/O read error for block 64384
raid1: sdb: unrecoverable I/O read error for block 64512
raid1: sdb: unrecoverable I/O read error for block 64640
raid1: sdb: unrecoverable I/O read error for block 64768
md: md5: recovery done.
raid1: sdb: unrecoverable I/O read error for block 64896
scsi 2:0:5:0: rejecting I/O to dead device
end_request: I/O error, dev sdb, sector 1060096
md: super_written gets error=-5, uptodate=0
md: unbind<sda3>
md: export_rdev(sda3)
scsi 2:0:5:0: rejecting I/O to dead device
end_request: I/O error, dev sdb, sector 1060096
md: super_written gets error=-5, uptodate=0
RAID1 conf printout:
 --- wd:1 rd:2
 disk 0, wo:1, o:1, dev:sde2
 disk 1, wo:0, o:1, dev:sdb2
RAID1 conf printout:
 --- wd:1 rd:2
 disk 0, wo:1, o:1, dev:sde2
 disk 1, wo:0, o:1, dev:sdb2
md: recovery of RAID array md5
md: minimum _guaranteed_  speed: 200000 KB/sec/disk.
md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
md: using 128k window, over a total of 530048 blocks.
md: resuming recovery of md5 from checkpoint.
raid1: sdb: unrecoverable I/O read error for block 65024
raid1: sdb: unrecoverable I/O read error for block 65152
md: md5: recovery done.
raid1: sdb: unrecoverable I/O read error for block 65280
raid1: sdb: unrecoverable I/O read error for block 65408
scsi 2:0:5:0: rejecting I/O to dead device
end_request: I/O error, dev sdb, sector 1060096
md: super_written gets error=-5, uptodate=0
scsi 2:0:5:0: rejecting I/O to dead device
end_request: I/O error, dev sdb, sector 1060096
md: super_written gets error=-5, uptodate=0
RAID1 conf printout:
 --- wd:1 rd:2
 disk 0, wo:1, o:1, dev:sde2
 disk 1, wo:0, o:1, dev:sdb2
RAID1 conf printout:
 --- wd:1 rd:2
 disk 0, wo:1, o:1, dev:sde2
 disk 1, wo:0, o:1, dev:sdb2
md: recovery of RAID array md5
md: minimum _guaranteed_  speed: 200000 KB/sec/disk.
md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
md: using 128k window, over a total of 530048 blocks.
md: resuming recovery of md5 from checkpoint.
raid1: sdb: unrecoverable I/O read error for block 65536
md: md5: recovery done.
raid1: sdb: unrecoverable I/O read error for block 65664
raid1: sdb: unrecoverable I/O read error for block 65792
raid1: sdb: unrecoverable I/O read error for block 65920
scsi 2:0:5:0: rejecting I/O to dead device
end_request: I/O error, dev sdb, sector 1060096
md: super_written gets error=-5, uptodate=0
scsi 2:0:5:0: rejecting I/O to dead device
end_request: I/O error, dev sdb, sector 1060096
md: super_written gets error=-5, uptodate=0
RAID1 conf printout:
 --- wd:1 rd:2
 disk 0, wo:1, o:1, dev:sde2
 disk 1, wo:0, o:1, dev:sdb2
RAID1 conf printout:
 --- wd:1 rd:2
 disk 0, wo:1, o:1, dev:sde2
 disk 1, wo:0, o:1, dev:sdb2
md: recovery of RAID array md5
md: minimum _guaranteed_  speed: 200000 KB/sec/disk.
md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
md: using 128k window, over a total of 530048 blocks.
md: resuming recovery of md5 from checkpoint.
raid1: sdb: unrecoverable I/O read error for block 66048
raid1: sdb: unrecoverable I/O read error for block 66176
md: md5: recovery done.
scsi 2:0:5:0: rejecting I/O to dead device
end_request: I/O error, dev sdb, sector 1060096
md: super_written gets error=-5, uptodate=0


/proc/scsi/scsi is quite empty:

Code:

[~] # more /proc/scsi/scsi 
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model:   128MB  ATA Fla Rev: ADAA
  Type:   Direct-Access                    ANSI  SCSI revision: 05
Host: scsi3 Channel: 00 Id: 00 Lun: 00
  Vendor: Seagate  Model: ST31500341AS     Rev: CC1H
  Type:   Direct-Access                    ANSI  SCSI revision: 05
[~] #
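Counting the "Host:" stanzas in that file is a quick sanity check of how many devices the controller is still presenting to the kernel - here only two entries remain (the internal flash DOM plus one Seagate) where six would be expected. A sketch using the output above copied into a scratch file (scsi.txt is just a placeholder; on the NAS you would read /proc/scsi/scsi directly):

```shell
# /proc/scsi/scsi lists one "Host:" stanza per attached device, so
# counting them shows how many devices the kernel still sees. Sample
# copied from the output above; scsi.txt is a scratch file here.
cat > scsi.txt <<'EOF'
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model:   128MB  ATA Fla Rev: ADAA
  Type:   Direct-Access                    ANSI  SCSI revision: 05
Host: scsi3 Channel: 00 Id: 00 Lun: 00
  Vendor: Seagate  Model: ST31500341AS     Rev: CC1H
  Type:   Direct-Access                    ANSI  SCSI revision: 05
EOF
grep -c '^Host:' scsi.txt    # prints 2
```

Watching this count after a reboot, and again after a drive drops, shows whether the disks vanish at the SCSI layer (controller/enclosure level) or only at the md RAID layer.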


QNAP, please help!!!!!!!!

woodhouse
New here
Posts: 3
Joined: Sat Dec 05, 2009 6:01 pm

Re: Disk Read/Write Error

Post by woodhouse » Mon Dec 07, 2009 1:43 am

So folks,

after a weekend of going through the ** of this QNAP NAS, my next step is to douse the box with gas and light it up. My first step was to replace disk 3 in the NAS with a brand-new one and try to resync the RAID. All my efforts failed; I had no chance of keeping the RAID up for longer than a few hours. Yesterday I (again!!!) bought a new disk (this time a Seagate 1.5 TB) and tried to repair my RAID. Meanwhile ALL the disks dropped out of the RAID - step by step, 1 to 5 (no idea why).

My last exercise was to replace the latest firmware (3.1.2 Build 1014) with an older one (3.1.0xxxx), and with that I solved the disk drop-out problem!

BUT the behaviour of the NAS is now absolutely crappy. Copying a 4 GB file takes ~20 min - the copy process is interrupted by pauses from the NAS (during which all lights are green but nothing moves), and (my absolute favourite error) when I double-click a ~300 MB EXE file that sits on a NAS share from my Windows machine, the complete NAS stalls: no chance to open the web menu, no reaction from the NAS console. The box is completely dead. Only a power off/on brings it back to work.

I was able to grab a dmesg from the console when this happens; the file is filled with read/write errors - I think heavy I/O is not one of this NAS's best friends. Have fun with the log I attached, guys; maybe it helps you.

Now I have finally solved the problem my way:

1. Format all disks
2. Build a new array and make it work
3. Sell the box on eBay

This hardware is definitely not the place I want my files (and so my work) to be. I'm outta here. Bye

Code:

sd 5:0:0:0: [sda] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE,SUGGEST_OK
sd 5:0:0:0: [sda] Sense Key : Medium Error [current]
Info fld=0x5624216a
sd 5:0:0:0: [sda] Add. Sense: No additional sense information
end_request: I/O error, dev sda, sector 1445208426
sd 5:0:0:0: [sda] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE,SUGGEST_OK
sd 5:0:0:0: [sda] Sense Key : Medium Error [current]
Info fld=0x56242232
sd 5:0:0:0: [sda] Add. Sense: No additional sense information
end_request: I/O error, dev sda, sector 1445208626
raid5:md0: read error corrected (8 sectors at 1443088256 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088264 on sda3)
raid5:md0: read error corrected (8 sectors at 1443087744 on sda3)
raid5:md0: read error corrected (8 sectors at 1443087752 on sda3)
raid5:md0: read error corrected (8 sectors at 1443087760 on sda3)
raid5:md0: read error corrected (8 sectors at 1443087768 on sda3)
raid5:md0: read error corrected (8 sectors at 1443087776 on sda3)
raid5:md0: read error corrected (8 sectors at 1443087784 on sda3)
raid5:md0: read error corrected (8 sectors at 1443087792 on sda3)
raid5:md0: read error corrected (8 sectors at 1443087800 on sda3)
raid5:md0: read error corrected (8 sectors at 1443087808 on sda3)
raid5:md0: read error corrected (8 sectors at 1443087816 on sda3)
raid5:md0: read error corrected (8 sectors at 1443087824 on sda3)
raid5:md0: read error corrected (8 sectors at 1443087832 on sda3)
raid5:md0: read error corrected (8 sectors at 1443087840 on sda3)
raid5:md0: read error corrected (8 sectors at 1443087848 on sda3)
raid5:md0: read error corrected (8 sectors at 1443087856 on sda3)
raid5:md0: read error corrected (8 sectors at 1443087864 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088000 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088008 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088016 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088024 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088032 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088040 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088048 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088056 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088064 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088072 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088080 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088088 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088096 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088104 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088112 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088120 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088128 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088136 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088144 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088152 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088160 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088168 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088176 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088184 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088192 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088200 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088208 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088216 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088224 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088232 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088240 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088248 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088272 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088280 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088288 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088296 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088304 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088312 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088320 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088328 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088336 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088344 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088352 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088360 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088368 on sda3)
raid5:md0: read error corrected (8 sectors at 1443088376 on sda3)
sd 5:0:0:0: [sda] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE,SUGGEST_OK
sd 5:0:0:0: [sda] Sense Key : Medium Error [current]
Info fld=0x56243072
sd 5:0:0:0: [sda] Add. Sense: No additional sense information
end_request: I/O error, dev sda, sector 1445212274
ENABLE_WRITE_CACHE (current: enabled).
sd 5:0:0:0: [sda] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE,SUGGEST_OK
sd 5:0:0:0: [sda] Sense Key : Medium Error [current]
Info fld=0x5624313a
sd 5:0:0:0: [sda] Add. Sense: No additional sense information
end_request: I/O error, dev sda, sector 1445212474
raid5:md0: read error corrected (8 sectors at 1443091584 on sda3)
raid5:md0: read error corrected (8 sectors at 1443091592 on sda3)
raid5:md0: read error corrected (8 sectors at 1443091600 on sda3)
raid5:md0: read error corrected (8 sectors at 1443091608 on sda3)
raid5:md0: read error corrected (8 sectors at 1443091616 on sda3)
raid5:md0: read error corrected (8 sectors at 1443091624 on sda3)
raid5:md0: read error corrected (8 sectors at 1443091632 on sda3)
raid5:md0: read error corrected (8 sectors at 1443091640 on sda3)
raid5:md0: read error corrected (8 sectors at 1443091648 on sda3)
raid5:md0: read error corrected (8 sectors at 1443091656 on sda3)
raid5:md0: read error corrected (8 sectors at 1443091664 on sda3)
raid5:md0: read error corrected (8 sectors at 1443091672 on sda3)
raid5:md0: read error corrected (8 sectors at 1443091680 on sda3)
raid5:md0: read error corrected (8 sectors at 1443091688 on sda3)
raid5:md0: read error corrected (8 sectors at 1443091696 on sda3)
raid5:md0: read error corrected (8 sectors at 1443091704 on sda3)
raid5:md0: read error corrected (8 sectors at 1443091840 on sda3)
raid5:md0: read error corrected (8 sectors at 1443091848 on sda3)
raid5:md0: read error corrected (8 sectors at 1443091856 on sda3)
raid5:md0: read error corrected (8 sectors at 1443091864 on sda3)
raid5:md0: read error corrected (8 sectors at 1443091872 on sda3)
raid5:md0: read error corrected (8 sectors at 1443091880 on sda3)
raid5:md0: read error corrected (8 sectors at 1443091888 on sda3)
raid5:md0: read error corrected (8 sectors at 1443091896 on sda3)
raid5:md0: read error corrected (8 sectors at 1443091904 on sda3)
raid5:md0: read error corrected (8 sectors at 1443091912 on sda3)
raid5:md0: read error corrected (8 sectors at 1443091920 on sda3)
raid5:md0: read error corrected (8 sectors at 1443091928 on sda3)
raid5:md0: read error corrected (8 sectors at 1443091936 on sda3)
raid5:md0: read error corrected (8 sectors at 1443091944 on sda3)
raid5:md0: read error corrected (8 sectors at 1443091952 on sda3)
raid5:md0: read error corrected (8 sectors at 1443091960 on sda3)
raid5:md0: read error corrected (8 sectors at 1443184504 on sda3)
sd 5:0:0:0: [sda] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE,SUGGEST_OK
sd 5:0:0:0: [sda] Sense Key : Medium Error [current]
Info fld=0x56259b86
sd 5:0:0:0: [sda] Add. Sense: No additional sense information
end_request: I/O error, dev sda, sector 1445305222
raid5:md0: read error corrected (8 sectors at 1443184640 on sda3)
raid5:md0: read error corrected (8 sectors at 1443184648 on sda3)
raid5:md0: read error corrected (8 sectors at 1443184656 on sda3)
raid5:md0: read error corrected (8 sectors at 1443184664 on sda3)
raid5:md0: read error corrected (8 sectors at 1443184672 on sda3)
raid5:md0: read error corrected (8 sectors at 1443184680 on sda3)
raid5:md0: read error corrected (8 sectors at 1443184688 on sda3)
raid5:md0: read error corrected (8 sectors at 1443184696 on sda3)
raid5:md0: read error corrected (8 sectors at 1443184704 on sda3)
raid5:md0: read error corrected (8 sectors at 1443184712 on sda3)
raid5:md0: read error corrected (8 sectors at 1443184720 on sda3)
raid5:md0: read error corrected (8 sectors at 1443184728 on sda3)
raid5:md0: read error corrected (8 sectors at 1443184736 on sda3)
raid5:md0: read error corrected (8 sectors at 1443184744 on sda3)
raid5:md0: read error corrected (8 sectors at 1443184752 on sda3)
raid5:md0: read error corrected (8 sectors at 1443184760 on sda3)
raid5:md0: read error corrected (8 sectors at 1443184768 on sda3)
raid5:md0: read error corrected (8 sectors at 1443184776 on sda3)
raid5:md0: read error corrected (8 sectors at 1443184784 on sda3)
raid5:md0: read error corrected (8 sectors at 1443184792 on sda3)
raid5:md0: read error corrected (8 sectors at 1443184800 on sda3)
raid5:md0: read error corrected (8 sectors at 1443184808 on sda3)
raid5:md0: read error corrected (8 sectors at 1443184816 on sda3)
raid5:md0: read error corrected (8 sectors at 1443184824 on sda3)
raid5:md0: read error corrected (8 sectors at 1443184832 on sda3)
raid5:md0: read error corrected (8 sectors at 1443184840 on sda3)
raid5:md0: read error corrected (8 sectors at 1443184848 on sda3)
raid5:md0: read error corrected (8 sectors at 1443184856 on sda3)
raid5:md0: read error corrected (8 sectors at 1443184864 on sda3)
raid5:md0: read error corrected (8 sectors at 1443184872 on sda3)
raid5:md0: read error corrected (8 sectors at 1443184880 on sda3)
raid5:md0: read error corrected (8 sectors at 1443184888 on sda3)
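A side note on reading the log above: the hex `Info fld` in each sense block appears to be just the failing LBA, and it matches the sector number in the `end_request` line that follows it. A quick Python sketch (my own cross-check, nothing official) to verify:

```python
# The SCSI sense "Info fld" in the dmesg output above is the failing
# logical block address (LBA) in hex; converting it to decimal should
# reproduce the sector number from the kernel's end_request line.
def info_fld_to_lba(info_fld_hex: str) -> int:
    """Convert e.g. '0x5624216a' to the decimal sector number."""
    return int(info_fld_hex, 16)

# Cross-check against the first two errors in the log:
#   Info fld=0x5624216a -> end_request: I/O error, dev sda, sector 1445208426
#   Info fld=0x56242232 -> end_request: I/O error, dev sda, sector 1445208626
print(info_fld_to_lba("0x5624216a"))  # 1445208426
print(info_fld_to_lba("0x56242232"))  # 1445208626
```

So the sense data and the I/O-error lines are consistently pointing at the same bad sectors on sda.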


woodhouse
New here
Posts: 3
Joined: Sat Dec 05, 2009 6:01 pm

Re: Disk Read/Write Error

Post by woodhouse » Mon Dec 07, 2009 2:03 am

thanatos74, it seems you also had a lot of 'RAID trouble' - that fills a boring weekend :twisted:

But seriously, not having to play around with the NAS was exactly my intention when I decided to buy an out-of-the-box solution. I have none of the features activated, just the SMB daemon; I just wanted a place where I can copy my files. If I wanted to geek around with the system, I'd build my own RAID.

I wouldn't even mind QNAP replacing the unit... but what do I do when the box works for, let's say, a year and then crashes again with disks 1 through 5 dropping out? No chance to get my files back. I back up the important files, but I don't have enough storage to back up the whole NAS :-(

Now I'll sell my NAS and go back to investigating which system might be a better option.

thanatos74 wrote:Very curious...

Woodhouse, I agree with you - the most important thing about a NAS is reliability.
A friend of mine also owns a brand-new TS-509 - he bought it just 2 or 3 weeks ago (along with 5 Samsung disks).
At the same time my device started making "RAID trouble", so did his!!!
It was literally the same night that both of our devices dropped disks out of the RAID (and no, there is no connection whatsoever between those two devices).

Coincidence?

User avatar
thanatos74
Starting out
Posts: 46
Joined: Wed Jan 21, 2009 5:46 pm
Location: Munich

Re: Disk Read/Write Error

Post by thanatos74 » Mon Dec 07, 2009 3:27 am

woodhouse wrote:thanatos74, it seems you also had a lot of 'RAID trouble' - that fills a boring weekend :twisted:
[...]
Now I'll sell my NAS and go back to investigating which system might be a better option.


woodhouse, it seems that both of us have no real life :lol:

No, really - your problems are identical to mine, and so are your findings.
I've also got the strong feeling that the latest firmware and heavy I/O are not good friends.
This afternoon I downgraded the firmware on my box to 3.1.1 Build 0815T and created a new RAID.
Knocking on wood, but so far my problems seem to be gone. The box has been running for some hours now without any problems, and the performance is very good (about 90 MB/s).

The last few days have changed my opinion about the reliability and hands-on needs of QNAP devices. When I decided to buy a QNAP device, I also expected a solution I wouldn't have to play around with.
Spending the same money on computer parts, I could have built myself a bigger and faster system, but that is not what I wanted.
On the other hand, QNAP devices offer very good performance, and in fact I use some features I was not able to find on other boxes :roll:

However, I can understand you very well - I was about to throw the box out of the window this weekend.
I still don't trust the box... I have backups of all my data and am only copying ** onto it at the moment.
The relationship between me and my NAS is rather strained right now...

Bro, I wish you more luck with your new storage solution!
~ Two hours of trial and error can save ten minutes of RTFM ~

User avatar
thanatos74
Starting out
Posts: 46
Joined: Wed Jan 21, 2009 5:46 pm
Location: Munich

Re: Disk Read/Write Error

Post by thanatos74 » Tue Dec 08, 2009 5:07 am

OK, here are my findings regarding the problems I described in the posts above.

Maybe I should start with the current situation/solution: after downgrading to 3.1.1 Build 0815T, everything seems to be fine again. :D
The RAID builds just fine, and all the disk-drop / RAID-drop problems are completely gone at the moment.

It seems that firmware 3.1.2 Build 1014 causes problems in some situations. Especially under sustained high I/O, disks drop out and the device freezes or reboots.
The available beta version 3.2.2 shows the same behaviour - at least on my device.
I also contacted QNAP tech support, but so far I've had no feedback at all. I'd think they should have a strong interest in this problem.

Of course, there is still a small chance that all my problems were caused by faulty hardware.
But if so, why did the problems all disappear with the firmware downgrade?
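One quick way to weigh firmware against hardware would be to tally the kernel I/O errors per device from the dmesg output: a single disk throwing all the medium errors points at that drive, while errors spread across all five disks at once make the firmware/controller the more likely suspect. A rough Python sketch (my own idea, parsing the `end_request` log format shown earlier in this thread):

```python
import re
from collections import Counter

def tally_io_errors(dmesg_text: str) -> Counter:
    """Count kernel I/O-error lines per block device (sda, sdb, ...)."""
    pattern = re.compile(r"end_request: I/O error, dev (sd[a-z]+), sector \d+")
    return Counter(pattern.findall(dmesg_text))

# Small sample in the same format as the log posted above:
sample = """\
end_request: I/O error, dev sda, sector 1445208426
end_request: I/O error, dev sda, sector 1445208626
end_request: I/O error, dev sdb, sector 99
"""
print(tally_io_errors(sample))  # Counter({'sda': 2, 'sdb': 1})
```

In woodhouse's log every error is on sda, which would normally suggest one bad disk rather than a firmware bug, so the fact that a downgrade helps is all the stranger.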

I'm going to stay on firmware 3.1.1 for now - maybe QNAP can find the problem and fix it in a future firmware release.

@QNAP: If you need further information for investigating the described problem, don't hesitate to contact me.

_thanatos74
~ Two hours of trial and error can save ten minutes of RTFM ~

Post Reply

Return to “System & Disk Volume Management”