Volume "not active" after changing faulty harddrive

Questions about SNMP, Power, System, Logs, disk, & RAID.
herrhuber03
New here
Posts: 6
Joined: Wed May 27, 2015 2:47 pm

Volume "not active" after changing faulty harddrive

Post by herrhuber03 »

Hi!

Maybe someone has an idea about my customer's QNAP TS-869 Pro.

Installed are:
8x 1TB Seagate certified HDDs as a RAID 5 volume (legacy volume, not a storage pool)
one drive is configured as a spare

When hard drive no. 3 became "abnormal" (with SMART errors), I expected the spare disk to become active and take over the data, but that didn't happen. So I got a new hard drive and swapped it for the faulty drive in slot 3.
After that, the volume became "not active" and also didn't rebuild itself, so I installed the old drive again to see if the volume would become active again.
That worked once, and I used the opportunity to take an additional backup of the data. Then I swapped the hard drive again --> volume not active.
Now I can't get the volume active again at all, not even with the old drive.
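
Below is the basic info I pulled over SSH. For anyone who wants to run the same checks, this is roughly the command set (a minimal sketch; it assumes the usual QNAP layout where the data array is /dev/md0 and partition 3 of each data disk is the RAID member):

Code: Select all

# is the data array assembled at all?
cat /proc/mdstat

# RAID superblock of every member (and the spare); sda..sdh are the eight data disks
mdadm --examine /dev/sd[a-h]3

# detailed state - only works once /dev/md0 has been assembled
mdadm --detail /dev/md0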

------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Here is some basic information:

[/tmp] # echo "NAS Model: $(getsysinfo model)"
NAS Model: TS-869 Pro
[/tmp] # echo "Firmware: $(getcfg system version)"
Firmware: 4.1.3
[/tmp] # df -h
Filesystem Size Used Available Use% Mounted on
none 200.0M 136.0M 64.0M 68% /
devtmpfs 488.1M 4.0k 488.1M 0% /dev
tmpfs 64.0M 180.0k 63.8M 0% /tmp
tmpfs 492.6M 0 492.6M 0% /dev/shm
/dev/md9 509.5M 137.0M 372.5M 27% /mnt/HDA_ROOT
/dev/md13 371.0M 271.8M 99.2M 73% /mnt/ext
[/tmp] # mount
none on /new_root type tmpfs (rw,mode=0755,size=200M)
/proc on /proc type proc (rw)
devpts on /dev/pts type devpts (rw)
sysfs on /sys type sysfs (rw)
tmpfs on /tmp type tmpfs (rw,size=64M)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/bus/usb type usbfs (rw)
/dev/md9 on /mnt/HDA_ROOT type ext3 (rw,data=ordered)
/dev/md13 on /mnt/ext type ext3 (rw,data=ordered)
[/tmp] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md256 : active raid1 sdh2[7](S) sdg2[6](S) sdf2[5](S) sde2[4](S) sdd2[3](S) sdc2[2](S) sdb2[1] sda2[0]
530112 blocks super 1.0 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk

md13 : active raid1 sdc4[2] sda4[0] sdh4[7] sdg4[6] sdf4[5] sde4[4] sdd4[3] sdb4[1]
458880 blocks [8/8] [UUUUUUUU]
bitmap: 0/57 pages [0KB], 4KB chunk

md9 : active raid1 sdc1[2] sda1[0] sdh1[7] sdg1[6] sdf1[5] sde1[4] sdd1[3] sdb1[1]
530048 blocks [8/8] [UUUUUUUU]
bitmap: 0/65 pages [0KB], 4KB chunk

unused devices: <none>
[/tmp] # cat /proc/meminfo | grep Mem
MemTotal: 1008892 kB
MemFree: 378464 kB
[/tmp] # fdisk -l

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 1 66 530125 83 Linux
/dev/sda2 67 132 530142 83 Linux
/dev/sda3 133 121538 975193693 83 Linux
/dev/sda4 121539 121600 498012 83 Linux

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 1 66 530125 83 Linux
/dev/sdb2 67 132 530142 83 Linux
/dev/sdb3 133 121538 975193693 83 Linux
/dev/sdb4 121539 121600 498012 83 Linux

Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdc1 1 66 530125 83 Linux
/dev/sdc2 67 132 530142 83 Linux
/dev/sdc3 133 121538 975193693 83 Linux
/dev/sdc4 121539 121600 498012 83 Linux

Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdd1 1 66 530125 83 Linux
/dev/sdd2 67 132 530142 83 Linux
/dev/sdd3 133 121538 975193693 83 Linux
/dev/sdd4 121539 121600 498012 83 Linux

Disk /dev/sde: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sde1 1 66 530125 83 Linux
/dev/sde2 67 132 530142 83 Linux
/dev/sde3 133 121538 975193693 83 Linux
/dev/sde4 121539 121600 498012 83 Linux

Disk /dev/sdf: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdf1 1 66 530125 83 Linux
/dev/sdf2 67 132 530142 83 Linux
/dev/sdf3 133 121538 975193693 83 Linux
/dev/sdf4 121539 121600 498012 83 Linux

Disk /dev/sdg: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdg1 1 66 530125 83 Linux
/dev/sdg2 67 132 530142 83 Linux
/dev/sdg3 133 121538 975193693 83 Linux
/dev/sdg4 121539 121600 498012 83 Linux

Disk /dev/sdh: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdh1 1 66 530125 83 Linux
/dev/sdh2 67 132 530142 83 Linux
/dev/sdh3 133 121538 975193693 83 Linux
/dev/sdh4 121539 121600 498012 83 Linux

Disk /dev/sdi: 515 MB, 515899392 bytes
8 heads, 32 sectors/track, 3936 cylinders
Units = cylinders of 256 * 512 = 131072 bytes

Device Boot Start End Blocks Id System
/dev/sdi1 1 17 2160 83 Linux
/dev/sdi2 * 18 1910 242304 83 Linux
/dev/sdi3 1911 3803 242304 83 Linux
/dev/sdi4 3804 3936 17024 5 Extended
/dev/sdi5 3804 3868 8304 83 Linux
/dev/sdi6 3869 3936 8688 83 Linux

Disk /dev/md9: 542 MB, 542769152 bytes
2 heads, 4 sectors/track, 132512 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md9 doesn't contain a valid partition table

Disk /dev/md13: 469 MB, 469893120 bytes
2 heads, 4 sectors/track, 114720 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md13 doesn't contain a valid partition table

Disk /dev/md256: 542 MB, 542834688 bytes
2 heads, 4 sectors/track, 132528 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md256 doesn't contain a valid partition table
[/tmp] #

------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
And here is the storage_lib.log when trying to rebuild the volume with the faulty drive installed:

get_md_string: Execute "/sys/block/md0/md/sync_completed" failed!
get_md_string: Execute "/sys/block/md0/md/sync_completed" failed!
Volume_Get_Info: "/dev/md0" is not existed!
Blk_Dev_Generate_Mount_Point: mount point for "/dev/sdc3" is "/share/5000C5005B794418_DATA", is_internal is 1.
get_md_string: Execute "/sys/block/md0/md/sync_completed" failed!
Volume_Get_Info: "/dev/md0" is not existed!
Blk_Dev_Generate_Mount_Point: mount point for "/dev/sdc3" is "/share/5000C5005B794418_DATA", is_internal is 1.
RAID_Is_Bitmap_Enabled:RAID(0):Bitmap already is NOT enabled.
Perform cmd "/sbin/qcfg -g "vol_encrypted_algorithm" "selection" -f /etc/default_config/volume_man.conf 2>>/dev/null" failed, cmd_rsp=1, reason code:0.
Volume_Get_Encrypted_Algorithm: Fail to get selection field.
Blk_Dev_Generate_Mount_Point: mount point for "/dev/md0" is "/share/MD0_DATA", is_internal is 1.
get_md_string: Execute "/sys/block/md0/md/sync_completed" failed!
get_md_string: Execute "/sys/block/md0/md/sync_completed" failed!
get_md_string: Execute "/sys/block/md0/md/sync_completed" failed!
RAID_Is_Bitmap_Enabled:RAID(0):Bitmap already is NOT enabled.
RAID_Get_PD_Member_Status:ret = -1, status = -9
Blk_Dev_Generate_Mount_Point: mount point for "/dev/md0" is "/share/MD0_DATA", is_internal is 1.
NAS_Check_If_JBOD_Can_Be_Recovered: got called. need_fork: 0
NAS_Check_If_JBOD_Can_Be_Recovered: enc_num: 4, internal_enc_num: 1, raid_num: 1
get_md_string: Execute "/sys/block/md0/md/sync_completed" failed!
get_md_string: Execute "/sys/block/md0/md/sync_completed" failed!
RAID_Is_Bitmap_Enabled:RAID(0):Bitmap already is NOT enabled.
RAID_Get_PD_Member_Status:ret = -1, status = -9
get_md_string: Execute "/sys/block/md0/md/sync_completed" failed!
SSDCacheGroup_Get_LV_Id: [SSDCacheGroup(0)] Unable to get LV ID.
LV_Get_Pool_Id: [LV(256)] Fail to get Pool ID.
Perform cmd "/bin/mount | grep "/mnt/HDA_ROOT" &>/dev/null" OK, cmd_rsp=0, reason code:0.
Volume_Is_Configure: Volume(1):status(19), and "is NOT" configured.
Volume_Is_Configure: Volume(2):status(-1), and "is NOT" configured.
Volume_Is_Configure: Volume(1):status(19), and "is NOT" configured.
Volume_Is_Configure: Volume(2):status(-1), and "is NOT" configured.
get_md_string: Execute "/sys/block/md0/md/sync_completed" failed!
Volume_Need_Recover: Volume(1): this volume needs to be recovered!
Volume_Set_Start_Time:Volume(1):set start time to (53867).
NAS_Ext_Swap_Enable is got called, count=1.
NAS_Ext_Swap_Enable:Disk(0x1) does not have ext swap partition!
NAS_Ext_Swap_Enable:Disk(0x4) does not have ext swap partition!
NAS_Ext_Swap_Enable:Disk(0x5) does not have ext swap partition!
NAS_Ext_Swap_Enable:Disk(0x6) does not have ext swap partition!
NAS_Ext_Swap_Enable:Disk(0x7) does not have ext swap partition!
NAS_Ext_Swap_Enable:Disk(0x8) does not have ext swap partition!
Pattern result is " 530108 0 530108
".
Perform command "free", and "Swap:" is found (1).
NAS_Ext_Swap_Check_Total_Size:Current swap size is 517 MB.
Pattern result is " 530108 0 530108
".
Perform command "free", and "Swap:" is found (1).
NAS_Ext_Swap_Check_Total_Size:Current swap size is 517 MB.
Perform cmd "/sbin/mdadm -CfR /dev/md322 -e 1.0 --name 322 --bitmap=internal --write-behind --level=1 --chunk=64 --raid-devices=6 missing missing missing missing missing missing --spare-devices=1 &>/dev/null" failed, cmd_rsp=256, reason code:1.
Perform command "/bin/cat /proc/swaps 2>>/dev/null", and "/dev/md322" is NOT found (0).
NAS_Ext_Swap_Enable:Fail to swap on with /dev/md322.
vol_set_mnt_options:[VOL_1]:Succeed to set (dealloc,read_only,write_cache)=(1,0,1).
Volume_Set_ReadOnly: Volume(1): volume is not mounted!
get_md_string: Execute "/sys/block/md0/md/sync_completed" failed!
Volume_Get_Info: "/dev/md0" is not existed!
Blk_Dev_Generate_Mount_Point: mount point for "/dev/sdc3" is "/share/5000C5005B794418_DATA", is_internal is 1.
get_md_string: Execute "/sys/block/md0/md/sync_completed" failed!
Volume_Get_Info: "/dev/md0" is not existed!
Blk_Dev_Generate_Mount_Point: mount point for "/dev/sdc3" is "/share/5000C5005B794418_DATA", is_internal is 1.
RAID_Is_Bitmap_Enabled:RAID(0):Bitmap already is NOT enabled.
Perform cmd "/sbin/qcfg -g "vol_encrypted_algorithm" "selection" -f /etc/default_config/volume_man.conf 2>>/dev/null" failed, cmd_rsp=1, reason code:0.
Volume_Get_Encrypted_Algorithm: Fail to get selection field.
Blk_Dev_Generate_Mount_Point: mount point for "/dev/md0" is "/share/MD0_DATA", is_internal is 1.
Perform cmd "/sbin/mdadm --q-examine /dev/sda3 /dev/sdd3 /dev/sde3 /dev/sdf3 /dev/sdg3 /dev/sdh3 >/tmp/temp.XZB4qv 2>>/dev/null" OK, cmd_rsp=0, reason code:0.
md_recovery: RAID ID(0): necessary count=4.
md_recovery: RAID ID(0): recovery examine failed, (Dev_Level,Degrade_Dev_Count,Degrade_Avail)=(5,0,no).
NAS_Ext_Swap_Enable is got called, count=1.
NAS_Ext_Swap_Enable:Disk(0x1) does not have ext swap partition!
NAS_Ext_Swap_Enable:Disk(0x4) does not have ext swap partition!
NAS_Ext_Swap_Enable:Disk(0x5) does not have ext swap partition!
NAS_Ext_Swap_Enable:Disk(0x6) does not have ext swap partition!
NAS_Ext_Swap_Enable:Disk(0x7) does not have ext swap partition!
NAS_Ext_Swap_Enable:Disk(0x8) does not have ext swap partition!
NAS_Ext_Swap_Enable:RAID ID not found, cannot disable!
Volume_Set_Start_Time:Volume(1):set start time to (0).
get_md_string: Execute "/sys/block/md0/md/sync_completed" failed!
Volume_Get_Info: "/dev/md0" is not existed!
Blk_Dev_Generate_Mount_Point: mount point for "/dev/sdc3" is "/share/5000C5005B794418_DATA", is_internal is 1.
RAID_Is_Bitmap_Enabled:RAID(0):Bitmap already is NOT enabled.
Perform cmd "/sbin/qcfg -g "vol_encrypted_algorithm" "selection" -f /etc/default_config/volume_man.conf 2>>/dev/null" failed, cmd_rsp=1, reason code:0.
Volume_Get_Encrypted_Algorithm: Fail to get selection field.
Blk_Dev_Generate_Mount_Point: mount point for "/dev/md0" is "/share/MD0_DATA", is_internal is 1.
get_md_string: Execute "/sys/block/md0/md/sync_completed" failed!
RAID_Is_Bitmap_Enabled:RAID(0):Bitmap already is NOT enabled.
RAID_Get_PD_Member_Status:ret = -1, status = -9
NAS_Check_If_JBOD_Can_Be_Recovered: got called. need_fork: 0
NAS_Check_If_JBOD_Can_Be_Recovered: enc_num: 4, internal_enc_num: 1, raid_num: 1
get_md_string: Execute "/sys/block/md0/md/sync_completed" failed!
get_md_string: Execute "/sys/block/md0/md/sync_completed" failed!
RAID_Is_Bitmap_Enabled:RAID(0):Bitmap already is NOT enabled.
RAID_Get_PD_Member_Status:ret = -1, status = -9
get_md_string: Execute "/sys/block/md0/md/sync_completed" failed!
Volume_Get_Info: "/dev/md0" is not existed!
Blk_Dev_Generate_Mount_Point: mount point for "/dev/sdc3" is "/share/5000C5005B794418_DATA", is_internal is 1.
get_md_string: Execute "/sys/block/md0/md/sync_completed" failed!
Volume_Get_Info: "/dev/md0" is not existed!
Blk_Dev_Generate_Mount_Point: mount point for "/dev/sdc3" is "/share/5000C5005B794418_DATA", is_internal is 1.
RAID_Is_Bitmap_Enabled:RAID(0):Bitmap already is NOT enabled.
Perform cmd "/sbin/qcfg -g "vol_encrypted_algorithm" "selection" -f /etc/default_config/volume_man.conf 2>>/dev/null" failed, cmd_rsp=1, reason code:0.
Volume_Get_Encrypted_Algorithm: Fail to get selection field.
Blk_Dev_Generate_Mount_Point: mount point for "/dev/md0" is "/share/MD0_DATA", is_internal is 1.
get_md_string: Execute "/sys/block/md0/md/sync_completed" failed!
Volume_Get_Info: "/dev/md0" is not existed!
Blk_Dev_Generate_Mount_Point: mount point for "/dev/sdc3" is "/share/5000C5005B794418_DATA", is_internal is 1.
RAID_Is_Bitmap_Enabled:RAID(0):Bitmap already is NOT enabled.
Perform cmd "/sbin/qcfg -g "vol_encrypted_algorithm" "selection" -f /etc/default_config/volume_man.conf 2>>/dev/null" failed, cmd_rsp=1, reason code:0.
Volume_Get_Encrypted_Algorithm: Fail to get selection field.
Blk_Dev_Generate_Mount_Point: mount point for "/dev/md0" is "/share/MD0_DATA", is_internal is 1.
get_md_string: Execute "/sys/block/md0/md/sync_completed" failed!
RAID_Is_Bitmap_Enabled:RAID(0):Bitmap already is NOT enabled.
RAID_Get_PD_Member_Status:ret = -1, status = -9
get_md_string: Execute "/sys/block/md0/md/sync_completed" failed!
SSDCacheGroup_Get_LV_Id: [SSDCacheGroup(0)] Unable to get LV ID.
LV_Get_Pool_Id: [LV(256)] Fail to get Pool ID.
Perform cmd "/bin/mount | grep "/mnt/HDA_ROOT" &>/dev/null" OK, cmd_rsp=0, reason code:0.
Volume_Is_Configure: Volume(1):status(19), and "is NOT" configured.
Volume_Is_Configure: Volume(2):status(-1), and "is NOT" configured.
Volume_Is_Configure: Volume(1):status(19), and "is NOT" configured.
Volume_Is_Configure: Volume(2):status(-1), and "is NOT" configured.
forkless
Experience counts
Posts: 1907
Joined: Mon Nov 23, 2009 6:52 am
Location: The Netherlands

Re: Volume "not active" after changing faulty harddrive

Post by forkless »

When you pull the disk do you wait until the array says it is in degraded mode or does it go straight to not active? (It can take a minute in some instances for it to detect degraded mode)

EDIT: According to your log it also doesn't seem to detect the spare. You could remove your cold spare as well so the array can go 'degraded', because it looks like this is what is stopping the recovery. However, I would take some extra precautions and contact https://helpdesk.qnap.com to get instructions. They are far better equipped to help you in this situation.
herrhuber03
New here
Posts: 6
Joined: Wed May 27, 2015 2:47 pm

Re: Volume "not active" after changing faulty harddrive

Post by herrhuber03 »

forkless wrote:When you pull the disk do you wait until the array says it is in degraded mode or does it go straight to not active? (It can take a minute in some instances for it to detect degraded mode)

EDIT: According to your log it also doesn't seem to detect the spare. You could remove your cold spare as well so the array can go 'degraded', because it looks like this is what is stopping the recovery. However, I would take some extra precautions and contact https://helpdesk.qnap.com to get instructions. They are far better equipped to help you in this situation.

If I pull the disk out and wait for the beep, it goes straight to "not active". I tried waiting for a while, but the volume stays in that state.

I already opened a ticket with the helpdesk, but so far I haven't received an answer.
forkless
Experience counts
Posts: 1907
Joined: Mon Nov 23, 2009 6:52 am
Location: The Netherlands

Re: Volume "not active" after changing faulty harddrive

Post by forkless »

Yeah, it looks like the problem with the spare not being properly detected is causing the system to interrupt/halt the recovery. I would suggest getting support through the formal support channels; they are usually quite quick to respond.
herrhuber03
New here
Posts: 6
Joined: Wed May 27, 2015 2:47 pm

Re: Volume "not active" after changing faulty harddrive

Post by herrhuber03 »

Support advised me to take the following steps (maybe helpful for others; see the sketch below for the commands in one piece):
.) Download PuTTY and connect to the NAS via SSH
.) Stop the NAS services: /etc/init.d/services.sh stop
.) Unmount the data volume: umount /dev/md0 (the first RAID)
.) Reassemble the array: mdadm -Af /dev/md0 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3 ... etc.
or: mdadm -A /dev/md0 /dev/sd[a-h]3 (adjust the letter range to the number of disks)
.) Check the rebuild with "cat /proc/mdstat"
.) Mount the Storage V1 volume: mount -t ext3 (or ext4) /dev/md0 /share/MD0_DATA/
(a Storage V2 volume has to be mounted to /share/CACHEDEV... instead)
If there is an error like "wrong fs type / bad or corrupted superblock", try to repair the superblock with "e2fsck -fy /dev/md0".
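
For convenience, here is the whole sequence as one sketch (my assumptions: a legacy Storage V1 volume on /dev/md0, ext3/ext4 filesystem, and the eight data disks on sda..sdh as shown in the fdisk output above; adapt the device names before running anything):

Code: Select all

# stop the NAS services so nothing holds the data volume
/etc/init.d/services.sh stop

# make sure the data volume is unmounted
umount /dev/md0

# force-assemble the RAID 5 array from its member partitions
# (add --run if mdadm refuses to start a degraded array)
mdadm -Af /dev/md0 /dev/sd[a-h]3

# check the assembly / rebuild status
cat /proc/mdstat

# mount the legacy (Storage V1) volume; use -t ext3 if the volume was created as ext3
mount -t ext4 /dev/md0 /share/MD0_DATA/

# if mount complains about a bad/corrupted superblock, check the filesystem first
e2fsck -fy /dev/md0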


Hopefully the rebuild will bring the data back. I'm not sure about configuring a spare drive again in this or other setups; maybe RAID 6 would be the better solution...

Thanks for the help so far
forkless
Experience counts
Posts: 1907
Joined: Mon Nov 23, 2009 6:52 am
Location: The Netherlands

Re: Volume "not active" after changing faulty harddrive

Post by forkless »

I'm actually surprised they didn't do this for you (unless you were not able to give access to your NAS).
herrhuber03
New here
Posts: 6
Joined: Wed May 27, 2015 2:47 pm

Re: Volume "not active" after changing faulty harddrive

Post by herrhuber03 »

forkless wrote:I'm actually surprised they didn't do this for you (unless you were not able to give access to your NAS).
I waited two days for this answer, and since I'm experienced with Linux it wasn't a problem to do it myself.
The problem is that the rebuild was successful, but with the faulty hard drive still installed.

Tomorrow I hope the procedure will also work after I swap the disks..
pwilson
Guru
Posts: 22533
Joined: Fri Mar 06, 2009 11:20 am
Location: Victoria, BC, Canada (UTC-08:00)

Re: Volume "not active" after changing faulty harddrive

Post by pwilson »

herrhuber03 wrote:
forkless wrote:I'm actually surprised they didn't do this for you (unless you were not able to give access to your NAS).
I waited two days for this answer, and since I'm experienced with Linux it wasn't a problem to do it myself.
The problem is that the rebuild was successful, but with the faulty hard drive still installed.

Tomorrow I hope the procedure will also work after I swap the disks..
How did you replace the drive? (Procedure).

Did you follow the QNAP Tutorial: Hot-swapping the hard drives when the RAID crashes?

BTW, you might get more responses from the Community if you temporarily switched your NAS admin interface over to English before grabbing screenshots, and if you posted them in-line rather than as attachments. Non-German speakers may not understand your screenshots.

For example:

Code: Select all

[img]http://s8.tinypic.com/n6uk5_th.jpg[/img]
produces output like:

Image

Attachments appear as "postage stamp" sized images unless people actually "click" on them, and Community Helpers can't "quote" attachments.

Patrick M. Wilson
Victoria, BC Canada
QNAP TS-470 Pro w/ 4 * Western Digital WD30EFRX WD Reds (RAID5) - - Single 8.1TB Storage Pool FW: QTS 4.2.0 Build 20151023 - Kali Linux v1.06 (64bit)
herrhuber03
New here
Posts: 6
Joined: Wed May 27, 2015 2:47 pm

Re: Volume "not active" after changing faulty harddrive

Post by herrhuber03 »

Thanks for the info, Patrick!

After the rebuild I'm now in a state where the GUI says the volume is not mounted, but the mount command says otherwise.
Before I swap the disks, the volume has to be in degraded status. How can I get it into that state?
Image
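
For anyone following along, this is roughly how an md member can be marked failed by hand to force the array into degraded mode. A sketch only, not an official QNAP procedure: the faulty member is assumed to be /dev/sdc3 (slot 3), so verify the device name against the disk serial numbers before touching anything.

Code: Select all

# cross-check what the GUI claims against the actual state
cat /proc/mdstat
mount | grep md0
mdadm --detail /dev/md0

# mark the suspect member as failed and remove it from the array;
# this puts the RAID 5 into degraded mode before the physical swap
mdadm /dev/md0 --fail /dev/sdc3
mdadm /dev/md0 --remove /dev/sdc3

# confirm the array now reports "clean, degraded" / [8/7] before pulling the disk
mdadm --detail /dev/md0 | grep -i state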
herrhuber03
New here
Posts: 6
Joined: Wed May 27, 2015 2:47 pm

Re: Volume "not active" after changing faulty harddrive

Post by herrhuber03 »

OK... some news here:
the RAID is now degraded, but the rebuild onto the spare drive seems to be stuck

[~] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md0 : active raid5 sda3[0] sdb3[8] sdh3[7] sdg3[6] sdf3[5] sde3[4] sdd3[3] sdc3[2]
6826355200 blocks level 5, 64k chunk, algorithm 2 [8/7] [U_UUUUUU]

md256 : active raid1 sdh2[7](S) sdg2[6](S) sdf2[5](S) sde2[4](S) sdd2[3](S) sdc2[2](S) sdb2[1] sda2[0]
530112 blocks super 1.0 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk

md13 : active raid1 sda4[0] sdh4[7] sdg4[6] sdf4[5] sde4[4] sdd4[3] sdc4[2] sdb4[1]
458880 blocks [8/8] [UUUUUUUU]
bitmap: 0/57 pages [0KB], 4KB chunk

md9 : active raid1 sda1[0] sdh1[7] sdg1[6] sdf1[5] sde1[4] sdd1[3] sdc1[2] sdb1[1]
530048 blocks [8/8] [UUUUUUUU]
bitmap: 1/65 pages [4KB], 4KB chunk

unused devices: <none>



[~] # mdadm --detail /dev/md0
/dev/md0:
Version : 0.90
Creation Time : Wed Jan 2 19:23:20 2013
Raid Level : raid5
Array Size : 6826355200 (6510.12 GiB 6990.19 GB)
Used Dev Size : 975193600 (930.02 GiB 998.60 GB)
Raid Devices : 8
Total Devices : 8
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Mon Jun 1 09:50:57 2015
State : clean, degraded
Active Devices : 7
Working Devices : 8
Failed Devices : 0
Spare Devices : 1

Layout : left-symmetric
Chunk Size : 64K

UUID : c45e7082:d11c1426:5dfe9d8d:51fa3af4
Events : 0.810372

Number Major Minor RaidDevice State
0 8 3 0 active sync /dev/sda3
8 8 19 1 spare rebuilding /dev/sdb3
2 8 35 2 active sync /dev/sdc3
3 8 51 3 active sync /dev/sdd3
4 8 67 4 active sync /dev/sde3
5 8 83 5 active sync /dev/sdf3
6 8 99 6 active sync /dev/sdg3
7 8 115 7 active sync /dev/sdh3
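
For reference, a minimal sketch of how the rebuild of sdb3 can be monitored while it runs:

Code: Select all

# overall progress / ETA of the resync
cat /proc/mdstat

# detailed view: array state and rebuild percentage
mdadm --detail /dev/md0 | grep -E 'State|Rebuild'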