Hello,
After a clean shutdown (we had to move the unit), the RAID volume wouldn't come back online.
So I ran a check scan on all the disks to be sure no disk had bad blocks or anything like that. The SMART tests also came back okay.
After that I wanted to mount the RAID set again, but all the options are greyed out.
I checked whether the RAID set is still okay, and it looks fine:
(Sorry for the long post, but I would like to provide as much information as needed.)
[~] # mdadm --detail /dev/md0
/dev/md0:
Version : 01.00.03
Creation Time : Wed Jul 27 21:18:29 2011
Raid Level : raid5
Array Size : 20500882752 (19551.17 GiB 20992.90 GB)
Used Dev Size : 2928697536 (2793.02 GiB 2998.99 GB)
Raid Devices : 8
Total Devices : 8
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Wed Jul 24 08:13:26 2013
State : clean
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Name : 0
UUID : 832f5ed4:34ba9e55:5c11dcee:9b242669
Events : 12987935
Number Major Minor RaidDevice State
0 8 3 0 active sync /dev/sda3
1 8 19 1 active sync /dev/sdb3
2 8 35 2 active sync /dev/sdc3
3 8 51 3 active sync /dev/sdd3
4 8 67 4 active sync /dev/sde3
5 8 83 5 active sync /dev/sdf3
6 8 99 6 active sync /dev/sdg3
7 8 115 7 active sync /dev/sdh3
Next I tried the mount command, but no luck:
[~] # mount /dev/md0 /share/MD0_DATA/
mount: wrong fs type, bad option, bad superblock on /dev/md0,
missing codepage or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
dmesg:
[ 926.053639] EXT3-fs (md0): error: couldn't mount because of unsupported optional features (2c0)
[ 1658.655351] EXT3-fs (md0): error: couldn't mount because of unsupported optional features (2c0)
Why does it say EXT3-fs? The unit has always used EXT4.
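For what it's worth, the EXT3-fs prefix presumably just means the kernel probed the volume with the ext3 driver, which then rejected feature bits it doesn't support. A quick sketch decoding the 0x2c0 mask against the standard ext4 INCOMPAT flag values (EXTENTS=0x40, 64BIT=0x80, FLEX_BG=0x200, from the ext4 on-disk format) suggests it is in fact a normal ext4 filesystem:

```shell
# Decode the mask from "EXT3-fs (md0): ... unsupported optional features (2c0)".
# These are ext4-only INCOMPAT feature bits the ext3 driver cannot handle.
mask=$((0x2c0))
names=""
for entry in "64:extent" "128:64bit" "512:flex_bg"; do
  bit=${entry%%:*}
  name=${entry##*:}
  if [ $((mask & bit)) -ne 0 ]; then
    names="$names $name"
  fi
done
names=${names# }   # trim leading space
echo "$names"      # -> extent 64bit flex_bg
```

So the filesystem itself looks like ordinary ext4; forcing the type with `mount -t ext4` avoids the ext3 probe.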
Some extra information:
[~] # cat /etc/config/mdadm.conf
ARRAY /dev/md0 devices=/dev/sda3,/dev/sdb3,/dev/sdc3,/dev/sdd3,/dev/sde3,/dev/sdf3,/dev/sdg3,/dev/sdh3
[~] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md0 : active raid5 sda3[0] sdh3[7] sdg3[6] sdf3[5] sde3[4] sdd3[3] sdc3[2] sdb3[1]
20500882752 blocks super 1.0 level 5, 64k chunk, algorithm 2 [8/8] [UUUUUUUU]
md8 : active raid1 sdh2[2](S) sdg2[3](S) sdf2[4](S) sde2[5](S) sdd2[6](S) sdc2[7](S) sdb2[1] sda2[0]
530048 blocks [2/2] [UU]
md13 : active raid1 sda4[0] sdh4[7] sdg4[6] sdf4[5] sde4[4] sdd4[3] sdc4[2] sdb4[1]
458880 blocks [8/8] [UUUUUUUU]
bitmap: 1/57 pages [4KB], 4KB chunk
md9 : active raid1 sda1[0] sdh1[7] sdd1[6] sde1[5] sdf1[4] sdg1[3] sdc1[2] sdb1[1]
530048 blocks [8/8] [UUUUUUUU]
bitmap: 0/65 pages [0KB], 4KB chunk
[~] # cat /etc/raidtab
raiddev /dev/md0
raid-level 5
nr-raid-disks 8
nr-spare-disks 0
chunk-size 4
persistent-superblock 1
device /dev/sda3
raid-disk 0
device /dev/sdb3
raid-disk 1
device /dev/sdc3
raid-disk 2
device /dev/sdd3
raid-disk 3
device /dev/sde3
raid-disk 4
device /dev/sdf3
raid-disk 5
device /dev/sdg3
raid-disk 6
device /dev/sdh3
raid-disk 7
[~] # fdisk -lu
Disk /dev/sde: 3000.5 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Device Boot Start End Blocks Id System
/dev/sde1 1 4294967295 2147483647+ ee EFI GPT
Disk /dev/sdf: 3000.5 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Device Boot Start End Blocks Id System
/dev/sdf1 1 4294967295 2147483647+ ee EFI GPT
Disk /dev/sdg: 3000.5 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Device Boot Start End Blocks Id System
/dev/sdg1 1 4294967295 2147483647+ ee EFI GPT
Disk /dev/sdh: 3000.5 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Device Boot Start End Blocks Id System
/dev/sdh1 1 4294967295 2147483647+ ee EFI GPT
Disk /dev/sdb: 3000.5 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 4294967295 2147483647+ ee EFI GPT
Disk /dev/sdc: 3000.5 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Device Boot Start End Blocks Id System
/dev/sdc1 1 4294967295 2147483647+ ee EFI GPT
Disk /dev/sda: 3000.5 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Device Boot Start End Blocks Id System
/dev/sda1 1 4294967295 2147483647+ ee EFI GPT
Disk /dev/sda4: 469 MB, 469893120 bytes
2 heads, 4 sectors/track, 114720 cylinders, total 917760 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk /dev/sda4 doesn't contain a valid partition table
Disk /dev/sdd: 3000.5 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Device Boot Start End Blocks Id System
/dev/sdd1 1 4294967295 2147483647+ ee EFI GPT
Disk /dev/sdx: 515 MB, 515899392 bytes
8 heads, 32 sectors/track, 3936 cylinders, total 1007616 sectors
Units = sectors of 1 * 512 = 512 bytes
Device Boot Start End Blocks Id System
/dev/sdx1 32 4351 2160 83 Linux
/dev/sdx2 4352 488959 242304 83 Linux
/dev/sdx3 488960 973567 242304 83 Linux
/dev/sdx4 973568 1007615 17024 5 Extended
/dev/sdx5 973600 990207 8304 83 Linux
/dev/sdx6 990240 1007615 8688 83 Linux
Disk /dev/md9: 542 MB, 542769152 bytes
2 heads, 4 sectors/track, 132512 cylinders, total 1060096 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk /dev/md9 doesn't contain a valid partition table
Disk /dev/md8: 542 MB, 542769152 bytes
2 heads, 4 sectors/track, 132512 cylinders, total 1060096 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk /dev/md8 doesn't contain a valid partition table
Disk /dev/md0: 20992.9 GB, 20992903938048 bytes
2 heads, 4 sectors/track, -1 cylinders, total 41001765504 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk /dev/md0 doesn't contain a valid partition table
Here is some extra info about the unit:
Type: QNAP TS859U-RP+
RAID: 5
Firmware: 4.0.1 (latest)
OS accessing the QNAP: Windows 2008 R2 SP1
Function: Backup device B2D (replica)
Could someone help me with this? If more information is needed, please let me know. I have done a lot of research already, but no luck so far.
[SOLVED] Raid volume Unmounted
MaartenKa - Starting out - Posts: 13 - Joined: Mon Jul 04, 2011 8:37 pm
Last edited by MaartenKa on Wed Jul 24, 2013 7:22 pm, edited 2 times in total.
pwilson - Guru - Posts: 22533 - Joined: Fri Mar 06, 2009 11:20 am - Location: Victoria, BC, Canada (UTC-08:00)
Re: Raid volume Unmounted
Could we perhaps start with some more basic information?
Please login to your NAS via "SSH" (login as "admin") and type the following commands. Cut & paste these commands to your NAS, then cut & paste the output back to this message thread. Type "exit" to end your SSH session.
getcfg system version
df -h
cat /proc/mdstat
hdparm -i /dev/sd[a-h] 2>/dev/null | grep Model
Patrick M. Wilson
Victoria, BC Canada
QNAP TS-470 Pro w/ 4 * Western Digital WD30EFRX WD Reds (RAID5) - - Single 8.1TB Storage Pool FW: QTS 4.2.0 Build 20151023 - Kali Linux v1.06 (64bit)
Forums: View My Profile - Search My Posts - View My Photo - View My Location - Top Community Posters
QNAP: Turbo NAS User Manual - QNAP Wiki - QNAP Tutorials - QNAP FAQs
Please review: When you're asking a question, please include the following.
MaartenKa - Starting out - Posts: 13 - Joined: Mon Jul 04, 2011 8:37 pm
Re: Raid volume Unmounted
Hi Patrick,
Thank you for your reply.
Firmware: 4.0.1
[~] # df -h
Filesystem Size Used Available Use% Mounted on
/dev/ramdisk 139.5M 115.3M 24.1M 83% /
tmpfs 64.0M 392.0k 63.6M 1% /tmp
/dev/sda4 364.2M 224.7M 139.5M 62% /mnt/ext
/dev/md9 509.5M 99.4M 410.0M 20% /mnt/HDA_ROOT
tmpfs 32.0M 0 32.0M 0% /.eaccelerator.tmp
[~] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md0 : active raid5 sda3[0] sdh3[7] sdg3[6] sdf3[5] sde3[4] sdd3[3] sdc3[2] sdb3[1]
20500882752 blocks super 1.0 level 5, 64k chunk, algorithm 2 [8/8] [UUUUUUUU]
md8 : active raid1 sdh2[2](S) sdg2[3](S) sdf2[4](S) sde2[5](S) sdd2[6](S) sdc2[7](S) sdb2[1] sda2[0]
530048 blocks [2/2] [UU]
md13 : active raid1 sda4[0] sdh4[7] sdg4[6] sdf4[5] sde4[4] sdd4[3] sdc4[2] sdb4[1]
458880 blocks [8/8] [UUUUUUUU]
bitmap: 0/57 pages [0KB], 4KB chunk
md9 : active raid1 sda1[0] sdh1[7] sdd1[6] sde1[5] sdf1[4] sdg1[3] sdc1[2] sdb1[1]
530048 blocks [8/8] [UUUUUUUU]
bitmap: 0/65 pages [0KB], 4KB chunk
unused devices: <none>
[~] # hdparm -i /dev/sd[a-h] 2>/dev/null | grep Model
Model=Hitachi HDS723030ALA640 , FwRev=MKAOA580, SerialNo= MK0331YHGAE3YA
Model=Hitachi HDS723030ALA640 , FwRev=MKAOA580, SerialNo= MK0331YHGA4T4A
Model=Hitachi HDS723030ALA640 , FwRev=MKAOA580, SerialNo= MK0311YHG2632A
Model=Hitachi HDS723030ALA640 , FwRev=MKAOA580, SerialNo= MK0331YHGA8W4A
pwilson - Guru - Posts: 22533 - Joined: Fri Mar 06, 2009 11:20 am - Location: Victoria, BC, Canada (UTC-08:00)
Re: Raid volume Unmounted
I find it curious that it only listed 4 of your 8 drives. Everything else looks normal, except that /dev/md0 isn't mounted as /share/MD0_DATA.
I would recommend contacting QNAP Customer Service for further assistance, or filling out the Online Support Form. Please refer them to this message thread, as it already contains all the information they are likely to need: http://forum.qnap.com/viewtopic.php?f=25&t=79123
MaartenKa - Starting out - Posts: 13 - Joined: Mon Jul 04, 2011 8:37 pm
Re: Raid volume Unmounted
Okay, I will contact support.
The disk listing is strange indeed.
Thank you for your help so far.
MaartenKa - Starting out - Posts: 13 - Joined: Mon Jul 04, 2011 8:37 pm
Re: [SOLVED] Raid volume Unmounted
I did some more research and tried the following:
http://forum.qnap.com/viewtopic.php?p=224731
I only changed the commands to match the disks I have, and let it run.
Note that the e2fsck_64 takes a long time, depending on the number of disks you have.
swapoff /dev/md8
mdadm -S /dev/md8
mkswap /dev/sda2
mkswap /dev/sdb2
mkswap /dev/sdc2
mkswap /dev/sdd2
mkswap /dev/sde2
mkswap /dev/sdf2
mkswap /dev/sdg2
mkswap /dev/sdh2
swapon /dev/sda2
swapon /dev/sdb2
swapon /dev/sdc2
swapon /dev/sdd2
swapon /dev/sde2
swapon /dev/sdf2
swapon /dev/sdg2
swapon /dev/sdh2
e2fsck_64 -f /dev/md0
mount /dev/md0 /share/MD0_DATA/ -t ext4
reboot
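The long run of mkswap/swapon lines can also be generated with a small loop instead of typed out by hand. A sketch, assuming the same sda-sdh member disks with partition 2 as swap; it only prints the commands, so you can check the device names before running them:

```shell
# Hypothetical helper: emit the per-disk swap re-init commands for sda..sdh.
# Prints the commands only; pipe to sh (or drop the echo) once you've
# verified the device names match your unit.
gen_swap_cmds() {
  for d in a b c d e f g h; do
    echo "mkswap /dev/sd${d}2"
  done
  for d in a b c d e f g h; do
    echo "swapon /dev/sd${d}2"
  done
}
gen_swap_cmds
```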
First post - Posts: 1 - Joined: Sun Jul 13, 2014 4:13 am
Re: [SOLVED] Raid volume Unmounted
This is perfect. Came back from vacation and booted up all the gadgets, and the NAS didn't come up. It told me to run a file check, but that was unavailable since the volume was unmounted. It also only listed the first 4 drives. Quite annoying, to say the least. Found this post, and now the NAS box is happy again (and so am I).
So, thanks for having my exact issue and doing the "heavy lifting" for me (although I guess you would have preferred not to have to do the research in the first place).
First post - Posts: 1 - Joined: Fri Mar 27, 2015 10:36 pm - Location: Brasov, Romania
Re: [SOLVED] Raid volume Unmounted
I had the same problem with a TS-809U after a firmware update. MaartenKa's solution worked just fine, thanks!
New here - Posts: 6 - Joined: Tue May 12, 2009 5:05 am
Re: [SOLVED] Raid volume Unmounted
I had the same issue today after applying an update to 4.2.0 on a 639 PRO RAID 6 with 6 drives.
You will notice my swapoff command changed, and I had fewer drives to mkswap. Also, my machine has e2fsck instead of e2fsck_64.
swapoff /dev/md6
mdadm -S /dev/md6
mkswap /dev/sda2
mkswap /dev/sdb2
mkswap /dev/sdc2
mkswap /dev/sdd2
mkswap /dev/sde2
mkswap /dev/sdf2
swapon /dev/sda2
swapon /dev/sdb2
swapon /dev/sdc2
swapon /dev/sdd2
swapon /dev/sde2
swapon /dev/sdf2
e2fsck -f /dev/md0
mount /dev/md0 /share/MD0_DATA/ -t ext4
reboot
Good Luck
-------------------------------------------------------
TS-639 PRO
6 X Western Digital Red WD30EFRX 3TB Raid 6
Firmware Version 4.1.0
TS-212E
2 X Western Digital Red WD30EFRX 3TB Raid 1
Firmware Version 4.1.0