TS-470 Pro RAID 5 (4 disks) Degraded, plugged out, etc.

Questions about SNMP, Power, System, Logs, disk, & RAID.
Post Reply
grzyp
New here
Posts: 6
Joined: Fri Oct 10, 2014 4:49 am

TS-470 Pro RAID 5 (4 disks) Degraded, plugged out, etc.

Post by grzyp »

Hey,
For a few days now I have had a problem with HDD4 in my NAS.
For some reason the disk fails and gets plugged out, and I have to rebuild the RAID on it (disabling applications etc. so the rebuild task can finish).
Does anyone have an idea what to do in this case?
In the logs I have the following events:

23,"Error","2014-11-26","13:25:31","System","127.0.0.1","localhost","Host: Drive4 plugged out."
22,"Error","2014-11-26","13:25:25","System","127.0.0.1","localhost","[Volume DataVol1, Pool 1] Host: Drive4 failed."
21,"Error","2014-11-26","13:25:10","System","127.0.0.1","localhost","Plugged drive failed to work."
20,"Information","2014-11-26","02:00:27","System","127.0.0.1","localhost","[Volume DataVol1, Pool 1] Succeed to reclaim volume."
19,"Information","2014-11-26","02:00:01","System","127.0.0.1","localhost","[Volume DataVol1, Pool 1] Start to reclaim volume."
18,"Information","2014-11-25","02:00:19","System","127.0.0.1","localhost","[Volume DataVol1, Pool 1] Succeed to reclaim volume."
17,"Information","2014-11-25","02:00:01","System","127.0.0.1","localhost","[Volume DataVol1, Pool 1] Start to reclaim volume."
16,"Information","2014-11-24","16:38:56","System","127.0.0.1","localhost"," [Pool 1] Rebuilding completed with RAID Group 1."
15,"Information","2014-11-24","09:29:56","System","127.0.0.1","localhost"," [Pool 1] Start rebuilding with RAID Group 1."
14,"Information","2014-11-24","09:26:37","System","127.0.0.1","localhost","Host: Drive4 plugged in."
13,"Information","2014-11-24","09:26:37","System","127.0.0.1","localhost"," [RAID Group 1] Degraded. Try to rebuild."
12,"Warning","2014-11-24","09:26:37","System","127.0.0.1","localhost"," [Pool 1] Rebuilding skipped with RAID Group 1."
11,"Error","2014-11-24","09:26:36","System","127.0.0.1","localhost","Host: Drive4 plugged out."
10,"Error","2014-11-24","09:26:31","System","127.0.0.1","localhost","[Volume DataVol1, Pool 1] Host: Drive4 failed."
9,"Warning","2014-11-24","09:26:19","System","127.0.0.1","localhost","Host: Drive4 Write I/O error, ABORTED COMMAND sense_key=0xb, asc=0x0, ascq=0x0, CDB=2a 00 12 a8 70 88 00 00 80 00 ."
8,"Error","2014-11-24","09:26:19","System","127.0.0.1","localhost","Plugged drive failed to work."
7,"Information","2014-11-24","09:01:14","System","127.0.0.1","localhost"," [Pool 1] Start rebuilding with RAID Group 1."
6,"Information","2014-11-24","08:58:16","System","127.0.0.1","localhost","Host: Drive4 plugged in."
5,"Error","2014-11-24","08:58:10","System","127.0.0.1","localhost","Host: Drive4 plugged out."
4,"Error","2014-11-24","08:58:05","System","127.0.0.1","localhost","[Volume DataVol1, Pool 1] Host: Drive4 failed."
3,"Error","2014-11-24","08:57:53","System","127.0.0.1","localhost","Plugged drive failed to work."
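
Given the Write I/O error above, I am also thinking about pulling the SMART data for the failing disk. A minimal check, assuming smartctl is present on this firmware and that HDD4 maps to /dev/sdd (I am not sure of the exact mapping):

# Full SMART report for the suspect disk; adjust the device
# node to whichever one HDD4 actually maps to.
smartctl -a -d ata /dev/sdd
# The usual red flags are reallocated or pending sectors:
smartctl -a -d ata /dev/sdd | grep -E "Reallocated|Pending|Uncorrect"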

I also ran the script from a previous post:

echo "Firmware: $(getcfg system version) Build $(getcfg system 'Build Number')"
for i in {a..d}; do echo -n /dev/sd$i ; hdparm -i /dev/sd$i | grep "Model"; done
cat /proc/mdstat
mdadm -D /dev/md0
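
Note that the last command targets /dev/md0, but there is no md0 on this box (as the mdstat output below shows, the data array is md1), so presumably the detailed query that applies here is:

# Detailed status of the data array, which is md1 on this system, not md0:
mdadm -D /dev/md1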

And I got the following output:


[~] # echo "Firmware: $(getcfg system version) Build $(getcfg system 'Build Number')"
Firmware: 4.1.1 Build 20141101
[~] # for i in {a..d}; do echo -n /dev/sd$i ; hdparm -i /dev/sd$i | grep "Model"; done
/dev/sda/dev/sda: No such file or directory
/dev/sdb Model=WDC WD20EFRX-68EUZN0 , FwRev=80.00A80, SerialNo= WD-WCC4MPU4K41X
/dev/sdc Model=WDC WD20EFRX-68EUZN0 , FwRev=82.00A82, SerialNo= WD-WCC4M8NXCNVA
/dev/sdd Model=WDC WD20EFRX-68EUZN0 , FwRev=80.00A80, SerialNo= WD-WCC4MHK4ESTN
[~] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md1 : active raid5 sdf3[4] sdd3[0] sdb3[2] sdc3[1]
5830678848 blocks super 1.0 level 5, 64k chunk, algorithm 2 [4/3] [UUU_]
[========>............] recovery = 40.9% (795981056/1943559616) finish=184.4min speed=103666K/sec

md256 : active raid1 sdf2[3](S) sdb2[2](S) sdc2[1] sdd2[0]
530112 blocks super 1.0 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk

md13 : active raid1 sdf4[3] sdd4[0] sdb4[2] sdc4[1]
458880 blocks super 1.0 [24/4] [UUUU____________________]
bitmap: 1/1 pages [4KB], 65536KB chunk

md9 : active raid1 sdf1[3] sdd1[0] sdb1[2] sdc1[1]
530112 blocks super 1.0 [24/4] [UUUU____________________]
bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>
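
While the rebuild runs I am polling its progress with a simple loop (watch -n 10 cat /proc/mdstat would also work, if watch is installed here):

# Print the rebuild progress every 10 seconds (Ctrl-C to stop):
while true; do cat /proc/mdstat; sleep 10; done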

Any ideas?
User avatar
pwilson
Guru
Posts: 22533
Joined: Fri Mar 06, 2009 11:20 am
Location: Victoria, BC, Canada (UTC-08:00)

Re: TS-470 Pro RAID 5 (4 disks) Degraded, plugged out, etc

Post by pwilson »

grzyp wrote:Hey,
For a few days now I have had a problem with HDD4 in my NAS.
For some reason the disk fails and gets plugged out, and I have to rebuild the RAID on it (disabling applications etc. so the rebuild task can finish).
Does anyone have an idea what to do in this case?
In the logs I have the following events:

23,"Error","2014-11-26","13:25:31","System","127.0.0.1","localhost","Host: Drive4 plugged out."
22,"Error","2014-11-26","13:25:25","System","127.0.0.1","localhost","[Volume DataVol1, Pool 1] Host: Drive4 failed."
21,"Error","2014-11-26","13:25:10","System","127.0.0.1","localhost","Plugged drive failed to work."
20,"Information","2014-11-26","02:00:27","System","127.0.0.1","localhost","[Volume DataVol1, Pool 1] Succeed to reclaim volume."
19,"Information","2014-11-26","02:00:01","System","127.0.0.1","localhost","[Volume DataVol1, Pool 1] Start to reclaim volume."
18,"Information","2014-11-25","02:00:19","System","127.0.0.1","localhost","[Volume DataVol1, Pool 1] Succeed to reclaim volume."
17,"Information","2014-11-25","02:00:01","System","127.0.0.1","localhost","[Volume DataVol1, Pool 1] Start to reclaim volume."
16,"Information","2014-11-24","16:38:56","System","127.0.0.1","localhost"," [Pool 1] Rebuilding completed with RAID Group 1."
15,"Information","2014-11-24","09:29:56","System","127.0.0.1","localhost"," [Pool 1] Start rebuilding with RAID Group 1."
14,"Information","2014-11-24","09:26:37","System","127.0.0.1","localhost","Host: Drive4 plugged in."
13,"Information","2014-11-24","09:26:37","System","127.0.0.1","localhost"," [RAID Group 1] Degraded. Try to rebuild."
12,"Warning","2014-11-24","09:26:37","System","127.0.0.1","localhost"," [Pool 1] Rebuilding skipped with RAID Group 1."
11,"Error","2014-11-24","09:26:36","System","127.0.0.1","localhost","Host: Drive4 plugged out."
10,"Error","2014-11-24","09:26:31","System","127.0.0.1","localhost","[Volume DataVol1, Pool 1] Host: Drive4 failed."
9,"Warning","2014-11-24","09:26:19","System","127.0.0.1","localhost","Host: Drive4 Write I/O error, ABORTED COMMAND sense_key=0xb, asc=0x0, ascq=0x0, CDB=2a 00 12 a8 70 88 00 00 80 00 ."
8,"Error","2014-11-24","09:26:19","System","127.0.0.1","localhost","Plugged drive failed to work."
7,"Information","2014-11-24","09:01:14","System","127.0.0.1","localhost"," [Pool 1] Start rebuilding with RAID Group 1."
6,"Information","2014-11-24","08:58:16","System","127.0.0.1","localhost","Host: Drive4 plugged in."
5,"Error","2014-11-24","08:58:10","System","127.0.0.1","localhost","Host: Drive4 plugged out."
4,"Error","2014-11-24","08:58:05","System","127.0.0.1","localhost","[Volume DataVol1, Pool 1] Host: Drive4 failed."
3,"Error","2014-11-24","08:57:53","System","127.0.0.1","localhost","Plugged drive failed to work."

I also ran the script from a previous post:

echo "Firmware: $(getcfg system version) Build $(getcfg system 'Build Number')"
for i in {a..d}; do echo -n /dev/sd$i ; hdparm -i /dev/sd$i | grep "Model"; done
cat /proc/mdstat
mdadm -D /dev/md0

And I got the following output:


[~] # echo "Firmware: $(getcfg system version) Build $(getcfg system 'Build Number')"
Firmware: 4.1.1 Build 20141101
[~] # for i in {a..d}; do echo -n /dev/sd$i ; hdparm -i /dev/sd$i | grep "Model"; done
/dev/sda/dev/sda: No such file or directory
/dev/sdb Model=WDC WD20EFRX-68EUZN0 , FwRev=80.00A80, SerialNo= WD-WCC4MPU4K41X
/dev/sdc Model=WDC WD20EFRX-68EUZN0 , FwRev=82.00A82, SerialNo= WD-WCC4M8NXCNVA
/dev/sdd Model=WDC WD20EFRX-68EUZN0 , FwRev=80.00A80, SerialNo= WD-WCC4MHK4ESTN
[~] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md1 : active raid5 sdf3[4] sdd3[0] sdb3[2] sdc3[1]
5830678848 blocks super 1.0 level 5, 64k chunk, algorithm 2 [4/3] [UUU_]
[========>............] recovery = 40.9% (795981056/1943559616) finish=184.4min speed=103666K/sec

md256 : active raid1 sdf2[3](S) sdb2[2](S) sdc2[1] sdd2[0]
530112 blocks super 1.0 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk

md13 : active raid1 sdf4[3] sdd4[0] sdb4[2] sdc4[1]
458880 blocks super 1.0 [24/4] [UUUU____________________]
bitmap: 1/1 pages [4KB], 65536KB chunk

md9 : active raid1 sdf1[3] sdd1[0] sdb1[2] sdc1[1]
530112 blocks super 1.0 [24/4] [UUUU____________________]
bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>

Any ideas?
  1. No NAS model provided. (Unless you can explain how you got 6 drives into a TS-470.)
  2. No drive Make/Model information provided.
  3. Please review: When you're asking a question, please include the following
If you log in to your NAS via SSH as "admin" and run the following commands, they will provide most of the "missing" information.

Code: Select all

#!/bin/sh
rm -f /tmp/nasreport
touch /tmp/nasreport
chmod +x /tmp/nasreport
cat <<EOF >>/tmp/nasreport
#!/bin/sh
#
# NAS Report by Patrick Wilson
# see: http://forum.qnap.com/viewtopic.php?f=185&t=82260#p366188
#
# 
echo "*********************"
echo "** QNAP NAS Report **"
echo "*********************"
echo " "
echo "NAS Model:      \$(getsysinfo model)"
echo "Firmware:       \$(getcfg system version) Build \$(getcfg system 'Build Number')"
echo "System Name:    \$(/bin/hostname)"
echo "Workgroup:      \$(getcfg system workgroup)"
echo "Base Directory: \$(dirname \$(getcfg -f /etc/config/smb.conf Public path))"
echo "NAS IP address: \$(ifconfig \$(getcfg network 'Default GW Device') | grep addr: | awk '{ print \$2 }' | cut -d: -f2)"
echo " " 
echo "Default Gateway Device: \$(getcfg network 'Default GW Device')" 
echo " "
ifconfig \$(getcfg network 'Default GW Device') | grep -v HWaddr
echo " "
echo -n "DNS Nameserver(s):" 
cat /etc/resolv.conf | grep nameserver | cut -d' ' -f2
echo " "
echo " "
echo "HDD Information:"
echo " "
if [ -x /sbin/hdparm ]; then
   for i in {a..f}; do echo -n /dev/sd\$i ; hdparm -i /dev/sd\$i | grep "Model"; done
else 
   echo "   /sbin/hdparm is not present" 
fi
echo " "
echo "Disk Space:"
echo " "
df -h | grep -v qpkg
echo " "
echo "Mount Status:" 
echo " "
mount | grep -v qpkg
echo " " 
echo "RAID Status:" 
echo " " 
cat /proc/mdstat
echo " " 
#echo "QNAP Media Scanner / Transcoder processes running: "
#echo " " 
#/bin/ps | grep medialibrary | grep -v grep
#echo " " 
#echo -n "MediaLibrary Configuration file: " 
#ls -alF /etc/config/medialibrary.conf
#echo " " 
#echo "/etc/config/medialibrary.conf:"
#cat /etc/config/medialibrary.conf
echo " "
echo "Memory Information:" 
echo " "
cat /proc/meminfo | grep Mem
echo " "
echo "NASReport completed on \$(date +'%Y-%m-%d %T') ($0)" 
EOF
sleep 2
clear
/tmp/nasreport
echo "Done." 
#done

Please cut & paste the output of the resulting NASReport back to this message thread. With this information in front of us, we might actually be able to assist you. I sure hope those aren't Seagate DL-series or DM-series drives, which are not RAID-certified.
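
If copying from the terminal is awkward, you can also redirect the report to a file and paste from there (just a convenience, assuming the obvious file name):

Code: Select all

/tmp/nasreport > /tmp/nasreport.txt 2>&1
cat /tmp/nasreport.txt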
Last edited by pwilson on Thu Nov 27, 2014 4:06 am, edited 1 time in total.

Patrick M. Wilson
Victoria, BC Canada
QNAP TS-470 Pro w/ 4 * Western Digital WD30EFRX WD Reds (RAID5) - - Single 8.1TB Storage Pool FW: QTS 4.2.0 Build 20151023 - Kali Linux v1.06 (64bit)
Forums: View My Profile - Search My Posts - View My Photo - View My Location - Top Community Posters
QNAP: Turbo NAS User Manual - QNAP Wiki - QNAP Tutorials - QNAP FAQs

Please review: When you're asking a question, please include the following.
grzyp
New here
Posts: 6
Joined: Fri Oct 10, 2014 4:49 am

Re: TS-470 Pro RAID 5 (4 disks) Degraded, plugged out, etc

Post by grzyp »

[/] # #!/bin/sh
[/] # rm -f /tmp/nasreport
[/] # touch /tmp/nasreport
[/] # chmod +x /tmp/nasreport
[/] # cat <<EOF >>/tmp/nasreport
> #!/bin/sh
> #
> # NAS Report by Patrick Wilson
> # see: http://forum.qnap.com/viewtopic.php?f=185&t=82260#p366188
> #
> #
> echo "*********************"
> echo "** QNAP NAS Report **"
> echo "*********************"
> echo " "
> echo "NAS Model: \$(getsysinfo model)"
> echo "Firmware: \$(getcfg system version) Build \$(getcfg system 'Build Number')"
> echo "System Name: \$(/bin/hostname)"
> echo "Workgroup: \$(getcfg system workgroup)"
> echo "Base Directory: \$(dirname \$(getcfg -f /etc/config/smb.conf Public path))"
> echo "NAS IP address: \$(ifconfig \$(getcfg network 'Default GW Device') | grep addr: | awk '{ print \$2 }' | cut -d: -f2)"
> echo " "
> echo "Default Gateway Device: \$(getcfg network 'Default GW Device')"
> echo " "
> ifconfig \$(getcfg network 'Default GW Device') | grep -v HWaddr
> echo " "
> echo -n "DNS Nameserver(s):"
> cat /etc/resolv.conf | grep nameserver | cut -d' ' -f2
> echo " "
> echo " "
> echo "HDD Information:"
> echo " "
> if [ -x /sbin/hdparm ]; then
> for i in {a..f}; do echo -n /dev/sd\$i ; hdparm -i /dev/sd\$i | grep "Model"; done
> else
> echo " /sbin/hdparm is not present"
> fi
> echo " "
> echo "Disk Space:"
> echo " "
> df -h | grep -v qpkg
> echo " "
> echo "Mount Status:"
> echo " "
> mount | grep -v qpkg
> echo " "
> echo "RAID Status:"
> echo " "
> cat /proc/mdstat
> echo " "
> #echo "QNAP Media Scanner / Transcoder processes running: "
> #echo " "
> #/bin/ps | grep medialibrary | grep -v grep
> #echo " "
> #echo -n "MediaLibrary Configuration file: "
> #ls -alF /etc/config/medialibrary.conf
> #echo " "
> #echo "/etc/config/medialibrary.conf:"
> #cat /etc/config/medialibrary.conf
> echo " "
> echo "Memory Information:"
> echo " "
> cat /proc/meminfo | grep Mem
> echo " "
> echo "NASReport completed on \$(date +'%Y-%m-%d %T') ($0)"
> EOF
[/] # sleep 2

[/] # clear
[/] # /tmp/nasreport
*********************
** QNAP NAS Report **
*********************

NAS Model: TS-470 Pro
Firmware: 4.1.1 Build 20141101
System Name: QNAP
Workgroup: WORKGROUP
Base Directory: /share/CACHEDEV1_DATA
NAS IP address: 192.168.2.222

Default Gateway Device: eth0

inet addr:192.168.2.222 Bcast:192.168.2.255 Mask:255.255.255.0
inet6 addr: fe80::208:9bff:fee5:3acc/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:6965619 errors:0 dropped:0 overruns:0 frame:0
TX packets:11692474 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1181284815 (1.0 GiB) TX bytes:14006737117 (13.0 GiB)


DNS Nameserver(s):192.168.2.1


HDD Information:

/dev/sda/dev/sda: No such file or directory
/dev/sdb Model=WDC WD20EFRX-68EUZN0 , FwRev=80.00A80, SerialNo= WD-WCC4MPU4K41X
/dev/sdc Model=WDC WD20EFRX-68EUZN0 , FwRev=82.00A82, SerialNo= WD-WCC4M8NXCNVA
/dev/sdd Model=WDC WD20EFRX-68EUZN0 , FwRev=80.00A80, SerialNo= WD-WCC4MHK4ESTN
/dev/sde Model= , FwRev=, SerialNo=910054FB4BCA3051
/dev/sdf Model=WDC WD20EFRX-68EUZN0 , FwRev=80.00A80, SerialNo= WD-WCC4MPU4K1RA

Disk Space:

Filesystem Size Used Available Use% Mounted on
/dev/ram0 151.1M 137.0M 14.1M 91% /
devtmpfs 1.4G 8.0k 1.4G 0% /dev
tmpfs 64.0M 392.0k 63.6M 1% /tmp
tmpfs 1.4G 28.0k 1.4G 0% /dev/shm
/dev/md9 509.5M 141.5M 368.0M 28% /mnt/HDA_ROOT
/dev/mapper/cachedev1 5.4T 1.9T 3.4T 36% /share/CACHEDEV1_DATA
/dev/md13 371.0M 281.8M 89.2M 76% /mnt/ext
tmpfs 32.0M 1.7M 30.3M 5% /.eaccelerator.tmp

Mount Status:

/proc on /proc type proc (rw)
devpts on /dev/pts type devpts (rw)
sysfs on /sys type sysfs (rw)
tmpfs on /tmp type tmpfs (rw,size=64M)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/bus/usb type usbfs (rw)
/dev/md9 on /mnt/HDA_ROOT type ext3 (rw,data=ordered)
/dev/mapper/cachedev1 on /share/CACHEDEV1_DATA type ext4 (rw,usrjquota=aquota.user,jqfmt=vfsv0,user_xattr,data=ordered,delalloc,noacl)
/dev/md13 on /mnt/ext type ext3 (rw,data=ordered)
tmpfs on /.eaccelerator.tmp type tmpfs (rw,size=32M)

RAID Status:

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md1 : active raid5 sdf3[4] sdd3[0] sdb3[2] sdc3[1]
5830678848 blocks super 1.0 level 5, 64k chunk, algorithm 2 [4/3] [UUU_]
[=>...................] recovery = 7.8% (152435328/1943559616) finish=5968.0min speed=5001K/sec

md256 : active raid1 sdf2[3](S) sdb2[2](S) sdc2[1] sdd2[0]
530112 blocks super 1.0 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk

md13 : active raid1 sdf4[3] sdd4[0] sdb4[2] sdc4[1]
458880 blocks super 1.0 [24/4] [UUUU____________________]
bitmap: 1/1 pages [4KB], 65536KB chunk

md9 : active raid1 sdf1[3] sdd1[0] sdb1[2] sdc1[1]
530112 blocks super 1.0 [24/4] [UUUU____________________]
bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>


Memory Information:

MemTotal: 2907420 kB
MemFree: 335812 kB

NASReport completed on 2014-11-26 21:01:43 (-sh)
[/] # echo "Done."
Done.
User avatar
pwilson
Guru
Posts: 22533
Joined: Fri Mar 06, 2009 11:20 am
Location: Victoria, BC, Canada (UTC-08:00)

Re: TS-470 Pro RAID 5 (4 disks) Degraded, plugged out, etc

Post by pwilson »

grzyp wrote:

Code: Select all

*********************
** QNAP NAS Report **
*********************

NAS Model:      TS-470 Pro
Firmware:       4.1.1 Build 20141101
System Name:    QNAP
Workgroup:      WORKGROUP
Base Directory: /share/CACHEDEV1_DATA
NAS IP address: 192.168.2.222

Default Gateway Device: eth0

          inet addr:192.168.2.222  Bcast:192.168.2.255  Mask:255.255.255.0
          inet6 addr: fe80::208:9bff:fee5:3acc/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:6965619 errors:0 dropped:0 overruns:0 frame:0
          TX packets:11692474 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1181284815 (1.0 GiB)  TX bytes:14006737117 (13.0 GiB)


DNS Nameserver(s):192.168.2.1


HDD Information:

/dev/sda/dev/sda: No such file or directory
/dev/sdb Model=WDC WD20EFRX-68EUZN0                    , FwRev=80.00A80, SerialNo=     WD-WCC4MPU4K41X
/dev/sdc Model=WDC WD20EFRX-68EUZN0                    , FwRev=82.00A82, SerialNo=     WD-WCC4M8NXCNVA
/dev/sdd Model=WDC WD20EFRX-68EUZN0                    , FwRev=80.00A80, SerialNo=     WD-WCC4MHK4ESTN
/dev/sde Model=        , FwRev=, SerialNo=910054FB4BCA3051
/dev/sdf Model=WDC WD20EFRX-68EUZN0                    , FwRev=80.00A80, SerialNo=     WD-WCC4MPU4K1RA

Disk Space:

Filesystem                Size      Used Available Use% Mounted on
/dev/ram0               151.1M    137.0M     14.1M  91% /
devtmpfs                  1.4G      8.0k      1.4G   0% /dev
tmpfs                    64.0M    392.0k     63.6M   1% /tmp
tmpfs                     1.4G     28.0k      1.4G   0% /dev/shm
/dev/md9                509.5M    141.5M    368.0M  28% /mnt/HDA_ROOT
/dev/mapper/cachedev1     5.4T      1.9T      3.4T  36% /share/CACHEDEV1_DATA
/dev/md13               371.0M    281.8M     89.2M  76% /mnt/ext
tmpfs                    32.0M      1.7M     30.3M   5% /.eaccelerator.tmp

Mount Status:

/proc on /proc type proc (rw)
devpts on /dev/pts type devpts (rw)
sysfs on /sys type sysfs (rw)
tmpfs on /tmp type tmpfs (rw,size=64M)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/bus/usb type usbfs (rw)
/dev/md9 on /mnt/HDA_ROOT type ext3 (rw,data=ordered)
/dev/mapper/cachedev1 on /share/CACHEDEV1_DATA type ext4 (rw,usrjquota=aquota.user,jqfmt=vfsv0,user_xattr,data=ordered,delalloc,noacl)
/dev/md13 on /mnt/ext type ext3 (rw,data=ordered)
tmpfs on /.eaccelerator.tmp type tmpfs (rw,size=32M)

RAID Status:

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md1 : active raid5 sdf3[4] sdd3[0] sdb3[2] sdc3[1]
                 5830678848 blocks super 1.0 level 5, 64k chunk, algorithm 2 [4/3] [UUU_]
                 [=>...................]  recovery =  7.8% (152435328/1943559616) finish=5968.0min speed=5001K/sec

md256 : active raid1 sdf2[3](S) sdb2[2](S) sdc2[1] sdd2[0]
                 530112 blocks super 1.0 [2/2] [UU]
                 bitmap: 0/1 pages [0KB], 65536KB chunk

md13 : active raid1 sdf4[3] sdd4[0] sdb4[2] sdc4[1]
                 458880 blocks super 1.0 [24/4] [UUUU____________________]
                 bitmap: 1/1 pages [4KB], 65536KB chunk

md9 : active raid1 sdf1[3] sdd1[0] sdb1[2] sdc1[1]
                 530112 blocks super 1.0 [24/4] [UUUU____________________]
                 bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>


Memory Information:

MemTotal:        2907420 kB
MemFree:          335812 kB

NASReport completed on 2014-11-26 21:01:43 (-sh)
I only wanted the report output. Thank you for providing it. Your NAS is indeed a TS-470. Your initial message was hard to interpret because it showed /dev/sdf as being present (which would be "HDD6" on a TS-670, for example).

Your HDD1 (/dev/sda) is not showing up, yet somehow the NAS is trying to add /dev/sdf (HDD6) to your RAID5 array. Something is wrong here, and resolving it has the potential for data loss. Please submit a ticket with the QNAP Helpdesk and provide them with remote access to your NAS so that they can manually rebuild your RAID5 array for you.
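
In the meantime it would not hurt to confirm exactly which member devices md1 currently contains and which one is rebuilding (a quick check, assuming mdadm -D behaves normally on this firmware):

Code: Select all

# List md1's member devices, their states, and the rebuild progress:
mdadm -D /dev/md1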

Patrick M. Wilson
Victoria, BC Canada
QNAP TS-470 Pro w/ 4 * Western Digital WD30EFRX WD Reds (RAID5) - - Single 8.1TB Storage Pool FW: QTS 4.2.0 Build 20151023 - Kali Linux v1.06 (64bit)
Forums: View My Profile - Search My Posts - View My Photo - View My Location - Top Community Posters
QNAP: Turbo NAS User Manual - QNAP Wiki - QNAP Tutorials - QNAP FAQs

Please review: When you're asking a question, please include the following.
Post Reply

Return to “System & Disk Volume Management”