RAID Group 1 recovery failed.

RauserT
New here
Posts: 2
Joined: Mon Jan 12, 2015 5:55 pm

RAID Group 1 recovery failed.

Post by RauserT »

Hi to all,

since this morning we have had a problem on our QNAP TS-870U-RP (RAID type: 5).

A disk was reported as disconnected.
After re-inserting the same disk, the rebuild process started.
The next entries in the system log were "Mount the file system read-only" and "Rebuilding skipped with RAID Group 1".
I then replaced Drive 5 and tried to rebuild the RAID group. Result: "RAID Group 1 recovery failed".

Please also see the attached log file:
QNAP_log2.jpg
Could anyone help us, please?
Thank you very much.
cquinonez
New here
Posts: 7
Joined: Fri Aug 21, 2015 4:12 am

Re: RAID Group 1 recovery failed.

Post by cquinonez »

I have the same problem. How did you manage to fix it?
dolbyman
Guru
Posts: 35275
Joined: Sat Feb 12, 2011 2:11 am
Location: Vancouver BC , Canada

Re: RAID Group 1 recovery failed.

Post by dolbyman »

cquinonez wrote:I have the same problem. How did you manage to fix it?
More info than that, please...
cquinonez
New here
Posts: 7
Joined: Fri Aug 21, 2015 4:12 am

Re: RAID Group 1 recovery failed.

Post by cquinonez »

Hello.
We have a NAS with 8 disks; we created a RAID 5 group with 7 disks and 1 disk as a spare.
The spare disk failed this weekend, and today disk number 3 of the RAID group also failed.
At that moment, the volume mounted on this RAID group went into a read-only state.
I then replaced disk number 3, and the system recognized the disk as good, but the RAID group is still inactive.
I have tried to reactivate it, but it always returns the message: "raid group 1 recovery failed".

Please help me; I need to access my information.

Regards.

NAS Model:TS-EC879U-RP
Firmware version: 4.2.2 Build 20160812
dolbyman
Guru
Posts: 35275
Joined: Sat Feb 12, 2011 2:11 am
Location: Vancouver BC , Canada

Re: RAID Group 1 recovery failed.

Post by dolbyman »

Are you certain that the 8th disk was not a member disk?

Because if a hot/cold spare fails and, after that, a normal disk, that should leave a degraded RAID, not a read-only one.

Your scenario sounds more like another drive failing during the rebuild.
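
A quick way to check whether that 8th disk really was a spare (a sketch only; this assumes SSH access as the admin user, and that the data array is md1, as it turns out to be later in this thread) is to look at the md layer directly. Spares are tagged with an (S) suffix in /proc/mdstat, and mdadm prints each member's role:

Code: Select all

# Spares show up with "(S)" after the device name in /proc/mdstat
cat /proc/mdstat

# mdadm lists every member's state (active sync / spare / faulty);
# /dev/md1 is assumed here and may differ on another unit
mdadm --detail /dev/md1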
MrVideo
Experience counts
Posts: 4742
Joined: Fri May 03, 2013 2:26 pm

Re: RAID Group 1 recovery failed.

Post by MrVideo »

Yes, it most certainly does. Using RAID 5 with that many disks was just asking for trouble.
cquinonez
New here
Posts: 7
Joined: Fri Aug 21, 2015 4:12 am

Re: RAID Group 1 recovery failed.

Post by cquinonez »

dolbyman wrote:Are you certain that the 8th disk was not a member disk?

Because if a hot/cold spare fails and, after that, a normal disk, that should leave a degraded RAID, not a read-only one.

Your scenario sounds more like another drive failing during the rebuild.
Yes, I'm sure of that. The 8th disk was a spare disk, and when it failed the RAID group continued operating normally.

I'm attaching some images from the admin console, where you can see the RAID group, the syslog, and the message about the inactive RAID group.
I need to know if there is anything that can be done to resolve this problem.

Thanks.
OneCD
Guru
Posts: 12161
Joined: Sun Aug 21, 2016 10:48 am
Location: "... there, behind that sofa!"

Re: RAID Group 1 recovery failed.

Post by OneCD »

cquinonez wrote:I need to know if there is anything that can be done to resolve this problem.
This won't fix your problem, but may provide more information about your current configuration.

You'll need to SSH / PuTTY into your NAS as the 'admin' user to access the command line.

Then run the following command:

Code: Select all

cat /proc/mdstat
... and post the output back here.
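
If you have never used the command line on the NAS, a typical session looks something like this (the IP address is a placeholder; SSH has to be enabled on the NAS first):

Code: Select all

# Log in over SSH as the admin user (replace the address with your NAS's IP)
ssh admin@192.168.1.50

# Dump the kernel's software-RAID status
cat /proc/mdstat
In the output, each mdX stanza is one array; a count like [7/6] means the array expects 7 members but only 6 are active, and every '_' in the bracketed U pattern marks a missing or failed member.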

cquinonez
New here
Posts: 7
Joined: Fri Aug 21, 2015 4:12 am

Re: RAID Group 1 recovery failed.

Post by cquinonez »

OneCD wrote:
cquinonez wrote:I need to know if there is anything that can be done to resolve this problem.
This won't fix your problem, but may provide more information about your current configuration.

You'll need to SSH / PuTTY into your NAS as the 'admin' user to access the command line.

Then run the following command:

Code: Select all

cat /proc/mdstat
... and post the output back here.
Thanks, here is the output of the command...
cquinonez
New here
Posts: 7
Joined: Fri Aug 21, 2015 4:12 am

Re: RAID Group 1 recovery failed.

Post by cquinonez »

This solved the problem... thanks. Here is the full session:

[~] # df -h
Filesystem Size Used Available Use% Mounted on
none 250.0M 183.1M 66.9M 73% /
devtmpfs 1.9G 8.0k 1.9G 0% /dev
tmpfs 64.0M 284.0k 63.7M 0% /tmp
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 16.0M 0 16.0M 0% /share
/dev/md9 509.5M 135.6M 373.9M 27% /mnt/HDA_ROOT
cgroup_root 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/md13 371.0M 281.7M 89.3M 76% /mnt/ext
tmpfs 1.0M 0 1.0M 0% /mnt/rf/nd
[~] # md_checker

Welcome to MD superblock checker (v1.4) - have a nice day~

Scanning system...

HAL firmware detected!
Scanning Enclosure 0...

RAID metadata found!
UUID: 2e2c2d0d:254e12fb:60fd48fc:c0a5cd8f
Level: raid5
Devices: 7
Name: md1
Chunk Size: 64K
md Version: 1.0
Creation Time: Jun 23 18:25:36 2015
Status: OFFLINE
===============================================================================
Disk | Device | # | Status | Last Update Time | Events | Array State
===============================================================================
1 /dev/sdc3 0 Rebuild Jan 9 13:07:50 2017 1333 AAAAAAA
2 /dev/sdd3 1 Active Jan 9 13:07:50 2017 1333 AAAAAAA
-------------- 2 Missing -------------------------------------------
4 /dev/sda3 3 Active Jan 9 13:07:50 2017 1333 AAAAAAA
5 /dev/sde3 4 Active Jan 9 13:07:50 2017 1333 AAAAAAA
6 /dev/sdf3 5 Active Jan 9 13:07:50 2017 1333 AAAAAAA
7 /dev/sdg3 6 Active Jan 9 13:07:50 2017 1333 AAAAAAA
===============================================================================

[~] # md_checker

Welcome to MD superblock checker (v1.4) - have a nice day~

Scanning system...

HAL firmware detected!
Scanning Enclosure 0...

RAID metadata found!
UUID: 2e2c2d0d:254e12fb:60fd48fc:c0a5cd8f
Level: raid5
Devices: 7
Name: md1
Chunk Size: 64K
md Version: 1.0
Creation Time: Jun 23 18:25:36 2015
Status: OFFLINE
===============================================================================
Disk | Device | # | Status | Last Update Time | Events | Array State
===============================================================================
1 /dev/sdc3 0 Rebuild Jan 9 13:07:50 2017 1333 AAAAAAA
2 /dev/sdd3 1 Active Jan 9 13:07:50 2017 1333 AAAAAAA
3 /dev/sdb3 2 Active Jan 9 13:07:50 2017 1333 AAAAAAA
4 /dev/sda3 3 Active Jan 9 13:07:50 2017 1333 AAAAAAA
5 /dev/sde3 4 Active Jan 9 13:07:50 2017 1333 AAAAAAA
6 /dev/sdf3 5 Active Jan 9 13:07:50 2017 1333 AAAAAAA
7 /dev/sdg3 6 Active Jan 9 13:07:50 2017 1333 AAAAAAA
===============================================================================

[~] # mdadm -A /dev/md1 /dev/sd[dbaefg]3
mdadm: failed to get exclusive lock on mapfile - continue anyway...
mdadm: /dev/md1 assembled from 6 drives - need all 7 to start it (use --run to insist).
[~] # md_checker

Welcome to MD superblock checker (v1.4) - have a nice day~

Scanning system...

HAL firmware detected!
Scanning Enclosure 0...

RAID metadata found!
UUID: 2e2c2d0d:254e12fb:60fd48fc:c0a5cd8f
Level: raid5
Devices: 7
Name: md1
Chunk Size: 64K
md Version: 1.0
Creation Time: Jun 23 18:25:36 2015
Status: OFFLINE
===============================================================================
Disk | Device | # | Status | Last Update Time | Events | Array State
===============================================================================
-------------- 0 Missing -------------------------------------------
2 /dev/sdd3 1 Active Jan 9 13:07:50 2017 1333 AAAAAAA
3 /dev/sdb3 2 Active Jan 9 13:07:50 2017 1333 AAAAAAA
4 /dev/sda3 3 Active Jan 9 13:07:50 2017 1333 AAAAAAA
5 /dev/sde3 4 Active Jan 9 13:07:50 2017 1333 AAAAAAA
6 /dev/sdf3 5 Active Jan 9 13:07:50 2017 1333 AAAAAAA
7 /dev/sdg3 6 Active Jan 9 13:07:50 2017 1333 AAAAAAA
===============================================================================

[~] # mdadm -A /dev/md1 /dev/sd[dbaefg]3 --run
mdadm: failed to get exclusive lock on mapfile - continue anyway...
mdadm: /dev/md1 has been started with 6 drives (out of 7).
[~] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md1 : active raid5 sdd3[1] sdg3[6] sdf3[5] sde3[4] sda3[3] sdb3[2]
11661357696 blocks super 1.0 level 5, 64k chunk, algorithm 2 [7/6] [_UUUUUU]

md256 : active raid1 sdb2[2] sdg2[6](S) sdf2[5](S) sde2[4](S) sda2[3](S) sdd2[1]
530112 blocks super 1.0 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk

md13 : active raid1 sdb4[2] sdg4[6] sdf4[5] sde4[4] sda4[3] sdd4[1]
458880 blocks [8/6] [_UUUUUU_]
bitmap: 48/57 pages [192KB], 4KB chunk

md9 : active raid1 sdb1[2] sdg1[6] sdf1[5] sde1[4] sda1[3] sdd1[1]
530048 blocks [8/6] [_UUUUUU_]
bitmap: 49/65 pages [196KB], 4KB chunk

unused devices: <none>
[~] # /etc/init.d/init_lvm.sh
Changing old config name...
mv: unable to rename `/etc/config/qdrbd.conf': No such file or directory
Reinitialing...
Detect disk(8, 0)...
dev_count ++ = 0Detect disk(8, 16)...
dev_count ++ = 1Detect disk(8, 48)...
dev_count ++ = 2Detect disk(8, 64)...
dev_count ++ = 3Detect disk(8, 80)...
dev_count ++ = 4Detect disk(8, 96)...
dev_count ++ = 5Detect disk(8, 112)...
ignore non-root enclosure disk(8, 112).
Detect disk(8, 0)...
Detect disk(8, 16)...
Detect disk(8, 48)...
Detect disk(8, 64)...
Detect disk(8, 80)...
Detect disk(8, 96)...
Detect disk(8, 112)...
ignore non-root enclosure disk(8, 112).
sys_startup_p2:got called count = -1
Command failed
Done

[~] # df -h
Filesystem Size Used Available Use% Mounted on
none 250.0M 183.4M 66.6M 73% /
devtmpfs 1.9G 8.0k 1.9G 0% /dev
tmpfs 64.0M 316.0k 63.7M 0% /tmp
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 16.0M 0 16.0M 0% /share
/dev/md9 509.5M 135.6M 373.8M 27% /mnt/HDA_ROOT
cgroup_root 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/md13 371.0M 281.7M 89.3M 76% /mnt/ext
tmpfs 1.0M 0 1.0M 0% /mnt/rf/nd
/dev/mapper/cachedev1 156.6T 7.9T 148.6T 5% /share/CACHEDEV1_DATA
[~] # mount
none on /new_root type tmpfs (rw,mode=0755,size=256000k)
/proc on /proc type proc (rw)
devpts on /dev/pts type devpts (rw)
sysfs on /sys type sysfs (rw)
tmpfs on /tmp type tmpfs (rw,size=64M)
tmpfs on /dev/shm type tmpfs (rw)
tmpfs on /share type tmpfs (rw,size=16M)
/dev/md9 on /mnt/HDA_ROOT type ext3 (rw,data=ordered)
cgroup_root on /sys/fs/cgroup type tmpfs (rw)
none on /sys/fs/cgroup/memory type cgroup (rw,memory)
/dev/md13 on /mnt/ext type ext3 (rw,data=ordered)
tmpfs on /mnt/rf/nd type tmpfs (rw,size=1m)
none on /sys/kernel/config type configfs (rw)
/dev/mapper/cachedev1 on /share/CACHEDEV1_DATA type ext4 (rw,usrjquota=aquota.user,jqfmt=vfsv0,user_xattr,data=ordered,delalloc,acl)
[~] # umount /share/CACHEDEV1_DATA/
umount: /share/CACHEDEV1_DATA: device is busy
umount: /share/CACHEDEV1_DATA: device is busy
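
For anyone who finds this thread later: stripped of the exploratory steps, the part of that session that actually brought the volume back is the short sequence below. Treat it as a sketch only: the array name md1 and the member partitions /dev/sd[dbaefg]3 are specific to this box, forcing a degraded assembly carries real risk if another disk is marginal, and a backup or a QNAP support ticket is the safer first move.

Code: Select all

# 1. Check the RAID superblocks and the current array state
md_checker
cat /proc/mdstat

# 2. Assemble the RAID 5 array from the six healthy members and force it
#    to start in degraded mode (one member missing)
mdadm -A /dev/md1 /dev/sd[dbaefg]3 --run

# 3. Re-run QNAP's LVM init script so the data volume is detected and mounted
/etc/init.d/init_lvm.sh

# 4. Confirm the data volume is mounted again
df -h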
MrVideo
Experience counts
Posts: 4742
Joined: Fri May 03, 2013 2:26 pm

Re: RAID Group 1 recovery failed.

Post by MrVideo »

It would have been nice if you had enclosed all of your report in a code block, like this (as a minimum):

Code: Select all

[~] # df -h
Filesystem                Size      Used Available Use% Mounted on
none                    250.0M    183.1M     66.9M  73% /
devtmpfs                  1.9G      8.0k      1.9G   0% /dev
tmpfs                    64.0M    284.0k     63.7M   0% /tmp
tmpfs                     1.9G         0      1.9G   0% /dev/shm
tmpfs                    16.0M         0     16.0M   0% /share
/dev/md9                509.5M    135.6M    373.9M  27% /mnt/HDA_ROOT
cgroup_root               1.9G         0      1.9G   0% /sys/fs/cgroup
/dev/md13               371.0M    281.7M     89.3M  76% /mnt/ext
tmpfs                     1.0M         0      1.0M   0% /mnt/rf/nd
[~] # md_checker

Welcome to MD superblock checker (v1.4) - have a nice day~

Scanning system...

HAL firmware detected!
Scanning Enclosure 0...

RAID metadata found!
UUID:           2e2c2d0d:254e12fb:60fd48fc:c0a5cd8f
Level:          raid5
Devices:        7
Name:           md1
Chunk Size:     64K
md Version:     1.0
Creation Time:  Jun 23 18:25:36 2015
Status:         OFFLINE
===============================================================================
 Disk | Device | # | Status |   Last Update Time   | Events | Array State
===============================================================================
   1  /dev/sdc3  0  Rebuild   Jan  9 13:07:50 2017     1333   AAAAAAA                                                                                                                                                                        
   2  /dev/sdd3  1   Active   Jan  9 13:07:50 2017     1333   AAAAAAA                                                                                                                                                                        
 --------------  2  Missing   -------------------------------------------
   4  /dev/sda3  3   Active   Jan  9 13:07:50 2017     1333   AAAAAAA                                                                                                                                                                        
   5  /dev/sde3  4   Active   Jan  9 13:07:50 2017     1333   AAAAAAA                                                                                                                                                                        
   6  /dev/sdf3  5   Active   Jan  9 13:07:50 2017     1333   AAAAAAA                                                                                                                                                                        
   7  /dev/sdg3  6   Active   Jan  9 13:07:50 2017     1333   AAAAAAA                                                                                                                                                                        
===============================================================================

[~] # md_checker

Welcome to MD superblock checker (v1.4) - have a nice day~

Scanning system...

HAL firmware detected!
Scanning Enclosure 0...

RAID metadata found!
UUID:           2e2c2d0d:254e12fb:60fd48fc:c0a5cd8f
Level:          raid5
Devices:        7
Name:           md1
Chunk Size:     64K
md Version:     1.0
Creation Time:  Jun 23 18:25:36 2015
Status:         OFFLINE
===============================================================================
 Disk | Device | # | Status |   Last Update Time   | Events | Array State
===============================================================================
   1  /dev/sdc3  0  Rebuild   Jan  9 13:07:50 2017     1333   AAAAAAA                                                                                                                                                                        
   2  /dev/sdd3  1   Active   Jan  9 13:07:50 2017     1333   AAAAAAA                                                                                                                                                                        
   3  /dev/sdb3  2   Active   Jan  9 13:07:50 2017     1333   AAAAAAA                                                                                                                                                                        
   4  /dev/sda3  3   Active   Jan  9 13:07:50 2017     1333   AAAAAAA                                                                                                                                                                        
   5  /dev/sde3  4   Active   Jan  9 13:07:50 2017     1333   AAAAAAA                                                                                                                                                                        
   6  /dev/sdf3  5   Active   Jan  9 13:07:50 2017     1333   AAAAAAA                                                                                                                                                                        
   7  /dev/sdg3  6   Active   Jan  9 13:07:50 2017     1333   AAAAAAA                                                                                                                                                                        
===============================================================================

[~] # mdadm -A /dev/md1 /dev/sd[dbaefg]3
mdadm: failed to get exclusive lock on mapfile - continue anyway...
mdadm: /dev/md1 assembled from 6 drives - need all 7 to start it (use --run to insist).
[~] # md_checker

Welcome to MD superblock checker (v1.4) - have a nice day~

Scanning system...

HAL firmware detected!
Scanning Enclosure 0...

RAID metadata found!
UUID:           2e2c2d0d:254e12fb:60fd48fc:c0a5cd8f
Level:          raid5
Devices:        7
Name:           md1
Chunk Size:     64K
md Version:     1.0
Creation Time:  Jun 23 18:25:36 2015
Status:         OFFLINE
===============================================================================
 Disk | Device | # | Status |   Last Update Time   | Events | Array State
===============================================================================
 --------------  0  Missing   -------------------------------------------
   2  /dev/sdd3  1   Active   Jan  9 13:07:50 2017     1333   AAAAAAA                                                                                                                                                                        
   3  /dev/sdb3  2   Active   Jan  9 13:07:50 2017     1333   AAAAAAA                                                                                                                                                                        
   4  /dev/sda3  3   Active   Jan  9 13:07:50 2017     1333   AAAAAAA                                                                                                                                                                        
   5  /dev/sde3  4   Active   Jan  9 13:07:50 2017     1333   AAAAAAA                                                                                                                                                                        
   6  /dev/sdf3  5   Active   Jan  9 13:07:50 2017     1333   AAAAAAA                                                                                                                                                                        
   7  /dev/sdg3  6   Active   Jan  9 13:07:50 2017     1333   AAAAAAA                                                                                                                                                                        
===============================================================================

[~] # mdadm -A /dev/md1 /dev/sd[dbaefg]3 --run
mdadm: failed to get exclusive lock on mapfile - continue anyway...
mdadm: /dev/md1 has been started with 6 drives (out of 7).
[~] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md1 : active raid5 sdd3[1] sdg3[6] sdf3[5] sde3[4] sda3[3] sdb3[2]
      11661357696 blocks super 1.0 level 5, 64k chunk, algorithm 2 [7/6] [_UUUUUU]

md256 : active raid1 sdb2[2] sdg2[6](S) sdf2[5](S) sde2[4](S) sda2[3](S) sdd2[1]
      530112 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md13 : active raid1 sdb4[2] sdg4[6] sdf4[5] sde4[4] sda4[3] sdd4[1]
      458880 blocks [8/6] [_UUUUUU_]
      bitmap: 48/57 pages [192KB], 4KB chunk

md9 : active raid1 sdb1[2] sdg1[6] sdf1[5] sde1[4] sda1[3] sdd1[1]
      530048 blocks [8/6] [_UUUUUU_]
      bitmap: 49/65 pages [196KB], 4KB chunk

unused devices: <none>
[~] # /etc/init.d/init_lvm.sh
Changing old config name...
mv: unable to rename `/etc/config/qdrbd.conf': No such file or directory
Reinitialing...
Detect disk(8, 0)...
dev_count ++ = 0Detect disk(8, 16)...
dev_count ++ = 1Detect disk(8, 48)...
dev_count ++ = 2Detect disk(8, 64)...
dev_count ++ = 3Detect disk(8, 80)...
dev_count ++ = 4Detect disk(8, 96)...
dev_count ++ = 5Detect disk(8, 112)...
ignore non-root enclosure disk(8, 112).
Detect disk(8, 0)...
Detect disk(8, 16)...
Detect disk(8, 48)...
Detect disk(8, 64)...
Detect disk(8, 80)...
Detect disk(8, 96)...
Detect disk(8, 112)...
ignore non-root enclosure disk(8, 112).
sys_startup_p2:got called count = -1
Command failed
Done

[~] # df -h
Filesystem                Size      Used Available Use% Mounted on
none                    250.0M    183.4M     66.6M  73% /
devtmpfs                  1.9G      8.0k      1.9G   0% /dev
tmpfs                    64.0M    316.0k     63.7M   0% /tmp
tmpfs                     1.9G         0      1.9G   0% /dev/shm
tmpfs                    16.0M         0     16.0M   0% /share
/dev/md9                509.5M    135.6M    373.8M  27% /mnt/HDA_ROOT
cgroup_root               1.9G         0      1.9G   0% /sys/fs/cgroup
/dev/md13               371.0M    281.7M     89.3M  76% /mnt/ext
tmpfs                     1.0M         0      1.0M   0% /mnt/rf/nd
/dev/mapper/cachedev1   156.6T      7.9T    148.6T   5% /share/CACHEDEV1_DATA
[~] # mount
none on /new_root type tmpfs (rw,mode=0755,size=256000k)
/proc on /proc type proc (rw)
devpts on /dev/pts type devpts (rw)
sysfs on /sys type sysfs (rw)
tmpfs on /tmp type tmpfs (rw,size=64M)
tmpfs on /dev/shm type tmpfs (rw)
tmpfs on /share type tmpfs (rw,size=16M)
/dev/md9 on /mnt/HDA_ROOT type ext3 (rw,data=ordered)
cgroup_root on /sys/fs/cgroup type tmpfs (rw)
none on /sys/fs/cgroup/memory type cgroup (rw,memory)
/dev/md13 on /mnt/ext type ext3 (rw,data=ordered)
tmpfs on /mnt/rf/nd type tmpfs (rw,size=1m)
none on /sys/kernel/config type configfs (rw)
/dev/mapper/cachedev1 on /share/CACHEDEV1_DATA type ext4 (rw,usrjquota=aquota.user,jqfmt=vfsv0,user_xattr,data=ordered,delalloc,acl)
[~] # umount /share/CACHEDEV1_DATA/
umount: /share/CACHEDEV1_DATA: device is busy
umount: /share/CACHEDEV1_DATA: device is busy
Or you could have enclosed each command run in its own code block. It just makes everything easier to read.

But what has really caught my eye is that you are running 7 drives under RAID 5. With five or more drives, you should be using RAID 6.
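
The trade-off is easy to put into numbers (illustrative figures only, assuming seven equal-sized 2 TB members, which is roughly what the array in this thread works out to):

Code: Select all

# RAID 5, N disks: usable = (N-1) x disk size, survives 1 failure
#   7 x 2 TB  ->  6 x 2 TB = 12 TB usable
# RAID 6, N disks: usable = (N-2) x disk size, survives any 2 failures
#   7 x 2 TB  ->  5 x 2 TB = 10 TB usable
# One disk's worth of capacity is the price of surviving exactly the
# "second failure during rebuild" scenario discussed in this thread.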