TS-809 Pro running FW 3.8.2. After 3 years, it was time to upgrade storage capacity. I came across this thread indicating that the new Seagate 4TB drives are supported.
Original Setup
Bays 1-4: RAID5 using an ext3 FS on Seagate 1.5TB disks (ST31500341AS)
Bay 8: Single volume on ext4 (750GB Seagate)
Target Setup
Bays 1-5: RAID6 using an ext4 FS on ST4000DM000s
Bays 7-8: RAID1 using an ext4 FS on 2 of the 1.5TB disks
I ordered my drives from 2 vendors in 3 different orders. I paid expedited shipping for the first drive so that I could move most of my data there, then ordered 2 more drives from each vendor just in case any single batch had problems. Strangely enough, each order arrived with a different firmware revision, but https://apps1.seagate.com/downloads/request.html says there is no newer firmware for these serial numbers. All drives show Product of Thailand.
Order 1 (temp staging drive):
Date: 13311; FW: CC51
Order 2:
Date: 13401; FW: CC52
Date: 13401; FW: CC52
Order 3:
Date: 13213; FW: CC43
Date: 13232; FW: CC43
First, I moved the data to the temp 4TB drive and removed the RAID5 array. I took two of the 1.5TB drives and used them to upgrade my single-drive Bay 8 volume to a RAID1 volume using Bays 7 & 8. This volume is primarily used for TimeMachine backups.
While that volume was being created and set up, I started bad block scans on all the new 4TB drives that had arrived. At one point, the scans were near 80% complete when the server rebooted itself. Not sure what happened, but I didn't get any alerts.
Also, with the 4TB drives in their new homes, I was not able to get the NAS to recognize a drive in Bay 3. The light would come on and the drive would spin up, but the web console said no drive was present. I eventually powered down the system and restarted after reseating all the drives. That worked.
Created the RAID6 array with 4 drives, ran bad block scans on all drives in the array, and moved the data back. It took about 3 days to do all of that; the bad block scans took about 23 hrs when running all 4 in parallel. Now I'm adding the 5th drive into the array. You can see how slow it is at
http://forum.qnap.com/viewtopic.php?f=1 ... 48#p339548.
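For anyone wanting to watch the expansion from an SSH shell, the progress shows up in /proc/mdstat. The snippet below just demonstrates pulling the percentage out of a status line; the sample line and its numbers are made up for illustration, not taken from my NAS.

```shell
# Made-up /proc/mdstat reshape line, for illustration only.
# On the NAS itself you would read the real one with: cat /proc/mdstat
mdstat_line='[=>...................]  reshape =  8.5% (33554432/390612992) finish=4200.0min speed=1400K/sec'

# Pull the percent-complete figure out of the status line
pct=$(printf '%s\n' "$mdstat_line" | sed -n 's/.*reshape = *\([0-9.]*\)%.*/\1/p')
echo "reshape ${pct}% complete"
```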
Because I created the RAID1 volume in Bays 7 & 8 before the new RAID6 volume was created, the NAS numbered the RAID1 volume md0 and mounted it at /share/MD0_DATA. I have other things that expected MD0_DATA to be on the RAID6 volume. I was able to renumber the volumes by doing the following.
Code:
# Stop the QNAP services holding the shares open, then unmount both volumes
/etc/init.d/services.sh stop
umount /share/MD0_DATA
umount /share/MD1_DATA
Here's the renumbering link I found, which includes the parameter for updating the superblocks:
http://www.jross.org/recreation/compute ... md-device/
Code:
# Stop and remove both existing arrays
mdadm --stop /dev/md0
mdadm --remove /dev/md0
mdadm --stop /dev/md1
mdadm --remove /dev/md1
# Reassemble with the numbers swapped: the RAID6 (bays 1-4) becomes md0
# and the RAID1 (bays 7-8) becomes md1. --update=name rewrites the array
# name stored in each member's superblock so the change persists.
mdadm --assemble /dev/md0 /dev/sd[abcd]3 --update=name
mdadm --assemble /dev/md1 /dev/sd[gh]3 --update=name
I also updated /etc/config/raidtab and /etc/config/smb.conf to correct references to the devices & shares. A reboot later, and the volumes were mounted where I wanted them.
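For the record, those config edits can be scripted instead of done by hand. Here's a sketch run against a throwaway copy; the raidtab-style contents below are invented for illustration (check the format of your actual /etc/config/raidtab before trying anything like this on it).

```shell
# Invented raidtab-style contents, for illustration only
cat > /tmp/raidtab.demo <<'EOF'
raiddev /dev/md0
raiddev /dev/md1
EOF

# Swap every md0/md1 reference, using a placeholder so the
# first substitution doesn't get caught by the second
sed -i 's/md0/mdTMP/g; s/md1/md0/g; s/mdTMP/md1/g' /tmp/raidtab.demo
cat /tmp/raidtab.demo
```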
Will be glad when it's done and I have the extra space. Other than the speed of the setup, it's been pretty uneventful.