There was a power outage in my area that crashed my NAS. When the power came back, I got a "degraded mode" error message and the RAID began rebuilding. I ordered a UPS to keep this from happening again, but to my horror we had another power issue while the RAID was still rebuilding (and before the UPS arrived). This time the machine booted up, but there's no RAID. The drives look fine (green lights across the board); there's just no RAID or data that I can find. The UPS has since arrived, but I feel like throwing it and the NAS out a window. I'm kinda sick to my stomach.
==SYMPTOMS==
Green lights on all hard disks, Green blinking status button, Amber LAN light.
The NAS powers up and is accessible by both the web console and SSH. In the web console under Storage Manager => Volume Management, all 4 hard drives are connected and detected; each shows "READY" as its status and "GOOD" under SMART Information. At the bottom of that page it says "RAID 5 Disk Volume: Drive 1 2 3 4", but the File System, Total Size, and Free Size columns are empty, and the Status column says "Not Active".
==TECH DETAILS==
Model = TS-412
Version = 4.3.3
Build Number = 20220623
A few commands and their output:
Code:
[/mnt/ext/home] # config_util 1
Start to mirror ROOT part...
config_util: ret=-1, /dev/sda1 CANNOT be mounted on /mnt/HDA_ROOT.
config_util: ret=-1, /dev/sdb1 CANNOT be mounted on /mnt/HDB_ROOT.
config_util: ret=-1, /dev/sdc1 CANNOT be mounted on /mnt/HDC_ROOT.
config_util: ret=-1, /dev/sdd1 CANNOT be mounted on /mnt/HDD_ROOT.
config_util: No valid HD exists.
Mirror of ROOT failed
Code:
[/mnt/HDA_ROOT] # mount /dev/sda1
mount: can't find /dev/sda1 in /etc/fstab or /etc/mtab
[/mnt/HDA_ROOT] # mount /dev/sdb1
mount: can't find /dev/sdb1 in /etc/fstab or /etc/mtab
[/mnt/HDA_ROOT] # mount /dev/sdc1
mount: can't find /dev/sdc1 in /etc/fstab or /etc/mtab
[/mnt/HDA_ROOT] # mount /dev/sdd1
mount: can't find /dev/sdd1 in /etc/fstab or /etc/mtab
Code:
[/mnt/ext/home] # cat /proc/mdstat
Personalities : [raid1] [linear] [raid0] [raid10] [raid6] [raid5] [raid4]
md4 : active raid1 sdd2[4](S) sdc2[3](S) sdb2[2] sda2[0]
      530128 blocks super 1.0 [2/2] [UU]

md13 : active raid1 sdc4[0] sdb4[3] sda4[2] sdd4[1]
      458880 blocks [4/4] [UUUU]
      bitmap: 0/57 pages [0KB], 4KB chunk

md9 : active raid1 sdc1[0] sda1[3] sdd1[2] sdb1[1]
      530048 blocks [4/4] [UUUU]
      bitmap: 0/65 pages [0KB], 4KB chunk

unused devices: <none>
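From what I've read in other threads, md9/md13/md4 are QNAP's internal system arrays, and the data volume would normally show up here as md0 built from the drives' third partitions; that's my understanding from other recovery posts, not something I can confirm on this box. A quick check against the mdstat output above confirms the data array simply isn't assembled:

```shell
# Saved copy of the array lines from the /proc/mdstat output above
cat > mdstat.txt <<'EOF'
md4 : active raid1 sdd2[4](S) sdc2[3](S) sdb2[2] sda2[0]
md13 : active raid1 sdc4[0] sdb4[3] sda4[2] sdd4[1]
md9 : active raid1 sdc1[0] sda1[3] sdd1[2] sdb1[1]
EOF

# The RAID 5 data volume would normally appear as "md0 : active raid5 ..."
grep '^md0' mdstat.txt || echo "md0 (data array) not assembled"
```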
Code:
[/share/MD0_DATA/homes/admin] # mount
/proc on /proc type proc (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
sysfs on /sys type sysfs (rw)
tmpfs on /tmp type tmpfs (rw,size=32M)
none on /proc/bus/usb type usbfs (rw)
/dev/sda4 on /mnt/ext type ext3 (rw)
/dev/md9 on /mnt/HDA_ROOT type ext3 (rw,data=ordered)
/dev/ram2 on /mnt/update type ext2 (rw)
tmpfs on /samba type tmpfs (rw,size=64M)
tmpfs on /samba/.samba/lock/msg.lock type tmpfs (rw,size=16M)
tmpfs on /mnt/ext/opt/samba/private/msg.sock type tmpfs (rw,size=16M)
tmpfs on /mnt/rf/nd type tmpfs (rw,size=1m)
nfsd on /proc/fs/nfsd type nfsd (rw)
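Consistent with the mdstat: nothing here is mounted from an md0 device, so the data volume (normally /dev/md0 mounted under /share/MD0_DATA on these boxes, as far as I know) never came up. Same kind of quick check against the mount table above:

```shell
# The relevant lines from the mount output above -- a healthy box would
# also have something like "/dev/md0 on /share/MD0_DATA type ext4 (rw)"
cat > mounts.txt <<'EOF'
/dev/sda4 on /mnt/ext type ext3 (rw)
/dev/md9 on /mnt/HDA_ROOT type ext3 (rw,data=ordered)
/dev/ram2 on /mnt/update type ext2 (rw)
EOF

grep 'md0' mounts.txt || echo "no md0 mount"
```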
Code:
[/] # mdadm --detail /dev/sda4
/dev/sda4:
        Version : 00.90.03
  Creation Time : Fri Dec 14 05:56:58 2012
     Raid Level : raid1
     Array Size : 458880 (448.20 MiB 469.89 MB)
  Used Dev Size : 458880 (448.20 MiB 469.89 MB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 13
    Persistence : Superblock is persistent
  Intent Bitmap : Internal
    Update Time : Sun Dec 4 18:06:38 2022
          State : active
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
           UUID : 82b6f1ad:2496c146:43a6b383:e47dbbe7
         Events : 0.53075

    Number   Major   Minor   RaidDevice State
       0       8       36        0      active sync   /dev/sdc4
       1       8       52        1      active sync   /dev/sdd4
       2       8        4        2      active sync   /dev/sdareal4
       3       8       20        3      active sync   /dev/sdb4
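For anyone who can sanity-check me: the next steps I've gathered from other recovery threads are (1) `mdadm --examine` on each data partition to see whether the RAID 5 superblocks still agree, and (2) if they do, a read-only assemble attempt. I have NOT run these yet, and /dev/sdX3 being the data partitions is my assumption from typical 4-bay QNAP layouts, so please correct me if that's wrong. A dry-run sketch that only prints the commands:

```shell
#!/bin/sh
# DRY RUN: prints the commands instead of executing them.
# /dev/sd[a-d]3 as the RAID 5 data partitions is an assumption --
# verify with "fdisk -l" / "mdadm --examine" before running anything.
for d in /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3; do
    echo "mdadm --examine $d"
done

# Read-only assemble attempt (again, only printed here):
echo "mdadm --assemble --readonly /dev/md0 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3"
```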