I had two disks (disk 1 and disk 3) showing a warning (bad blocks), so I started replacing them one by one. After the first replacement and rebuild, disk 1 went into an error state and disk 2 also went into warning. After all the disk replacements and rebuilds, the system still showed the static LV, but the RAID stayed unmounted. I tried to run a filesystem check, but it always failed (e2fsck: "The superblock could not be read…"). Following various replies on this forum I ended up running /etc/init.d/init_lvm.sh, but now the static LV has disappeared from the Storage & Snapshots tab.
Any idea what I can do next to first remap my RAID group 1 to a static LV and then successfully remount it?
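Rereading the output below, I suspect the e2fsck check failed because blkid reports /dev/md1 as an LVM physical volume, so the ext4 filesystem actually lives on a logical volume on top of it rather than on md1 itself. If the LV were visible again, I assume the check would have to target the LV device instead, something like the following (the vg1-lv1 naming is only my guess at what QNAP uses):

Code:
# Read-only filesystem check against the logical volume, not the md device
# (/dev/mapper/vg1-lv1 is an assumption; I don't know the real LV name)
sudo e2fsck_64 -fn /dev/mapper/vg1-lv1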
Here are some details:
TVS-671
Fw: 5.0.0.1891
6× WD Red/Red Plus 3 TB drives (RAID 5)
Code:
$ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md1 : active raid5 sda3[7] sde3[4] sdf3[5] sdd3[3] sdc3[6] sdb3[8]
      14601558080 blocks super 1.0 level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]
      bitmap: 0/22 pages [0KB], 65536KB chunk

md322 : active raid1 sdf5[5](S) sde5[4](S) sdd5[3](S) sdc5[2](S) sdb5[1] sda5[0]
      7235136 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md256 : active raid1 sdf2[5](S) sde2[4](S) sdd2[3](S) sdc2[2](S) sdb2[1] sda2[0]
      530112 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md13 : active raid1 sda4[27] sdf4[25] sde4[24] sdd4[3] sdc4[26] sdb4[28]
      458880 blocks super 1.0 [24/6] [UUUUUU__________________]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md9 : active raid1 sda1[27] sdf1[25] sde1[24] sdd1[3] sdc1[26] sdb1[28]
      530048 blocks super 1.0 [24/6] [UUUUUU__________________]
      bitmap: 1/1 pages [4KB], 65536KB chunk
unused devices: <none>
$ sudo mdadm --detail /dev/md1
/dev/md1:
        Version : 1.0
  Creation Time : Tue May 19 04:47:41 2015
     Raid Level : raid5
     Array Size : 14601558080 (13925.13 GiB 14952.00 GB)
  Used Dev Size : 2920311616 (2785.03 GiB 2990.40 GB)
   Raid Devices : 6
  Total Devices : 6
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Jan 22 19:07:51 2022
          State : clean
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : 1
           UUID : ef32ec33:a21b4184:bfc35837:dfc7c4c2
         Events : 50479

    Number   Major   Minor   RaidDevice State
       7       8        3        0      active sync   /dev/sda3
       8       8       19        1      active sync   /dev/sdb3
       6       8       35        2      active sync   /dev/sdc3
       3       8       51        3      active sync   /dev/sdd3
       5       8       83        4      active sync   /dev/sdf3
       4       8       67        5      active sync   /dev/sde3
$ sudo blkid /dev/md1
/dev/md1: UUID="FEKGOr-Mj0B-vQDg-zpsG-MlHP-BSye-9QVrl0" TYPE="lvm2pv"
$ sudo pvdisplay
"/dev/md1" is a new physical volume of "13.60 TiB"
--- NEW Physical volume ---
PV Name /dev/md1
VG Name
PV Size 13.60 TiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID FEKGOr-Mj0B-vQDg-zpsG-MlHP-BSye-9QVrl0
$ sudo e2fsck_64 -fp -C 0 /dev/md1
e2fsck_64: Bad magic number in super-block while trying to open /dev/md1
/dev/md1:
The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem. If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
or
e2fsck -b 32768 <device>
/dev/md1 contains a lvm2pv file system
$ sudo vgdisplay
(no output)
$ sudo lvdisplay
(no output)
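Both vgdisplay and lvdisplay return nothing, and pvdisplay reports /dev/md1 as a NEW physical volume, so it looks to me like the volume group metadata on the PV was wiped (presumably by init_lvm.sh) while the RAID itself is clean. Before trying anything destructive, I was thinking of checking whether the NAS still has an LVM metadata backup and, if one exists, test-restoring it, roughly like this (the vg1 name and the backup paths are assumptions on my part; I have not run any of it yet):

Code:
# Look for LVM metadata backups/archives (paths are my guess for this firmware)
ls -l /etc/lvm/backup /etc/lvm/archive

# See which PVs/VGs LVM can still detect
sudo pvscan
sudo vgscan

# Dry-run a restore of the old VG first (vg1 is assumed)
sudo vgcfgrestore --test -f /etc/lvm/backup/vg1 vg1

# Only if the dry run looks sane: restore for real and activate
sudo vgcfgrestore -f /etc/lvm/backup/vg1 vg1
sudo vgchange -ay vg1

If vgcfgrestore complains that the PV UUID in the backup does not match, I understand the PV would first have to be recreated with its old UUID (pvcreate --uuid ... --restorefile ...), but I would rather get confirmation from someone here before going down that road.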
PS: I also created a Helpdesk ticket