RAID Group 1 "Not active"
-
- New here
- Posts: 6
- Joined: Tue Mar 24, 2020 11:00 pm
Re: RAID Group 1 "Not active"
I'm in complete agreement: not a good setup, as is now proven.
I see you have been able to offer suggestions to others for commands to try and get the array back online. There were long time frames between disk failures and this whole unit going down. I read in an earlier post that if there is a big time gap, the unit takes the array offline. This could be the case here?
As mentioned I have reliable data in each drive bay now. I would be most grateful if you were able to offer your wisdom? I am fully aware of the risks involved.
- dolbyman
- Guru
- Posts: 35024
- Joined: Sat Feb 12, 2011 2:11 am
- Location: Vancouver BC , Canada
Re: RAID Group 1 "Not active"
With two broken drives (or at least partially broken data) I don't see much sense in it, but what is the result of
(output please in code tags)
Code: Select all
md_checker
-
- New here
- Posts: 6
- Joined: Tue Mar 24, 2020 11:00 pm
Re: RAID Group 1 "Not active"
The command does not work for me:
[admin@NASE5C3B8 ~]# md_checker
-bash: md_checker: command not found
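On some firmware builds the helper may simply not be on the shell's PATH. A quick way to look for it is to walk the usual binary directories; the directory list here is an assumption about QNAP's layout, and `find_tool` is my own throwaway name, not a QTS command:

```shell
# find_tool: look for a named binary in the directories QTS typically
# uses (this list is an assumption, not documented by QNAP).
find_tool() {
    for d in /sbin /usr/sbin /usr/local/sbin /bin /usr/bin /usr/local/bin; do
        if [ -x "$d/$1" ]; then
            echo "$d/$1"
            return 0
        fi
    done
    return 1
}

# Example: find_tool md_checker   (prints the full path if installed)
```

If nothing turns up, a filesystem-wide `find / -name md_checker 2>/dev/null` is the slow fallback.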
- dolbyman
- Guru
- Posts: 35024
- Joined: Sat Feb 12, 2011 2:11 am
- Location: Vancouver BC , Canada
Re: RAID Group 1 "Not active"
It should be there .. what firmware are you running?
-
- New here
- Posts: 4
- Joined: Thu Oct 17, 2013 6:59 pm
Re: RAID Group 1 "Not active"
I have a problem with my NAS TS-1232XU.
I lost my volume.
RAID6 looks alive.
Code: Select all
[~] # md_checker
Welcome to MD superblock checker (v2.0) - have a nice day~
Scanning system...
RAID metadata found!
UUID: ce4362b8:7656aa11:6481e543:bf0b1968
Level: raid6
Devices: 12
Name: md1
Chunk Size: 512K
md Version: 1.0
Creation Time: Jul 26 17:21:01 2019
Status: ONLINE (md1) [UUUUUUUUUUUU]
===============================================================================================
Enclosure | Port | Block Dev Name | # | Status | Last Update Time | Events | Array State
===============================================================================================
NAS_HOST 1 /dev/sdb3 0 Active Jul 28 11:06:15 2020 14397 AAAAAAAAAAAA
NAS_HOST 2 /dev/sda3 1 Active Jul 28 11:06:15 2020 14397 AAAAAAAAAAAA
NAS_HOST 3 /dev/sdd3 2 Active Jul 28 11:06:15 2020 14397 AAAAAAAAAAAA
NAS_HOST 4 /dev/sdc3 3 Active Jul 28 11:06:15 2020 14397 AAAAAAAAAAAA
NAS_HOST 5 /dev/sdh3 4 Active Jul 28 11:06:15 2020 14397 AAAAAAAAAAAA
NAS_HOST 6 /dev/sde3 5 Active Jul 28 11:06:15 2020 14397 AAAAAAAAAAAA
NAS_HOST 7 /dev/sdl3 6 Active Jul 28 11:06:15 2020 14397 AAAAAAAAAAAA
NAS_HOST 8 /dev/sdi3 7 Active Jul 28 11:06:15 2020 14397 AAAAAAAAAAAA
NAS_HOST 9 /dev/sdg3 8 Active Jul 28 11:06:15 2020 14397 AAAAAAAAAAAA
NAS_HOST 10 /dev/sdf3 9 Active Jul 28 11:06:15 2020 14397 AAAAAAAAAAAA
NAS_HOST 11 /dev/sdk3 10 Active Jul 28 11:06:15 2020 14397 AAAAAAAAAAAA
NAS_HOST 12 /dev/sdj3 11 Active Jul 28 11:06:15 2020 14397 AAAAAAAAAAAA
===============================================================================================
How can I connect the drive? Or recover data from it?
-
- New here
- Posts: 4
- Joined: Thu Oct 17, 2013 6:59 pm
Re: RAID Group 1 "Not active"
Code: Select all
[~] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md1 : active raid6 sdb3[0] sdj3[11] sdk3[10] sdf3[9] sdg3[8] sdi3[7] sdl3[6] sde3[5] sdh3[4] sdc3[3] sdd3[2] sda3[1]
38970634240 blocks super 1.0 level 6, 512k chunk, algorithm 2 [12/12] [UUUUUUUUUUUU]
md322 : active raid1 sdj5[11](S) sdk5[10](S) sdf5[9](S) sdg5[8](S) sdi5[7](S) sdl5[6](S) sde5[5](S) sdh5[4](S) sdc5[3](S) sdd5[2](S) sda5[1] sdb5[0]
7235136 blocks super 1.0 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
md256 : active raid1 sdj2[11](S) sdk2[10](S) sdf2[9](S) sdg2[8](S) sdi2[7](S) sdl2[6](S) sde2[5](S) sdh2[4](S) sdc2[3](S) sdd2[2](S) sda2[1] sdb2[0]
530112 blocks super 1.0 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
md13 : active raid1 sdb4[0] sdj4[11] sdk4[10] sdh4[4] sde4[5] sdi4[7] sdl4[6] sdg4[8] sdf4[9] sdc4[3] sdd4[2] sda4[1]
458880 blocks super 1.0 [32/12] [UUUUUUUUUUUU____________________]
bitmap: 1/1 pages [64KB], 65536KB chunk
md9 : active raid1 sdb1[0] sdj1[11] sdk1[10] sdh1[4] sde1[5] sdi1[7] sdl1[6] sdg1[8] sdf1[9] sdc1[3] sdd1[2] sda1[1]
530048 blocks super 1.0 [32/12] [UUUUUUUUUUUU____________________]
bitmap: 1/1 pages [64KB], 65536KB chunk
unused devices: <none>
-
- New here
- Posts: 4
- Joined: Thu Oct 17, 2013 6:59 pm
Re: RAID Group 1 "Not active"
Code: Select all
[~] # /etc/init.d/init_lvm.sh
Changing old config name...
mv: can't rename '/etc/config/qdrbd.conf': No such file or directory
Reinitialing...
Detect disk(8, 0)...
dev_count ++ = 0Detect disk(8, 16)...
dev_count ++ = 1Detect disk(8, 32)...
dev_count ++ = 2Detect disk(8, 48)...
dev_count ++ = 3Detect disk(8, 64)...
dev_count ++ = 4Detect disk(8, 80)...
dev_count ++ = 5Detect disk(8, 96)...
dev_count ++ = 6Detect disk(8, 112)...
dev_count ++ = 7Detect disk(8, 128)...
dev_count ++ = 8Detect disk(8, 144)...
dev_count ++ = 9Detect disk(8, 160)...
dev_count ++ = 10Detect disk(8, 176)...
dev_count ++ = 11Detect disk(8, 0)...
Detect disk(8, 16)...
Detect disk(8, 32)...
Detect disk(8, 48)...
Detect disk(8, 64)...
Detect disk(8, 80)...
Detect disk(8, 96)...
Detect disk(8, 112)...
Detect disk(8, 128)...
Detect disk(8, 144)...
Detect disk(8, 160)...
Detect disk(8, 176)...
sh: /sys/block/sdb/device/qnap_param_latency: Permission denied
sh: /sys/block/sda/device/qnap_param_latency: Permission denied
sh: /sys/block/sdd/device/qnap_param_latency: Permission denied
sh: /sys/block/sdc/device/qnap_param_latency: Permission denied
sh: /sys/block/sdh/device/qnap_param_latency: Permission denied
sh: /sys/block/sde/device/qnap_param_latency: Permission denied
sh: /sys/block/sdl/device/qnap_param_latency: Permission denied
sh: /sys/block/sdi/device/qnap_param_latency: Permission denied
sh: /sys/block/sdg/device/qnap_param_latency: Permission denied
sh: /sys/block/sdf/device/qnap_param_latency: Permission denied
sh: /sys/block/sdk/device/qnap_param_latency: Permission denied
sh: /sys/block/sdj/device/qnap_param_latency: Permission denied
sys_startup_p2:got called count = -1
LV Status NOT available
... many of the same lines
LV Status NOT available
Done
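The repeated "LV Status NOT available" at the end means the md arrays are up but the LVM layer was never activated. On a plain Linux box the manual sequence would be roughly the following; this is only a sketch behind a dry-run guard, and QTS wraps these steps in its own init scripts, so treat the exact commands as assumptions:

```shell
# Manual LVM activation sketch. DRY_RUN=1 only prints each command so
# the sequence can be reviewed; set DRY_RUN=0 on the NAS to execute.
DRY_RUN=1
run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

run pvscan         # rediscover physical volumes (e.g. /dev/md1)
run vgscan         # rediscover volume groups
run vgchange -ay   # activate every logical volume
run lvs            # LVs should now show the 'a' (active) attribute
```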
-
- New here
- Posts: 4
- Joined: Thu Oct 17, 2013 6:59 pm
Re: RAID Group 1 "Not active"
Code: Select all
mdadm --detail /dev/md1
/dev/md1:
Version : 1.0
Creation Time : Fri Jul 26 17:21:01 2019
Raid Level : raid6
Array Size : 38970634240 (37165.29 GiB 39905.93 GB)
Used Dev Size : 3897063424 (3716.53 GiB 3990.59 GB)
Raid Devices : 12
Total Devices : 12
Persistence : Superblock is persistent
Update Time : Tue Jul 28 11:21:23 2020
State : clean
Active Devices : 12
Working Devices : 12
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : 1
UUID : ce4362b8:7656aa11:6481e543:bf0b1968
Events : 14397
Number Major Minor RaidDevice State
0 8 19 0 active sync /dev/sdb3
1 8 3 1 active sync /dev/sda3
2 8 51 2 active sync /dev/sdd3
3 8 35 3 active sync /dev/sdc3
4 8 115 4 active sync /dev/sdh3
5 8 67 5 active sync /dev/sde3
6 8 179 6 active sync /dev/sdl3
7 8 131 7 active sync /dev/sdi3
8 8 99 8 active sync /dev/sdg3
9 8 83 9 active sync /dev/sdf3
10 8 163 10 active sync /dev/sdk3
11 8 147 11 active sync /dev/sdj3
-
- First post
- Posts: 1
- Joined: Sun May 23, 2021 11:00 am
Re: RAID Group 1 "Not active"
I'm having the same 'Not active' problem with my TS-431 with 4 disks in RAID5
md_checker gave the info below:
RAID metadata found!
UUID: e536faa3:fb8c2dc4:ca794381:b38e82f7
Level: raid5
Devices: 4
Name: md1
Chunk Size: 64K
md Version: 1.0
Creation Time: Dec 1 04:48:36 2015
Status: OFFLINE
===============================================================================
Disk | Device | # | Status | Last Update Time | Events | Array State
===============================================================================
1 /dev/sda3 0 Active May 15 11:29:21 2021 1103 AA.A
2 /dev/sdb3 1 Active May 15 11:29:21 2021 1103 AA.A
3 /dev/sdc3 2 Active May 15 01:40:56 2021 555 AAAA
4 /dev/sdd3 3 Active May 15 01:40:56 2021 555 AAAA
===============================================================================
I found this thread and tried mdadm -CfR --assume-clean /dev/md1 -l 5 -n 4 -c 64 -e 1.0 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
The status became ONLINE, but I was still not able to access the files.
I restarted the NAS, but the problem remained. The recover option in Storage Manager is greyed out.
The md_checker output is now:
RAID metadata found!
UUID: 98ce9f2e:bfc7265f:6f1e718d:e742fee4
Level: raid5
Devices: 4
Name: md1
Chunk Size: 64K
md Version: 1.0
Creation Time: May 22 19:05:14 2021
Status: OFFLINE
===============================================================================
Disk | Device | # | Status | Last Update Time | Events | Array State
===============================================================================
1 /dev/sda3 0 Active May 22 19:08:53 2021 2 AAAA
2 /dev/sdb3 1 Active May 22 19:08:53 2021 2 AAAA
3 /dev/sdc3 2 Active May 22 19:08:53 2021 2 AAAA
4 /dev/sdd3 3 Active May 22 19:08:53 2021 2 AAAA
===============================================================================
Any suggestions on what can be done to save it?
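For reference, a `-CfR --assume-clean` recreate must mirror md_checker's report exactly: `-l` from Level, `-n` from Devices, `-c` from Chunk Size, `-e` from md Version, and the device list in the order of the `#` column. A helper that only prints the command (never runs it) makes that mapping explicit; `build_recreate` is my own name, not a QNAP tool:

```shell
# Compose (but never execute) the superblock-recreate command from the
# geometry md_checker reports. Printing it first allows a sanity check
# against the table before anything destructive happens.
build_recreate() {
    level=$1 ndisks=$2 chunk=$3
    shift 3
    printf 'mdadm -CfR --assume-clean /dev/md1 -l %s -n %s -c %s -e 1.0 %s\n' \
        "$level" "$ndisks" "$chunk" "$*"
}

# Geometry from the md_checker output above (RAID5, 4 disks, 64K chunk):
build_recreate 5 4 64 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
```

Note that recreating resets the UUID and event counters (which is exactly what the second md_checker dump shows), so it should only ever be attempted once, with the geometry verified first.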
-
- New here
- Posts: 2
- Joined: Sat Aug 28, 2021 9:42 am
Re: RAID Group 1 "Not active"
I'm having the same 'Not active' problem with my TS-431 with 4 disks in RAID 10.
md_checker gave the info below:
RAID metadata found!
UUID: 41d52c2f:e2d3c30d:98d0d075:478155db
Level: raid10
Devices: 4
Name: md1
Chunk Size: 512K
md Version: 1.0
Creation Time: Sep 15 11:36:53 2020
Status: OFFLINE
===============================================================================================
Enclosure | Port | Block Dev Name | # | Status | Last Update Time | Events | Array State
===============================================================================================
NAS_HOST 1 /dev/sda3 0 Active Aug 27 05:26:10 2021 441 AA.A
NAS_HOST 2 /dev/sdb3 1 Active Aug 27 05:23:51 2021 441 AAAA
NAS_HOST 3 /dev/sdc3 2 Active Aug 27 05:23:47 2021 438 AAAA
NAS_HOST 4 /dev/sdd3 3 Active Aug 27 05:23:47 2021 438 AAAA
===============================================================================================
I executed the command /etc/init.d/init_lvm.sh
result:
changing old config name...
mv: unable to rename '/etc/config/qdrbd.conf' : No such file or directory
Reinitialing...
Detect disk (8,0)...
dev_count ++ = 0Detect disk (8, 16)...
dev_count ++ = 1Detect disk (8, 32)...
dev_count ++ = 2Detect disk (8, 48)...
dev_count ++ = 3Detect disk (8, 0)...
Detect disk(8,16)...
Detect disk(8,32)...
Detect disk(8,48)...
Unable to open module list!: No such file or directory
Unable to open module list!: No such file or directory
sh: /sys/block/sda/device/qnap_param_latency: Permission denied
sh: /sys/block/sda/device/qnap_param_latency: Permission denied
sh: /sys/block/sda/device/qnap_param_latency: Permission denied
sh: /sys/block/sda/device/qnap_param_latency: Permission denied
Unable to open module list!: No such file or directory
Unable to open module list!: No such file or directory
sys_startup_p2:got called count = -1
Done
How can access be restored and the RAID10 brought back online?
- dolbyman
- Guru
- Posts: 35024
- Joined: Sat Feb 12, 2011 2:11 am
- Location: Vancouver BC , Canada
Re: RAID Group 1 "Not active"
Looks like two disks are desynced .. if they were on the same raid0 side, your data is toast (raid6 would have been better) .. hope you have backups
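The desync can be read straight from the Events column of the md_checker table above: members whose counter lags the maximum were dropped from the array earlier. A throwaway check, with the sample values hard-coded from that table:

```shell
# Flag array members whose event counter lags the newest one.
# Device/event pairs copied from the md_checker output in the post above.
events='sda3 441
sdb3 441
sdc3 438
sdd3 438'

max=$(echo "$events" | awk '$2 > m { m = $2 } END { print m }')
stale=$(echo "$events" | awk -v m="$max" '$2 < m { print $1 }')
echo "stale members: $stale"
```

Here sdc3 and sdd3 stopped updating three events before the others, which is why mdadm refuses to start the array automatically.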
-
- New here
- Posts: 2
- Joined: Sat Aug 28, 2021 9:42 am
Re: RAID Group 1 "Not active"
the fact of the matter is that there are no backups
-
- First post
- Posts: 1
- Joined: Tue Oct 26, 2021 7:48 am
Re: RAID Group 1 "Not active"
Hi there!
I am having the same issue with a TS-863U-RP.
Any advice and assistance on how to proceed would be appreciated.
The md_checker output is as follows.
Thanks in advance!
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[~] # md_checker
Welcome to MD superblock checker (v1.4) - have a nice day~
Scanning system...
HAL firmware detected!
Scanning Enclosure 0...
RAID metadata found!
UUID: 48d80b7c:16f603f0:598cca1b:14b1cf75
Level: raid10
Devices: 8
Name: md1
Chunk Size: 512K
md Version: 1.0
Creation Time: Jun 14 18:10:33 2017
Status: OFFLINE
===============================================================================
Disk | Device | # | Status | Last Update Time | Events | Array State
===============================================================================
1 /dev/sda3 0 Active Oct 25 20:51:55 2021 1309152 AAAA.A.A
2 /dev/sdb3 1 Active Oct 25 20:51:55 2021 1309152 AAAA.A.A
3 /dev/sde3 2 Active Oct 25 20:51:55 2021 1309152 AAAA.A.A
4 /dev/sdc3 3 Active Oct 25 20:51:55 2021 1309152 AAAA.A.A
5 /dev/sdh3 4 Rebuild Oct 1 20:06:27 2021 726792 AAAAAAAA
7 /dev/sdf3 4 Rebuild Oct 25 20:35:34 2021 1308851 AAAAAA.A
6 /dev/sdg3 5 Active Oct 25 20:49:15 2021 1309115 AAAA.A.A
-------------- 6 Missing -------------------------------------------
8 /dev/sdd3 7 Active Oct 25 20:51:55 2021 1309152 AAAA.A.A
===============================================================================
WARNING: Duplicate device detected for #(4)!
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-
- New here
- Posts: 5
- Joined: Mon Mar 07, 2022 10:06 am
Re: RAID Group 1 "Not active"
Hi, I did not see a command for RAID 10. Can someone post the mdadm -CfR --assume-clean command I should be using? Thanks.
Welcome to MD superblock checker (v1.4) - have a nice day~
Scanning system...
HAL firmware detected!
Scanning Enclosure 0...
RAID metadata found!
UUID: 091e81ab:3e9434ab:11791435:1bfbeb59
Level: raid10
Devices: 8
Name: md1
Chunk Size: 512K
md Version: 1.0
Creation Time: Oct 13 14:58:44 2021
Status: OFFLINE
===============================================================================
Disk | Device | # | Status | Last Update Time | Events | Array State
===============================================================================
1 /dev/sdg3 0 Active Mar 6 16:41:52 2022 96 AAAA.AAA
2 /dev/sdh3 1 Active Mar 6 16:41:52 2022 96 AAAA.AAA
3 /dev/sde3 2 Active Mar 6 16:41:52 2022 96 AAAA.AAA
4 /dev/sdf3 3 Active Mar 6 16:41:52 2022 96 AAAA.AAA
5 /dev/sdc3 4 Active Mar 6 16:40:40 2022 94 AAAAAAAA
6 /dev/sdd3 5 Active Mar 6 16:40:40 2022 94 AAAAAAAA
7 /dev/sdb3 6 Active Mar 6 16:41:52 2022 96 AAAA.AAA
8 /dev/sda3 7 Active Mar 6 16:41:52 2022 96 AAAA.AAA
===============================================================================
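Since no one posted a RAID10 variant: the sketch below is an illustration, not verified advice. The geometry is copied from the md_checker table above (level 10, 8 devices, 512K chunk, md version 1.0), and the device list must follow the `#` column, not /dev name order. A non-destructive `mdadm --stop /dev/md1` followed by `mdadm --assemble --scan` is worth trying before any recreate, since `-CfR` rewrites every superblock:

```shell
# Device order taken from md_checker's '#' column (0..7) above --
# NOT alphabetical /dev order. Getting this wrong destroys the data.
order='/dev/sdg3 /dev/sdh3 /dev/sde3 /dev/sdf3 /dev/sdc3 /dev/sdd3 /dev/sdb3 /dev/sda3'

# Print the last-resort recreate command for review; run it only after
# non-destructive assembly has failed.
printf 'mdadm -CfR --assume-clean /dev/md1 -l 10 -n 8 -c 512 -e 1.0 %s\n' "$order"
```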
- dolbyman
- Guru
- Posts: 35024
- Joined: Sat Feb 12, 2011 2:11 am
- Location: Vancouver BC , Canada
Re: RAID Group 1 "Not active"
I'd first try a
Code: Select all
mdadm --assemble --scan
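If the scan works, md1 should reappear as active in /proc/mdstat. A tiny check, with a sample line hard-coded so it reads standalone; on the NAS you would feed it the real file instead (`check_md1 < /proc/mdstat`):

```shell
# Report whether an mdstat-style listing on stdin shows md1 as active.
check_md1() {
    if grep -q '^md1 : active'; then
        echo "md1 is active"
    else
        echo "md1 missing"
    fi
}

printf 'md1 : active raid5 sda3[0] sdb3[1]\n' | check_md1
```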