RAID ok, Storage Pool error, init_lvm.sh crashes

ptobb
New here
Posts: 8
Joined: Tue Oct 16, 2018 1:49 pm

RAID ok, Storage Pool error, init_lvm.sh crashes

Post by ptobb » Thu Mar 26, 2020 1:21 pm

Dear all,

A few months ago I upgraded from a TVS-682 to a TS-1277XU-RP (firmware 4.4.1.1216). The main storage consists of two RAID 10 arrays (4 x 10 TB each), both in one storage pool. A few nights ago the system stopped responding. After a restart, the RAIDs were offline:

Code: Select all

[~] # md_checker 

Welcome to MD superblock checker (v2.0) - have a nice day~

Scanning system...


RAID metadata found!
UUID:		a7369ea4:0cfa782c:aa0569d7:225141b9
Level:		raid10
Devices:	4
Name:		md1
Chunk Size:	512K
md Version:	1.0
Creation Time:	Jan 29 16:30:20 2020
Status:		OFFLINE
===============================================================================================
 Enclosure | Port | Block Dev Name | # | Status |   Last Update Time   | Events | Array State
===============================================================================================
 NAS_HOST       1        /dev/sdk3   0   Active   Mar 25 06:19:43 2020   572184   AAAA                     
 NAS_HOST       2        /dev/sdi3   1   Active   Mar 25 06:19:43 2020   572184   AAAA                     
 NAS_HOST       4        /dev/sdg3   2   Active   Mar 25 06:19:43 2020   572184   AAAA                     
 NAS_HOST       3        /dev/sdj3   3   Active   Mar 25 06:19:43 2020   572184   AAAA                     
===============================================================================================


RAID metadata found!
UUID:		deb39eb0:45aa18b2:5a0a9b23:601157d0
Level:		raid10
Devices:	4
Name:		md2
Chunk Size:	512K
md Version:	1.0
Creation Time:	Feb 2 15:07:49 2020
Status:		OFFLINE
===============================================================================================
 Enclosure | Port | Block Dev Name | # | Status |   Last Update Time   | Events | Array State
===============================================================================================
 NAS_HOST       5        /dev/sdd3   0   Active   Mar 25 06:19:12 2020      611   AAAA                     
 NAS_HOST       6        /dev/sde3   1   Active   Mar 25 06:19:12 2020      611   AAAA                     
 NAS_HOST       7        /dev/sdf3   2   Active   Mar 25 06:19:12 2020      611   AAAA                     
 NAS_HOST       8        /dev/sdh3   3   Active   Mar 25 06:19:12 2020      611   AAAA                     
===============================================================================================
I was able to get them online again:

Code: Select all

[~] # mdadm -AfR /dev/md1 /dev/sdk3 /dev/sdi3 /dev/sdg3 /dev/sdj3      
mdadm: failed to get exclusive lock on mapfile - continue anyway...
mdadm: /dev/md1 has been started with 4 drives.
[~] # mdadm -AfR /dev/md2 /dev/sdd3 /dev/sde3 /dev/sdf3 /dev/sdh3 
mdadm: failed to get exclusive lock on mapfile - continue anyway...
mdadm: /dev/md2 has been started with 4 drives.
[~] # md_checker                                                       

Welcome to MD superblock checker (v2.0) - have a nice day~

Scanning system...


RAID metadata found!
UUID:		a7369ea4:0cfa782c:aa0569d7:225141b9
Level:		raid10
Devices:	4
Name:		md1
Chunk Size:	512K
md Version:	1.0
Creation Time:	Jan 29 16:30:20 2020
Status:         ONLINE (md1) [UUUU]
===============================================================================================
 Enclosure | Port | Block Dev Name | # | Status |   Last Update Time   | Events | Array State
===============================================================================================
 NAS_HOST       1        /dev/sdk3   0   Active   Mar 25 06:19:43 2020   572184   AAAA                     
 NAS_HOST       2        /dev/sdi3   1   Active   Mar 25 06:19:43 2020   572184   AAAA                     
 NAS_HOST       4        /dev/sdg3   2   Active   Mar 25 06:19:43 2020   572184   AAAA                     
 NAS_HOST       3        /dev/sdj3   3   Active   Mar 25 06:19:43 2020   572184   AAAA                     
===============================================================================================


RAID metadata found!
UUID:		deb39eb0:45aa18b2:5a0a9b23:601157d0
Level:		raid10
Devices:	4
Name:		md2
Chunk Size:	512K
md Version:	1.0
Creation Time:	Feb 2 15:07:49 2020
Status:         ONLINE (md2) [UUUU]
===============================================================================================
 Enclosure | Port | Block Dev Name | # | Status |   Last Update Time   | Events | Array State
===============================================================================================
 NAS_HOST       5        /dev/sdd3   0   Active   Mar 25 06:19:12 2020      611   AAAA                     
 NAS_HOST       6        /dev/sde3   1   Active   Mar 25 06:19:12 2020      611   AAAA                     
 NAS_HOST       7        /dev/sdf3   2   Active   Mar 25 06:19:12 2020      611   AAAA                     
 NAS_HOST       8        /dev/sdh3   3   Active   Mar 25 06:19:12 2020      611   AAAA                     
===============================================================================================
However, restarting the thick volume failed because init_lvm.sh hangs and never terminates:

Code: Select all

[~] # /etc/init.d/init_lvm.sh
Changing old config name...
Reinitialing...
Detect disk(8, 80)...
dev_count ++ = 0Detect disk(8, 48)...
dev_count ++ = 1Detect disk(8, 16)...
dev_count ++ = 2Detect disk(259, 1)...
dev_count ++ = 3Detect disk(8, 160)...
dev_count ++ = 4Detect disk(8, 128)...
dev_count ++ = 5Detect disk(8, 96)...
dev_count ++ = 6Detect disk(259, 0)...
dev_count ++ = 7Detect disk(8, 64)...
dev_count ++ = 8Detect disk(8, 32)...
ignore non-root enclosure disk(8, 32).
Detect disk(8, 0)...
dev_count ++ = 9Detect disk(8, 144)...
dev_count ++ = 10Detect disk(259, 18)...
dev_count ++ = 11Detect disk(8, 112)...
dev_count ++ = 12Detect disk(259, 12)...
dev_count ++ = 13Detect disk(8, 80)...
Detect disk(8, 48)...
Detect disk(8, 16)...
Detect disk(259, 1)...
Detect disk(8, 160)...
Detect disk(8, 128)...
Detect disk(8, 96)...
Detect disk(259, 0)...
Detect disk(8, 64)...
Detect disk(8, 32)...
ignore non-root enclosure disk(8, 32).
Detect disk(8, 0)...
Detect disk(8, 144)...
Detect disk(259, 18)...
Detect disk(8, 112)...
Detect disk(259, 12)...
sys_startup_p2:got called count = -1
  Found duplicate PV E1CVOoNDE2Eb4Q6LfaQ4GrEwLVxHafii: using /dev/drbd1 not /dev/md1
  Using duplicate PV /dev/drbd1 from subsystem DRBD, ignoring /dev/md1
  Found duplicate PV 6bWub2Y9lr8nNsdyYfXQz2OdYhFmbVkW: using existing dev /dev/drbd2
  LV Status              NOT available
sh: /sys/block/dm-5/dm/pool/tier/relocation_rate: Permission denied
With LVM not starting, I have no idea what to try next. Any suggestions? Thanks in advance!
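
For reference, here are a few standard LVM / device-mapper commands that should at least show how far the stack got (a sketch only; their output is not included here):

Code: Select all

# Sketch -- standard LVM / device-mapper diagnostics
pvs -a -o pv_name,vg_name,dev_size   # which block device (drbd vs. md) LVM picks for each PV
lvs -a vg1                           # state of the thin pool tp1 and the logical volumes
dmsetup ls                           # device-mapper targets created so far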

storageman
Ask me anything
Posts: 5511
Joined: Thu Sep 22, 2011 10:57 pm

Re: RAID ok, Storage Pool error, init_lvm.sh crashes

Post by storageman » Thu Mar 26, 2020 4:54 pm

"mount -t ext4 /dev/mapper/cachedev1 /share/CACHEDEV1_DATA"

Does it mount using the above command?

Is there an SSD cache turned on?
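
If it doesn't mount, it's worth first checking whether the mapper device exists at all, e.g. (rough sketch, standard tools only):

Code: Select all

# Rough sketch -- does the device-mapper stack for the pool exist at all?
ls -l /dev/mapper/
dmsetup ls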

ptobb
New here
Posts: 8
Joined: Tue Oct 16, 2018 1:49 pm

Re: RAID ok, Storage Pool error, init_lvm.sh crashes

Post by ptobb » Fri Mar 27, 2020 1:07 am

Thanks a lot for your swift reply! There are a couple of SSDs in the system, but they are not part of any Qtier configuration and SSD caching is turned off.

Your suggested mount command didn't work as /dev/mapper/cachedev1 appears to be missing:

Code: Select all

[~] # mount -t ext4 /dev/mapper/cachedev1 /share/CACHEDEV1_DATA
mount: special device /dev/mapper/cachedev1 does not exist
[~] # ls /dev/mapper/
control
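
Since only the control node is there, it looks like none of the LVM stack has been activated yet. To confirm that LVM at least still sees the physical volumes on md1/md2, something like this should help (a sketch with the standard LVM tools; not run yet):

Code: Select all

# Sketch, not run yet -- do the PVs on md1/md2 show up, and is vg1 complete?
pvscan
vgs -o vg_name,pv_count,lv_count,vg_size,vg_free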

ptobb
New here
Posts: 8
Joined: Tue Oct 16, 2018 1:49 pm

Re: RAID ok, Storage Pool error, init_lvm.sh crashes

Post by ptobb » Fri Mar 27, 2020 1:23 am

I have now tried a bit more, but I got stuck in the same way as with init_lvm.sh:

Code: Select all

[~] # qcli_storage 
Enclosure Port Sys_Name      Size      Type   RAID        RAID_Type    Pool TMeta  VolType      VolName 
NAS_HOST  1    /dev/sdk      9.10 TB   data   /dev/md1    RAID 10,512  1(X) 64 GB  flexible     DataVol1(X)
NAS_HOST  2    /dev/sdj      9.10 TB   data   /dev/md1    RAID 10,512  1(X) 64 GB  flexible     DataVol1(X)
NAS_HOST  3    /dev/sdi      9.10 TB   data   /dev/md1    RAID 10,512  1(X) 64 GB  flexible     DataVol1(X)
NAS_HOST  4    /dev/sdh      9.10 TB   data   /dev/md1    RAID 10,512  1(X) 64 GB  flexible     DataVol1(X)
NAS_HOST  5    /dev/sdd      9.10 TB   data   /dev/md2    RAID 10,512  1(X) 64 GB  flexible     DataVol1(X)
NAS_HOST  6    /dev/sde      9.10 TB   data   /dev/md2    RAID 10,512  1(X) 64 GB  flexible     DataVol1(X)
NAS_HOST  7    /dev/sdf      9.10 TB   data   /dev/md2    RAID 10,512  1(X) 64 GB  flexible     DataVol1(X)
NAS_HOST  8    /dev/sdg      9.10 TB   data   /dev/md2    RAID 10,512  1(X) 64 GB  flexible     DataVol1(X)
NAS_HOST  9    /dev/sdb      931.51 GB free   --          --           --   --     --           --      
NAS_HOST  10   /dev/sda      931.51 GB free   --          --           --   --     --           --      
NAS_HOST  P1-1 /dev/nvme0n1  931.51 GB free   --          --           --   --     --           --      
NAS_HOST  P1-2 /dev/nvme1n1  931.51 GB free   --          --           --   --     --           --      
NAS_HOST  P1-3 /dev/nvme2n1  931.51 GB free   --          --           --   --     --           --      
NAS_HOST  P1-4 /dev/nvme3n1  931.51 GB free   --          --           --   --     --           --      
(The six 1 TB SSDs at ports 9, 10, and P1-1 through P1-4 are indeed not formatted yet.)

Code: Select all

[~] # storage_util --sys_startup
Detect disk(8, 80)...
dev_count ++ = 0Detect disk(8, 48)...
dev_count ++ = 1Detect disk(8, 16)...
dev_count ++ = 2Detect disk(259, 6)...
dev_count ++ = 3Detect disk(8, 160)...
dev_count ++ = 4Detect disk(8, 128)...
dev_count ++ = 5Detect disk(8, 96)...
dev_count ++ = 6Detect disk(259, 18)...
dev_count ++ = 7Detect disk(8, 64)...
dev_count ++ = 8Detect disk(8, 32)...
ignore non-root enclosure disk(8, 32).
Detect disk(8, 0)...
dev_count ++ = 9Detect disk(8, 144)...
dev_count ++ = 10Detect disk(259, 0)...
dev_count ++ = 11Detect disk(8, 112)...
dev_count ++ = 12Detect disk(259, 12)...
dev_count ++ = 13Detect disk(8, 80)...
Detect disk(8, 48)...
Detect disk(8, 16)...
Detect disk(259, 6)...
Detect disk(8, 160)...
Detect disk(8, 128)...
Detect disk(8, 96)...
Detect disk(259, 18)...
Detect disk(8, 64)...
Detect disk(8, 32)...
ignore non-root enclosure disk(8, 32).
Detect disk(8, 0)...
Detect disk(8, 144)...
Detect disk(259, 0)...
Detect disk(8, 112)...
Detect disk(259, 12)...
This, again, hangs and does not terminate:

Code: Select all

[~] # storage_util --sys_startup_p2
sys_startup_p2:got called count = -1
sh: /sys/block/dm-6/dm/pool/tier/relocation_rate: Permission denied
Afterwards I have to power off the system in order to restart it.
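
(Next time, instead of a hard power-off, I might first try to kill the hung process; I don't know yet whether that actually frees things up. Roughly:)

Code: Select all

# Sketch, untested -- find the hung helper and kill it; <PID> is a placeholder
ps | grep storage_util
kill -9 <PID>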
Last edited by ptobb on Fri Mar 27, 2020 1:42 am, edited 1 time in total.

dolbyman
Guru
Posts: 18074
Joined: Sat Feb 12, 2011 2:11 am
Location: Vancouver BC , Canada

Re: RAID ok, Storage Pool error, init_lvm.sh crashes

Post by dolbyman » Fri Mar 27, 2020 1:34 am

ptobb wrote:
Fri Mar 27, 2020 1:23 am
I now tried to follow XYZ, but got stuck similarly as via init_lvm.sh:
Don't link to that website ... it gets auto-censored to a spam site (it even used to push out malware).

ptobb
New here
Posts: 8
Joined: Tue Oct 16, 2018 1:49 pm

Re: RAID ok, Storage Pool error, init_lvm.sh crashes

Post by ptobb » Fri Mar 27, 2020 1:44 am

dolbyman wrote:
Fri Mar 27, 2020 1:34 am
Don't link to that website .. it gets autocensored to a spam site (used to even push out malware)
Oh, I didn't know. Thanks for pointing that out. I have edited my last post and removed the link.

ptobb
New here
Posts: 8
Joined: Tue Oct 16, 2018 1:49 pm

Re: RAID ok, Storage Pool error, init_lvm.sh crashes

Post by ptobb » Fri Mar 27, 2020 6:11 am

Here is another attempt: I first checked the LVM state by hand and then started init_lvm.sh again, which now fails with yet another error message:

Code: Select all

[~] # vgs
  Found duplicate PV 6bWub2Y9lr8nNsdyYfXQz2OdYhFmbVkW: using existing dev /dev/drbd2
  VG   #PV #LV #SN Attr   VSize  VFree
  vg1    2   4   0 wz--n- 36.35t    0 

[~] # lvdisplay vg1
  Found duplicate PV 6bWub2Y9lr8nNsdyYfXQz2OdYhFmbVkW: using existing dev /dev/drbd2
  --- Logical volume ---
  LV Path                /dev/vg1/lv544
  LV Name                lv544
  VG Name                vg1
  LV UUID                7Vs7lH-CPIc-d1gg-OOGK-kQjS-9aWV-dYKTBI
  LV Write Access        read/write
  LV Creation host, time TS-1277XU, 2020-01-29 16:30:22 +0100
  LV Status              NOT available
  LV Size                144.00 GiB
  Current LE             36864
  Segments               1
  Allocation             inherit
  Read ahead sectors     8192
   
  --- Logical volume ---
  LV Name                tp1
  VG Name                vg1
  LV UUID                Up7aXW-KzJP-2i8m-yWbq-jikO-8G3b-6sy9c9
  LV Write Access        read/write
  LV Creation host, time TS-1277XU, 2020-01-29 16:30:23 +0100
  LV Pool metadata       tp1_tmeta
  LV Pool data           tp1_tierdata_0
  LV Status              NOT available
  LV Size                36.14 TiB
  Current LE             9473212
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
   
  --- Logical volume ---
  LV Path                /dev/vg1/lv1
  LV Name                lv1
  VG Name                vg1
  LV UUID                EdXPQ2-a6n3-f0rE-ogVM-vSqg-fxzx-2hqnbY
  LV Write Access        read/write
  LV Creation host, time TS-1277XU, 2020-01-29 17:33:15 +0100
  LV Pool name           tp1
  LV Status              NOT available
  LV Size                35.00 TiB
  Current LE             9175040
  Segments               1
  Allocation             inherit
  Read ahead sectors     8192
   
  --- Logical volume ---
  LV Path                /dev/vg1/lv1312
  LV Name                lv1312
  VG Name                vg1
  LV UUID                BUYYXb-P7M5-Zlmc-mNec-Fx8S-3Gd2-nUDWJt
  LV Write Access        read/write
  LV Creation host, time TS-1277XU, 2020-02-02 15:07:51 +0100
  LV Status              NOT available
  LV Size                3.72 GiB
  Current LE             952
  Segments               1
  Allocation             inherit
  Read ahead sectors     8192
   
[~] # lvchange -ay vg1/lv1                                                                                                                                       
  Found duplicate PV 6bWub2Y9lr8nNsdyYfXQz2OdYhFmbVkW: using existing dev /dev/drbd2
   
[~] # lvchange -ay vg1/tp1
  WARNING: duplicate PV 6bWub2Y9lr8nNsdyYfXQz2OdYhFmbVkW is being used from both devices /dev/drbd2 and /dev/md2
  Found duplicate PV 6bWub2Y9lr8nNsdyYfXQz2OdYhFmbVkW: using existing dev /dev/drbd2

[~] # lvscan -v
    Using logical volume(s) on command line.
    PV on device /dev/drbd2 (147:2 37634) is also on device /dev/md2 (9:2 2306) 6bWub2-Y9lr-8nNs-dyYf-XQz2-OdYh-FmbVkW
  WARNING: duplicate PV 6bWub2Y9lr8nNsdyYfXQz2OdYhFmbVkW is being used from both devices /dev/drbd2 and /dev/md2
  Found duplicate PV 6bWub2Y9lr8nNsdyYfXQz2OdYhFmbVkW: using existing dev /dev/drbd2
  inactive          '/dev/vg1/lv544' [144.00 GiB] inherit
  ACTIVE            '/dev/vg1/tp1' [36.14 TiB] inherit
  ACTIVE            '/dev/vg1/lv1' [35.00 TiB] inherit
  inactive          '/dev/vg1/lv1312' [3.72 GiB] inherit

[~] # vgscan --mknodes -v
    Wiping cache of LVM-capable devices
    Wiping internal VG cache
  Reading all physical volumes.  This may take a while...
    Using volume group(s) on command line.
    PV on device /dev/drbd2 (147:2 37634) is also on device /dev/md2 (9:2 2306) 6bWub2-Y9lr-8nNs-dyYf-XQz2-OdYh-FmbVkW
  WARNING: duplicate PV 6bWub2Y9lr8nNsdyYfXQz2OdYhFmbVkW is being used from both devices /dev/drbd2 and /dev/md2
  Found duplicate PV 6bWub2Y9lr8nNsdyYfXQz2OdYhFmbVkW: using existing dev /dev/drbd2
  Found volume group "vg1" using metadata type lvm2
    Using logical volume(s) on command line.
    PV on device /dev/drbd2 (147:2 37634) is also on device /dev/md2 (9:2 2306) 6bWub2-Y9lr-8nNs-dyYf-XQz2-OdYh-FmbVkW
  WARNING: duplicate PV 6bWub2Y9lr8nNsdyYfXQz2OdYhFmbVkW is being used from both devices /dev/drbd2 and /dev/md2
  Found duplicate PV 6bWub2Y9lr8nNsdyYfXQz2OdYhFmbVkW: using existing dev /dev/drbd2

[~] # qcli_storage -d
Enclosure  Port  Sys_Name     Type      Size      Alias          Signature   Partitions  Model  
NAS_HOST   1     /dev/sda     HDD:data  9.10 TB   Disk 1         QNAP        5           Seagate ST10000NM0478-2H7100
NAS_HOST   2     /dev/sdb     HDD:data  9.10 TB   Disk 2         QNAP        5           Seagate ST10000NM0478-2H7100
NAS_HOST   3     /dev/sde     HDD:data  9.10 TB   Disk 3         QNAP        5           Seagate ST10000NM0478-2H7100
NAS_HOST   4     /dev/sdd     HDD:data  9.10 TB   Disk 4         QNAP        5           Seagate ST10000NM0478-2H7100
NAS_HOST   5     /dev/sdj     HDD:data  9.10 TB   Disk 5         QNAP FLEX   5           WDC WD101KFBX-68R56N0
NAS_HOST   6     /dev/sdi     HDD:data  9.10 TB   Disk 6         QNAP FLEX   5           WDC WD101KFBX-68R56N0
NAS_HOST   7     /dev/sdc     HDD:data  9.10 TB   Disk 7         QNAP FLEX   5           WDC WD100EFAX-68LHPN0
NAS_HOST   8     /dev/sdf     HDD:data  9.10 TB   Disk 8         QNAP FLEX   5           WDC WD100EFAX-68LHPN0
NAS_HOST   9     /dev/sdh     SSD:free  931.51 GB Disk 9         QNAP        5           Samsung SSD 850 EVO 1TB
NAS_HOST   10    /dev/sdg     SSD:free  931.51 GB Disk 10        QNAP        5           Samsung SSD 850 EVO 1TB
NAS_HOST   P1-1  /dev/nvme0n1 SSD:free  931.51 GB --             QNAP        5           Samsung SSD 970 EVO 1TB
NAS_HOST   P1-2  /dev/nvme1n1 SSD:free  931.51 GB --             QNAP        5           Samsung SSD 970 EVO 1TB
NAS_HOST   P1-3  /dev/nvme2n1 SSD:free  931.51 GB --             QNAP        5           Samsung SSD 970 EVO 1TB
NAS_HOST   P1-4  /dev/nvme3n1 SSD:free  931.51 GB --             QNAP        5           Samsung SSD 970 EVO 1TB

[~] # cat /proc/mdstat  
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] 
md1 : active raid10 sda3[0] sde3[3] sdd3[2] sdb3[1]
      19512966144 blocks super 1.0 512K chunks 2 near-copies [4/4] [UUUU]
      
md2 : active raid10 sdj3[0] sdf3[3] sdc3[2] sdi3[1]
      19512966144 blocks super 1.0 512K chunks 2 near-copies [4/4] [UUUU]
      
md322 : active raid1 sdf5[7](S) sdc5[6](S) sdi5[5](S) sdj5[4](S) sdd5[3](S) sde5[2](S) sdb5[1] sda5[0]
      7235136 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md256 : active raid1 sdf2[7](S) sdc2[6](S) sdi2[5](S) sdj2[4](S) sdd2[3](S) sde2[2](S) sdb2[1] sda2[0]
      530112 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md321 : active raid1 nvme3n1p5[4](S) nvme2n1p5[3](S) nvme1n1p5[2] nvme0n1p5[0]
      8283712 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md13 : active raid1 sda4[0] nvme2n1p4[36] sdg4[37] nvme1n1p4[38] nvme0n1p4[39] sdh4[40] nvme3n1p4[41] sdf4[35] sdc4[34] sdj4[33] sdi4[32] sdd4[3] sde4[2] sdb4[1]
      458880 blocks super 1.0 [32/14] [UUUUUUUUUUUUUU__________________]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md9 : active raid1 sda1[0] nvme2n1p1[36] sdg1[37] nvme1n1p1[38] nvme0n1p1[39] sdh1[40] nvme3n1p1[41] sdf1[35] sdc1[34] sdj1[33] sdi1[32] sdd1[3] sde1[2] sdb1[1]
      530048 blocks super 1.0 [32/14] [UUUUUUUUUUUUUU__________________]
      bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>

[~] #  /etc/init.d/init_lvm.sh
Changing old config name...
Reinitialing...
Detect disk(8, 80)...
dev_count ++ = 0Detect disk(8, 48)...
dev_count ++ = 1Detect disk(8, 16)...
dev_count ++ = 2Detect disk(259, 18)...
dev_count ++ = 3Detect disk(8, 160)...
ignore non-root enclosure disk(8, 160).
Detect disk(8, 128)...
dev_count ++ = 4Detect disk(8, 96)...
dev_count ++ = 5Detect disk(259, 0)...
dev_count ++ = 6Detect disk(8, 64)...
dev_count ++ = 7Detect disk(8, 32)...
dev_count ++ = 8Detect disk(8, 0)...
dev_count ++ = 9Detect disk(8, 144)...
dev_count ++ = 10Detect disk(259, 1)...
dev_count ++ = 11Detect disk(8, 112)...
dev_count ++ = 12Detect disk(259, 12)...
dev_count ++ = 13Detect disk(8, 80)...
Detect disk(8, 48)...
Detect disk(8, 16)...
Detect disk(259, 18)...
Detect disk(8, 160)...
ignore non-root enclosure disk(8, 160).
Detect disk(8, 128)...
Detect disk(8, 96)...
Detect disk(259, 0)...
Detect disk(8, 64)...
Detect disk(8, 32)...
Detect disk(8, 0)...
Detect disk(8, 144)...
Detect disk(259, 1)...
Detect disk(8, 112)...
Detect disk(259, 12)...
sys_startup_p2:got called count = -1
1: Failure: (127) Device minor not allocated
additional info from kernel:
unknown minor
Command 'drbdsetup-84 disk-options 1 --disk-bypass=yes' terminated with exit code 10
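
The last error ("Device minor not allocated" for minor 1) suggests that /dev/drbd1 does not even exist this time, while LVM still prefers /dev/drbd2 over /dev/md2. Before trying anything else I would like to see what the DRBD layer and device-mapper actually look like (a sketch with standard tools; not run yet):

Code: Select all

# Sketch, not run yet -- state of the DRBD layer and of device-mapper
cat /proc/drbd
ls -l /dev/drbd*
dmsetup status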
