Hi guys,
We have a serious problem with the RAID10 on our NAS expansion unit, a REXP-1220U-RP.
After the QNAP flagged hard disk 6 as red, we replaced it. Since then, the QNAP has not rebuilt the RAID10, and the pool is in READ ONLY mode; the RAID rebuild was unsuccessful.
Any ideas on how to repair the RAID and make the pool writable again?
[~] # md_checker
Welcome to MD superblock checker (v2.0) - have a nice day~
Scanning system...
RAID metadata found!
UUID: 288d996f:7de1152e:b6968e7c:6c82da8f
Level: raid10
Devices: 8
Name: md3
Chunk Size: 512K
md Version: 1.0
Creation Time: Sep 27 10:51:02 2018
Status: ONLINE (md3) [UUUUUUUU]
===============================================================================================
Enclosure | Port | Block Dev Name | # | Status | Last Update Time | Events | Array State
===============================================================================================
NAS_HOST 1 /dev/sdd3 0 Active May 9 08:56:06 2021 164 AAAAAAAA
NAS_HOST 2 /dev/sdi3 1 Active May 9 08:56:06 2021 164 AAAAAAAA
NAS_HOST 3 /dev/sdh3 2 Active May 9 08:56:06 2021 164 AAAAAAAA
NAS_HOST 4 /dev/sdk3 3 Active May 9 08:56:06 2021 164 AAAAAAAA
NAS_HOST 5 /dev/sde3 4 Active May 9 08:56:06 2021 164 AAAAAAAA
NAS_HOST 6 /dev/sdf3 5 Active May 9 08:56:06 2021 164 AAAAAAAA
NAS_HOST 7 /dev/sdg3 6 Active May 9 08:56:06 2021 164 AAAAAAAA
NAS_HOST 8 /dev/sdj3 7 Active May 9 08:56:06 2021 164 AAAAAAAA
===============================================================================================
RAID metadata found!
UUID: cf8152aa:30e29287:8a7b41a2:c3605860
Level: raid10
Devices: 4
Name: md2
Chunk Size: 512K
md Version: 1.0
Creation Time: Sep 27 10:50:23 2018
Status: ONLINE (md2) [UUUU]
===============================================================================================
Enclosure | Port | Block Dev Name | # | Status | Last Update Time | Events | Array State
===============================================================================================
NAS_HOST 9 /dev/sdl3 0 Active May 9 08:56:06 2021 212 AAAA
NAS_HOST 10 /dev/sdm3 1 Active May 9 08:56:06 2021 212 AAAA
NAS_HOST 11 /dev/sdn3 2 Active May 9 08:56:06 2021 212 AAAA
NAS_HOST 12 /dev/sdo3 3 Active May 9 08:56:06 2021 212 AAAA
===============================================================================================
RAID metadata found!
UUID: 1951faed:c8171e77:26faeeb8:ec2c1e47
Level: raid0
Devices: 2
Name: md1
Chunk Size: 512K
md Version: 1.0
Creation Time: Sep 27 10:32:04 2018
Status: ONLINE (md1) raid0
===============================================================================================
Enclosure | Port | Block Dev Name | # | Status | Last Update Time | Events | Array State
===============================================================================================
NAS_HOST C1 /dev/sda3 0 Active Sep 27 10:32:04 2018 0 AA
NAS_HOST C2 /dev/sdb3 1 Active Sep 27 10:32:04 2018 0 AA
===============================================================================================
RAID metadata found!
UUID: 9acf1586:d331819c:d20c62d8:a105320d
Level: raid10
Devices: 12
Name: md4
Chunk Size: 512K
md Version: 1.0
Creation Time: Nov 12 14:35:57 2018
Status: ONLINE (md4) [UUUUU_UUUUUU]
===============================================================================================
Enclosure | Port | Block Dev Name | # | Status | Last Update Time | Events | Array State
===============================================================================================
EXDR#1 1 /dev/dm-9 0 Active May 9 07:07:10 2021 470072 AAAAA.AAAAAA
EXDR#1 2 /dev/dm-57 1 Active May 9 07:07:10 2021 470072 AAAAA.AAAAAA
EXDR#1 3 /dev/dm-51 2 Active May 9 07:07:10 2021 470072 AAAAA.AAAAAA
EXDR#1 4 /dev/dm-3 3 Active May 9 07:07:10 2021 470072 AAAAA.AAAAAA
EXDR#1 5 /dev/dm-15 4 Active May 9 07:06:27 2021 469601 AAAAAAAAAAAA
---------------------------------- 5 Missing -------------------------------------------
EXDR#1 7 /dev/dm-45 6 Active May 9 07:07:10 2021 470072 AAAAA.AAAAAA
EXDR#1 8 /dev/dm-69 7 Active May 9 07:07:10 2021 470072 AAAAA.AAAAAA
EXDR#1 9 /dev/dm-21 8 Active May 9 07:07:10 2021 470072 AAAAA.AAAAAA
EXDR#1 10 /dev/dm-27 9 Active May 9 07:07:10 2021 470072 AAAAA.AAAAAA
EXDR#1 11 /dev/dm-39 10 Active May 9 07:07:10 2021 470072 AAAAA.AAAAAA
EXDR#1 12 /dev/dm-63 11 Active May 9 07:07:10 2021 470072 AAAAA.AAAAAA
===============================================================================================
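Two details stand out in the md4 listing above: the member at port 6 (slot 5) is missing entirely, and the port-5 member /dev/dm-15 (slot 4) reports an older event count (469601 vs. 470072) with an array-state map that still shows all twelve members active — i.e., its metadata predates the slot-5 failure, so md treats it as stale. If md4 uses the default RAID10 near-2 layout, slots 4 and 5 form a mirror pair, which would explain why the pool dropped to read-only rather than just running degraded. Comparing event counts is the quickest way to spot a stale member; a minimal sketch using the numbers copied from the listing above:

```python
# Event counts per md4 member, copied from the md_checker listing above
# (the slot-5 member is missing entirely, so it does not appear here).
events = {
    "/dev/dm-9": 470072, "/dev/dm-57": 470072, "/dev/dm-51": 470072,
    "/dev/dm-3": 470072, "/dev/dm-15": 469601, "/dev/dm-45": 470072,
    "/dev/dm-69": 470072, "/dev/dm-21": 470072, "/dev/dm-27": 470072,
    "/dev/dm-39": 470072, "/dev/dm-63": 470072,
}

# Members whose event count lags the newest metadata are stale; md will
# not assemble them into the array without a re-add/rebuild (or an
# explicit mdadm --assemble --force).
latest = max(events.values())
stale = [dev for dev, count in events.items() if count < latest]
print(stale)  # ['/dev/dm-15']
```

This only restates what md_checker already shows, but the same comparison is what md itself performs at assembly time, which is why the array will not come up clean on its own.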
[~] # /etc/init.d/init_lvm.sh
Changing old config name...
Reinitialing...
Detect disk(65, 128)...
ignore non-root enclosure disk(65, 128).
Detect disk(8, 80)...
dev_count ++ = 0Detect disk(8, 224)...
dev_count ++ = 1Detect disk(65, 96)...
ignore non-root enclosure disk(65, 96).
Detect disk(8, 48)...
dev_count ++ = 2Detect disk(8, 192)...
dev_count ++ = 3Detect disk(65, 64)...
ignore non-root enclosure disk(65, 64).
Detect disk(8, 16)...
dev_count ++ = 4Detect disk(8, 160)...
dev_count ++ = 5Detect disk(65, 32)...
ignore non-root enclosure disk(65, 32).
Detect disk(8, 128)...
dev_count ++ = 6Detect disk(65, 0)...
ignore non-root enclosure disk(65, 0).
Detect disk(65, 144)...
ignore non-root enclosure disk(65, 144).
Detect disk(8, 96)...
dev_count ++ = 7Detect disk(65, 112)...
ignore non-root enclosure disk(65, 112).
Detect disk(253, 0)...
ignore non-root enclosure disk(253, 0).
Detect disk(8, 64)...
dev_count ++ = 8Detect disk(65, 160)...
ignore non-root enclosure disk(65, 160).
Detect disk(8, 208)...
dev_count ++ = 9Detect disk(65, 80)...
ignore non-root enclosure disk(65, 80).
Detect disk(8, 32)...
ignore non-root enclosure disk(8, 32).
Detect disk(8, 176)...
dev_count ++ = 10Detect disk(65, 48)...
ignore non-root enclosure disk(65, 48).
Detect disk(8, 0)...
dev_count ++ = 11Detect disk(8, 144)...
dev_count ++ = 12Detect disk(65, 16)...
ignore non-root enclosure disk(65, 16).
Detect disk(8, 112)...
dev_count ++ = 13Detect disk(8, 240)...
ignore non-root enclosure disk(8, 240).
Detect disk(65, 128)...
ignore non-root enclosure disk(65, 128).
Detect disk(8, 80)...
Detect disk(8, 224)...
Detect disk(65, 96)...
ignore non-root enclosure disk(65, 96).
Detect disk(8, 48)...
Detect disk(8, 192)...
Detect disk(65, 64)...
ignore non-root enclosure disk(65, 64).
Detect disk(8, 16)...
Detect disk(8, 160)...
Detect disk(65, 32)...
ignore non-root enclosure disk(65, 32).
Detect disk(8, 128)...
Detect disk(65, 0)...
ignore non-root enclosure disk(65, 0).
Detect disk(65, 144)...
ignore non-root enclosure disk(65, 144).
Detect disk(8, 96)...
Detect disk(65, 112)...
ignore non-root enclosure disk(65, 112).
Detect disk(253, 0)...
ignore non-root enclosure disk(253, 0).
Detect disk(8, 64)...
Detect disk(65, 160)...
ignore non-root enclosure disk(65, 160).
Detect disk(8, 208)...
Detect disk(65, 80)...
ignore non-root enclosure disk(65, 80).
Detect disk(8, 32)...
ignore non-root enclosure disk(8, 32).
Detect disk(8, 176)...
Detect disk(65, 48)...
ignore non-root enclosure disk(65, 48).
Detect disk(8, 0)...
Detect disk(8, 144)...
Detect disk(65, 16)...
ignore non-root enclosure disk(65, 16).
Detect disk(8, 112)...
Detect disk(8, 240)...
ignore non-root enclosure disk(8, 240).
sys_startup_p2:got called count = -1
WARNING: duplicate PV zkrKUG0l1OFW4OUcYSybG64pm5ytCS1B is being used from both devices /dev/drbd2 and /dev/md2
Found duplicate PV zkrKUG0l1OFW4OUcYSybG64pm5ytCS1B: using /dev/drbd2 not /dev/md2
Using duplicate PV /dev/drbd2 from subsystem DRBD, ignoring /dev/md2
WARNING: duplicate PV 034XRMRD0rSGp0vT0SXd3bLvcvN31dMk is being used from both devices /dev/drbd3 and /dev/md3
Found duplicate PV 034XRMRD0rSGp0vT0SXd3bLvcvN31dMk: using existing dev /dev/drbd3
WARNING: duplicate PV UffjwVcHkR6c4XVAsEUSD1FN3X9iQfi1 is being used from both devices /dev/drbd4 and /dev/md4
Found duplicate PV UffjwVcHkR6c4XVAsEUSD1FN3X9iQfi1: using existing dev /dev/drbd4
sh: /sys/block/dm-81/dm/pool/tier/relocation_rate: Permission denied
sh: /sys/block/dm-81/dm/pool/tier/hro/reserve_percentage: Permission denied
Done
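The duplicate-PV warnings near the end are expected on QTS: the system layers a DRBD device /dev/drbdN on top of each data array /dev/mdN, so LVM sees the same PV signature through both paths and correctly prefers the upper (DRBD) layer. To confirm at a glance that LVM kept the DRBD device for every PV, the "Found duplicate PV" lines can be reduced to UUID/device pairs — a sketch assuming the script output above was saved to a file named init_lvm.log (a hypothetical filename for illustration):

```shell
# Reduce the "Found duplicate PV" lines to UUID/kept-device pairs to
# verify LVM chose the /dev/drbdN path (not the backing /dev/mdN) for
# every duplicated PV.
grep '^Found duplicate PV' init_lvm.log \
  | sed -E 's/.*PV ([A-Za-z0-9]+): using (existing dev )?([^ ]+).*/\1 \3/'
```

Against the output above this yields one line per PV, each ending in a /dev/drbdN device; a line ending in /dev/mdN instead would indicate LVM picked the wrong layer. In other words, these warnings are cosmetic and not the cause of the read-only pool.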
REXP-1220U-RP | RAID10 Read only - Disk missing after replacement