Volume not in active mode
-
- Starting out
- Posts: 12
- Joined: Tue Dec 11, 2018 7:48 pm
Volume not in active mode
Hi to all
Yesterday I had an HDD4 failure, so I replaced it with a new one and the volume started rebuilding.
After seven hours there was the message: [Harddisk 2] medium error. Please run bad block scan on this drive or replace the drive if the error persists.
After that the QNAP volume is not in active mode and I cannot see any files.
The GUI is OK and I can see the directory names on the network drive, but I cannot go inside to see any files.
There is also the message: Out of disk space on ramdisk.
Please help with the problem because it's urgent.
(Priority: NOT to have any data loss!)
At first, 3 of the 4 disks were in GOOD status.
Now I cannot see any disk, but the error messages keep coming up very fast...
It seems that the GUI isn't working properly.
Many Regards
Christos
- MrVideo
- Experience counts
- Posts: 4742
- Joined: Fri May 03, 2013 2:26 pm
Re: Volume not in active mode
Firmware version and build date?
NAS model?
RAID level?
No backups?
QTS MANUALS
Submit QNAP Support Ticket - QNAP Tutorials, FAQs, Downloads, Wiki - Product Support Status - Moogle's QNAP FAQ help V2
Asking a question, include the following (Thanks to Toxic17)
QNAP md_checker nasreport (release 20210309)
===============================
Model: TS-869L -- RAM: 3G -- FW: QTS 4.1.4 Build 20150522 (used for data storage)
WD60EFRX-68L0BN1(x1)/68MYMN1(x7) Red HDDs -- RAID6: 8x6TB -- Cold spare: 1x6TB
Entware
===============================
Model: TS-451A -- RAM: 2G -- FW: QTS 4.5.2 Build 20210202 (used as a video server)
WL3000GSA6472(x3) White label NAS HDDs -- RAID5: 3x3TB
Entware -- MyKodi 17.3 (default is Kodi 16)
===============================
My 2017 Total Solar Eclipse Photos | My 2019 N. Ireland Game of Thrones tour
-
- Starting out
- Posts: 12
- Joined: Tue Dec 11, 2018 7:48 pm
Re: Volume not in active mode
Firmware 4.2.0
TS-419 P II
RAID 5
Unfortunately no backup.
That's why it is critical to save the data.
- storageman
- Ask me anything
- Posts: 5507
- Joined: Thu Sep 22, 2011 10:57 pm
Re: Volume not in active mode
Run "md_checker" and "df" in putty and post the results.
A full ramdisk can stop rebuild from completing.
Did you open ticket with Qnap?
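To see whether the ramdisk is actually the blocker, the `df` output tells you directly. A minimal sketch of how to spot it, using the `df -k` figures posted later in this thread as sample input (on the NAS itself you would pipe `df -k` straight into the awk filter):

```shell
# Sample captured from the df -k output posted in this thread; on the NAS
# you would run:  df -k | awk 'NR>1 { ... }'
df_output='Filesystem 1k-blocks Used Available Use% Mounted on
/dev/ramdisk 33709 33709 0 100% /
tmpfs 65536 248 65288 0% /tmp
/dev/sda4 379888 373060 6828 98% /mnt/ext
/dev/md9 521684 147968 373716 28% /mnt/HDA_ROOT'

# Flag any filesystem at or above 95% usage (column 5 is Use%)
echo "$df_output" | awk 'NR>1 { sub(/%/, "", $5); if ($5+0 >= 95) print $1, "is", $5"% full, mounted on", $6 }'
```

Anything flagged here (the root ramdisk especially) needs space freed, usually stray log files, before a rebuild has any chance of completing.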
-
- Starting out
- Posts: 12
- Joined: Tue Dec 11, 2018 7:48 pm
Re: Volume not in active mode
Yes: GEX-169-10706
But still no answer...
- storageman
- Ask me anything
- Posts: 5507
- Joined: Thu Sep 22, 2011 10:57 pm
Re: Volume not in active mode
and still no answer from you on the outputs I requested...
- dolbyman
- Guru
- Posts: 35248
- Joined: Sat Feb 12, 2011 2:11 am
- Location: Vancouver BC , Canada
Re: Volume not in active mode
2 disks failed on a RAID5 .. data is toast
either a data recovery service for $$$ or just start over with no data
sorry to be so blunt .. but that's how it is .. no sugar coating
-
- Starting out
- Posts: 12
- Joined: Tue Dec 11, 2018 7:48 pm
Re: Volume not in active mode
This is the output you asked for:
[~] # df -k
Filesystem 1k-blocks Used Available Use% Mounted on
/dev/ramdisk 33709 33709 0 100% /
tmpfs 65536 248 65288 0% /tmp
/dev/sda4 379888 373060 6828 98% /mnt/ext
/dev/md9 521684 147968 373716 28% /mnt/HDA_ROOT
./md_checker
Welcome to MD superblock checker (v1.4) - have a nice day~
Scanning system...
Legacy Firmware Detected!
Scanning disks...
RAID metadata found!
UUID: f7c35f90:b2431519:d8161d11:54343d86
Level: raid5
Devices: 4
Name: md0
Chunk Size: 64K
md Version: 00.90.00
Creation Time: Jun 11 14:14:06 2013
Status: OFFLINE
===============================================================================
Disk | Device | # | Status | Last Update Time | Events | Array State
===============================================================================
1 /dev/sda3 0 active Dec 10 19:24:09 2018 0.933038 afafs
-------------- 1 Missing -------------------------------------------
3 /dev/sdc3 2 active Dec 10 19:24:09 2018 0.933038 afafs
-------------- 3 Missing -------------------------------------------
===============================================================================
Is there any way to make the status ONLINE?
- OneCD
- Guru
- Posts: 12144
- Joined: Sun Aug 21, 2016 10:48 am
- Location: "... there, behind that sofa!"
Re: Volume not in active mode
Yup, toast.
Replace those drives, recreate your array and restore the data from your backups. Important data would never be kept only on the NAS, so you must have another copy somewhere.
- storageman
- Ask me anything
- Posts: 5507
- Joined: Thu Sep 22, 2011 10:57 pm
Re: Volume not in active mode
It depends on whether you can get disk 2 to come back, but it doesn't look good!
The last thing to try is powering on without drives. Once you hear the beep, reseat the first three drives one at a time, then see if you can log in and check the disk status.
In the unlikely event it comes back in degraded mode, don't try to rebuild; try to copy off the data if you can.
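For reference, the reassembly attempt being hinted at usually takes the shape below. This is only a sketch under a big assumption: that disk 2's member partition (hypothetically /dev/sdb3) reappears after reseating. With only two in-sync members, a 4-disk RAID5 cannot start at all, and a forced assemble should only ever be followed by a read-only mount, ideally after imaging the drives first:

```shell
set -eu

# Member partitions as reported by md_checker earlier in this thread.
# /dev/sdb3 is hypothetical: it only exists if disk 2 comes back online.
members="/dev/sda3 /dev/sdb3 /dev/sdc3"

# Build the forced-assembly command but print it for review instead of
# running it; one wrong device name here can destroy the remaining data.
cmd="mdadm --assemble --run --force /dev/md0 $members"
echo "Review before running: $cmd"
# /share/MD0_DATA is the usual QTS data mountpoint; verify yours first.
echo "Then mount read-only:  mount -o ro /dev/md0 /share/MD0_DATA"
```

The read-only mount is the important part: if the array does start degraded, the goal is copying data off, not rebuilding.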
-
- Starting out
- Posts: 12
- Joined: Tue Dec 11, 2018 7:48 pm
Re: Volume not in active mode
Thanks anyway.
I will wait for the official answer first.
Maybe there will be another way to mount the storage manually....
-
- Starting out
- Posts: 12
- Joined: Tue Dec 11, 2018 7:48 pm
Re: Volume not in active mode
Good morning
I opened the case with ticket GEX-169-10706 on 11/12/2018, but I still have no answer from QNAP tech support.
Do they usually answer this late?
-
- Starting out
- Posts: 12
- Joined: Tue Dec 11, 2018 7:48 pm
Re: Volume not in active mode
Is there any way to manually activate the volume?
I have RAID 5 with 4 disks. I replaced the faulty 2 TB disk 2 with a new 4 TB one, and now I have 4 disks in GOOD status.
Note that disk 4 never finished rebuilding...
-
- Getting the hang of things
- Posts: 71
- Joined: Sun Dec 16, 2018 12:17 am
- Contact:
Re: Volume not in active mode
It seems you have two drives that dropped out of your RAID array. Assuming you have no backups, it may be possible to reassemble the RAID, likely in degraded mode, and recover your data.
The first step is to inspect the RAID5 metadata on all four original drives. If you post back with that info, the next steps can be determined.
The metadata can be displayed with the Linux command: mdadm --examine /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
It is also helpful to know the SMART health status of the missing RAID member drives, plus their make and model. It is likely you are dealing with failing drives.
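To gather the SMART side of this, a loop like the following prints the commands to run over SSH. smartctl ships with QTS firmware, though its path can vary by model, and the four device names are assumptions matching the drive bays in this thread:

```shell
# Print one smartctl invocation per physical disk. -H gives the overall
# health verdict, -i reports the make, model, and serial number.
for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
    echo "smartctl -H -i $d"
done
```

Run each printed command as admin; a FAILED health verdict, or a drive that no longer enumerates at all, points to hardware failure rather than a soft RAID drop.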
On-Line Data Recovery Consultant. RAID / NAS / Linux Specialist.
Serving clients worldwide since 2011. Complex cases welcome.
https://FreeDataRecovery.us
-
- Starting out
- Posts: 12
- Joined: Tue Dec 11, 2018 7:48 pm
Re: Volume not in active mode
Good morning
This is the output you asked for
mdadm --examine /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
/dev/sda3:
Magic : a92b4efc
Version : 00.90.00
UUID : f7c35f90:b2431519:d8161d11:54343d86
Creation Time : Tue Jun 11 14:14:06 2013
Raid Level : raid5
Used Dev Size : 1951945600 (1861.52 GiB 1998.79 GB)
Array Size : 5855836800 (5584.56 GiB 5996.38 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0
Update Time : Mon Dec 10 19:24:09 2018
State : clean
Active Devices : 2
Working Devices : 3
Failed Devices : 2
Spare Devices : 1
Checksum : a1b7941f - correct
Events : 0.933038
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 0 8 3 0 active sync /dev/sda3
0 0 8 3 0 active sync /dev/sda3
1 1 0 0 1 faulty removed
2 2 8 35 2 active sync /dev/sdc3
3 3 0 0 3 faulty removed
4 4 8 51 4 spare /dev/sdd3
mdadm: cannot open /dev/sdb3: No such device or address
/dev/sdc3:
Magic : a92b4efc
Version : 00.90.00
UUID : f7c35f90:b2431519:d8161d11:54343d86
Creation Time : Tue Jun 11 14:14:06 2013
Raid Level : raid5
Used Dev Size : 1951945600 (1861.52 GiB 1998.79 GB)
Array Size : 5855836800 (5584.56 GiB 5996.38 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0
Update Time : Mon Dec 10 19:24:09 2018
State : clean
Active Devices : 2
Working Devices : 3
Failed Devices : 2
Spare Devices : 1
Checksum : a1b79443 - correct
Events : 0.933038
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 2 8 35 2 active sync /dev/sdc3
0 0 8 3 0 active sync /dev/sda3
1 1 0 0 1 faulty removed
2 2 8 35 2 active sync /dev/sdc3
3 3 0 0 3 faulty removed
4 4 8 51 4 spare /dev/sdd3
/dev/sdd3:
Magic : a92b4efc
Version : 00.90.00
UUID : f7c35f90:b2431519:d8161d11:54343d86
Creation Time : Tue Jun 11 14:14:06 2013
Raid Level : raid5
Used Dev Size : 1951945600 (1861.52 GiB 1998.79 GB)
Array Size : 5855836800 (5584.56 GiB 5996.38 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0
Update Time : Mon Dec 10 19:24:09 2018
State : clean
Active Devices : 2
Working Devices : 3
Failed Devices : 2
Spare Devices : 1
Checksum : a1b79451 - correct
Events : 0.933038
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 4 8 51 4 spare /dev/sdd3
0 0 8 3 0 active sync /dev/sda3
1 1 0 0 1 faulty removed
2 2 8 35 2 active sync /dev/sdc3
3 3 0 0 3 faulty removed
4 4 8 51 4 spare /dev/sdd3
Please try to help me ASAP...