
TVS-863+ - File system check fails. How to check correctly via CLI?

Posted: Sun Feb 09, 2020 7:01 pm
by LaUs3r
Hi,

I'm currently struggling with a file system check that doesn't run through smoothly.
The starting point was some hiccup in the NAS that caused the RAID to be rebuilt; the rebuild itself was successful. As for the hiccup, I have no clue what the root cause was.
Nevertheless, after the RAID rebuild I could successfully mount the volume again, but I was asked to check the file system as it was flagged as "not clean".
SMART shows all green, I tested the HDDs via the web UI, and there are no errors.

So, the file system check starts, but at a certain point (~30%) it just stops with the error message "failed to check file system" and the status of the volume is set to "unmounted". At that point I cannot mount the volume anymore, as the "unlock" option is simply not shown when I click on the volume.
After a reboot the volume can be mounted again, but I still get the message that the file system is not clean and needs to be rechecked.

The next step was to perform the file system check via CLI. Here I'm not sure whether I chose the right device to check. Maybe you could give me a hint whether I'm on the right track. Currently the check is still running...

Here's what I did:
Step 1

Code:

/etc/init.d/services.sh stop && /etc/init.d/opentftp.sh stop && /etc/init.d/Qthttpd.sh stop
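One way to double-check that nothing still holds the volume busy before unmounting (assuming fuser is available in the firmware's busybox, which may vary by build):

Code:

# list PIDs of processes still using the file system on this device
# (busybox fuser; availability may vary by firmware build)
fuser -m /dev/mapper/ce_cachedev1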
Step 2

Code:

umount /dev/mapper/ce_cachedev1
I was expecting an error message that the device is busy, but I didn't get one.
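A quick way to confirm that the unmount actually went through (no output means the device is no longer mounted):

Code:

# the volume should no longer show up here after a successful umount
grep ce_cachedev1 /proc/mounts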

Step 3

Code:

e2fsck -f -v -C 0 /dev/mapper/ce_cachedev1
result:

Code:

e2fsck 1.43.9 (8-Feb-2018)
Pass 1: Checking inodes, blocks, and sizes
Inode 500432899 has INDEX_FL flag set on filesystem without htree support.
....
and lots of

Code:

Inode 2687893620 block 8388800 conflicts with critical metadata, skipping block checks.
What does this mean? Was the check skipped? How can this be solved?
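(Side note: to preserve the complete message list for a support ticket, the check can be run with its output teed to a log file (assuming /tmp is writable, as it normally is on QTS):)

Code:

# keep the full e2fsck output for the support ticket
e2fsck -f -v -C 0 /dev/mapper/ce_cachedev1 2>&1 | tee /tmp/e2fsck.log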

My device:

Code:

NAS Model:      TVS-863+
Firmware:       4.4.1 Build 20191206

/dev/md1:
        Version : 1.0
  Creation Time : Fri Jul 21 20:47:37 2017
     Raid Level : raid6
     Array Size : 58538898432 (55827.05 GiB 59943.83 GB)
  Used Dev Size : 9756483072 (9304.51 GiB 9990.64 GB)
   Raid Devices : 8
  Total Devices : 8
    Persistence : Superblock is persistent

    Update Time : Sun Feb  9 11:21:08 2020
          State : clean
 Active Devices : 8
Working Devices : 8
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : 1
           UUID : f2934a01:ec4fa905:94ea6eed:27964224
         Events : 103373

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       8       19        1      active sync   /dev/sdb3
       2       8       51        2      active sync   /dev/sdd3
       3       8       35        3      active sync   /dev/sdc3
       9       8       83        4      active sync   /dev/sdf3
      10       8       67        5      active sync   /dev/sde3
       8       8      115        6      active sync   /dev/sdh3
       7       8       99        7      active sync   /dev/sdg3

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md1 : active raid6 sda3[0] sdg3[7] sdh3[8] sde3[10] sdf3[9] sdc3[3] sdd3[2] sdb3[1]
      58538898432 blocks super 1.0 level 6, 512k chunk, algorithm 2 [8/8] [UUUUUUUU]

md322 : active raid1 sdg5[7](S) sdh5[6](S) sde5[5](S) sdf5[4](S) sdc5[3](S) sdd5[2](S) sdb5[1] sda5[0]
      7235136 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md256 : active raid1 sdg2[7](S) sdh2[6](S) sde2[5](S) sdf2[4](S) sdc2[3](S) sdd2[2](S) sdb2[1] sda2[0]
      530112 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md13 : active raid1 sda4[0] sdb4[1] sdd4[2] sdc4[3] sdf4[4] sde4[5] sdh4[32] sdg4[33]
      458880 blocks super 1.0 [32/8] [UUUUUUUU________________________]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md9 : active raid1 sda1[0] sdg1[33] sdh1[32] sde1[5] sdf1[4] sdc1[3] sdd1[2] sdb1[1]
      530048 blocks super 1.0 [32/8] [UUUUUUUU________________________]
      bitmap: 1/1 pages [4KB], 65536KB chunk

I'm also a little confused by the "State : clean".
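My guess is that "State : clean" only describes the md RAID array, while the "not clean" warning refers to the ext file system itself, which keeps its own state flag in the superblock. If that is right, it should be readable with tune2fs (which may be shipped as tune2fs_64 on QTS; that part is an assumption):

Code:

# show the file system's own state flag ("clean" or "not clean")
tune2fs -l /dev/mapper/ce_cachedev1 | grep -i 'state'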

Re: TVS-863+ - File system check fails. How to check correctly via CLI?

Posted: Mon Feb 10, 2020 6:36 pm
by storageman
Isn't this an encrypted volume?

Re: TVS-863+ - File system check fails. How to check correctly via CLI?

Posted: Mon Feb 10, 2020 6:59 pm
by LaUs3r
storageman wrote: Mon Feb 10, 2020 6:36 pm Isn't this an encrypted volume?
Yes, it's encrypted. Of course I perform the file system check on the decrypted volume.

Re: TVS-863+ - File system check fails. How to check correctly via CLI?

Posted: Mon Feb 10, 2020 7:03 pm
by storageman
I know, but is it getting the correct lock on it via the SSH approach?
I would raise a ticket; I'm not sure you can e2fsck an encrypted volume properly via the backdoor approach.
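At minimum you could verify that nothing still has the mapped device open before running the check. Since the decrypted volume sits on device-mapper, dmsetup should be available (my assumption for QTS):

Code:

# "Open count: 0" means no process is holding the decrypted device
dmsetup info ce_cachedev1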

Re: TVS-863+ - File system check fails. How to check correctly via CLI?

Posted: Mon Feb 10, 2020 7:50 pm
by LaUs3r
storageman wrote: Mon Feb 10, 2020 7:03 pm I know, but is it getting the correct lock on it via the SSH approach?
I would raise a ticket; I'm not sure you can e2fsck an encrypted volume properly via the backdoor approach.
Ah, now I get your point. That's exactly the uncertainty I have as well... A ticket is already open, but no reply so far.
Thanks for your replies though :-)

Re: TVS-863+ - File system check fails. How to check correctly via CLI?

Posted: Mon Feb 10, 2020 8:15 pm
by storageman
Turn off encryption and try:
e2fsck_64 -fp -C 0 /dev/mapper/cachedev1

If that fails, try with a backup superblock:
e2fsck_64 -fp -C 0 -b 32768 /dev/mapper/cachedev1
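Note that -b 32768 assumes a 4K block size. The actual backup superblock locations can be listed from the file system itself (dumpe2fs may be named dumpe2fs_64 on QTS):

Code:

# list primary and backup superblock locations
dumpe2fs /dev/mapper/cachedev1 | grep -i superblock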

Re: TVS-863+ - File system check fails. How to check correctly via CLI?

Posted: Mon Feb 10, 2020 9:05 pm
by LaUs3r
storageman wrote: Mon Feb 10, 2020 8:15 pm Turn off encryption and try:
e2fsck_64 -fp -C 0 /dev/mapper/cachedev1

If that fails, try with a backup superblock:
e2fsck_64 -fp -C 0 -b 32768 /dev/mapper/cachedev1
AFAIK, encryption cannot be turned off. I would need to remove the current volume and create a new one, which is not possible as I cannot back up the data :-( (simply too much).

Re: TVS-863+ - File system check fails. How to check correctly via CLI?

Posted: Mon Feb 10, 2020 9:32 pm
by storageman
Sorry, yes, correct.
Did you try
"e2fsck_64 -f -v -C 0 /dev/mapper/ce_cachedev1"?

Re: TVS-863+ - File system check fails. How to check correctly via CLI?

Posted: Mon Feb 10, 2020 10:28 pm
by LaUs3r
Yes, but I guess I used the 32-bit version:

Code:

e2fsck -f -v -C 0 /dev/mapper/ce_cachedev1
What I got was a lot of these messages:

Code:

Inode 500432899 has INDEX_FL flag set on filesystem without htree support.
and

Code:

Inode 2687893620 block 8388800 conflicts with critical metadata, skipping block checks.
I cancelled the scan, as I read that the file system might otherwise be changed and data loss could be possible.
But I now know that I should use the -n option.
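As far as I understand, -n opens the file system read-only and answers "no" to every repair prompt, so it only reports problems without changing anything on disk:

Code:

# read-only check: reports problems, fixes nothing
e2fsck_64 -n -f -v /dev/mapper/ce_cachedev1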

Re: TVS-863+ - File system check fails. How to check correctly via CLI?

Posted: Fri Feb 14, 2020 5:33 pm
by LaUs3r
Thanks all for your replies and support.

Meanwhile I was able to verify that the HDDs themselves are OK by pulling them out and testing them with SeaTools.
I also contacted QNAP support and provided the logs. It seems that the backplane is faulty, which is why fsck doesn't run through. The NAS is still under warranty, so I will ship it to QNAP.
Cheers