TVS-863+ - File system check fails. How to check correctly via CLI?
Posted: Sun Feb 09, 2020 7:01 pm
Hi,
I'm currently struggling with a file system check that doesn't run through smoothly.
The starting point was some hiccup in the NAS that caused the RAID to be rebuilt, which was successful. As for the hiccup, I have no clue what the root cause was.
Nevertheless, after the RAID rebuild I could successfully mount the volume again, but I was asked to check the file system, as it was flagged as "not clean".
SMART shows all GREEN, and I tested the HDDs via the web UI with no errors.
So, the file system check starts, but at a certain point (~30%) it just stops with the error message "failed to check file system", and the status of the volume is set to "unmounted". At that point I cannot mount the volume anymore, as the "unlock" option is simply not shown when I click on the volume.
After a reboot the volume can be mounted again, but I still get the message that the file system is not clean and needs to be rechecked.
The next step was to perform the file system check via the CLI. Here I'm not sure whether I chose the right device to check. Maybe you could give me a hint if I'm on the right track. Currently the check is still running...
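In case it matters, the kernel log might hold clues about why the GUI check aborts; this is plain dmesg, nothing QNAP-specific, and the grep pattern below is just a guess at relevant keywords:
Code:
# look for storage-related kernel messages around the time the check aborted
dmesg | grep -iE 'md1|dm-|ext4|error' | tail -n 50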
Here's what I did:
step 1
Code:
/etc/init.d/services.sh stop && /etc/init.d/opentftp.sh stop && /etc/init.d/Qthttpd.sh stop
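Before unmounting, it may be worth checking that no file services are still running. BusyBox ps on QTS takes no options, and the process names below are just the ones I would expect (Samba/NFS); adjust as needed:
Code:
# anything still serving files should show up here
ps | grep -E 'smbd|nmbd|nfsd' | grep -v grep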
step 2
Code:
umount /dev/mapper/ce_cachedev1
I was expecting an error message that the device is busy, but didn't get one.
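To be sure the umount really went through, the mount table can be double-checked with plain Linux tools (nothing QNAP-specific; fuser comes from BusyBox on my box, so availability may vary across firmware versions):
Code:
# the volume should be gone from the mount table now
grep cachedev /proc/mounts || echo "not mounted anymore"
# if umount had failed with "device busy", this would list what still holds the volume open
fuser -m /dev/mapper/ce_cachedev1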
step 3
Code:
e2fsck -f -v -C 0 /dev/mapper/ce_cachedev1
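Since the check runs for hours, it may be safer to start it detached so a dropped SSH session doesn't kill it. This is a standard shell pattern, not QNAP-specific; I left out the progress bar (-C 0) in this variant because it doesn't render well in a log file:
Code:
# run the check in the background, surviving an SSH disconnect
nohup e2fsck -f -v /dev/mapper/ce_cachedev1 > /tmp/e2fsck.log 2>&1 &
# follow the output as it runs
tail -f /tmp/e2fsck.log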
result:
Code:
e2fsck 1.43.9 (8-Feb-2018)
Pass 1: Checking inodes, blocks, and sizes
Inode 500432899 has INDEX_FL flag set on filesystem without htree support.
....
...and lots of:
Code:
Inode 2687893620 block 8388800 conflicts with critical metadata, skipping block checks.
What does this mean? Was the check skipped? How can this be solved?
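On the side question of whether /dev/mapper/ce_cachedev1 is really the data volume: one way to trace the stack is via device-mapper and LVM (dmsetup and the LVM tools are present on my QTS build, but that may differ by firmware):
Code:
# list device-mapper nodes and what the cache device depends on
dmsetup ls
dmsetup deps ce_cachedev1
# LVM view: md1 should appear as the physical volume under the data volume group
pvs
lvs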
My device:
Code:
NAS Model: TVS-863+
Firmware: 4.4.1 Build 20191206
/dev/md1:
Version : 1.0
Creation Time : Fri Jul 21 20:47:37 2017
Raid Level : raid6
Array Size : 58538898432 (55827.05 GiB 59943.83 GB)
Used Dev Size : 9756483072 (9304.51 GiB 9990.64 GB)
Raid Devices : 8
Total Devices : 8
Persistence : Superblock is persistent
Update Time : Sun Feb 9 11:21:08 2020
State : clean
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : 1
UUID : f2934a01:ec4fa905:94ea6eed:27964224
Events : 103373
Number Major Minor RaidDevice State
0 8 3 0 active sync /dev/sda3
1 8 19 1 active sync /dev/sdb3
2 8 51 2 active sync /dev/sdd3
3 8 35 3 active sync /dev/sdc3
9 8 83 4 active sync /dev/sdf3
10 8 67 5 active sync /dev/sde3
8 8 115 6 active sync /dev/sdh3
7 8 99 7 active sync /dev/sdg3
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md1 : active raid6 sda3[0] sdg3[7] sdh3[8] sde3[10] sdf3[9] sdc3[3] sdd3[2] sdb3[1]
58538898432 blocks super 1.0 level 6, 512k chunk, algorithm 2 [8/8] [UUUUUUUU]
md322 : active raid1 sdg5[7](S) sdh5[6](S) sde5[5](S) sdf5[4](S) sdc5[3](S) sdd5[2](S) sdb5[1] sda5[0]
7235136 blocks super 1.0 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
md256 : active raid1 sdg2[7](S) sdh2[6](S) sde2[5](S) sdf2[4](S) sdc2[3](S) sdd2[2](S) sdb2[1] sda2[0]
530112 blocks super 1.0 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
md13 : active raid1 sda4[0] sdb4[1] sdd4[2] sdc4[3] sdf4[4] sde4[5] sdh4[32] sdg4[33]
458880 blocks super 1.0 [32/8] [UUUUUUUU________________________]
bitmap: 1/1 pages [4KB], 65536KB chunk
md9 : active raid1 sda1[0] sdg1[33] sdh1[32] sde1[5] sdf1[4] sdc1[3] sdd1[2] sdb1[1]
530048 blocks super 1.0 [32/8] [UUUUUUUU________________________]
bitmap: 1/1 pages [4KB], 65536KB chunk
I'm a little confused as well by the "State : clean".
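As far as I understand, mdadm's "State : clean" refers to the RAID array itself, not to the file system on top of it; the file system keeps its own state flag. If that's right, it should be readable directly with tune2fs (part of e2fsprogs, the same package as e2fsck; run it against the unmounted volume):
Code:
# "clean" here means the ext filesystem was unmounted/checked cleanly
tune2fs -l /dev/mapper/ce_cachedev1 | grep -i 'filesystem state'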