[resolved] clear "Disk Access History I/O" ERROR

Questions about SNMP, Power, System, Logs, disk, & RAID.
karls0
Starting out
Posts: 10
Joined: Thu Mar 14, 2019 5:50 pm

[resolved] clear "Disk Access History I/O" ERROR

Post by karls0 »

Hi @ all,
I have a TVS-463 with 2 x 4TB (ST4000NM0035) as RAID 1 and 2 x 2TB as another RAID 1. After a full backup of everything I removed one of the 2 TB disks and replaced it with a 4 TB disk (ST4000NM000A). Then I started to migrate the RAID 1 to RAID 5. The process was estimated to take 20 hours. After 6-8 hours the process stopped and the NAS was not reachable any more (no web management, no ssh, no ping). Now the two disks from the original RAID 1 show "ERROR ...one or more unrecoverable read/write errors have been detected". SMART says everything is good. I started the Disk Health "Complete test" and after 5 hours it showed "no errors found", but the "Disk Access History" still shows "ERROR".

Since I don't believe that both disks died at the same moment, and the test also said OK, I wanted to build a completely new RAID 5 with all four 4TB disks I have. I deleted the pool and tried to create a new one, but disks 1 and 2 (the ones with the error message) cannot be selected.

Can anyone tell me how to remove the error status from these two disks?

TIA, Karl

PS: I found a similar post with no usable answer: viewtopic.php?f=25&t=162347&p=795972&hi ... or#p795972
Since I don't have a Windows machine, I cannot try the suggested approach.
Last edited by karls0 on Wed Sep 08, 2021 9:44 pm, edited 1 time in total.
TVS-463 running QTS 5.1.2.2533 with two Seagate ST4000NM000A and two ST4000NM0035 as Raid5
dolbyman
Guru
Posts: 35275
Joined: Sat Feb 12, 2011 2:11 am
Location: Vancouver BC , Canada

Re: clear "Disk Access History I/O" ERROR

Post by dolbyman »

Clear all disks... format them however you like... with whatever machine you have.
karls0
Starting out
Posts: 10
Joined: Thu Mar 14, 2019 5:50 pm

Re: clear "Disk Access History I/O" ERROR

Post by karls0 »

Thx dolbyman for the fast reply, but no luck for me.
- I took the drives out of the NAS
- deleted the partition table
- created a new partition table with one primary partition over the full disk
- formatted it with a Linux file system (roughly the commands sketched below)
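
For reference, this is roughly what I ran on my Linux machine (just a sketch from memory, not the exact session; /dev/sdX stands for the disk, so double-check the device name with lsblk before wiping anything):

Code:

# make sure you have the right disk - wiping the wrong one destroys data
lsblk -o NAME,SIZE,MODEL,SERIAL

# new empty partition table, one primary partition over the full disk, ext4
parted /dev/sdX --script mklabel msdos
parted /dev/sdX --script mkpart primary ext4 0% 100%
mkfs.ext4 /dev/sdX1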

After inserting the disk back in my NAS - hurray, the LED for the disk was green - but only for 15 sec :--((

So I am back where I was. Did I do something wrong? Any other ideas?
TIA, Karl
TVS-463 running QTS 5.1.2.2533 with two Seagate ST4000NM000A and two ST4000NM0035 as Raid5
dolbyman
Guru
Posts: 35275
Joined: Sat Feb 12, 2011 2:11 am
Location: Vancouver BC , Canada

Re: clear "Disk Access History I/O" ERROR

Post by dolbyman »

Maybe open a ticket with QNAP ... hope it's not a hardware error on the QNAP itself.
karls0
Starting out
Posts: 10
Joined: Thu Mar 14, 2019 5:50 pm

Re: clear "Disk Access History I/O" ERROR

Post by karls0 »

Hi dolbyman,
I did so, but they were not very helpful.
I asked them three times for a way to reset the error flag on the disks. But they only told me to test my disks with SeaTools (which took me 50 hours each), and after that I still had no success. Then they offered me a repair for $360. After this experience I was close to ordering a new NAS from another vendor. Again I looked through some posts on the internet and found that there is a "bad block test" available through the GUI that also clears the error flag if no bad blocks are found.

This solved my problem, but I am annoyed by the way QNAP responds to its customers.
TVS-463 running QTS 5.1.2.2533 with two Seagate ST4000NM000A and two ST4000NM0035 as Raid5
cameo
Starting out
Posts: 29
Joined: Fri Aug 14, 2015 1:09 am

Re: [resolved] clear "Disk Access History I/O" ERROR

Post by cameo »

Hi, for what it's worth, here is my story. And let me put a warning up front: I did the steps below in good faith, knowing that I had plenty of backup and could, in case of failure, rebuild this NAS from a backup.

I had a similar problem as described above.

I hot-swapped a defective drive in a RAID 6 with a fresh one and the NAS started rebuilding right away. Somehow I managed to get the NAS to reboot during the rebuild. IMHO this should not be so easy to do.
After the reboot the drive was marked with "error" and I could not reset the disk access history.

Unwilling to wait for the bad block check (after all, it was a fresh drive), I looked around on the NAS via ssh and found the config file.

Model name: TS-1677X
Firmware version: 5.0.0.1808 Build 20211001

The file /mnt/HDA_ROOT/.conf contains what appears to be a list of all drives that were ever in the NAS, even some that were removed more than a year ago.
The content looked like this:

Code:

[/] # cat /mnt/HDA_ROOT/.conf
hw_addr = xx:xx:xx:xx:xx:xx
QNAP = TRUE
mirror = 0
hal_support = yes
sm_v2_support = yes
pd_dev_wwn_500xxxxxxxxxxEC0 = 0x1
pd_dev_wwn_500xxxxxxxxxxAEE = 0x2
pd_dev_wwn_500xxxxxxxxxx546 = 0x5
pd_dev_wwn_500xxxxxxxxxx803 = 0x6
pd_dev_wwn_500xxxxxxxxxx221 = 0x7
pd_dev_wwn_500xxxxxxxxxx6DA = 0x9
pd_dev_wwn_500xxxxxxxxxx67A = 0xa
pd_dev_wwn_500xxxxxxxxxx1C0 = 0x3
pd_dev_wwn_500xxxxxxxxxx9B4 = 0x4
pd_dev_wwn_500xxxxxxxxxxFCB = 0xd
pd_dev_wwn_500xxxxxxxxxxB88 = 0xe
pd_dev_wwn_500xxxxxxxxxxDF5 = 0xf
pd_dev_wwn_500xxxxxxxxxx41B = 0x10
nas_capability = 0x1
pd_dev_wwn_500xxxxxxxxxx881 = 0xc
pd_dev_wwn_500xxxxxxxxxx541 = 0x10
pd_err_wwn_500xxxxxxxxxx541 = 0x10

[SSD_SETTING]
ssd_warned_wwn_500xxxxxxxxxx541 = 0

Matching this is the file /mnt/HDA_ROOT/.config/raid.conf, which contains the current RAID configuration.
:idea: And if you ever wondered which slot each of your drives belongs in and in what order, this is the place to look.
Here is an excerpt from it:

Code:

[/] # cat /mnt/HDA_ROOT/.config/raid.conf
[Global]
...

[Remove]
0x00000010 = 0x4

[RAID_1]
...

[RAID_2]
uuid = xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
id = 2
partNo = 3
aggreMember = no
readonly = no
legacy = no
version2 = yes
overProvisioning = 0
deviceName = /dev/md2
raidLevel = 6
internal = 1
mdBitmap = 1
chunkSize = 512
readAhead = 2048
stripeCacheSize = 0
speedLimitMax = 0
speedLimitMin = 1000
data_0 = c, 500xxxxxxxxxx881
data_1 = 6, 500xxxxxxxxxx803
data_2 = 7, 500xxxxxxxxxx221
data_3 = 9, 500xxxxxxxxxx6DA
data_4 = a, 500xxxxxxxxxx67A
data_5 = 10, (REMOVED)
dataBitmap = 3F
scrubStatus = 1
eventSkipped = 0
eventCompleted = 0
degradedCnt = 0

[RAID_3]
...
[/] # 

The numbers starting with 500x are the WWN (the drive's unique identifier). You can find them in QTS under "Storage & Snapshots" > "Storage" > "Disks/VJBOD" when you hover the mouse pointer over a disk.
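
The same WWN can also be read from the shell; this is just a sketch and assumes smartctl is available on your firmware (QTS ships it, as far as I know):

Code:

[/] # smartctl -i /dev/sda | grep -i wwn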

Note the line in the file /mnt/HDA_ROOT/.conf starting with pd_err: it matches exactly the drive that is shown as erroneous and which I had originally inserted into slot 16 (0x10).

Code:

pd_err_wwn_500xxxxxxxxxx541 = 0x10

I removed the drive from the NAS, gave it a moment to recognize the removed drive, then removed the pd_err line from /mnt/HDA_ROOT/.conf.

Code:

[/mnt/HDA_ROOT] # vim .conf

After changing the .conf file, I put the drive back into slot 16, and after a moment the NAS found it and started rebuilding.
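
If you want to play it a bit safer than I did (a sketch, not the exact commands from my session), back the file up and check which line you are about to remove before editing:

Code:

[/mnt/HDA_ROOT] # cp .conf .conf.bak
[/mnt/HDA_ROOT] # grep pd_err .conf
pd_err_wwn_500xxxxxxxxxx541 = 0x10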


:!: Again, let me add a warning. I am no QNAP expert, but I am impatient sometimes. Before you mess with your setup, make sure you have a good backup. Don't blame me if it fails.
And if you don't have a fresh disk to insert, maybe a bad sector check is not the worst idea.

On the other hand, I will not charge you $360 for this little insight 8)
tjarb
Starting out
Posts: 28
Joined: Sat Feb 17, 2018 9:01 pm
Location: netherlands

Re: [resolved] clear "Disk Access History I/O" ERROR

Post by tjarb »

Oh man,

You saved my day!
I "hotswapped" a disk without detaching it first and it got marked as "hardware errors".
But there's no fkng way to recover it! Even by replacing with another fresh out the box disk, it sets the bay as "erroneous", not the drives :-0
Bad sector checks didn't solve the issue (flag was not cleared).

Thanks and have a nice day (mine is!).
edj_11
First post
Posts: 1
Joined: Sat Dec 04, 2021 9:26 pm

Re: [resolved] clear "Disk Access History I/O" ERROR

Post by edj_11 »

I had a similar problem with my HDD appearing as faulty when there was actually nothing wrong with it, and modifying /mnt/HDA_ROOT/.conf fixed the problem. Thanks man
Johnno72
Easy as a breeze
Posts: 378
Joined: Fri Jul 31, 2015 1:35 pm
Location: Australia

Re: [resolved] clear "Disk Access History I/O" ERROR

Post by Johnno72 »

So how do I delete the pd_err lines? Nothing is working and I'm pretty sure now I've completely FUBAR'd things.
OS: Win10 Professional v2004 OS Build 19041.388 x64
NAS: QNAP TS-EC2480U-RP 16G 24 Bay - Firmware: v4.4.3.1421 build 20200907. Updated from v4.4.3.1400 Build 20200817 Official
StoragePool / DataVol: Storage Pool 1 / DataVol1: Single 29.04TB - Thick Volume: 29TB
HDD's: Western Digital - Model: WDC WD4001FFSX-68JUN0 Red Pro NAS 3.5"
HDD Size: 4TB - HDD Firmware all HDD's: 81.00A81
RAID Configuration: RAID6 x 10, HotSpare x 1, ColdSpare x 1 - Network: 1GbE
UPS: CyberPower PR3000ELCDRT2U Professional Rackmount LCD 3000VA, 2250W 2U Line Interactive UPS
QNAP Hardware details required: viewtopic.php?f=5&t=68954
Remote Administration of: TVS-863+ 16G on UPS Cyberpower OLS1500E+RMcard205
kanady43
Starting out
Posts: 10
Joined: Sun Dec 12, 2010 8:07 pm
Location: Prague, Czech republic

Re: [resolved] clear "Disk Access History I/O" ERROR

Post by kanady43 »

@cameo: Thanks, you really saved my day. Had the same issue, and was at the point of throwing the brand new 10 TB Seagate through the closed window (it would not have survived the fall from the 9th floor).

@Johnno72: Use WinSCP, which you can download from https://winscp.net/ . You have to enable SSH in Control Panel - Network, and then create a connection in WinSCP: transfer protocol SCP, the internal TCP/IP address of your NAS, port 22, user name admin and the password you have for that account. The rest is described above.
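
If you are not on a Windows machine, plain ssh from a terminal does the same job; a minimal sketch (the address is only a placeholder, use your NAS IP and admin credentials):

Code:

ssh admin@192.168.1.50
vi /mnt/HDA_ROOT/.conf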
geoff.cole1954
First post
Posts: 1
Joined: Sun Feb 13, 2022 5:36 pm

Re: [resolved] clear "Disk Access History I/O" ERROR

Post by geoff.cole1954 »

I am hopeful this will make many users of NAS boxes happy, as there have been many conversations on the issue, both within the QNAP users forum and outside on the internet via Google searches, but nothing worked completely for me to fix all the issues.

But nobody has offered my final solution: doing a service on the NAS box itself to fix this intermittent issue entirely.

------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
ISSUE:
------
Over the last 18 months of a 4-year-old NAS box (TS-431P, 4 bays, 4 x 8TB Seagate IronWolf NAS disks) I had a similar problem, with the HDD in the drive 4 slot appearing as faulty, but there was actually nothing wrong with the drives according to Seagate SeaTools diagnostics (Windows) run on my Windows 10/11 system with the suspect drive connected.

I have replaced the hard drive 3 times; that would fix the issue for a while, but then drive 4 would be disconnected from the RAID 6 array again, after routine disk tests failed, and intermittently during a RAID rebuild of the 4 drives, always in the drive 4 bay, no matter which disk was used.

Note:
I even swapped the good drive 3 and drive 4 over, thinking the error would move to the drive 3 bay, but no, drive 4 always fails.

RESOLUTION SUMMARY:
------------------------------
1) Removing the duplicate drive entries in the /mnt/HDA_ROOT/.conf file fixed the failing tests on drive 4,
but it did not fix the issue of drive 4 disconnecting during a rebuild of the array of 4 disks, always around the 30+ hour mark.

2) FIXED Permanently:
--------------------------
RESULTS:
-----------
Near-new performance restored, even downloads of large files from the NAS to Windows 11, right up to spec.
Drive 4 latency reduced from 35 milliseconds to 4 milliseconds, with the other 3 drives always at ~9 milliseconds (both before and after the service) during a RAID rebuild of the 4 drives.

Time for a RAID rebuild of the 4 drives reduced from 35 hrs to 26 hrs due to higher throughput with no errors.

All QNAP NAS disk tests passed, and the rebuild of the array, run on all 4 disks at the same time, completed with no errors.

SOLUTION:
-------------
I did a service of the QNAP NAS box and cleaned everything, including the female/male connectors, i.e. on the 4 SATA disks and the daughterboard-to-motherboard SATA connector. In particular, all the male gold contacts were cleaned and lightly buffed with clean printer paper, and all connectors were mechanically exercised about 6 times (1 = one remove/insert cycle).

The daughterboard-to-motherboard SATA connector carries the following two groups, which are common to ALL 4 drives:
a) power rails (+3.3V, +5V, 12V) to all 4 disks

b) Serial ATA signals (250mV) multiplexed to all 4 drives, i.e. one set of signals for all 4 drives, not 4 sets of signals with one per drive.

These signals run at ~10Gbps (during a RAID rebuild of 4 drives), so any "resistance" will degrade the signals to each SATA HDD, especially drive 4.

Drive 4 is the furthest away from the motherboard and has the longest PCB traces.

CONCLUSION:
-----------------
Since there was nothing wrong with the drives, it had to be something electrical/mechanical on the daughterboard, as only drive 4 was affected and there were no errors on drives 1, 2 and 3.

Therefore doing a "service" (cleaning) would eliminate bad connections, which have been the cause of many a PC behaving erratically after a few years; I have seen the same in my industry with control systems in the field.

Refer to the attached screenshots of my QNAP NAS box.

Note:
-----
Take photos as you disassemble the box, and put the screws in groups, with each group marked as to where they go.

This makes reassembly much easier.

QNAP QUESTION:
----------------------
If I had a 5-year warranty and a drive disconnecting at about 3 years, but the SATA disks tested OK with the disk manufacturer's software:

Who would fix the issue and who would pay the cost?

I believe that since it is not a manufacturing defect, it would not be covered by the QNAP warranty, but the user cannot disassemble the unit while it is still in the warranty period.

Perhaps it is a long-term design/reliability issue, and a much better arrangement and quality of connectors would be desirable.

This issue would apply to any NAS box, irrespective of brand.

DETAILS:
--------

SYMPTOM:
--------
Intermittently, drive 4 would disconnect during a RAID rebuild of the array or when tests were being run.

On the embedded Linux server powering the NAS box, the /dev/sdd* device definitions would totally disappear, and only reappear when the drive was hot-swapped (disconnected/removed and then reinstalled/connected).


ACTIONS:
--------

Accessed the NAS box via PuTTY SSH using the NAS IP address, with username = admin and the default admin password, which is mac_address_1 in UPPERCASE.

Checking/modifying /mnt/HDA_ROOT/.conf fixed the problem for a short while, but the drive would still disconnect during the RAID array rebuild at the 30+ hour point, where all 4 drives were active, i.e. reads/writes.

I had 3 duplicate entries in the config file, which is incorrect, as there should only be one per connected device.

So I fixed that, but it only helped for a short while, and the issue reappeared in the next rebuild of the array around the 30+ hour mark.
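
To spot duplicates like that, something along these lines does the job (just a sketch; the first command lists every drive-to-slot entry, the second prints any slot value that appears more than once):

Code:

[~] # grep pd_dev_wwn /mnt/HDA_ROOT/.conf | sort
[~] # grep pd_dev_wwn /mnt/HDA_ROOT/.conf | awk -F= '{print $2}' | sort | uniq -d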

Next: the drive was intermittently disconnecting when all 4 drives were active during the RAID rebuild of the array, and there was nothing wrong with the drives themselves.

I even swapped the GOOD drives 3 and 4 over, but the fault stayed in the drive 4 bay.

So it must be either a mechanical or an electrical issue in the drive 4 bay causing the SATA HDD to drop out.

So I did a full service on the NAS box:
a) disassembled the NAS box into all its parts
b) cleaned off all dust and fluff using a brush and vacuum cleaner
c) cleaned using IPA isopropyl alcohol, 99.8% pure (Jaycar in Australia), but any electronics parts store should have this cleaning agent, which does not leave any residue
d) cleaned up all male/female connectors as in c) above, and lightly buffed the gold-plated male fingers from a dull gold colour with clean printing paper
e) mechanically exercised all connectors about 6 times
f) reassembled
g) tested and found no further issues.

Linux Commands:
---------------

Editor: use either vi or vim, both work the same way.


CONFIG FILES:
-------------
[~] # cat /mnt/HDA_ROOT/.conf
-----------------------
hal_support = yes
sm_v2_support = yes
pd_dev_wwn_5000C500AF87E5B5 = 0x1
pd_dev_wwn_5000C500A2DD462E = 0x2
pd_dev_wwn_5000C500D6C523B8 = 0x3
pd_dev_wwn_5000C500B5D764FC = 0x4
hw_addr = 9E:77:63:16:E9:82
nas_capability = 0x1
QNAP = TRUE
mirror = 0

[SSD_SETTING]
ssd_warned_wwn_5000C500B5D764FC = 0
ssd_warned_wwn_5000C500AFBBEC49 = 0
ssd_warned_wwn_5000C500D6C523B8 = 0
[~] #
[~] #

After that the disk was hot-swapped (removed/reinstalled in the drive 4 bay), which forces a reconnect and a RAID rebuild.
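
To watch the rebuild progress from the shell, the standard Linux md status is also available (QTS uses md RAID under the hood, so /proc/mdstat and mdadm should be present; /dev/md1 is the data array from raid.conf):

Code:

[~] # cat /proc/mdstat
[~] # mdadm --detail /dev/md1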

cat /mnt/HDA_ROOT/.config/raid.conf, and checked the data_0, data_1, data_2, data_3 entries.
---------------------------------------------------------------------------------------------
[~] # cat /mnt/HDA_ROOT/.config/raid.conf
[Global]
raidBitmap = 0x2
globalSpareBitmap = 0x0
pd_5000C500AF87E5B5_Raid_Bitmap = 0x2
pd_5000C500A2DD462E_Raid_Bitmap = 0x2
pd_5000C500D6C523B8_Raid_Bitmap = 0x2
pd_5000C500B5D764FC_Raid_Bitmap = 0x2

[Remove]

[RAID_1]
uuid = edadf029:c5fbcd40:0129d94d:3fb50bdb
id = 1
partNo = 3
aggreMember = no
readonly = no
legacy = no
version2 = yes
overProvisioning = 0
deviceName = /dev/md1
raidLevel = 6
internal = 1
mdBitmap = 0
chunkSize = 512
readAhead = 1024
stripeCacheSize = 1024
speedLimitMax = 0
speedLimitMin = 50000
data_0 = 1, 5000C500AF87E5B5
data_1 = 2, 5000C500A2DD462E
data_2 = 3, 5000C500D6C523B8
data_3 = 4, 5000C500B5D764FC
dataBitmap = F
scrubStatus = 1
eventSkipped = 0
eventCompleted = 1
degradedCnt = 0
[~] #

TESTS & COMMANDS:
-----------------
During a rebuild of the RAID, showing all 4 drives connected:

cmd = ls -l /dev/sd*

Results showing all drives connected:
--------------------------------------
brw------- 1 admin administ 8, 0 Jan 1 1970 /dev/sda
brw------- 1 admin administ 8, 1 Jan 1 1970 /dev/sda1
brw------- 1 admin administ 8, 2 Feb 18 22:01 /dev/sda2
brw------- 1 admin administ 8, 3 Jan 1 1970 /dev/sda3
brw------- 1 admin administ 8, 4 Feb 19 05:24 /dev/sda4
brw------- 1 admin administ 8, 5 Feb 18 22:01 /dev/sda5
brw------- 1 admin administ 8, 16 Jan 1 1970 /dev/sdb
brw------- 1 admin administ 8, 17 Jan 1 1970 /dev/sdb1
brw------- 1 admin administ 8, 18 Feb 18 22:01 /dev/sdb2
brw------- 1 admin administ 8, 19 Jan 1 1970 /dev/sdb3
brw------- 1 admin administ 8, 20 Feb 19 05:25 /dev/sdb4
brw------- 1 admin administ 8, 21 Feb 18 22:01 /dev/sdb5
brw------- 1 admin administ 8, 32 Jan 1 1970 /dev/sdc
brw------- 1 admin administ 8, 33 Jan 1 1970 /dev/sdc1
brw------- 1 admin administ 8, 34 Feb 18 22:01 /dev/sdc2
brw------- 1 admin administ 8, 35 Jan 1 1970 /dev/sdc3
brw------- 1 admin administ 8, 36 Feb 19 05:26 /dev/sdc4
brw------- 1 admin administ 8, 37 Feb 18 22:01 /dev/sdc5
brw------- 1 admin administ 8, 48 Jan 1 1970 /dev/sdd
brw------- 1 admin administ 8, 49 Jan 1 1970 /dev/sdd1
brw------- 1 admin administ 8, 50 Feb 18 22:01 /dev/sdd2
brw------- 1 admin administ 8, 51 Jan 1 1970 /dev/sdd3
brw------- 1 admin administ 8, 52 Feb 19 05:27 /dev/sdd4
brw------- 1 admin administ 8, 53 Feb 18 22:01 /dev/sdd5

Results now showing drive 4 disconnected (note: no /dev/sdd* devices):
------------------------------------------------------
brw------- 1 admin administ 8, 0 Jan 1 1970 /dev/sda
brw------- 1 admin administ 8, 1 Jan 1 1970 /dev/sda1
brw------- 1 admin administ 8, 2 Feb 18 22:01 /dev/sda2
brw------- 1 admin administ 8, 3 Jan 1 1970 /dev/sda3
brw------- 1 admin administ 8, 4 Feb 19 05:24 /dev/sda4
brw------- 1 admin administ 8, 5 Feb 18 22:01 /dev/sda5
brw------- 1 admin administ 8, 16 Jan 1 1970 /dev/sdb
brw------- 1 admin administ 8, 17 Jan 1 1970 /dev/sdb1
brw------- 1 admin administ 8, 18 Feb 18 22:01 /dev/sdb2
brw------- 1 admin administ 8, 19 Jan 1 1970 /dev/sdb3
brw------- 1 admin administ 8, 20 Feb 19 05:25 /dev/sdb4
brw------- 1 admin administ 8, 21 Feb 18 22:01 /dev/sdb5
brw------- 1 admin administ 8, 32 Jan 1 1970 /dev/sdc
brw------- 1 admin administ 8, 33 Jan 1 1970 /dev/sdc1
brw------- 1 admin administ 8, 34 Feb 18 22:01 /dev/sdc2
brw------- 1 admin administ 8, 35 Jan 1 1970 /dev/sdc3
brw------- 1 admin administ 8, 36 Feb 19 05:26 /dev/sdc4
brw------- 1 admin administ 8, 37 Feb 18 22:01 /dev/sdc5
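
A crude way to catch the exact moment the drive drops out during a long rebuild is to poll for the device node (a sketch only; the device name /dev/sdd and the 60-second interval are assumptions for drive 4 in a 4-bay unit):

Code:

# log a timestamped line whenever /dev/sdd disappears
while true; do
    if [ ! -b /dev/sdd ]; then
        echo "$(date): drive 4 (/dev/sdd) is gone" >> /tmp/sdd_watch.log
    fi
    sleep 60
done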

FDISK:
------
cmd = fdisk -l 2>/dev/null | head -n60

PARTED:
-------
cmd = parted -l 2>/dev/null | head -n 60

Config files:
-------------
cmd = cat /mnt/HDA_ROOT/.config/raid.conf

Drive 4 disconnected (no drive in the data_3 definition):
----------------------------------------------------
[~] # cat /mnt/HDA_ROOT/.config/raid.conf
[Global]
raidBitmap = 0x2
globalSpareBitmap = 0x0
pd_5000C500AF87E5B5_Raid_Bitmap = 0x2
pd_5000C500A2DD462E_Raid_Bitmap = 0x2
pd_5000C500D6C523B8_Raid_Bitmap = 0x2

[Remove]
0x00000004 = 0x2

[RAID_1]
uuid = edadf029:c5fbcd40:0129d94d:3fb50bdb
id = 1
partNo = 3
aggreMember = no
readonly = no
legacy = no
version2 = yes
overProvisioning = 0
deviceName = /dev/md1
raidLevel = 6
internal = 1
mdBitmap = 0
chunkSize = 512
readAhead = 1024
stripeCacheSize = 1024
speedLimitMax = 0
speedLimitMin = 50000
data_0 = 1, 5000C500AF87E5B5
data_1 = 2, 5000C500A2DD462E
data_2 = 3, 5000C500D6C523B8
data_3 = 4, (REMOVED)
dataBitmap = F
scrubStatus = 1
eventSkipped = 0
eventCompleted = 1
degradedCnt = 0
[~] #

Drive 4 connected (the drive is now back in the data_3 definition):
-----------------------------------------------------
[~] # cat /mnt/HDA_ROOT/.config/raid.conf
[Global]
raidBitmap = 0x2
globalSpareBitmap = 0x0
pd_5000C500AF87E5B5_Raid_Bitmap = 0x2
pd_5000C500A2DD462E_Raid_Bitmap = 0x2
pd_5000C500D6C523B8_Raid_Bitmap = 0x2
pd_5000C500B5D764FC_Raid_Bitmap = 0x2

[Remove]

[RAID_1]
uuid = edadf029:c5fbcd40:0129d94d:3fb50bdb
id = 1
partNo = 3
aggreMember = no
readonly = no
legacy = no
version2 = yes
overProvisioning = 0
deviceName = /dev/md1
raidLevel = 6
internal = 1
mdBitmap = 0
chunkSize = 512
readAhead = 1024
stripeCacheSize = 1024
speedLimitMax = 0
speedLimitMin = 50000
data_0 = 1, 5000C500AF87E5B5
data_1 = 2, 5000C500A2DD462E
data_2 = 3, 5000C500D6C523B8
data_3 = 4, 5000C500B5D764FC
dataBitmap = F
scrubStatus = 1
eventSkipped = 0
eventCompleted = 1
degradedCnt = 0
[~] #
You do not have the required permissions to view the files attached to this post.
louiscar
Easy as a breeze
Posts: 265
Joined: Mon Aug 10, 2015 4:32 am

Re: [resolved] clear "Disk Access History I/O" ERROR

Post by louiscar »

geoff.cole1954 wrote: Sat Feb 19, 2022 11:10 am I am hopeful this will make many users of NAS boxes happy, as there have been many conversations on the issue, both within the QNAP users forum and outside on the internet via Google searches, but nothing worked completely for me to fix all the issues.

But nobody has offered my final solution: doing a service on the NAS box itself to fix this intermittent issue entirely.

-
Thanks for this lengthy explanation although I have some trouble following some of it.

I ended up here because I had to shut down my NAS during a reboot which appeared stuck. Long story short, I had to clean the file system first, then 1 pending sector appeared on Disk 1 and my RAID 5 was in a degraded state. I also had the Disk Access History (I/O) Error, which at the time I didn't know was anything other than part of the same issue.

I did a bad block scan which came up with an extra 8 pending sectors.

I ended up having to take Disk 1 out and connect it to a PC, doing a reinitialise disk surface scan via HD Sentinel. 25 or so hours later the pendings had gone and there were no reallocations, so all seemed fine.
Connecting it back to the NAS now showed good SMART data, but the Disk Access History error was still there. I cleared this thanks to this thread and the RAID began to rebuild.

Not more than an hour later I got another failure: this time 1 pending sector appears on Disk 3 and Disk 1 goes back to the Disk Access History (I/O) error, but the SMART data is still fine.

So maybe I'm experiencing the same thing but it's all such a coincidence that I'm not sure I really believe this has suddenly become a problem for more than one disk.

I had ordered a replacement for Disk 1 (it hasn't arrived yet), but since I managed to clear the pendings I thought it would be fine. So it's a confusing situation right now.

Your notes on the commands and config are not that clear as to what you are saying should be done, however. But I am bothered about the I/O situation right now on Disk 1 and why it should suddenly be flagged at the exact time that Disk 1 shows a pending sector.
This sent my RAID to READ ONLY until I again cleared the I/O error on Disk 1 via .conf.

Could one be causing the other, e.g. could the I/O error have caused the pending sector on Disk 3?

I could believe at the beginning that Disk 1 might be dying, as it was 2015 when I bought it; however, I started with 2 disks and in 2018 added the other two. Disk 3 is from 2018 and I'm pretty sure they are all fine. It all began with a forced shutdown.
Model : TS-453 Pro
Firmware : 5.0.0.1828
4x WD RED 3TB - Raid 5
wilko61
Starting out
Posts: 16
Joined: Wed Oct 25, 2017 5:20 pm
Location: Bathurst, NSW, Australia

Re: [resolved] clear "Disk Access History I/O" ERROR

Post by wilko61 »

Hi,

I too have followed the process outlined by cameo and have been successful at clearing the error. My issue now seems to be some type of hardware failure occurring in the disk bay: even after swapping the disk with a known good disk from another disk bay (no previous errors), the disk is taken offline / disconnected.

Thank you for the advice.
an0malaus
First post
Posts: 1
Joined: Thu Jun 23, 2022 9:21 pm

Re: [resolved] clear "Disk Access History I/O" ERROR

Post by an0malaus »

Great stuff, @cameo.

Sorted out my situation with a previously unused TVS-822T and 6 x 10TB Seagate IronWolf Pro drives, still in their sealed bags, that I was just gifted by a friend who was no longer using them.
One of the new (5-year-old) drives perhaps didn't seat correctly on first installation in the 2nd bay, and came up with a red LED and the Error warning.

Moved to another slot and the problem followed the drive, so I was pleased the NAS wasn't likely to be faulty.

Found the Bad Block check and was 75% through with zero sign of problems on the drive before I found your post here.

Once I'd worked out I needed to specify the admin user on SSH and started entering the correct password, I saw the error line in /mnt/HDA_ROOT/.conf, ejected the drive, removed the line with vi, plugged the drive back in, and it was good to go once it spun up again. No more red LED or Access History I/O Error.

Huzzah!
itjmiller
First post
Posts: 1
Joined: Thu Jul 07, 2022 5:32 am

Re: [resolved] clear "Disk Access History I/O" ERROR

Post by itjmiller »

@cameo I created an account here just to say thank you! Clearing the 'pd_err_wwn' lines in the /mnt/HDA_ROOT/.conf file, then rebooting the NAS worked for me. The arrays rebuilt and worked fine after that. A slick move to make it easy is to just run this and reboot:

Code:

sed -i '/pd_err_wwn_/d' /mnt/HDA_ROOT/.conf