Uncertain. I run Linux, but I still know just enough to be dangerous. I'd rather not mess around too much with the linked method, since I'm not certain which binaries are which or what they're tied to. What I do know is that the likely problem is this old-as-heck e2fsprogs/resize2fs (1.41.4, from 2009).
Has anyone forced this update onto their box using the Entware-NG package? It defaults to /opt, which of course lives on the very NAS disks I'm trying to expand, so I guess I'll keep fiddling with things, maybe manually extract it to RAM or the like.
Thanks!
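In case it helps anyone trying the same "extract to RAM" idea, here is a rough sketch of it. The package name, paths, and the .ipk layout (a gzipped tar wrapping data.tar.gz) are assumptions; check the actual Entware-NG package for your architecture. The dummy-package setup at the top exists only so the sketch runs end to end.

```shell
# Sketch only: unpack a newer resize2fs under RAM-backed /tmp so it never
# touches the volume being resized. All names and paths below are made up.
set -e
cd "$(mktemp -d)"
DEST=/tmp/e2fs
mkdir -p "$DEST"

# --- stand-in for the downloaded .ipk, just so this sketch is self-contained ---
mkdir -p pkgroot/opt/sbin
printf '#!/bin/sh\necho resize2fs 1.43\n' > pkgroot/opt/sbin/resize2fs
chmod +x pkgroot/opt/sbin/resize2fs
tar -czf data.tar.gz -C pkgroot .
tar -czf e2fsprogs.ipk data.tar.gz              # assumption: .ipk ~= tar.gz of tarballs

# --- the actual extract-to-RAM steps ---
tar -xzf e2fsprogs.ipk -C "$DEST" data.tar.gz   # pull out the payload
tar -xzf "$DEST/data.tar.gz" -C "$DEST"         # unpack it into tmpfs
"$DEST/opt/sbin/resize2fs"                      # real run would be: ... -p /dev/md0
```

With the real package you'd fetch the .ipk from the Entware-NG repo first; the binary then runs entirely from tmpfs, so nothing on the array being resized is in use.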
Resize error with progress flag
Code:
[~] # resize2fs /dev/md0 -p
resize2fs 1.41.4 (27-Jan-2009)
Resizing the filesystem on /dev/md0 to 3905449536 (4k) blocks.
Begin pass 1 (max = 29808)
Extending the inode table XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
resize2fs: Memory allocation failed while trying to resize /dev/md0
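For scale, a quick back-of-the-envelope on why 1.41.4 can fall over at this size, assuming resize2fs keeps a full block bitmap for the target filesystem in memory (one bit per 4k block; this is a simplification, the real allocation pattern is messier):

```shell
# Rough block-bitmap estimate for the target size in the error above
BLOCKS=3905449536                         # (4k) blocks, from the resize2fs output
echo $(( BLOCKS / 8 / 1024 / 1024 ))      # bitmap size in MiB; prints 465
```

Nearly half a GiB for one structure, on a box with ~2 GB of RAM and ~500 MB of swap, is a plausible place for an old malloc-heavy resize2fs to fail.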
Code:
[15103.785156] md/raid1:md6: Disk failure on sdf2, disabling device.
[15103.785161] md/raid1:md6: Operation continuing on 2 devices.
[15103.820583] RAID1 conf printout:
[15103.820590] --- wd:2 rd:2
[15103.820598] disk 0, wo:0, o:1, dev:sda2
[15103.820604] disk 1, wo:0, o:1, dev:sdb2
[15103.820608] RAID1 conf printout:
[15103.820612] --- wd:2 rd:2
[15103.820618] disk 0, wo:0, o:1, dev:sda2
[15103.820623] disk 1, wo:0, o:1, dev:sdb2
[15103.820627] RAID1 conf printout:
[15103.820632] --- wd:2 rd:2
[15103.820637] disk 0, wo:0, o:1, dev:sda2
[15103.820643] disk 1, wo:0, o:1, dev:sdb2
[15105.803222] md: unbind<sdf2>
[15105.813055] md: export_rdev(sdf2)
[15116.993780] Adding 530136k swap on /dev/sdf2. Priority:-3 extents:1 across:530136k
[15117.033713] md/raid1:md6: Disk failure on sde2, disabling device.
[15117.033719] md/raid1:md6: Operation continuing on 2 devices.
[15117.066288] RAID1 conf printout:
[15117.066296] --- wd:2 rd:2
[15117.066303] disk 0, wo:0, o:1, dev:sda2
[15117.066309] disk 1, wo:0, o:1, dev:sdb2
[15117.066314] RAID1 conf printout:
[15117.066318] --- wd:2 rd:2
[15117.066323] disk 0, wo:0, o:1, dev:sda2
[15117.066329] disk 1, wo:0, o:1, dev:sdb2
[15119.050937] md: unbind<sde2>
[15119.060048] md: export_rdev(sde2)
[15130.142000] Adding 530136k swap on /dev/sde2. Priority:-4 extents:1 across:530136k
[15130.197893] md/raid1:md6: Disk failure on sdd2, disabling device.
[15130.197899] md/raid1:md6: Operation continuing on 2 devices.
[15130.241341] RAID1 conf printout:
[15130.241349] --- wd:2 rd:2
[15130.241356] disk 0, wo:0, o:1, dev:sda2
[15130.241362] disk 1, wo:0, o:1, dev:sdb2
[15132.217681] md: unbind<sdd2>
[15132.227069] md: export_rdev(sdd2)
[15143.304488] Adding 530136k swap on /dev/sdd2. Priority:-5 extents:1 across:530136k
[15143.362212] md/raid1:md6: Disk failure on sdc2, disabling device.
[15143.362217] md/raid1:md6: Operation continuing on 2 devices.
[15145.379698] md: unbind<sdc2>
[15145.389056] md: export_rdev(sdc2)
[15156.517320] Adding 530136k swap on /dev/sdc2. Priority:-6 extents:1 across:530136k
[15156.694317] md/raid1:md6: Disk failure on sdb2, disabling device.
[15156.694322] md/raid1:md6: Operation continuing on 1 devices.
[15156.738460] RAID1 conf printout:
[15156.738468] --- wd:1 rd:2
[15156.738475] disk 0, wo:0, o:1, dev:sda2
[15156.738482] disk 1, wo:1, o:0, dev:sdb2
[15156.742036] RAID1 conf printout:
[15156.742041] --- wd:1 rd:2
[15156.742047] disk 0, wo:0, o:1, dev:sda2
[15158.711584] md: unbind<sdb2>
[15158.724047] md: export_rdev(sdb2)
[15169.801071] Adding 530136k swap on /dev/sdb2. Priority:-7 extents:1 across:530136k
[17329.818557] md: export_rdev(sdb2)
[17329.877427] md: bind<sdb2>
[17329.899730] RAID1 conf printout:
[17329.899739] --- wd:1 rd:2
[17329.899748] disk 0, wo:0, o:1, dev:sda2
[17329.899756] disk 1, wo:1, o:1, dev:sdb2
[17329.899923] md: recovery of RAID array md6
[17329.905069] md: minimum _guaranteed_ speed: 5000 KB/sec/disk.
[17329.910620] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[17329.915474] md: using 128k window, over a total of 530128k.
[17403.541355] md: md6: recovery done.
[17403.604374] RAID1 conf printout:
[17403.604383] --- wd:2 rd:2
[17403.604393] disk 0, wo:0, o:1, dev:sda2
[17403.604401] disk 1, wo:0, o:1, dev:sdb2
[19433.257877] md: bind<sdf2>
[19433.302017] RAID1 conf printout:
[19433.302026] --- wd:2 rd:2
[19433.302036] disk 0, wo:0, o:1, dev:sda2
[19433.302044] disk 1, wo:0, o:1, dev:sdb2
[19433.357891] md: bind<sde2>
[19433.409551] RAID1 conf printout:
[19433.409561] --- wd:2 rd:2
[19433.409571] disk 0, wo:0, o:1, dev:sda2
[19433.409580] disk 1, wo:0, o:1, dev:sdb2
[19433.409587] RAID1 conf printout:
[19433.409593] --- wd:2 rd:2
[19433.409601] disk 0, wo:0, o:1, dev:sda2
[19433.409608] disk 1, wo:0, o:1, dev:sdb2
[19433.468968] md: bind<sdd2>
[19433.513105] RAID1 conf printout:
[19433.513115] --- wd:2 rd:2
[19433.513124] disk 0, wo:0, o:1, dev:sda2
[19433.513133] disk 1, wo:0, o:1, dev:sdb2
[19433.513139] RAID1 conf printout:
[19433.513145] --- wd:2 rd:2
[19433.513153] disk 0, wo:0, o:1, dev:sda2
[19433.513161] disk 1, wo:0, o:1, dev:sdb2
[19433.513166] RAID1 conf printout:
[19433.513172] --- wd:2 rd:2
[19433.513178] disk 0, wo:0, o:1, dev:sda2
[19433.513186] disk 1, wo:0, o:1, dev:sdb2
[19433.568949] md: bind<sdc2>
[19433.615029] RAID1 conf printout:
[19433.615038] --- wd:2 rd:2
[19433.615047] disk 0, wo:0, o:1, dev:sda2
[19433.615056] disk 1, wo:0, o:1, dev:sdb2
[19433.615062] RAID1 conf printout:
[19433.615068] --- wd:2 rd:2
[19433.615076] disk 0, wo:0, o:1, dev:sda2
[19433.615084] disk 1, wo:0, o:1, dev:sdb2
[19433.615090] RAID1 conf printout:
[19433.615096] --- wd:2 rd:2
[19433.615103] disk 0, wo:0, o:1, dev:sda2
[19433.615110] disk 1, wo:0, o:1, dev:sdb2
[19433.615115] RAID1 conf printout:
[19433.615121] --- wd:2 rd:2
[19433.615127] disk 0, wo:0, o:1, dev:sda2
[19433.615135] disk 1, wo:0, o:1, dev:sdb2
[19433.724441] EXT4-fs (md0): Mount option "noacl" will be removed by 3.5
[19433.724447] Contact linux-ext4@vger.kernel.org if you think we should keep it.
[19433.724451]
[19434.554679] ext4_init_reserve_inode_table0: md0, 89377
[19434.559413] ext4_init_reserve_inode_table2: md0, 89377, 0, 0, 4096
[19434.564461] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: usrjquota=aquota.user,jqfmt=vfsv0,user_xattr,data=ordered,nodelalloc,noacl
[19504.008858] nfsd: last server has exited, flushing export cache
[19536.584137] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[19536.589160] NFSD: starting 90-second grace period
[19735.520043] EXT4-fs (md0): error count: 23
[19735.524891] EXT4-fs (md0): initial error at 1395808651: ext4_iget:3869: inode 172257841
[19735.529839] EXT4-fs (md0): last error at 1451980822: ext4_iget:3862: inode 1848113
Code:
[~] # free
total used free shared buffers
Mem: 2070104 370564 1699540 0 8520
Swap: 530124 74352 455772
Total: 2600228 444916 2155312
[~] # mkswap /dev/sdi1
Setting up swapspace version 1, size = 16008589 kB
no label, UUID=5e45e9c9-8677-483b-a66e-a67fe8c9d0db
[~] # swapon /dev/sdi1
[~] # free
total used free shared buffers
Mem: 2070104 374284 1695820 0 8524
Swap: 16163512 74316 16089196
Total: 18233616 448600 17785016
[/] # cat /proc/swaps
Filename Type Size Used Priority
/dev/md6 partition 530124 43872 -1
/dev/sdi partition 33554428 0 -2
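Quick sanity check that the extra swap gives enough headroom. The ~476,739 kB figure is a rough estimate of resize2fs's block-bitmap allocation (one bit per 4k block at the target size of 3905449536 blocks); the other numbers are copied from the free output after swapon:

```shell
MEM_FREE=1695820                          # kB free RAM, from 'free' after swapon
SWAP_FREE=16089196                        # kB free swap
NEED=$(( 3905449536 / 8 / 1024 ))         # ~476739 kB bitmap estimate
echo $(( (MEM_FREE + SWAP_FREE) / NEED )) # rough headroom multiple; prints 37
```

So virtual memory is no longer the bottleneck on paper; whether the old resize2fs can actually use swap for its allocations is a separate question.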