We have two TS-470 units, one on firmware 4.2.0 Build 20151118 and the other on 4.2.0 Build 20160101.
Both are configured with RAID 5.
On both, the UI hangs at 50% when clicking on "Storage Space".
Last Log:
==> hal_lib.log <==
disk_manage.cgi:Tue Jan 26 10:47:52 2016
Get_Temp_Threshold() called, CPU_WARNING_TEMP=80
disk_manage.cgi:Tue Jan 26 10:47:52 2016
Get_Temp_Threshold() called, CPU_ERROR_TEMP=85
disk_manage.cgi:Tue Jan 26 10:47:52 2016
Get_Temp_Threshold() called, SYS_WARNING_TEMP=60
disk_manage.cgi:Tue Jan 26 10:47:52 2016
Get_Temp_Threshold() called, SYS_ERROR_TEMP=70
==> storage_lib.log <==
Perform cmd "/bin/echo 200000 > /sys/block/md1/md/sync_speed_max 2>>/dev/null" OK, cmd_rsp=0, reason code:0.
Perform cmd "/bin/echo 200000 > /sys/block/md1/md/sync_speed_max 2>>/dev/null" OK, cmd_rsp=0, reason code:0.
md_is_busy: md/sync_completed = "none".
md_get_status: /dev/md1 : status=0, progress=100.000000.
RAID_Is_Bitmap_Enabled:RAID(1):Bitmap already is NOT enabled.
md_is_busy: md/sync_completed = "none".
md_get_status: /dev/md1 : status=0, progress=100.000000.
RAID_Is_Bitmap_Enabled:RAID(1):Bitmap already is NOT enabled.
Perform cmd "/bin/echo 200000 > /sys/block/md1/md/sync_speed_max 2>>/dev/null" OK, cmd_rsp=0, reason code:0.
Perform cmd "/bin/echo 200000 > /sys/block/md1/md/sync_speed_max 2>>/dev/null" OK, cmd_rsp=0, reason code:0.
==> storage_lib.log <==
md_is_busy: md/sync_completed = "none".
md_get_status: /dev/md1 : status=0, progress=100.000000.
Blk_Dev_Get_Mount_Point: device "/dev/mapper/ce_cachedev1" found, and mount point is "/share/CE_CACHEDEV1_DATA".
Perform cmd "/bin/df -k /share/CE_CACHEDEV1_DATA 2>>/dev/null | /usr/bin/tail -n1 | /bin/awk -F ' ' '{print $(NF-3)}'" OK, cmd_rsp=0, reason code:0.
Perform cmd "/bin/df -k /share/CE_CACHEDEV1_DATA 2>>/dev/null | /usr/bin/tail -n1 | /bin/awk -F ' ' '{print $(NF-2)}'" OK, cmd_rsp=0, reason code:0.
Perform cmd "/bin/echo 200000 > /sys/block/md1/md/sync_speed_max 2>>/dev/null" OK, cmd_rsp=0, reason code:0.
==> storage_lib.log <==
md_is_busy: md/sync_completed = "none".
LV_Get_Pool_Id: [LV(256)] Fail to get Pool ID.
Perform cmd "/bin/echo 200000 > /sys/block/md1/md/sync_speed_max 2>>/dev/null" OK, cmd_rsp=0, reason code:0.
Blk_Dev_Get_Mount_Point: device "/dev/mapper/ce_cachedev1" found, and mount point is "/share/CE_CACHEDEV1_DATA".
==> storage_lib.log <==
Perform cmd "/bin/mount | grep "/mnt/HDA_ROOT" &>/dev/null" OK, cmd_rsp=0, reason code:0.
==> storage_lib.log <==
md_get_status: /dev/md1 : status=0, progress=100.000000.
Blk_Dev_Get_Mount_Point: device "/dev/mapper/ce_cachedev1" found, and mount point is "/share/CE_CACHEDEV1_DATA".
Perform cmd "/bin/df -k /share/CE_CACHEDEV1_DATA 2>>/dev/null | /usr/bin/tail -n1 | /bin/awk -F ' ' '{print $(NF-3)}'" OK, cmd_rsp=0, reason code:0.
Perform cmd "/bin/df -k /share/CE_CACHEDEV1_DATA 2>>/dev/null | /usr/bin/tail -n1 | /bin/awk -F ' ' '{print $(NF-2)}'" OK, cmd_rsp=0, reason code:0.
Blk_Dev_Get_Mount_Point: device "/dev/mapper/ce_cachedev1" found, and mount point is "/share/CE_CACHEDEV1_DATA".
md_get_status: /dev/md1 : status=0, progress=100.000000.
md_get_status: /dev/md1 : status=0, progress=100.000000.
Blk_Dev_Get_Mount_Point: device "/dev/mapper/ce_cachedev1" found, and mount point is "/share/CE_CACHEDEV1_DATA".
Perform cmd "/bin/df -k /share/CE_CACHEDEV1_DATA 2>>/dev/null | /usr/bin/tail -n1 | /bin/awk -F ' ' '{print $(NF-3)}'" OK, cmd_rsp=0, reason code:0.
Perform cmd "/bin/df -k /share/CE_CACHEDEV1_DATA 2>>/dev/null | /usr/bin/tail -n1 | /bin/awk -F ' ' '{print $(NF-2)}'" OK, cmd_rsp=0, reason code:0.
md_get_status: /dev/md1 : status=0, progress=100.000000.
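For what it's worth, the `df -k … | awk` pipeline that keeps appearing in the log is just pulling two columns out of the last `df` line for the data volume: `$(NF-3)` is the used-KB column and `$(NF-2)` the available-KB column. A minimal sketch of that extraction on a sample `df -k` data row (the numbers here are made up, only the field positions match the log):

```shell
# One data row as "df -k /share/CE_CACHEDEV1_DATA | tail -n1" would emit it:
# Filesystem  1K-blocks  Used  Available  Use%  Mounted-on
line='/dev/mapper/ce_cachedev1 5706433412 123456789 5582976623 3% /share/CE_CACHEDEV1_DATA'

# $(NF-3) -> third-from-last field = Used (KB)
echo "$line" | awk -F ' ' '{print $(NF-3)}'

# $(NF-2) -> second-from-last field = Available (KB)
echo "$line" | awk -F ' ' '{print $(NF-2)}'
```

So the cgi is only reading volume usage here; the log itself shows every one of these commands returning `cmd_rsp=0`, i.e. the hang does not come from these calls failing.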
I can reset Talon, but for Malon I need a working fix.
Best regards