Hi,
I’ve been running Debian and Ubuntu virtual machines using VirtualBox 6.1 on a MacBook Pro and decided today to try installing and running them on QNAP’s Virtualization Station (VS) on a TVS-672XT.
It all seemed to go OK. But after I logged into the VM (using my browser) and went to start Firefox, everything froze and I couldn’t open any applications. While it was in this frozen state I received a power failure warning and my NAS reported it was (re?)booting – but I was still logged into the NAS over HTTP, I didn’t lose my connection, and it didn’t actually power off. The VS software also froze and I tried to force quit it. All the other QNAP NAS apps worked fine, but VS wasn’t responsive, so I restarted the NAS (see screenshot attached).
I wondered whether I had allocated too many resources to the VM, so I tried again allocating fewer resources. But the same thing happened. I had to rebuild/check the integrity of my storage pools and now the RAID group is synchronising (20+ hours to go), so I wanted to find out what I might have done wrong/what to change before I try again. I've tried finding info online but didn't find anything similar.
Any advice would be appreciated. Thanks.
***********************************
Below is how I set things up:
1) Created a virtual LAN on NAS
2) Exported my Debian VM using the format option “Open Virtualization Format 1.0” (when I tried v2.0, the file type wasn’t recognised by VS).
3) Saved the export into a folder on the QNAP NAS which is on an encrypted storage pool Thick volume which is just for shared data storage (the other storage pool I have is just for the local system).
4) Started up VS and imported the .ova file with the following settings:
OS: Linux
Version: Debian 9.1 (as it was the highest version listed, but my version is actually Debian 10.10)
Boot Firmware: Legacy BIOS
Cores: 3 (2nd time I tried 2) – out of 4 total
Mem: 7GB (2nd time I tried 4) – out of 8 total
ticked box “enable memory sharing”
HDD location: on the data storage pool (mentioned above) same folder as the .ova
HDD storage: 64GB (the VM was around 8GB when uncompressed and running on VirtualBox)
CPU: Intel Core i7 (Westmere)
Network Adapter:
-Virtual Switch
-MAC Address xxx
-Intel Gigabit Ethernet
Hard Disk: Source Path: deb2-disk001.vmdk
Controller: IDE
Iron Wolf drives all report being healthy, but I did note it says in green they are 37 degrees C / 98.6 degrees F → is this normal?
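For reference, the OVF 1.0 export in step 2 can also be done from the VirtualBox command line instead of the GUI. This is just a sketch – the VM name "deb2" below is a guess inferred from the disk file name deb2-disk001.vmdk, so substitute your own VM's name:

```shell
# Export a VirtualBox VM as an OVF 1.0 .ova archive.
# "deb2" is a placeholder VM name, inferred from the disk name deb2-disk001.vmdk.
VBoxManage export "deb2" --output deb2.ova --ovf10

# An .ova is a tar archive; list its contents to confirm the .ovf
# descriptor and .vmdk disk are both inside before importing into VS.
tar -tf deb2.ova
```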
***********************************
Not sure what caused this but issue is now fixed - see below.
Virtualization Station – Freezes then causes power failure(?) [FIXED]
- sleepy_panda
- Starting out
- Posts: 41
- Joined: Tue Oct 27, 2020 4:51 am
Last edited by sleepy_panda on Tue Aug 03, 2021 7:53 pm, edited 1 time in total.
TVS-672XT (8Gb)
Firmware: Version 5.0.0.1850
Seagate Iron Wolf drives (raid 5)
Samsung 970 EVO Plus NVMe™ M.2 SSD (raid 1) [for system + vms]
- dolbyman
- Guru
- Posts: 35273
- Joined: Sat Feb 12, 2011 2:11 am
- Location: Vancouver BC , Canada
Re: Virtualization Station – Freezes then causes power failure(?)
try CPU passthrough, not emulation
You might have found a bug; if so, report it to QNAP
- sleepy_panda
- Starting out
- Posts: 41
- Joined: Tue Oct 27, 2020 4:51 am
Re: Virtualization Station – Freezes then causes power failure(?)
Aha, OK. Will do. In the meantime I guess I'll try creating a fresh VM from an .iso. Thanks for the feedback.
TVS-672XT (8Gb)
Firmware: Version 5.0.0.1850
Seagate Iron Wolf drives (raid 5)
Samsung 970 EVO Plus NVMe™ M.2 SSD (raid 1) [for system + vms]
- sleepy_panda
- Starting out
- Posts: 41
- Joined: Tue Oct 27, 2020 4:51 am
Re: Virtualization Station – Freezes then causes power failure(?)
VMs are now working, though I'm not 100% sure what caused them to work. Below are the things I did, in case they are of use to others:
NOTE: VMs were a Debian 10 LTS 64-bit .ova import and an Ubuntu 20.04 LTS 64-bit created in Virtualization Station [Version 3.5.57 (2021-04-29)]
Changed boot options to UEFI
Disabled Firefox extensions (uBlock Origin, NoScript)
QNAP help desk suggested:
It may also be worth doing an in-depth file system check.
How to connect to QNAP via ssh
https://www.qnap.com/en/how-to/knowledg ... as-by-ssh/
If you can SSH to the NAS command line, please check whether the mounted volume is cachedev1 (/dev/mapper/cachedev1 mounted on /share/CACHEDEV1_DATA).
Usually the volume name for cachedev1 is DataVol1. Execute the df command and provide the output.
# df
If the mounted volume is cachedev1, try the commands below to stop services and unmount cachedev1.
# /etc/init.d/services.sh stop
# /etc/init.d/opentftp.sh stop
# /etc/init.d/Qthttpd.sh stop
Try to unmount /dev/mapper/cachedev1.
# umount /dev/mapper/cachedev1
If you still cannot unmount it, check which processes are using cachedev1, and provide the output (#1).
# lsof | grep /CACHEDEV1
Kill the processes that use CACHEDEV1 (lsof prints whole lines, so pick out the PID column with awk first), and provide the output.
# kill -9 $(lsof | grep CACHEDEV1 | awk '{print $2}' | sort -u)
Check again if there are processes using CACHEDEV1, and provide output (#2).
# lsof | grep /CACHEDEV1
Unmount volume cachedev1. Provide the output if the unmount command does not work.
# umount /dev/mapper/cachedev1
Verify that cachedev1 is NOT mounted.
# df
Run the file system check.
# e2fsck_64 -fp -C 0 /dev/mapper/cachedev1
After the file system check, reboot.
# reboot
I did all of the above in one go (I had problems unmounting cachedev1), but after a reboot both VMs started working.
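A note on the kill step in the procedure above: lsof prints full lines (command name, PID, user, and so on), while kill needs just the PID, which is the second column. A minimal sketch of pulling unique PIDs out of lsof-style output with awk – the sample lines below are made up for illustration, not real output from my NAS:

```shell
# Sample lsof-style output; in practice this would come from: lsof | grep CACHEDEV1
sample='smbd     1234  admin  cwd  DIR   /share/CACHEDEV1_DATA
python   5678  admin  txt  REG   /share/CACHEDEV1_DATA/app.py
smbd     1234  admin  txt  REG   /share/CACHEDEV1_DATA/file.txt'

# Column 2 is the PID; sort -u removes duplicates before anything is killed.
pids=$(printf '%s\n' "$sample" | awk '{print $2}' | sort -u)
echo "$pids"
```

On real output this would be fed to kill as `kill -9 $pids` once you have confirmed the listed processes are safe to stop.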
TVS-672XT (8Gb)
Firmware: Version 5.0.0.1850
Seagate Iron Wolf drives (raid 5)
Samsung 970 EVO Plus NVMe™ M.2 SSD (raid 1) [for system + vms]