I've had the same problem occur on two different QNAP devices now. We have several daily incremental S3 upload jobs configured and working through the web admin. One of these jobs eventually hits a 10-20 GB file that it can't finish uploading to S3 before the next daily job kicks off. When the next scheduled run starts, the same job launches again even though the previous one hasn't finished. This compounds quickly, because the overlapping uploads eat up all of the available upload bandwidth. I then log into the NAS over SSH and look for all of the running S3 jobs with "ps -A | grep /usr/local/amazons3". I usually find 20-30 of them and have to kill them all manually.
Is there any way to stop this from happening on these jobs? I was a little worried about manually editing the cron jobs that run these uploads, but I'd be fine doing that if someone knows a relatively simple way to do it.
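For what it's worth, here's the kind of wrapper I was imagining putting in front of the cron entry: it checks for a still-running upload process (using the same ps/grep check described above) and skips the scheduled run if one is found. The process path comes from my ps output; the function name and messages are just illustrative, and I don't know whether QNAP's firmware would overwrite this on upgrade.

```shell
#!/bin/sh
# Sketch of a cron wrapper to avoid overlapping S3 upload runs.
# Assumption: the upload binary path matches what "ps -A" shows,
# i.e. /usr/local/amazons3/...

job_running() {
    # Return 0 (true) if any process matching the pattern is alive.
    # "grep -v grep" excludes our own grep from the match.
    ps -A | grep -v grep | grep -q "$1"
}

if job_running "/usr/local/amazons3"; then
    echo "Previous S3 upload still running; skipping this scheduled run."
    exit 0
fi

# ...otherwise, invoke the actual upload command here,
# exactly as it appears in the existing cron entry...
```

The cron line would then call this wrapper instead of the upload binary directly, so a long-running 10-20 GB upload just causes the next run to be skipped instead of stacking up.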
Thanks for your help,