S3 Plus backup job fails on 2nd run due to wrong region

doozr
New here
Posts: 4
Joined: Sun Aug 21, 2016 5:13 pm

S3 Plus backup job fails on 2nd run due to wrong region

Post by doozr »

I have a TS-251 with firmware version 4.2.2 and I am attempting to back up to Amazon S3 with the S3 Plus application.

I can successfully create backup jobs and they complete the first time, but subsequent runs fail immediately with one of two errors:

- "the folder name is not available. Please select antoher one" (typo is present in the log)
- "the previously created bucket is missing in the cloud storage. Please check if you have deleted the data accidentally and contact the service provider for detailed information"
The problem appears to be that the software is assuming that all buckets exist in the us-east-1 region. I'm not sure why it works the first time, but on the second run of any backup job the following is logged to the debug log:

Code: Select all

2016-08-21 09:57:22,185 PID:11161 awss3connector.py:509 cc DEBUG test_container
2016-08-21 09:57:22,185 PID:11161 awss3connector.py:236 cc DEBUG establish connection with is_secure=True
2016-08-21 09:57:22,186 PID:11161 awss3connector.py:210 cc DEBUG Connect S3...
2016-08-21 09:57:22,192 PID:11161 awss3connector.py:411 cc DEBUG try to get bucket sheridandrews.backup
2016-08-21 09:57:22,193 PID:11161 awss3connector.py:413 cc DEBUG Finish get bucket
2016-08-21 09:57:22,623 PID:11161 awss3connector.py:424 cc ERROR get_location
Traceback (most recent call last):
  File "./package/src/shared/bin/connector/awss3connector.py", line 422, in __get_connection_by_bucket_region
  File "./package/src/shared/python-lib/boto/s3/bucket.py", line 1146, in get_location
S3ResponseError: S3ResponseError: 400 Bad Request
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AuthorizationHeaderMalformed</Code><Message>The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'eu-west-1'</Message><Region>eu-west-1</Region><RequestId>1D1F5E1EF3FC9887</RequestId><HostId>ibRa2wHVUfHREHTlk/3txC0L/bF0DFNjzZW5+eMpu7cd99iEwxbauZA/yzgZ6ZwnLKuXZB+yKXY=</HostId></Error>
2016-08-21 10:02:58,256 PID:6498 awss3connector.py:236 cc DEBUG establish connection with is_secure=True
2016-08-21 10:02:58,257 PID:6498 awss3connector.py:210 cc DEBUG Connect S3...
2016-08-21 10:02:58,260 PID:6498 awss3connector.py:528 cc DEBUG Bucket exists already
2016-08-21 10:02:58,960 PID:6498 awss3connector.py:590 cc ERROR get exception ConnectorFolderExistError: (ConnectorFolderExistError(...), -2012, 'the folder name is not available. Please select antoher one')
Traceback (most recent call last):
  File "./package/src/shared/bin/connector/awss3connector.py", line 542, in test_container
ConnectorFolderExistError: (ConnectorFolderExistError(...), -2012, 'the folder name is not available. Please select antoher one')
2016-08-21 10:02:58,962 PID:6498 engine.py:559 cc ERROR exist a container, but job was never ran
This means that incremental or repeating backup jobs are impossible at the moment: as soon as they run a second time, they fail and never run again. Setting up a new job complains that the bucket is not empty and refuses to inherit the previous job's bucket.

Please advise how to make the backup work for repeating and incremental jobs.
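For anyone hitting this: the 400 response itself names the correct region in its <Region> element, so you can at least confirm which region the connector should have signed for. A minimal sketch in Python (the parsing helper is mine, not part of S3 Plus):

```python
import xml.etree.ElementTree as ET

# The 400 body from the debug log above, truncated to the relevant fields.
ERROR_BODY = """<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AuthorizationHeaderMalformed</Code><Message>The authorization \
header is malformed; the region 'us-east-1' is wrong; expecting \
'eu-west-1'</Message><Region>eu-west-1</Region></Error>"""

def expected_region(body):
    """Return the region S3 says it expected, or None for other errors."""
    root = ET.fromstring(body)
    if root.findtext("Code") == "AuthorizationHeaderMalformed":
        return root.findtext("Region")
    return None

print(expected_region(ERROR_BODY))  # eu-west-1
```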
czlars
New here
Posts: 2
Joined: Wed Jan 19, 2011 3:06 pm

Re: S3 Plus backup job fails on 2nd run due to wrong region

Post by czlars »

I confirm the same problem. FW 4.2.2 (2016/12/16), TS-853A.

Is there any known solution to make S3 incremental backup work?

I find this behaviour somewhat stupid, as it means a backup job cannot be moved between NAS boxes, or even created again from scratch, because that would require deleting the current S3 data and doing a full resync again. Quite unpleasant when I already have 2+ TB of data synced to S3.
sidhant
New here
Posts: 4
Joined: Sat Nov 26, 2016 7:31 am

Re: S3 Plus backup job fails on 2nd run due to wrong region

Post by sidhant »

Exact same problem. I just backed up 380GB, and on running the second time, it is throwing the incorrect region error. What the hell QNAP?
sidhant
New here
Posts: 4
Joined: Sat Nov 26, 2016 7:31 am

Re: S3 Plus backup job fails on 2nd run due to wrong region

Post by sidhant »

OK, so I just spent the last few hours debugging this and made it WORK.

BUG: the "statistics" SQLite file is not handled appropriately. The core issue is that any change to the job deletes its statistics, thus losing the history/state of the existing job.


Steps to recreate:
1. I created an S3 backup and it worked successfully.
2. It ran automatically after a few hours (2nd run) -- successfully.
3. I made a small change to the backup job by adding more folders to back up. MISTAKE.
4. BUG: Entries from the statistics SQLite3 database (/share/CACHEDEV1_DATA/.qpkg/CloudBackupSync/data/0/statistics) were DELETED.
5. When I try to run the job again, is_job_executed is FALSE because there are no entries with job_id=1.

This is crazy because I should be allowed to change the job settings as needed without requiring a new job.

How I figured it out:

The log ( tail -f /share/CACHEDEV1_DATA/.qpkg/CloudBackupSync/log/0/1/cloudconnector-debug.log) said the following on the failing job:

Code: Select all

[2017-01-17 20:33:00,712][JOB:1][PID:2531][TNAME:MainThread][INFO ][cc][engine.py:main_function:179] : Photos and Video to S3-IA, Go!
[2017-01-17 20:33:01,582][JOB:1][PID:2531][TNAME:MainThread][DEBUG ][cc][engine.py:process:260] : test container exist, is_job_executed False
[2017-01-17 20:33:01,583][JOB:1][PID:2531][TNAME:MainThread][DEBUG ][cc][awss3connector.py:test_container:509] : test_container
[2017-01-17 20:33:01,584][JOB:1][PID:2531][TNAME:MainThread][DEBUG ][cc][awss3connector.py:get_connection:233] : establish connection with is_secure=True
[2017-01-17 20:33:01,585][JOB:1][PID:2531][TNAME:MainThread][DEBUG ][cc][awss3connector.py:__get_connection_host:207] : Connect S3...
[2017-01-17 20:33:01,589][JOB:1][PID:2531][TNAME:MainThread][DEBUG ][cc][awss3connector.py:__get_connection_by_bucket_region:410] : try to get bucket sidhant-nasbackup
[2017-01-17 20:33:01,589][JOB:1][PID:2531][TNAME:MainThread][DEBUG ][cc][awss3connector.py:__get_connection_by_bucket_region:412] : Finish get bucket
[2017-01-17 20:33:01,680][JOB:1][PID:2531][TNAME:MainThread][ERROR ][cc][awss3connector.py:__get_connection_by_bucket_region:423] : get_location
Traceback (most recent call last):
File "./package/src/shared/python-lib/connector/awss3connector.py", line 421, in __get_connection_by_bucket_region
File "./package/src/shared/python-lib/boto/s3/bucket.py", line 1146, in get_location
S3ResponseError: S3ResponseError: 400 Bad Request

<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AuthorizationHeaderMalformed</Code><Message>The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'us-west-2'</Message><Region>us-west-2</Region><RequestId>1954995DE6DAFB55</RequestId><HostId>eRYdR6fhp71QPLJPaCHiHLdyZmPWXYR64a5o9THeK0yBQAUl6bJCDoTHpI7ur+zlXYLjjAkCAdw=</HostId></Error>
[2017-01-17 20:33:01,682][JOB:1][PID:2531][TNAME:MainThread][DEBUG ][cc][awss3connector.py:get_connection:233] : establish connection with is_secure=True
[2017-01-17 20:33:01,683][JOB:1][PID:2531][TNAME:MainThread][DEBUG ][cc][awss3connector.py:__get_connection_host:207] : Connect S3...
[2017-01-17 20:33:01,686][JOB:1][PID:2531][TNAME:MainThread][DEBUG ][cc][awss3connector.py:test_container:528] : Bucket exists already
[2017-01-17 20:33:01,920][JOB:1][PID:2531][TNAME:MainThread][ERROR ][cc][awss3connector.py:test_container:592] : get exception BackupDstFolderNotAvailable: (BackupDstFolderNotAvailable(...), -4005, 'the folder name is not available. Please select antoher one')
Traceback (most recent call last):
File "./package/src/shared/python-lib/connector/awss3connector.py", line 542, in test_container
BackupDstFolderNotAvailable: (BackupDstFolderNotAvailable(...), -4005, 'the folder name is not available. Please select antoher one')
[2017-01-17 20:33:01,921][JOB:1][PID:2531][TNAME:MainThread][ERROR ][cc][engine.py:process:282] : exist a container, but job was never ran
[2017-01-17 20:33:04,371][JOB:1][PID:2531][TNAME:MainThread][DEBUG ][qnap.qlog][qlog.py:emit:142] : log_tool: Appending a log to database...
Done. Return value is 0

[2017-01-17 20:33:04,651][JOB:1][PID:2531][TNAME:MainThread][ERROR ][cc][logutils.py:do_fail_log:293] : BackupDstFolderNotAvailable: the folder name is not available. Please select antoher one
[2017-01-17 20:33:04,663][JOB:1][PID:2531][TNAME:MainThread][INFO ][cc][engine.py::274] : end backup job:1 
Step 1:
I first fixed the region issue. Edit /etc/config/cloudconnector/CloudBackupSync/cloudconnector.%system_shared%.conf and look for the key "region_name". In my case it was blank; this is again because the code re-initializes the entire job when you make a small edit. Setting it to 'us-west-2' (or whatever your region is) fixes this.
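For illustration, here is roughly what the fix amounts to. I'm assuming the conf file is plain INI like most QNAP configs, and the section name "S3Plus" below is made up; adapt it to whatever you actually see in the file:

```python
import configparser
import io

# Made-up INI fragment standing in for the real conf file; the only claim
# from the post is that "region_name" ends up blank after a job edit.
broken = """[S3Plus]
region_name =
bucket = sidhant-nasbackup
"""

cfg = configparser.ConfigParser()
cfg.read_string(broken)
if not cfg["S3Plus"]["region_name"].strip():
    cfg["S3Plus"]["region_name"] = "us-west-2"  # your bucket's real region

out = io.StringIO()
cfg.write(out)
print(out.getvalue())
```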


Step 2
Now the process failed again, with a different error in the logs: "Bucket exists already".

Step 3
I looked into why it would throw that error. It turns out it does this on the *first run* of a job. How does it know it's the first run? The variable is_job_executed was set to False.

Step 4
How is the variable is_job_executed set?
It is set in engine.py in the /share/CACHEDEV1_DATA/.qpkg/CloudBackupSync/bin directory. There, it calls backupstatistics.py, which runs SQL something like:

Code: Select all

 "SELECT COUNT(*) FROM backup_job WHERE job_id=XYZ and (number_of_files_to_process > 0 or size_transferred > 0 or state='Success' or state='Stopped')"
If the count is > 0, it sets is_job_executed to TRUE, otherwise FALSE.
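You can reproduce the check with a throwaway in-memory database. This is my own cut-down sketch, not QNAP's code; the real statistics DB has more columns than the ones the query touches:

```python
import sqlite3

# Cut-down stand-in for the statistics DB: only the columns the
# is_job_executed query actually reads.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE backup_job (
    job_id INTEGER,
    state TEXT,
    number_of_files_to_process INTEGER,
    size_transferred INTEGER)""")

def is_job_executed(conn, job_id):
    """Mirror of the check described above: any 'successful-looking' row?"""
    (count,) = conn.execute(
        "SELECT COUNT(*) FROM backup_job WHERE job_id=? AND "
        "(number_of_files_to_process > 0 OR size_transferred > 0 "
        "OR state='Success' OR state='Stopped')", (job_id,)).fetchone()
    return count > 0

print(is_job_executed(conn, 1))  # False: no rows for job_id=1 yet
```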

But where is the DB file? It's here: /share/CACHEDEV1_DATA/.qpkg/CloudBackupSync/data/0/statistics

When I looked at the DB in a SQL browser, there were NO ENTRIES pertaining to job_id=1. I added a dummy entry:

Code: Select all

INSERT INTO backup_job VALUES(1,1,'Success',NULL,'2017-01-17 20:11:00','2017-01-17 20:12:00',0,0,0,0,0,0,0,0,0,NULL,0);
I saved it, replaced the statistics DB file with the new one containing that entry, and voila!

It WORKED.
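If you want to dry-run the INSERT before touching the real file, you can exercise it against a throwaway 17-column table first. The column names below are placeholders I made up; check PRAGMA table_info(backup_job) on the real DB for the actual schema, and always work on a copy of the statistics file:

```python
import os
import sqlite3
import tempfile

# Throwaway DB with 17 placeholder columns, matching the 17 values in the
# dummy INSERT above. This only proves the row shape, not the real schema.
path = os.path.join(tempfile.mkdtemp(), "statistics")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE backup_job (%s)"
             % ", ".join("c%d" % i for i in range(17)))
conn.execute("INSERT INTO backup_job VALUES"
             "(1,1,'Success',NULL,'2017-01-17 20:11:00',"
             "'2017-01-17 20:12:00',0,0,0,0,0,0,0,0,0,NULL,0)")
conn.commit()

row = conn.execute("SELECT * FROM backup_job").fetchone()
print(len(row))  # 17 values, so the row fits the assumed column count
```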
czlars
New here
Posts: 2
Joined: Wed Jan 19, 2011 3:06 pm

Re: S3 Plus backup job fails on 2nd run due to wrong region

Post by czlars »

Absolutely fantastic. Works great!

Thank you, Sidhant.