AWS Glacier Deep Archive workaround?

caterwall
New here
Posts: 3
Joined: Mon Jan 08, 2018 2:27 am

AWS Glacier Deep Archive workaround?

Post by caterwall »

While waiting for the S3 Glacier connector to be updated to support GDA, does anyone have any good workarounds? The only one I can think of is using a lifecycle rule to transition objects to GDA after a day.
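For anyone who wants to try that route, here's a rough sketch of such a lifecycle rule in Python/boto3 (the bucket name is made up; adjust to taste):

import boto3

s3 = boto3.client("s3")

# Example bucket name -- substitute your own.
BUCKET = "my-nas-backup"

# One rule: transition every object to Glacier Deep Archive
# one day after it is created.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "to-deep-archive-after-1-day",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to the whole bucket
                "Transitions": [{"Days": 1, "StorageClass": "DEEP_ARCHIVE"}],
            }
        ]
    },
)

The same rule can also be clicked together in the S3 console under the bucket's Management tab. Note the upload still lands in Standard first and gets billed there until the transition runs.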
bolandunited
New here
Posts: 9
Joined: Sun Dec 02, 2012 10:29 pm

Re: AWS Glacier Deep Archive workaround?

Post by bolandunited »

I have exactly the same question. It is also pretty confusing that S3 Glacier Deep Archive doesn't seem to be part of S3 Glacier, but rather part of "normal" S3. Or am I mistaken?

Is there a workaround to use HBS with S3 Glacier Deep Archive at this stage?
Ramias
New here
Posts: 9
Joined: Wed Oct 26, 2016 8:33 am

Re: AWS Glacier Deep Archive workaround?

Post by Ramias »

Glacier Deep Archive shares the Glacier name with the original Glacier service, and that is it.

GDA is a separate storage class under S3, accessed with ordinary S3 API calls. The original Glacier uses vaults and archives; GDA stores objects like regular S3, just in a different, much lower-cost tier (for storage; retrieval is a different issue).
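To make the "just S3 API calls" point concrete, here's roughly what an upload to GDA looks like in boto3 (bucket and key names are invented for the example):

import boto3

s3 = boto3.client("s3")

# The same PutObject call as any S3 upload; only StorageClass differs.
with open("photos-2019.tar", "rb") as f:
    s3.put_object(
        Bucket="my-nas-backup",
        Key="archives/photos-2019.tar",
        Body=f,
        StorageClass="DEEP_ARCHIVE",
    )

There is no vault, no archive ID, no separate Glacier endpoint; the object just sits in the bucket with a different storage class.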

The QNAP S3 Plus application lets you choose Standard and Reduced Redundancy. The Hybrid Backup Sync application lets you choose Standard, Reduced Redundancy, and Infrequent Access.

I suppose I could just have S3 lifecycle objects to Deep Archive, but it would be nice if one of these applications (or should I just start using Hybrid Backup Sync now?) supported Deep Archive natively and directly.
itsmarcos
Easy as a breeze
Posts: 310
Joined: Thu Sep 29, 2011 5:34 am

Re: AWS Glacier Deep Archive workaround?

Post by itsmarcos »

The latest version of HBS 3 Hybrid Backup Sync (3.0.191202) supports it:

https://www.qnap.com/en/app_releasenote ... bridBackup

Haven't tried it yet.

Primary

QNAP TVS-951N [latest QTS 5.0.x]
- disk 1: WDC Red WD80EFZX
- disk 2: WDC Red WD80EFZX
- disk 6: Samsung SSD Evo 500GB, SSD Cache
- disk 7: Samsung SSD Evo 500GB, HybridMount Cache
- External disk: WDC Red WD60EFRX
Dead one
QNAP TS-253B [4.4.x] - now dead


Remote backup
QNAP TS-219 P+ [latest 4.3.x]
- disk 1: HGST Deskstar 7K3000 HDS723030ALA640 3TB
- disk 2: WDC Red WD40EFRX
bprager
Starting out
Posts: 12
Joined: Sat Mar 19, 2011 1:13 am

Re: AWS Glacier Deep Archive workaround?

Post by bprager »

I don't see the option. How do I configure that?
[Attachment: Screen Shot 2019-12-07 at 12.13.06.png]
itsmarcos
Easy as a breeze
Posts: 310
Joined: Thu Sep 29, 2011 5:34 am

Re: AWS Glacier Deep Archive workaround?

Post by itsmarcos »

Haven't tried it yet (probably will during the holiday season). Amazon Glacier Deep Archive is S3-compatible, so my guess is that it should be under the generic Amazon S3 storage section.

q12345
Starting out
Posts: 29
Joined: Sat Jan 14, 2017 2:20 pm

Re: AWS Glacier Deep Archive workaround?

Post by q12345 »

The 12/06/2019 release notes state that it supports GDA, but I don't see anything in the GUI that allows for actual GDA setup. Time to fumble around some more; I'm only seeing "Amazon Glacier" as an option.
q12345
Starting out
Posts: 29
Joined: Sat Jan 14, 2017 2:20 pm

Re: AWS Glacier Deep Archive workaround?

Post by q12345 »

Duplicate post, but we have two threads going regarding GDA. It appears workarounds aren't needed anymore - there is support for GDA.

q12345 wrote: Sat Dec 21, 2019 8:20 am All right fellas, it seems like this is getting solved and sorted out. It seems Amazon has put access to both Glacier and Deep Archive under S3 now. It looks like you can create GDA folders from the AWS console, or perhaps with the HBS GUI itself. See the image below.

https://i.imgur.com/GwrbIY5.png
viewtopic.php?f=24&t=148012&p=737855&hi ... ve#p737855
alskie30
New here
Posts: 2
Joined: Mon Dec 23, 2019 4:08 pm

Re: AWS Glacier Deep Archive workaround?

Post by alskie30 »

I can't see this setting... see attached photo.
q12345
Starting out
Posts: 29
Joined: Sat Jan 14, 2017 2:20 pm

Re: AWS Glacier Deep Archive workaround?

Post by q12345 »

Alskie30 -
To keep it brief:
I was wondering why our settings would be different... but after more thought, you're actually on the right track. Your picture shows you on the "create storage space" step. Choose AWS Global and paste in your access key/secret key; this gives HBS access to your S3 buckets. After the storage space is linked, THEN you go to create a backup, and during the backup-creation steps you'll choose the storage class (presumably Deep Archive, given the thread we're on).


If you or anyone else is after more details, here they are.

1. Different hardware/software? Are you on the latest HBS 3 update?
Here's my setup -
TS-231+
QNAP firmware rev: 4.4.1.1146
HBS 3 rev: v3.0.191202

2. Are you following a different process? Here was mine. Hint - you access S3 Deep Archive by setting up an S3 storage space (unlike Glacier, which was previously not under S3). I think you can access all storage classes under S3 now.

- FIRST I added a new storage space for Amazon S3. Previously I was using S3 Glacier. I connected HBS to S3 with keys (access key / secret key).
- I then made a bucket from the Amazon AWS console, created a few folders, and set those new folders to Deep Archive from the AWS console (this is not required, just what I did; perhaps at least making a bucket first is a good idea, though you could possibly complete the whole setup using the QNAP GUI instead of the "backend" AWS console).
- After the storage space is connected, continue to HBS 3 / Backup & Restore / Create new backup job.
- Choose local folders.
- Choose the destination storage space (S3 / AWS Global).
- Choose a bucket from your S3 storage space, and then choose the storage class (Deep Archive).
- Continue with your preferred backup settings/schedules. (A quick way to verify the result afterwards is sketched below.)
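If you want to double-check what HBS actually wrote, a sketch like this (boto3; bucket and prefix are example names) prints each object's storage class:

import boto3

s3 = boto3.client("s3")

# Walk the backup prefix and print each object's storage class.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="my-nas-backup", Prefix="hbs3/"):
    for obj in page.get("Contents", []):
        print(obj["Key"], obj.get("StorageClass", "STANDARD"))

Anything that prints STANDARD instead of DEEP_ARCHIVE didn't get the storage class you picked in the wizard.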
alskie30
New here
Posts: 2
Joined: Mon Dec 23, 2019 4:08 pm

Re: AWS Glacier Deep Archive workaround?

Post by alskie30 »

Thanks for the great help, sir!
I got it now! :D Scheduled the backup today; let's see...
03045
New here
Posts: 3
Joined: Mon Nov 23, 2015 12:44 am

Re: AWS Glacier Deep Archive workaround?

Post by 03045 »

q12345 wrote: Mon Dec 23, 2019 9:37 pm Alskie30 -
- FIRST I added a new storage space for Amazon S3. Previously I was using S3 Glacier. I connected HBS to S3 with keys (access key / secret key).
- I then made a bucket from the Amazon AWS console, created a few folders, and set those new folders to Deep Archive from the AWS console (this is not required, just what I did; perhaps at least making a bucket first is a good idea, though you could possibly complete the whole setup using the QNAP GUI instead of the "backend" AWS console).
- After the storage space is connected, continue to HBS 3 / Backup & Restore / Create new backup job.
- Choose local folders.
- Choose the destination storage space (S3 / AWS Global).
- Choose a bucket from your S3 storage space, and then choose the storage class (Deep Archive).
- Continue with your preferred backup settings/schedules.
Hi q12345,

I have been experimenting, and also used the steps you outlined.

This does result in a successful backup to my S3 bucket. However, I am observing two or three undesirable results, and I wonder if you can check your backups to see if the same has occurred for you?

Within HBS3, I use the wizard to create a new Backup Job:

1) Even though I specify the storage class as "Glacier Deep Archive", when the backup completes (for the first time), I check the AWS S3 console and see that all the folders and files HBS3 created are storage class "Standard", not "Glacier Deep Archive". I have to manually change the storage class of that backup to Glacier Deep Archive in the AWS console.
1a) When I run the same job a second time, sadly, HBS changes all the files it touches back to Standard. :(

2) When I create a new HBS3 backup job, create a new bucket and folder structure within the wizard, and specify a region, the resulting S3 bucket has Public Access enabled and is NOT created in the region I specified. Obviously, the workaround for this is to create the S3 bucket manually from the AWS S3 console first and just point at that bucket when configuring the HBS3 job.

Based on my observations, it would appear that HBS3 is not actually passing along either the storage class or the region that I specify in the wizard.
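For what it's worth, that manual fix can be scripted rather than clicked through the console. A rough boto3 sketch (bucket and prefix names are invented) that re-copies Standard objects into Deep Archive:

import boto3

s3 = boto3.client("s3")
BUCKET = "my-nas-backup"  # example name
PREFIX = "hbs3/"          # example backup prefix

# Copy each Standard object onto itself with the Deep Archive storage
# class -- the API equivalent of changing the class in the console.
# Note: CopyObject only handles objects up to 5 GB; larger objects
# need a multipart copy instead.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        if obj.get("StorageClass", "STANDARD") == "STANDARD":
            s3.copy_object(
                Bucket=BUCKET,
                Key=obj["Key"],
                CopySource={"Bucket": BUCKET, "Key": obj["Key"]},
                StorageClass="DEEP_ARCHIVE",
            )

Of course, if HBS flips everything back to Standard on the next run, you'd have to re-run this after every job, which is hardly a fix.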

Curious to hear anyone else's experience?

Thanks! 8)
q12345
Starting out
Posts: 29
Joined: Sat Jan 14, 2017 2:20 pm

Re: AWS Glacier Deep Archive workaround?

Post by q12345 »

03045 - the only issue I had (I've already stopped using AWS) is that before the files go to Deep Archive, I think they are briefly billed for passing through regular S3. There are also some index files stuck in S3, which I guess just track what's going in and out of Deep Archive, but those were pretty small and cheap.

I don't have any files left on AWS, but I can confirm that I successfully got 1+ TB of files from the QNAP into Deep Archive, billed at the cheap rate. The bummer is that the initial backup is pricey due to passing through S3, but then the "static" storage should be cheaper. Sorry I'm not much help. It took some experimenting on my part, for sure, to get comfortable with the plethora of settings and bucket pricing. I'm guessing a little, but I'm fairly sure the backed-up files immediately showed the right storage class when checking with the AWS web console.

Checking/deleting files was a pain too... I had to use third-party software (FastGlacier) since I didn't want to learn the syntax to manage it from the command line.
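For anyone who'd rather skip the third-party tools, the basic operations are short in boto3 too (bucket and key names are made up):

import boto3

s3 = boto3.client("s3")
BUCKET = "my-nas-backup"  # example name

# Start a Bulk-tier restore (the cheapest); Deep Archive Bulk
# restores can take up to about 48 hours to complete.
s3.restore_object(
    Bucket=BUCKET,
    Key="hbs3/photos-2019.tar",
    RestoreRequest={"Days": 7, "GlacierJobParameters": {"Tier": "Bulk"}},
)

# Deleting an archived object needs no restore at all.
s3.delete_object(Bucket=BUCKET, Key="hbs3/photos-2019.tar")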
I reverted back to IDrive, since with a student email you can get a good rate, and even full price is not too horrible. IDrive was temporarily deprecated by a QNAP update, so I went searching for an alternative; after a failed and expensive AWS experiment, and IDrive releasing an update that resolved the issues, I'm back with them.

Again - this is all in hindsight, and I'm afraid I can't offer much better help.
streep
New here
Posts: 7
Joined: Wed Jan 06, 2010 3:33 am

Re: AWS Glacier Deep Archive workaround?

Post by streep »

03045 wrote: Sun Apr 26, 2020 4:20 am 1) Even though I specify the storage class as "Glacier Deep Archive", when the backup completes (for the first time), I check the AWS S3 console and see that all the folders and files HBS3 created are storage class "Standard", not "Glacier Deep Archive". I have to manually change the storage class of that backup to Glacier Deep Archive in the AWS console.
1a) When I run the same job a second time, sadly, HBS changes all the files it touches back to Standard. :(
Late reply, but I'm seeing exactly the same... did anybody ever find a solution for this? Or are people still seeing the same problem, and would this make it impossible to back up to a Deep Archive bucket in AWS S3?
Revb0b
First post
Posts: 1
Joined: Sat Jan 16, 2016 6:14 am

Re: AWS Glacier Deep Archive workaround?

Post by Revb0b »

I came here today to look for an answer as I'm still seeing my files get stuck in Standard.