You are working with a customer who has 10 TB of archival data that they want to migrate to Amazon Glacier.
The customer has a 1-Mbps connection to the Internet. Which service or feature provides the fastest method
of getting the data into Amazon Glacier?

A.
Amazon Glacier multipart upload

B.
AWS Storage Gateway

C.
VM Import/Export

D.
AWS Import/Export





JM

Yes, multipart upload works for archives greater than 100 MB, but you would still be limited by the 1-Mbps link.

IMHO, AWS Import/Export is the better option. Therefore the answer is D.

“To upload existing data to Amazon Glacier, you might consider using the AWS Import/Export service. AWS Import/Export accelerates moving large amounts of data into and out of AWS using portable storage devices for transport. AWS transfers your data directly onto and off of storage devices using Amazon’s high-speed internal network, bypassing the Internet. ”

Source: http://docs.aws.amazon.com/amazonglacier/latest/dev/uploading-an-archive.html

Manish Pathak

1 TB -> 1024 x 1024 MB = 1,048,576 MB.
At 1 MBps that means it will take about 291 hours (12 days) to upload.
12 days is greater than 7 days.

So use Import/Export. Answer is D.

Venkat Rangamani

Manish, 1024×1024 MB = 1 GB. You need to multiply that again by another 1024 to get the number of MB in a TB, so I think that will change your analysis.

Steve

No, Manish is right with “1 TB -> 1024 X1024 MB”.
However the 12 days needs to be
– multiplied by 10 as the question says 10TB not 1TB
– multiplied by 8 as the question says speed is 1Mbps not 1MBps. Bits not bytes.
So 960 days I think.

BM

10 TB = 10 * 1024 * 1024 = 10,485,760 MB; * 8 => 83,886,080 Mb; / 1 Mbps => 83,886,080 s -> 1,398,101 min -> 23,301 hours -> 970+ days
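The arithmetic in the comment above can be checked with a short script; this is just a sketch using the same binary TB-to-MB conversion and an assumed fully sustained 1-Mbps link:

```python
# Rough transfer-time estimate for pushing data through a slow link,
# using binary units (1 TB = 1024 * 1024 MB), as in the comment above.

def transfer_days(size_tb: float, link_mbps: float) -> float:
    """Days needed to move size_tb terabytes over a link_mbps link."""
    megabytes = size_tb * 1024 * 1024   # TB -> MB
    megabits = megabytes * 8            # MB (bytes) -> Mb (bits)
    seconds = megabits / link_mbps      # assumes full, sustained utilization
    return seconds / 86400              # seconds per day

print(round(transfer_days(10, 1)))  # 10 TB at 1 Mbps -> 971, i.e. the 970+ days above
```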

seenagape

I choose A

TechMinded

Answer is D. Read the question carefully: "the fastest method". If you use multipart upload, your upload speed is still capped at 1 Mbps. AWS Import/Export is the fastest way.

Tony

Option A.

I thought option D at first, but I found this:

"You can only perform an Amazon Glacier import from devices of 4 TB in size or smaller."

http://docs.aws.amazon.com/es_es/AWSImportExport/latest/DG/createGlacierimportjobs.html

Joanna LOL

If you choose A, you are still limited by the 1-Mbps Internet connection; 10 TB would need roughly 970 days based on the calculation above.
I think we can consider the option below, since the question asks for the fastest method of transporting the data, and there is no requirement on how long the objects must stay in the S3 bucket before they are transitioned to Glacier after creation:
"You can only perform an Amazon Glacier import from devices of 4 TB in size or smaller. If you need to import more than 4 TB into Amazon Glacier, consider using an Amazon S3 import and archiving your objects to Amazon Glacier with an Amazon S3 lifecycle policy."
http://docs.aws.amazon.com/AWSImportExport/latest/DG/CHAP_GuideAndLimit.html

Joanna LOL

So I think D is more reliable. What do you think?

euxyabe

"Fastest method" is the key phrase, so D is the perfect match.

kamleshj

A

An archive is a durably stored block of information. You store your data in Amazon Glacier as archives. You may upload a single file as an archive, but your costs will be lower if you aggregate your data. TAR and ZIP are common formats that customers use to aggregate multiple files into a single file before uploading to Amazon Glacier. The total volume of data and number of archives you can store are unlimited. Individual Amazon Glacier archives can range in size from 1 byte to 40 terabytes. The largest archive that can be uploaded in a single Upload request is 4 gigabytes. For items larger than 100 megabytes, customers should consider using the Multipart upload capability. Archives stored in Amazon Glacier are immutable, i.e. archives can be uploaded and deleted but cannot be edited or overwritten.

niraj

Answer is D.

AWS ARC

The correct answer is D.

Senator

Answer is D. Going with A, you restrict yourself to the Internet connection, which is 1 Mbps; Import/Export bypasses that completely, as you just copy the data onto a disk or Snowball and ship it to AWS.

eduardojn

Why is no one considering B?

https://aws.amazon.com/es/storagegateway/

mutiger91

Storage Gateway will use the Internet connection or Direct Connect. Presumably, in this scenario, that is the same 1-Mbps link.

Moving 10 TB through a 1 Mbps link is like trying to suck a bowling ball through a cocktail straw.

krish

A
“You can only perform an Amazon Glacier import from devices of 4 TB in size or smaller”
https://docs.aws.amazon.com/es_es/AWSImportExport/latest/DG/createGlacierimportjobs.html

donkeynuts

I reluctantly agree with A. D would be correct if it weren't for the size limitation. Tricky question.

Aseem

D is correct. You can send 3 disks, 2 of size 4 TB and 1 of size 2 TB. The data can be either directly copied to Glacier, or to S3 and then to Glacier if you want to restore individual files later on.
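Given the 4-TB-per-device Glacier import limit quoted earlier in the thread, the disk count described here falls out of a one-line calculation (a trivial sketch, with the limit hard-coded as an assumption):

```python
import math

DEVICE_LIMIT_TB = 4   # Glacier import limit per device, per the docs quoted above
DATA_TB = 10

disks = math.ceil(DATA_TB / DEVICE_LIMIT_TB)
print(disks)  # 3: two full 4-TB devices plus one carrying the remaining 2 TB
```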

engmohhamed

D is the fastest one.

vladam

D is the right answer as there is no problem in splitting 10TB into multiple disks.

vladam

Also consider the guidance published by Amazon on when to use Import/Export:
https://aws.amazon.com/cloud-data-migration/

Available Internet Connection | Theoretical Min. Number of Days to Transfer 100 TB at 80% Network Utilization | When to Consider AWS Import/Export Snowball?
----------------------------- | -------- | -------------
T3 (44.736 Mbps)              | 269 days | 2 TB or more
100 Mbps                      | 120 days | 5 TB or more
1000 Mbps                     | 12 days  | 60 TB or more
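As a rough cross-check of that table, the figures can be approximated from first principles. This is a sketch assuming decimal terabytes and the stated 80% utilization; AWS's published numbers evidently use slightly different rounding or assumptions, so the results land near, but not exactly on, the table's values:

```python
def transfer_days(size_tb: float, link_mbps: float, utilization: float = 0.8) -> float:
    """Approximate days to move size_tb (decimal TB) over link_mbps at the given utilization."""
    bits = size_tb * 1e12 * 8                        # TB -> bits
    seconds = bits / (link_mbps * 1e6 * utilization)  # effective bits/s
    return seconds / 86400

# 100 TB over the three link speeds from the table above
for mbps in (44.736, 100.0, 1000.0):
    print(f"{mbps:>8} Mbps -> {transfer_days(100, mbps):6.0f} days")
```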

MorseIT

D is correct.
10 TB takes ~900 days through a 1-Mbps link. Splitting it across 3 drives and shipping them is faster.

Kamran

The Import/Export limit for Glacier is 5 TB per job per disk, but the maximum number of jobs per day is 50, so in total one can transfer up to 5 x 50 = 250 TB. Of course, that means splitting the data across multiple disks. With option A the limiting factor is the Internet speed, and that cannot be overcome any other way.

Duck Bro

D
http://docs.aws.amazon.com/amazonglacier/latest/dev/uploading-an-archive.html
Using the AWS Import/Export Service

To upload existing data to Amazon Glacier, you might consider using the AWS Import/Export service. AWS Import/Export accelerates moving large amounts of data into and out of AWS using portable storage devices for transport. AWS transfers your data directly onto and off of storage devices using Amazon’s high-speed internal network, bypassing the Internet. For more information, go to the AWS Import/Export detail page.

Rickety

You can only perform a Amazon Glacier import from devices of 4 TB in size or smaller. If you need to import more than 4 TB into Amazon Glacier, consider using an Amazon S3 import and archiving your objects to Amazon Glacier with an Amazon S3 lifecycle policy.

Jas

This clearly states that 16 TB is the upper limit of data size for AWS Import/Export:
“AWS Import/Export accelerates transferring data between the AWS cloud and portable storage devices that you mail to us. AWS Import/Export is a good choice if you have 16 terabytes (TB) or less of data to import into Amazon Simple Storage Service (Amazon S3), Amazon Glacier, or Amazon Elastic Block Store (Amazon EBS). You can also export data from Amazon S3 with AWS Import/Export.”

http://docs.aws.amazon.com/AWSImportExport/latest/DG/whatisdisk.html

KusK

D is 100% correct

AWS Snowball can accelerate moving large amounts of data into and out of AWS using portable storage devices for transport. AWS transfers your data directly onto and off of storage devices using Amazon’s high-speed internal network and bypassing the Internet. For significant data sets, AWS Import/Export is often faster than Internet transfer and more cost effective than upgrading your connectivity. You can use AWS Import/Export for migrating data into the cloud, distributing content to your customers, sending backups to AWS, and disaster recovery.

T

AWS Import/Export accelerates transferring data between the AWS cloud and portable storage devices that you mail to us. AWS Import/Export is a good choice if you have 16 terabytes (TB) or less of data to import into Amazon Simple Storage Service or Amazon Elastic Block Store (Amazon EBS). You can also export data from Amazon S3 with AWS Import/Export.

So the right one is AWS Import/Export.