You are working with a customer who has 10 TB of archival data that they want to migrate to Amazon Glacier.
The customer has a 1-Mbps connection to the Internet. Which service or feature provides the fastest method
of getting the data into Amazon Glacier?
A. Amazon Glacier multipart upload
B. AWS Storage Gateway
C. VM Import/Export
D. AWS Import/Export
Yes, multipart upload is for archives greater than 100 MB… but you would still be limited by the 1-Mbps link.
IMHO AWS Import/Export is the better option.
Therefore the answer is D.
“To upload existing data to Amazon Glacier, you might consider using the AWS Import/Export service. AWS Import/Export accelerates moving large amounts of data into and out of AWS using portable storage devices for transport. AWS transfers your data directly onto and off of storage devices using Amazon’s high-speed internal network, bypassing the Internet. ”
Source: http://docs.aws.amazon.com/amazonglacier/latest/dev/uploading-an-archive.html
d
1 TB -> 1024 x 1024 MB = 1,048,576 MB
If the speed is 1 MBps, that means it will take 291 hours (12 days) to upload.
12 days is greater than 7 days.
So use Import/Export; the answer is D.
Manish, 1024×1024 MB = 1 GB. You need to multiply that again by another 1024 to get the number of MB in a TB, so I think that will change your analysis.
No, Manish is right with “1 TB -> 1024 X1024 MB”.
However the 12 days needs to be
– multiplied by 10 as the question says 10TB not 1TB
– multiplied by 8 as the question says speed is 1Mbps not 1MBps. Bits not bytes.
So 960 days I think.
10 TB = 10 x 1024 x 1024 = 10,485,760 MB; x 8 = 83,886,080 Mb; at 1 Mbps => 83,886,080 s -> 1,398,101 min -> 23,301 hours -> 970+ days
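The arithmetic above is easy to sanity-check in a few lines of Python (a sketch assuming binary terabytes, i.e. 1 TB = 1024 x 1024 MB, as this thread does):

```python
# Transfer time for 10 TB over a 1 Mbps link (binary units, as in the thread)
data_mb = 10 * 1024 * 1024          # 10 TB expressed in MB -> 10,485,760 MB
data_megabits = data_mb * 8         # bytes to bits -> 83,886,080 Mb
link_mbps = 1                       # the question's 1 Mbps connection

seconds = data_megabits / link_mbps   # 83,886,080 s
hours = seconds / 3600                # ~23,301 hours
days = hours / 24                     # ~970.9 days
```

Which confirms the "970+ days" figure: roughly two and a half years on that link.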
A. Amazon Glacier multipart upload is the correct answer.
Answer is A.
The limit is 10,000 x 4 GB.
Refer to : Quick Facts
http://docs.aws.amazon.com/amazonglacier/latest/dev/uploading-archive-mpu.html
I choose A
Answer is D. Read the question carefully: "the fastest method". If you use multipart upload, your upload speed is still only 1 Mbps. AWS Import/Export is the fastest way.
Option A
I thought option D, but I found this:
You can only perform an Amazon Glacier import from devices of 4 TB in size or smaller.
http://docs.aws.amazon.com/es_es/AWSImportExport/latest/DG/createGlacierimportjobs.html
If you choose A, you will still be limited by the 1 Mbps internet connection; 10 TB is going to need 970 days based on the above calculation.
I think we can consider the option below, since the question asks for the fastest method of data transportation, and there is no requirement about how long the objects need to stay in an S3 bucket before they are transitioned to Glacier after object creation.
"You can only perform an Amazon Glacier import from devices of 4 TB in size or smaller. If you need to import more than 4 TB into Amazon Glacier, consider using an Amazon S3 import and archiving your objects to Amazon Glacier with an Amazon S3 lifecycle policy."
http://docs.aws.amazon.com/AWSImportExport/latest/DG/CHAP_GuideAndLimit.html
So I think D is more reliable. What do you think?
"Fastest method" is the key phrase. So D is a perfect match.
A
An archive is a durably stored block of information. You store your data in Amazon Glacier as archives. You may upload a single file as an archive, but your costs will be lower if you aggregate your data. TAR and ZIP are common formats that customers use to aggregate multiple files into a single file before uploading to Amazon Glacier. The total volume of data and number of archives you can store are unlimited. Individual Amazon Glacier archives can range in size from 1 byte to 40 terabytes. The largest archive that can be uploaded in a single Upload request is 4 gigabytes. For items larger than 100 megabytes, customers should consider using the Multipart upload capability. Archives stored in Amazon Glacier are immutable, i.e. archives can be uploaded and deleted but cannot be edited or overwritten.
Answer D…..
correct ans is D
Answer is D. Going with A, you restrict yourself to the internet, which is 1 Mbps, but Import/Export bypasses that completely, as you just load the data onto a disk or Snowball and send it to AWS.
Why is no one considering B?
https://aws.amazon.com/es/storagegateway/
Storage gateway will use internet connection or DirectConnect. Presumably based on this scenario, that is the same 1 Mbps link.
Moving 10 TB through a 1 Mbps link is like trying to suck a bowling ball through a cocktail straw.
A
“You can only perform an Amazon Glacier import from devices of 4 TB in size or smaller”
https://docs.aws.amazon.com/es_es/AWSImportExport/latest/DG/createGlacierimportjobs.html
I reluctantly agree with A. D would be correct if it wasn't for the size limitation. Tricky question.
D is correct. You can send 3 disks, 2 of size 4 TB and 1 of size 2 TB. The data can be either directly copied to Glacier, or to S3 and then to Glacier if you want to restore individual files later on.
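The split described above is easy to verify; a minimal sketch, assuming the 4 TB per-device limit from the Import/Export docs quoted in this thread:

```python
import math

DATA_TB = 10     # total archive size from the question
DEVICE_TB = 4    # per-device limit for Glacier import jobs

devices = math.ceil(DATA_TB / DEVICE_TB)            # 3 devices needed
last_device = DATA_TB - (devices - 1) * DEVICE_TB   # 2 TB left for the last device
```

So two full 4 TB devices plus one 2 TB device, exactly as described.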
D is the fastest one.
D is the right answer as there is no problem in splitting 10TB into multiple disks.
Also consider the guidance published by Amazon on when to use Import/Export:
https://aws.amazon.com/cloud-data-migration/
Available Internet Connection | Theoretical Min. Days to Transfer 100 TB at 80% Network Utilization | When to Consider AWS Import/Export Snowball
Connection          Min. days (100 TB @ 80% util.)   Consider Snowball from
----------------    ------------------------------   ----------------------
T3 (44.736 Mbps)    269 days                         2 TB or more
100 Mbps            120 days                         5 TB or more
1000 Mbps           12 days                          60 TB or more
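The table's figures can be roughly reproduced; a sketch assuming decimal terabytes (1 TB = 10^12 bytes) and 80% utilization. The exact results depend on rounding and on whether AWS used binary or decimal units, so they land a little below the published numbers:

```python
def days_to_transfer(tb, mbps, utilization=0.8):
    """Days to push `tb` decimal terabytes through an `mbps` link
    at the given fraction of network utilization."""
    bits = tb * 1e12 * 8                          # decimal TB -> bits
    seconds = bits / (mbps * 1e6 * utilization)   # effective link rate in bps
    return seconds / 86400

for label, speed in [("T3 (44.736 Mbps)", 44.736),
                     ("100 Mbps", 100.0),
                     ("1000 Mbps", 1000.0)]:
    print(f"{label}: {days_to_transfer(100, speed):.0f} days")
```

This yields roughly 259, 116, and 12 days for the three link speeds, in the same ballpark as the table.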
D is correct.
10 TB takes ~900 days through a 1 Mbps link. Splitting into 3 drives is faster.
The Import/Export limit for Glacier is 5 TB per job per disk, but the maximum allowable number of jobs per day is 50, so in total one can transfer up to 5 x 50 = 250 TB. Of course this means splitting the data onto multiple disks. With option A the limiting factor is the internet speed, and that cannot be overcome any other way.
Still agree with D.
https://docs.aws.amazon.com/es_es/AWSImportExport/latest/DG/CHAP_GuideAndLimit.html
==> You can only perform a Amazon Glacier import from devices of 4 TB in size or smaller.
The 4 TB limit is per device/drive. If the 10 TB of data is on several drives/devices, that would be OK. Besides, I do not currently see any 10 TB drive available on the market.
A:
D
http://docs.aws.amazon.com/amazonglacier/latest/dev/uploading-an-archive.html
Using the AWS Import/Export Service
To upload existing data to Amazon Glacier, you might consider using the AWS Import/Export service. AWS Import/Export accelerates moving large amounts of data into and out of AWS using portable storage devices for transport. AWS transfers your data directly onto and off of storage devices using Amazon’s high-speed internal network, bypassing the Internet. For more information, go to the AWS Import/Export detail page.
It’s A.
It supports up to 40 TB.
http://docs.aws.amazon.com/amazonglacier/latest/dev/uploading-an-archive.html
You really want an upload running over 23,871 hours?
It's AWS Import/Export.
You can only perform a Amazon Glacier import from devices of 4 TB in size or smaller. If you need to import more than 4 TB into Amazon Glacier, consider using an Amazon S3 import and archiving your objects to Amazon Glacier with an Amazon S3 lifecycle policy.
This clearly states that 16 TB is the upper limit of data size for AWS Import/Export:
“AWS Import/Export accelerates transferring data between the AWS cloud and portable storage devices that you mail to us. AWS Import/Export is a good choice if you have 16 terabytes (TB) or less of data to import into Amazon Simple Storage Service (Amazon S3), Amazon Glacier, or Amazon Elastic Block Store (Amazon EBS). You can also export data from Amazon S3 with AWS Import/Export.”
http://docs.aws.amazon.com/AWSImportExport/latest/DG/whatisdisk.html
http://docs.aws.amazon.com/amazonglacier/latest/dev/uploading-an-archive.html
Upload archives in parts – Using the multipart upload API, you can upload large archives, up to about 40,000 GB (10,000 * 4 GB).
D
D is 100% correct
AWS Snowball can accelerate moving large amounts of data into and out of AWS using portable storage devices for transport. AWS transfers your data directly onto and off of storage devices using Amazon’s high-speed internal network and bypassing the Internet. For significant data sets, AWS Import/Export is often faster than Internet transfer and more cost effective than upgrading your connectivity. You can use AWS Import/Export for migrating data into the cloud, distributing content to your customers, sending backups to AWS, and disaster recovery.
AWS Import/Export accelerates transferring data between the AWS cloud and portable storage devices that you mail to us. AWS Import/Export is a good choice if you have 16 terabytes (TB) or less of data to import into Amazon Simple Storage Service or Amazon Elastic Block Store (Amazon EBS). You can also export data from Amazon S3 with AWS Import/Export.
So the right one is AWS Import/Export.