You are designing a web application that stores static assets in an Amazon Simple Storage Service (S3) bucket.
You expect this bucket to immediately receive over 150 PUT requests per second. What should you do to
ensure optimal performance?

A.
Use multi-part upload.

B.
Add a random prefix to the key names.

C.
Amazon S3 will automatically manage performance at this scale.

D.
Use a predictable naming scheme, such as sequential numbers or date-time sequences, in the key names.

Explanation:
One way to introduce randomness to key names is to add a hash string as prefix to the key name. For example,
you can compute an MD5 hash of the character sequence that you plan to assign as the key name. From the
hash, pick a specific number of characters, and add them as the prefix to the key name.
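The hashed-prefix scheme described above can be sketched in a few lines. This is an illustrative snippet, not AWS-provided code; the function name and the 4-character prefix length are choices made for the example.

```python
import hashlib

def randomized_key(key_name: str, prefix_len: int = 4) -> str:
    """Prefix a key name with the first few hex characters of its MD5 hash.

    The random-looking prefix spreads otherwise-sequential key names
    across multiple S3 index partitions.
    """
    digest = hashlib.md5(key_name.encode("utf-8")).hexdigest()
    return f"{digest[:prefix_len]}-{key_name}"

# Sequential date-based names get distinct, evenly distributed prefixes:
print(randomized_key("2016-01-15/photo1.jpg"))
print(randomized_key("2016-01-15/photo2.jpg"))
```

Because the prefix is derived from the key itself, the scheme is deterministic: given the original name, you can always recompute the full key when reading the object back.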




fun4two

Answer B. Explanation link at the jm

kamleshj

B

If you anticipate that your workload will consistently exceed 100 requests per second, you should avoid sequential key names. If you must use sequential numbers or date and time patterns in key names, add a random prefix to the key name. The randomness of the prefix more evenly distributes key names across multiple index partitions. Examples of introducing randomness are provided later in this topic.

P

Background: http://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html

This topic discusses Amazon S3 best practices for optimizing performance depending on your request rates. If your workload in an Amazon S3 bucket routinely exceeds 100 PUT/LIST/DELETE requests per second or more than 300 GET requests per second, follow the guidelines in this topic to ensure the best performance and scalability.

Amazon S3 scales to support very high request rates. If your request rate grows steadily, Amazon S3 automatically partitions your buckets as needed to support higher request rates. However, if you expect a rapid increase in the request rate for a bucket to more than 300 PUT/LIST/DELETE requests per second or more than 800 GET requests per second, we recommend that you open a support case to prepare for the workload and avoid any temporary limits on your request rate. To open a support case, go to Contact Us.

KusK

Answer is C

Amazon S3 scales to support very high request rates. If your request rate grows steadily, Amazon S3 automatically partitions your buckets as needed to support higher request rates. However, if you expect a rapid increase in the request rate for a bucket to more than 300 PUT/LIST/DELETE requests per second or more than 800 GET requests per second, we recommend that you open a support case to prepare for the workload and avoid any temporary limits on your request rate.

evolver

As per the earlier official doc, S3's handling capacity was 100 PUT/LIST/DELETE requests per second, and option B was then the correct choice. But as KusK has mentioned, the capacity has since been increased to 300 PUT/LIST/DELETE requests per second.

Explanation for B: if the key names are sequential, the indices S3 creates for the objects are also similar and stored in the same location, increasing the likelihood that Amazon S3 will target a specific partition for a large number of your keys, overwhelming the I/O capacity of that partition. If you introduce some randomness in your key name prefixes, the key names, and therefore the I/O load, will be distributed across more than one partition.

VK

I agree with evolver. Answer is B.

rnatarajan

I agree with KusK, as it is very clear that any PUT/LIST/DELETE rate below 300 requests per second is automatically scaled by S3. Answer is C.

TJ

I think it's C.
Amazon S3 scales to support very high request rates. If your request rate grows steadily, Amazon S3 automatically partitions your buckets as needed to support higher request rates. However, if you expect a rapid increase in the request rate for a bucket to more than 300 PUT/LIST/DELETE requests per second or more than 800 GET requests per second, we recommend that you open a support case to prepare for the workload and avoid any temporary limits on your request rate.

http://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html