You are configuring your company’s application to use Auto Scaling and need to move user state information.
Which of the following AWS services provides a shared data store with durability and low latency?
A.
AWS ElastiCache Memcached
B.
Amazon Simple Storage Service
C.
Amazon EC2 instance storage
D.
Amazon DynamoDB
Explanation:
https://d36cz9buwru1tt.cloudfront.net/AWS_Overview.pdf (page 13, aws storage gateway)
I don’t think AWS Storage Gateway should be used in this case. Storage Gateway is more about files, but user state information is more about sessions and other temporary data.
Can anyone confirm if B is a correct answer?
I would say that D should be the answer.
I would agree with T to use DynamoDB
I’m not sure DynamoDB is really that durable for data.
I would go for B also
“Amazon Simple Storage Service (Amazon S3), provides developers and IT teams with secure, durable, highly-scalable cloud storage”
https://aws.amazon.com/s3/?nc1=h_ls
D.
Amazon DynamoDB is a fully managed NoSQL database cloud service, part of the AWS portfolio. Fast and easily scalable, it is meant to serve applications which require very low latency,…
The points here are:
1. a shared data store
2. durability
3. low latency
So S3 should be the answer.
Yes, it should be B.
D. Amazon DynamoDB (100% sure)
in DynamoDB’s FAQ:
Q: How does Amazon DynamoDB achieve high uptime and durability?
To achieve high uptime and durability, Amazon DynamoDB synchronously replicates data across three facilities within an AWS Region.
The question is about a data store, not a database.
So I guess it’s B.
What is the difference?
I think it’s S3.
I have the same idea. B
D
It’s A.
State information (for instance, for websites) is best stored in fast, low-latency systems such as Memcached or Redis.
No durability.
DynamoDB is a “shared data store with durability and low latency”, and it offers strongly consistent reads.
The three points of this question are: sharing, durability, and consistency…
Answer is D
Answer is D.
A: ElastiCache Memcached is shared and low latency, but not very durable.
B: S3 is both shared and amazingly durable, but it struggles on latency.
C: Instance storage is OK for low latency, but it is neither shared nor durable.
D: DynamoDB is perfect for this: it is a “shared data store with durability and low latency”, and it offers strongly consistent reads. http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html
This is the best answer.
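To make option D concrete, here is a minimal sketch of how a user-session record could be shaped for DynamoDB’s put_item call. The table name, key, and attribute layout are assumptions for illustration, not anything stated in the question:

```python
# Sketch: shaping a user-session record for DynamoDB. The "UserSessions"
# table name and the attribute names below are hypothetical; a real app
# would follow with
#   boto3.resource("dynamodb").Table("UserSessions").put_item(Item=item)

import json
import time

def build_session_item(session_id: str, state: dict, ttl_seconds: int = 1800) -> dict:
    """Return a DynamoDB item dict keyed by session_id, with a TTL
    attribute so expired sessions can be purged automatically."""
    return {
        "SessionId": session_id,                      # partition key
        "State": json.dumps(state),                   # serialized user state
        "ExpiresAt": int(time.time()) + ttl_seconds,  # epoch-seconds TTL
    }

item = build_session_item("sess-123", {"cart": ["sku-1", "sku-2"]})
```

Because every web server behind the load balancer reads and writes the same item per session, the store is shared; DynamoDB’s synchronous replication gives the durability, and SSD-backed reads the low latency.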
It would be possible to define a session object that can be stored in S3. I don’t know that latency on a given read would be an issue, but let’s say that I’m on a page and make a change that has to be written to my session object (overwrite). I navigate to another page. I need to again get the latest session data. There will be a period of time where I could get stale data because of the S3 consistency model and storing multiple copies of the data.
DynamoDB is clearly a better answer.
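The stale-read window described above can be illustrated with a toy simulation of an eventually consistent store (pure Python, no AWS API; this models the consistency behavior the comment attributes to S3, not S3 itself):

```python
# Toy model of eventual consistency: writes land on a primary copy and
# only reach the replica after an explicit replicate() step, so a read
# served by the replica in between returns stale (here, missing) data.

class EventuallyConsistentStore:
    def __init__(self):
        self.primary = {}
        self.replica = {}

    def put(self, key, value):
        self.primary[key] = value          # write acknowledged immediately

    def replicate(self):
        self.replica = dict(self.primary)  # replica catches up later

    def eventually_consistent_read(self, key):
        return self.replica.get(key)       # may lag behind the primary

store = EventuallyConsistentStore()
store.put("session", {"page": "checkout"})
stale = store.eventually_consistent_read("session")  # replica not yet caught up
store.replicate()
fresh = store.eventually_consistent_read("session")
```

In this model, `stale` comes back empty while `fresh` reflects the write, which is exactly the “period of time where I could get stale data” the comment describes; DynamoDB avoids it by offering strongly consistent reads.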
Agree with DynamoDB.
There are many examples of using it for user state records.
S3 is not a data store; it stores data as objects. (The key words are: user state information and low latency — DynamoDB uses SSDs.)
Answer: D
B
Amazon Simple Storage Service (Amazon S3) is storage for the Internet. It’s a simple storage service that offers software developers a highly scalable, reliable, and low-latency data storage infrastructure at very low costs. Amazon S3 provides a simple web services interface that can be used to store and retrieve any amount of data, at any time, from within Amazon Elastic Compute Cloud (Amazon EC2) or from anywhere on the web. You can write, read, and delete objects containing from 1 byte to 5 terabytes of data each. The number of objects you can store in an Amazon S3 bucket is virtually unlimited.
Amazon S3 is also commonly used as a data store for computation and large-scale analytics, such as analyzing financial transactions, clickstream analytics, and media transcoding. Because of the horizontal scalability of Amazon S3, you can access your data from multiple computing nodes concurrently without being constrained by a single connection.
D
Amazon S3 stores unstructured blobs and is suited for storing large objects up to 5 TB; smaller data elements or file pointers (possibly to Amazon S3 objects) are best saved in Amazon DynamoDB.
An application with user data most reasonably deals with pointers or tags. This is structured data, not bulk objects. DynamoDB offers durability and low latency, and it can be auto scaled!
Just because of the nature of the data, I’ve chosen D.
D. DynamoDB is durable, shared, and low latency.
The question talks about user sessions, which points to DynamoDB, but then asks “Which of the following AWS services provides a shared data store with durability and low latency?”, which is S3.
whoever writes these questions hates us all. 🙂
D, it is dynamoDB, agree with techminded2,
I’ll go with Dynamo on this one. If you realistically think about how you would implement this, never in a million years would I save user state in S3. I would prefer ElastiCache, but since the question mentions durability, it perhaps points to a different answer, in which case Dynamo is the best fit.
D
Based on the question: 1) it is not concerned about cost; 2) the key phrase is “user state information”, which is like metadata, for which DynamoDB is best.
S3 is good for storing objects, but when it comes to user state information there could be additional fields/metadata that make DynamoDB the preferred option. Hence the correct choice is D.
Here is related snippet from https://media.amazonwebservices.com/AWS_Storage_Options.pdf
To speed access to relevant data, many developers pair Amazon S3 with a database, such as Amazon DynamoDB or Amazon RDS. Amazon S3 stores the actual information, and the database serves as the repository for associated metadata (e.g., object name, size, keywords, and so on). Metadata in the database can easily be indexed and queried, making it very efficient to locate an object’s reference via a database query. This result can then be used to pinpoint and then retrieve the object itself from Amazon S3.
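As a hedged sketch of the pairing that snippet describes, the following builds the S3 key for the bulk object and the small DynamoDB metadata record that points back at it. The bucket layout and attribute names are made up for illustration; the actual `s3.put_object` and `table.put_item` calls are left as comments:

```python
# Sketch of the S3-plus-DynamoDB pattern from the storage-options paper:
# the bulk object goes to S3, and a small queryable metadata record goes
# to DynamoDB with a pointer back at the S3 key. Names are hypothetical;
# a real app would follow with s3.put_object(Bucket=..., Key=s3_key,
# Body=data) and table.put_item(Item=meta).

def build_object_records(user_id: str, filename: str, data: bytes):
    s3_key = f"uploads/{user_id}/{filename}"  # where the blob lives in S3
    meta = {                                  # indexed metadata for DynamoDB
        "UserId": user_id,                    # partition key
        "ObjectName": filename,
        "SizeBytes": len(data),
        "S3Key": s3_key,                      # pointer back to the S3 object
    }
    return s3_key, meta

key, meta = build_object_records("u42", "avatar.png", b"\x89PNG...")
```

Queries hit the small DynamoDB record first, then the result pinpoints the object to retrieve from S3, matching the pattern in the quoted whitepaper.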
I’ll go with D on this one; S3 is too slow.
Among the possible answers, the only reason S3 is not a good solution is latency. DynamoDB beats S3 in terms of speed of access. If DynamoDB were not a choice, I would have gone with S3.
it’s S3, (B)
A database is NOT a datastore
See below about low latency according to AWS
Amazon S3 Standard
Amazon S3 Standard offers high durability, availability, and performance object storage for frequently accessed data. Because it delivers low latency and high throughput, Standard is perfect for a wide variety of use cases including cloud applications, dynamic websites, content distribution, mobile and gaming applications, and big data analytics. Lifecycle management offers configurable policies to automatically migrate objects to the most appropriate storage class.
https://aws.amazon.com/s3/storage-classes/
Answer – B
Amazon S3 Standard
Key Features:
Low latency and high throughput performance
Designed for durability of 99.999999999% of objects
Designed for 99.99% availability over a given year
Backed with the Amazon S3 Service Level Agreement for availability.
Supports SSL encryption of data in transit and at rest
Lifecycle management for automatic migration of objects.
https://aws.amazon.com/s3/storage-classes/
Can you auto scale S3?
So the answer should be D.
You do not need to scale S3, as the question states: “..to move user state information..shared data store…”. You can access S3 from anywhere depending on your design, and it works well with Auto Scaling.
However, there is one word that pointed me to D: it is a “store”, not “storage” like S3.
A. AWS ElastiCache Memcached (in-memory only, not durable)
B. Amazon Simple Storage Service (does not provide low latency)
C. Amazon EC2 instance storage (not durable)
D is correct answer.
Q: When should I use Amazon DynamoDB vs Amazon S3?
Amazon DynamoDB stores structured data, indexed by primary key, and allows low-latency read and write access to items ranging from 1 byte up to 64 KB. Amazon S3 stores unstructured blobs and is suited for storing large objects up to 5 TB. In order to optimize your costs across AWS services, large objects or infrequently accessed data sets should be stored in Amazon S3, while smaller data elements or file pointers (possibly to Amazon S3 objects) are best saved in Amazon DynamoDB.
— http://aws.amazon.com/dynamodb/faqs/#When_should_I_use_Amazon_DynamoDB_vs_Amazon_S3
D
Managing ASP.NET Session State with Amazon DynamoDB
ASP.NET applications often store session-state data in memory. However, this approach doesn’t scale well. After the application grows beyond a single web server, the session state must be shared between servers. A common solution is to set up a dedicated session-state server with Microsoft SQL Server. But this approach also has drawbacks: you must administer another machine, the session-state server is a single point of failure, and the session-state server itself can become a performance bottleneck.
Amazon DynamoDB, a NoSQL database store from Amazon Web Services (AWS), provides an effective solution for sharing session state across web servers without incurring any of these drawbacks.
http://docs.aws.amazon.com/sdk-for-net/v2/developer-guide/dynamodb-session-net-sdk.html
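The same session-table idea translates outside .NET. Below is a sketch of the create_table parameters such a session store might use with boto3; the table and attribute names are illustrative, not the exact schema the ASP.NET provider creates:

```python
# Sketch of the table definition a DynamoDB-backed session store might
# use. Names are illustrative, not the SDK provider's actual schema; a
# real setup would pass these kwargs to
#   boto3.client("dynamodb").create_table(**session_table_params)

session_table_params = {
    "TableName": "AppSessionState",
    "KeySchema": [
        {"AttributeName": "SessionId", "KeyType": "HASH"},  # one item per session
    ],
    "AttributeDefinitions": [
        {"AttributeName": "SessionId", "AttributeType": "S"},  # string key
    ],
    "BillingMode": "PAY_PER_REQUEST",  # no capacity planning for spiky session load
}
```

With a single hash key on SessionId, every web server in the Auto Scaling group reads and writes the same item per session, avoiding the single point of failure and bottleneck that the dedicated session-state server introduces.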