You have recently joined a startup company building sensors to measure street noise and air quality in urban
areas.
The company has been running a pilot deployment of around 100 sensors for 3 months. Each sensor uploads
1KB of sensor data every minute to a backend hosted on AWS.
During the pilot, you measured a peak of 10 IOPS on the database, and you stored an average of 3GB of sensor
data per month in the database.
The current deployment consists of a load-balanced auto scaled Ingestion layer using EC2 instances and a
PostgreSQL RDS database with 500GB standard storage.
The pilot is considered a success and your CEO has managed to get the attention of some potential investors.
The business plan requires a deployment of at least 100K sensors, which needs to be supported by the backend.
You also need to store sensor data for at least two years to be able to compare year-over-year improvements.
To secure funding, you have to make sure that the platform meets these requirements and leaves room for
further scaling.
Which setup will meet the requirements?
A.
Add an SQS queue to the ingestion layer to buffer writes to the RDS instance
B.
Ingest data into a DynamoDB table and move old data to a Redshift cluster
C.
Replace the RDS instance with a 6 node Redshift cluster with 96TB of storage
D.
Keep the current architecture but upgrade RDS storage to 3TB and 10K provisioned IOPS
Does this sound right?
Answer: C
https://aws.amazon.com/redshift/faqs/
Ideally you would bring the data into EMR first and then into Redshift, assuming the sensor data is unstructured or semi-structured.
B is the best solution.
The POC solution is being scaled up by 1000×, which means it will require 72TB of storage to retain 24 months' worth of data. This rules out RDS as a possible DB solution, which leaves you with Redshift.
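To sanity-check that 72TB figure, here is the back-of-the-envelope math using only the numbers given in the question (treating GB/TB as decimal units):

```python
# Back-of-the-envelope storage estimate from the figures in the question.
pilot_sensors = 100          # pilot deployment size
target_sensors = 100_000     # required by the business plan
pilot_gb_per_month = 3       # measured average during the pilot
retention_months = 24        # two years, for year-over-year comparison

scale_factor = target_sensors // pilot_sensors           # 1000x scale-up
gb_per_month = pilot_gb_per_month * scale_factor         # 3,000 GB = 3 TB/month
total_tb = gb_per_month * retention_months / 1000        # storage for 24 months

print(scale_factor)  # 1000
print(total_tb)      # 72.0
```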
I believe DynamoDB is more cost-effective and scales better for ingest than using EC2 in an auto-scaling group.
Also, this example solution from AWS is somewhat similar, for reference.
http://media.amazonwebservices.com/architecturecenter/AWS_ac_ra_timeseriesprocessing_16.pdf
I think the example you quote is not suitable for this question. That example is typical for a MapReduce job, which uses DynamoDB for some preparation tasks (ETL, data cleansing…).
"You also need to store sensor data for at least two years to be able to compare year-over-year improvements."
Sounds like a Redshift function to me! DynamoDB is better suited for large ingest!
B
Why C is wrong.
A six-node Redshift cluster cannot have 96TB of storage, can it?
Even with compression, a single node is limited to 160GB of data.
Wonderful point. C is completely ruled out, leaving B as the answer. Thanks, Bones Cisco.
You mean 16TB.
I'll let you calculate 6×16.
A single node is not limited to 160GB.
Dense Storage (DS) nodes are available in two sizes, Extra Large and Eight Extra Large. The Extra Large (XL) has 3 HDDs with a total of 2TB of magnetic storage, whereas the Eight Extra Large (8XL) has 24 HDDs with a total of 16TB of magnetic storage.
https://aws.amazon.com/redshift/faqs/
You can easily create an Amazon Redshift data warehouse cluster by using the AWS Management Console or the Amazon Redshift APIs. You can start with a single node, 160GB data warehouse and scale all the way to a petabyte or more with a few clicks in the AWS Console or a single API call.
Extract from https://aws.amazon.com/redshift/faqs/
B
I would go with B.
Going with option B will be more costly than option C, since it uses both DynamoDB and a Redshift cluster. DynamoDB is faster and runs on SSD drives, but here the IOPS are very low and we don't need a high-speed database, so just using Redshift would be enough!
I would go with option C.
A Redshift single node can support up to 16TB of storage, and we need 72TB for 24 months of data. A 6-node Redshift cluster of Eight Extra Large (8XL) nodes (each with 24 HDDs and 16TB of magnetic storage) provides 96TB. So option C should be the right one.
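A quick check of the cluster sizing in option C, using the 8XL node capacity from the Redshift FAQ quoted above and the 72TB requirement worked out earlier in the thread:

```python
# Does a 6-node cluster of Dense Storage 8XL nodes cover the requirement?
nodes = 6
tb_per_node = 16        # DS 8XL: 24 HDDs, 16 TB of magnetic storage per node
required_tb = 72        # 24 months of data at 1000x the pilot volume

cluster_tb = nodes * tb_per_node
print(cluster_tb)                 # 96
print(cluster_tb >= required_tb)  # True -> 96TB leaves ~24TB of headroom
```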
C. Replace the RDS instance with a 6 node Redshift cluster with 96TB of storage
It says "replace" RDS, but Redshift is a data warehouse solution. It is not Kinesis, which streams live data directly.
I go with B.
The two challenges are scaling ingestion and scaling storage.
Option B is the right answer, as it allows scaling ingestion with DynamoDB and storage with Redshift. With these options you'll be able to scale even past the 1000× POC size.
But isn't C more suitable?
Can you explain why B?
DynamoDB is right, but the write costs would be expensive! You'd want SQS or Kinesis to buffer the writes…
Redshift isn't suited for lots of small writes; its ingestion is supposed to come from S3, DynamoDB, EMR, or Kinesis (using COPY, not INSERT).
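To illustrate the buffering point: since Redshift loads efficiently via bulk COPY rather than per-row INSERT, sensor writes are typically accumulated and flushed in batches. A minimal, AWS-free sketch of that pattern (the batch size and flush callback here are made up for illustration, not a real AWS API):

```python
# Toy write buffer: collect small sensor records and flush them in batches,
# the way you would before issuing a bulk COPY into Redshift.
class WriteBuffer:
    def __init__(self, flush, batch_size=1000):
        self.flush = flush            # called with one list of records per batch
        self.batch_size = batch_size
        self.pending = []

    def add(self, record):
        self.pending.append(record)
        if len(self.pending) >= self.batch_size:
            self.flush(self.pending)  # hand a full batch to the loader
            self.pending = []

batches = []
buf = WriteBuffer(batches.append, batch_size=3)
for i in range(7):
    buf.add({"sensor": i, "noise_db": 42})  # 7 small writes...
print(len(batches))      # 2 full batches flushed
print(len(buf.pending))  # 1 record still waiting in the buffer
```

In the real architecture the flush callback would write a file to S3 and trigger a COPY; the point is only that the small per-minute writes never hit the warehouse individually.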
C is the right answer. You cannot go with DynamoDB because the application currently uses PostgreSQL on RDS. Replacing a relational SQL database with a NoSQL DB just for the sake of scaling is not a sensible option, whereas Amazon Redshift still gives you a relational database.
Would having data in two places really allow you to compare year over year? So C seems like the answer.
I think answer C is incorrect. It stores only 3GB of data a month ["you stored an average of 3GB of sensor data per month in the database"]. So why do you need 96TB of storage? I don't get it.
You are moving from 100 sensors to 100K sensors, so C is the correct answer. And why would you need to ingest into DynamoDB and then move the data to Redshift?