You have recently joined a startup company building sensors to measure street noise and air quality in urban
areas. The company has been running a pilot deployment of around 100 sensors for 3 months. Each sensor
uploads 1KB of sensor data every minute to a backend hosted on AWS.
During the pilot, you measured a peak of 10 IOPS on the database, and you stored an average of 3GB of
sensor data per month in the database.
The current deployment consists of a load-balanced, auto-scaled ingestion layer using EC2 instances and a
PostgreSQL RDS database with 500GB standard storage.
The pilot is considered a success, and your CEO has managed to get the attention of some potential investors.
The business plan requires a deployment of at least 100K sensors, which the backend needs to support. You also need to store sensor data for at least two years to be able to compare year-over-year improvements.
To secure funding, you have to make sure that the platform meets these requirements and leaves room for
further scaling. Which setup will meet the requirements?
A. Add an SQS queue to the ingestion layer to buffer writes to the RDS instance
B. Ingest data into a DynamoDB table and move old data to a Redshift cluster
C. Replace the RDS instance with a 6-node Redshift cluster with 96TB of storage
D. Keep the current architecture but upgrade RDS storage to 3TB and 10K provisioned IOPS
C
http://www.aiotestking.com/amazon/which-setup-win-meet-the-requirements/
https://aws.amazon.com/redshift/faqs/
…
Both Amazon Redshift and Amazon RDS enable you to run traditional relational databases in the cloud while offloading database administration. Customers use Amazon RDS databases both for online-transaction processing (OLTP) and for reporting and analysis. Amazon Redshift harnesses the scale and resources of multiple nodes and uses a variety of optimizations to provide order of magnitude improvements over traditional databases for analytic and reporting workloads against very large data sets. Amazon Redshift provides an excellent scale-out option as your data and query complexity grows or if you want to prevent your reporting and analytic processing from interfering with the performance of your OLTP workload.
Answer is B. Refer: http://jayendrapatil.com/aws-redshift/
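The choice follows from simple scaling of the pilot figures: going from 100 to 100K sensors is a 1000x increase. A rough back-of-envelope sketch (Python, using only numbers stated in the question):

```python
# Back-of-envelope sizing: scale the pilot (100 sensors, 10 IOPS peak,
# 3GB/month) up to 100K sensors and a two-year retention window.
PILOT_SENSORS = 100
TARGET_SENSORS = 100_000
SCALE = TARGET_SENSORS / PILOT_SENSORS            # 1000x

peak_iops = 10 * SCALE                            # ~10,000 IOPS at peak
monthly_storage_gb = 3 * SCALE                    # ~3TB of new data per month
two_year_storage_tb = monthly_storage_gb * 24 / 1000  # ~72TB retained over 2 years
writes_per_second = TARGET_SENSORS / 60           # one 1KB upload per sensor per minute

print(f"Peak IOPS:            {peak_iops:,.0f}")               # 10,000
print(f"Storage over 2 years: {two_year_storage_tb:,.0f} TB")  # 72
print(f"Sustained writes/sec: {writes_per_second:,.0f}")       # 1,667
```

So the backend must absorb on the order of 10K write IOPS and retain roughly 72TB of data, which is what rules out options A, C and D below.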
Add an SQS queue to the ingestion layer to buffer writes to the RDS instance (a queue smooths write bursts, but the RDS instance still cannot hold roughly 72TB of data over two years)
Ingest data into a DynamoDB table and move old data to a Redshift cluster (DynamoDB absorbs the ~10K IOPS ingestion load and Redshift stores the multi-year history for analysis; see the ingestion sketch after this list)
Replace the RDS instance with a 6-node Redshift cluster with 96TB of storage (covers the storage requirement but does not handle the ingestion issue, since Redshift is not designed for a continuous stream of small single-row writes)
Keep the current architecture but upgrade RDS storage to 3TB and 10K provisioned IOPS (3TB falls far short of the roughly 72TB needed for two years of data)
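For illustration only, a minimal sketch of option B's write path using boto3, assuming a hypothetical SensorReadings table keyed on sensor_id (partition key) and timestamp (sort key); the table and attribute names are invented for this example and are not part of the question:

```python
# Hypothetical ingestion path for option B: each sensor reading is written
# to a DynamoDB table. Table and attribute names are assumptions.
import time
from decimal import Decimal

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("SensorReadings")  # hypothetical table name

def ingest_reading(sensor_id: str, noise_db: float, pm25: float) -> None:
    """Store one ~1KB reading; DynamoDB scales to the ~1,700 writes/sec required."""
    table.put_item(
        Item={
            "sensor_id": sensor_id,              # partition key
            "timestamp": int(time.time()),       # sort key (epoch seconds)
            "noise_db": Decimal(str(noise_db)),  # DynamoDB requires Decimal, not float
            "pm25": Decimal(str(pm25)),
        }
    )
```

Older items would then periodically be exported (for example to S3) and loaded into Redshift with a COPY command, keeping the two-year history available for year-over-year analysis without burdening the ingestion table.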