You have a web application leveraging an Elastic Load Balancer (ELB) in front of the web servers, deployed
using an Auto Scaling group. Your database is running on Relational Database Service (RDS). The application
serves out technical articles and responses to them; in general there are more views of an article than there are
responses to the article. On occasion, an article on the site becomes extremely popular, resulting in significant
traffic increases that cause the site to go down.
What could you do to help alleviate the pressure on the infrastructure while maintaining availability during
these events?
Choose 3 answers
A.
Leverage CloudFront for the delivery of the articles.
B.
Add RDS read-replicas for the read traffic going to your relational database.
C.
Leverage ElastiCache for caching the most frequently used data.
D.
Use SQS to queue up the requests for the technical posts and deliver them out of the queue.
E.
Use Route53 health checks to fail over to an S3 bucket for an error page.
The answer is A, B, C.
The question mentions RDS, so an answer that includes it as part of the solution makes sense. Also, Route 53 does nothing to alleviate pressure on the infrastructure; it's for failover.
Agree with Bryan – ABC is the answer
E is counterproductive. It talks about failing over to an error page on S3.
ABC
ACE
Key words are “causes the site to go down.”
CloudFront = no-brainer
Route 53 health check = obvious
Memcached will be faster than a read replica
———————————————
Read replica = slower than Memcached
SQS = does not apply
https://aws.amazon.com/blogs/aws/create-a-backup-website-using-route-53-dns-failover-and-s3-website-hosting/
But if you have ABC, you don’t need E.
To me the key phrase is “alleviate the pressure”. The system is failing because it cannot take the pressure. If one implements ABC, the chances of needing E are very slim, as all three will help to alleviate the pressure.
Also, it doesn’t matter whether ElastiCache or an RDS read replica is faster; the criterion is alleviating pressure and redundancy, not speed of delivery.
“Read replica = slower than Memcached”
Absolutely, but if the database instance is under a huge load, a read replica will still help to split the load and increase query throughput.
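For what it’s worth, adding a replica is a one-call operation; here’s a rough boto3 sketch (the instance identifiers are made up) of creating one and finding the endpoint your read-only queries would target:

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create a read replica of the (hypothetical) primary serving the articles.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="articles-db-replica-1",
    SourceDBInstanceIdentifier="articles-db-primary",
)

# Wait until the replica is available, then read its endpoint. Article views
# (reads) go to this endpoint; writes (new responses) stay on the primary.
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="articles-db-replica-1")
replica = rds.describe_db_instances(DBInstanceIdentifier="articles-db-replica-1")["DBInstances"][0]
print(replica["Endpoint"]["Address"])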
The key phrase here is “maintain availability”; displaying an error page doesn’t achieve this.
So: ABC are my choices.
ABC is right.
D – SQS is not relevant in this scenario
E – it’s not maintaining availability, as it only redirects to an error page
Can someone update this to ABC?
ABC
ABC
abc
@Chef, you are mostly right, but not in this instance 🙂
It’s ABC. The point about the site going down doesn’t change that: the Route 53 option only provides a graceful downtime with a friendly error page, and does nothing to remove the actual problem, which is solved by ABC.
E is wrong: we are not using multiple regions, and Route 53 failover away from the ELB is not required.
D is wrong: SQS would only add to the strain on the server, and is not useful in this situation.
A is correct, as you can have users read the articles from edge locations.
B is correct, as you can serve reads with read replicas.
C is correct, as ElastiCache would assist in this situation by caching the most frequently read data.
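As a rough illustration of how C offloads the database, here is a minimal cache-aside sketch against a Redis-compatible ElastiCache endpoint (the endpoint, key scheme, and fetch_article_from_db helper are all hypothetical):

import json
import redis

# Hypothetical ElastiCache (Redis) cluster endpoint.
cache = redis.Redis(host="articles-cache.abc123.use1.cache.amazonaws.com", port=6379)

def get_article(article_id, fetch_article_from_db):
    key = f"article:{article_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)                 # cache hit: RDS is never touched
    article = fetch_article_from_db(article_id)   # cache miss: one query to RDS (or a replica)
    cache.setex(key, 300, json.dumps(article))    # keep it for 5 minutes
    return article

During a spike, the popular article is served almost entirely from the cache, which is exactly the “alleviate the pressure” the question asks for.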
ABC
ABC
ABC
BCD
Going against the tide here, as the question states you need to “alleviate the pressure on the infrastructure while maintaining availability”.
As the web servers are already in an Auto Scaling group, I’m going to say the problem lies with the DB. As such, having users connect to CloudFront edge locations does not help. These articles come from the DB …
B – Read replicas: great.
C – Caching: great.
D – SQS
SQS allows you to decouple the requests into a queue, which can then be served by the DB at a rate it can manage, “maintaining availability”.
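For clarity, the decoupling pattern being suggested would look roughly like this with boto3 (the queue name and handler are hypothetical): the web tier enqueues work, and a worker drains the queue at whatever rate the database can sustain.

import boto3

def handle_request(body):
    # Hypothetical handler: query RDS for the article and build the response.
    print("processing", body)

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = sqs.create_queue(QueueName="article-requests")["QueueUrl"]

# Producer (web tier): enqueue the request instead of hitting the DB directly.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"article_id": 42}')

# Consumer (worker): pull messages at a pace the database can handle.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=10)
for msg in resp.get("Messages", []):
    handle_request(msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])

The rest of the thread argues this doesn’t fit the mostly-read traffic described in the question.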
Key words are “site to go down”, “extremely popular”, and “alleviate the pressure”.
ABC
ABC