You meet once per month with your operations team to review the past month’s data. During the meeting, you realize
that 3 weeks ago, your monitoring system, which pings over HTTP from outside AWS, recorded a large spike in latency
on your 3-tier web service API. You use DynamoDB for the database layer, ELB, EBS, and EC2 for the business logic
tier, and SQS, ELB, and EC2 for the presentation layer. Which of the following techniques will NOT help you figure out
what happened?
A. Check your CloudTrail log history around the spike’s time for any API calls that caused slowness.
B. Review CloudWatch Metrics graphs to determine which component(s) slowed the system down.
C. Review your ELB access logs in S3 to see if any ELBs in your system saw the latency.
D. Analyze your logs to detect bursts in traffic at that time.
Explanation:
From the CloudWatch FAQ at the time this question was written: "Metrics data are available for 2 weeks. If you want to store metrics data beyond that duration, you can retrieve it using our GetMetricStatistics API as well as a number of applications and tools offered by AWS partners."
https://aws.amazon.com/cloudwatch/faqs/
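
As a concrete illustration, the GetMetricStatistics call mentioned in the FAQ looks roughly like this with Python/boto3. This is only a sketch: the region, load balancer name, and spike timestamp are placeholders, and whether any datapoints come back for a 3-week-old spike depends on the retention period discussed here.

import boto3
from datetime import datetime, timedelta

# Sketch: pull the Classic ELB "Latency" metric for a one-hour window
# around the spike. Region, load balancer name and times are placeholders.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

spike_time = datetime(2016, 1, 15, 14, 0)   # hypothetical spike timestamp (UTC)

response = cloudwatch.get_metric_statistics(
    Namespace="AWS/ELB",
    MetricName="Latency",
    Dimensions=[{"Name": "LoadBalancerName", "Value": "my-api-elb"}],
    StartTime=spike_time - timedelta(minutes=30),
    EndTime=spike_time + timedelta(minutes=30),
    Period=300,                              # 5-minute datapoints
    Statistics=["Average", "Maximum"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"], point["Maximum"])
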
CloudWatch metrics are retained much longer than that now (up to 15 months), but the 2-week limit applied when this question was written.
The question asks which technique will NOT help.
Option A can help: CloudTrail is an audit log of every API call made in the account, so you can check which calls were made around the spike and see whether any of them caused the slowness. Keep in mind that CloudTrail records the calls themselves; the latency numbers are only in CloudWatch.
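
For example, "checking CloudTrail around the spike" could look like the following Python/boto3 sketch (the timestamp and region are assumptions, not values given in the question):

import boto3
from datetime import datetime, timedelta

# Sketch: list the API calls CloudTrail recorded in the hour around the spike.
# CloudTrail shows who called which API and when, not request latency.
cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

spike_time = datetime(2016, 1, 15, 14, 0)   # hypothetical spike timestamp (UTC)

for page in cloudtrail.get_paginator("lookup_events").paginate(
    StartTime=spike_time - timedelta(minutes=30),
    EndTime=spike_time + timedelta(minutes=30),
):
    for event in page["Events"]:
        print(event["EventTime"], event["EventSource"], event["EventName"])
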
Options C and D can also help: ELB access logs delivered to S3 and your own application logs are kept until you delete them, so they still cover a spike from 3 weeks ago (a sketch follows the FAQ excerpt below).
Option B will NOT help: CloudWatch metrics in the console were only available for 2 weeks, and the spike happened 3 weeks ago, so the metric graphs no longer cover it. So B is correct.
Q: Will I lose the metrics data if I disable monitoring for an Amazon EC2 instance?
No. You can always retrieve metrics data for any Amazon EC2 instance based on the retention schedules described above. However, the CloudWatch console limits the search of metrics to 2 weeks after a metric is last ingested to ensure that the most up to date instances are shown in your namespace.
https://aws.amazon.com/cloudwatch/faqs/
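
By contrast, the ELB access logs from option C stay in the S3 bucket you configured until you delete them, so they still cover a 3-week-old spike. A rough sketch of scanning them for slow requests (the bucket, prefix, and 1-second threshold are assumptions; the field index follows the Classic ELB access log format, where the sixth space-separated field is backend_processing_time):

import boto3

# Sketch: scan Classic ELB access logs in S3 for slow requests around the spike.
# Bucket name, prefix (account ID, region, date) and threshold are placeholders.
s3 = boto3.client("s3", region_name="us-east-1")
bucket = "my-elb-access-logs"
prefix = "AWSLogs/123456789012/elasticloadbalancing/us-east-1/2016/01/15/"

for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get("Contents", []):
        body = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"].read().decode("utf-8")
        for line in body.splitlines():
            fields = line.split(" ")
            backend_processing_time = float(fields[5])
            if backend_processing_time > 1.0:    # flag requests slower than 1 second
                print(obj["Key"], fields[0], backend_processing_time)
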
Answer: B
At the time of this question, the CloudWatch metrics retention period was 2 weeks, which is why B is the technique that will not help.