You are hired as the new head of operations for a SaaS company. Your CTO has asked you to make debugging any part of your entire operation simpler and as fast as possible. She complains that she has no idea what is going on in the complex, service-oriented architecture, because the developers just log to disk, and it's very hard to find errors in logs on so many services. How can you best meet this requirement and satisfy your CTO?

A. Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the PutBucket event and publish events to AWS Lambda. Use the Lambda to analyze logs as soon as they come in and flag issues.

B. Begin using CloudWatch Logs on every service. Stream all Log Groups into S3 objects. Use AWS EMR cluster jobs to perform ad-hoc MapReduce analysis and write new queries when needed.

C. Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the PutBucket event and publish events to AWS Kinesis. Use Apache Spark on AWS EMR to perform at-scale stream processing queries on the log chunks and flag issues.

D. Begin using CloudWatch Logs on every service. Stream all Log Groups into an AWS Elasticsearch Service Domain running Kibana 4 and perform log analysis on a search cluster.

Explanation:
Correct answer: D. The Elasticsearch and Kibana 4 combination is the core of the ELK Stack, which is designed specifically for real-time, ad-hoc log analysis and aggregation. All other answers introduce extra delay or require pre-defined queries. Amazon Elasticsearch Service is a managed service that makes it easy to deploy, operate, and scale Elasticsearch in the AWS Cloud. Elasticsearch is a popular open-source search and analytics engine for use cases such as log analytics, real-time application monitoring, and clickstream analytics.
Reference: https://aws.amazon.com/elasticsearch-service/

