Which of the following cannot be done using AWS Data Pipeline?
A. Create complex data processing workloads that are fault tolerant, repeatable, and highly available.
B. Regularly access your data where it's stored, transform and process it at scale, and efficiently transfer the results to another AWS service.
C. Generate reports over data that has been stored.
D. Move data between different AWS compute and storage services as well as on-premises data sources at specified intervals.
Explanation:
AWS Data Pipeline is a web service that helps you reliably process and move data between different AWS compute and storage services, as well as on-premises data sources, at specified intervals. With AWS Data Pipeline, you can regularly access your data where it's stored, transform and process it at scale, and efficiently transfer the results to another AWS service. AWS Data Pipeline helps you easily create complex data processing workloads that are fault tolerant, repeatable, and highly available. It also allows you to move and process data that was previously locked up in on-premises data silos. Generating reports over stored data is not a function of Data Pipeline itself; it only orchestrates the movement and processing of that data, so option C is the correct answer.
http://aws.amazon.com/datapipeline/
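As a rough illustration of what "moving data between services at specified intervals" looks like in practice, here is a minimal sketch that defines and activates a pipeline with boto3. The IAM role names, S3 URIs, schedule, and shell command are placeholder assumptions for the example, not values from the question.

```python
# Minimal sketch: create, define, and activate an AWS Data Pipeline with boto3.
# Role names, S3 paths, and the copy command below are placeholder assumptions.
import boto3

client = boto3.client("datapipeline", region_name="us-east-1")

# Create an empty pipeline shell and capture its ID.
pipeline = client.create_pipeline(name="daily-copy-demo", uniqueId="daily-copy-demo-001")
pipeline_id = pipeline["pipelineId"]

# Pipeline definition: a daily schedule plus a shell command, run on a
# transient EC2 resource, that copies data from one S3 prefix to another.
objects = [
    {
        "id": "Default",
        "name": "Default",
        "fields": [
            {"key": "scheduleType", "stringValue": "cron"},
            {"key": "schedule", "refValue": "DailySchedule"},
            {"key": "failureAndRerunMode", "stringValue": "CASCADE"},
            {"key": "role", "stringValue": "DataPipelineDefaultRole"},                  # assumed IAM role
            {"key": "resourceRole", "stringValue": "DataPipelineDefaultResourceRole"},  # assumed IAM role
            {"key": "pipelineLogUri", "stringValue": "s3://example-bucket/logs/"},
        ],
    },
    {
        "id": "DailySchedule",
        "name": "DailySchedule",
        "fields": [
            {"key": "type", "stringValue": "Schedule"},
            {"key": "period", "stringValue": "1 day"},
            {"key": "startDateTime", "stringValue": "2024-01-01T00:00:00"},
        ],
    },
    {
        "id": "WorkerResource",
        "name": "WorkerResource",
        "fields": [
            {"key": "type", "stringValue": "Ec2Resource"},
            {"key": "schedule", "refValue": "DailySchedule"},
            {"key": "instanceType", "stringValue": "t1.micro"},
            {"key": "terminateAfter", "stringValue": "30 Minutes"},
        ],
    },
    {
        "id": "CopyStep",
        "name": "CopyStep",
        "fields": [
            {"key": "type", "stringValue": "ShellCommandActivity"},
            {"key": "schedule", "refValue": "DailySchedule"},
            {"key": "runsOn", "refValue": "WorkerResource"},
            {"key": "command", "stringValue": "aws s3 cp s3://example-bucket/in/ s3://example-bucket/out/ --recursive"},
        ],
    },
]

# Upload the definition and start the pipeline on its daily schedule.
client.put_pipeline_definition(pipelineId=pipeline_id, pipelineObjects=objects)
client.activate_pipeline(pipelineId=pipeline_id)
```

Note that the pipeline only schedules and runs the copy/transform step; any reporting over the resulting data would be done by a separate service reading the output location.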