What should you configure?

You have a Microsoft Azure SQL data warehouse that contains information about community events.
An Azure Data Factory job writes an updated CSV file to Azure Blob storage at Community/{date}/events.csv
daily.
You plan to consume a Twitter feed by using Azure Stream Analytics and to correlate the feed to the community
events. The Stream Analytics job must retrieve the latest community events data and correlate it to the
Twitter feed data.
You need to ensure that when updates to the community events data are written to the CSV files, the Stream
Analytics job can access the latest community events data.
What should you configure?

A. an output that uses a blob storage sink and has a path pattern of Community/{date}

B. an output that uses an event hub sink and the CSV event serialization format

C. an input that uses a reference data source and has a path pattern of Community/{date}/events.csv

D. an input that uses a reference data source and has a path pattern of Community/{date}
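For reference, options C and D both describe reference data inputs. Below is a minimal sketch, written in Python, of the ARM/REST-style payload that defines a blob-backed Stream Analytics reference data input. The input alias, storage account name, key, and container are placeholders, and the exact property schema should be verified against the Azure Stream Analytics REST documentation.

# Sketch of a Stream Analytics reference-data input definition, expressed as
# the ARM/REST-style JSON payload built in Python. The account name, key,
# container, and input alias below are placeholders, not values from the
# scenario.
import json

reference_input = {
    "name": "communityEvents",  # hypothetical input alias used in the query
    "properties": {
        "type": "Reference",  # reference data, as opposed to a stream input
        "datasource": {
            "type": "Microsoft.Storage/Blob",
            "properties": {
                "storageAccounts": [
                    {"accountName": "<storage-account>", "accountKey": "<key>"}
                ],
                "container": "<container>",
                # The {date} token lets the job pick up each day's refreshed
                # events.csv as the Data Factory job writes it.
                "pathPattern": "Community/{date}/events.csv",
                "dateFormat": "yyyy/MM/dd",
            },
        },
        "serialization": {
            "type": "Csv",
            "properties": {"fieldDelimiter": ",", "encoding": "UTF8"},
        },
    },
}

print(json.dumps(reference_input, indent=2))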


