Note: This question is part of a series of questions that use the same scenario. For your convenience, the
scenario is repeated in each question. Each question presents a different goal and answer choices, but the text
of the scenario is exactly the same in each question in this series.
You are planning a big data infrastructure by using an Apache Spark cluster in Azure HDInsight. The cluster
has 24 processor cores and 512 GB of memory.
The architecture of the infrastructure is shown in the exhibit. (Click the Exhibit button.)

The architecture will be used by the following users:
- Support analysts who run applications that will use REST to submit Spark jobs.
- Business analysts who use JDBC and ODBC client applications from a real-time view. The business analysts run monitoring queries to access aggregate results for 15 minutes. The results will be referenced by subsequent queries.
- Data analysts who publish notebooks drawn from batch layer, serving layer, and speed layer queries. All of the notebooks must support native interpreters for data sources that are batch processed. The serving layer queries are written in Apache Hive and must support multiple sessions. Unique GUIDs are used across the data sources, which allow the data analysts to use Spark SQL.

The data sources in the batch layer share a common storage container. The following data sources are used:
- Hive for sales data
- Apache HBase for operations data
- HBase for logistics data by using a single region server
You need to ensure that the analysts can query the logistics data by using JDBC APIs and SQL APIs.
Which technology should you implement?


A. Apache Phoenix
B. Apache Spark
C. Apache Storm
D. Apache Hive

Explanation:
Correct answer: A. Apache Phoenix is a SQL layer on top of HBase: it compiles standard SQL into native HBase API calls and ships with a JDBC driver, so the analysts can query the HBase-backed logistics data through both JDBC APIs and SQL APIs. The other options do not satisfy the requirement: Spark and Storm are processing engines rather than a SQL interface over HBase, and Hive's JDBC access is aimed at batch queries over warehouse tables rather than low-latency access to the HBase logistics store.
Reference: http://phoenix.apache.org/
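For reference, the sketch below shows what such a query could look like from a Java client using the Phoenix JDBC driver, assuming the phoenix-client JAR is on the classpath. The ZooKeeper host, znode path, and the LOGISTICS table with its columns are placeholders for illustration, not part of the scenario.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class LogisticsQueryExample {
    public static void main(String[] args) throws Exception {
        // Phoenix connection string: jdbc:phoenix:<zookeeper quorum>:<port>:<znode>.
        // "zk0-hdinsight.example.com" and "/hbase-unsecure" are placeholder values
        // for the cluster's ZooKeeper quorum and HBase root znode.
        String url = "jdbc:phoenix:zk0-hdinsight.example.com:2181:/hbase-unsecure";

        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             // "LOGISTICS" and its columns are hypothetical; Phoenix would map
             // a table or view like this over the existing HBase logistics data.
             ResultSet rs = stmt.executeQuery(
                     "SELECT shipment_id, status FROM LOGISTICS LIMIT 10")) {
            while (rs.next()) {
                System.out.println(rs.getString("shipment_id")
                        + " : " + rs.getString("status"));
            }
        }
    }
}

Because Phoenix exposes a standard JDBC interface, the same query also works unchanged from SQL tooling that speaks JDBC, which is what makes it the fit for the JDBC-and-SQL requirement here.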


