
DP-100: Designing and Implementing a Data Science Solution on Azure (Exam Demo)

QUESTION 2
Your team is building a data engineering and data science development environment.
The environment must support the following requirements:
support Python and Scala
compose data storage, movement, and processing services into automated data pipelines
the same tool should be used for the orchestration of both data engineering and data science
support workload isolation and interactive workloads
enable scaling across a cluster of machines
You need to create the environment.
What should you do?
A. Build the environment in Apache Hive for HDInsight and use Azure Data Factory for orchestration.
B. Build the environment in Azure Databricks and use Azure Data Factory for orchestration.
C. Build the environment in Apache Spark for HDInsight and use Azure Container Instances for orchestration.
D. Build the environment in Azure Databricks and use Azure Container Instances for orchestration.
Correct Answer: B
Section: (none)
Explanation
Explanation/Reference:
Explanation:
In Azure Databricks, you can create two different types of clusters:
Standard: the default cluster type; supports Python, R, Scala, and SQL
High-concurrency: provides workload isolation for shared, interactive workloads
Azure Databricks is fully integrated with Azure Data Factory, which can orchestrate both the data engineering and the data science pipelines.
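For illustration, a minimal PySpark sketch of the kind of workload such a cluster runs; the same logic could equally be written in Scala, and a Data Factory pipeline would invoke the notebook containing it. The storage path and column name are placeholders, not part of the question:

    # Minimal PySpark sketch; the dbfs path and column name are hypothetical.
    from pyspark.sql import SparkSession

    # On Databricks a SparkSession already exists as `spark`; getOrCreate()
    # simply returns it, so the same script also runs locally for testing.
    spark = SparkSession.builder.getOrCreate()

    events = spark.read.csv("dbfs:/mnt/raw/events.csv",
                            header=True, inferSchema=True)

    # The aggregation executes in parallel across the cluster's workers,
    # which is the "scaling across a cluster of machines" requirement.
    events.groupBy("event_type").count().show()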
Incorrect Answers:
D: Azure Container Instances is suited to development and testing, not to production workloads, and it is not an orchestration service.
Reference:
https://docs.microsoft.com/en-us/azure/architecture/data-guide/technology-choices/data-science-and-machine-learning

QUESTION 4
You plan to build a team data science environment. Data for training models in machine learning pipelines will
be over 20 GB in size.
You have the following requirements:
Models must be built using Caffe2 or Chainer frameworks.
Data scientists must be able to use a data science environment to build the machine learning pipelines and
train models on their personal devices in both connected and disconnected network environments.
Personal devices must support updating machine learning pipelines when connected to a network.
You need to select a data science environment.
Which environment should you use?
A. Azure Machine Learning Service
B. Azure Machine Learning Studio
C. Azure Databricks
D. Azure Kubernetes Service (AKS)
Correct Answer: A
Section: (none)
Explanation
Explanation/Reference:
Explanation:
The Data Science Virtual Machine (DSVM) is a customized VM image on Microsoft's Azure cloud, built specifically for data science. Caffe2 and Chainer are supported on the DSVM, and the DSVM integrates with Azure Machine Learning.
Incorrect Answers:
B: Use Machine Learning Studio when you want to experiment with machine learning models quickly and easily,
and the built-in machine learning algorithms are sufficient for your solutions.
Reference:
https://docs.microsoft.com/en-us/azure/machine-learning/data-science-virtual-machine/overview
QUESTION 5
You are implementing a machine learning model to predict stock prices.
The model uses a PostgreSQL database and requires GPU processing.
You need to create a virtual machine that is pre-configured with the required tools.
What should you do?
A. Create a Data Science Virtual Machine (DSVM) Windows edition.
B. Create a Geo AI Data Science Virtual Machine (Geo-DSVM) Windows edition.
C. Create a Deep Learning Virtual Machine (DLVM) Linux edition.
D. Create a Deep Learning Virtual Machine (DLVM) Windows edition.
Correct Answer: A
Section: (none)
Explanation
Explanation/Reference:
Explanation:
In the DSVM, your training models can use deep learning algorithms on hardware that's based on graphics processing units (GPUs).
PostgreSQL is available for the following operating systems: Linux (all recent distributions), macOS (OS X 10.6 and newer, 64-bit installers), and Windows (64-bit installers available; tested on the latest versions and back to Windows 2012 R2).
Incorrect Answers:
B: The Azure Geo AI Data Science VM (Geo-DSVM) delivers geospatial analytics capabilities from Microsoft's
Data Science VM. Specifically, this VM extends the AI and data science toolkits in the Data Science VM by
adding ESRI's market-leading ArcGIS Pro Geographic Information System.
C, D: The DLVM is a template on top of the DSVM image. The packages, GPU drivers, and so on are all already present in the DSVM image; the DLVM mainly exists as a convenience at creation time, because it can be created only on GPU VM instances in Azure.
Reference:
https://docs.microsoft.com/en-us/azure/machine-learning/data-science-virtual-machine/overview
QUESTION 6
You are developing deep learning models to analyze semi-structured, unstructured, and structured data types.
You have the following data available for model building:
Video recordings of sporting events
Transcripts of radio commentary about events
Logs from related social media feeds captured during sporting events
You need to select an environment for creating the model.
Which environment should you use?
A. Azure Cognitive Services
B. Azure Data Lake Analytics
C. Azure HDInsight with Spark MLlib
D. Azure Machine Learning Studio
Correct Answer: A
Section: (none)
Explanation
Explanation/Reference:
Explanation:
Azure Cognitive Services expand on Microsoft’s evolving portfolio of machine learning APIs and enable
developers to easily add cognitive features – such as emotion and video detection; facial, speech, and vision
recognition; and speech and language understanding – into their applications. The goal of Azure Cognitive
Services is to help developers create applications that can see, hear, speak, understand, and even begin to
reason. The catalog of services within Azure Cognitive Services can be categorized into five main pillars:
Vision, Speech, Language, Search, and Knowledge.
Reference:
https://docs.microsoft.com/en-us/azure/cognitive-services/welcome
QUESTION 7
You must store data in Azure Blob Storage to support Azure Machine Learning.
You need to transfer the data into Azure Blob Storage.
What are three possible ways to achieve the goal? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
A. Bulk Insert SQL Query
B. AzCopy
C. Python script
D. Azure Storage Explorer
E. Bulk Copy Program (BCP)
Correct Answer: BCD
Section: (none)
Explanation
Explanation/Reference:
Explanation:
You can move data to and from Azure Blob storage using several technologies (a Python sketch follows the list):
Azure Storage Explorer
AzCopy
Python
SSIS
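As a sketch of the Python option, using the azure-storage-blob package; the account URL, credential, container, and file names below are placeholders:

    # Minimal sketch: upload a local file to Azure Blob Storage.
    # All names (account, credential, container, file) are placeholders.
    from azure.storage.blob import BlobServiceClient

    service = BlobServiceClient(
        account_url="https://<account>.blob.core.windows.net",
        credential="<account-key-or-sas-token>",
    )
    container = service.get_container_client("training-data")

    with open("train.csv", "rb") as data:
        # overwrite=True replaces an existing blob of the same name
        container.upload_blob(name="train.csv", data=data, overwrite=True)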
Reference:
https://docs.microsoft.com/en-us/azure/machine-learning/team-data-science-process/move-azure-blob
QUESTION 8
You are moving a large dataset from Azure Machine Learning Studio to a Weka environment.
You need to format the data for the Weka environment.
Which module should you use?
A. Convert to CSV
B. Convert to Dataset
C. Convert to ARFF
D. Convert to SVMLight
Correct Answer: C
Section: (none)
Explanation
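ARFF (Attribute-Relation File Format) is Weka's native data format: a header that declares the relation and its attributes, followed by a @DATA section of rows. A minimal Python sketch of writing such a file by hand; the relation, attributes, and rows are invented for illustration:

    # Minimal sketch of the ARFF format Weka expects; all names/values
    # below are invented for illustration.
    rows = [("sunny", 85, "no"), ("rainy", 65, "yes")]

    with open("weather.arff", "w") as f:
        f.write("@RELATION weather\n\n")
        f.write("@ATTRIBUTE outlook {sunny, rainy}\n")
        f.write("@ATTRIBUTE temperature NUMERIC\n")
        f.write("@ATTRIBUTE play {yes, no}\n\n")
        f.write("@DATA\n")
        for outlook, temperature, play in rows:
            f.write(f"{outlook},{temperature},{play}\n")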
QUESTION 10
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might have
more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.
You train a classification model by using a logistic regression algorithm.
You must be able to explain the model’s predictions by calculating the importance of each feature, both as an
overall global relative importance value and as a measure of local importance for a specific set of predictions.
You need to create an explainer that you can use to retrieve the required global and local feature importance
values.
Solution: Create a MimicExplainer.
Does the solution meet the goal?
A. Yes
B. No
Correct Answer: B
Section: (none)
Explanation
Explanation/Reference:
Explanation:
Instead, use the Permutation Feature Importance (PFI) explainer.
Note 1: The Mimic explainer is based on the idea of training global surrogate models to mimic black-box models. A
global surrogate model is an intrinsically interpretable model that is trained to approximate the predictions of
any black-box model as accurately as possible. Data scientists can interpret the surrogate model to draw
conclusions about the black-box model.
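To make the surrogate idea concrete, a minimal scikit-learn sketch (this illustrates the technique, not the actual MimicExplainer API; it assumes a fitted black-box `model` and training features `x_train` already exist):

    # Sketch of a global surrogate: fit an interpretable model on the
    # black-box model's own predictions, then read its importances.
    from sklearn.tree import DecisionTreeClassifier

    surrogate = DecisionTreeClassifier(max_depth=3)
    surrogate.fit(x_train, model.predict(x_train))  # mimic the black box

    # Interpret the surrogate to draw conclusions about the black box.
    print(surrogate.feature_importances_)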
Note 2: Permutation Feature Importance Explainer (PFI): Permutation Feature Importance is a technique used
to explain classification and regression models. At a high level, the way it works is by randomly shuffling data
one feature at a time for the entire dataset and calculating how much the performance metric of interest
changes. The larger the change, the more important that feature is. PFI can explain the overall behavior of any
underlying model but does not explain individual predictions.
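A hedged sketch of PFI using the azureml-interpret SDK covered in the referenced article; it assumes the package is installed and that `model`, `x_test`, and `y_test` already exist:

    # Sketch: permutation feature importance via azureml-interpret.
    from interpret.ext.blackbox import PFIExplainer

    pfi = PFIExplainer(model)

    # PFI shuffles one feature at a time and measures the metric change,
    # so the result is a global importance ranking only.
    global_explanation = pfi.explain_global(x_test, true_labels=y_test)
    print(global_explanation.get_feature_importance_dict())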
Reference:
https://docs.microsoft.com/en-us/azure/machine-learning/how-to-machine-learning-interpretability
QUESTION 11
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might have
more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.
You train a classification model by using a logistic regression algorithm.
You must be able to explain the model’s predictions by calculating the importance of each feature, both as an
overall global relative importance value and as a measure of local importance for a specific set of predictions.
You need to create an explainer that you can use to retrieve the required global and local feature importance
values.
Solution: Create a TabularExplainer.
Does the solution meet the goal?
A. Yes
B. No
Correct Answer: B

 