Welcome to Pass4Success


Microsoft DP-200 Exam

Certification Provider: Microsoft
Exam Name: Implementing an Azure Data Solution
Duration: 120 Minutes
Number of questions in our database: 243
Exam Version: Jun. 07, 2021
DP-200 Exam Official Topics:
  • Topic 1: Responsibility for data-related tasks that include Ingesting, Egressing, and Transforming Data
  • Topic 2: Multiple Sources Using Various Services and Tools
  • Topic 3: Azure Data Engineer Collaborates With Business Stakeholders
  • Topic 4: Identify and Meet Data Requirements While Designing and Implementing the Management
  • Topic 5: Monitoring Security and Privacy of Data Using the Full Stack of Azure Services

Free Microsoft DP-200 Exam Actual Questions

The questions for DP-200 were last updated on Jun. 07, 2021

Question #1

Use the following login credentials as needed:

Azure Username: xxxxx

Azure Password: xxxxx

The following information is for technical support purposes only:

Lab Instance: 10543936

You need to ensure that you can recover any blob data from an Azure Storage account named storage10543936 up to 10 days after the data is deleted.

To complete this task, sign in to the Azure portal.
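In the portal this maps to enabling blob soft delete on the storage account with a 10-day retention period. As a rough equivalent sketch (assuming an already-authenticated Azure CLI session with access to the lab subscription), the same setting can be applied from the command line:

```shell
# Sketch only: enable blob soft delete with a 10-day retention window
# on the storage account named in the task. Assumes `az login` has
# already been run against the lab subscription.
az storage blob service-properties delete-policy update \
    --account-name storage10543936 \
    --enable true \
    --days-retained 10
```

With soft delete enabled, deleted blobs remain recoverable for the configured number of days before being permanently removed.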

Question #2

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this scenario, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have a container named Sales in an Azure Cosmos DB database. Sales has 120 GB of data. Each entry in Sales has the following structure.

The partition key is set to the OrderId attribute.

Users report that when they perform queries that retrieve data by ProductName, the queries take longer than expected to complete.

You need to reduce the amount of time it takes to execute the problematic queries.

Solution: You increase the Request Units (RUs) for the database.

Does this meet the goal?

Correct Answer: A

To scale the provisioned throughput for your application, you can increase or decrease the number of RUs at any time.

Note: The cost of all database operations is normalized by Azure Cosmos DB and is expressed by Request Units (or RUs, for short). You can think of RUs per second as the currency for throughput. RUs per second is a rate-based currency. It abstracts the system resources such as CPU, IOPS, and memory that are required to perform the database operations supported by Azure Cosmos DB.


https://docs.microsoft.com/en-us/azure/cosmos-db/request-units
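The "rate-based currency" idea can be illustrated with a toy Python model (this is not the Cosmos DB SDK; the function and numbers are illustrative only): when requests in a given second cost more RUs than are provisioned, the excess requests are throttled, so raising the provisioned RU/s lets the same workload complete without throttling.

```python
# Toy model of Request Units as a rate-based throughput budget.
# Not the Cosmos DB SDK -- names and costs are illustrative.

def throttled_requests(provisioned_rus, request_costs):
    """Count how many requests in one second would be rate-limited."""
    spent = 0
    throttled = 0
    for cost in request_costs:
        if spent + cost <= provisioned_rus:
            spent += cost      # budget still available this second
        else:
            throttled += 1     # over budget: request would get HTTP 429
    return throttled

# Five reads at 10 RU each: doubling the provisioned RUs removes throttling.
costs = [10, 10, 10, 10, 10]
print(throttled_requests(30, costs))  # 2 requests throttled at 30 RU/s
print(throttled_requests(60, costs))  # 0 throttled at 60 RU/s
```

This is why increasing RUs meets the goal here, even though the root cause (queries on ProductName crossing partitions keyed by OrderId) remains.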

Question #3

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this scenario, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You are developing a solution that will use Azure Stream Analytics. The solution will accept an Azure Blob storage file named Customers. The file will contain both in-store and online customer details. The online customers will provide a mailing address.

You have a file in Blob storage named LocationIncomes that contains median incomes based on location. The file rarely changes.

You need to use an address to look up a median income based on location. You must output the data to Azure SQL Database for immediate use and to Azure Data Lake Storage Gen2 for long-term retention.

Solution: You implement a Stream Analytics job that has one streaming input, one reference input, two queries, and four outputs.

Does this meet the goal?

Correct Answer: A

We need one reference data input for LocationIncomes, which rarely changes.

We need two queries: one for in-store customers and one for online customers.

Each query needs two outputs (Azure SQL Database and Azure Data Lake Storage Gen2), for four outputs in total.

Note: Stream Analytics also supports input known as reference data. Reference data is either completely static or changes slowly.

References:

https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-add-inputs#stream-and-reference-inputs

https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-define-outputs
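The job shape can be sketched as a toy Python simulation (illustrative record fields and values, not the Stream Analytics query language): one streaming input, one reference input, two queries, and four outputs.

```python
# Toy sketch of the job shape: 1 streaming input, 1 reference input,
# 2 queries, 4 outputs. Field names and values are made up for illustration.

# Reference input: rarely-changing median incomes keyed by location.
location_incomes = {"98052": 91000, "10001": 68000}

# Streaming input: the Customers file mixes in-store and online records;
# only online customers carry a mailing address (here, a ZIP code).
customers = [
    {"name": "Ada", "channel": "online", "zip": "98052"},
    {"name": "Grace", "channel": "in-store"},
]

# Query 1: online customers, enriched by joining against the reference data.
online = [
    {**c, "median_income": location_incomes.get(c["zip"])}
    for c in customers if c["channel"] == "online"
]

# Query 2: in-store customers pass through without the income lookup.
in_store = [c for c in customers if c["channel"] == "in-store"]

# Four outputs: each query writes to both sinks -- Azure SQL Database for
# immediate use and Data Lake Storage Gen2 for retention.
outputs = {
    "online_to_sql": online, "online_to_datalake": online,
    "instore_to_sql": in_store, "instore_to_datalake": in_store,
}
print(len(outputs))  # 4
```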


Question #4

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this scenario, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You plan to create an Azure Databricks workspace that has a tiered structure. The workspace will contain the following three workloads:

A workload for data engineers who will use Python and SQL

A workload for jobs that will run notebooks that use Python, Spark, Scala, and SQL

A workload that data scientists will use to perform ad hoc analysis in Scala and R

The enterprise architecture team at your company identifies the following standards for Databricks environments:

The data engineers must share a cluster.

The job cluster will be managed by using a request process whereby data scientists and data engineers provide packaged notebooks for deployment to the cluster.

All the data scientists must be assigned their own cluster that terminates automatically after 120 minutes of inactivity. Currently, there are three data scientists.

You need to create the Databricks clusters for the workloads.

Solution: You create a Standard cluster for each data scientist, a Standard cluster for the data engineers, and a High Concurrency cluster for the jobs.

Does this meet the goal?

Correct Answer: B

We need a High Concurrency cluster for the data engineers and the jobs.

Note:

Standard clusters are recommended for a single user. Standard clusters can run workloads developed in any language: Python, R, Scala, and SQL.

A high concurrency cluster is a managed cloud resource. The key benefits of high concurrency clusters are that they provide Apache Spark-native fine-grained sharing for maximum resource utilization and minimum query latencies.

References:

https://docs.azuredatabricks.net/clusters/configure.html
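A compliant layout can be sketched as plain Python data (field names loosely echo Databricks cluster settings but are illustrative, not an exact API payload): the shared engineer cluster and the job cluster use High Concurrency mode, while each of the three data scientists gets a Standard cluster that auto-terminates after 120 minutes.

```python
# Sketch of a layout meeting the stated standards. Field names are
# illustrative, not an exact Databricks cluster API payload.

def scientist_cluster(name):
    # One Standard cluster per data scientist, auto-terminating after
    # 120 minutes of inactivity, per the enterprise standards.
    return {"cluster_name": name, "mode": "Standard",
            "autotermination_minutes": 120}

clusters = [scientist_cluster("ds-%d" % i) for i in range(1, 4)]  # 3 scientists

# Shared and job clusters must be High Concurrency, which is why the
# proposed Standard cluster for the data engineers fails the standards.
clusters.append({"cluster_name": "engineers", "mode": "High Concurrency",
                 "autotermination_minutes": 0})
clusters.append({"cluster_name": "jobs", "mode": "High Concurrency",
                 "autotermination_minutes": 0})

print(sum(c["mode"] == "Standard" for c in clusters))  # 3
```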



Question #5

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this scenario, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You are developing a solution that will use Azure Stream Analytics. The solution will accept an Azure Blob storage file named Customers. The file will contain both in-store and online customer details. The online customers will provide a mailing address.

You have a file in Blob storage named LocationIncomes that contains median incomes based on location. The file rarely changes.

You need to use an address to look up a median income based on location. You must output the data to Azure SQL Database for immediate use and to Azure Data Lake Storage Gen2 for long-term retention.

Solution: You implement a Stream Analytics job that has two streaming inputs, one query, and two outputs.

Does this meet the goal?

Correct Answer: B

We need one reference data input for LocationIncomes, which rarely changes, not a second streaming input.

Note: Stream Analytics also supports input known as reference data. Reference data is either completely static or changes slowly.


https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-add-inputs#stream-and-reference-inputs

