
Free Microsoft DP-203 Exam Dumps

Here you can find all the free questions related to the Microsoft Data Engineering on Microsoft Azure (DP-203) exam. This page also links to recently updated premium files with which you can practice for the actual Data Engineering on Microsoft Azure exam. The premium versions are provided as DP-203 practice tests, available both as desktop software and as a browser-based application, so you can use whichever suits your style. Feel free to try the Data Engineering on Microsoft Azure premium files for free. Good luck with your Microsoft Data Engineering on Microsoft Azure exam.
Question No: 1

MultipleChoice

You have an Azure Data Lake Storage account that contains a staging zone.

You need to design a daily process to ingest incremental data from the staging zone, transform the data by executing an R script, and then insert the transformed data into a data warehouse in Azure Synapse Analytics.

Solution: You use an Azure Data Factory schedule trigger to execute a pipeline that executes an Azure Databricks notebook, and then inserts the data into the data warehouse.

Does this meet the goal?

Options
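For orientation, here is a minimal sketch (not the graded answer) of the two Data Factory pieces the proposed solution describes: a daily schedule trigger that runs a pipeline containing a Databricks notebook activity. The pipeline name, linked service name, and notebook path are hypothetical placeholders.

```python
# Raw ADF JSON expressed as Python dicts; names are illustrative only.
pipeline = {
    "name": "IngestDaily",  # hypothetical pipeline name
    "properties": {
        "activities": [
            {
                "name": "TransformWithR",
                "type": "DatabricksNotebook",
                "linkedServiceName": {
                    "referenceName": "AzureDatabricksLS",  # hypothetical linked service
                    "type": "LinkedServiceReference",
                },
                # Databricks notebooks can contain R cells, which is why this
                # pattern can cover the "execute an R script" step.
                "typeProperties": {"notebookPath": "/staging/transform_daily"},
            }
        ]
    },
}

trigger = {
    "name": "DailyTrigger",
    "properties": {
        "type": "ScheduleTrigger",
        "typeProperties": {
            "recurrence": {
                "frequency": "Day",
                "interval": 1,  # once per day, per the stated process
                "startTime": "2024-01-01T02:00:00Z",
            }
        },
        "pipelines": [
            {"pipelineReference": {"referenceName": "IngestDaily",
                                   "type": "PipelineReference"}}
        ],
    },
}
```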
Question No: 2

MultipleChoice

You have an Azure Stream Analytics job.

You need to ensure that the job has enough streaming units provisioned.

You configure monitoring of the SU % Utilization metric.

Which two additional metrics should you monitor? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.

Options
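As context, this is a hedged sketch of pulling the relevant Stream Analytics metrics with the azure-monitor-query package. The resource ID is a placeholder, and the metric names follow the documented Stream Analytics metric set; watermark delay and backlogged input events are the usual companions to SU % utilization when judging streaming-unit capacity.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient

client = MetricsQueryClient(DefaultAzureCredential())

# Placeholder resource ID for the Stream Analytics job.
resource_id = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>"
    "/providers/Microsoft.StreamAnalytics/streamingjobs/<job-name>"
)

response = client.query_resource(
    resource_id,
    metric_names=[
        "ResourceUtilization",           # SU (memory) % utilization
        "OutputWatermarkDelaySeconds",   # watermark delay
        "InputEventsSourcesBacklogged",  # backlogged input events
    ],
    timespan=timedelta(hours=1),
)

for metric in response.metrics:
    values = [p.average for ts in metric.timeseries for p in ts.data]
    print(metric.name, values)
```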
Question No: 3

MultipleChoice

You have an Azure Data Factory pipeline that is triggered hourly.

The pipeline has had 100% success for the past seven days.

The pipeline execution fails, and two retries that occur 15 minutes apart also fail. The third failure returns the following error.

What is a possible cause of the error?

A. From 06:00 to 07:00 on January 10, 2021, there was no data in w1/bikes/CARBON.

Options
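Not the exam answer itself, but a hedged illustration of how you might verify the hypothesis in option A: list the staging folder for the failed hour and confirm whether any files exist. The account name and container are placeholders; the path is taken from the option text.

```python
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://<account>.dfs.core.windows.net",  # placeholder
    credential=DefaultAzureCredential(),
)
fs = service.get_file_system_client("<container>")  # placeholder

# get_paths raises if the directory does not exist and yields nothing if
# it is empty; either condition would explain a "no data" pipeline failure.
paths = list(fs.get_paths(path="w1/bikes/CARBON"))
print(f"{len(paths)} path(s) found for the failed window")
```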
Question No: 4

MultipleChoice

You have an Azure Synapse Analytics dedicated SQL pool named Pool1 and an Azure Data Lake Storage Gen2 account named Account1.

You plan to access the files in Account1 by using an external table.

You need to create a data source in Pool1 that you can reference when you create the external table.

How should you complete the Transact-SQL statement? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Options
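As a hedged sketch, not the graded answer, this shows the general shape of the Transact-SQL the answer area asks you to complete: a CREATE EXTERNAL DATA SOURCE statement pointing at an ADLS Gen2 account, executed here through pyodbc. The connection string, data source name, and abfss URL are placeholders.

```python
import pyodbc

# Placeholder connection to the dedicated SQL pool.
conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<workspace>.sql.azuresynapse.net;Database=Pool1;"
    "Authentication=ActiveDirectoryInteractive;"
)

create_source = """
CREATE EXTERNAL DATA SOURCE AzureDataLakeStore
WITH (
    LOCATION = 'abfss://<filesystem>@<account>.dfs.core.windows.net',
    TYPE = HADOOP  -- dedicated SQL pools use Hadoop-style external tables
);
"""

cursor = conn.cursor()
cursor.execute(create_source)
conn.commit()
```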
Question No: 5

MultipleChoice

You have an Azure subscription that contains an Azure SQL database named DB1 and a storage account named storage1. The storage1 account contains a file named File1.txt. File1.txt contains the names of selected tables in DB1.

You need to use an Azure Synapse pipeline to copy data from the selected tables in DB1 to the files in storage1. The solution must meet the following requirements:

* The Copy activity in the pipeline must be parameterized to use the data in File1.txt to identify the source and destination of the copy.

* Copy activities must occur in parallel as often as possible.

Which two pipeline activities should you include in the pipeline? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.

Options
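A hedged outline of the common pattern this question points at, written as raw pipeline JSON in a Python dict: a Lookup activity reads File1.txt, and a ForEach activity with isSequential set to False fans the table list out to parallel Copy activities. Dataset and activity names are hypothetical, and the inner Copy activity's parameterized source and sink are elided.

```python
pipeline = {
    "name": "CopySelectedTables",  # hypothetical name
    "properties": {
        "activities": [
            {
                "name": "ReadTableList",
                "type": "Lookup",
                "typeProperties": {
                    "source": {"type": "DelimitedTextSource"},
                    "dataset": {"referenceName": "File1TxtDataset",
                                "type": "DatasetReference"},
                    "firstRowOnly": False,  # return every table name
                },
            },
            {
                "name": "CopyEachTable",
                "type": "ForEach",
                "dependsOn": [{"activity": "ReadTableList",
                               "dependencyConditions": ["Succeeded"]}],
                "typeProperties": {
                    "items": {"value": "@activity('ReadTableList').output.value",
                              "type": "Expression"},
                    "isSequential": False,  # run iterations in parallel
                    "activities": [
                        # Parameterized Copy; source/sink settings elided.
                        {"name": "CopyTable", "type": "Copy",
                         "typeProperties": {}}
                    ],
                },
            },
        ]
    },
}
```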
Question No: 6

Hotspot

You have an Azure subscription that contains a logical Microsoft SQL server named Server1. Server1 hosts an Azure Synapse Analytics dedicated SQL pool named Pool1.

You need to recommend a Transparent Data Encryption (TDE) solution for Server1. The solution must meet the following requirements:

* Track the usage of encryption keys.

* Maintain the access of client apps to Pool1 in the event of an Azure datacenter outage that affects the availability of the encryption keys.

What should you include in the recommendation? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
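Without giving away the answer area, the "track the usage of encryption keys" requirement implies a customer-managed TDE protector held in Azure Key Vault, whose diagnostic logs record every key operation; surviving a datacenter outage then comes down to keeping that key available (for example, via redundant vaults). Below is a hedged sketch of the key-creation piece only; the vault URL and key name are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

client = KeyClient(
    vault_url="https://<vault-name>.vault.azure.net",  # placeholder vault
    credential=DefaultAzureCredential(),
)

# An RSA key that server-level TDE can use as its protector; Key Vault's
# diagnostic logs then capture wrap/unwrap calls, i.e. key usage.
tde_key = client.create_rsa_key("tde-protector", size=2048)
print(tde_key.id)
```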

Question No: 7

Hotspot

Which Azure Data Factory components should you recommend using together to import the daily inventory data from the SQL server to Azure Data Lake Storage? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Box 1: Self-hosted integration runtime

A self-hosted IR can run copy activities between cloud data stores and a data store in a private network.

Box 2: Schedule trigger

The trigger is scheduled to run every eight hours, matching the import cadence in the scenario.

Box 3: Copy activity

Scenario:

Customer data, including name, contact information, and loyalty number, comes from Salesforce and can be imported into Azure once every eight hours. Row modified dates are not trusted in the source table.

Product data, including product ID, name, and category, comes from Salesforce and can be imported into Azure once every eight hours. Row modified dates are not trusted in the source table.
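To make Box 2 concrete, here is a hedged sketch of an eight-hour schedule trigger in raw ADF trigger JSON, expressed as a Python dict. The trigger and pipeline names are hypothetical; the copy activity that the pipeline runs would use the self-hosted integration runtime from Box 1 to reach the SQL server.

```python
trigger = {
    "name": "Every8HoursTrigger",  # hypothetical name
    "properties": {
        "type": "ScheduleTrigger",
        "typeProperties": {
            "recurrence": {
                "frequency": "Hour",
                "interval": 8,  # fires three times a day
                "startTime": "2024-01-01T00:00:00Z",
                "timeZone": "UTC",
            }
        },
        "pipelines": [
            {"pipelineReference": {"referenceName": "ImportDailyInventory",
                                   "type": "PipelineReference"}}
        ],
    },
}
```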

Question No: 8

Hotspot

You need to implement an Azure Databricks cluster that automatically connects to Azure Data Lake Storage Gen2 by using Azure Active Directory (Azure AD) integration. How should you configure the new cluster? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

https://docs.azuredatabricks.net/spark/latest/data-sources/azure/adls-passthrough.html
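Following the linked passthrough documentation, here is a hedged sketch of the cluster definition shaped for the Databricks Clusters REST API. The essential piece is the Spark configuration flag that enables Azure AD credential passthrough; the cluster name, runtime version, and node sizes are placeholders.

```python
cluster_spec = {
    "cluster_name": "adls-passthrough-cluster",  # placeholder
    "spark_version": "13.3.x-scala2.12",         # placeholder runtime
    "node_type_id": "Standard_DS3_v2",           # placeholder node size
    "num_workers": 2,
    "spark_conf": {
        # From the ADLS passthrough docs: forwards each user's Azure AD
        # identity to ADLS Gen2 instead of using a shared credential.
        "spark.databricks.passthrough.enabled": "true",
    },
}
```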

Question No: 9

Hotspot

You plan to create an Azure Data Lake Storage Gen2 account.

You need to recommend a storage solution that meets the following requirements:

* Provides the highest degree of data resiliency.

* Ensures that content remains available for writes if a primary data center fails.

What should you include in the recommendation? To answer, select the appropriate options in the answer area.

https://docs.microsoft.com/en-us/azure/storage/common/storage-disaster-recovery-guidance?toc=/azure/storage/blobs/toc.json

https://docs.microsoft.com/en-us/answers/questions/32583/azure-data-lake-gen2-disaster-recoverystorage-acco.html
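As a hedged sketch under the assumption that the requirements point toward a zone- and geo-redundant SKU (RA-GZRS is the natural reading of "highest degree of data resiliency", while zone redundancy keeps writes available through a data-center failure), this creates such an account with azure-mgmt-storage. The subscription, resource group, account name, and region are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.storage_accounts.begin_create(
    "<resource-group>",
    "<account-name>",
    {
        "location": "eastus2",                 # placeholder region
        "kind": "StorageV2",
        "sku": {"name": "Standard_RAGZRS"},    # read-access geo-zone-redundant
        "is_hns_enabled": True,                # hierarchical namespace = ADLS Gen2
    },
)
print(poller.result().provisioning_state)
```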

Question No: 10

Hotspot

You have an Azure Synapse Analytics dedicated SQL pool that contains the users shown in the following table.

User1 executes a query on the database, and the query returns the results shown in the following exhibit.

User1 is the only user who has access to the unmasked data.

Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic.

NOTE: Each correct selection is worth one point.

Box 1: 0

The YearlyIncome column is of the money data type.

The default masking function performs full masking according to the data types of the designated fields.

It uses a zero value for numeric data types (bigint, bit, decimal, int, money, numeric, smallint, smallmoney, tinyint, float, real).

Box 2: the values stored in the database

Users with administrator privileges are always excluded from masking, and see the original data without any mask.
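To tie the two boxes together, here is a hedged illustration of the default() dynamic data masking rule the explanation refers to: on a money column it yields 0 for non-privileged users, while administrators and users granted UNMASK see the stored values. The table and column names are hypothetical, as is the connection string.

```python
import pyodbc

# Placeholder connection to the dedicated SQL pool.
conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<workspace>.sql.azuresynapse.net;Database=<pool>;"
    "Authentication=ActiveDirectoryInteractive;"
)

# default() on a money column masks it to 0 for non-privileged users.
mask_column = """
ALTER TABLE dbo.DimCustomer
ALTER COLUMN YearlyIncome ADD MASKED WITH (FUNCTION = 'default()');
"""

cursor = conn.cursor()
cursor.execute(mask_column)
conn.commit()
```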

