
Free Microsoft DP-203 Exam Dumps

Here you can find all the free questions related to the Microsoft Data Engineering on Microsoft Azure (DP-203) exam. This page also links to recently updated premium files with which you can practice for the actual Microsoft Data Engineering on Microsoft Azure exam. The premium versions are provided as DP-203 exam practice tests, both as desktop software and as a browser-based application, so you can use whichever suits your style. Feel free to try the Data Engineering on Microsoft Azure premium files for free. Good luck with your Microsoft Data Engineering on Microsoft Azure exam!
Question No: 1

Hotspot

You have an Azure subscription that contains a logical Microsoft SQL server named Server1. Server1 hosts an Azure Synapse Analytics dedicated SQL pool named Pool1.

You need to recommend a Transparent Data Encryption (TDE) solution for Server1. The solution must meet the following requirements:

Track the usage of encryption keys.

Maintain the access of client apps to Pool1 in the event of an Azure datacenter outage that affects the availability of the encryption keys.

What should you include in the recommendation? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

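The answer area is not reproduced on this free page, but the two requirements point toward TDE with a customer-managed key: keeping the TDE protector in Azure Key Vault lets you audit key usage through Key Vault logging, and redundant key availability (for example, auto-rotation together with a geo-redundant vault) keeps Pool1 accessible during a datacenter outage. Below is a minimal sketch of the ARM-style payload for setting the server's TDE protector; the vault key name and field shape follow the Microsoft.Sql REST API as an assumption, not the graded answer.

```python
import json

# Hypothetical Key Vault key identifier, following the
# <vault>_<key>_<version> naming convention (illustrative only).
VAULT_KEY = "KeyVault1_TdeKey1_0123456789abcdef0123456789abcdef"

# Approximate body of a PUT against
# .../Microsoft.Sql/servers/Server1/encryptionProtector/current
encryption_protector = {
    "properties": {
        "serverKeyType": "AzureKeyVault",  # customer-managed key in Key Vault
        "serverKeyName": VAULT_KEY,        # placeholder key reference
        "autoRotationEnabled": True,       # pick up rotated key versions automatically
    }
}

print(json.dumps(encryption_protector, indent=2))
```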

Question No: 2

MultipleChoice

You have a Microsoft Entra tenant.

The tenant contains an Azure Data Lake Storage Gen2 account named storage1 that has two containers named fs1 and fs2. You have a Microsoft Entra group named DepartmentA.

You need to meet the following requirements:

* DepartmentA must be able to read, write, and list all the files in fs1.

* DepartmentA must be prevented from accessing any files in fs2.

* The solution must use the principle of least privilege.

Which role should you assign to DepartmentA?
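The answer options are not reproduced on this free page, but the least-privilege pattern for this scenario is a data-plane role scoped to the single container: Storage Blob Data Contributor grants read, write, and list on fs1 only, and grants nothing on fs2. Below is a sketch of how the container-level scope string is built; the subscription ID and resource group name are placeholders, not values from the question.

```python
# Hypothetical subscription and resource group; only storage1 and fs1
# come from the question itself.
subscription_id = "00000000-0000-0000-0000-000000000000"
resource_group = "rg1"

# A container-scoped assignment keeps DepartmentA out of fs2 (least privilege).
scope = (
    f"/subscriptions/{subscription_id}"
    f"/resourceGroups/{resource_group}"
    "/providers/Microsoft.Storage/storageAccounts/storage1"
    "/blobServices/default/containers/fs1"
)

# Equivalent Azure CLI call; note the role is the data-plane
# Storage Blob Data Contributor role, not a management-plane Contributor role.
print(
    "az role assignment create "
    "--assignee-object-id <DepartmentA-object-id> "
    '--role "Storage Blob Data Contributor" '
    f'--scope "{scope}"'
)
```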

Question No: 3

MultipleChoice

You have an Azure Data Lake Storage account that contains a staging zone.

You need to design a daily process to ingest incremental data from the staging zone, transform the data by executing an R script, and then insert the transformed data into a data warehouse in Azure Synapse Analytics.

Solution: You use an Azure Data Factory schedule trigger to execute a pipeline that executes an Azure Databricks notebook, and then inserts the data into the data warehouse.

Does this meet the goal?
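For context, this is one of a series of "does this meet the goal?" variants on the same scenario. A Databricks notebook can execute R code, so a schedule trigger that runs a notebook activity and then loads the warehouse is at least a workable shape for the stated process. Below is a minimal sketch of such a pipeline definition, written as a Python dict mirroring the Data Factory JSON schema; the notebook path and linked-service name are assumptions, not part of the question.

```python
import json

# Sketch of an ADF pipeline: a Databricks Notebook activity (which can run
# R cells) followed by a Copy activity into Azure Synapse Analytics.
pipeline = {
    "name": "DailyStagingToWarehouse",
    "properties": {
        "activities": [
            {
                "name": "TransformWithR",
                "type": "DatabricksNotebook",
                "linkedServiceName": {
                    "referenceName": "<DatabricksLinkedService>",  # placeholder
                    "type": "LinkedServiceReference",
                },
                "typeProperties": {"notebookPath": "/staging/transform_r"},
            },
            {
                "name": "LoadWarehouse",
                "type": "Copy",
                "dependsOn": [
                    {"activity": "TransformWithR", "dependencyConditions": ["Succeeded"]}
                ],
                "typeProperties": {
                    "source": {"type": "ParquetSource"},
                    "sink": {"type": "SqlDWSink"},  # Synapse dedicated pool sink
                },
            },
        ],
    },
}

print(json.dumps(pipeline, indent=2))
```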

Question No: 4

MultipleChoice

You have an Azure Stream Analytics job.

You need to ensure that the job has enough streaming units provisioned.

You configure monitoring of the SU % Utilization metric.

Which two additional metrics should you monitor? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.
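The option list is not shown here, but the metrics commonly paired with SU % Utilization for capacity planning are Backlogged Input Events and Watermark Delay: a growing backlog or a rising watermark delay indicates the job cannot keep up with its input even before SU utilization saturates. Below is a hedged sketch of pulling those metrics with the azure-monitor-query package; the resource ID is a placeholder, and the exact metric names are assumptions to verify against your job in the portal.

```python
from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient

# Placeholder resource ID for the Stream Analytics job (not from the question).
job_id = (
    "/subscriptions/<sub>/resourceGroups/<rg>"
    "/providers/Microsoft.StreamAnalytics/streamingjobs/<job>"
)

client = MetricsQueryClient(DefaultAzureCredential())

# Metric names as assumed to be exposed by Azure Monitor;
# confirm them in the portal's metric picker.
response = client.query_resource(
    job_id,
    metric_names=[
        "ResourceUtilization",            # SU % Utilization
        "InputEventsSourcesBacklogged",   # Backlogged Input Events
        "OutputWatermarkDelaySeconds",    # Watermark Delay
    ],
)

for metric in response.metrics:
    points = [point for ts in metric.timeseries for point in ts.data]
    print(metric.name, points)
```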

Question No: 5

MultipleChoice

You have an Azure Data Factory pipeline that is triggered hourly.

The pipeline has had 100% success for the past seven days.

The pipeline execution fails, and two retries that occur 15 minutes apart also fail. The third failure returns the following error.

What is a possible cause of the error?

A. From 06:00 to 07:00 on January 10, 2021, there was no data in w1/bikes/CARBON.

Question No: 6

MultipleChoice

You have an Azure Synapse Analytics dedicated SQL pool named Pool1 and an Azure Data Lake Storage Gen2 account named Account1.

You plan to access the files in Account1 by using an external table.

You need to create a data source in Pool1 that you can reference when you create the external table.

How should you complete the Transact-SQL statement? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
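The answer area is omitted here, but the statement being completed is the standard CREATE EXTERNAL DATA SOURCE for a dedicated SQL pool pointing at ADLS Gen2 (TYPE = HADOOP with an abfss:// location). Below is a sketch of that T-SQL, embedded in a Python string so it could be executed against Pool1 with pyodbc; the container, credential name, and connection string are placeholders, not values from the question.

```python
import pyodbc

# T-SQL for a dedicated SQL pool external data source over ADLS Gen2.
# TYPE = HADOOP and an abfss:// LOCATION are the dedicated-pool pattern;
# the credential name is a placeholder.
DDL = """
CREATE EXTERNAL DATA SOURCE AzureDataLakeStore
WITH (
    TYPE = HADOOP,
    LOCATION = 'abfss://<container>@account1.dfs.core.windows.net',
    CREDENTIAL = ADLS_Credential
);
"""

# Hypothetical connection string to Pool1.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=<server>.sql.azuresynapse.net;DATABASE=Pool1;"
    "UID=<user>;PWD=<password>"
)
conn.cursor().execute(DDL)
conn.commit()
```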

Question No: 7

MultipleChoice

You have an Azure subscription that contains an Azure SQL database named DB1 and a storage account named storage1. The storage1 account contains a file named File1.txt. File1.txt contains the names of selected tables in DB1.

You need to use an Azure Synapse pipeline to copy data from the selected tables in DB1 to the files in storage1. The solution must meet the following requirements:

* The Copy activity in the pipeline must be parameterized to use the data in File1.txt to identify the source and destination of the copy.

* Copy activities must occur in parallel as often as possible.

Which two pipeline activities should you include in the pipeline? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.
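The options are not listed on this free page, but the requirements describe the classic Lookup-plus-ForEach pattern: a Lookup activity reads the table names from File1.txt, and a ForEach activity with sequential execution disabled fans the parameterized Copy activity out in parallel. Below is a sketch of that pipeline shape as a Python dict mirroring the Synapse/ADF JSON schema; the dataset names are placeholders.

```python
import json

pipeline = {
    "name": "CopySelectedTables",
    "properties": {
        "activities": [
            {
                "name": "ReadTableList",
                "type": "Lookup",  # reads File1.txt from storage1
                "typeProperties": {
                    "source": {"type": "DelimitedTextSource"},
                    "dataset": {"referenceName": "<File1Dataset>", "type": "DatasetReference"},
                    "firstRowOnly": False,  # return every table name, not just the first
                },
            },
            {
                "name": "CopyEachTable",
                "type": "ForEach",
                "dependsOn": [
                    {"activity": "ReadTableList", "dependencyConditions": ["Succeeded"]}
                ],
                "typeProperties": {
                    "isSequential": False,  # run Copy iterations in parallel
                    "items": {
                        "value": "@activity('ReadTableList').output.value",
                        "type": "Expression",
                    },
                    "activities": [
                        {
                            "name": "CopyTable",
                            "type": "Copy",
                            # source/sink datasets would be parameterized with @item()
                            "typeProperties": {
                                "source": {"type": "AzureSqlSource"},
                                "sink": {"type": "DelimitedTextSink"},
                            },
                        }
                    ],
                },
            },
        ],
    },
}

print(json.dumps(pipeline, indent=2))
```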


Question No: 8

Hotspot

Which Azure Data Factory components should you recommend using together to import the daily inventory data from the SQL server to Azure Data Lake Storage? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Box 1: Self-hosted integration runtime

A self-hosted IR is capable of running a copy activity between cloud data stores and a data store in a private network.

Box 2: Schedule trigger

Schedule every 8 hours

Box 3: Copy activity

Scenario:

Customer data, including name, contact information, and loyalty number, comes from Salesforce and can be imported into Azure once every eight hours. Row modified dates are not trusted in the source table.

Product data, including product ID, name, and category, comes from Salesforce and can be imported into Azure once every eight hours. Row modified dates are not trusted in the source table.
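To make the three boxes concrete, here is a sketch of the schedule-trigger definition, again as a Python dict mirroring the Data Factory JSON: an 8-hour recurrence invoking a pipeline whose Copy activity runs on the self-hosted integration runtime that can reach the on-premises SQL Server. The trigger, pipeline, and start-time values are placeholders.

```python
import json

# Schedule trigger firing every eight hours, matching the scenario's
# import cadence. The pipeline name and start time are illustrative.
trigger = {
    "name": "Every8Hours",
    "properties": {
        "type": "ScheduleTrigger",
        "typeProperties": {
            "recurrence": {
                "frequency": "Hour",
                "interval": 8,
                "startTime": "2024-01-01T00:00:00Z",
            }
        },
        "pipelines": [
            {
                "pipelineReference": {
                    "referenceName": "<ImportInventoryPipeline>",
                    "type": "PipelineReference",
                }
            }
        ],
    },
}

print(json.dumps(trigger, indent=2))
```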

Question No: 9

Hotspot

You need to implement an Azure Databricks cluster that automatically connects to Azure Data Lake Storage Gen2 by using Azure Active Directory (Azure AD) integration. How should you configure the new cluster? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

https://docs.azuredatabricks.net/spark/latest/data-sources/azure/adls-passthrough.html
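Per the linked passthrough documentation, the key cluster setting is enabling Azure AD credential passthrough in the Spark configuration; a high-concurrency cluster supports passthrough for multiple users, while a standard cluster limits it to a single named user. Below is a sketch of a cluster spec for the Databricks Clusters API with that flag set; the runtime version and node type are placeholders.

```python
import json

# Cluster spec enabling ADLS credential passthrough.
# spark.databricks.passthrough.enabled comes from the linked doc;
# the other values are illustrative placeholders.
cluster_spec = {
    "cluster_name": "adls-passthrough",
    "spark_version": "<runtime-version>",
    "node_type_id": "<node-type>",
    "num_workers": 2,
    "spark_conf": {
        "spark.databricks.passthrough.enabled": "true",
    },
    # On a high-concurrency cluster, each signed-in user's own Azure AD
    # credentials are passed through to ADLS Gen2.
}

print(json.dumps(cluster_spec, indent=2))
```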

