Free Practice Test: Data Engineering on Microsoft Azure (Exam DP-203)

Page: 1 / 69
Total Questions: 341
  • You need to schedule an Azure Data Factory pipeline to execute when a new file arrives in an Azure Data Lake Storage Gen2 container. Which type of trigger should you use?

    Answer: D
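For reference on the trigger type the question asks about: a storage event trigger (type `BlobEventsTrigger`) fires when a blob is created in the monitored container. The sketch below is illustrative only — the trigger name, pipeline name, container path, and the bracketed scope placeholders are invented for the example, not taken from the exam:

```json
{
  "name": "NewFileTrigger",
  "properties": {
    "type": "BlobEventsTrigger",
    "typeProperties": {
      "blobPathBeginsWith": "/container1/blobs/",
      "ignoreEmptyBlobs": true,
      "events": [ "Microsoft.Storage.BlobCreated" ],
      "scope": "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<adls-gen2-account>"
    },
    "pipelines": [
      {
        "pipelineReference": {
          "referenceName": "pipeline1",
          "type": "PipelineReference"
        }
      }
    ]
  }
}
```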
  • You have a C# application that processes data from an Azure IoT hub and performs complex transformations. You need to replace the application with a real-time solution. The solution must reuse as much code as possible from the existing application.

    Answer: C
  • You have an Azure Data Factory version 2 (V2) resource named Df1. Df1 contains a linked service. You have an Azure Key Vault named vault1 that contains an encryption key named key1. You need to encrypt Df1 by using key1. What should you do first?

    Answer: A
  • You have an Azure data factory that connects to a Microsoft Purview account. The data factory is registered in Microsoft Purview. You update a Data Factory pipeline. You need to ensure that the updated lineage is available in Microsoft Purview. What should you do?
  • You have an Azure subscription that contains an Azure SQL database named DB1 and a storage account named storage1. The storage1 account contains a file named File1.txt. File1.txt contains the names of selected tables in DB1. You need to use an Azure Synapse pipeline to copy data from the selected tables in DB1 to the files in storage1. The solution must meet the following requirements:
    * The Copy activity in the pipeline must be parameterized to use the data in File1.txt to identify the source and destination of the copy.
    * Copy activities must occur in parallel as often as possible.
    Which two pipeline activities should you include in the pipeline? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

    Answer: A, D
  • Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You have an Azure Data Lake Storage account that contains a staging zone. You need to design a daily process to ingest incremental data from the staging zone, transform the data by executing an R script, and then insert the transformed data into a data warehouse in Azure Synapse Analytics. Solution: You use an Azure Data Factory schedule trigger to execute a pipeline that executes a mapping data flow, and then inserts the data into the data warehouse. Does this meet the goal?

    Answer: A
  • You are creating an Azure Data Factory data flow that will ingest data from a CSV file, cast columns to specified data types, and insert the data into a table in an Azure Synapse Analytics dedicated SQL pool. The CSV file contains columns named username, comment, and date. The data flow already contains the following:
    * A source transformation
    * A Derived Column transformation to set the appropriate data types
    * A sink transformation to land the data in the pool
    You need to ensure that the data flow meets the following requirements:
    * All valid rows must be written to the destination table.
    * Truncation errors in the comment column must be avoided proactively.
    * Any rows containing comment values that will cause truncation errors upon insert must be written to a file in blob storage.
    Which two actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

    Answer: B, D
  • You have an Azure Synapse Analytics dedicated SQL pool. You need to create a fact table named Table1 that will store sales data from the last three years. The solution must be optimized for the following query operations:
    * Show order counts by week.
    * Calculate sales totals by region.
    * Calculate sales totals by product.
    * Find all the orders from a given month.
    Which data should you use to partition Table1?

    Answer: D
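As background for the partitioning question: in a dedicated SQL pool, range-partitioning a fact table on a date key supports partition elimination for month lookups, while the weekly, regional, and product aggregates are served by the clustered columnstore index. A hedged sketch — all column names, the distribution key, and the boundary values below are invented for illustration:

```sql
-- Illustrative fact table partitioned on an order-date surrogate key (yyyymmdd).
-- Monthly boundaries are abbreviated; three years of data would use ~36 boundaries.
CREATE TABLE dbo.Table1
(
    OrderKey     BIGINT        NOT NULL,
    OrderDateKey INT           NOT NULL,
    RegionKey    INT           NOT NULL,
    ProductKey   INT           NOT NULL,
    SalesAmount  DECIMAL(19,4) NOT NULL
)
WITH
(
    DISTRIBUTION = HASH (OrderKey),
    CLUSTERED COLUMNSTORE INDEX,
    PARTITION ( OrderDateKey RANGE RIGHT FOR VALUES (20230101, 20230201, 20230301) )
);
```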
  • You have an Azure data factory connected to a Git repository that contains the following branches:
    * main: Collaboration branch
    * abc: Feature branch
    * xyz: Feature branch
    You save changes to a pipeline in the xyz branch. You need to publish the changes to the live service. What should you do first?

    Answer: D
  • Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You are designing an Azure Stream Analytics solution that will analyze Twitter data. You need to count the tweets in each 10-second window. The solution must ensure that each tweet is counted only once. Solution: You use a session window that uses a timeout size of 10 seconds. Does this meet the goal?

    Answer: A
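For contrast with the session window proposed in the question: a tumbling window is the windowing function whose fixed-size, non-overlapping windows guarantee each event is counted exactly once. A sketch of such a Stream Analytics query — the input name TwitterStream and the timestamp column CreatedAt are assumptions for the example:

```sql
-- Count tweets in fixed, non-overlapping 10-second windows.
SELECT
    COUNT(*)           AS TweetCount,
    System.Timestamp() AS WindowEnd
FROM TwitterStream TIMESTAMP BY CreatedAt
GROUP BY TumblingWindow(second, 10)
```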
  • You have an Azure Data Factory pipeline named pipeline1 that is invoked by a tumbling window trigger named Trigger1. Trigger1 has a recurrence of 60 minutes. You need to ensure that pipeline1 will execute only if the previous execution completes successfully. How should you configure the self-dependency for Trigger1?

    Answer: D
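The self-dependency the question asks about is declared in the trigger's JSON definition with a `SelfDependencyTumblingWindowTriggerReference`. A minimal sketch, assuming the 60-minute recurrence from the question (the start time is an invented placeholder; an offset of -01:00:00 with a matching size makes each window depend on the previous one):

```json
{
  "name": "Trigger1",
  "properties": {
    "type": "TumblingWindowTrigger",
    "typeProperties": {
      "frequency": "Minute",
      "interval": 60,
      "startTime": "2024-01-01T00:00:00Z",
      "dependsOn": [
        {
          "type": "SelfDependencyTumblingWindowTriggerReference",
          "offset": "-01:00:00",
          "size": "01:00:00"
        }
      ]
    },
    "pipeline": {
      "pipelineReference": {
        "referenceName": "pipeline1",
        "type": "PipelineReference"
      }
    }
  }
}
```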