
Microsoft Exam DP-203 Topic 1 Question 94 Discussion

Actual exam question for Microsoft's DP-203 exam
Question #: 94
Topic #: 1

You have two Azure Blob Storage accounts named account1 and account2.

You plan to create an Azure Data Factory pipeline that will run at scheduled intervals to replicate newly created or modified blobs from account1 to account2.

You need to recommend a solution to implement the pipeline. The solution must meet the following requirements:

* Ensure that the pipeline only copies blobs that were created or modified since the most recent replication event.

* Minimize the effort to create the pipeline.

What should you recommend?

Suggested Answer: A
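
The answer options are not reproduced here, but the discussion below centers on the Copy Data tool's built-in and metadata-driven copy tasks, which automate exactly this incremental pattern. As a rough illustration of the underlying idea only (not the Data Factory pipeline itself), here is a minimal Python sketch that copies just the blobs created or modified since the previous run; the connection strings, container name, and watermark file are hypothetical placeholders.

```python
# Sketch of the incremental-copy idea the Copy Data tool automates:
# copy only blobs whose last-modified time is newer than the previous
# run's watermark. All names and connection strings are placeholders.
from datetime import datetime, timezone

from azure.storage.blob import BlobServiceClient

SRC_CONN = "<account1-connection-string>"   # hypothetical
DST_CONN = "<account2-connection-string>"   # hypothetical
CONTAINER = "data"                          # hypothetical
WATERMARK_FILE = "last_run.txt"             # stores the previous run's timestamp


def load_watermark() -> datetime:
    """Return the timestamp of the most recent replication, or the epoch floor."""
    try:
        with open(WATERMARK_FILE) as f:
            return datetime.fromisoformat(f.read().strip())
    except FileNotFoundError:
        return datetime.min.replace(tzinfo=timezone.utc)


def replicate_new_blobs() -> None:
    src = BlobServiceClient.from_connection_string(SRC_CONN)
    dst = BlobServiceClient.from_connection_string(DST_CONN)
    src_container = src.get_container_client(CONTAINER)
    dst_container = dst.get_container_client(CONTAINER)

    watermark = load_watermark()
    run_started = datetime.now(timezone.utc)

    for blob in src_container.list_blobs():
        # Only blobs created or modified since the most recent replication event.
        if blob.last_modified > watermark:
            data = src_container.download_blob(blob.name).readall()
            dst_container.upload_blob(blob.name, data, overwrite=True)

    # Persist the new watermark for the next scheduled run.
    with open(WATERMARK_FILE, "w") as f:
        f.write(run_started.isoformat())


if __name__ == "__main__":
    replicate_new_blobs()
```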

Contribute your Thoughts:

Juliana
16 days ago
You know, if we had three Azure Blob Storage accounts, this would be a real party. But for now, I'm going with Option C - sounds like the path of least resistance.
upvoted 0 times
Ocie
17 days ago
I'm picturing the pipeline engineer furiously slamming their keyboard, trying to get the flowlet to work. Option C for the win!
upvoted 0 times
Queenie
2 days ago
Definitely, let's go with option C to minimize the effort and ensure we only copy the necessary blobs.
upvoted 0 times
Karon
6 days ago
Yeah, using the Copy Data tool with the Metadata-driven copy task would definitely make the pipeline creation easier.
upvoted 0 times
Daron
12 days ago
I agree, option C seems like the best choice for this scenario.
upvoted 0 times
Lai
27 days ago
Ha! I bet the built-in copy task is the easiest way to set this up. No need to overthink it, just let Azure Data Factory do the heavy lifting.
upvoted 0 times
Chantay
5 days ago
I agree, the built-in copy task seems like the simplest option.
upvoted 0 times
Stephaine
1 month ago
Option D uses a built-in copy task which will minimize the effort to create the pipeline.
upvoted 0 times
Tennie
1 month ago
Why do you think option D is better?
upvoted 0 times
Germaine
1 month ago
Hmm, I'm not sure about that. Doesn't the Data Flow activity give you more flexibility to customize the pipeline? It might be overkill for this use case, though.
upvoted 0 times
Wilda
15 days ago
Yeah, I agree. Maybe the Metadata-driven copy task would be a better fit.
upvoted 0 times
Brett
29 days ago
I think the Data Flow activity might be too much for this.
upvoted 0 times
Tomas
2 months ago
Option C looks like the way to go. Metadata-driven copy task sounds like it'll handle the incremental updates nicely.
upvoted 0 times
Nadine
30 days ago
I think we should go with option C for sure. It meets all the requirements we need for the pipeline.
upvoted 0 times
Shad
1 month ago
It definitely seems like the best option to minimize the effort needed to create the pipeline.
upvoted 0 times
Lennie
1 month ago
I agree, using the Copy Data tool with the Metadata-driven copy task seems like the most efficient solution.
upvoted 0 times
Stephaine
2 months ago
I disagree, I believe option D is the better choice.
upvoted 0 times
Tennie
2 months ago
I think we should go with option C.
upvoted 0 times
