Microsoft Exam DP-700 Topic 3 Question 8 Discussion

Actual exam question for Microsoft's DP-700 exam
Question #: 8
Topic #: 3

You need to schedule the population of the medallion layers to meet the technical requirements.

What should you do?

A) Schedule a data pipeline that calls other data pipelines.
B) Schedule a notebook.
C) Schedule an Apache Spark job.
D) Schedule multiple data pipelines.

Suggested Answer: A

The technical requirements call for the medallion layers to be populated on a schedule, with the layers loaded in order and failures reported by email. A conceptual sketch of this orchestration pattern follows the list below.

Why Use a Data Pipeline That Calls Other Data Pipelines?

- Sequential execution of child pipelines.

- Error handling to send email notifications upon failures.

- Parallel execution of tasks where possible (e.g., simultaneous imports into the bronze layer).
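
As a minimal illustration only, here is the same pattern sketched in plain Python rather than as an actual Fabric pipeline definition; the child pipeline names, email addresses, and SMTP host are hypothetical placeholders.

```python
# Conceptual sketch of the parent/child orchestration pattern:
# parallel bronze imports, then sequential silver and gold runs,
# with an email notification if any stage fails. Names and SMTP
# details are placeholders, not Fabric APIs.
from concurrent.futures import ThreadPoolExecutor
import smtplib
from email.message import EmailMessage


def run_pipeline(name: str) -> None:
    """Stand-in for invoking a child data pipeline."""
    print(f"running {name}...")


def notify_failure(error: Exception) -> None:
    """Send a failure notification (assumed SMTP host and addresses)."""
    msg = EmailMessage()
    msg["Subject"] = "Medallion load failed"
    msg["From"] = "etl@example.com"
    msg["To"] = "dataops@example.com"
    msg.set_content(f"Pipeline run failed: {error}")
    with smtplib.SMTP("smtp.example.com") as smtp:
        smtp.send_message(msg)


def populate_medallion_layers() -> None:
    try:
        # Bronze imports can run in parallel (simultaneous source loads).
        bronze = ["bronze_import_sales", "bronze_import_customers"]
        with ThreadPoolExecutor() as pool:
            for future in [pool.submit(run_pipeline, p) for p in bronze]:
                future.result()  # re-raise any child failure

        # Silver and gold run sequentially, each after the previous layer.
        run_pipeline("silver_transform")
        run_pipeline("gold_aggregate")
    except Exception as err:
        notify_failure(err)
        raise


if __name__ == "__main__":
    populate_medallion_layers()
```

In Fabric itself, the equivalent shape is a scheduled parent data pipeline whose Invoke Pipeline activities run the child pipelines, with the failure path wired to an email notification activity.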


Contribute your Thoughts:

Amie
22 days ago
You know, I was just thinking about how great it would be if we could schedule a data pipeline that could also perform stand-up comedy. That would really liven up the technical requirements.
upvoted 0 times
C) Schedule an Apache Spark job.
upvoted 0 times
Lamar
22 hours ago
That would definitely make things more interesting!
upvoted 0 times
Delisa
6 days ago
A) Schedule a data pipeline that calls other data pipelines.
upvoted 0 times
Stephanie
23 days ago
Hold up, guys. What if we combine options A and D? Scheduling a data pipeline that calls other data pipelines could be the perfect way to orchestrate the whole process.
upvoted 0 times
Naomi
2 days ago
That's a great idea! We can have a main data pipeline that triggers other pipelines.
upvoted 0 times
Armanda
2 months ago
Hmm, I'm not sure about that. Scheduling a notebook seems a bit too simple for this task. I'd go with option C and schedule an Apache Spark job instead.
upvoted 0 times
Raul
16 days ago
True, but I still think scheduling an Apache Spark job is the best option.
upvoted 0 times
Laurene
18 days ago
Scheduling multiple data pipelines could also work.
upvoted 0 times
Clement
27 days ago
I agree, I would go with scheduling an Apache Spark job.
upvoted 0 times
Moira
1 month ago
I think scheduling a notebook might not be enough for this task.
upvoted 0 times
Gerald
2 months ago
But wouldn't scheduling multiple data pipelines provide more flexibility and scalability?
upvoted 0 times
Stephania
2 months ago
I disagree, I believe scheduling an Apache Spark job would be more efficient.
upvoted 0 times
Davida
2 months ago
I think option D is the way to go. Scheduling multiple data pipelines seems like the most comprehensive solution to meet the technical requirements.
upvoted 0 times
Janessa
1 month ago
I agree, scheduling multiple data pipelines is the most efficient way to meet the technical requirements.
upvoted 0 times
Vivan
1 month ago
Option D is definitely the best choice. It covers all bases.
upvoted 0 times
Gerald
2 months ago
I think we should schedule a data pipeline that calls other data pipelines.
upvoted 0 times
