
Microsoft Exam DP-700 Topic 3 Question 8 Discussion

Actual exam question for Microsoft's DP-700 exam
Question #: 8
Topic #: 3

You need to schedule the population of the medallion layers to meet the technical requirements.

What should you do?

Suggested Answer: A

The technical requirements call for the medallion layers to be loaded in order, with email notifications on failure and with parallel imports where possible. Scheduling a single parent data pipeline that calls other data pipelines (option A) meets all of these; a minimal sketch of the pattern follows the list below.

Why Use a Data Pipeline That Calls Other Data Pipelines?

- Sequential execution of child pipelines.

- Error handling to send email notifications upon failures.

- Parallel execution of tasks where possible (e.g., simultaneous imports into the bronze layer).
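
For intuition, here is a minimal Python sketch of that orchestration pattern, assuming two independent bronze imports feeding one silver and one gold pipeline. In Fabric itself you would configure this declaratively with Invoke Pipeline activities and a failure-notification activity rather than writing code; `run_pipeline`, `send_email`, and the pipeline names below are hypothetical stand-ins, not Fabric APIs.

```python
from concurrent.futures import ThreadPoolExecutor

def run_pipeline(name: str) -> None:
    # Hypothetical stand-in for a Fabric "Invoke Pipeline" activity.
    print(f"running child pipeline: {name}")

def send_email(subject: str, body: str) -> None:
    # Hypothetical stand-in for an on-failure email notification step.
    print(f"EMAIL -> {subject}: {body}")

def populate_medallion_layers() -> None:
    try:
        # Bronze: the source imports are independent, so run them in parallel.
        with ThreadPoolExecutor() as pool:
            bronze = [pool.submit(run_pipeline, name)
                      for name in ("ingest_sales_bronze", "ingest_customers_bronze")]
            for job in bronze:
                job.result()  # re-raises any failure from a bronze import

        # Silver and gold each depend on the previous layer, so run sequentially.
        run_pipeline("transform_silver")
        run_pipeline("aggregate_gold")
    except Exception as exc:
        # Error handling: one place to notify on any child-pipeline failure.
        send_email("Medallion load failed", str(exc))
        raise

if __name__ == "__main__":
    # In Fabric, the parent pipeline itself would run on a schedule.
    populate_medallion_layers()
```

The design point is that the schedule and the failure handling live in one parent pipeline, while the layer-by-layer ordering and the bronze-layer parallelism are expressed by how it invokes its children.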


Contribute your Thoughts:

Amie
1 month ago
You know, I was just thinking about how great it would be if we could schedule a data pipeline that could also perform stand-up comedy. That would really liven up the technical requirements.
upvoted 0 times
Justa
13 days ago
C) Schedule an Apache Spark job.
upvoted 0 times
Lamar
14 days ago
B) That would definitely make things more interesting!
upvoted 0 times
Delisa
19 days ago
A) Schedule a data pipeline that calls other data pipelines.
upvoted 0 times
Stephanie
1 month ago
Hold up, guys. What if we combine options A and D? Scheduling a data pipeline that calls other data pipelines could be the perfect way to orchestrate the whole process.
upvoted 0 times
Andrew
9 days ago
Let's go ahead and combine options A and D for a more efficient scheduling solution.
upvoted 0 times
Shelia
10 days ago
It sounds like a good plan. We can have centralized control over the entire process.
upvoted 0 times
Glory
11 days ago
I agree, that way we can ensure all the necessary data is processed in the right order.
upvoted 0 times
Naomi
15 days ago
That's a great idea! We can have a main data pipeline that triggers other pipelines.
upvoted 0 times
Armanda
2 months ago
Hmm, I'm not sure about that. Scheduling a notebook seems a bit too simple for this task. I'd go with option C and schedule an Apache Spark job instead.
upvoted 0 times
Raul
29 days ago
True, but I still think scheduling an Apache Spark job is the best option.
upvoted 0 times
Laurene
1 month ago
Scheduling multiple data pipelines could also work.
upvoted 0 times
Clement
1 month ago
I agree, I would go with scheduling an Apache Spark job.
upvoted 0 times
Moira
2 months ago
I think scheduling a notebook might not be enough for this task.
upvoted 0 times
Gerald
2 months ago
But wouldn't scheduling multiple data pipelines provide more flexibility and scalability?
upvoted 0 times
Stephania
2 months ago
I disagree, I believe scheduling an Apache Spark job would be more efficient.
upvoted 0 times
Davida
2 months ago
I think option D is the way to go. Scheduling multiple data pipelines seems like the most comprehensive solution to meet the technical requirements.
upvoted 0 times
Janessa
2 months ago
I agree, scheduling multiple data pipelines is the most efficient way to meet the technical requirements.
upvoted 0 times
Vivan
2 months ago
Option D is definitely the best choice. It covers all bases.
upvoted 0 times
Gerald
2 months ago
I think we should schedule a data pipeline that calls other data pipelines.
upvoted 0 times
