
Microsoft DP-700 Exam - Topic 3 Question 8 Discussion

Actual exam question for Microsoft's DP-700 exam
Question #: 8
Topic #: 3

You need to schedule the population of the medallion layers to meet the technical requirements.

What should you do?

A. Schedule a data pipeline that calls other data pipelines.
B. Schedule a notebook.
C. Schedule an Apache Spark job.
D. Schedule multiple data pipelines.

Suggested Answer: A

The technical requirements call for the medallion layers to be populated in a controlled sequence, for failures to trigger email notifications, and for independent loads to run in parallel where possible. A parent data pipeline that calls other data pipelines (option A) meets all of these needs.

Why Use a Data Pipeline That Calls Other Data Pipelines?

- Sequential execution of child pipelines, so each medallion layer is populated in the required order.

- Error handling that sends email notifications upon failures.

- Parallel execution of tasks where possible (e.g., simultaneous imports into the bronze layer).
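Below is a minimal, conceptual Python sketch of the orchestration pattern described above: a parent that fans out the bronze imports in parallel, then runs the silver and gold loads sequentially, and sends an email notification if any step fails. All function names here are hypothetical placeholders, not Fabric APIs; in a Fabric data pipeline the same structure is typically built with Invoke Pipeline activities, activity dependencies, and an email notification activity on the failure path.

```python
"""Conceptual sketch only: mimics a parent pipeline that invokes child
pipelines. Function names are hypothetical placeholders, not Fabric APIs."""
from concurrent.futures import ThreadPoolExecutor


def load_bronze_source(source: str) -> None:
    # Placeholder for a child pipeline that copies one source into bronze.
    print(f"Importing {source} into the bronze layer")


def load_silver() -> None:
    # Placeholder for the child pipeline that builds the silver layer.
    print("Populating the silver layer")


def load_gold() -> None:
    # Placeholder for the child pipeline that builds the gold layer.
    print("Populating the gold layer")


def send_failure_email(error: Exception) -> None:
    # Placeholder for the on-failure email notification.
    print(f"Sending failure notification: {error}")


def parent_pipeline(sources: list[str]) -> None:
    """Sequential medallion load with parallel bronze imports and an
    email notification on any failure."""
    try:
        # Independent source imports into bronze can run in parallel.
        with ThreadPoolExecutor() as pool:
            list(pool.map(load_bronze_source, sources))
        # Silver and gold must wait for the previous layer to finish.
        load_silver()
        load_gold()
    except Exception as exc:
        send_failure_email(exc)
        raise


if __name__ == "__main__":
    parent_pipeline(["sales_db", "clickstream", "erp_exports"])
```

Scheduling only the parent pipeline (rather than each child separately, as option D would) keeps the ordering, retry behaviour, and failure notification in one place, which is why the parent/child pattern is generally preferred for orchestrating medallion loads.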


Contribute your Thoughts:

Francesco
3 months ago
Not sure if Apache Spark is the best choice here, honestly.
Aleta
3 months ago
Multiple data pipelines could work too, but A seems more efficient.
Marnie
3 months ago
Wait, can you really schedule a notebook? Sounds odd.
Benton
4 months ago
Definitely agree, option A makes the most sense!
Norah
4 months ago
I think scheduling a data pipeline is the way to go.
Tran
4 months ago
I vaguely remember something about notebooks being useful for scheduling, but I don't think that's the best fit for this scenario.
Willetta
4 months ago
I’m leaning towards option D, scheduling multiple data pipelines, since it seems like a more comprehensive solution, but I can't recall the exact details.
Leatha
4 months ago
I remember practicing a question about scheduling jobs, and I feel like scheduling an Apache Spark job could be relevant here, but I need to double-check the requirements.
Phuong
5 months ago
I think scheduling a data pipeline that calls other pipelines might be the right approach, but I'm not entirely sure if that's the most efficient way.
Yolande
5 months ago
I've got this! Scheduling an Apache Spark job seems like the most straightforward solution. That should give me the flexibility and power I need to handle the technical requirements.
Terrilyn
5 months ago
Based on my understanding of the question, I think scheduling multiple data pipelines would be the way to go. That way, I can break down the process into smaller, more manageable tasks and ensure everything is running smoothly.
Leota
5 months ago
I'm a bit confused by the wording of the question. Does "schedule the population of the medallion layers" mean I need to set up some kind of scheduling system? I'm not sure which option would be the best approach.
Rebeca
5 months ago
Okay, let's see. Scheduling a data pipeline that calls other data pipelines sounds like a good option to me. That way, I can manage the overall process and ensure the technical requirements are met.
Delmy
5 months ago
Hmm, this seems like a tricky one. I'll need to think through the technical requirements carefully to determine the best approach.
Amie
12 months ago
You know, I was just thinking about how great it would be if we could schedule a data pipeline that could also perform stand-up comedy. That would really liven up the technical requirements.
Justa
11 months ago
C) Schedule an Apache Spark job.
Lamar
11 months ago
B) That would definitely make things more interesting!
Delisa
11 months ago
A) Schedule a data pipeline that calls other data pipelines.
Stephanie
12 months ago
Hold up, guys. What if we combine options A and D? Scheduling a data pipeline that calls other data pipelines could be the perfect way to orchestrate the whole process.
Andrew
11 months ago
Let's go ahead and combine options A and D for a more efficient scheduling solution.
Shelia
11 months ago
It sounds like a good plan. We can have a centralized control over the entire process.
Glory
11 months ago
I agree, that way we can ensure all the necessary data is processed in the right order.
Naomi
11 months ago
That's a great idea! We can have a main data pipeline that triggers other pipelines.
Armanda
1 year ago
Hmm, I'm not sure about that. Scheduling a notebook seems a bit too simple for this task. I'd go with option C and schedule an Apache Spark job instead.
Raul
11 months ago
True, but I still think scheduling an Apache Spark job is the best option.
Laurene
11 months ago
Scheduling multiple data pipelines could also work.
Clement
12 months ago
I agree, I would go with scheduling an Apache Spark job.
Moira
12 months ago
I think scheduling a notebook might not be enough for this task.
Gerald
1 year ago
But wouldn't scheduling multiple data pipelines provide more flexibility and scalability?
Stephania
1 year ago
I disagree, I believe scheduling an Apache Spark job would be more efficient.
Davida
1 year ago
I think option D is the way to go. Scheduling multiple data pipelines seems like the most comprehensive solution to meet the technical requirements.
Janessa
1 year ago
I agree, scheduling multiple data pipelines is the most efficient way to meet the technical requirements.
Vivan
1 year ago
Option D is definitely the best choice. It covers all bases.
Gerald
1 year ago
I think we should schedule a data pipeline that calls other data pipelines.
