
Microsoft Exam DP-700 Topic 2 Question 7 Discussion

Actual exam question for Microsoft's DP-700 exam
Question #: 7
Topic #: 2

You have an Azure event hub. Each event contains the following fields:

BikepointID

Street

Neighbourhood

Latitude

Longitude

No_Bikes

No_Empty_Docks

You need to ingest the events. The solution must only retain events that have a Neighbourhood value of Chelsea, and then store the retained events in a Fabric lakehouse.

What should you use?

Suggested Answer: D

Apache Spark Structured Streaming (identified as option D in the discussion below) can read the events directly from the Azure event hub as a streaming source, apply a filter so that only events with a Neighbourhood value of Chelsea are retained, and write the filtered stream to a Delta table in the Fabric lakehouse. Running as a Spark job in Fabric, it covers both requirements, the filtering during ingestion and the lakehouse destination, in a single streaming pipeline.
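For context, here is a minimal PySpark Structured Streaming sketch of that approach. It assumes the azure-event-hubs-spark connector is available in the Spark environment; the connection string, checkpoint path, and table name are placeholders, not values from the question.

```python
# Sketch only: read from the event hub, keep Chelsea events, append to a lakehouse Delta table.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, IntegerType

spark = SparkSession.builder.getOrCreate()

# Schema matching the fields listed in the question.
schema = StructType([
    StructField("BikepointID", StringType()),
    StructField("Street", StringType()),
    StructField("Neighbourhood", StringType()),
    StructField("Latitude", DoubleType()),
    StructField("Longitude", DoubleType()),
    StructField("No_Bikes", IntegerType()),
    StructField("No_Empty_Docks", IntegerType()),
])

# Event hub connection (the connector expects an encrypted connection string).
conn_str = "<event-hub-connection-string>"  # placeholder
eh_conf = {
    "eventhubs.connectionString":
        spark.sparkContext._jvm.org.apache.spark.eventhubs.EventHubsUtils.encrypt(conn_str)
}

raw = spark.readStream.format("eventhubs").options(**eh_conf).load()

# The payload arrives in the binary 'body' column; parse it as JSON into the schema above.
events = (raw
          .select(from_json(col("body").cast("string"), schema).alias("e"))
          .select("e.*"))

# Retain only events whose Neighbourhood is Chelsea.
chelsea = events.filter(col("Neighbourhood") == "Chelsea")

# Append the retained events to a Delta table in the lakehouse.
(chelsea.writeStream
        .format("delta")
        .outputMode("append")
        .option("checkpointLocation", "Files/checkpoints/chelsea_bikepoints")  # placeholder path
        .toTable("chelsea_bikepoints"))  # placeholder table name
```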


Contribute your Thoughts:

Glenn
1 month ago
Haha, I bet the answer is just to use a KQL queryset and then manually filter out the events. That's probably the most 'Azure' solution, right?
upvoted 0 times
Hayley
16 days ago
That's correct. Using KQL will make it easier to retain only the events you need for the Fabric lakehouse.
upvoted 0 times
Kiley
24 days ago
Actually, the best option is to use a KQL queryset to keep only the events with a Neighbourhood value of Chelsea.
upvoted 0 times
Lillian
1 month ago
I'm leaning towards option C, a streaming dataset. That seems like it would be a good fit for this use case, and it might be a bit simpler to set up than the Spark solution.
upvoted 0 times
Dick
2 months ago
Hmm, I'm not sure about that. Wouldn't an eventstream be a more straightforward solution? It's designed for ingesting and processing streaming data like this.
upvoted 0 times
Rhea
22 days ago
I agree, using a KQL queryset would allow us to easily retain events with a Neighbourhood value of Chelsea before storing them in a Fabric lakehouse.
upvoted 0 times
Theola
1 month ago
I think a KQL queryset would be more appropriate for filtering events based on specific criteria like Neighbourhood.
upvoted 0 times
Gracia
2 months ago
I think option D, Apache Spark Structured Streaming, is the way to go. It can handle the streaming data and filter the events on the Neighbourhood value.
upvoted 0 times
Dorthy
28 days ago
I'm not sure about the others, but Apache Spark Structured Streaming sounds like the best choice for this task.
upvoted 0 times
Jamal
29 days ago
I would go with Apache Spark Structured Streaming as well, it seems like the most suitable option for this scenario.
upvoted 0 times
Rozella
1 month ago
I think using a KQL queryset might also work well for filtering events on the Neighbourhood value.
upvoted 0 times
Amie
1 month ago
I agree, Apache Spark Structured Streaming is a powerful tool for handling streaming data.
upvoted 0 times
Celestina
2 months ago
I'm not sure, but I think Apache Spark Structured Streaming could also work for this scenario.
upvoted 0 times
Nobuko
2 months ago
I agree with Leanora, KQL would be the best option to filter events by Neighbourhood.
upvoted 0 times
Leanora
2 months ago
I think we should use a KQL queryset for this.
upvoted 0 times
