
Microsoft DP-100 Exam - Topic 2 Question 129 Discussion

Actual exam question for Microsoft's DP-100 exam
Question #: 129
Topic #: 2

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You create an Azure Machine Learning service datastore in a workspace. The datastore contains the following files:

* /data/2018/Q1.csv

* /data/2018/Q2.csv

* /data/2018/Q3.csv

* /data/2018/Q4.csv

* /data/2019/Q1.csv

All files store data in the following format:

id,f1,f2,l

1,1,2,0

2,1,1,1

3,2,1,0
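The question's code screenshots are not reproduced in this dump, but the file format above is a plain comma-delimited table with columns id, f1, f2, and l. A minimal sketch (hypothetical, not the exam's actual snippet) shows how one file of this format parses with pandas:

```python
import io

import pandas as pd

# Hypothetical illustration: parse the id,f1,f2,l format shown above
# from an in-memory CSV instead of a datastore file.
sample = io.StringIO("id,f1,f2,l\n1,1,2,0\n2,1,1,1\n3,2,1,0\n")
df = pd.read_csv(sample)
print(list(df.columns))  # → ['id', 'f1', 'f2', 'l']
print(len(df))           # → 3 data rows
```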

You run the following code:

You need to create a dataset named training_data and load the data from all files into a single data frame by using the following code:

Solution: Run the following code:

Does the solution meet the goal?

Suggested Answer: A

Use two file paths.

Use Dataset.Tabular.from_delimited_files, as the data isn't cleansed.

Note:

A TabularDataset represents data in a tabular format by parsing the provided file or list of files. This provides you with the ability to materialize the data into a pandas or Spark DataFrame so you can work with familiar data preparation and training libraries without having to leave your notebook. You can create a TabularDataset object from .csv, .tsv, .parquet, .jsonl files, and from SQL query results.


https://docs.microsoft.com/en-us/azure/machine-learning/how-to-create-register-datasets
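The exam's code screenshots aren't reproduced here, but the pattern the suggested answer describes is two glob paths covering data/2018/*.csv and data/2019/*.csv, materialized into one pandas DataFrame. In an Azure ML workspace that would be Dataset.Tabular.from_delimited_files(path=[(datastore, 'data/2018/*.csv'), (datastore, 'data/2019/*.csv')]) followed by to_pandas_dataframe(). The sketch below (a local simulation with hypothetical file contents, not the exam's actual snippet) reproduces the same merge with plain pandas against temporary files:

```python
import glob
import os
import tempfile

import pandas as pd

# Simulate the datastore layout locally (hypothetical contents; the real
# files live in an Azure ML datastore and would be read via
# Dataset.Tabular.from_delimited_files).
root = tempfile.mkdtemp()
layout = {
    "data/2018": ["Q1.csv", "Q2.csv", "Q3.csv", "Q4.csv"],
    "data/2019": ["Q1.csv"],
}
for folder, names in layout.items():
    os.makedirs(os.path.join(root, folder))
    for name in names:
        with open(os.path.join(root, folder, name), "w") as f:
            f.write("id,f1,f2,l\n1,1,2,0\n2,1,1,1\n3,2,1,0\n")

# Two glob paths, mirroring the suggested answer's "use two file paths".
paths = [os.path.join(root, "data/2018/*.csv"),
         os.path.join(root, "data/2019/*.csv")]
files = sorted(f for p in paths for f in glob.glob(p))

# Concatenate all five files into a single DataFrame.
training_data = pd.concat((pd.read_csv(f) for f in files),
                          ignore_index=True)
print(training_data.shape)  # → (15, 4): 5 files x 3 rows, 4 columns
```

The key point the answer hinges on is that from_delimited_files accepts a list of (datastore, glob) path tuples, so both years can be pulled into a single TabularDataset and then materialized with to_pandas_dataframe().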

Contribute your Thoughts:

Jamika
2 months ago
This is a classic setup for merging datasets!
upvoted 0 times
...
Ashley
2 months ago
I disagree, I don't think it meets the goal.
upvoted 0 times
...
Refugia
3 months ago
I think this code should work, definitely a yes!
upvoted 0 times
...
Lorean
3 months ago
Wait, are we sure it handles all the files correctly?
upvoted 0 times
...
Aide
3 months ago
Looks like all the files are in the right format!
upvoted 0 times
...
Jeff
3 months ago
I’m leaning towards "No" for this one, but I wish I had practiced more with Azure's data loading functions.
upvoted 0 times
...
Ma
3 months ago
I feel like the solution might not meet the goal because it could be missing some parameters for combining the data frames.
upvoted 0 times
...
Kenia
4 months ago
I remember a similar question where we had to load multiple CSVs, but I can't recall the exact syntax we used.
upvoted 0 times
...
Alva
4 months ago
I think the code should work, but I'm not entirely sure if it handles all the files correctly.
upvoted 0 times
...
Bernardine
4 months ago
This seems straightforward enough. I'll focus on getting the file paths right and making sure the data frames are all combined correctly. The solution provided looks like it should work, so I'll start there and see how it goes.
upvoted 0 times
...
Vanda
4 months ago
Wait, I'm not sure I understand the part about the "unique solution" and not being able to go back to the question. Does that mean there's only one right answer, or that there could be multiple correct solutions? I want to make sure I'm not missing something important.
upvoted 0 times
...
Glendora
4 months ago
Okay, I think I've got it. The solution provided looks good - it uses `os.path.join()` to build the file paths and then `pd.concat()` to combine the data frames. I'll give that a try.
upvoted 0 times
...
Detra
5 months ago
Hmm, I'm a bit confused about the file paths. Do we need to use `os.listdir()` to get a list of all the files first, or can we just hardcode the paths?
upvoted 0 times
...
Clorinda
5 months ago
I think I can handle this. The key is to use the `os.path.join()` function to construct the file paths correctly, and then use `pd.concat()` to combine the data frames from all the files.
upvoted 0 times
...
Gianna
6 months ago
I would go with option A) Yes, the solution seems to meet the goal.
upvoted 0 times
...
Cristal
7 months ago
I think the solution is correct because it combines data from all files into a single data frame.
upvoted 0 times
...
Izetta
7 months ago
I'm not sure, I think there might be a better way to load the data.
upvoted 0 times
...
Tricia
7 months ago
I agree with Graciela, the solution looks good.
upvoted 0 times
...
Melinda
7 months ago
Haha, I bet the exam writers are sitting back and chuckling at all the students overthinking this one. It's a simple task, and the solution provided is perfect. Time to move on to the next question!
upvoted 0 times
Shantell
5 months ago
B) No
upvoted 0 times
...
Yoko
6 months ago
Haha, I agree! It's not as complicated as it seems.
upvoted 0 times
...
Alesia
7 months ago
A) Yes
upvoted 0 times
...
...
Shawn
7 months ago
Wait, why are we using pandas to load CSV files? Isn't that a bit overkill? I would just use the built-in open() and csv.reader() functions. Much simpler and more efficient.
upvoted 0 times
Leota
6 months ago
Yeah, I think using open() and csv.reader() would be a better choice.
upvoted 0 times
...
Stephen
7 months ago
I agree, using pandas seems unnecessary for loading CSV files.
upvoted 0 times
...
...
Vincent
7 months ago
Yep, that code should do the trick. It's a straightforward way to load all the files into a single data frame. I like how it uses a loop to handle the different directories.
upvoted 0 times
Hassie
7 months ago
Agreed, the loop makes it efficient.
upvoted 0 times
...
Hassie
7 months ago
Yes
upvoted 0 times
...
...
Candida
8 months ago
The solution looks good to me. It uses the os.path.join() function to construct the file paths and then concatenates all the data frames into a single data frame. Seems like it would meet the goal.
upvoted 0 times
Colene
7 months ago
A
upvoted 0 times
...
...
Graciela
8 months ago
I think the solution meets the goal.
upvoted 0 times
...
