
Amazon MLS-C01 Exam - Topic 3 Question 69 Discussion

Actual exam question for Amazon's MLS-C01 exam
Question #: 69
Topic #: 3

A Machine Learning Specialist is using Apache Spark for pre-processing training data. As part of the Spark pipeline, the Specialist wants to use Amazon SageMaker to train a model and host it. Which of the following should the Specialist do to integrate the Spark application with SageMaker? (Select THREE.)

Suggested Answer: A
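For context, the integration pattern most commenters below converge on (install the SageMaker Spark library, train with one of its estimators, and call the model's transform method against the hosted endpoint) looks roughly like the following sketch. This assumes the `sagemaker_pyspark` package; the role ARN, instance types, and the `training_df`/`test_df` DataFrames are placeholders, not values from this question:

```python
# Sketch of Spark-to-SageMaker integration via the SageMaker Spark library
# (pip install sagemaker_pyspark). Role ARN and DataFrames are placeholders.
from pyspark.sql import SparkSession
from sagemaker_pyspark import IAMRole, classpath_jars
from sagemaker_pyspark.algorithms import KMeansSageMakerEstimator

# Step 1: make the SageMaker Spark jars available on the Spark classpath.
spark = (SparkSession.builder
         .config("spark.driver.extraClassPath", ":".join(classpath_jars()))
         .getOrCreate())

# Step 2: use an estimator from the SageMaker Spark library.
estimator = KMeansSageMakerEstimator(
    sagemakerRole=IAMRole("arn:aws:iam::123456789012:role/SageMakerRole"),  # placeholder
    trainingInstanceType="ml.m4.xlarge",
    trainingInstanceCount=1,
    endpointInstanceType="ml.t2.medium",
    endpointInitialInstanceCount=1)
estimator.setK(10)

# fit() serializes the DataFrame to S3 and launches a SageMaker training
# job itself -- no manual ZIP or CSV conversion of the data is required.
model = estimator.fit(training_df)      # training_df: Spark DataFrame of features

# Step 3: transform() sends the DataFrame to the model hosted on the
# SageMaker endpoint and returns predictions as a new DataFrame.
predictions = model.transform(test_df)  # test_df: Spark DataFrame
```

Note the absence of any ZIP/CSV step: the library handles data serialization and upload, which is why several replies below push back on options D and the CSV conversion.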

Contribute your Thoughts:

Santos
4 months ago
Not sure about using CSV for inferences, seems outdated.
Noble
4 months ago
I think downloading the AWS SDK is a must too!
Yolande
4 months ago
Wait, do we really need to compress data into a ZIP file?
Rosina
4 months ago
Agree, using the appropriate estimator is key for training.
Edmond
4 months ago
Definitely need to install the SageMaker Spark library!
Xochitl
5 months ago
I’m a bit confused about the inference part; I think using the sageMakerModel.transform method sounds right, but I’m not entirely sure if it’s necessary for this question.
Juliann
5 months ago
I practiced a similar question where we had to upload data to S3, so I feel like compressing the training data into a ZIP file is probably one of the steps.
Dion
5 months ago
I remember something about using an estimator from the SageMaker Spark Library, but I can't recall if we need to download the AWS SDK too.
Dorthy
5 months ago
I think we definitely need to install the SageMaker Spark library in the Spark environment, but I'm not sure about the others.
Keneth
5 months ago
Hmm, I'm a little unsure about this one. The options seem pretty similar, so I'll need to read through them carefully and think about the context of the question to make the best choice.
Lai
9 months ago
This question is like a game of 'Guess the Right Answer' with a side of 'Guess the Secret Handshake'.
Christa
10 months ago
Wait, hold up! Do I really need to convert that DataFrame to a CSV file before getting inferences from SageMaker? Sounds like a lot of extra work to me.
Youlanda
8 months ago
Yeah, converting to a CSV file is not necessary. Just use the SageMaker Spark Library for training and hosting the model.
Jacquelyne
8 months ago
Just use the appropriate estimator from the SageMaker Spark Library to train a model and get inferences.
Ashley
9 months ago
No, you don't need to convert the DataFrame to a CSV file. You can use the sageMakerModel.transform method directly.
Tegan
10 months ago
Hmm, this question is like a puzzle within a puzzle. I better not forget to compress that data and upload it to S3 before training the model.
Annamaria
9 months ago
D) Compress the training data into a ZIP file and upload it to a pre-defined Amazon S3 bucket.
Pauline
9 months ago
C) Use the appropriate estimator from the SageMaker Spark Library to train a model.
Marlon
10 months ago
B) Install the SageMaker Spark library in the Spark environment.
Lemuel
10 months ago
Alright, time to put on my machine learning hat and integrate that Spark app with SageMaker. B, C, and D sound like the way to go.
Diane
8 months ago
D) Compress the training data into a ZIP file and upload it to a pre-defined Amazon S3 bucket.
Nickole
8 months ago
C) Use the appropriate estimator from the SageMaker Spark Library to train a model.
Audrie
8 months ago
B) Install the SageMaker Spark library in the Spark environment.
Vinnie
10 months ago
Whoa, this question is a real brain-teaser! I better download that AWS SDK and get crackin' on those SageMaker Spark libraries.
Carri
9 months ago
C) Use the appropriate estimator from the SageMaker Spark Library to train a model.
Josue
9 months ago
B) Install the SageMaker Spark library in the Spark environment.
Susana
10 months ago
A) Download the AWS SDK for the Spark environment
Gladys
11 months ago
In addition to that, compressing the training data into a ZIP file and uploading it to an Amazon S3 bucket is necessary for integration.
Beckie
11 months ago
I agree with Trina, using the appropriate estimator from the SageMaker Spark Library to train a model is also important.
Trina
11 months ago
I think the Specialist should install the SageMaker Spark library in the Spark environment.
