
Amazon MLS-C01 Exam - Topic 2 Question 102 Discussion

Actual exam question for Amazon's MLS-C01 exam
Question #: 102
Topic #: 2

A machine learning (ML) specialist is using the Amazon SageMaker DeepAR forecasting algorithm to train a model on CPU-based Amazon EC2 On-Demand instances. The model currently takes multiple hours to train. The ML specialist wants to decrease the training time of the model.

Which approaches will meet this requirement? (Select TWO.)

A. Replace On-Demand Instances with Spot Instances.
B. Configure model auto scaling dynamically to adjust the number of instances automatically.
C. Replace CPU-based EC2 instances with GPU-based EC2 instances.
D. Use multiple training instances.
E. Use a pre-trained version of the model. Run incremental training.

Suggested Answer: C, D

The best approaches to decrease the training time of the model are C and D, because they can improve the computational efficiency and parallelization of the training process. These approaches have the following benefits:

C: Replacing CPU-based EC2 instances with GPU-based EC2 instances can speed up training of the DeepAR algorithm, which can leverage the parallel processing power of GPUs to perform matrix operations and gradient computations faster than CPUs [1][2]. The DeepAR algorithm supports GPU-based EC2 instances such as ml.p2 and ml.p3 [3].

D: Using multiple training instances can also reduce the training time of the DeepAR algorithm by distributing the workload across multiple nodes (data parallelism) [4]. The DeepAR algorithm supports distributed training with multiple CPU-based or GPU-based EC2 instances [3].
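For illustration, approaches C and D both land in the `ResourceConfig` block of a SageMaker `CreateTrainingJob` request. This is a minimal sketch, not the question's actual setup: the job name, image URI, role ARN, and S3 paths are placeholders.

```python
# Sketch of a SageMaker CreateTrainingJob request combining approach C
# (GPU-based instance type) with approach D (multiple training instances).
# The job name, image URI, role ARN, and S3 paths are placeholders.

def deepar_resource_config(instance_type="ml.p3.2xlarge", instance_count=2):
    """Build the ResourceConfig block: a GPU instance type (approach C)
    and more than one instance for distributed training (approach D)."""
    return {
        "InstanceType": instance_type,
        "InstanceCount": instance_count,
        "VolumeSizeInGB": 50,
    }

training_job_request = {
    "TrainingJobName": "deepar-gpu-distributed",   # placeholder name
    "AlgorithmSpecification": {
        # Placeholder; the real URI comes from
        # sagemaker.image_uris.retrieve("forecasting-deepar", region).
        "TrainingImage": "<deepar-image-uri>",
        "TrainingInputMode": "File",
    },
    "RoleArn": "arn:aws:iam::111122223333:role/SageMakerRole",  # placeholder
    "ResourceConfig": deepar_resource_config(),
    "OutputDataConfig": {"S3OutputPath": "s3://my-bucket/deepar/output"},
    "StoppingCondition": {"MaxRuntimeInSeconds": 86400},
}
# A boto3 SageMaker client would submit this via
# client.create_training_job(**training_job_request); it is not called here.
```

Setting `InstanceCount` above 1 is all that is needed to enable DeepAR's built-in distributed training; the algorithm handles sharding across the nodes itself.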

The other options are not effective or relevant, because they have the following drawbacks:

A: Replacing On-Demand Instances with Spot Instances can reduce the cost of training, but not necessarily the time, as Spot Instances are subject to interruption and availability [5]. Moreover, the DeepAR algorithm does not support checkpointing, which means that training cannot resume from the last saved state if the Spot Instance is terminated [3].

B: Configuring model auto scaling to adjust the number of instances dynamically is not applicable, as this feature is only available for inference endpoints, not for training jobs [6].
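This distinction is visible in the API: SageMaker auto scaling is configured through Application Auto Scaling against an endpoint's production variant, and no comparable scalable dimension exists for a training job. A sketch of the scalable-target parameters (the endpoint and variant names are placeholders):

```python
# Sketch: SageMaker auto scaling registers an *endpoint variant* as the
# scalable target -- there is no equivalent scalable dimension for a
# training job. Endpoint and variant names below are placeholders.

def variant_scaling_target(endpoint_name, variant_name, min_cap=1, max_cap=4):
    """Parameters for Application Auto Scaling's register_scalable_target."""
    return {
        "ServiceNamespace": "sagemaker",
        "ResourceId": f"endpoint/{endpoint_name}/variant/{variant_name}",
        "ScalableDimension": "sagemaker:variant:DesiredInstanceCount",
        "MinCapacity": min_cap,
        "MaxCapacity": max_cap,
    }

target = variant_scaling_target("my-endpoint", "AllTraffic")
# A boto3 application-autoscaling client would consume this via
# client.register_scalable_target(**target); it is not called here.
```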

E: Using a pre-trained version of the model and running incremental training is not possible, as the DeepAR algorithm does not support incremental training or transfer learning [3]. The DeepAR algorithm requires a full retraining of the model whenever new data is added or the hyperparameters are changed [7].

References:

[1] GPU vs CPU: What Matters Most for Machine Learning? | by Louis (What's AI) Bouchard | Towards Data Science
[2] How GPUs Accelerate Machine Learning Training | NVIDIA Developer Blog
[3] DeepAR Forecasting Algorithm - Amazon SageMaker
[4] Distributed Training - Amazon SageMaker
[5] Managed Spot Training - Amazon SageMaker
[6] Automatic Scaling - Amazon SageMaker
[7] How the DeepAR Algorithm Works - Amazon SageMaker


Contribute your Thoughts:

Corrina
3 months ago
Wait, can switching to GPUs really cut hours off training?
upvoted 0 times
...
Alonso
3 months ago
Pre-trained models can save so much time, great option!
upvoted 0 times
...
Gail
3 months ago
Spot instances? Not sure if that's the best move.
upvoted 0 times
...
Jannette
4 months ago
Using multiple training instances is a solid choice too.
upvoted 0 times
...
Carole
4 months ago
Definitely go for GPU-based EC2 instances!
upvoted 0 times
...
Shelba
4 months ago
I feel like using a pre-trained model could be beneficial, but I’m not clear on how incremental training would fit into this scenario.
upvoted 0 times
...
Nickolas
4 months ago
Replacing CPU instances with GPU instances sounds like a solid option, but I can't recall if it’s always necessary for DeepAR.
upvoted 0 times
...
Stephane
4 months ago
I think using multiple training instances could help speed things up, especially if the workload can be parallelized.
upvoted 0 times
...
Verlene
5 months ago
I remember discussing how Spot Instances can save costs, but I'm not sure if they would directly reduce training time.
upvoted 0 times
...
Jean
5 months ago
I think I'll focus on the two options that seem most promising - replacing the CPU instances with GPU instances, and using multiple training instances. Those seem like the most direct ways to decrease the training time, based on the information provided.
upvoted 0 times
...
Dorian
5 months ago
The GPU-based instances seem like they could be a good option to speed things up. I know the DeepAR algorithm is optimized for GPU, so that might be the way to go. And using a pre-trained model for incremental training could also be a good strategy, if I have access to one.
upvoted 0 times
...
Ceola
5 months ago
Okay, let's see. Replacing the On-Demand instances with Spot Instances could save some money, but I'm not sure if that would actually decrease the training time. And using multiple training instances seems like it could work, but I'd need to make sure I understand how to set that up properly.
upvoted 0 times
...
Jaime
5 months ago
Hmm, this looks like a tricky one. I think I'll start by considering the options that involve changing the instance type or configuration, since that could have a big impact on training time.
upvoted 0 times
...
Iluminada
5 months ago
I'm a bit confused by the auto-scaling option. Does that mean the system would automatically adjust the number of instances as needed during training? That could be really helpful, but I'd want to make sure I understand how to configure it correctly.
upvoted 0 times
...
Portia
1 year ago
Spot Instances? More like Speed Instances, am I right? *wink wink*
upvoted 0 times
...
Edison
1 year ago
Dynamically adjusting the number of instances? That's like having your cake and eating it too! Brilliant idea.
upvoted 0 times
Wilbert
1 year ago
C) Replace CPU-based EC2 instances with GPU-based EC2 instances.
upvoted 0 times
...
Karon
1 year ago
B) Configure model auto scaling dynamically to adjust the number of instances automatically.
upvoted 0 times
...
...
Paulina
1 year ago
What about using a pre-trained model? That could save a ton of time, but I guess it depends on the specific use case.
upvoted 0 times
...
Lang
1 year ago
I agree, those two options make the most sense. GPU-based instances might also be a good choice, but the cost could be an issue.
upvoted 0 times
...
Velda
1 year ago
Spot Instances and using multiple training instances seem like the way to go. That should significantly reduce the training time.
upvoted 0 times
Lettie
1 year ago
Let's give it a try and see how much we can decrease the training time.
upvoted 0 times
...
Lashon
1 year ago
I agree, those two approaches can definitely speed up the training process.
upvoted 0 times
...
Kristal
1 year ago
Spot Instances and using multiple training instances are great options to reduce training time.
upvoted 0 times
...
...
Gail
1 year ago
I believe option B could also be beneficial, as auto scaling can optimize resource usage.
upvoted 0 times
...
Chu
1 year ago
I agree with Fatima, using GPU-based instances and multiple training instances should speed up the process.
upvoted 0 times
...
Fatima
1 year ago
I think option C and D would help decrease training time.
upvoted 0 times
...
