Welcome to Pass4Success


Amazon MLA-C01 Exam - Topic 1 Question 7 Discussion

Actual exam question for Amazon's MLA-C01 exam
Question #: 7
Topic #: 1
[All MLA-C01 Questions]

A company wants to improve the sustainability of its ML operations.

Which actions will reduce the energy usage and computational resources that are associated with the company's training jobs? (Choose two.)

A. Use Amazon SageMaker Debugger to stop training jobs when non-converging conditions are detected.
B. Use Amazon SageMaker Ground Truth for data labeling.
C. Deploy models by using AWS Lambda functions.
D. Use AWS Trainium instances for training.
E. Use PyTorch or TensorFlow with the distributed training option.

Suggested Answer: A, D
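To see why option A saves energy, consider what "stop on non-convergence" means in code. The sketch below is a minimal, generic illustration of the idea, not the actual SageMaker Debugger API (Debugger ships a built-in `loss_not_decreasing` rule that serves this purpose); the function name and `patience` parameter are illustrative assumptions.

```python
# Generic sketch of the idea behind option A: stop a training run once the
# loss has failed to improve for `patience` consecutive steps, instead of
# burning compute on a job that is not converging.
# NOTE: illustrative only -- SageMaker Debugger implements this as a
# built-in rule, not as user code in the training loop.

def train_with_early_stop(losses, patience=3):
    """Consume a stream of per-step losses and stop once the loss has
    not improved for `patience` consecutive steps.
    Returns the number of steps actually run."""
    best = float("inf")
    stale = 0
    steps = 0
    for loss in losses:
        steps += 1
        if loss < best:
            best = loss          # new best loss: reset the stale counter
            stale = 0
        else:
            stale += 1           # no improvement this step
            if stale >= patience:
                break            # non-converging: stop and save compute
    return steps

# A diverging run is cut off after `patience` stale steps:
print(train_with_early_stop([1.0, 0.8, 0.9, 0.95, 1.1, 1.2, 1.3]))  # → 5
```

In SageMaker, the equivalent effect is achieved by attaching Debugger's built-in rule to the training job and letting it stop the job automatically, which is what makes A a sustainability win: no human has to notice the run went bad.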

Contribute your Thoughts:

Tyra
3 months ago
I’m not sure about D, are AWS Trainium instances really that efficient?
upvoted 0 times
...
Weldon
3 months ago
Totally agree, A makes a lot of sense!
upvoted 0 times
...
Denae
3 months ago
A is a solid choice for stopping wasted training time.
upvoted 0 times
...
Kattie
3 months ago
C could help, but I think A is the best option here!
upvoted 0 times
...
Ming
4 months ago
B seems irrelevant for energy savings, just saying.
upvoted 0 times
...
Yuonne
4 months ago
I recall that distributed training with PyTorch or TensorFlow can sometimes lead to higher resource consumption, so I'm not sure if that's the best choice for sustainability.
upvoted 0 times
...
Margart
4 months ago
I practiced a question similar to this, and I feel like deploying with AWS Lambda could help reduce resource usage, but I'm not confident.
upvoted 0 times
...
Shonda
4 months ago
I'm not entirely sure, but I think using AWS Trainium instances might be more efficient for training compared to regular instances.
upvoted 0 times
...
Domingo
4 months ago
I remember that using SageMaker Debugger can help stop jobs that aren't converging, which should save energy.
upvoted 0 times
...
Carin
5 months ago
The distributed training option in PyTorch or TensorFlow seems like it could be a good strategy to explore. I'll make sure to consider that as well.
upvoted 0 times
...
Karrie
5 months ago
I'm not familiar with some of these AWS services, so I'll need to do a quick review to make sure I understand how they can help with sustainability.
upvoted 0 times
...
Sophia
5 months ago
Using Amazon SageMaker Debugger to stop non-converging training jobs sounds like a good way to save resources. And using AWS Trainium instances could also be helpful.
upvoted 0 times
...
Merrilee
5 months ago
Hmm, I'm a bit unsure about this one. I'll need to think through the different options and how they might impact the company's sustainability goals.
upvoted 0 times
...
Annice
5 months ago
This question seems straightforward. I'll focus on options that can reduce energy usage and computational resources for the training jobs.
upvoted 0 times
...
Edna
7 months ago
B? Really? Ground Truth for data labeling? I guess they want us to label our data with crayons and glitter to be more 'sustainable'.
upvoted 0 times
...
Lura
7 months ago
D all the way! AWS Trainium instances are specifically designed for training, so they're bound to be more efficient.
upvoted 0 times
Kimbery
6 months ago
C: Deploying models by using AWS Lambda functions could also help in reducing computational resources.
upvoted 0 times
...
Bernardo
6 months ago
B: I think using Amazon SageMaker Debugger to stop training jobs when non-converging conditions are detected is also a good idea.
upvoted 0 times
...
Lashawn
6 months ago
A: I agree, using AWS Trainium instances for training would definitely help reduce energy usage.
upvoted 0 times
...
...
Ernest
7 months ago
I'm surprised option C is there. Using Lambda for model deployment? That's like trying to run a marathon with cement shoes!
upvoted 0 times
Honey
6 months ago
Option C is definitely not the best choice for reducing energy usage.
upvoted 0 times
...
...
Ashlee
8 months ago
A and E are the way to go! Stopping non-converging jobs and distributed training are key to saving resources.
upvoted 0 times
Michal
6 months ago
I agree, using Amazon SageMaker Debugger can really help in detecting non-converging conditions.
upvoted 0 times
...
...
Lasandra
8 months ago
Using PyTorch or TensorFlow with distributed training can be another effective way to reduce computational resources.
upvoted 0 times
...
Aleta
8 months ago
I believe deploying models with AWS Lambda functions can also help in reducing energy usage.
upvoted 0 times
...
Lorenza
8 months ago
I agree with Bettina, stopping non-converging conditions early can save resources.
upvoted 0 times
...
Bettina
8 months ago
I think using Amazon SageMaker Debugger can help reduce energy usage.
upvoted 0 times
...
