
Amazon MLS-C01 Exam - Topic 3 Question 108 Discussion

Actual exam question for Amazon's MLS-C01 exam
Question #: 108
Topic #: 3

An insurance company is creating an application to automate car insurance claims. A machine learning (ML) specialist used an Amazon SageMaker Object Detection - TensorFlow built-in algorithm to train a model to detect scratches and dents in images of cars. After the model was trained, the ML specialist noticed that the model performed better on the training dataset than on the testing dataset.

Which approach should the ML specialist use to improve the performance of the model on the testing data?

Suggested Answer: D

The machine learning model in this scenario shows signs of overfitting, as evidenced by better performance on the training dataset than on the testing dataset. Overfitting indicates that the model is capturing noise or details specific to the training data rather than general patterns.
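To make the symptom concrete, here is a minimal diagnostic sketch. It assumes a trained Keras model and tf.data input pipelines; the names model, train_ds, and test_ds are hypothetical placeholders, not values from the question.

```python
import tensorflow as tf

def generalization_gap(model: tf.keras.Model,
                       train_ds: tf.data.Dataset,
                       test_ds: tf.data.Dataset) -> float:
    """Return test loss minus train loss.

    A large positive gap is the classic signature of overfitting:
    the model fits the training data far better than unseen data.
    """
    train_loss = model.evaluate(train_ds, verbose=0, return_dict=True)["loss"]
    test_loss = model.evaluate(test_ds, verbose=0, return_dict=True)["loss"]
    return test_loss - train_loss
```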

One common approach to reducing overfitting is L2 regularization, which adds a penalty to the loss function proportional to the squared magnitude of the weights, discouraging large weights and helping the model generalize. By increasing the value of the L2 hyperparameter, the ML specialist strengthens this penalty, mitigating overfitting and improving performance on the testing dataset.
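As a sketch of how this could be applied with the SageMaker Python SDK, the snippet below retrains the built-in algorithm with a larger L2 penalty. This is a minimal sketch, not a verified recipe: the model_id, the hyperparameter key regularizers_l2, the transfer_learning.py entry point, and the role ARN and S3 paths are all assumptions or placeholders; check the algorithm's documentation for the exact names and valid ranges.

```python
from sagemaker import hyperparameters, image_uris, model_uris, script_uris
from sagemaker.estimator import Estimator

# Hypothetical JumpStart-style model_id for the Object Detection -
# TensorFlow built-in algorithm; look up the real id before use.
model_id = "tensorflow-od1-ssd-resnet50-v1-fpn-640x640-coco17-tpu-8"
model_version = "*"

# Start from the algorithm's default hyperparameters, then raise the
# L2 penalty to strengthen regularization.
hp = hyperparameters.retrieve_default(model_id=model_id,
                                      model_version=model_version)
hp["regularizers_l2"] = "0.01"  # assumed key name; larger = stronger penalty

estimator = Estimator(
    image_uri=image_uris.retrieve(
        framework=None, region="us-east-1", image_scope="training",
        model_id=model_id, model_version=model_version,
        instance_type="ml.p3.2xlarge",
    ),
    source_dir=script_uris.retrieve(
        model_id=model_id, model_version=model_version,
        script_scope="training",
    ),
    model_uri=model_uris.retrieve(
        model_id=model_id, model_version=model_version,
        model_scope="training",
    ),
    entry_point="transfer_learning.py",  # assumed JumpStart entry point
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder ARN
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    hyperparameters=hp,
)
estimator.fit({"training": "s3://example-bucket/car-damage/train/"})  # placeholder path
```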

The other approaches are less effective here. Increasing momentum changes how quickly the optimizer converges rather than how well the model generalizes, and reducing the dropout rate would weaken a regularizer that combats overfitting, likely widening the gap between training and testing performance.
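To see why reducing dropout cuts the wrong way, recall that dropout is itself a regularizer. The fragment below is illustrative Keras code, not part of the built-in algorithm:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Dropout randomly zeroes a fraction of activations during training
# (30% here), which discourages co-adaptation and combats overfitting.
# Lowering the rate therefore *removes* regularization; it would tend
# to widen, not narrow, the train-test gap described in the question.
head = tf.keras.Sequential([
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.3),  # reducing this value weakens regularization
    layers.Dense(2, activation="softmax"),  # e.g., scratch vs. dent
])
```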


Contribute your Thoughts:

Heike
3 months ago
Dropping the dropout rate seems risky, but could work.
Aleshia
3 months ago
Wait, why would increasing L2 help? Sounds counterintuitive!
Gerald
3 months ago
Not so sure about that, might just slow it down.
Isidra
4 months ago
Definitely agree with that!
Vi
4 months ago
I think reducing the learning rate could help.
Yong
4 months ago
I feel like momentum isn't really related to the testing performance directly, so I would lean towards adjusting the dropout rate instead.
Tijuana
4 months ago
I practiced a similar question, and I think adjusting the L2 regularization could help with overfitting, but I can't recall if increasing or decreasing it is better.
Gearldine
4 months ago
I'm not entirely sure, but I think increasing the learning rate might make the model learn faster, though it could also lead to instability.
Walton
5 months ago
I remember that overfitting can cause a model to perform poorly on testing data, so maybe reducing dropout could help?
Augustine
5 months ago
I'm pretty confident that reducing the learning rate is the way to go here. Lowering the learning rate can help the model generalize better and avoid overfitting the training data. That's the approach I'd take for this problem.
Janet
5 months ago
I've seen this kind of issue before. Increasing the L2 regularization could help reduce overfitting and improve the generalization to the testing data. That's my best guess for this situation.
Stefany
5 months ago
Okay, let's think this through. If the model is performing better on the training data, that could mean it's overfitting. Reducing the dropout rate might help, but I'm not sure. I'll have to think about this a bit more.
Ivan
5 months ago
Hmm, this seems like a tricky one. I'm not totally sure, but I think reducing the learning rate might help prevent overfitting and improve the model's performance on the testing data.
Shawnda
1 year ago
Wait, did someone say 'dents and scratches'? I'm just picturing a bunch of car insurance adjusters playing bumper cars to test the model. Now that's dedication!
Kristofer
1 year ago
Increasing the momentum hyperparameter? Sounds like the model is already moving too fast and leaving the testing data in the dust. Slow it down!
Berry
1 year ago
Reducing the dropout_rate? That's just asking for trouble! Dropout is key to preventing overfitting, my friend.
Willard
1 year ago
Yeah, dropout is important to prevent overfitting.
Nohemi
1 year ago
I think reducing the dropout_rate might not be the best idea.
Gennie
1 year ago
Increasing the L2 hyperparameter could add more regularization and prevent overfitting on the training data. Worth a try!
Isabella
1 year ago
That's a good suggestion, but I still think increasing the value of the momentum hyperparameter could also help in improving the model's performance.
Tawanna
1 year ago
I see your point, but I think reducing the value of the dropout_rate hyperparameter might also be a good approach to try.
India
1 year ago
I disagree, I believe increasing the value of the L2 hyperparameter would be more effective in preventing overfitting.
Trina
1 year ago
I think reducing the value of the learning_rate hyperparameter could help improve the model's performance on the testing data.
Franchesca
1 year ago
I think reducing the learning_rate hyperparameter is the way to go. Slower learning can help the model generalize better to the testing data.
Sophia
1 year ago
I agree, slowing down the learning process might improve the model's performance on the testing data.
Remona
1 year ago
Maybe increasing the value of the L2 hyperparameter could also help.
Virgie
1 year ago
I think reducing the learning_rate hyperparameter is a good idea.
Carlene
1 year ago
Why do you think that?
Micaela
1 year ago
I disagree, I believe option D would be more effective.
Carlene
1 year ago
I think the ML specialist should choose option C.
