Microsoft DP-100 Exam - Topic 1 Question 130 Discussion

Actual exam question for Microsoft's DP-100 exam
Question #: 130
Topic #: 1

You are a data scientist building a deep convolutional neural network (CNN) for image classification.

The CNN model you built shows signs of overfitting.

You need to reduce overfitting and converge the model to an optimal fit.

Which two actions should you perform? Each correct answer presents a complete solution.

NOTE: Each correct selection is worth one point.
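The discussion below converges on two techniques: adding L1/L2 regularization and using training data augmentation. As a quick refresher on the first, here is a minimal NumPy sketch of how an L1/L2 penalty is added to the training loss; the function and variable names are illustrative only, not taken from the exam question or any Azure ML API:

```python
import numpy as np

def regularized_loss(data_loss, weights, l1=0.0, l2=0.0):
    """Add L1 and/or L2 penalty terms to a base data loss.

    L1 pushes weights toward exact zeros (sparsity); L2 shrinks all
    weights toward zero, discouraging the large weights that let a
    network memorize training noise (i.e., overfit).
    """
    l1_penalty = l1 * sum(np.abs(w).sum() for w in weights)
    l2_penalty = l2 * sum((w ** 2).sum() for w in weights)
    return data_loss + l1_penalty + l2_penalty

weights = [np.array([[1.0, -2.0], [0.5, 0.0]])]
loss = regularized_loss(data_loss=1.0, weights=weights, l1=0.01, l2=0.01)
# base 1.0 + 0.01 * 3.5 (L1 term) + 0.01 * 5.25 (L2 term) = 1.0875
```

In frameworks such as Keras this is typically configured per layer rather than written by hand (e.g. a kernel regularizer argument), but the penalty being minimized is the same idea.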


Contribute your Thoughts:

Barney (2 months ago): Definitely agree on regularization and data augmentation!

Justine (2 months ago): Reducing training data? That sounds counterintuitive.

Jackie (3 months ago): Training data augmentation works wonders too!

Lezlie (3 months ago): Wait, won't adding more layers just make overfitting worse?

Valentin (3 months ago): Adding L1/L2 regularization is a solid choice!

Vincenza (3 months ago): Adding more layers seems counterintuitive for overfitting, so I'm leaning towards regularization and data augmentation, but I hope I remember correctly!

Dorothy (3 months ago): I practiced a similar question, and I feel like using data augmentation is definitely one of the right answers, but I'm uncertain about the regularization part.

Margret (4 months ago): I think adding L1/L2 regularization could help, but I also recall that data augmentation is a common technique to combat overfitting.

Golda (4 months ago): I remember that reducing the amount of training data is usually not a good idea for overfitting, but I'm not sure about the other options.

Mammie (4 months ago): Okay, I've got it! Adding regularization and data augmentation are definitely the way to go. Those are the two best strategies to reduce overfitting and converge the model to an optimal fit. I feel good about this one.

Tegan (4 months ago): I'm a bit unsure about this one. I know overfitting is a common problem with deep CNNs, but I'm not sure which specific actions would be most effective. I'll have to think it through carefully before making my choices.

Nobuko (4 months ago): I'm feeling pretty confident about this one. I think the key is to use a combination of techniques to tackle the overfitting issue. Adding L1/L2 regularization and using data augmentation seem like the best options to me.

Edward (5 months ago): Okay, let's see here. Reducing the training data seems counterintuitive if the model is overfitting, so I'll probably skip that one. Adding a dense layer with 64 units could help, but I'm not sure if that's the best approach.

Avery (5 months ago): Hmm, this seems like a tricky one. I think I'll start by considering the options that directly address overfitting, like adding regularization or using data augmentation.

German (5 months ago): I agree with Erinn; regularization can help prevent overfitting.

Gracia (7 months ago): Definitely C and D. Reducing overfitting is all about finding that sweet spot between complexity and generalization. Gotta have that balance, my dude.
    Reuben (5 months ago, replying to Gracia): I agree, adding L1/L2 regularization and using training data augmentation can help reduce overfitting.

Erinn (7 months ago): I think we should add L1/L2 regularization to reduce overfitting.

Youlanda (7 months ago): C and D, baby! Regularization and data augmentation are the way to go. Gotta keep that model in shape, ya know?
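Several commenters also recommend training data augmentation: generating label-preserving variants of each training image (flips, small shifts, rotations) so the network sees more varied inputs without collecting new data. A minimal NumPy sketch of the idea follows; the `augment` helper is hypothetical, not part of any Azure ML or exam API:

```python
import numpy as np

def augment(image, rng):
    """Return a randomly flipped and shifted copy of an (H, W, C) image.

    Both transforms preserve the image's label, so they enlarge the
    effective training set, which helps reduce overfitting.
    """
    out = image
    if rng.random() < 0.5:               # random horizontal flip
        out = out[:, ::-1, :]
    shift = int(rng.integers(-2, 3))     # small random horizontal shift
    return np.roll(out, shift, axis=1)

rng = np.random.default_rng(0)
img = np.arange(8, dtype=float).reshape(2, 4, 1)
batch = [augment(img, rng) for _ in range(4)]  # four augmented variants
```

In practice, deep learning frameworks provide built-in augmentation utilities (e.g. Keras preprocessing layers or torchvision transforms) that apply such transforms on the fly during training.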
