
Amazon Exam MLS-C01 Topic 6 Question 87 Discussion

Actual exam question for Amazon's MLS-C01 exam
Question #: 87
Topic #: 6

A machine learning engineer is building a bird classification model. The engineer randomly splits a dataset into a training dataset and a validation dataset. During the training phase, the model achieves very high accuracy. However, the model does not generalize well when evaluated on the validation dataset. The engineer realizes that the original dataset was imbalanced.

What should the engineer do to improve the validation accuracy of the model?

A. Perform stratified sampling on the original dataset.
B. Acquire additional data about the majority classes in the original dataset.
C. Use a smaller, randomly sampled version of the training dataset.
D. Perform systematic sampling on the original dataset.

Suggested Answer: A

Stratified sampling is a technique that preserves the class distribution of the original dataset when creating a smaller or split dataset: the proportion of examples from each class is maintained in every split. It can help improve the validation accuracy of the model by ensuring that the validation dataset is representative of the original dataset rather than biased toward any one class, which reduces variance and overfitting and improves the model's ability to generalize. Stratified sampling can also be combined with oversampling or undersampling methods, depending on whether the goal is to increase or decrease the size of the dataset.
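As a quick illustration (not part of the original answer), here is a minimal sketch of a stratified train/validation split using scikit-learn's train_test_split; the arrays X and y are made-up stand-ins for the bird features and labels, not data from the question.

```python
# Minimal sketch: stratified train/validation split with scikit-learn.
# X and y are hypothetical placeholders for the bird features and labels.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 20))                              # dummy feature matrix
y = rng.choice([0, 1, 2], size=1000, p=[0.8, 0.15, 0.05])    # imbalanced labels

# stratify=y keeps the class proportions the same in both splits
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Each split now mirrors the original ~80/15/5 class distribution
print(np.bincount(y_train) / len(y_train))
print(np.bincount(y_val) / len(y_val))
```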

The other options are not effective ways to improve the validation accuracy of the model. Acquiring additional data about the majority classes in the original dataset will only increase the imbalance and make the model more biased towards the majority classes. Using a smaller, randomly sampled version of the training dataset will not guarantee that the class distribution is preserved and may result in losing important information from the minority classes. Performing systematic sampling on the original dataset will also not ensure that the class distribution is preserved and may introduce sampling bias if the original dataset is ordered or grouped by class.
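To see why a plain random split can misrepresent rare classes, the sketch below (again with made-up labels) measures how much the minority-class share of the validation split varies across repeated splits, with and without stratification; the stratified version stays essentially constant.

```python
# Sketch: drift of the minority-class share under random vs. stratified splits.
# Uses a hypothetical label vector with a 2% minority class.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
y = rng.choice([0, 1], size=1000, p=[0.98, 0.02])  # 0 = majority, 1 = minority
X = np.zeros((1000, 1))                            # features are irrelevant here

def minority_share_std(stratify):
    # Repeat the split with different seeds and track the minority fraction
    shares = []
    for seed in range(200):
        _, _, _, y_val = train_test_split(
            X, y, test_size=0.2, stratify=stratify, random_state=seed
        )
        shares.append(y_val.mean())
    return np.std(shares)

print("std of minority share, random split:    ", minority_share_std(None))
print("std of minority share, stratified split:", minority_share_std(y))
```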

References:

* Stratified Sampling for Imbalanced Datasets

* Imbalanced Data

* Tour of Data Sampling Methods for Imbalanced Classification


Contribute your Thoughts:

Kaycee
7 days ago
Haha, using a smaller, randomly sampled version of the training data? That's like trying to lose weight by cutting off your foot - not gonna work, my dude. Definitely don't go with option C.
upvoted 0 times
Isaiah
8 days ago
Acquiring more data for the majority classes, as option B suggests, could also be a good approach. But that might take a lot of time and effort. Stratified sampling seems like the more efficient solution here.
upvoted 0 times
Julianna
9 days ago
I'd go with option A, stratified sampling. That way, you can ensure that the training and validation sets have the same distribution of classes, which should help the model generalize better. Randomized sampling can sometimes lead to skewed class distributions in the splits.
upvoted 0 times
Maira
10 days ago
Oh man, this question is a tricky one. We've all been there, building a model that does great on the training data but tanks on the validation set. Sounds like the engineer has an imbalanced dataset on their hands, which is a common problem in ML.
upvoted 0 times
