
Amazon MLS-C01 Exam - Topic 1 Question 85 Discussion

Actual exam question for Amazon's MLS-C01 exam
Question #: 85
Topic #: 1

An ecommerce company has developed an XGBoost model in Amazon SageMaker to predict whether a customer will return a purchased item. The dataset is imbalanced: only 5% of customers return items.

A data scientist must find hyperparameters that capture as many instances of returned items as possible. The company has a small budget for compute.

How should the data scientist meet these requirements MOST cost-effectively?

Suggested Answer: B

The best solution to meet the requirements is to tune the csv_weight hyperparameter and the scale_pos_weight hyperparameter by using automatic model tuning (AMT). Optimize on {"HyperParameterTuningJobObjective": {"MetricName": "validation:f1", "Type": "Maximize"}}.

The csv_weight hyperparameter is a flag that tells SageMaker XGBoost to read per-instance weights from the second column of CSV training data (the column after the label). Instance weights can help handle imbalanced data by assigning higher weights to minority-class examples and lower weights to majority-class examples. The scale_pos_weight hyperparameter controls the balance of positive and negative weights; it is typically set to the ratio of the number of negative-class examples to the number of positive-class examples. Setting a higher value for this hyperparameter increases the importance of the positive class and improves recall. Together, these hyperparameters help the XGBoost model capture as many instances of returned items as possible.
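As a minimal sketch, the usual starting value for scale_pos_weight can be computed directly from the labels. The class counts below are illustrative, not taken from the question's dataset:

```python
# Minimal sketch: computing a scale_pos_weight value from binary labels.
# The class counts below are illustrative assumptions.

def recommended_scale_pos_weight(labels):
    """Ratio of negative to positive examples, the common starting
    point for XGBoost's scale_pos_weight on imbalanced data."""
    n_pos = sum(1 for y in labels if y == 1)
    n_neg = len(labels) - n_pos
    return n_neg / n_pos

# A 5% return rate: 950 negatives and 50 positives.
labels = [1] * 50 + [0] * 950
print(recommended_scale_pos_weight(labels))  # 19.0
```

With a 5% positive rate, each returned-item example is weighted roughly 19 times as heavily as a non-returned one, pushing the model toward higher recall on returns.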

Automatic model tuning (AMT) is a feature of Amazon SageMaker that automates the process of finding the best hyperparameter values for a machine learning model. AMT uses Bayesian optimization to search the hyperparameter space and evaluate the model performance based on a predefined objective metric. The objective metric is the metric that AMT tries to optimize by adjusting the hyperparameter values. For imbalanced classification problems, accuracy is not a good objective metric, as it can be misleading and biased towards the majority class. A better objective metric is the F1 score, which is the harmonic mean of precision and recall. The F1 score can reflect the balance between precision and recall and is more suitable for imbalanced data. The F1 score ranges from 0 to 1, where 1 is the best possible value. Therefore, the type of the objective should be "Maximize" to achieve the highest F1 score.
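The arithmetic behind that choice can be sketched in a few lines; the precision and recall values here are assumed for illustration only:

```python
# Sketch: F1 as the harmonic mean of precision and recall.
# The precision/recall values are illustrative assumptions.

def f1_score(precision, recall):
    return 2 * precision * recall / (precision + recall)

# On a 5%-positive dataset, always predicting "no return" scores 95%
# accuracy but recall 0, so accuracy rewards ignoring the minority class.
# A model with modest precision but good recall on returns:
print(round(f1_score(0.50, 0.80), 3))  # 0.615
```

Because F1 collapses to 0 whenever recall is 0, maximizing it forces the tuner away from trivial majority-class predictors that plain accuracy would score highly.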

By tuning the csv_weight and scale_pos_weight hyperparameters and optimizing on the F1 score, the data scientist can meet the requirements most cost-effectively. This solution requires tuning only two hyperparameters, which can reduce the computation time and cost compared to tuning all possible hyperparameters. This solution also uses the appropriate objective metric for imbalanced classification, which can improve the model performance and capture more instances of returned items.
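A minimal sketch of what that setup could look like as the tuning-job configuration passed to SageMaker's CreateHyperParameterTuningJob API. The range bounds and resource limits are illustrative assumptions, and actually launching the job would additionally require an AWS session and a training job definition:

```python
# Sketch of a HyperParameterTuningJobConfig restricting AMT to the two
# weighting hyperparameters and maximizing validation:f1. Range bounds
# and resource limits are illustrative assumptions, not recommendations.

tuning_job_config = {
    "Strategy": "Bayesian",  # AMT's default search strategy
    "HyperParameterTuningJobObjective": {
        "MetricName": "validation:f1",
        "Type": "Maximize",
    },
    "ResourceLimits": {
        # Small compute budget: cap the total and concurrent jobs.
        "MaxNumberOfTrainingJobs": 10,
        "MaxParallelTrainingJobs": 2,
    },
    "ParameterRanges": {
        "ContinuousParameterRanges": [
            # Search around the ~19:1 negative:positive class ratio.
            {"Name": "scale_pos_weight", "MinValue": "1", "MaxValue": "50"},
        ],
        "IntegerParameterRanges": [
            # csv_weight is a 0/1 flag enabling per-instance CSV weights.
            {"Name": "csv_weight", "MinValue": "0", "MaxValue": "1"},
        ],
    },
}

# Tuning only these two hyperparameters keeps the search space, and
# therefore the number of training jobs AMT needs, small.
print(tuning_job_config["HyperParameterTuningJobObjective"]["MetricName"])
```

Restricting the parameter ranges to two hyperparameters is what makes the 10-job cap plausible: a full search over every XGBoost hyperparameter would need far more trials to converge.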

References:

* XGBoost Hyperparameters

* Automatic Model Tuning

* How to Configure XGBoost for Imbalanced Classification

* Imbalanced Data


Contribute your Thoughts:

Curt
3 months ago
I thought tuning all hyperparameters would be better for accuracy?
upvoted 0 times
...
Danica
3 months ago
Tuning csv_weight and scale_pos_weight is definitely the way to go.
upvoted 0 times
...
Emelda
4 months ago
Wait, why would you minimize the f1 score? That sounds off.
upvoted 0 times
...
Daniela
4 months ago
I agree, focusing on f1 score is key here!
upvoted 0 times
...
Germaine
4 months ago
Option B seems solid for imbalanced data.
upvoted 0 times
...
Carman
4 months ago
I thought we were supposed to minimize some metrics for imbalanced datasets, but I'm confused about whether we should maximize or minimize in this case.
upvoted 0 times
...
Daron
4 months ago
I practiced a similar question, and I think optimizing for F1 score makes sense since we want to capture more returns. So, C seems like a good choice.
upvoted 0 times
...
Laurel
4 months ago
I'm not entirely sure, but I feel like tuning all hyperparameters could be too costly given the budget constraints. Maybe we should just focus on the key ones?
upvoted 0 times
...
Dalene
5 months ago
I remember that for imbalanced datasets, focusing on metrics like F1 score is crucial, so I think option B or C might be the way to go.
upvoted 0 times
...
Fabiola
5 months ago
Okay, I think I've got this. Tuning the csv_weight and scale_pos_weight hyperparameters while optimizing for F1 score seems like the most targeted approach to address the imbalanced dataset and the company's requirements. I'm feeling pretty confident about this one.
upvoted 0 times
...
Donette
5 months ago
Hmm, I'm not sure about the scale_pos_weight hyperparameter. I know it's used to handle class imbalance, but I'm not sure how to best tune it. Maybe I should do some research on that before the exam.
upvoted 0 times
...
Rueben
5 months ago
I'm a bit confused about the objective function. Should we be maximizing or minimizing the F1 score? The question mentions a small budget, so I'm not sure if optimizing for accuracy would be the most cost-effective approach.
upvoted 0 times
...
Ruth
5 months ago
This looks like a classic imbalanced classification problem. I think tuning the hyperparameters to optimize for F1 score would be the best approach to capture as many returned items as possible.
upvoted 0 times
...
Charlene
2 years ago
You know, I was leaning towards option C as well. But then I remembered that the company has a small budget, and tuning all the hyperparameters might be overkill. Maybe we could compromise and try option B first, and if that's not cutting it, we can always go back and tune everything. But let's not get too fancy, we need to keep this simple and cost-effective.
upvoted 0 times
...
Valentine
2 years ago
Hmm, I'm not so sure. Optimizing for F1 score is a good idea, but I'm not convinced that just tuning the csv_weight and scale_pos_weight hyperparameters is enough. What if there are other hyperparameters that could really boost the model's performance on the minority class? I might lean more towards option C, where we tune all the hyperparameters but still optimize for F1 score.
upvoted 0 times
Lavera
2 years ago
C
upvoted 0 times
...
Marcelle
2 years ago
C
upvoted 0 times
...
Anissa
2 years ago
C
upvoted 0 times
...
Elinore
2 years ago
C
upvoted 0 times
...
...
Nana
2 years ago
I agree, option B seems like the most cost-effective approach. Tuning all the hyperparameters might be overkill, and the company has a small budget. Focusing on the key hyperparameters that can help with the imbalanced dataset is a smart move. And optimizing for F1 score instead of just accuracy is a good call.
upvoted 0 times
...
Loreta
2 years ago
Hmm, this is a tricky one. The dataset is imbalanced, with only 5% of customers returning items. We need to find a way to capture as many of those returned items as possible, but the company has a small budget for compute. I think option B might be the way to go. Tuning the csv_weight and scale_pos_weight hyperparameters specifically could help us focus on the minority class and maximize the F1 score, which is a good balance between precision and recall.
upvoted 0 times
...
