Welcome to Pass4Success


Google Professional Machine Learning Engineer Exam - Topic 6 Question 107 Discussion

Actual exam question for Google's Professional Machine Learning Engineer exam
Question #: 107
Topic #: 6
[All Professional Machine Learning Engineer Questions]

You are building a linear model with over 100 input features, all with values between -1 and 1. You suspect that many features are non-informative. You want to remove the non-informative features from your model while keeping the informative ones in their original form. Which technique should you use?

A. Principal component analysis (PCA)
B. L1 regularization
C. Shapley values
D. Iterative dropout

Suggested Answer: B

L1 regularization, also known as Lasso regularization, adds the sum of the absolute values of the model's coefficients to the loss function [1]. It encourages sparsity by shrinking some coefficients to exactly zero [2]. In this way, L1 regularization performs feature selection: the non-informative features drop out of the model entirely, while the informative ones keep their original form (unlike PCA, which replaces the features with transformed combinations). Therefore, L1 regularization is the best technique for this use case.
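As a minimal sketch of this sparsity effect, the snippet below fits a lasso model by proximal gradient descent (ISTA) on synthetic data matching the question's setup: 100 features with values in [-1, 1], of which only the first 5 are informative. The data, penalty strength, and iteration count are illustrative choices, not part of the original question.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 100 features in [-1, 1], only the first 5 are informative.
n, d = 500, 100
X = rng.uniform(-1.0, 1.0, size=(n, d))
true_w = np.zeros(d)
true_w[:5] = [2.0, -1.5, 1.0, 0.5, -0.5]
y = X @ true_w + 0.01 * rng.standard_normal(n)

def lasso_ista(X, y, lam, n_iters=2000):
    """Minimize (1/2n)||Xw - y||^2 + lam*||w||_1 via proximal gradient (ISTA)."""
    n, d = X.shape
    w = np.zeros(d)
    # Step size: inverse Lipschitz constant of the smooth part's gradient.
    step = n / np.linalg.norm(X, 2) ** 2
    for _ in range(n_iters):
        grad = X.T @ (X @ w - y) / n
        z = w - step * grad
        # Soft-thresholding: the proximal operator of the L1 penalty.
        # This is what sets small coefficients to exactly zero.
        w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return w

w = lasso_ista(X, y, lam=0.1)
print("non-zero coefficients:", np.count_nonzero(w))
```

The soft-thresholding step is the key: any coefficient whose gradient signal is smaller than the penalty is clamped to exactly zero, so the non-informative features are removed while the surviving features stay in their original, untransformed form.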


[1] Regularization in Machine Learning - GeeksforGeeks
[2] Regularization in Machine Learning (with Code Examples) - Dataquest
[3] L1 And L2 Regularization Explained & Practical How To Examples
[4] L1 and L2 as Regularization for a Linear Model

Contribute your Thoughts:

Iraida
7 days ago
PCA can lose interpretability. I’d stick with L1 for clarity.
upvoted 0 times
...
Johnetta
12 days ago
L1 regularization directly sets uninformative features to zero. Very efficient!
upvoted 0 times
...
Olive
17 days ago
Iterative dropout sounds interesting, but it might take too long.
upvoted 0 times
...
Stephen
23 days ago
Shapley values are great for understanding feature importance after modeling.
upvoted 0 times
...
Jerry
28 days ago
PCA is good, but it transforms features. I prefer keeping them as is.
upvoted 0 times
...
Miesha
2 months ago
I think L1 regularization is the best choice. It simplifies the model.
upvoted 0 times
...
Jill
2 months ago
Wait, iterative dropout? That sounds a bit unconventional for this scenario!
upvoted 0 times
...
Lucia
2 months ago
Definitely agree with L1 regularization! It's effective for feature selection.
upvoted 0 times
...
Luz
2 months ago
This question is like a buffet of feature selection methods - I'll have one of each, please!
upvoted 0 times
...
Jacquline
2 months ago
B) L1 regularization, the feature pruning ninja technique. Snip, snip!
upvoted 0 times
...
Cammy
2 months ago
C) Shapley values, the secret sauce for feature importance. Mmm, tasty.
upvoted 0 times
...
Genevieve
3 months ago
D) Iterative dropout sounds like a fun way to play feature selection roulette.
upvoted 0 times
...
Nu
3 months ago
B) L1 regularization is the way to go! Gotta love that sparsity.
upvoted 0 times
...
Temeka
3 months ago
I practiced a question similar to this where we used Shapley values to evaluate feature importance after model training.
upvoted 0 times
...
Tawna
3 months ago
I'm not entirely sure, but I think PCA is more about transforming features rather than just eliminating them.
upvoted 0 times
...
Rosina
3 months ago
I'm a bit confused here. A) PCA is more for dimensionality reduction, not really feature selection, right? I think I'll stick with B) L1 as the most straightforward option.
upvoted 0 times
...
Blythe
3 months ago
D) Iterative dropout could work, but that seems a bit more complicated than I'd want to try on an exam. I'm leaning towards B) L1 regularization - it's a classic feature selection method that I'm comfortable with.
upvoted 0 times
...
Benedict
4 months ago
Shapley values sound interesting, but is it really practical for 100+ features?
upvoted 0 times
...
Marion
4 months ago
I think L1 regularization is the way to go here!
upvoted 0 times
...
Louann
4 months ago
I remember discussing L1 regularization in class, and it seems like a good option since it can shrink some coefficients to zero.
upvoted 0 times
...
Tamesha
4 months ago
PCA won't really help keep features in their original form.
upvoted 0 times
...
Winfred
4 months ago
I feel like the iterative dropout technique could be useful, but I can't recall if it's commonly used for feature selection specifically.
upvoted 0 times
...
Victor
5 months ago
Hmm, I'm not sure. C) using Shapley values seems interesting, but I'm not super familiar with that technique. I'll have to look into it more.
upvoted 0 times
...
Chantell
5 months ago
I think I'd go with B) L1 regularization. That seems like a good way to automatically shrink the coefficients of the less important features to zero, keeping the important ones.
upvoted 0 times
Francesco
2 days ago
I agree, L1 regularization is effective for feature selection.
upvoted 0 times
...
...
