
Google Professional Machine Learning Engineer Exam - Topic 6 Question 107 Discussion

Actual exam question for Google's Professional Machine Learning Engineer exam
Question #: 107
Topic #: 6

You are building a linear model with over 100 input features, all with values between -1 and 1. You suspect that many features are non-informative. You want to remove the non-informative features from your model while keeping the informative ones in their original form. Which technique should you use?

A) Use principal component analysis (PCA)
B) Use L1 regularization
C) Use Shapley values to determine feature importance after training
D) Use an iterative dropout technique

Suggested Answer: B

L1 regularization, also known as Lasso regularization, adds the sum of the absolute values of the model's coefficients to the loss function [1]. It encourages sparsity by shrinking some coefficients to exactly zero [2]. In this way, L1 regularization performs feature selection: non-informative features are effectively removed from the model (their coefficients become zero) while the informative ones are kept in their original form. Therefore, L1 regularization is the best technique for this use case.
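The zeroing behavior described above can be sketched with the soft-thresholding operator that coordinate-descent Lasso solvers apply to each coefficient. This is a minimal illustration in pure Python; the function name, the toy coefficient values, and the penalty strength are all illustrative assumptions, not part of the exam question.

```python
# Why L1 regularization produces exact zeros: the proximal step of a Lasso
# solver is the soft-thresholding operator, which maps any coefficient whose
# magnitude falls below the penalty strength exactly to zero, while only
# shrinking the larger (informative) coefficients.

def soft_threshold(coef, penalty):
    """Closed-form minimizer of 0.5*(w - coef)**2 + penalty*|w|."""
    if coef > penalty:
        return coef - penalty
    if coef < -penalty:
        return coef + penalty
    return 0.0  # small coefficients are set exactly to zero

# Hypothetical coefficients: two informative features, three near-zero ones.
raw_coefs = [0.9, -0.75, 0.03, -0.02, 0.05]
penalty = 0.1  # hypothetical regularization strength

sparse_coefs = [soft_threshold(c, penalty) for c in raw_coefs]
# The two large coefficients are slightly shrunk; the three small ones
# become exactly 0.0, i.e. their features are dropped from the model.
print(sparse_coefs)
```

By contrast, L2 regularization's shrinkage step only rescales coefficients toward zero without ever reaching it, which is why it cannot perform feature selection on its own.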


References:
[1] Regularization in Machine Learning - GeeksforGeeks
[2] Regularization in Machine Learning (with Code Examples) - Dataquest
[3] L1 And L2 Regularization Explained & Practical How To Examples
[4] L1 and L2 as Regularization for a Linear Model

Contribute your Thoughts:

Miesha
17 hours ago
I think L1 regularization is the best choice. It simplifies the model.
upvoted 0 times
Jill
6 days ago
Wait, iterative dropout? That sounds a bit unconventional for this scenario!
upvoted 0 times
Lucia
11 days ago
Definitely agree with L1 regularization! It's effective for feature selection.
upvoted 0 times
Luz
16 days ago
This question is like a buffet of feature selection methods - I'll have one of each, please!
upvoted 0 times
Jacquline
21 days ago
B) L1 regularization, the feature pruning ninja technique. Snip, snip!
upvoted 0 times
Cammy
27 days ago
C) Shapley values, the secret sauce for feature importance. Mmm, tasty.
upvoted 0 times
Genevieve
1 month ago
D) Iterative dropout sounds like a fun way to play feature selection roulette.
upvoted 0 times
Nu
1 month ago
B) L1 regularization is the way to go! Gotta love that sparsity.
upvoted 0 times
Temeka
1 month ago
I practiced a question similar to this where we used Shapley values to evaluate feature importance after model training.
upvoted 0 times
Tawna
2 months ago
I'm not entirely sure, but I think PCA is more about transforming features rather than just eliminating them.
upvoted 0 times
Rosina
2 months ago
I'm a bit confused here. A) PCA is more for dimensionality reduction, not really feature selection, right? I think I'll stick with B) L1 as the most straightforward option.
upvoted 0 times
Blythe
2 months ago
D) Iterative dropout could work, but that seems a bit more complicated than I'd want to try on an exam. I'm leaning towards B) L1 regularization - it's a classic feature selection method that I'm comfortable with.
upvoted 0 times
Benedict
2 months ago
Shapley values sound interesting, but is it really practical for 100+ features?
upvoted 0 times
Marion
2 months ago
I think L1 regularization is the way to go here!
upvoted 0 times
Louann
2 months ago
I remember discussing L1 regularization in class, and it seems like a good option since it can shrink some coefficients to zero.
upvoted 0 times
Tamesha
3 months ago
PCA won't really help keep features in their original form.
upvoted 0 times
Winfred
3 months ago
I feel like the iterative dropout technique could be useful, but I can't recall if it's commonly used for feature selection specifically.
upvoted 0 times
Victor
3 months ago
Hmm, I'm not sure. C) using Shapley values seems interesting, but I'm not super familiar with that technique. I'll have to look into it more.
upvoted 0 times
Chantell
3 months ago
I think I'd go with B) L1 regularization. That seems like a good way to automatically shrink the coefficients of the less important features to zero, keeping the important ones.
upvoted 0 times
