
Google Professional Machine Learning Engineer Exam - Topic 4 Question 84 Discussion

Actual exam question for Google's Professional Machine Learning Engineer exam
Question #: 84
Topic #: 4

You work for a bank. You have created a custom model to predict whether a loan application should be flagged for human review. The input features are stored in a BigQuery table. The model is performing well, and you plan to deploy it to production. Due to compliance requirements, the model must provide explanations for each prediction. You want to add this functionality to your model code with minimal effort and provide explanations that are as accurate as possible. What should you do?

A. Create an AutoML tabular model with integrated Vertex Explainable AI.
B. Create a BigQuery ML deep neural network model, and use the ML.EXPLAIN_PREDICT method with the num_integral_steps parameter.
C. Upload the custom model to Vertex AI Model Registry, and configure feature-based attribution by using sampled Shapley.
D. Update the custom serving container to include sampled Shapley-based explanations.

Suggested Answer: D

Contribute your Thoughts:

Zona
3 months ago
D could work, but isn't it more complex to implement?
upvoted 0 times
Blondell
3 months ago
C seems like a solid choice for compliance needs.
upvoted 0 times
Darrel
4 months ago
Wait, can we really trust Shapley values for this?
upvoted 0 times
Audry
4 months ago
I think B is better for more control over the model.
upvoted 0 times
Tamera
4 months ago
Option A sounds like the easiest way to get integrated explanations.
upvoted 0 times
Rosann
4 months ago
Updating the custom serving container seems like a straightforward approach, but I’m unsure if it’s the most compliant option for providing explanations.
upvoted 0 times
Madonna
5 months ago
I feel like uploading the model to Vertex AI Model Registry could be a good option, especially with feature-based attribution, but I’m not entirely confident about the Shapley sampling part.
upvoted 0 times
Judy
5 months ago
I think using BigQuery ML with the ML.EXPLAIN_PREDICT method sounds familiar from our practice questions, but I can't recall the details about the num_integral_steps parameter.
upvoted 0 times
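For anyone who, like Judy, only half-remembers option B: below is a minimal sketch of running ML.EXPLAIN_PREDICT through the BigQuery Python client. The project, dataset, model, and table names are all hypothetical. The num_integral_steps parameter the question mentions tunes the integrated-gradients approximation used for DNN models; it is omitted here because its exact spelling and placement can vary across BigQuery ML versions, so only top_k_features is shown.

    from google.cloud import bigquery

    client = bigquery.Client(project="my-project")  # hypothetical project ID

    # ML.EXPLAIN_PREDICT returns each prediction together with its top
    # feature attributions, so no model code has to change.
    sql = """
    SELECT *
    FROM ML.EXPLAIN_PREDICT(
      MODEL `my-project.lending.loan_review_dnn`,         -- hypothetical DNN model
      (SELECT * FROM `my-project.lending.applications`),  -- hypothetical feature table
      STRUCT(5 AS top_k_features)                         -- attributions per row
    )
    """

    for row in client.query(sql).result():
        print(dict(row))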
Nakisha
5 months ago
I remember we discussed the importance of providing explanations for model predictions, but I'm not sure which method is the simplest to implement.
upvoted 0 times
Tawanna
5 months ago
I'm a bit confused by the different options presented here. Can we use AutoML to create a model with integrated Vertex Explainable AI, or do we need to go with a custom model approach? I'm not sure which one would be the best fit for this scenario.
upvoted 0 times
Cortney
5 months ago
Hmm, the key here is that the model needs to provide explanations for each prediction due to compliance requirements. I think option C might be the best approach - uploading the custom model to Vertex AI and configuring feature-based attribution using sampled Shapley.
upvoted 0 times
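To put Cortney's option C in concrete terms: the change is configuration at upload time rather than new model code. Here is a rough sketch with the Vertex AI Python SDK; every name, the bucket path, and the container image are placeholders, and the explanation metadata has to match the model's real input and output keys.

    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1")  # hypothetical

    # Sampled Shapley is model-agnostic; path_count trades attribution
    # accuracy against explanation latency and cost.
    params = aiplatform.explain.ExplanationParameters(
        {"sampled_shapley_attribution": {"path_count": 25}}
    )
    metadata = aiplatform.explain.ExplanationMetadata(
        inputs={"features": {}},   # must match the model's input name
        outputs={"flagged": {}},   # must match the prediction output key
    )

    model = aiplatform.Model.upload(
        display_name="loan-review-model",           # hypothetical
        artifact_uri="gs://my-bucket/loan-model/",  # hypothetical artifact path
        serving_container_image_uri=(
            "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"
        ),
        explanation_parameters=params,
        explanation_metadata=metadata,
    )

Once a model registered this way is deployed, explanations come back alongside predictions, which is why several commenters see C as the low-effort route for an existing custom model.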
Davida
5 months ago
This seems like a tricky question. I'm not sure if I fully understand the requirements around compliance and model explanations. I'll need to think this through carefully.
upvoted 0 times
Tamar
5 months ago
Okay, I'm feeling pretty confident about this one. The question is asking for a solution that provides accurate model explanations with minimal effort. Based on that, I'd go with option B and use the ML.EXPLAIN_PREDICT method in BigQuery ML.
upvoted 0 times
Teri
5 months ago
This is a good example of a question that requires carefully reading the details and understanding the relationships between the different Google Cloud resources. I'll make sure to take my time and not rush through the answer.
upvoted 0 times
Estrella
10 months ago
Option A sounds like the 'Easy Mode' for getting explainability. I hope it doesn't come with a 'Pay-to-Win' microtransaction plan though.
upvoted 0 times
Major
8 months ago
Let's hope it's just straightforward and doesn't come with any hidden costs.
upvoted 0 times
Una
9 months ago
I hope it doesn't end up being a 'Pay-to-Win' situation though.
upvoted 0 times
Catrice
9 months ago
Yeah, it's like the 'Easy Mode' for explainability.
upvoted 0 times
Skye
9 months ago
Option A does seem like the easiest way to add explanations to the model.
upvoted 0 times
Franchesca
10 months ago
Option D is my pick. Updating the custom serving container to include sampled Shapley-based explanations seems like the most accurate way to explain the model's predictions.
upvoted 0 times
Danica
9 months ago
That sounds like the most efficient way to meet compliance requirements while ensuring accurate explanations.
upvoted 0 times
Dominga
9 months ago
Agreed, updating the custom serving container with sampled Shapley-based explanations is the way to go.
upvoted 0 times
Gabriele
9 months ago
I think option D is the best choice too. It will provide accurate explanations for each prediction.
upvoted 0 times
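Since option D keeps coming up: "updating the custom serving container" means implementing the attribution yourself next to the model, which is why it scores high on control but also on effort. Below is a hedged sketch of the sampled Shapley approximation the option refers to; predict_fn, the baseline row, and the feature names are placeholders for whatever the container already serves.

    import random

    def sampled_shapley(predict_fn, instance, baseline, feature_names, num_paths=25):
        """Approximate Shapley values by averaging each feature's marginal
        contribution over randomly sampled feature orderings."""
        totals = {name: 0.0 for name in feature_names}
        for _ in range(num_paths):
            order = random.sample(feature_names, len(feature_names))
            current = dict(baseline)            # start from the baseline instance
            prev = predict_fn(current)
            for name in order:
                current[name] = instance[name]  # flip one feature to its real value
                score = predict_fn(current)
                totals[name] += score - prev    # marginal contribution in this ordering
                prev = score
        return {name: total / num_paths for name, total in totals.items()}

The serving handler would then attach these per-feature values to each prediction response; more paths means tighter attributions at higher latency, which is exactly the effort/accuracy trade-off the question is probing.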
Lai
10 months ago
Option B looks promising, using BigQuery ML's EXPLAIN_PREDICT method. It's a good way to get explanations without modifying the model too much.
upvoted 0 times
Tina
9 months ago
Let's go ahead and implement the BigQuery ML deep neural network model with the EXPLAIN_PREDICT method. It should help us meet the compliance requirements easily.
upvoted 0 times
Jesusa
10 months ago
I think we should go with Option B then. It's a simple solution that can provide the necessary explanations for compliance requirements.
upvoted 0 times
Felix
10 months ago
I agree. It's important to have accurate explanations for each prediction, and using the EXPLAIN_PREDICT method in BigQuery ML can help with that.
upvoted 0 times
Kaycee
10 months ago
Option B sounds like a good choice. It seems like a straightforward way to add explanations to the model.
upvoted 0 times
Kimbery
11 months ago
I'm leaning towards Option C. Uploading the custom model to Vertex AI Model Registry and configuring feature-based attribution using sampled Shapley allows me to keep more control over the model.
upvoted 0 times
Teddy
11 months ago
I'm not sure about option A. I think option C might provide more accurate explanations with feature-based attribution.
upvoted 0 times
Nakisha
11 months ago
I agree with Jacquelyne. Using AutoML with Vertex Explainable AI seems like the most efficient way to meet compliance requirements.
upvoted 0 times
Ronald
11 months ago
Option A seems like the easiest way to get explainability with minimal effort. AutoML Tabular models come with built-in Vertex Explainable AI, so that's an attractive choice.
upvoted 0 times
Raina
10 months ago
Yeah, uploading the custom model to Vertex AI Model Registry and configuring feature-based attribution with sampled Shapley sounds like a good approach for accurate explanations.
upvoted 0 times
Rickie
10 months ago
Creating a BigQuery ML deep neural network model with the EXPLAIN_PREDICT method could also work, but option A seems more straightforward.
upvoted 0 times
Yuki
10 months ago
I agree, using the integrated Vertex Explainable AI would make it easier to add explanations to the model code.
upvoted 0 times
Valda
10 months ago
I think option A is the best choice. AutoML Tabular models with Vertex Explainable AI would provide accurate explanations with minimal effort.
upvoted 0 times
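On the "Easy Mode" point: whether the model is AutoML or a custom upload, once it is deployed to an endpoint with explanations configured, retrieving them is a single call. A minimal sketch follows; the endpoint ID and the feature values are made up.

    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1")  # hypothetical

    endpoint = aiplatform.Endpoint("1234567890")  # hypothetical endpoint ID

    # explain() returns predictions plus per-feature attributions.
    response = endpoint.explain(
        instances=[{"income": 54000, "loan_amount": 12000, "term_months": 36}]
    )
    for explanation in response.explanations:
        for attribution in explanation.attributions:
            print(attribution.feature_attributions)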
Jacquelyne
11 months ago
I think option A sounds like a good choice for adding explanations to the model.
upvoted 0 times
