
Google Professional Machine Learning Engineer Exam - Topic 3 Question 92 Discussion

Actual exam question for Google's Professional Machine Learning Engineer exam
Question #: 92
Topic #: 3

Your organization manages an online message board. A few months ago, you discovered an increase in toxic language and bullying on the message board. You deployed an automated text classifier that flags certain comments as toxic or harmful. Now some users are reporting that benign comments referencing their religion are being misclassified as abusive. Upon further inspection, you find that your classifier's false positive rate is higher for comments that reference certain underrepresented religious groups. Your team has a limited budget and is already overextended. What should you do?
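The central technical detail in the scenario is a false positive rate that differs by group. Below is a minimal sketch of how such a per-group check might be computed; the names records, predict_toxic, and group_of are hypothetical placeholders and are not part of the exam question.

    from collections import defaultdict

    def false_positive_rate_by_group(records, predict_toxic, group_of):
        """records: iterable of (comment_text, is_actually_toxic) pairs."""
        fp = defaultdict(int)  # benign comments incorrectly flagged as toxic
        tn = defaultdict(int)  # benign comments correctly left unflagged
        for text, is_toxic in records:
            if is_toxic:
                continue  # false positive rate is measured over benign comments only
            group = group_of(text)  # e.g. which religious group, if any, the text references
            if predict_toxic(text):
                fp[group] += 1
            else:
                tn[group] += 1
        return {g: fp[g] / (fp[g] + tn[g]) for g in set(fp) | set(tn)}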

Suggested Answer: A

The classifier's false positive rate is higher for comments that mention certain underrepresented religious groups, which indicates that the training data contains too few examples of those references being used in benign ways. Adding synthetic training data in which those phrases appear in non-toxic contexts, and then retraining the existing classifier, directly addresses this imbalance while reusing the model you already have, which suits a team with a limited budget. Replacing the model with a different text classifier or switching to human moderation would be far more costly for an already overextended team, and simply raising the toxicity threshold would reduce false positives only by letting more genuinely toxic comments through for every group, undermining the original purpose of the system.
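As an illustration only (not taken from the exam material), synthetic non-toxic examples can be generated by filling benign templates with the terms that are currently over-flagged. The template strings and placeholder terms below are hypothetical.

    BENIGN_TEMPLATES = [
        "I am proud to be {term}.",
        "My {term} friends organised a charity event this weekend.",
        "As a {term} person, I found this discussion really welcoming.",
    ]

    OVERFLAGGED_TERMS = ["<religious group A>", "<religious group B>"]  # placeholders

    def synthetic_non_toxic_examples(templates, terms):
        """Yield (comment_text, label) pairs, all labelled non-toxic (0)."""
        for term in terms:
            for template in templates:
                yield template.format(term=term), 0

    # Append these to the existing training set, retrain the current classifier,
    # and re-check the per-group false positive rates on a held-out set.
    augmented_examples = list(synthetic_non_toxic_examples(BENIGN_TEMPLATES, OVERFLAGGED_TERMS))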



Contribute your Thoughts:

Graham
4 months ago
Raising the threshold could let more toxic stuff slip through.
upvoted 0 times
...
Hyun
4 months ago
Replacing the model might just create new issues.
upvoted 0 times
...
Tasia
4 months ago
Really? I’m surprised the model misclassifies religious comments.
upvoted 0 times
...
Kathryn
4 months ago
I disagree, human moderation is the way to go.
upvoted 0 times
...
Florencia
5 months ago
Sounds like adding synthetic data could help!
upvoted 0 times
...
Iola
5 months ago
Human moderation sounds like a solid option, but I worry about scalability. Can we really afford that with our current resources?
upvoted 0 times
...
Oliva
5 months ago
I think replacing the model could be risky, especially since we have budget constraints. We might need to stick with what we have.
upvoted 0 times
...
Donette
5 months ago
I'm not sure if just raising the threshold is a good idea. It might let more harmful comments slip through.
upvoted 0 times
...
Judy
5 months ago
I remember we talked about adding synthetic data to improve model accuracy. That could help with the false positives, right?
upvoted 0 times
...
Whitley
5 months ago
Replacing the model entirely is an option, but that could be a lot of work. I'll keep that in the back of my mind as a last resort.
upvoted 0 times
...
Makeda
5 months ago
Adding synthetic training data seems like a smart move to improve the model's performance. I'll make sure to note that down.
upvoted 0 times
...
Mignon
5 months ago
Okay, let's break this down. I think raising the threshold for toxic comments could be a good start, but we'll need to monitor it closely.
upvoted 0 times
...
Shantay
5 months ago
Hmm, I'm a bit stumped on this one. I'll have to review the details and see if I can spot the best solution.
upvoted 0 times
...
Kip
6 months ago
This is a tricky one. I'll need to think carefully about the pros and cons of each option.
upvoted 0 times
...
Mollie
11 months ago
Replacing the model? Sounds like a lot of work. Why not just teach the AI to recognize sarcasm and irony? Problem solved!
upvoted 0 times
Nathalie
10 months ago
Raising the threshold for what is considered toxic could reduce false positives without replacing the model.
upvoted 0 times
...
Launa
10 months ago
C) Replace your model with a different text classifier.
upvoted 0 times
...
Aimee
10 months ago
Removing the model and using human moderation might be more effective in the long run.
upvoted 0 times
...
Silva
10 months ago
B) Remove the model and replace it with human moderation.
upvoted 0 times
...
Mabelle
11 months ago
A) Add synthetic training data where those phrases are used in non-toxic ways
upvoted 0 times
...
Quinn
11 months ago
Adding synthetic training data could help improve the accuracy of the classifier.
upvoted 0 times
...
...
Larue
11 months ago
Synthetic data, huh? Sounds like a job for the AI squad. Though I'd keep an eye on the 'robot overlord' situation, just in case.
upvoted 0 times
...
Tandra
11 months ago
Hmm, option D sounds like the most practical approach given the budget constraints. Why make life harder for the mods, right?
upvoted 0 times
Royce
11 months ago
Yeah, it's important to make sure the mods aren't overwhelmed with unnecessary flagged comments.
upvoted 0 times
...
Becky
11 months ago
I agree, option D seems like the best choice to reduce false positives.
upvoted 0 times
...
...
Galen
11 months ago
I agree with Tracey, raising the threshold could be a simpler solution.
upvoted 0 times
...
Leigha
12 months ago
Oh boy, the old 'toxic language' vs. 'religious sensitivity' conundrum. I bet this one's a real headache for the dev team!
upvoted 0 times
Francisca
10 months ago
C) Replace your model with a different text classifier.
upvoted 0 times
...
Basilia
10 months ago
B) Remove the model and replace it with human moderation.
upvoted 0 times
...
Coral
11 months ago
A) Add synthetic training data where those phrases are used in non-toxic ways
upvoted 0 times
...
...
Tracey
12 months ago
But wouldn't it be better to raise the threshold for comments instead?
upvoted 0 times
...
Adrianna
12 months ago
I think we should add synthetic training data for those phrases.
upvoted 0 times
...
