
Google Professional Data Engineer Exam - Topic 4 Question 30 Discussion

Actual exam question for Google's Professional Data Engineer exam
Question #: 30
Topic #: 4

You are working on a niche product in the image recognition domain. Your team has developed a model that is dominated by custom TensorFlow ops implemented in C++. These ops are used inside the main training loop and perform bulky matrix multiplications. Training a model currently takes up to several days. You want to decrease this time significantly while keeping the cost low by using an accelerator on Google Cloud. What should you do?

Suggested Answer: B
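
For context on why the comments below debate GPU kernel support: Cloud TPUs execute only ops that XLA can compile, so custom C++ TensorFlow ops cannot run on a TPU as-is, whereas Cloud GPUs can run them once a GPU kernel is registered for each op. The following is a minimal sketch of TensorFlow's standard custom-op registration pattern; the macros (`REGISTER_OP`, `REGISTER_KERNEL_BUILDER`) and headers are the real TF C++ API, but the op name `BulkyMatMul` and its kernel class are hypothetical illustrations, not the exam's reference code, and the snippet only compiles inside a TensorFlow build.

```cpp
// Sketch only: assumes TensorFlow's C++ custom-op API.
// "BulkyMatMul" is a hypothetical op standing in for the team's custom ops.
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/op_kernel.h"
#include "tensorflow/core/framework/shape_inference.h"

using namespace tensorflow;

REGISTER_OP("BulkyMatMul")
    .Input("a: float")
    .Input("b: float")
    .Output("product: float")
    .SetShapeFn(shape_inference::MatMulShape);

// One OpKernel template can serve both devices if the heavy lifting
// is delegated to a device-specialized implementation.
template <typename Device>
class BulkyMatMulOp : public OpKernel {
 public:
  explicit BulkyMatMulOp(OpKernelConstruction* ctx) : OpKernel(ctx) {}
  void Compute(OpKernelContext* ctx) override {
    // ... dispatch to a CPU loop or a CUDA kernel based on Device ...
  }
};

// The CPU registration the team presumably already has.
REGISTER_KERNEL_BUILDER(Name("BulkyMatMul").Device(DEVICE_CPU),
                        BulkyMatMulOp<Eigen::ThreadPoolDevice>);

// The extra step implied by "implementing GPU kernel support":
// register a GPU kernel so the op can run on Cloud GPUs.
REGISTER_KERNEL_BUILDER(Name("BulkyMatMul").Device(DEVICE_GPU),
                        BulkyMatMulOp<Eigen::GpuDevice>);
```

Without the `DEVICE_GPU` registration, TensorFlow falls back to (or fails without) the CPU kernel for these ops, which is why simply attaching an accelerator does not by itself speed up a custom-op-dominated training loop.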

Contribute your Thoughts:

Amie
4 months ago
D is a waste of resources, just upgrade to TPUs or GPUs instead!
upvoted 0 times
Lorean
4 months ago
Wait, can you really just switch to TPUs without any changes? Sounds risky!
upvoted 0 times
Erasmo
4 months ago
A seems too optimistic; you'll need adjustments for TPUs.
upvoted 0 times
Temeka
4 months ago
I think C could be a good option too, but it might be slower.
upvoted 0 times
Sherell
5 months ago
Definitely go with B, TPUs need those custom ops to work well!
upvoted 0 times
Annabelle
5 months ago
Staying on CPUs seems like a bad idea, but I wonder if increasing the cluster size could help at all. It feels like a last resort option.
upvoted 0 times
Huey
5 months ago
I'm leaning towards using Cloud GPUs since they might be easier to integrate with custom ops, but I’m not completely confident about the performance compared to TPUs.
upvoted 0 times
Rebecka
5 months ago
I think we practiced a question where we had to implement GPU kernel support for custom ops before using TPUs. That might be necessary here too.
upvoted 0 times
Nichelle
5 months ago
I remember we discussed how TPUs can significantly speed up training, but I'm not sure if they can work with custom C++ ops without modifications.
upvoted 0 times
Stephen
5 months ago
I think the key here is that the R programmers are tasked with copying the data. That sounds like the Extract phase to me, where they'd use their R skills to pull the data from the source system.
upvoted 0 times
Chantay
5 months ago
Ah, this is a good one. I remember discussing subscriber keys and data deduplication in class. I'm feeling confident I can nail this.
upvoted 0 times
Jesusa
5 months ago
I recall a similar question where the focus was on application settings, and I'm pretty confident that it was about using the global policy.
upvoted 0 times
