Google Professional Data Engineer Exam - Topic 4 Question 43 Discussion

Actual exam question for Google's Professional Data Engineer exam
Question #: 43
Topic #: 4

You are building a new data pipeline to share data between two different types of applications: job generators and job runners. Your solution must scale to accommodate increases in usage and must accommodate the addition of new applications without negatively affecting the performance of existing ones. What should you do?

Suggested Answer: A
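The answer options themselves are not reproduced on this page, but several commenters below point to Cloud Pub/Sub, which is the classic fit for these requirements: job generators publish to a topic, each runner application consumes from its own subscription, and adding a new application never touches the existing ones. As a rough illustration of that decoupling (a simplified in-process model in plain Python, not the actual google-cloud-pubsub client):

```python
from collections import defaultdict, deque

class Topic:
    """Simplified model of a Pub/Sub topic with independent subscriptions."""

    def __init__(self):
        # subscription name -> its own pending-message queue
        self._subs = defaultdict(deque)

    def subscribe(self, name):
        self._subs[name]  # create the queue if it does not exist yet
        return name

    def publish(self, message):
        # Fan out: every subscription receives its own copy of the message.
        for queue in self._subs.values():
            queue.append(message)

    def pull(self, name):
        # Each subscriber drains only its own queue; others are unaffected.
        queue = self._subs[name]
        return queue.popleft() if queue else None

# Job generators publish; each runner application has its own subscription.
jobs = Topic()
jobs.subscribe("runner-a")
jobs.publish({"job_id": 1})

# Adding a new runner later does not disturb runner-a's backlog.
jobs.subscribe("runner-b")
jobs.publish({"job_id": 2})

print(jobs.pull("runner-a"))  # {'job_id': 1}
print(jobs.pull("runner-b"))  # {'job_id': 2} (only messages published after it subscribed)
```

The point of the sketch is the independence property the question asks for: publishers know nothing about subscribers, and a new subscription only sees messages from the moment it is created, so existing consumers are unaffected.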

Contribute your Thoughts:

Kenny
4 months ago
Wait, can Cloud Spanner handle that much data?
upvoted 0 times
Vivan
4 months ago
D sounds interesting, but is it really necessary?
upvoted 0 times
Louvenia
4 months ago
C seems too limiting for future growth.
upvoted 0 times
Luz
4 months ago
I think A could work too, but not as efficiently.
upvoted 0 times
Markus
5 months ago
B is definitely the way to go for scalability!
upvoted 0 times
Clarence
5 months ago
I vaguely recall that using Cloud Pub/Sub helps with asynchronous processing, which seems crucial for job runners, but I can't remember all the details.
upvoted 0 times
Odette
5 months ago
I practiced a similar question where we had to choose between SQL and NoSQL options, and I feel like Cloud Spanner could be a good fit for scalability, but I'm not entirely confident.
upvoted 0 times
Alison
5 months ago
I think Cloud Pub/Sub might be the right choice here since it allows for decoupling and can handle increased loads, but I need to double-check the specifics.
upvoted 0 times
Freeman
5 months ago
I remember we discussed the importance of scalability in data pipelines, but I'm not sure if using an API is the best approach for this scenario.
upvoted 0 times
Yolando
5 months ago
This seems like a tricky situation. I'll need to carefully review the Scrum principles and processes to determine the best approach.
upvoted 0 times
Stephaine
5 months ago
Option C seems like the way to go. Casting the objects to sObject and using sObject.get('Name') is a simple and straightforward solution that should work for any object type.
upvoted 0 times
