
Salesforce Certified Heroku Architect (Plat-Arch-206) Exam - Topic 1 Question 20 Discussion

Actual exam question for the Salesforce Certified Heroku Architect (Plat-Arch-206) exam
Question #: 20
Topic #: 1
[All Salesforce Certified Heroku Architect (Plat-Arch-206) Questions]

Universal Containers (UC) uses Apache Kafka on Heroku to stream shipment inventory data in real time throughout the world. A Kafka topic is used to send messages with updates on the shipping containers' GPS coordinates while they are in transit. UC is using a Heroku Kafka basic-0 plan. The topic was provisioned with 8 partitions, 1 week of retention, and no compaction. The keys for the events are assigned by Heroku Kafka, which means they are randomly distributed among the partitions.

UC has a single-dyno consumer application that persists the data to their Enterprise Data Warehouse (EDW). Recently, they've been noticing data loss in the EDW.

What should an Architect with Kafka experience recommend?

Suggested Answer: D

Contribute your Thoughts:

Alton
3 months ago
Using Redis for message receipts is a smart move for reliability!
upvoted 0 times
...
Vi
3 months ago
Wait, why are they losing data if they're using Kafka? That seems odd.
upvoted 0 times
...
Justine
4 months ago
Upgrading the plan could definitely solve capacity issues.
upvoted 0 times
...
Deandrea
4 months ago
Compaction won't help with data loss; it only drops older messages that share a key.
upvoted 0 times
...
Leslee
4 months ago
Sounds like they need to scale up those consumer dynos!
upvoted 0 times
...
Chantell
4 months ago
Scaling up the consumer dynos makes sense, but I’m a bit confused about how that interacts with the partitions. Would we really need one consumer per partition?
upvoted 0 times
...
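Chantell's question above has a concrete answer: Kafka assigns each partition to at most one consumer within a group, so consumers beyond the partition count sit idle. A minimal plain-Python simulation of a round-robin-style assignment (no Kafka client involved; the function name is illustrative, not a real API) makes the limit visible:

```python
def assign_partitions(partitions, consumers):
    """Simulate partition assignment in a Kafka consumer group:
    each partition goes to exactly one consumer, round-robin."""
    assignment = {c: [] for c in range(consumers)}
    for p in range(partitions):
        assignment[p % consumers].append(p)
    return assignment

# 8 partitions, 8 consumers: one partition each -- maximum parallelism.
print(assign_partitions(8, 8))

# 8 partitions, 10 consumers: two consumers receive nothing to do.
print(assign_partitions(8, 10))
```

So for UC's 8-partition topic, scaling the consumer to 8 dynos (one process per partition) is the useful ceiling; more than that adds no throughput.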
Cecily
4 months ago
Using Redis for message receipts sounds familiar, especially for ensuring at-least-once delivery. I think we practiced a similar scenario in class.
upvoted 0 times
...
Son
5 months ago
I think upgrading the Kafka plan could be a good option, but I wonder if that alone would solve the data loss issue.
upvoted 0 times
...
Eleni
5 months ago
I remember we discussed the importance of message delivery guarantees in Kafka, but I'm not sure if enabling compaction would really help with data loss.
upvoted 0 times
...
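Eleni's doubt is well placed. A quick simulation of what log compaction actually does (plain Python, with hypothetical GPS records) shows it discards older values per key to save space; it does nothing to protect against consumer-side loss:

```python
def compact(log):
    """Simulate Kafka log compaction: retain only the newest record
    for each key; earlier values for that key are discarded."""
    latest = {}
    for key, value in log:   # log is in offset (arrival) order
        latest[key] = value  # later offsets overwrite earlier ones
    return list(latest.items())

# Hypothetical shipping-container GPS updates, keyed by container ID.
log = [("container-7", (10.0, 20.0)),
       ("container-7", (10.5, 20.2)),
       ("container-9", (33.1, 44.8))]
print(compact(log))  # only the latest position per container survives
```

Compaction actively removes data (the superseded positions), so enabling it would make UC retain less history, not more.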
Christiane
5 months ago
Whoa, this is a tricky one. I don't want to mess up the termination process and end up in legal trouble. Maybe I should review the company's employee handbook before answering this.
upvoted 0 times
...
Rashida
5 months ago
Hmm, this looks like a tricky one. I'll need to carefully consider the options and think through the best approach.
upvoted 0 times
...
Kirk
5 months ago
Hmm, I'm a bit unsure about the differences between shared file systems and cloud storage for high availability. I'll need to think this through carefully.
upvoted 0 times
...
Sang
10 months ago
I like how C tackles the problem from multiple angles - the Redis store and the scaled-up consumers. That's a more robust solution.
upvoted 0 times
Ora
8 months ago
Yeah, C definitely seems like the way to go to ensure data integrity.
upvoted 0 times
...
Lore
8 months ago
Agreed, C seems like the most comprehensive solution to prevent data loss.
upvoted 0 times
...
Rossana
8 months ago
I think C is the best option here. It covers all the bases.
upvoted 0 times
...
Junita
8 months ago
B) Upgrade to a larger Apache Kafka for Heroku plan, which has greater data capacity.
upvoted 0 times
...
Barabara
9 months ago
A) Enable compaction on the topic to drop older messages, which will drop older messages with the same key.
upvoted 0 times
...
Nan
9 months ago
C) Use Heroku Redis to store message receipt information to account for 'at-least' once delivery, which will guarantee that messages are never processed more than once. Scale up the consumer dynos to match the number of partitions so that there is one process for each partition.
upvoted 0 times
...
...
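The receipt-store idea in option C can be sketched in plain Python, with a set standing in for Heroku Redis and a list standing in for the EDW (all names here are illustrative; a real consumer would use a Redis client and commit Kafka offsets). Under at-least-once delivery the broker may redeliver a message, and the receipt check turns the redelivery into a no-op:

```python
processed = set()  # stand-in for a Redis set of message receipts

def handle(msg_id, payload, sink):
    """Idempotent consumer step: skip messages whose receipt is
    already recorded, so redeliveries are not persisted twice."""
    if msg_id in processed:
        return False          # duplicate delivery -- skip
    sink.append(payload)      # persist to the EDW (stand-in list)
    processed.add(msg_id)     # record the receipt after persisting
    return True

edw = []
handle("gps-001", {"lat": 51.5, "lon": -0.1}, edw)
handle("gps-001", {"lat": 51.5, "lon": -0.1}, edw)  # redelivered
print(len(edw))  # the duplicate was dropped
```

Recording the receipt only after the write succeeds is the key design choice: a crash between the two steps causes a redelivery (duplicate attempt), never a silent loss.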
Zoila
10 months ago
Compaction might help with older messages, but it won't address the data loss issue. C is the best option to guarantee message delivery.
upvoted 0 times
Stefanie
8 months ago
C
upvoted 0 times
...
Argelia
9 months ago
A
upvoted 0 times
...
...
Howard
10 months ago
Option B seems like overkill. Upgrading the Kafka plan is not necessary if the issue is with the consumer application.
upvoted 0 times
Leonida
8 months ago
C) Use Heroku Redis to store message receipt information to account for 'at-least' once delivery, which will guarantee that messages are never processed more than once. Scale up the consumer dynos to match the number of partitions so that there is one process for each partition.
upvoted 0 times
...
Lashawn
9 months ago
A) Enable compaction on the topic to drop older messages, which will drop older messages with the same key.
upvoted 0 times
...
Lashanda
9 months ago
C) Use Heroku Redis to store message receipt information to account for 'at-least' once delivery, which will guarantee that messages are never processed more than once. Scale up the consumer dynos to match the number of partitions so that there is one process for each partition.
upvoted 0 times
...
Whitley
9 months ago
A) Enable compaction on the topic to drop older messages, which will drop older messages with the same key.
upvoted 0 times
...
...
Amina
10 months ago
I think the correct answer is C. Using Heroku Redis to store message receipt information and scaling up the consumer dynos will help ensure at-least once delivery and prevent data loss.
upvoted 0 times
...
Tamie
10 months ago
I believe upgrading to a larger Apache Kafka plan might also solve the issue.
upvoted 0 times
...
Norah
10 months ago
I agree with Cassi. Compaction will help prevent data loss in the EDW.
upvoted 0 times
...
Cassi
10 months ago
I think we should enable compaction on the topic to drop older messages.
upvoted 0 times
...
Sharika
10 months ago
I believe upgrading to a larger Apache Kafka plan might also solve the issue.
upvoted 0 times
...
Rodrigo
11 months ago
I agree with Ellsworth. Compaction will help prevent data loss in the EDW.
upvoted 0 times
...
Ellsworth
11 months ago
I think we should enable compaction on the topic to drop older messages.
upvoted 0 times
...
