
Confluent Exam CCDAK Topic 4 Question 74 Discussion

Actual exam question for Confluent's CCDAK exam
Question #: 74
Topic #: 4
[All CCDAK Questions]

Compaction is enabled for a topic in Kafka by setting log.cleanup.policy=compact. What is true about log compaction?
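As background for the question, the cleanup policy can be set either as a broker-wide default or per topic. A sketch of both forms, with names taken from the Kafka documentation (the topic name `user-profiles` is just an illustrative example):

```properties
# Broker default (server.properties): applies to topics with no override
log.cleanup.policy=compact

# Per-topic override uses the topic-level name "cleanup.policy", e.g.:
# kafka-topics.sh --create --topic user-profiles --partitions 3 \
#     --replication-factor 3 --config cleanup.policy=compact
```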

Suggested Answer: C

Log compaction retains at least the last known value for each message key within a partition's log. After cleanup, only one message per key remains, carrying the latest value; a message with a null value acts as a tombstone and eventually removes that key entirely. Read more here: https://kafka.apache.org/documentation/#compaction
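The retention rule at issue, "keep only the latest value per key", can be illustrated with a minimal Python sketch. This is not Kafka code, just a model of the compaction semantics, including null-value tombstones:

```python
def compact(log):
    """Model Kafka log compaction over a list of (key, value) records:
    keep only the most recent value per key; a None value is a
    tombstone that removes the key entirely."""
    latest = {}
    for key, value in log:
        if value is None:
            latest.pop(key, None)  # tombstone deletes the key
        else:
            latest[key] = value    # later values overwrite earlier ones
    return list(latest.items())

# Repeated keys: only the latest value per key survives compaction,
# and the tombstone (None) removes "k2".
log = [("k1", "v1"), ("k2", "v1"), ("k1", "v2"), ("k3", "v1"), ("k2", None)]
print(compact(log))  # [('k1', 'v2'), ('k3', 'v1')]
```

Note that real compaction runs asynchronously on closed log segments, so duplicates for a key can still be observed before the cleaner has run; only the eventual state is guaranteed to hold one message per key.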


Contribute your Thoughts:

Lorrie
3 hours ago
Hmm, I was thinking option C was the right answer. Doesn't Kafka de-duplicate messages based on the key hash during log compaction?
upvoted 0 times
Xochitl
7 days ago
I'm not sure about this. I think the answer might be A) After cleanup, only one message per key is retained with the first value. It could be more efficient to keep the initial value for each key.
upvoted 0 times
Torie
9 days ago
I think option D is the correct answer. Log compaction retains only the latest value for each unique key.
upvoted 0 times
Mireya
9 days ago
I agree with Lewis. It's important to retain the latest value for each key after compaction to ensure the most up-to-date information is available in the topic.
upvoted 0 times
Lewis
12 days ago
I think the answer is D) After cleanup, only one message per key is retained with the latest value. It makes sense to keep the most recent value for each key.
upvoted 0 times