
Confluent CCDAK Exam - Topic 4 Question 74 Discussion

Actual exam question for Confluent's CCDAK exam
Question #: 74
Topic #: 4

Compaction is enabled for a topic in Kafka by setting log.cleanup.policy=compact. What is true about log compaction?
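For context on the setting named in the question: `log.cleanup.policy` is the broker-wide default, while individual topics use the topic-level property `cleanup.policy`. A rough sketch of how it is applied in practice (topic name, bootstrap address, and partition counts are placeholders; commands require a running Kafka broker):

```shell
# Broker-wide default, set in server.properties:
#   log.cleanup.policy=compact

# Per-topic override at creation time (topic name is illustrative):
kafka-topics --bootstrap-server localhost:9092 \
  --create --topic user-profiles \
  --partitions 3 --replication-factor 1 \
  --config cleanup.policy=compact

# Or enable compaction on an existing topic:
kafka-configs --bootstrap-server localhost:9092 \
  --alter --entity-type topics --entity-name user-profiles \
  --add-config cleanup.policy=compact
```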

Suggested Answer: C

Kafka's new bidirectional client compatibility introduced in 0.10.2 allows this. Read more here: https://www.confluent.io/blog/upgrading-apache-kafka-clients-just-got-easier/
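The behavior the commenters below debate (latest vs. first value per key, and whether offsets change) can be sketched with a toy model. This is an illustration of compaction semantics, not Kafka internals: compaction keeps only the most recent record for each key, and surviving records keep their original offsets, so the log ends up with gaps rather than renumbered messages.

```python
# Toy model of Kafka log compaction (illustration only, not Kafka internals).
# A log is a list of (offset, key, value) records. Compaction keeps, for each
# key, only the record with the highest offset; survivors keep their original
# offsets, so offsets are never rewritten -- the log just gains holes.

def compact(log):
    latest = {}  # key -> offset of the most recent record for that key
    for offset, key, _value in log:
        latest[key] = offset
    return [(o, k, v) for o, k, v in log if latest[k] == o]

log = [
    (0, "user1", "alice"),
    (1, "user2", "bob"),
    (2, "user1", "alicia"),  # newer value for user1 supersedes offset 0
    (3, "user2", None),      # tombstone marking user2 for deletion
]

print(compact(log))  # [(2, 'user1', 'alicia'), (3, 'user2', None)]
```

Tombstone handling is simplified here: real compaction retains tombstones for `delete.retention.ms` before removing them, and compaction never compresses message payloads.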


Contribute your Thoughts:

Annabelle
3 months ago
B is definitely wrong; compaction doesn’t compress messages.
upvoted 0 times
...
Iraida
3 months ago
Wait, does that mean older messages can just disappear?
upvoted 0 times
...
Gussie
3 months ago
Log compaction is super useful for reducing storage!
upvoted 0 times
...
Roselle
4 months ago
I thought it kept the first value, not the latest?
upvoted 0 times
...
Devorah
4 months ago
D is correct! Only the latest value is kept.
upvoted 0 times
...
Sabina
4 months ago
I practiced a similar question, and I think the key point is that compaction retains the latest message per key, so D seems correct.
upvoted 0 times
...
Myra
4 months ago
I’m a bit confused about the difference between compaction and regular cleanup. Does compaction really de-duplicate messages?
upvoted 0 times
...
Sage
4 months ago
I think option D sounds right because it mentions retaining the latest value, which is what we discussed in class.
upvoted 0 times
...
Ezekiel
5 months ago
I remember that log compaction keeps only the latest value for each key, but I'm not sure if it changes the offsets.
upvoted 0 times
...
Reynalda
5 months ago
I'm a bit confused by this question. I know log compaction is used to reduce storage, but I'm not sure about the details of how it works. I'll have to review my notes on Kafka to make sure I understand this properly before answering.
upvoted 0 times
...
Angella
5 months ago
Okay, let me see. I remember that log compaction is about reducing storage by removing duplicate messages. I think the key thing is that it keeps the latest value for each key, not the first. So I'm going to go with option D.
upvoted 0 times
...
Lonna
5 months ago
Hmm, I'm a bit unsure about this. I know log compaction has something to do with retaining only the latest value for each key, but I can't remember if it's the first or latest value that gets kept. I'll have to think this through carefully.
upvoted 0 times
...
Xenia
5 months ago
I'm pretty confident about this one. I think the answer is D - after cleanup, only one message per key is retained with the latest value.
upvoted 0 times
...
Cristina
10 months ago
Haha, I bet the exam writers are trying to trick us with these options. Kafka is all about distributed logging, so 'compaction' must be some sort of magic that makes it all work, right?
upvoted 0 times
Leanna
9 months ago
Compaction changes the offset of messages
upvoted 0 times
...
Jackie
9 months ago
D) After cleanup, only one message per key is retained with the latest value
upvoted 0 times
...
Rolande
9 months ago
A) After cleanup, only one message per key is retained with the first value
upvoted 0 times
...
...
Jeannetta
10 months ago
This is a tricky one. I know compaction changes the offsets, so I'm not sure if option A or D is the right choice. I'll have to think about this one more.
upvoted 0 times
...
Carmelina
10 months ago
I'm a bit confused. Isn't log compaction supposed to compress the messages as well? Option B seems like it could be the answer.
upvoted 0 times
Orville
10 months ago
Log compaction actually removes duplicate keys and retains the latest value.
upvoted 0 times
...
Gilma
10 months ago
Option B is not correct. Log compaction does not compress messages.
upvoted 0 times
...
...
Lorrie
10 months ago
Hmm, I was thinking option C was the right answer. Doesn't Kafka de-duplicate messages based on the key hash during log compaction?
upvoted 0 times
...
Xochitl
11 months ago
I'm not sure about this. I think the answer might be A) After cleanup, only one message per key is retained with the first value. It could be more efficient to keep the initial value for each key.
upvoted 0 times
...
Torie
11 months ago
I think option D is the correct answer. Log compaction retains only the latest value for each unique key.
upvoted 0 times
Gracia
9 months ago
Compaction changes the offset of messages
upvoted 0 times
...
Jacqueline
9 months ago
D) After cleanup, only one message per key is retained with the latest value
upvoted 0 times
...
Virgie
10 months ago
A) After cleanup, only one message per key is retained with the first value
upvoted 0 times
...
...
Mireya
11 months ago
I agree with Lewis. It's important to retain the latest value for each key after compaction to ensure the most up-to-date information is available in the topic.
upvoted 0 times
...
Lewis
11 months ago
I think the answer is D) After cleanup, only one message per key is retained with the latest value. It makes sense to keep the most recent value for each key.
upvoted 0 times
...
