Amazon DVA-C02 Exam - Topic 7 Question 36 Discussion

Actual exam question for Amazon's DVA-C02 exam
Question #: 36
Topic #: 7

A company built an online event platform. For each event, the company organizes quizzes and generates leaderboards based on the quiz scores. The company stores the leaderboard data in Amazon DynamoDB and retains the data for 30 days after an event is complete. The company then uses a scheduled job to delete the old leaderboard data.

The DynamoDB table is configured with a fixed write capacity. During the months when many events occur, the DynamoDB write API requests are throttled when the scheduled delete job runs.

A developer must create a long-term solution that deletes the old leaderboard data and optimizes write throughput.

Which solution meets these requirements?

Suggested Answer: A

DynamoDB TTL (Time to Live): a native feature that automatically deletes items after a specified expiration time.

Efficiency: eliminates the scheduled deletion job entirely. TTL deletions run in the background and do not consume write capacity, so they cannot cause throttling.

Seamless integration: TTL works directly within DynamoDB, requiring minimal development overhead.


DynamoDB TTL Documentation: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html
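The TTL approach can be sketched with boto3. This is a minimal illustration, assuming a table named Leaderboard and a TTL attribute named expireAt; neither name appears in the question, and the AWS calls require credentials to actually run:

```python
import time

RETENTION_DAYS = 30  # the question states data is kept for 30 days


def expiry_epoch(created_at: float, days: int = RETENTION_DAYS) -> int:
    """Return the epoch-seconds timestamp after which DynamoDB TTL may
    delete the item. TTL attributes must be numeric epoch seconds."""
    return int(created_at) + days * 24 * 60 * 60


def enable_ttl_and_write(score_item: dict) -> None:
    """Enable TTL on the (assumed) Leaderboard table and write an item
    carrying its own expiration attribute. Illustration only."""
    import boto3  # deferred import so the helper above runs offline

    # One-time table setting: tell DynamoDB which attribute holds the expiry.
    boto3.client("dynamodb").update_time_to_live(
        TableName="Leaderboard",
        TimeToLiveSpecification={"Enabled": True, "AttributeName": "expireAt"},
    )
    # Each item now carries its own deletion deadline; DynamoDB removes
    # expired items in the background without consuming write capacity.
    table = boto3.resource("dynamodb").Table("Leaderboard")
    table.put_item(Item={**score_item, "expireAt": expiry_epoch(time.time())})
```

One caveat worth knowing for the exam: TTL deletion is not immediate (it typically happens within a few days of expiry), so queries that must never show stale rows should also filter on the expireAt attribute.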

Contribute your Thoughts:

Ona
3 months ago
TTL is definitely the way to go for automatic cleanup!
Olive
3 months ago
Higher write capacity sounds like a temporary fix, not a solution.
Vallie
3 months ago
Wait, can you really use Step Functions for this?
Rozella
4 months ago
I disagree, B could work well with Streams too.
Daniel
4 months ago
Option A seems like the best choice for TTL.
Jenelle
4 months ago
I feel like AWS Step Functions could add unnecessary complexity here. The TTL option seems simpler and more efficient for deleting old data.
Selma
4 months ago
I practiced a similar question where we had to optimize write throughput. I think increasing the write capacity temporarily could help, but it might not be the best long-term solution.
Holley
4 months ago
I'm not entirely sure, but I think using DynamoDB Streams might be more complex than just setting a TTL. It could be overkill for this requirement.
Rebeca
5 months ago
I remember we discussed using TTL attributes in DynamoDB to automatically delete items after a certain time. That seems like a good fit for this scenario.
Hyun
5 months ago
Hmm, increasing the write capacity (D) when the delete job runs doesn't seem like a great long-term solution. That could get expensive and doesn't really address the root issue of the write throttling. I'm leaning towards the TTL option (A) as the best fit.
Rasheeda
5 months ago
The key here is to find a way to delete the old data without impacting the write throughput. Option A with TTL seems like the simplest and most efficient approach. I'll make sure to understand how to properly configure the TTL attribute.
Ammie
5 months ago
I'm a bit confused by the different options here. The DynamoDB Streams and Step Functions solutions (B and C) seem overly complex for this use case. I'll need to review the details of each approach more carefully.
Mira
5 months ago
This seems like a straightforward DynamoDB optimization question. I think the TTL option (A) is the most elegant solution, as it allows automatic expiration of old data without having to manage a separate deletion process.
Lillian
5 months ago
Wait, I'm a bit confused. Is it retrieve, preview, and then commit? Or is it something else? I better re-read the question to make sure I understand it.
Bea
1 year ago
Option A is the clear winner here. TTL is built for this kind of use case. Set it and forget it, baby!
Cathrine
1 year ago
Yeah, it's definitely the easiest way to automatically delete old data and optimize write throughput.
Melissia
1 year ago
I agree, using a TTL attribute for the leaderboard data seems like the most efficient solution.
Norah
1 year ago
D? Really? Increasing write capacity just to accommodate a scheduled delete job? That's like using a sledgehammer to crack a nut.
Johnna
1 year ago
Hmm, I'm torn between B and C. Why not just use a serverless function triggered by a CloudWatch event? That's a simple yet effective solution.
Jesus
1 year ago
A: Yeah, that sounds like a reliable solution for the company's needs.
Angelica
1 year ago
B: I agree, it would help optimize write throughput and ensure old data is deleted in a timely manner.
Bok
1 year ago
A: I think using DynamoDB Streams to schedule and delete the leaderboard data is the best option.
Penney
1 year ago
I'd say C is the best choice. Step Functions can handle the scheduling and orchestration of the deletion process more robustly.
Tonja
1 year ago
Setting a higher write capacity might help with throttling, but it doesn't address the long-term solution for deleting old data.
Octavio
1 year ago
That's true. Step Functions can handle the scheduling and deletion in a more organized way.
Nu
1 year ago
But with DynamoDB Streams, you can have more control over the deletion process and ensure it runs smoothly.
Thurman
1 year ago
I think A could work too. Setting a TTL attribute would automatically delete the old data after 30 days.
Rosendo
1 year ago
I'm not sure about option B) or C), but setting a higher write capacity with option D) could also work.
Talia
1 year ago
Option B is the way to go. DynamoDB Streams make it easy to trigger a function to delete the old data without affecting write throughput.
Valda
1 year ago
That makes sense. It's important to optimize write throughput while deleting old data.
Roselle
1 year ago
I agree, using DynamoDB Streams sounds like the best solution for this scenario.
Jaclyn
1 year ago
I agree with Verona. Using TTL would automatically delete the old data and optimize write throughput.
Verona
1 year ago
I think option A) Configure a TTL attribute for the leaderboard data would be a good solution.
