Amazon DVA-C02 Exam - Topic 3 Question 44 Discussion

Actual exam question for Amazon's DVA-C02 exam
Question #: 44
Topic #: 3

A developer is receiving an intermittent ProvisionedThroughputExceededException error from an application that is based on Amazon DynamoDB. According to the Amazon CloudWatch metrics for the table, the application is not exceeding the provisioned throughput. What could be the cause of the issue?

A. The table storage size is larger than the provisioned size.
B. The application is exceeding the provisioned throughput capacity on a particular hash key.
C. The table is exceeding the provisioned scaling operations.
D. The application is exceeding the provisioned throughput capacity on a particular sort key.

Suggested Answer: B

DynamoDB distributes provisioned throughput across partitions based on the hash (partition) key. A hot partition, caused by heavy use of a single hash key, can trigger a ProvisionedThroughputExceededException even when the table's overall usage is below its provisioned capacity, which is why the table-level CloudWatch metrics look healthy.

Why Option B is Correct:

Partition-Level Limits: Each partition has a limit of 3,000 read capacity units or 1,000 write capacity units per second.

Hot Partition: Excessive use of a single hash key can overwhelm its partition.

Why Not Other Options:

Option A: DynamoDB storage size does not affect throughput.

Option C: Provisioned scaling operations are unrelated to throughput errors.

Option D: Sort keys do not impact partition-level throughput.


DynamoDB Partition Key Design Best Practices
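The best-practices guide above recommends write sharding to mitigate hot partitions: appending a bounded suffix to the partition key so traffic for one logical key spreads across several physical partitions. Below is a minimal Python sketch of the idea; the shard count and key names are illustrative assumptions, not part of the exam question.

```python
import hashlib

# Assumption for illustration: spread each hot logical key across 10 shards.
NUM_SHARDS = 10

def sharded_partition_key(logical_key: str, sort_key: str) -> str:
    """Deterministically map (logical_key, sort_key) to one of NUM_SHARDS
    physical partition keys, e.g. 'device123' -> 'device123#7'.

    Hashing the sort key (instead of picking a random suffix) keeps the
    mapping stable, so a single item can still be read back directly.
    """
    digest = hashlib.sha256(sort_key.encode()).digest()
    shard = digest[0] % NUM_SHARDS
    return f"{logical_key}#{shard}"

def all_shard_keys(logical_key: str) -> list[str]:
    """Partition keys to query when reading *all* items for a logical key:
    a scatter-gather across every shard."""
    return [f"{logical_key}#{i}" for i in range(NUM_SHARDS)]
```

The trade-off: writes for a hot key now have up to NUM_SHARDS partitions' worth of throughput headroom, but reading everything for that key requires querying each shard key and merging the results.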

Contribute your Thoughts:

Karon
3 months ago
Really? I thought the provisioned throughput was always reliable.
Paris
3 months ago
Wait, are we sure it's not a scaling operation issue?
Rose
4 months ago
Definitely not the storage size, that's not how it works.
Mindy
4 months ago
I think it's more likely the sort key that's causing the problem.
Theron
4 months ago
Could be the hash key issue, happens often.
Chantell
4 months ago
I thought the sort key could also play a role in throughput issues, but I'm not confident about that. It seems like a tricky question!
Cammy
4 months ago
I'm not entirely sure, but I feel like the scaling operations might be a factor. I remember something about limits on how many writes can be processed in a short time.
Karrie
5 months ago
I think I came across a similar question where the issue was tied to hot partitions. Could it be that the application is hitting a particular hash key too hard?
Tashia
5 months ago
I remember reading that ProvisionedThroughputExceededException can happen even if the overall throughput isn't exceeded, so maybe it's related to a specific hash key?
Ashlyn
5 months ago
Ah, I think I've got it! The issue is likely with a particular hash or sort key being overloaded. That would cause hotspots and trigger the ProvisionedThroughputExceededException, even if the overall metrics look fine. I'll focus on that possibility.
Earleen
5 months ago
I'm a bit stumped on this one. The information provided doesn't seem to clearly point to any of the options. I'll have to review the DynamoDB concepts and see if I can figure out what else could be going on.
Octavio
5 months ago
Okay, let's see. If the table size is larger than the provisioned size, that could definitely cause the exception. But the question says the metrics don't show that. Interesting...
Marla
5 months ago
Hmm, the CloudWatch metrics aren't showing any issues with provisioned throughput, so that's a bit puzzling. I'll have to dig deeper into the potential causes.
Dylan
5 months ago
This seems like a tricky one. I'll need to think through the different possibilities carefully.
Annamae
1 year ago
This is why I always carry a backup of my backup. You never know when a ProvisionedThroughputExceededException is going to rear its ugly head!
Denny
1 year ago
Ah, classic DynamoDB woes. I'd go with C - the table is exceeding the provisioned scaling operations. That would explain the intermittent nature of the problem.
Sharen
1 year ago
I don't know, this one's got me stumped. The CloudWatch metrics not showing the issue is throwing me off. Maybe it's a glitch in the system? *laughs* Or maybe they need to invest in some crystal balls.
Lyda
1 year ago
Option D is my guess. The sort key could be the culprit if the application is hitting that more heavily than expected.
Myra
1 year ago
But what if it's actually the storage size of the table that's causing the problem?
Vashti
1 year ago
I think you might be right. The sort key could be causing the issue.
Frederick
1 year ago
It's possible, but we should also check if the table storage size is larger than provisioned.
Keena
1 year ago
I think you might be right, the sort key could be causing the issue.
Malcom
1 year ago
Hmm, this seems like it could be a tricky one. I'm leaning towards option B - the application exceeding capacity on a particular hash key. That could definitely cause some intermittent issues.
Avery
1 year ago
I agree, it's a tough call between those two options.
Luz
1 year ago
Maybe, but it could also be option C with the scaling operations.
Mendy
1 year ago
I think it might be option B too, the hash key capacity could be the issue.
Carissa
1 year ago
I'm not sure, but option C could also be a possibility.
Sharita
1 year ago
I agree with Nana, option B makes sense.
Nana
1 year ago
I think the cause could be option B.
