
Amazon SAA-C03 Exam - Topic 3 Question 57 Discussion

Actual exam question for Amazon's SAA-C03 exam
Question #: 57
Topic #: 3
[All SAA-C03 Questions]

A developer used the AWS SDK to create an application that aggregates and produces log records for 10 services. The application delivers data to an Amazon Kinesis Data Streams stream.

Each record contains a log message with a service name, creation timestamp, and other log information. The stream has 15 shards in provisioned capacity mode. The stream uses service name as the partition key.

The developer notices that when all the services are producing logs, ProvisionedThroughputExceededException errors occur during PutRecord requests. The stream metrics show that the write capacity the application uses is below the provisioned capacity.

How should the developer resolve this issue?

A. Change the capacity mode of the stream from provisioned to on-demand.
B. Double the number of shards until the throttling errors stop occurring.
C. Change the partition key from service name to creation timestamp.
D. Use a separate Kinesis Data Streams stream for each service.

Suggested Answer: C

Partition Key Issue:

Using 'service name' as the partition key yields only 10 distinct keys, so at most 10 of the 15 shards can ever receive data, and high-volume services concentrate all of their writes on a single shard. Each shard has its own write limit (1 MB/s or 1,000 records/s), so a hot shard throttles and raises ProvisionedThroughputExceededException even while the stream's aggregate write capacity is underused. This is why the errors occur although the metrics show usage below the provisioned capacity.

Changing the partition key to 'creation timestamp' gives each record a high-cardinality key, which distributes records far more evenly across all shards.
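The hot-shard effect can be sketched with a small simulation. Kinesis assigns a record to a shard by MD5-hashing its partition key into a 128-bit hash key space, which evenly split shards carve into contiguous ranges; the `shard_for` helper below is an illustrative approximation of that mapping, not the AWS SDK.

```python
import datetime
import hashlib

def shard_for(partition_key: str, num_shards: int = 15) -> int:
    # Approximation of Kinesis routing: MD5-hash the partition key
    # into the 128-bit hash key space, then bucket it into one of
    # num_shards equal contiguous ranges.
    h = int(hashlib.md5(partition_key.encode()).hexdigest(), 16)
    return (h * num_shards) >> 128

# 10 service names are 10 fixed keys: they can never reach more
# than 10 of the 15 shards, however much traffic they carry.
services = [f"service-{i}" for i in range(10)]
print(len({shard_for(s) for s in services}))

# Millisecond-resolution timestamps are high-cardinality keys,
# so successive records land all over the shard range.
base = datetime.datetime(2024, 1, 1)
stamps = [(base + datetime.timedelta(milliseconds=i)).isoformat()
          for i in range(1000)]
print(len({shard_for(t) for t in stamps}))
```

With only 10 service-name keys, some of the 15 shards are guaranteed to sit idle while busy services saturate theirs; the timestamp keys touch essentially every shard.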

Incorrect Options Analysis:

Option A: On-demand capacity mode eliminates throughput management but is more expensive and does not address the root cause.

Option B: Adding more shards does not solve the issue if the partition key still creates hot shards.

Option D: Using separate streams increases complexity and is unnecessary.


Kinesis Data Streams Partition Key Best Practices

Contribute your Thoughts:

Justine
1 month ago
Wait, changing the partition key? Isn't that risky?
upvoted 0 times
...
Emile
2 months ago
Using separate streams for each service could help with isolation, but I’m not sure if it’s worth the extra management overhead.
upvoted 0 times
...
Ilene
2 months ago
Changing the partition key sounds interesting, but I feel like it might complicate things more than necessary.
upvoted 0 times
...
Ivan
2 months ago
I think doubling the number of shards could work, but I wonder if it might be overkill since the metrics show we're below capacity.
upvoted 0 times
...
Abel
2 months ago
I remember reading that changing the capacity mode to on-demand can help with scaling issues, but I'm not sure if that's the best option here.
upvoted 0 times
...
Leonora
2 months ago
I think doubling the shards is a solid move!
upvoted 0 times
...
Kristal
3 months ago
Sounds like a classic case of shard throttling.
upvoted 0 times
...
Pamella
3 months ago
I disagree, separate streams for each service seems excessive.
upvoted 0 times
...
Claudio
3 months ago
On-demand mode could really simplify things here.
upvoted 0 times
...
Tamera
3 months ago
I think the key here is to understand why the ProvisionedThroughputExceededException errors are occurring even though the write capacity is below the provisioned capacity. That's the clue to figuring out the right solution.
upvoted 0 times
...
Karina
3 months ago
Hmm, I'm not sure about changing the partition key. That could introduce other issues with how the data is organized. Using a separate stream for each service might work, but that seems like it could get complicated to manage. I'll need to think this through carefully.
upvoted 0 times
...
Kimberlie
4 months ago
I've seen issues like this before. Changing the capacity mode to on-demand might be the easiest fix, but doubling the number of shards could also help spread out the load. I'll need to weigh the pros and cons of each approach.
upvoted 0 times
...
Dahlia
4 months ago
Okay, let's see here. The key seems to be the ProvisionedThroughputExceededException errors, even though the write capacity is below the provisioned capacity. I'm a bit confused by that, but I think I have an idea.
upvoted 0 times
...
Paola
4 months ago
Hmm, this looks like a tricky one. I'll need to think through the different options carefully to figure out the best solution.
upvoted 0 times
...
Rashida
4 months ago
I'm pretty confident that the solution here is to increase the number of shards. That should give the stream more capacity to handle the high write volume without throttling.
upvoted 0 times
...
Crissy
4 months ago
Okay, I think I've got this. The issue is that the write capacity is being exceeded, even though the provisioned capacity is not being fully utilized. Changing the partition key or using separate streams for each service could help distribute the load more evenly.
upvoted 0 times
...
Angelyn
5 months ago
Hmm, I'm a bit confused by the question. I'll need to re-read it a few times to make sure I understand the key details before deciding on a solution.
upvoted 0 times
...
Louisa
5 months ago
This looks like a tricky one. I'll need to think through the different options carefully to figure out the best approach.
upvoted 0 times
...
Jamika
10 months ago
Option C? Really? Changing the partition key to the timestamp? That's like trying to put out a fire with gasoline.
upvoted 0 times
Shonda
10 months ago
I think the developer should consider increasing the number of shards in the stream to handle the increased load from all the services producing logs.
upvoted 0 times
...
Valentin
10 months ago
Option C might actually work in this case. Changing the partition key to the timestamp could help distribute the load more evenly.
upvoted 0 times
...
...
Lisbeth
10 months ago
Ah, the age-old problem of too many cooks in the kitchen. Option B is definitely the way to go, unless you want to end up with a 'Kinesis Data Streams Inferno' on your hands.
upvoted 0 times
...
Kasandra
11 months ago
I think changing the capacity mode to on-demand could be the most cost-effective solution.
upvoted 0 times
...
Judy
11 months ago
I believe using a separate Kinesis stream for each service could help.
upvoted 0 times
...
Silvana
11 months ago
I disagree, changing the partition key might be a better solution.
upvoted 0 times
...
Carma
11 months ago
I think option D is the way to go. Using separate streams for each service makes more sense than trying to fit everything into one stream.
upvoted 0 times
Gilma
9 months ago
That could work too, as long as the developer monitors the stream metrics to ensure they are not exceeding the provisioned capacity.
upvoted 0 times
...
Johna
9 months ago
That's a valid point. Maybe the developer can consider increasing the number of shards in the stream to handle the increased load.
upvoted 0 times
...
Anika
9 months ago
But wouldn't creating separate streams for each service increase the cost and complexity of managing the application?
upvoted 0 times
...
Sena
10 months ago
I agree, option D seems like the best solution to avoid the ProvisionedThroughputExceededException errors.
upvoted 0 times
...
...
Leanna
11 months ago
The correct answer is B. Increasing the number of shards should resolve the throttling issues.
upvoted 0 times
...
Nicolette
11 months ago
I think the developer should double the number of shards.
upvoted 0 times
...
