Welcome to Pass4Success


Amazon DVA-C02 Exam - Topic 1 Question 53 Discussion

Actual exam question for Amazon's DVA-C02 exam
Question #: 53
Topic #: 1
[All DVA-C02 Questions]

A developer is building a microservice that uses AWS Lambda to process messages from an Amazon Simple Queue Service (Amazon SQS) standard queue. The Lambda function calls external APIs to enrich the SQS message data before loading the data into an Amazon Redshift data warehouse. The SQS queue must handle a maximum of 1,000 messages per second.

During initial testing, the Lambda function repeatedly inserted duplicate data into the Amazon Redshift table. The duplicate data led to a problem with data analysis. All duplicate messages were submitted to the queue within 1 minute of each other.

How should the developer resolve this issue?

Suggested Answer: A
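The suggested answer (A) works because an SQS FIFO queue with content-based deduplication treats messages with identical bodies sent within the 5-minute deduplication interval as duplicates and delivers only one, which covers the duplicates submitted within 1 minute of each other in this scenario. A toy sketch of that behavior (the class and queue logic below are illustrative, not the AWS API; real deduplication is enabled via the `FifoQueue` and `ContentBasedDeduplication` queue attributes):

```python
import hashlib


DEDUP_WINDOW_SECONDS = 300  # SQS FIFO deduplication interval is 5 minutes


class FifoDedupSimulator:
    """Toy model of content-based deduplication on an SQS FIFO queue."""

    def __init__(self):
        self._first_seen = {}  # deduplication ID -> time of first acceptance
        self.delivered = []    # messages that actually reach the consumer

    def send(self, body, now):
        # Content-based deduplication derives the deduplication ID
        # from a SHA-256 hash of the message body.
        dedup_id = hashlib.sha256(body.encode()).hexdigest()
        first = self._first_seen.get(dedup_id)
        if first is not None and now - first < DEDUP_WINDOW_SECONDS:
            return False  # duplicate within the window: not redelivered
        self._first_seen[dedup_id] = now
        self.delivered.append(body)
        return True


q = FifoDedupSimulator()
q.send('{"id": 1}', now=0)
q.send('{"id": 1}', now=60)   # same body 1 minute later: dropped
q.send('{"id": 2}', now=60)   # different body: delivered
q.send('{"id": 1}', now=400)  # outside the 5-minute window: delivered again
# q.delivered == ['{"id": 1}', '{"id": 2}', '{"id": 1}']
```

Since the duplicates in the question arrive within 1 minute of each other, they fall well inside the 5-minute window, so the FIFO queue suppresses them before the Lambda function ever inserts into Redshift. Note that standard queues (options B and D) do not support deduplication at all, which is why reconfiguring the standard queue cannot fix this.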

Contribute your Thoughts:

Katy
3 months ago
Reducing Lambda concurrency won't fix the root problem.
upvoted 0 times
...
Suzi
3 months ago
Wait, can you really deduplicate in a standard queue?
upvoted 0 times
...
Marshall
3 months ago
A FIFO queue with deduplication is the way to go!
upvoted 0 times
...
Mattie
3 months ago
Definitely option A! It solves the duplicate issue directly.
upvoted 0 times
...
Tayna
4 months ago
I'm surprised they didn't catch the duplicates earlier!
upvoted 0 times
...
Elfriede
4 months ago
I think option D is a bit confusing. Message group IDs are for FIFO queues, right? I’m not sure if they apply to standard queues.
upvoted 0 times
...
Gianna
4 months ago
I practiced a similar question about message processing, and I think using temporary storage to track processed messages could work. So, option C sounds plausible.
upvoted 0 times
...
Shawna
4 months ago
I'm not entirely sure, but reducing Lambda concurrency seems like it could help. Maybe option B? But I feel like it doesn't directly address the duplicates.
upvoted 0 times
...
Mollie
4 months ago
I remember we discussed FIFO queues in class, and how they can help with deduplication. I think option A might be the right choice.
upvoted 0 times
...
Shawnta
5 months ago
Hmm, using Lambda's temporary storage to track processed messages could be an interesting solution, but I'm not sure if that's the most efficient way to handle this. I'll need to weigh the pros and cons of each option.
upvoted 0 times
...
Willodean
5 months ago
I'm pretty confident I know the right approach here. Creating an SQS FIFO queue and enabling message deduplication seems like the most straightforward way to solve this problem.
upvoted 0 times
...
Ahmad
5 months ago
I'm a bit confused by the question. Does the Lambda function need to be modified, or is the solution all about the SQS queue configuration?
upvoted 0 times
...
Georgene
5 months ago
Okay, I think I've got a good strategy here. I'll focus on the SQS queue settings and see if I can find a way to deduplicate the messages.
upvoted 0 times
...
Dustin
5 months ago
Hmm, this seems like a tricky one. I'll need to carefully consider the options to avoid any more data duplication issues.
upvoted 0 times
...
Alfred
7 months ago
Haha, I bet the developer is cursing the SQS standard queue right about now. Option D seems like the logical choice to handle those pesky duplicates.
upvoted 0 times
...
Toi
7 months ago
Hold up, what about creating an SQS FIFO queue and enabling deduplication (Option A)? That could be a more robust solution than relying on temporary storage.
upvoted 0 times
Zita
6 months ago
D) Configure a message group ID for every sent message. Enable message deduplication on the SQS standard queue.
upvoted 0 times
...
Alona
6 months ago
A) Create an SQS FIFO queue. Enable message deduplication on the SQS FIFO queue.
upvoted 0 times
...
...
Samuel
7 months ago
That's a good point, Leota. It could help keep track of processed messages and avoid duplicates.
upvoted 0 times
...
Leota
7 months ago
But wouldn't using Lambda's temporary storage as suggested in option C be more efficient?
upvoted 0 times
...
Laurel
7 months ago
I'm not sure about that. Option D also seems like a valid choice to prevent duplicate data.
upvoted 0 times
...
Samuel
7 months ago
I agree with Leota. Using an SQS FIFO queue with message deduplication seems like the best solution.
upvoted 0 times
...
Leota
7 months ago
I think the developer should go with option A.
upvoted 0 times
...
Tarra
8 months ago
Wait, did you say 1,000 messages per second? That's a lot of data! I hope the developer's laptop can handle all the API calls without catching on fire.
upvoted 0 times
...
Alysa
8 months ago
Option C looks like the way to go. Using Lambda's temporary storage to track processed messages seems like a clean and efficient solution.
upvoted 0 times
Talia
7 months ago
I agree, that sounds like a good approach to prevent duplicate data from being loaded into Redshift.
upvoted 0 times
...
...
