Welcome to Pass4Success


Amazon BDS-C00 Exam - Topic 2 Question 111 Discussion

Actual exam question for Amazon's BDS-C00 exam
Question #: 111
Topic #: 2

A company is building a new application in AWS. The architect needs to design a system to collect application log events. The design should be a repeatable pattern that minimizes data loss if an application instance fails, and keeps a durable copy of all log data for at least 30 days.

What is the simplest architecture that will allow the architect to analyze the logs?

Suggested Answer: B
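For readers wondering what the suggested answer looks like in practice, below is a minimal Python sketch of the pattern the comments attribute to option B: instances write log batches to S3, and an S3-triggered Lambda reshapes each object into an Elasticsearch `_bulk` request. The index name, event shape handling, and the stubbed-out fetch/POST steps are illustrative assumptions, not wording from the actual question.

```python
import json

# Hypothetical index name -- an assumption for illustration only.
ES_INDEX = "app-logs"


def build_bulk_payload(log_lines, index=ES_INDEX):
    """Turn raw log lines into an Elasticsearch _bulk request body.

    Each non-empty line becomes one document; the action line and the
    source line are newline-delimited, as the _bulk API expects.
    """
    parts = []
    for line in log_lines:
        line = line.strip()
        if not line:
            continue
        parts.append(json.dumps({"index": {"_index": index}}))
        parts.append(json.dumps({"message": line}))
    return "\n".join(parts) + "\n"


def handler(event, context):
    """Sketch of the Lambda entry point for an S3 ObjectCreated event.

    Real use would fetch the object with boto3 and POST the payload to
    the domain's /_bulk endpoint; both steps are stubbed out here.
    """
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        # requests.post(f"{es_endpoint}/_bulk", data=build_bulk_payload(body.splitlines()))
        print(f"would index s3://{bucket}/{key}")
```

Because both S3 and the Lambda trigger are managed, an instance failure only loses whatever was buffered locally before the last S3 upload, and S3 itself covers the 30-day durable copy.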

Contribute your Thoughts:

Lina
3 months ago
I think option A is overkill for just log analysis.
upvoted 0 times
...
Catarina
3 months ago
Wait, why not just use CloudWatch Logs directly? Seems simpler!
upvoted 0 times
...
Wilda
3 months ago
Not sure about option C, local disk feels risky for data loss.
upvoted 0 times
...
Elroy
4 months ago
I agree, S3 + Lambda is a great combo for scalability!
upvoted 0 times
...
Anglea
4 months ago
Option B seems solid, S3 is durable and cost-effective.
upvoted 0 times
...
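On the durability point raised above: the 30-day requirement in the question maps naturally onto an S3 lifecycle rule. Here is a small sketch that builds the configuration dict in the shape boto3's `put_bucket_lifecycle_configuration` accepts; the prefix and bucket name in the usage note are assumptions for illustration.

```python
def lifecycle_for_retention(prefix, days):
    """Build an S3 lifecycle configuration that expires objects under
    `prefix` after `days` days. Refuses anything under 30 days, since
    the question requires a durable copy for at least that long."""
    if days < 30:
        raise ValueError("retention must be at least 30 days")
    return {
        "Rules": [
            {
                "ID": f"expire-{prefix.strip('/')}-after-{days}d",
                "Status": "Enabled",
                "Filter": {"Prefix": prefix},
                "Expiration": {"Days": days},
            }
        ]
    }
```

Usage would look like `s3.put_bucket_lifecycle_configuration(Bucket="my-log-bucket", LifecycleConfiguration=lifecycle_for_retention("logs/", 30))`, where the bucket name is hypothetical.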
Felix
4 months ago
I recall that using EMR for analysis might be overkill for just log data. I wonder if there's a more straightforward solution among the options.
upvoted 0 times
...
Skye
4 months ago
I feel like using CloudWatch Logs could be a good choice since it integrates well with other AWS services, but I'm not completely confident.
upvoted 0 times
...
Sheron
4 months ago
I think writing logs to S3 and using Lambda sounds familiar. We practiced a similar question about data durability with S3.
upvoted 0 times
...
Clay
5 months ago
I remember we discussed using Kinesis Firehose for real-time data streaming, but I'm not sure if it's the simplest option here.
upvoted 0 times
...
Huey
5 months ago
I'm a little confused by all the different services and options here. I'm not sure which one is the "simplest" architecture. Maybe I should just go with option C since it mentions CloudWatch Logs, which I'm more familiar with. But I'm not 100% sure that's the best choice.
upvoted 0 times
...
Nakita
5 months ago
Okay, I think I've got this. Option A with Kinesis Firehose and Redshift looks like the way to go. It's a repeatable pattern that will keep a durable copy of the logs, and Redshift is great for analysis. I'm feeling pretty confident about this one.
upvoted 0 times
...
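For the Firehose route discussed above, the client-side work is mostly batching: `PutRecordBatch` accepts at most 500 records per call. A minimal sketch of that batching, with the delivery stream name in the usage note being an assumed placeholder:

```python
def chunk_records(log_events, batch_size=500):
    """Group log events into batches sized for Firehose PutRecordBatch,
    which accepts at most 500 records per call. Each record's Data must
    be bytes; a trailing newline keeps events separable downstream."""
    batch = []
    for event in log_events:
        batch.append({"Data": (event.rstrip("\n") + "\n").encode("utf-8")})
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch
```

Each yielded batch would then go to `firehose.put_record_batch(DeliveryStreamName="app-logs", Records=batch)`, checking `FailedPutCount` in the response and retrying any failed records.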
Tambra
5 months ago
Hmm, I'm a bit unsure about this one. I'm trying to decide between options B and C. I like the idea of using Elasticsearch for analysis, but I'm not sure if writing to the local disk and using the CloudWatch Logs agent is the best approach. I'll need to think this through a bit more.
upvoted 0 times
...
Fidelia
5 months ago
This looks like a pretty straightforward question. I think I'd go with option B - writing the logs to S3 and then using a Lambda function to load them into Elasticsearch. That seems like the simplest approach that meets all the requirements.
upvoted 0 times
...
An
9 months ago
Option B seems the most flexible, with S3 and Elasticsearch providing a durable and searchable log solution. Plus, Lambda functions are always fun to tinker with.
upvoted 0 times
Johnson
8 months ago
That could work too, but I think the flexibility of S3 and Elasticsearch is worth considering.
upvoted 0 times
...
Dick
8 months ago
But wouldn't writing directly to Kinesis Firehose and loading into Redshift be more efficient for analysis?
upvoted 0 times
...
Gracia
8 months ago
I agree, using Lambda functions to load the events into Elasticsearch is a smart choice.
upvoted 0 times
...
Willie
9 months ago
Option B seems the most flexible, with S3 and Elasticsearch providing a durable and searchable log solution.
upvoted 0 times
...
...
Raymon
9 months ago
Haha, Option D is like taking a rocket to the grocery store. HDFS on EMR for simple log analysis? Talk about overkill!
upvoted 0 times
...
An
10 months ago
I'm not a fan of Option C. Depending on the CloudWatch Logs agent could introduce reliability issues, and I'd rather not add another service to the mix.
upvoted 0 times
Kristel
8 months ago
I see your point about Option C. Relying on the CloudWatch Logs agent could be risky.
upvoted 0 times
...
Rosamond
8 months ago
I agree, Option B also seems like a solid option. Using S3 and Lambda for analysis sounds reliable.
upvoted 0 times
...
Carey
9 months ago
Option A sounds like a good choice. Directly writing to Kinesis Firehose seems efficient.
upvoted 0 times
...
...
Jesusita
10 months ago
Option A looks like the simplest solution. Kinesis Firehose and Redshift are a robust combination for reliable log ingestion and analysis.
upvoted 0 times
Luisa
8 months ago
I think Option B could also work well, with S3 and Lambda to load events into Elasticsearch for analysis.
upvoted 0 times
...
Paris
8 months ago
I agree, using Kinesis Firehose to load events into Redshift makes it easy to analyze the logs.
upvoted 0 times
...
Kirby
9 months ago
Option A looks like the simplest solution. Kinesis Firehose and Redshift are a robust combination for reliable log ingestion and analysis.
upvoted 0 times
...
...
Lisha
10 months ago
I personally prefer option C. Using CloudWatch Logs agent seems like a simpler and more reliable approach.
upvoted 0 times
...
Dell
11 months ago
I disagree, I believe option B is better. Using S3 and a Lambda function is a more cost-effective solution.
upvoted 0 times
...
Andrew
11 months ago
I think option A is the best choice. Kinesis Firehose can handle large volumes of data efficiently.
upvoted 0 times
...
