Welcome to Pass4Success


Amazon MLA-C01 Exam - Topic 1 Question 6 Discussion

Actual exam question for Amazon's MLA-C01 exam
Question #: 6
Topic #: 1

A company has a Retrieval Augmented Generation (RAG) application that uses a vector database to store embeddings of documents. The company must migrate the application to AWS and must implement a solution that provides semantic search of text files. The company has already migrated the text repository to an Amazon S3 bucket.

Which solution will meet these requirements?

Suggested Answer: C
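
For anyone who wants to see what the suggested answer (option C, ingesting the S3 documents into Amazon Kendra and querying Kendra) looks like in practice, here is a minimal boto3 sketch. It assumes a Kendra index with the S3 connector has already been set up; the index ID and region are placeholders. The query itself is a single `Query` API call, and the helper below just pulls excerpts out of the response shape Kendra returns:

```python
def semantic_search(index_id: str, query_text: str, region: str = "us-east-1"):
    """Run a semantic query against an existing Amazon Kendra index.

    boto3 is imported lazily so the parsing helper below stays usable
    without the AWS SDK installed.
    """
    import boto3  # AWS SDK for Python
    kendra = boto3.client("kendra", region_name=region)
    return kendra.query(IndexId=index_id, QueryText=query_text)


def top_excerpts(response: dict, limit: int = 3) -> list[str]:
    """Extract the highest-ranked document excerpts from a Query response."""
    return [
        item["DocumentExcerpt"]["Text"]
        for item in response.get("ResultItems", [])
        if "DocumentExcerpt" in item
    ][:limit]
```

The point several commenters make below holds up here: Kendra handles ingestion, indexing, and semantic ranking as a managed service, so the application code reduces to a query call and some response parsing.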

Contribute your Thoughts:

Rozella
3 months ago
C is definitely the way to go for efficiency!
Tamala
3 months ago
I agree, Kendra is designed for that purpose!
Trina
3 months ago
B could work too, but it feels more complex than necessary.
Kiera
3 months ago
Wait, can Textract really handle semantic searches? Sounds off.
Tamekia
3 months ago
Option C seems like the best fit for semantic search with Kendra.
Ilona
4 months ago
I vaguely recall that Kendra has built-in capabilities for semantic search, which could simplify the implementation.
Shawnna
4 months ago
I feel like using Amazon Textract might not be the best option here since it’s more focused on extracting text rather than semantic search.
Dorsey
4 months ago
I'm not entirely sure, but I think AWS Batch might be overkill for just generating embeddings. We practiced a similar question with SageMaker before, though.
Cristal
4 months ago
I remember we discussed using Amazon Kendra for semantic search in class. It seems like a good fit for this scenario.
Louisa
4 months ago
I'm a bit confused by the question. It's talking about a RAG application, but none of the options seem to directly address that. I'm not sure if I'm missing something or if the question is just not very clear. I might need to ask the instructor for some clarification before I can confidently select an answer.
Tijuana
5 months ago
Okay, let me think this through. We need to migrate the RAG application to AWS and implement a semantic search solution for the text files in S3. Option C with Kendra seems like the most straightforward approach, as it's designed specifically for this type of use case. I think I'll go with that unless I can find a compelling reason why one of the other options might be better.
Malika
5 months ago
Hmm, I'm a bit unsure about this one. The question is asking for a solution that provides semantic search, but it's not clear to me how the other options like AWS Batch, SageMaker, and Textract would handle that. I might need to do some more research on the capabilities of each service to decide which one is the best fit.
Melita
5 months ago
This seems like a pretty straightforward question. I think I'll go with option C - using the Amazon Kendra S3 connector to ingest the documents and then querying Kendra for the semantic searches. That seems like the most direct solution to the requirements.
Rozella
7 months ago
I'll go with C, but I'm secretly hoping the answer is D just so I can use the word 'asynchronous' in an exam question.
Alyce
8 months ago
Haha, I bet the person who wrote this question is a big fan of Kendra. C is definitely the easiest answer, but where's the fun in that?
Pa
8 months ago
Hmm, I'm not sure about C. I think B might be a more flexible solution, allowing for more customization with the SageMaker notebook and Feature Store.
Moon
6 months ago
Yeah, using SageMaker for generating embeddings seems like a flexible solution.
Rolland
7 months ago
I agree, B sounds like a good option for more customization.
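
To make the trade-off in this thread concrete: option B (SageMaker-generated embeddings) means building the retrieval layer yourself, i.e. generating an embedding per document, storing the vectors, and ranking by similarity at query time. Below is a minimal, dependency-free sketch of just that ranking step, using hypothetical precomputed embeddings; in a real option-B pipeline these vectors would come from a SageMaker model endpoint:

```python
from math import sqrt


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


def semantic_rank(query_vec: list[float],
                  doc_vecs: dict[str, list[float]]) -> list[str]:
    """Rank document ids by embedding similarity to the query, highest first."""
    scores = {doc_id: cosine_similarity(query_vec, vec)
              for doc_id, vec in doc_vecs.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

Even this toy version shows why commenters call option B "more complex than necessary": you own embedding generation, vector storage, and ranking, all of which Kendra (option C) provides as a managed service.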
Corinne
8 months ago
I agree, C is the way to go. Kendra is specifically designed for this kind of use case, and it should handle the semantic search requirements nicely.
Rene
7 months ago
I think C is the best option too. Kendra seems like the most efficient solution for this scenario.
Johnetta
7 months ago
C) Use the Amazon Kendra S3 connector to ingest the documents from the S3 bucket into Amazon Kendra. Query Amazon Kendra to perform the semantic searches.
Helga
8 months ago
Option C seems like the most straightforward solution here. Kendra's S3 connector should make it easy to ingest the documents and provide semantic search capabilities.
France
7 months ago
Yeah, Kendra's S3 connector will definitely simplify the process of ingesting the documents and performing semantic searches.
Zoila
7 months ago
I agree, using Amazon Kendra for semantic search sounds like the best approach for this scenario.
Timothy
8 months ago
I'm leaning towards option B. Using SageMaker for generating embeddings seems like a solid approach.
Lizbeth
8 months ago
I disagree, I believe option C is the way to go. Amazon Kendra is specifically designed for semantic searches.
Frank
8 months ago
I think option A is the best choice because it uses AWS Batch and Glue for processing and storing embeddings.
