Amazon Exam MLA-C01 Topic 1 Question 6 Discussion

Actual exam question for Amazon's MLA-C01 exam
Question #: 6
Topic #: 1

A company has a Retrieval Augmented Generation (RAG) application that uses a vector database to store embeddings of documents. The company must migrate the application to AWS and must implement a solution that provides semantic search of text files. The company has already migrated the text repository to an Amazon S3 bucket.

Which solution will meet these requirements?

Suggested Answer: C
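
For background on the terminology in the question: semantic search over a vector store works by embedding both the documents and the query, then ranking documents by vector similarity. The sketch below is purely illustrative (the dimensions and random vectors stand in for real model output) and is not tied to any specific answer option.

```python
import numpy as np

# Toy illustration only: a real system would use a managed embedding model
# and a vector store rather than random vectors.
doc_vectors = np.random.rand(3, 384)   # one 384-dim embedding per document
query_vector = np.random.rand(384)     # embedding of the search query

def cosine_similarity(docs: np.ndarray, query: np.ndarray) -> np.ndarray:
    return (docs @ query) / (np.linalg.norm(docs, axis=-1) * np.linalg.norm(query))

scores = cosine_similarity(doc_vectors, query_vector)
best_match = int(np.argmax(scores))    # index of the most semantically similar document
```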

Contribute your Thoughts:

Corinne
11 hours ago
I agree, C is the way to go. Kendra is specifically designed for this kind of use case, and it should handle the semantic search requirements nicely.
upvoted 0 times
Helga
15 days ago
Option C seems like the most straightforward solution here. Kendra's S3 connector should make it easy to ingest the documents and provide semantic search capabilities.
upvoted 0 times
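
If option C is indeed Amazon Kendra, as the comments above suggest, the integration Helga describes would roughly look like the following with boto3. This is only a hedged sketch: the index ID, IAM role ARN, bucket name, and query text are placeholders, and the Kendra index itself is assumed to already exist.

```python
import boto3

kendra = boto3.client("kendra", region_name="us-east-1")

# Attach the migrated S3 text repository as a Kendra data source (placeholder identifiers).
data_source = kendra.create_data_source(
    IndexId="example-index-id",
    Name="text-repository",
    Type="S3",
    RoleArn="arn:aws:iam::123456789012:role/ExampleKendraS3Role",
    Configuration={"S3Configuration": {"BucketName": "example-text-repository"}},
)
kendra.start_data_source_sync_job(Id=data_source["Id"], IndexId="example-index-id")

# Natural-language query once the sync job has finished.
response = kendra.query(IndexId="example-index-id", QueryText="What is the refund policy?")
for item in response["ResultItems"]:
    print(item.get("DocumentURI"), item["DocumentExcerpt"]["Text"])
```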
Timothy
18 days ago
I'm leaning towards option B. Using SageMaker for generating embeddings seems like a solid approach.
upvoted 0 times
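
For comparison, the approach Timothy mentions for option B would typically mean hosting an embedding model on a SageMaker endpoint and invoking it for each document and query before writing the vectors to a vector database. A minimal sketch, assuming such an endpoint has already been deployed (the endpoint name and payload format are model-specific assumptions):

```python
import json
import boto3

# Hypothetical endpoint name; assumes an embedding model has already been
# deployed to a SageMaker real-time endpoint.
runtime = boto3.client("sagemaker-runtime", region_name="us-east-1")

def embed(text: str) -> list[float]:
    """Return the embedding vector for one text snippet (response shape is model-specific)."""
    response = runtime.invoke_endpoint(
        EndpointName="example-embedding-endpoint",
        ContentType="application/json",
        Body=json.dumps({"inputs": text}),
    )
    return json.loads(response["Body"].read())

vector = embed("Quarterly revenue grew 12% year over year.")
```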
Lizbeth
20 days ago
I disagree, I believe option C is the way to go. Amazon Kendra is specifically designed for semantic searches.
upvoted 0 times
Frank
24 days ago
I think option A is the best choice because it uses AWS Batch and Glue for processing and storing embeddings.
upvoted 0 times
