
Amazon AIF-C01 Exam - Topic 2 Question 23 Discussion

Actual exam question for Amazon's AIF-C01 exam
Question #: 23
Topic #: 2

A company has developed an ML model for image classification. The company wants to deploy the model to production so that a web application can use the model.

The company needs to implement a solution to host the model and serve predictions without managing any of the underlying infrastructure.

Which solution will meet these requirements?

Suggested Answer: A

Amazon SageMaker Serverless Inference is the correct solution for deploying the ML model to production so that a web application can use it, without the company managing any of the underlying infrastructure.

SageMaker Serverless Inference provides a fully managed environment for hosting machine learning models. It automatically provisions, scales, and manages the compute required to serve predictions, including scaling down during idle periods, so the company never has to manage servers or other underlying infrastructure.

Why Option A is Correct:

No Infrastructure Management: SageMaker Serverless Inference handles all infrastructure management for deploying and serving ML models. The company simply provides the model and specifies a memory size and maximum concurrency, and SageMaker handles the rest.

Cost-Effectiveness: The serverless inference option is ideal for applications with intermittent or unpredictable traffic, as the company only pays for the compute time consumed while handling requests.

Integration with Web Applications: This solution allows the model to be easily accessed by web applications via RESTful APIs, making it an ideal choice for hosting the model and serving predictions.
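To make the deployment step concrete, here is an illustrative sketch (not part of the exam material) of creating a serverless endpoint with boto3. All resource names are hypothetical placeholders, and it assumes a SageMaker Model has already been registered:

```python
# Hypothetical resource names -- replace with your own.
MODEL_NAME = "image-classifier"  # assumes this SageMaker Model already exists
ENDPOINT_CONFIG_NAME = MODEL_NAME + "-serverless-config"
ENDPOINT_NAME = MODEL_NAME + "-serverless"

# The only capacity settings the company specifies; SageMaker provisions,
# scales, and manages everything else.
SERVERLESS_CONFIG = {
    "MemorySizeInMB": 2048,  # 1024-6144 MB, in 1 GB increments
    "MaxConcurrency": 5,     # concurrent invocations before throttling
}

def deploy_serverless_endpoint():
    """Create a serverless endpoint config and launch the endpoint."""
    import boto3  # imported lazily so the sketch is readable without the SDK

    sm = boto3.client("sagemaker")
    sm.create_endpoint_config(
        EndpointConfigName=ENDPOINT_CONFIG_NAME,
        ProductionVariants=[{
            "VariantName": "AllTraffic",
            "ModelName": MODEL_NAME,
            # Supplying ServerlessConfig (instead of InstanceType and
            # InitialInstanceCount) is what makes the variant serverless.
            "ServerlessConfig": SERVERLESS_CONFIG,
        }],
    )
    sm.create_endpoint(
        EndpointName=ENDPOINT_NAME,
        EndpointConfigName=ENDPOINT_CONFIG_NAME,
    )
```

Note that no instance type appears anywhere: the absence of instance settings is exactly what "no infrastructure management" means here.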

Why Other Options are Incorrect:

B. Use Amazon CloudFront to deploy the model: CloudFront is a content delivery network (CDN) for caching and distributing content at the edge; it cannot host an ML model or serve predictions.

C. Use Amazon API Gateway to host the model and serve predictions: API Gateway creates, deploys, and manages APIs, but it provides no compute environment to host and run an ML model; it can only front a backend service that does.

D. Use AWS Batch to host the model and serve predictions: AWS Batch is designed for running batch computing workloads and is not suited to hosting models or serving low-latency, real-time predictions.

Thus, A is the correct answer, as it aligns with the requirement of deploying an ML model without managing any underlying infrastructure.
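For completeness, a web application backend would call the deployed endpoint through the SageMaker runtime. The sketch below is illustrative only; the endpoint name is a hypothetical placeholder, and the content type and response format depend on how the model was packaged:

```python
import json

ENDPOINT_NAME = "image-classifier-serverless"  # hypothetical placeholder

def classify_image(image_bytes: bytes) -> dict:
    """Send raw image bytes to the endpoint and return the parsed prediction."""
    import boto3  # lazy import: only needed when the endpoint is actually called

    runtime = boto3.client("sagemaker-runtime")
    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/x-image",  # must match what the model expects
        Body=image_bytes,
    )
    # The response Body is a streaming object; the payload format is
    # whatever the model's inference container returns (JSON assumed here).
    return json.loads(response["Body"].read())
```

Because the endpoint is serverless, the first request after an idle period may incur a cold-start delay while SageMaker provisions capacity.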


Contribute your Thoughts:

Isaiah
3 months ago
Totally agree with A, it's super easy to scale!
upvoted 0 times
Sharee
3 months ago
Wait, can AWS Batch really handle this? Seems off.
upvoted 0 times
Tammy
3 months ago
B is not the right tool for this job, just saying.
upvoted 0 times
Jacki
3 months ago
I think C could work too, but not as efficient.
upvoted 0 times
Marleen
3 months ago
A is definitely the best choice for serverless deployment!
upvoted 0 times
Myrtie
4 months ago
I have a vague memory of AWS Batch being used for batch processing, not real-time predictions, so I doubt option D is the right choice.
upvoted 0 times
Laquanda
4 months ago
I practiced a similar question, and I feel like CloudFront is more for content delivery, so I don't think option B is correct.
upvoted 0 times
Cristal
4 months ago
I'm not entirely sure, but I remember something about API Gateway being used for serving predictions. Could it be option C?
upvoted 0 times
Barrie
4 months ago
I think option A, using Amazon SageMaker Serverless Inference, sounds right since it’s designed for deploying ML models without managing infrastructure.
upvoted 0 times
Yvonne
4 months ago
This is a tricky one. I'm not sure if CloudFront or API Gateway would be the right fit here. AWS Batch could work, but it might be overkill for just hosting and serving an ML model. I think I'll go with the SageMaker option, as that seems to be the most tailored solution for the given requirements.
upvoted 0 times
Luisa
5 months ago
Okay, let me think this through. The key requirements are hosting the model and serving predictions without managing infrastructure. SageMaker Serverless Inference sounds like it would handle that nicely. I'll make sure to read the details, but I think that's the best option.
upvoted 0 times
Mirta
5 months ago
Hmm, I'm a bit unsure about this one. I know CloudFront is for content delivery, so that doesn't seem quite right. And I'm not too familiar with AWS Batch, so I'll have to think about that one. I'm leaning towards API Gateway, but I'll need to double-check the details.
upvoted 0 times
Karl
5 months ago
This seems like a straightforward question about deploying an ML model without managing infrastructure. I think I'll go with option A, Amazon SageMaker Serverless Inference, since that's designed for exactly this use case.
upvoted 0 times
