
Amazon AIP-C01 Exam - Topic 5 Question 7 Discussion

Actual exam question for Amazon's AIP-C01 exam
Question #: 7
Topic #: 5

An insurance company uses existing Amazon SageMaker AI infrastructure to support a web-based application that allows customers to predict what their insurance premiums will be. The company stores customer data that is used to train the SageMaker AI model in an Amazon S3 bucket. The dataset is growing rapidly. The company wants a solution to continuously re-train the model. The solution must automatically re-train and re-deploy the model to the application when an employee uploads a new customer data file to the S3 bucket.

Which solution will meet these requirements?

Suggested Answer: D

Option D is the best fit because it implements a reliable event-driven MLOps workflow that automates retraining and redeployment with clear orchestration, auditability, and production-grade error handling. The requirement is explicit: whenever a new file is uploaded to Amazon S3, the system must retrain and then redeploy the model used by a web application. A common AWS pattern is to use an S3 event notification to trigger an AWS Lambda function, which then starts a controlled workflow. In option D, Lambda serves as the event handler that reacts immediately to the S3 upload event and passes the necessary context (bucket, object key, dataset version) into an AWS Step Functions Standard state machine.
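As a hedged illustration of that event-handler step, the sketch below shows a Lambda function that parses the S3 event notification and starts a Step Functions execution with the dataset context. The environment-variable name, payload field names, and overall shape are assumptions for illustration, not part of the question.

```python
import json
import os


def build_execution_input(event):
    """Extract bucket, key, and version from an S3 event notification record."""
    record = event["Records"][0]
    s3 = record["s3"]
    return {
        "bucket": s3["bucket"]["name"],
        "key": s3["object"]["key"],
        # versionId is present only when S3 bucket versioning is enabled
        "datasetVersion": s3["object"].get("versionId"),
    }


def handler(event, context):
    """Lambda entry point: kick off the Step Functions retraining workflow."""
    import boto3  # provided by the AWS Lambda Python runtime

    sfn = boto3.client("stepfunctions")
    response = sfn.start_execution(
        # STATE_MACHINE_ARN is a hypothetical environment variable set on the function
        stateMachineArn=os.environ["STATE_MACHINE_ARN"],
        input=json.dumps(build_execution_input(event)),
    )
    return {"executionArn": response["executionArn"]}
```

Keeping the Lambda function thin, as here, is deliberate: it only translates the event and hands off to the state machine, so all long-running work and error handling live in Step Functions.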

Step Functions Standard workflows are appropriate for model retraining pipelines because training and deployment steps can be long-running and benefit from durable state, retries, and failure handling. Standard workflows also record a full execution history, which makes it easier to troubleshoot why a particular retraining run failed and to prove which dataset version produced which model version. This operational visibility is critical when the dataset is "growing rapidly" and retraining is frequent.

Within the workflow, Amazon SageMaker Pipelines is the right service to run the ML lifecycle stages in a repeatable way: data processing (if needed), training, evaluation/quality checks, model registration, and deployment to an endpoint used by the application. SageMaker Pipelines is purpose-built for CI/CD-style ML, supporting automated redeployments when a new approved model artifact is produced. By calling a pipeline execution from Step Functions, the company can add governance gates (for example, only deploy if evaluation metrics meet thresholds), and can apply consistent rollback and notification steps when deployment fails.
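The governance gate described above can be sketched as a simple threshold check on evaluation metrics before the registered model package is approved for deployment. The metric name, threshold value, and helper names are assumptions for illustration.

```python
def should_deploy(metrics, max_rmse=250.0):
    """Return True only if evaluation metrics meet the quality threshold.

    `rmse` and the 250.0 default are assumed example values; a missing
    metric fails the gate rather than passing silently.
    """
    return metrics.get("rmse", float("inf")) <= max_rmse


def approve_if_passing(metrics, model_package_arn):
    """Approve the model package in the registry so it can be redeployed."""
    if not should_deploy(metrics):
        return "Rejected"
    import boto3  # deferred import so the gate itself is testable offline

    sagemaker = boto3.client("sagemaker")
    # Flipping the approval status is what lets a deployment step (or an
    # EventBridge-triggered deploy) pick up the new model version.
    sagemaker.update_model_package(
        ModelPackageArn=model_package_arn,
        ModelApprovalStatus="Approved",
    )
    return "Approved"
```

Treating "metrics below threshold" as an explicit Rejected outcome, rather than an error, keeps a bad training run from silently replacing the model serving the web application.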

The other options are weaker. Option A confuses inference with retraining and provides no deployment orchestration. Option B adds unnecessary webhook complexity and describes an awkward event bus configuration. Option C introduces SageMaker Autopilot and Data Wrangler, which can be useful but add extra moving parts that are not required to meet the trigger-and-redeploy requirement.

