
Amazon Exam SAA-C03 Topic 4 Question 28 Discussion

Actual exam question for Amazon's SAA-C03 exam
Question #: 28
Topic #: 4

A company wants to run its payment application on AWS. The application receives payment notifications from mobile devices. Payment notifications require basic validation before they are sent for further processing.

The backend processing application is long running and requires compute and memory to be adjusted. The company does not want to manage the infrastructure.

Which solution will meet these requirements with the LEAST operational overhead?

Suggested Answer: D

This option is the best solution because it lets the company run its payment application on AWS with minimal operational overhead and no infrastructure to manage. Amazon API Gateway provides a secure, scalable API for receiving payment notifications from mobile devices. AWS Lambda runs a serverless function that performs the basic validation and forwards the notifications to the backend application; Lambda handles provisioning, scaling, and security of the function, which reduces operational complexity and cost. Amazon ECS with AWS Fargate runs the long-running backend application on a fully managed container service with no EC2 instances to manage, and the CPU and memory assigned to each task can be adjusted as the workload requires.
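For concreteness, a minimal sketch of the validation Lambda behind API Gateway is shown below. The payload fields, the environment variable, and the SQS hand-off to the Fargate backend are assumptions made for this example, not details stated in the question.

```python
# Hedged sketch: validation Lambda invoked by API Gateway (option D).
# Field names, the queue URL variable, and the SQS hand-off are illustrative
# assumptions; the question does not specify how Lambda reaches the backend.
import json
import os

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = os.environ.get("BACKEND_QUEUE_URL", "")  # hypothetical hand-off queue

REQUIRED_FIELDS = ("payment_id", "amount", "currency", "device_id")  # assumed schema


def handler(event, context):
    """Perform basic validation of a payment notification, then forward it."""
    try:
        body = json.loads(event.get("body") or "{}")
    except json.JSONDecodeError:
        return {"statusCode": 400, "body": json.dumps({"error": "invalid JSON"})}

    missing = [field for field in REQUIRED_FIELDS if field not in body]
    if missing:
        return {"statusCode": 422,
                "body": json.dumps({"error": f"missing fields: {missing}"})}

    # Forward the validated notification to the long-running backend;
    # an SQS queue is used here as one possible hand-off mechanism.
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(body))
    return {"statusCode": 202, "body": json.dumps({"status": "accepted"})}
```

Lambda scales this function automatically per request, so the validation layer needs no capacity planning; only the Fargate task sizing for the backend has to be chosen.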

A) Create an Amazon Simple Queue Service (Amazon SQS) queue. Integrate the queue with an Amazon EventBridge rule to receive payment notifications from mobile devices. Configure the rule to validate payment notifications and send the notifications to the backend application. Deploy the backend application on Amazon Elastic Kubernetes Service (Amazon EKS) Anywhere. Create a standalone cluster. This option is not optimal because it requires the company to manage the Kubernetes cluster that runs the backend application. Amazon EKS Anywhere is a deployment option for creating and operating Kubernetes clusters on premises or in other environments outside AWS. The company would need to provision, configure, scale, patch, and monitor the cluster nodes, which increases operational overhead and complexity. It would also need to ensure connectivity and security between the AWS services and the EKS Anywhere cluster, which adds further challenges and risk.

B) Create an Amazon API Gateway API. Integrate the API with an AWS Step Functions state machine to receive payment notifications from mobile devices. Invoke the state machine to validate payment notifications and send the notifications to the backend application. Deploy the backend application on Amazon Elastic Kubernetes Service (Amazon EKS). Configure an EKS cluster with self-managed nodes. This option is not ideal because it requires the company to manage the EC2 instances that host the Kubernetes cluster running the backend application. Amazon EKS provides a managed Kubernetes control plane, but with self-managed nodes the company must still provision, configure, scale, patch, and monitor the EC2 worker nodes, which increases operational overhead and infrastructure cost. Using AWS Step Functions for a basic validation step is also unnecessarily complex, since the validation logic can be implemented more simply with Lambda or a similar service.

C) Create an Amazon Simple Queue Service (Amazon SQS) queue. Integrate the queue with an Amazon EventBridge rule to receive payment notifications from mobile devices. Configure the rule to validate payment notifications and send the notifications to the backend application. Deploy the backend application on Amazon EC2 Spot Instances. Configure a Spot Fleet with a default allocation strategy. This option is not cost-effective because it requires the company to manage the EC2 instances that run the backend application. The company would need to provision, configure, scale, patch, and monitor the EC2 instances, which increases operational overhead and infrastructure cost. Spot Instances also introduce the risk of interruption, because AWS can reclaim them when it needs the capacity back, so the company would have to handle interruptions gracefully to keep the long-running backend available and reliable.
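For contrast with the self-managed options above, the hedged sketch below shows how compute and memory are declared for the Fargate-based backend in option D: they are set per task definition and changed by registering a new revision, with no EC2 or Kubernetes nodes to operate. The family name, image URI, role ARN, and sizing values are placeholders, not values from the question.

```python
# Hedged sketch: declaring per-task CPU and memory for the backend on Fargate.
# All identifiers below (family, image, role ARN) are placeholders.
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="payment-backend",                      # hypothetical task family
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",                          # required for Fargate tasks
    cpu="1024",                                    # 1 vCPU; adjust as the workload requires
    memory="2048",                                 # 2 GiB; adjust as the workload requires
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",  # placeholder
    containerDefinitions=[
        {
            "name": "payment-backend",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/payment-backend:latest",
            "essential": True,
        }
    ],
)
```

Because CPU and memory live in the task definition, adjusting them is a redeploy of the ECS service rather than resizing, patching, or replacing instances, which is what keeps the operational overhead low.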



Contribute your Thoughts:

Dorathy
9 days ago
I'm not sure, I'm kind of torn between options B and D. Both seem to be using managed services in different ways, but I'm worried that the self-managed nodes in option B might introduce more operational overhead than the Fargate option.
upvoted 0 times
Lashon
10 days ago
Hmm, this is a tricky one. I'm leaning towards option D because it uses a fully managed service like Fargate for the backend application, and it also leverages API Gateway and Lambda to handle the payment notification processing. That seems like it would minimize the operational overhead the most.
upvoted 0 times
Elenora
11 days ago
Yeah, I agree. The question is really testing our knowledge of AWS services and how they can be combined to create a solution that meets the given requirements. I think the options are trying to assess our understanding of different approaches, like using SQS, API Gateway, Lambda, and managed Kubernetes or Fargate.
upvoted 0 times
Bok
12 days ago
This question seems to be testing our understanding of AWS services and how they can be used to build a scalable and managed solution for a payment processing application. I think the key requirements here are to minimize operational overhead, handle compute and memory adjustments, and use managed services where possible.
upvoted 0 times
