A company uses an Amazon Simple Queue Service (Amazon SQS) queue and Amazon EC2 instances in an Auto Scaling group with target tracking for a web application. The company collects the ASGAverageNetworkIn metric but notices that instances do not scale out fast enough during peak traffic, and a large number of SQS messages accumulate in the queue.
A CloudOps engineer must reduce the number of SQS messages during peak periods.
Which solution will meet this requirement?
According to the AWS Cloud Operations and Auto Scaling documentation, scaling applications that consume Amazon SQS messages should be driven by queue backlog per instance, not by general system metrics such as network traffic or CPU.
The correct approach is to calculate a custom metric using CloudWatch metric math that divides the SQS metric ApproximateNumberOfMessagesVisible by the number of active EC2 instances in the Auto Scaling group. This "backlog per instance" value represents the average number of messages waiting to be processed by each instance.
Then, the CloudOps engineer can create a target tracking policy that automatically scales out or in based on maintaining a desired backlog threshold. This approach ensures dynamic, workload-driven scaling behavior that reacts in near real time to message volume.
Step scaling and simple scaling (Options C and D) rely on manually chosen static thresholds and do not adjust for the number of messages each instance must handle.
Thus, Option B, using CloudWatch metric math to define queue backlog per instance for target tracking, is the most effective and AWS-recommended CloudOps practice.
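The backlog-per-instance calculation described above can be sketched in a few lines. This is a minimal illustration: the function names and sample numbers are ours, not from AWS documentation, though the "acceptable backlog = latency target × per-instance processing rate" idea follows the AWS guidance on scaling based on SQS.

```python
def backlog_per_instance(visible_messages: int, running_instances: int) -> float:
    """Average number of queued messages waiting for each instance,
    i.e. ApproximateNumberOfMessagesVisible / running instance count."""
    # Guard against division by zero when the group has scaled in to zero.
    return visible_messages / max(running_instances, 1)


def acceptable_backlog(latency_target_seconds: float,
                       messages_per_instance_per_second: float) -> float:
    """Target value for the tracking policy: how many messages one
    instance can drain within the acceptable latency window."""
    return latency_target_seconds * messages_per_instance_per_second


# Example: 1,500 visible messages spread across 10 instances.
current = backlog_per_instance(1500, 10)   # 150.0 messages per instance

# If each instance processes 2 messages/second and 100 seconds of
# latency is acceptable, the tracking target is 200 messages/instance.
target = acceptable_backlog(100, 2)        # 200.0

scale_out_needed = current > target        # False: backlog is within target
```

The target tracking policy then keeps the published custom metric near the computed target, scaling out when the per-instance backlog rises above it.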
A SysOps administrator needs to implement a solution that protects credentials for an Amazon RDS for MySQL DB instance. The solution must rotate the credentials automatically one time every week.
Which combination of steps will meet these requirements? (Select TWO.)
Comprehensive and Detailed Explanation From Exact Extract of AWS CloudOps Documents:
The correct answers are B and D. AWS CloudOps documentation clearly states that AWS Secrets Manager is the recommended service for storing and managing database credentials securely. Secrets Manager integrates natively with Amazon RDS and supports automatic, scheduled secret rotation.
To rotate credentials weekly, Secrets Manager requires a Lambda rotation function with a rotation schedule of every seven days. AWS provides managed rotation templates for Amazon RDS for MySQL that update the database password and the stored secret in a coordinated, multi-step process (createSecret, setSecret, testSecret, finishSecret). This combination ensures credentials are protected, rotated automatically, and audited with minimal operational effort.
Option A is incorrect because RDS Proxy does not store or rotate credentials; it only retrieves them from Secrets Manager. Option C is incorrect because Systems Manager Parameter Store does not support native automatic rotation. Option E is incorrect because Automation runbooks are not the recommended mechanism for secrets rotation and add unnecessary complexity.
AWS CloudOps best practices strongly recommend Secrets Manager with Lambda-based rotation for database credential protection and compliance.
AWS Secrets Manager User Guide -- Automatic Rotation
Amazon RDS User Guide -- Credential Management
AWS SysOps Administrator Study Guide -- Secrets and Key Management
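As a sketch, the weekly schedule can be expressed through the RotationRules of the Secrets Manager rotate_secret API. The secret name and Lambda ARN below are hypothetical placeholders, not values from the scenario, and the API call itself is shown but not executed.

```python
# Request parameters for enabling weekly rotation on an RDS for MySQL
# secret. The SecretId and RotationLambdaARN are hypothetical examples.
rotation_request = {
    "SecretId": "prod/mysql/app-credentials",
    "RotationLambdaARN": (
        "arn:aws:lambda:us-east-1:123456789012:function:rds-mysql-rotation"
    ),
    "RotationRules": {
        # "rate(7 days)" rotates the secret one time every week.
        "ScheduleExpression": "rate(7 days)",
    },
}

# The actual call (not executed here) would be:
# import boto3
# boto3.client("secretsmanager").rotate_secret(**rotation_request)
```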
A company runs applications on Amazon EC2 instances. The company wants to ensure that SSH ports on the EC2 instances are never open. The company has enabled AWS Config and has set up the restricted-ssh AWS managed rule.
A CloudOps engineer must implement a solution to remediate SSH port access for noncompliant security groups.
What should the engineer do to meet this requirement with the MOST operational efficiency?
The AWS Cloud Operations and Governance documentation specifies that AWS Config can be paired with AWS Systems Manager Automation runbooks for automatic remediation of noncompliant resources.
For SSH restrictions, the restricted-ssh managed rule detects any security group allowing inbound traffic on port 22. To automatically remediate these findings, AWS provides the AWS-DisableIncomingSSHOnPort22 runbook. This runbook programmatically removes inbound rules that allow port 22 traffic from affected security groups.
This approach achieves continuous compliance with minimal human intervention. By contrast, sending notifications (Option A) does not enforce remediation, API-based scripts (Option C) add operational overhead, and manual remediation (Option D) violates automation best practices.
Therefore, the most efficient CloudOps solution is Option B, using AWS Config with the AWS-DisableIncomingSSHOnPort22 automation runbook for automatic, scalable enforcement.
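Wiring the managed rule to the runbook can be sketched with the AWS Config PutRemediationConfigurations API. The request shape follows the Config API, but treat the retry values as illustrative, and note that the GroupId parameter name is assumed from the runbook's input schema (a production setup would typically also pass an AutomationAssumeRole).

```python
# Automatic remediation for the restricted-ssh rule. When a security
# group is flagged noncompliant, AWS Config invokes the
# AWS-DisableIncomingSSHOnPort22 Automation runbook against it.
remediation = {
    "ConfigRuleName": "restricted-ssh",
    "TargetType": "SSM_DOCUMENT",
    "TargetId": "AWS-DisableIncomingSSHOnPort22",
    "Automatic": True,                 # remediate without manual approval
    "MaximumAutomaticAttempts": 3,
    "RetryAttemptSeconds": 60,
    "Parameters": {
        # Pass the noncompliant security group ID into the runbook.
        "GroupId": {"ResourceValue": {"Value": "RESOURCE_ID"}},
    },
}

# The actual call (not executed here) would be:
# import boto3
# boto3.client("config").put_remediation_configurations(
#     RemediationConfigurations=[remediation]
# )
```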
A company is running an ecommerce application on AWS. The application maintains many open but idle connections to an Amazon Aurora DB cluster. During times of peak usage, the database produces the following error message: "Too many connections." The database clients are also experiencing errors.
Which solution will resolve these errors?
The correct solution is B. Configure RDS Proxy, because RDS Proxy is specifically designed to manage and pool database connections for Amazon Aurora and Amazon RDS. AWS CloudOps documentation states that RDS Proxy reduces database load and prevents connection exhaustion by reusing existing connections and managing spikes in application demand.
In this scenario, the ecommerce application maintains many idle connections, which consume database connection slots even when not actively used. During peak traffic, new connections cannot be established, resulting in the "Too many connections" error. RDS Proxy sits between the application and the Aurora DB cluster, maintaining a smaller, efficient pool of database connections and multiplexing application requests over those connections.
Option A is incorrect because RCUs and WCUs apply to DynamoDB, not Aurora. Option C is incorrect because enhanced networking improves network throughput and latency but does not manage database connections. Option D is incorrect because changing instance types does not address idle connection buildup and can still result in connection exhaustion.
AWS CloudOps best practices recommend RDS Proxy for applications with connection-heavy workloads, unpredictable traffic patterns, or serverless components.
Amazon RDS User Guide -- RDS Proxy concepts and benefits
Amazon Aurora User Guide -- Managing database connections
AWS SysOps Administrator Study Guide -- Database reliability and scaling
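As a sketch, a proxy for the Aurora cluster could be created with the RDS create_db_proxy API. All ARNs, subnet IDs, and names below are hypothetical placeholders, and the call itself is shown but not executed.

```python
# Request parameters for placing RDS Proxy in front of the Aurora
# MySQL-compatible cluster. All identifiers are placeholders.
proxy_request = {
    "DBProxyName": "ecommerce-aurora-proxy",
    "EngineFamily": "MYSQL",           # Aurora MySQL-compatible
    "Auth": [{
        "AuthScheme": "SECRETS",       # credentials come from Secrets Manager
        "SecretArn": (
            "arn:aws:secretsmanager:us-east-1:123456789012:secret:aurora-creds"
        ),
        "IAMAuth": "DISABLED",
    }],
    "RoleArn": "arn:aws:iam::123456789012:role/rds-proxy-secrets-role",
    "VpcSubnetIds": ["subnet-0abc", "subnet-0def"],
    # Close connections the application leaves idle for 30 minutes so
    # they stop consuming pooled capacity.
    "IdleClientTimeout": 1800,
}

# The actual call (not executed here) would be:
# import boto3
# boto3.client("rds").create_db_proxy(**proxy_request)
```

The application then points its MySQL client at the proxy endpoint instead of the cluster endpoint; the proxy multiplexes those client connections over the smaller database connection pool.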
A company's ecommerce application is running on Amazon EC2 instances that are behind an Application Load Balancer (ALB). The instances are in an Auto Scaling group. Customers report that the website is occasionally down. When the website is down, it returns an HTTP 500 (server error) status code to customer browsers.
The Auto Scaling group's health check is configured for EC2 status checks, and the instances appear healthy.
Which solution will resolve the problem?
In this scenario, the EC2 instances pass their EC2 status checks, indicating that the operating system is responsive. However, the application hosted on the instance is failing intermittently, returning HTTP 500 errors. This demonstrates a discrepancy between the instance-level health and the application-level health.
According to AWS CloudOps best practices under Monitoring, Logging, Analysis, Remediation and Performance Optimization (SOA-C03 Domain 1), Auto Scaling groups should incorporate Elastic Load Balancing (ELB) health checks instead of relying solely on EC2 status checks. The ELB health check probes the application endpoint (for example, HTTP or HTTPS target group health checks), ensuring that the application itself is functioning correctly.
When an instance fails an ELB health check, Amazon EC2 Auto Scaling will automatically mark the instance as unhealthy and replace it with a new one, ensuring continuous availability and performance optimization.
Extract from AWS CloudOps (SOA-C03) Study Guide -- Domain 1:
"Implement monitoring and health checks using ALB and EC2 Auto Scaling integration. Application Load Balancer health checks allow Auto Scaling to terminate and replace instances that fail application-level health checks, ensuring consistent application performance."
Extract from AWS Auto Scaling Documentation:
"When you enable the ELB health check type for your Auto Scaling group, Amazon EC2 Auto Scaling considers both EC2 status checks and Elastic Load Balancing health checks to determine instance health. If an instance fails the ELB health check, it is automatically replaced."
Therefore, the correct answer is B: it ensures proper application-level monitoring and remediation using ALB-integrated ELB health checks, a core CloudOps operational practice for proactive incident response and availability assurance.
References (AWS CloudOps Verified Source Extracts):
AWS Certified CloudOps Engineer -- Associate (SOA-C03) Exam Guide: Domain 1 -- Monitoring, Logging, and Remediation.
AWS Auto Scaling User Guide: Health checks for Auto Scaling instances (Elastic Load Balancing integration).
AWS Well-Architected Framework -- Operational Excellence and Reliability Pillars.
AWS Elastic Load Balancing Developer Guide -- Target group health checks and monitoring.
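Switching the group to ELB health checks can be sketched with the Auto Scaling update_auto_scaling_group API. The group name and grace period are illustrative placeholders, and the call is shown but not executed.

```python
# Enable ELB (application-level) health checks on the Auto Scaling
# group. The group name is a hypothetical placeholder.
health_check_update = {
    "AutoScalingGroupName": "ecommerce-web-asg",
    # Consider ALB target group health, not just EC2 status checks.
    "HealthCheckType": "ELB",
    # Give newly launched instances time to boot and pass the ALB
    # health check before Auto Scaling evaluates them.
    "HealthCheckGracePeriod": 300,
}

# The actual call (not executed here) would be:
# import boto3
# boto3.client("autoscaling").update_auto_scaling_group(**health_check_update)
```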