A company uses an Amazon Simple Queue Service (Amazon SQS) queue and Amazon EC2 instances in an Auto Scaling group with target tracking for a web application. The company collects the ASGAverageNetworkIn metric but notices that instances do not scale fast enough during peak traffic. There are a large number of SQS messages accumulating in the queue.
A CloudOps engineer must reduce the number of SQS messages during peak periods.
Which solution will meet this requirement?
According to the AWS Cloud Operations and Auto Scaling documentation, scaling applications that consume Amazon SQS messages should be driven by queue backlog per instance, not by general system metrics such as network traffic or CPU.
The correct approach is to calculate a custom metric using CloudWatch metric math that divides the SQS metric ApproximateNumberOfMessagesVisible by the number of active EC2 instances in the Auto Scaling group. This "backlog per instance" value represents the average number of messages waiting to be processed by each instance.
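The metric math described above can be sketched as a CloudWatch GetMetricData-style query that divides the visible-message count by the in-service instance count. The queue name and Auto Scaling group name below are placeholder values, and the metric IDs (`m1`, `m2`, `e1`) are arbitrary labels:

```python
# Sketch of a CloudWatch metric math query computing backlog per instance.
# Queue and Auto Scaling group names are placeholders, not real resources.
def backlog_per_instance_query(queue_name: str, asg_name: str) -> list:
    return [
        {
            # m1: total visible messages in the SQS queue
            "Id": "m1",
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/SQS",
                    "MetricName": "ApproximateNumberOfMessagesVisible",
                    "Dimensions": [{"Name": "QueueName", "Value": queue_name}],
                },
                "Period": 60,
                "Stat": "Sum",
            },
            "ReturnData": False,
        },
        {
            # m2: number of in-service instances in the Auto Scaling group
            "Id": "m2",
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/AutoScaling",
                    "MetricName": "GroupInServiceInstances",
                    "Dimensions": [
                        {"Name": "AutoScalingGroupName", "Value": asg_name}
                    ],
                },
                "Period": 60,
                "Stat": "Average",
            },
            "ReturnData": False,
        },
        {
            # e1: the metric math expression the scaling policy tracks
            "Id": "e1",
            "Expression": "m1 / m2",
            "Label": "Backlog per instance",
            "ReturnData": True,
        },
    ]

query = backlog_per_instance_query("my-queue", "my-asg")
```

Only `e1` returns data, so the target tracking policy sees a single time series: the average backlog each instance must work through.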
Then, the CloudOps engineer can create a target tracking policy that automatically scales out or in based on maintaining a desired backlog threshold. This approach ensures dynamic, workload-driven scaling behavior that reacts in near real time to message volume.
Step and simple scaling (Options C and D) require manual thresholds and do not automatically balance the load per instance.
Thus, Option B (using CloudWatch metric math to define queue backlog per instance for target tracking) is the most effective and AWS-recommended CloudOps practice.