A DevOps engineer is setting up an Amazon Elastic Container Service (Amazon ECS) blue/green deployment for an application by using AWS CodeDeploy and AWS CloudFormation. During the deployment window, the application must be highly available and CodeDeploy must shift 10% of traffic to a new version of the application every minute until all traffic is shifted.
Which configuration should the DevOps engineer add in the CloudFormation template to meet these requirements?
This corresponds to Option B: Add the AWS::CodeDeployBlueGreen transform and the AWS::CodeDeploy::BlueGreen hook parameter with the CodeDeployDefault.ECSLinear10PercentEvery1Minutes deployment configuration.
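A minimal sketch of what this looks like in the template, assuming placeholder logical IDs such as ECSDemoService, ALBListenerProdTraffic, and the blue/green task definition, task set, and target group resources; the hook's TrafficRoutingConfig expresses the same linear 10-percent-per-minute shift as the named deployment configuration:
Transform:
  - 'AWS::CodeDeployBlueGreen'
Hooks:
  CodeDeployBlueGreenHook:
    Type: 'AWS::CodeDeploy::BlueGreen'
    Properties:
      TrafficRoutingConfig:
        Type: TimeBasedLinear
        TimeBasedLinear:
          StepPercentage: 10   # shift 10% of production traffic per increment
          BakeTimeMins: 1      # wait 1 minute between increments
      Applications:
        - Target:
            Type: 'AWS::ECS::Service'
            LogicalID: ECSDemoService
          ECSAttributes:
            TaskDefinitions: [BlueTaskDefinition, GreenTaskDefinition]
            TaskSets: [BlueTaskSet, GreenTaskSet]
            TrafficRouting:
              ProdTrafficRoute:
                Type: 'AWS::ElasticLoadBalancingV2::Listener'
                LogicalID: ALBListenerProdTraffic
              TargetGroups: [ALBTargetGroupBlue, ALBTargetGroupGreen]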
A company releases a new application in a new AWS account. The application includes an AWS Lambda function that processes messages from an Amazon Simple Queue Service (Amazon SQS) standard queue. The Lambda function stores the results in an Amazon S3 bucket for further downstream processing. The Lambda function needs to process the messages within a specific period of time after the messages are published. The Lambda function has a batch size of 10 messages and takes a few seconds to process a batch of messages.
As load increases on the application's first day of service, messages in the queue accumulate at a greater rate than the Lambda function can process the messages. Some messages miss the required processing timelines. The logs show that many messages in the queue have data that is not valid. The company needs to meet the timeline requirements for messages that have valid data.
Which solution will meet these requirements?
Step 1: Reporting Failed Batch Items. Configuring the Lambda function to report failed batch items (partial batch responses) means that when some messages in a batch fail, only those messages are returned to the queue for retry instead of the whole batch being retried.
Action: Configure the Lambda function to return the IDs of failed messages in its batch response.
Why: Without partial batch responses, one invalid message causes the entire batch, including valid messages, to be retried, wasting processing capacity and delaying valid messages.
Step 2: Using an SQS Dead-Letter Queue (DLQ). Configuring a dead-letter queue (DLQ) for SQS ensures that messages with invalid data, or messages that repeatedly fail processing, are moved to the DLQ. This prevents such messages from clogging the queue and allows the system to focus on processing valid messages.
Action: Configure an SQS dead-letter queue for the main queue.
Why: A DLQ helps isolate problematic messages, preventing them from continuously reappearing in the queue and causing processing delays for valid messages.
Step 3: Maintaining the Lambda Function's Batch Size Keeping the current batch size allows the Lambda function to continue processing multiple messages at once. By addressing the failed items separately, there's no need to increase or reduce the batch size.
Action: Maintain the Lambda function's current batch size.
Why: Changing the batch size is unnecessary if the invalid messages are properly handled by reporting failed items and using a DLQ.
This corresponds to Option D: Keep the Lambda function's batch size the same. Configure the Lambda function to report failed batch items. Configure an SQS dead-letter queue.
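A minimal CloudFormation sketch of the pieces option D describes (resource names, the maxReceiveCount value, and the ProcessorFunction resource are illustrative assumptions):
MainQueue:
  Type: AWS::SQS::Queue
  Properties:
    RedrivePolicy:
      deadLetterTargetArn: !GetAtt DeadLetterQueue.Arn
      maxReceiveCount: 3                     # move a message to the DLQ after 3 failed receives
DeadLetterQueue:
  Type: AWS::SQS::Queue
ProcessorMapping:
  Type: AWS::Lambda::EventSourceMapping
  Properties:
    EventSourceArn: !GetAtt MainQueue.Arn
    FunctionName: !Ref ProcessorFunction     # assumed Lambda function resource
    BatchSize: 10                            # batch size stays the same
    FunctionResponseTypes:
      - ReportBatchItemFailures              # enable partial batch responses
With this in place, the function returns the IDs of failed messages in a batchItemFailures response, so only those messages are retried; after repeated failures, SQS moves them to the DLQ instead of letting them delay valid messages.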
A company wants to use AWS Systems Manager documents to bootstrap physical laptops for developers. The bootstrap code is stored in GitHub. A DevOps engineer has already created a Systems Manager activation and has installed the Systems Manager Agent on all the laptops by using the activation code and activation ID.
Which set of steps should be taken next?
Configure the Systems Manager document to use the aws:downloadContent plugin with a sourceType of GitHub and sourceInfo containing the repository details:
The aws-downloadContent plugin can download content from various sources, including GitHub, which is necessary for bootstrapping the laptops with the code stored in the GitHub repository.
schemaVersion: '2.2'
description: 'Download and run bootstrap script from GitHub'
mainSteps:
  # Download the bootstrap script from the GitHub repository.
  - action: aws:downloadContent
    name: downloadBootstrapScript
    inputs:
      sourceType: GitHub
      sourceInfo: '{"owner": "my-org", "repository": "my-repo", "path": "scripts/bootstrap.sh", "getOptions": "branch:main"}'
      destinationPath: /tmp   # the file is placed in this directory as bootstrap.sh
  # Make the script executable and run it.
  - action: aws:runShellScript
    name: runBootstrapScript
    inputs:
      runCommand:
        - chmod +x /tmp/bootstrap.sh
        - /tmp/bootstrap.sh
This setup ensures that the bootstrap code is downloaded from GitHub and executed on the laptops using Systems Manager.
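One way to run the document on the hybrid-activated laptops is a State Manager association; here is a minimal sketch, assuming the document above is registered as an AWS::SSM::Document resource named BootstrapDocument and that the laptops have registered as managed instances (the mi-... ID is a placeholder):
BootstrapAssociation:
  Type: AWS::SSM::Association
  Properties:
    Name: !Ref BootstrapDocument      # assumed AWS::SSM::Document resource
    Targets:
      - Key: InstanceIds
        Values:
          - mi-0123456789abcdef0      # placeholder managed-instance ID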
A company has an AWS Control Tower landing zone. The company's DevOps team creates a workload OU. A development OU and a production OU are nested under the workload OU. The company grants users full access to the company's AWS accounts to deploy applications.
The DevOps team needs to allow only a specific management IAM role to manage the IAM roles and policies of any AWS accounts in only the production OU.
Which combination of steps will meet these requirements? (Select TWO.)
You need to understand how SCP inheritance works in AWS Organizations. Deny statements behave differently from Allow statements.
An Allow does not automatically flow down to children: for an action to be permitted in an account, it must be allowed by the SCPs attached at every level from the root down to that account.
A Deny at any level always applies to every OU and account beneath it.
That's why there is always an SCP attached to the root (FullAWSAccess) to allow everything by default. If you limit this root policy, the whole organization is limited, no matter what the other policies say for the other OUs. So it's not A. It's not D because it restricts the wrong OU.
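Accordingly, the remaining options amount to a deny-based SCP attached to the production OU. A minimal sketch of such a policy, shown here as a CloudFormation AWS::Organizations::Policy resource (the OU ID and the management-role name are placeholders):
DenyIamExceptManagementRole:
  Type: AWS::Organizations::Policy
  Properties:
    Name: deny-iam-except-management-role
    Type: SERVICE_CONTROL_POLICY
    TargetIds:
      - ou-abcd-11111111              # placeholder: the production OU ID
    Content:
      Version: '2012-10-17'
      Statement:
        - Sid: DenyIamUnlessManagementRole
          Effect: Deny
          Action: 'iam:*'
          Resource: '*'
          Condition:
            StringNotLike:
              aws:PrincipalArn: 'arn:aws:iam::*:role/management-role'
Because a Deny always applies to everything beneath its attachment point, attaching this to the production OU blocks IAM changes in production accounts for every principal except the management role, while leaving the development OU untouched.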
A company uses Amazon EC2 as its primary compute platform. A DevOps team wants to audit the company's EC2 instances to check whether any prohibited applications have been installed on the EC2 instances.
Which solution will meet these requirements with the MOST operational efficiency?
* Configure AWS Systems Manager on Each Instance:
AWS Systems Manager provides a unified interface for managing AWS resources. Install the Systems Manager agent on each EC2 instance to enable inventory management and other features.
* Use AWS Systems Manager Inventory:
Systems Manager Inventory collects metadata about your instances and the software installed on them. This data includes information about applications, network configurations, and more.
Enable Systems Manager Inventory on all EC2 instances to gather detailed information about installed applications.
* Use Systems Manager Resource Data Sync to Synchronize and Store Findings in an Amazon S3 Bucket:
Resource Data Sync aggregates inventory data from multiple accounts and regions into a single S3 bucket, making it easier to query and analyze the data.
Configure Resource Data Sync to automatically transfer inventory data to an S3 bucket for centralized storage (a sketch of this wiring appears after this list).
* Create an AWS Lambda Function that Runs When New Objects are Added to the S3 Bucket:
Use an S3 event to trigger a Lambda function whenever new inventory data is added to the S3 bucket.
The Lambda function can parse the inventory data and check for the presence of prohibited applications.
* Configure the Lambda Function to Identify Prohibited Applications:
The Lambda function should be programmed to scan the inventory data for any known prohibited applications and generate alerts or take appropriate actions if such applications are found.
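A minimal CloudFormation sketch of the inventory collection, data sync, and S3 trigger (resource names are illustrative, and the AuditFunction Lambda resource is assumed):
InventoryCollection:
  Type: AWS::SSM::Association
  Properties:
    Name: AWS-GatherSoftwareInventory   # AWS-managed inventory document
    ScheduleExpression: rate(1 day)
    Targets:
      - Key: InstanceIds
        Values: ['*']                   # all managed instances
InventoryDataSync:
  Type: AWS::SSM::ResourceDataSync
  Properties:
    SyncName: inventory-audit-sync
    BucketName: !Ref InventoryBucket
    BucketRegion: !Ref AWS::Region
    SyncFormat: JsonSerDe               # format used for S3 syncs
InventoryBucket:
  Type: AWS::S3::Bucket
  Properties:
    NotificationConfiguration:
      LambdaConfigurations:
        - Event: s3:ObjectCreated:*
          Function: !GetAtt AuditFunction.Arn   # assumed audit Lambda resource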
Example Lambda function in Python:
import json
import boto3

def lambda_handler(event, context):
    s3 = boto3.client('s3')
    # The S3 event identifies the newly synced inventory object.
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']
    response = s3.get_object(Bucket=bucket, Key=key)
    inventory_data = json.loads(response['Body'].read().decode('utf-8'))
    prohibited_apps = ['app1', 'app2']
    # Assumes the object lists instances and their installed applications;
    # adjust the parsing to match the layout your resource data sync produces.
    for instance in inventory_data['Instances']:
        for app in instance['Applications']:
            if app['Name'] in prohibited_apps:
                # Send a notification or take remediation action here.
                print(f"Prohibited application found: {app['Name']} "
                      f"on instance {instance['InstanceId']}")
    return {'statusCode': 200, 'body': json.dumps('Check completed')}
By leveraging AWS Systems Manager Inventory, Resource Data Sync, and Lambda, this solution provides an efficient and automated way to audit EC2 instances for prohibited applications.