Welcome to Pass4Success


Amazon Exam DOP-C02 Topic 6 Question 29 Discussion

Actual exam question for Amazon's DOP-C02 exam
Question #: 29
Topic #: 6
[All DOP-C02 Questions]

A DevOps engineer manages a large commercial website that runs on Amazon EC2. The website uses Amazon Kinesis Data Streams to collect and process web logs. The DevOps engineer manages the Kinesis consumer application, which also runs on Amazon EC2.

Sudden increases of data cause the Kinesis consumer application to fall behind, and the Kinesis data streams drop records before the records can be processed. The DevOps engineer must implement a solution to improve stream handling.

Which solution meets these requirements with the MOST operational efficiency?

Suggested Answer: A

* Configure AWS Systems Manager on Each Instance:

AWS Systems Manager provides a unified interface for managing AWS resources. Install the Systems Manager agent on each EC2 instance to enable inventory management and other features.

* Use AWS Systems Manager Inventory:

Systems Manager Inventory collects metadata about your instances and the software installed on them. This data includes information about applications, network configurations, and more.

Enable Systems Manager Inventory on all EC2 instances to gather detailed information about installed applications.
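Enabling Inventory across a fleet is typically done by creating a State Manager association with the AWS-managed AWS-GatherSoftwareInventory document. A minimal boto3 sketch of that call (the daily schedule and the all-instances target are illustrative assumptions, not requirements):

```python
def build_inventory_association_params(schedule="rate(1 day)"):
    # Parameters for an association that tells SSM Agent to collect
    # software inventory on every managed instance on a schedule.
    return {
        "Name": "AWS-GatherSoftwareInventory",  # AWS-managed inventory document
        "Targets": [{"Key": "InstanceIds", "Values": ["*"]}],  # all managed instances
        "ScheduleExpression": schedule,
    }

def enable_inventory(region="us-east-1"):
    import boto3  # imported lazily so the builder above has no dependencies
    ssm = boto3.client("ssm", region_name=region)
    return ssm.create_association(**build_inventory_association_params())
```

Scope the `Targets` filter down (for example, by tag) if only a subset of instances should report inventory.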

* Use Systems Manager Resource Data Sync to Synchronize and Store Findings in an Amazon S3 Bucket:

Resource Data Sync aggregates inventory data from multiple accounts and regions into a single S3 bucket, making it easier to query and analyze the data.

Configure Resource Data Sync to automatically transfer inventory data to an S3 bucket for centralized storage.
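A Resource Data Sync is created with a single API call that names the destination bucket. A hedged boto3 sketch (the sync name, bucket name, and prefix here are placeholders; `JsonSerDe` is the sync format the API expects):

```python
def build_resource_data_sync_params(sync_name, bucket, bucket_region, prefix="inventory"):
    # S3Destination tells SSM where to aggregate inventory data.
    return {
        "SyncName": sync_name,
        "S3Destination": {
            "BucketName": bucket,
            "Prefix": prefix,
            "SyncFormat": "JsonSerDe",  # format required by the API
            "Region": bucket_region,
        },
    }

def create_sync(bucket, bucket_region, sync_name="inventory-sync"):
    import boto3  # imported lazily so the builder above has no dependencies
    ssm = boto3.client("ssm", region_name=bucket_region)
    return ssm.create_resource_data_sync(
        **build_resource_data_sync_params(sync_name, bucket, bucket_region)
    )
```

The bucket policy must also allow the SSM service to write objects under the chosen prefix.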

* Create an AWS Lambda Function that Runs When New Objects are Added to the S3 Bucket:

Use an S3 event to trigger a Lambda function whenever new inventory data is added to the S3 bucket.

The Lambda function can parse the inventory data and check for the presence of prohibited applications.

* Configure the Lambda Function to Identify Prohibited Applications:

The Lambda function should be programmed to scan the inventory data for any known prohibited applications and generate alerts or take appropriate actions if such applications are found.

Example Lambda function in Python:

import json
import boto3

def lambda_handler(event, context):
    s3 = boto3.client('s3')
    # Identify the inventory object that triggered this invocation.
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']
    response = s3.get_object(Bucket=bucket, Key=key)
    inventory_data = json.loads(response['Body'].read().decode('utf-8'))

    prohibited_apps = ['app1', 'app2']
    for instance in inventory_data['Instances']:
        for app in instance['Applications']:
            if app['Name'] in prohibited_apps:
                # Send notification or take action
                print(f"Prohibited application found: {app['Name']} on instance {instance['InstanceId']}")

    return {'statusCode': 200, 'body': json.dumps('Check completed')}

By leveraging AWS Systems Manager Inventory, Resource Data Sync, and Lambda, this solution provides an efficient and automated way to audit EC2 instances for prohibited applications.
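The core scan in the handler above can be factored into a pure function so it can be unit-tested without S3 access. The `Instances`/`Applications` shape follows the example above and is an assumption about how the synced inventory data has been aggregated:

```python
def find_prohibited(inventory_data, prohibited_apps):
    """Return (instance_id, app_name) pairs for any prohibited apps found."""
    hits = []
    for instance in inventory_data.get("Instances", []):
        for app in instance.get("Applications", []):
            if app["Name"] in prohibited_apps:
                hits.append((instance["InstanceId"], app["Name"]))
    return hits

# Example with sample inventory data:
sample = {
    "Instances": [
        {"InstanceId": "i-0abc", "Applications": [{"Name": "app1"}, {"Name": "nginx"}]},
        {"InstanceId": "i-0def", "Applications": [{"Name": "app3"}]},
    ]
}
print(find_prohibited(sample, {"app1", "app2"}))  # [('i-0abc', 'app1')]
```

The Lambda handler then only has to fetch and decode the S3 object and pass the result to this function, which keeps the notification logic easy to test.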


AWS Systems Manager Inventory

AWS Systems Manager Resource Data Sync

S3 Event Notifications

AWS Lambda

Contribute your Thoughts:

Luis
17 days ago
Oh, man, can you imagine the look on the DevOps engineer's face when those logs started piling up? I'd be reaching for the coffee and the antacids!
upvoted 0 times
...
Jamal
20 days ago
I don't know, option D just seems like a band-aid solution. Increasing the number of shards might help in the short term, but it doesn't address the root cause of the problem.
upvoted 0 times
Sylvia
4 days ago
A: I think option A is the best solution. Storing the logs in Amazon S3 and processing the data with Amazon EMR seems like a more efficient approach.
upvoted 0 times
...
...
Felicidad
30 days ago
Haha, I bet the DevOps engineer is losing their mind trying to keep up with all those logs! Option A sounds like a lot of work, but it could provide some serious insights.
upvoted 0 times
...
Shonda
1 month ago
I'm leaning towards option C. Running the consumer as a Lambda function could provide more scalability and reduce the management overhead.
upvoted 0 times
Reed
22 days ago
Option C sounds like a good choice. Running as a Lambda function could help with scalability.
upvoted 0 times
...
...
Mira
2 months ago
That's a valid point, but I still think option A is better in terms of long-term efficiency and scalability.
upvoted 0 times
...
Sherell
2 months ago
Hmm, option B seems like the most straightforward approach. Scaling the consumer application and increasing the Kinesis stream retention should help handle the sudden data spikes.
upvoted 0 times
Jesusita
8 days ago
True, increasing the number of shards could definitely help with processing the data faster.
upvoted 0 times
...
Yuki
23 days ago
B sounds good, but D might be more effective in handling the sudden data spikes.
upvoted 0 times
...
Allene
25 days ago
D) Increase the number of shards in the Kinesis data streams to increase the overall throughput so that the consumer application processes the data faster.
upvoted 0 times
...
Isadora
29 days ago
Yes, scaling the consumer application and increasing the retention period of the Kinesis data streams should improve stream handling efficiency.
upvoted 0 times
...
Willow
1 month ago
I agree, option B seems like a practical solution to handle the sudden data spikes.
upvoted 0 times
...
Charolette
1 month ago
B) Horizontally scale the Kinesis consumer application by adding more EC2 instances based on the Amazon CloudWatch GetRecords IteratorAgeMilliseconds metric. Increase the retention period of the Kinesis data streams.
upvoted 0 times
...
...
Ezekiel
2 months ago
I disagree, I believe option B is more efficient. Scaling the Kinesis consumer application based on CloudWatch metrics will help handle sudden increases in data.
upvoted 0 times
...
Mira
2 months ago
I think option A is the best solution because storing logs in Amazon S3 will ensure durability and using Amazon EMR for processing will improve efficiency.
upvoted 0 times
...
