
Amazon SCS-C02 Exam - Topic 8 Question 21 Discussion

Actual exam question for Amazon's SCS-C02 exam
Question #: 21
Topic #: 8

A company uses Amazon EC2 instances to host frontend services behind an Application Load Balancer. Amazon Elastic Block Store (Amazon EBS) volumes are attached to the EC2 instances. The company uses Amazon S3 buckets to store large files for images and music.

The company has implemented a security architecture on AWS to prevent, identify, and isolate potential ransomware attacks. The company now wants to further reduce risk.

A security engineer must develop a disaster recovery solution that can recover to normal operations if an attacker bypasses preventive and detective controls. The solution must meet an RPO of 1 hour.

Which solution will meet these requirements?

Suggested Answer: A

Using AWS Backup to back up the EC2 instances (including their attached EBS volumes) and the S3 buckets every hour satisfies the 1-hour RPO. Storing CloudFormation templates that replicate the existing architecture, alongside application configuration code in AWS CodeCommit, lets the company rebuild from known-good infrastructure definitions if an attacker bypasses the preventive and detective controls. The other options fall short of the requirement: daily backups and EBS snapshots every 4 hours both exceed a 1-hour RPO, and centralized logging or Security Hub recovery procedures alone do not restore lost data.
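The RPO arithmetic that separates the options can be sanity-checked with a short sketch. This is illustrative only; the schedule labels below are paraphrased from the options quoted in the discussion, not official exam text:

```python
# Worst-case RPO for a periodic backup schedule equals the backup interval:
# data written just after one backup can be lost until the next one completes.

def worst_case_rpo_hours(backup_interval_hours: float) -> float:
    """Worst-case data-loss window for backups taken every `backup_interval_hours`."""
    return backup_interval_hours

def meets_rpo(backup_interval_hours: float, rpo_hours: float) -> bool:
    """A schedule meets the RPO only if its interval does not exceed the RPO."""
    return worst_case_rpo_hours(backup_interval_hours) <= rpo_hours

# Backup intervals mentioned in the options (labels are paraphrases):
schedules = {
    "hourly AWS Backup (option A)": 1,
    "daily backups (option B)": 24,
    "EBS snapshots every 4 hours (option D)": 4,
}

for name, interval in schedules.items():
    print(f"{name}: meets 1-hour RPO -> {meets_rpo(interval, rpo_hours=1)}")
```

Only the hourly schedule keeps the worst-case data-loss window within the required hour.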


Contribute your Thoughts:

Kerry
3 months ago
Not sure if GuardDuty alone is enough for recovery.
upvoted 0 times
...
Ronnie
3 months ago
Definitely agree with using AWS Backup for hourly backups.
upvoted 0 times
...
Bernardo
3 months ago
Surprised that EBS snapshots every 4 hours is even an option!
upvoted 0 times
...
Bulah
4 months ago
I think daily backups are too risky for ransomware.
upvoted 0 times
...
Catherin
4 months ago
Option A's backups every hour sound solid!
upvoted 0 times
...
Sena
4 months ago
Option D mentions EBS snapshots every 4 hours, which doesn't align with the 1-hour RPO. I think that one is definitely out.
upvoted 0 times
...
Kimberlie
4 months ago
I feel like we practiced a question similar to this where using CloudFormation templates was highlighted as a best practice for disaster recovery. Maybe option A or C?
upvoted 0 times
...
Lonna
4 months ago
I'm not entirely sure, but I think option B's daily backups wouldn't meet the RPO requirement of 1 hour. That seems risky.
upvoted 0 times
...
Hillary
5 months ago
I remember we discussed the importance of having backups every hour for critical systems, so option A seems like it could be the right choice.
upvoted 0 times
...
Kristeen
5 months ago
I've got a good feeling about this. The question provides a lot of context, and I think Option A is the way to go. Frequent backups, replicable architecture, and version control - that should do the trick.
upvoted 0 times
...
Anastacia
5 months ago
I'm not too sure about this one. There are a lot of moving parts with the EC2 instances, EBS volumes, and S3 buckets. I'll need to carefully consider the pros and cons of each option.
upvoted 0 times
...
Shizue
5 months ago
Okay, let's break this down step-by-step. The key requirements are a 1-hour RPO and the ability to recover from a ransomware attack. I think I have a good strategy in mind.
upvoted 0 times
...
Francisca
5 months ago
Hmm, I'm a bit confused about the different AWS services mentioned. I'll need to review the details of each one to figure out the best approach.
upvoted 0 times
...
Corinne
5 months ago
This looks like a pretty straightforward disaster recovery question. I think I can handle this one.
upvoted 0 times
...
Clorinda
10 months ago
C looks good, but I'm a bit worried about the manual intervention required for the recovery procedures in Security Hub. Automation is key for a fast RPO.
upvoted 0 times
Clarinda
9 months ago
D) Create EBS snapshots every 4 hours. Enable Amazon GuardDuty Malware Protection. Create automation to immediately restore the most recent snapshot for any EC2 instances that produce an Execution:EC2/MaliciousFile finding in GuardDuty.
upvoted 0 times
...
Markus
9 months ago
C) Use Amazon Security Lake to create a centralized data lake for AWS CloudTrail logs and VPC flow logs. Use the logs for automated response. Enable AWS Security Hub to establish a single location for recovery procedures. Create AWS CloudFormation templates that replicate existing architecture components. Use AWS CodeCommit to store the CloudFormation templates alongside application configuration code.
upvoted 0 times
...
Cordelia
10 months ago
A) Use AWS Backup to create backups of the EC2 instances and S3 buckets every hour. Create AWS CloudFormation templates that replicate existing architecture components. Use AWS CodeCommit to store the CloudFormation templates alongside application configuration code.
upvoted 0 times
...
...
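Option D's automated restore would typically hinge on matching the GuardDuty finding in Amazon EventBridge. A minimal sketch of such an event pattern follows; the function name is illustrative, and the restore logic itself (the EventBridge target) is omitted:

```python
import json

def guardduty_malicious_file_pattern() -> str:
    """Build an EventBridge event pattern that matches GuardDuty
    Execution:EC2/MaliciousFile findings, which option D's automation
    would use as the trigger for restoring the latest snapshot."""
    pattern = {
        "source": ["aws.guardduty"],
        "detail-type": ["GuardDuty Finding"],
        # GuardDuty publishes the finding type under detail.type
        "detail": {"type": ["Execution:EC2/MaliciousFile"]},
    }
    return json.dumps(pattern)

print(guardduty_malicious_file_pattern())
```

Even with this trigger wired up, the snapshots behind it are taken only every 4 hours, so the automation cannot rescue option D from missing the 1-hour RPO.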
Earnestine
10 months ago
Haha, D is like the 'nuclear option' - just blow away any infected instances and restore the latest snapshot. I wonder if that would actually work in a real-world scenario.
upvoted 0 times
...
Alba
10 months ago
I'm not a fan of B - relying on logs alone for automated response feels a bit risky. I'd prefer a solution that has more proactive recovery capabilities.
upvoted 0 times
...
Chara
11 months ago
I'm not sure, but option D also seems like a good choice with EBS snapshots every 4 hours and GuardDuty Malware Protection.
upvoted 0 times
...
Jess
11 months ago
I disagree, I believe option C is better as it uses Security Hub for recovery procedures and creates a centralized data lake for logs.
upvoted 0 times
...
Cyril
11 months ago
Option A seems like the most comprehensive solution, with regular backups and version control for the infrastructure. I like how it covers both EC2 and S3 components.
upvoted 0 times
Lourdes
9 months ago
Definitely, having version control for infrastructure is key in case of an attack.
upvoted 0 times
...
Roosevelt
9 months ago
It's good that they are also using CloudFormation templates for easy replication.
upvoted 0 times
...
Micheal
9 months ago
I agree, having backups every hour is crucial for minimizing data loss.
upvoted 0 times
...
Mari
10 months ago
Option A does seem like a solid choice for disaster recovery.
upvoted 0 times
...
...
Mari
11 months ago
I think option A is the best choice because it creates backups every hour and replicates the architecture components.
upvoted 0 times
...
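The hourly schedule in option A maps onto an AWS Backup plan document. A sketch of what such a plan could look like; the plan and vault names are made up, and the boto3 call that would submit it is shown only as a comment:

```python
def hourly_backup_plan(plan_name: str = "frontend-hourly") -> dict:
    """AWS Backup plan document taking hourly backups (worst-case RPO: 1 hour).
    Plan, rule, and vault names here are illustrative, not from the exam."""
    return {
        "BackupPlanName": plan_name,
        "Rules": [
            {
                "RuleName": "hourly",
                "TargetBackupVaultName": "ransomware-dr-vault",  # hypothetical vault
                # AWS cron format: minute hour day-of-month month day-of-week year
                "ScheduleExpression": "cron(0 * * * ? *)",  # top of every hour
                "Lifecycle": {"DeleteAfterDays": 35},
            }
        ],
    }

# Submitting the plan requires AWS credentials; shown for illustration only:
# import boto3
# boto3.client("backup").create_backup_plan(BackupPlan=hourly_backup_plan())
print(hourly_backup_plan()["Rules"][0]["ScheduleExpression"])
```

Pairing a plan like this with versioned CloudFormation templates in CodeCommit covers both halves of the requirement: hourly restore points for the data and reproducible templates for the infrastructure.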
