Welcome to Pass4Success


Amazon PAS-C01 Exam - Topic 4 Question 44 Discussion

Actual exam question for Amazon's PAS-C01 exam
Question #: 44
Topic #: 4

Business users are reporting timeouts during periods of peak query activity on an enterprise SAP HANA data mart. An SAP system administrator has discovered that, at peak volume, CPU utilization rapidly increases to 100% for extended periods on the x1.32xlarge Amazon EC2 instance where the database is installed. However, the SAP HANA database occupies only 1,120 GiB of the available 1,952 GiB on the instance, and I/O wait times are not increasing. Extensive query tuning and system tuning have not resolved this performance problem.

Which solutions should the SAP system administrator use to improve performance? (Select TWO.)

Suggested Answer: C, E
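Several commenters below debate whether this is a memory or a CPU problem. The arithmetic in the question itself already rules memory out; here is a minimal sketch of that check (the two figures come from the question, and the interpretive comments reflect the reasoning in the thread, not official guidance):

```python
# Figures from the question: x1.32xlarge instance memory and current SAP HANA usage.
total_memory_gib = 1952
used_memory_gib = 1120

memory_utilization = used_memory_gib / total_memory_gib
print(f"Memory utilization: {memory_utilization:.0%}")

# With memory at roughly 57% of capacity and I/O wait times flat, sustained
# 100% CPU points at a compute bottleneck rather than memory pressure -- which
# is why lowering global_allocation_limit (capping memory) would not help.
```

This is why the discussion converges on adding compute capacity (more vCPUs, or a scale-out architecture) rather than tuning memory limits.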

Contribute your Thoughts:

Glynda
3 months ago
Wait, why not just optimize the queries more instead of changing instances?
upvoted 0 times
Erinn
3 months ago
Moving to a scale-out architecture seems like a solid plan!
upvoted 0 times
Vesta
3 months ago
Not sure if reducing the global_allocation_limit will help much.
upvoted 0 times
Kelvin
4 months ago
Definitely need more vCPUs for better performance!
upvoted 0 times
Bette
4 months ago
Sounds like a classic case of CPU bottleneck.
upvoted 0 times
Mitsue
4 months ago
I wonder if changing the EBS volume type would really make a difference. It seems like a good option, but I’m not confident it addresses the CPU bottleneck directly.
upvoted 0 times
Maryrose
4 months ago
I feel like we had a similar question about scaling out in our study group. Moving to a scale-out architecture could definitely distribute the load better.
upvoted 0 times
Tesha
4 months ago
I'm not entirely sure, but I think reducing the global_allocation_limit might not be the best choice since the database isn't fully utilizing the available memory.
upvoted 0 times
Tammi
5 months ago
I remember we discussed the importance of CPU resources in our last practice session. Maybe migrating to a higher memory instance could help with the performance issues?
upvoted 0 times
Tyra
5 months ago
This is a tough one, but I think I'd start by trying to optimize the queries and the database configuration. Reducing the global_allocation_limit seems like it could be a risky move, and I'm not sure a scale-out setup is necessary here. I'd focus on getting the most out of the current instance first before looking at a hardware upgrade.
upvoted 0 times
Cristal
5 months ago
Okay, I think I've got a handle on this. The key here is that the CPU is the bottleneck, not the memory. So increasing the instance size or going to a scale-out setup won't necessarily fix the problem. I'd start by looking at the queries and seeing if there's any way to optimize them. And if that doesn't work, migrating to a compute-optimized instance type could be the way to go.
upvoted 0 times
Arlene
5 months ago
This seems like a tricky performance issue. I'd start by looking at the CPU utilization - if it's maxing out at 100% during peak times, that's likely the root cause of the timeouts. Migrating to a larger instance with more vCPUs could help, but I'd also want to explore other options like tuning the database parameters.
upvoted 0 times
Tesha
5 months ago
Hmm, I'm a bit confused by this one. The database is only using about half the available memory, so I'm not sure why the CPU would be maxing out. Maybe there's an issue with the queries or the way the data is indexed? Reducing the global_allocation_limit seems like it could be risky, but moving to a scale-out architecture might be worth considering.
upvoted 0 times
Keneth
5 months ago
I recall a discussion that said if the response does not match the stimulus, it should be specific reinforcement. So I might go with D, but I'm not entirely sure.
upvoted 0 times
Elfriede
1 year ago
Hey, at least they're not reporting timeouts during periods of peak query activity on a non-enterprise SAP HANA data mart. That would be just plain embarrassing.
upvoted 0 times
Val
1 year ago
B) Migrate the SAP HANA database to an EC2 High Memory instance with a larger number of available vCPUs
upvoted 0 times
Huey
1 year ago
A) Reduce the global_allocation_limit parameter to 1,120 GiB
upvoted 0 times
Natalya
1 year ago
A scale-out architecture could be a good long-term solution, but it might be overkill for this issue. Let's try the simpler options first.
upvoted 0 times
Lashaun
1 year ago
I agree, starting with these simpler options could help improve performance before considering a scale-out architecture.
upvoted 0 times
Bettyann
1 year ago
D) Modify the Amazon Elastic Block Store (Amazon EBS) volume type from General Purpose to Provisioned IOPS for all SAP HANA data volumes
upvoted 0 times
Gladys
1 year ago
B) Migrate the SAP HANA database to an EC2 High Memory instance with a larger number of available vCPUs
upvoted 0 times
Natalya
1 year ago
I don't know, man. Have you tried turning it off and on again? That usually works, right?
upvoted 0 times
Tiera
1 year ago
Reducing the global_allocation_limit seems like a risky move. We don't want to artificially limit the database's memory usage when it's not the root cause of the problem.
upvoted 0 times
Graciela
1 year ago
C) Move to a scale-out architecture for SAP HANA with at least three x1.16xlarge instances
upvoted 0 times
German
1 year ago
I agree, reducing the global_allocation_limit could cause more issues. Migrating to a High Memory instance seems like a better solution.
upvoted 0 times
Lina
1 year ago
D) Modify the Amazon Elastic Block Store (Amazon EBS) volume type from General Purpose to Provisioned IOPS for all SAP HANA data volumes
upvoted 0 times
Pa
1 year ago
I agree, reducing the global_allocation_limit could cause more issues. Migrating to a High Memory instance and changing the EBS volume type seem like safer options.
upvoted 0 times
Joesph
1 year ago
D) Modify the Amazon Elastic Block Store (Amazon EBS) volume type from General Purpose to Provisioned IOPS for all SAP HANA data volumes
upvoted 0 times
Marylyn
1 year ago
B) Migrate the SAP HANA database to an EC2 High Memory instance with a larger number of available vCPUs
upvoted 0 times
Mireya
1 year ago
B) Migrate the SAP HANA database to an EC2 High Memory instance with a larger number of available vCPUs
upvoted 0 times
Elfrieda
1 year ago
I'm not sure about option A. Reducing the global_allocation_limit parameter may not address the underlying issue of high CPU utilization.
upvoted 0 times
Vallie
1 year ago
I agree with Katheryn. Option C might also be beneficial to distribute the workload across multiple instances.
upvoted 0 times
Katheryn
1 year ago
I think option B could help by providing more vCPUs for better performance.
upvoted 0 times
Noel
1 year ago
Hmm, I think the solution is to migrate to a larger EC2 instance with more vCPUs. The CPU is maxing out, so we need more horsepower to handle the load.
upvoted 0 times
Matt
1 year ago
B) Migrate the SAP HANA database to an EC2 High Memory instance with a larger number of available vCPUs
upvoted 0 times
Daniel
1 year ago
A) Reduce the global_allocation_limit parameter to 1,120 GiB
upvoted 0 times
