
Salesforce Certified Heroku Architect (Plat-Arch-206) Exam - Topic 6 Question 28 Discussion

Actual exam question from the Salesforce Certified Heroku Architect (Plat-Arch-206) exam
Question #: 28
Topic #: 6

Universal Containers has recently experienced a higher volume of traffic on their mobile app hosted on Heroku. When Universal Containers was running 4 standard-2x dynos with 1 GB RAM each, they encountered multiple H12 ("request timeout") errors. The app never consumed more than 800 MB of RAM. They then switched to performance-m dynos, with 2.5 GB RAM, and set autoscaling to a maximum of 2 dynos. However, they still encountered H12 ("request timeout") errors.

What remediation should an Architect recommend to alleviate this problem?

Suggested Answer: A, C, E
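A note on H12: the Heroku router raises it when it waits more than 30 seconds for a web dyno to respond, so it signals slow requests rather than memory pressure, which is why larger dynos with more RAM did not make the errors go away. The headline remediation (the "move long-running tasks to worker dynos" option discussed below) is to enqueue slow work and respond immediately. A minimal sketch, assuming a Python/Flask app using RQ with a Redis add-on exposing REDIS_URL; generate_report is a hypothetical long-running task, not something named in the question:

import os

from flask import Flask, jsonify
from redis import Redis
from rq import Queue

from tasks import generate_report  # hypothetical long-running job

app = Flask(__name__)
queue = Queue(connection=Redis.from_url(os.environ["REDIS_URL"]))

@app.route("/reports", methods=["POST"])
def create_report():
    # Enqueue the slow work and return 202 immediately, keeping the web
    # dyno well inside the router's 30-second window that triggers H12.
    job = queue.enqueue(generate_report)
    return jsonify({"job_id": job.get_id()}), 202

With a Procfile declaring both process types (web: gunicorn app:app and worker: rq worker), the slow work runs on worker dynos that scale independently of the web tier.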

Contribute your Thoughts:

Laurel
3 months ago
Adding a logging add-on could help identify the real issue too!
Madalyn
3 months ago
Autoscaling might not be the best option here; manual scaling could work better.
Marci
4 months ago
Wait, why are they still getting H12 errors with performance-m dynos?
Jerilyn
4 months ago
I think upgrading to performance-L dynos could really help!
Carman
4 months ago
Moving long-running tasks to worker dynos sounds smart.
Shayne
4 months ago
I practiced a similar question where autoscaling was ineffective. I wonder if switching to manual scaling could stabilize the app, but I'm not convinced that's the best approach here.
Buck
4 months ago
Adding a logging add-on might help us understand the root cause of the timeouts, but I don't think it directly addresses the H12 errors.
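Building on Buck's point: the router already emits a structured line for every timeout (at=error code=H12 desc="Request timeout" method=... path=...), so a log drain from a logging add-on such as Papertrail can reveal which endpoints are timing out. A rough sketch, assuming the drained logs were exported to a local router.log file:

import re
from collections import Counter

# Heroku router timeout lines look like:
#   at=error code=H12 desc="Request timeout" method=GET path="/slow" ...
pattern = re.compile(r'code=H12 .*?path="(?P<path>[^"]+)"')

counts = Counter()
with open("router.log") as log:  # assumed export from a log drain
    for line in log:
        match = pattern.search(line)
        if match:
            counts[match.group("path")] += 1

# The endpoints at the top of this list are candidates for worker dynos.
for path, hits in counts.most_common(10):
    print(f"{hits:5d}  {path}")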
Dewitt
4 months ago
I'm not entirely sure, but I think upgrading to performance-L dynos could help with the memory issue. However, it seems like a more expensive solution.
Reiko
5 months ago
I remember studying H12 errors and how they relate to request timeouts. Moving long-running tasks to worker dynos seems like a solid option to reduce the load on web dynos.
Ulysses
5 months ago
Okay, I think I've got a strategy here. Since autoscaling isn't working, I'd recommend trying a manual scaling option of 2 dynos. That way, they can ensure they have enough resources without potentially over-provisioning. Seems like the most straightforward solution.
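If manual scaling is the route taken, the web formation can be pinned from the CLI (heroku ps:scale web=2:performance-m) or via the Platform API. A sketch of the API call, assuming an API token in HEROKU_API_KEY; the app name is a placeholder:

import os

import requests

APP_NAME = "uc-mobile-app"  # placeholder app name

response = requests.patch(
    f"https://api.heroku.com/apps/{APP_NAME}/formation/web",
    headers={
        "Accept": "application/vnd.heroku+json; version=3",
        "Authorization": f"Bearer {os.environ['HEROKU_API_KEY']}",
    },
    # A fixed quantity replaces autoscaling with a known dyno count.
    json={"quantity": 2, "size": "performance-m"},
    timeout=10,
)
response.raise_for_status()
print(response.json())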
Linette
5 months ago
Hmm, I'm not sure if adding a logging add-on would really solve the timeout issue. That seems more like a troubleshooting step rather than a direct remediation. I'd focus on the scaling and task management aspects first.
Samuel
5 months ago
I'm a bit confused by this question. The app is not consuming more than 800 MB of RAM, so I'm not sure why they're still encountering timeout errors. Upgrading to larger dynos might be overkill in this case.
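Samuel is right to be suspicious: H12 is triggered by the router's 30-second response window, not by memory, so RAM headroom is irrelevant to it. A common complementary step is to set the app server's own timeout just below the router's, so slow requests fail visibly in the app's logs rather than as opaque router errors. A sketch of a gunicorn.conf.py, with illustrative values:

# gunicorn.conf.py -- values are illustrative, not tuned for this app
timeout = 28       # kill a worker just under the router's 30 s H12 window
workers = 2        # placeholder; size to the dyno type in use
loglevel = "info"  # surface worker timeout messages in the dyno logs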
Josefa
5 months ago
This looks like a classic performance issue with the Heroku app. I think moving long-running tasks to worker dynos could be a good first step to try and alleviate the problem.
Freeman
9 months ago
Wait, so they're getting timeout errors on an app that's not even using all the RAM it has? Someone call the IT Helpdesk, we've got a real brain teaser here!
Helene
9 months ago
Replacing autoscaling with manual scaling? What is this, the Stone Age? Autoscaling is the way to go, even if it takes some trial and error to get it right.
Julio
8 months ago
D) Replace autoscaling with a manual scaling option of 2.
Kati
8 months ago
Autoscaling is definitely the way to go, but maybe upgrading to performance-L dynos could also help.
Lilli
8 months ago
C) Upgrade to performance-L dynos with 14 GB RAM.
Jennie
9 months ago
A) Move long-running tasks to worker dynos.
Tamekia
10 months ago
Upgrading to performance-L dynos with 14 GB RAM sounds like overkill. If the app never used more than 800 MB, that's a lot of extra resources.
Becky
10 months ago
Adding a logging add-on might help diagnose the issue, but it doesn't really address the underlying problem of timeouts.
Magda
8 months ago
C) Upgrade to performance-L dynos with 14 GB RAM.
Val
9 months ago
B) Add a logging add-on from the Elements marketplace.
Josephine
9 months ago
A) Move long-running tasks to worker dynos.
Latonia
10 months ago
Moving long-running tasks to worker dynos seems like the obvious choice here. That way, the main app can focus on handling incoming requests more efficiently.
Dahlia
10 months ago
I'm not sure; maybe upgrading to performance-L dynos with more RAM could also solve the issue.
Loreen
10 months ago
I agree with Tasia, that could help alleviate the H12 errors.
Tasia
10 months ago
I think we should move long-running tasks to worker dynos.
