
Salesforce Certified MuleSoft Platform Architect (Mule-Arch-201) Exam - Topic 7 Question 35 Discussion

Actual exam question for Salesforce's Salesforce Certified MuleSoft Platform Architect (Mule-Arch-201) exam
Question #: 35
Topic #: 7

A Rate Limiting policy is applied to an API implementation to protect the back-end system. Recently, there have been surges in demand that cause some API client POST requests to the API implementation to be rejected with policy-related errors, causing delays and complications for the API clients.

How should the API policies that are applied to the API implementation be changed to reduce the frequency of errors returned to API clients, while still protecting the back-end system?

Suggested Answer: D

When managing high traffic to an API, especially with POST requests, it is crucial to ensure the API's policies both protect the back-end systems and provide a smooth client experience. Here's the approach to reducing errors:

Rate Limiting Policy: This policy enforces a limit on the number of requests within a defined time period. However, rate limiting alone may cause clients to hit limits during demand surges, leading to errors.
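The rejection behavior described above can be illustrated with a minimal sketch. This is not MuleSoft's implementation, just a hypothetical fixed-window limiter showing why a burst that exceeds the limit produces policy errors for the overflow requests:

```python
import time

class FixedWindowRateLimiter:
    """Minimal fixed-window rate limiter: allows `limit` requests per
    `window_seconds`; the rest are refused, as an API policy would
    refuse them with a policy-related error (e.g. HTTP 429)."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.window_start = time.monotonic()
        self.count = 0

    def allow(self):
        now = time.monotonic()
        # Start a fresh window once the current one has elapsed.
        if now - self.window_start >= self.window:
            self.window_start = now
            self.count = 0
        if self.count < self.limit:
            self.count += 1
            return True
        return False  # over the limit: client sees a policy error

# A burst of 8 requests against a limit of 5 per second:
limiter = FixedWindowRateLimiter(limit=5, window_seconds=1.0)
results = [limiter.allow() for _ in range(8)]
print(results.count(True), results.count(False))  # 5 allowed, 3 rejected
```

With rate limiting alone, every request above the limit in a window is an immediate error, which is exactly the symptom described in the question.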

Adding an SLA-based Spike Control Policy:

Spike Control is designed to handle sudden increases in traffic by smoothing out bursts of requests, which is particularly useful during high-demand periods.

By configuring SLA-based Spike Control, you can define thresholds for specific client tiers. For instance, premium clients might have higher limits or more flexibility in traffic bursts than standard clients.
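The tiered queuing idea can be sketched as follows. The tier names, limits, and queue sizes below are illustrative assumptions, not values from any real MuleSoft configuration; the point is that over-limit requests are delayed (queued) per tier rather than rejected outright:

```python
from collections import deque

# Hypothetical SLA tiers (illustrative values only): premium clients
# get a higher request limit and a deeper wait queue than standard.
SLA_TIERS = {
    "premium":  {"limit_per_window": 100, "queue_size": 50},
    "standard": {"limit_per_window": 20,  "queue_size": 10},
}

class SpikeController:
    """Per-tier spike control sketch: requests over the limit are
    queued (delayed) instead of rejected; only queue overflow is
    actually refused."""

    def __init__(self, tier):
        cfg = SLA_TIERS[tier]
        self.limit = cfg["limit_per_window"]
        self.queue = deque(maxlen=cfg["queue_size"])
        self.processed = 0

    def handle(self, request):
        if self.processed < self.limit:
            self.processed += 1
            return "processed"
        if len(self.queue) < self.queue.maxlen:
            self.queue.append(request)
            return "queued"    # delayed, not an error to the client
        return "rejected"      # only when the wait queue is also full

# A burst of 35 requests from a "standard" client (limit 20, queue 10):
ctl = SpikeController("standard")
outcomes = [ctl.handle(i) for i in range(35)]
print(outcomes.count("processed"), outcomes.count("queued"),
      outcomes.count("rejected"))  # 20 processed, 10 queued, 5 rejected
```

Compared with the rate-limiting sketch, most of the burst overflow is smoothed out as delays instead of errors, which is the behavior the suggested answer relies on.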

Why Option D is Correct:

Keeping the Rate Limiting policy continues to provide baseline protection for the back-end.

Adding the SLA-based Spike Control policy allows for differentiated control, where requests are queued or delayed during bursts rather than outright rejected. This approach significantly reduces error responses to clients while still controlling overall traffic.

Explanation of Incorrect Options:

Option A (adding Client ID Enforcement) would not reduce errors related to traffic surges.

Option B (HTTP Caching) is not applicable as caching is generally ineffective for non-idempotent requests like POST.

Option C (only Spike Control without Rate Limiting) may leave the back-end system vulnerable to sustained high traffic levels, reducing protection.

Reference: For more information on configuring Rate Limiting and SLA-based Spike Control policies, refer to the MuleSoft documentation on API Policies and Rate Limiting.


Contribute your Thoughts:

Leonor
3 days ago
Adding Client ID Enforcement could help manage traffic better.
upvoted 0 times
...
Annmarie
8 days ago
Wait, removing Rate Limiting? That sounds risky!
upvoted 0 times
...
Quentin
13 days ago
I think option D makes the most sense!
upvoted 0 times
...
Aliza
18 days ago
Keeping the Rate Limiting policy is crucial for backend protection.
upvoted 0 times
...
Regenia
24 days ago
I'm with the crowd on this one. Option D is the way to go. Protecting the back-end while keeping the clients happy, that's the goal.
upvoted 0 times
...
Tran
29 days ago
Option D is the clear winner here. Gotta love those SLA-based policies, am I right?
upvoted 0 times
...
Venita
1 month ago
I’m a bit confused about the best option here. I feel like HTTP Caching could help reduce load, but I’m not sure if it addresses the rejection errors directly.
upvoted 0 times
...
Rodolfo
1 month ago
I practiced a similar question where we had to balance client needs with back-end protection. Keeping the Rate Limiting policy and adding SLA-based Spike Control sounds like a solid approach.
upvoted 0 times
...
Nathan
1 month ago
I think removing the Rate Limiting policy might be risky, especially since it protects the back-end. Maybe the Spike Control policy could help, but I need to double-check its effectiveness.
upvoted 0 times
...
Cheryl
2 months ago
This is a tricky one, but I think I've got a handle on it. Option D stands out to me as the best choice - keeping the Rate Limiting policy and adding an SLA-based Spike Control policy. That should help smooth out those demand surges while still protecting the back-end.
upvoted 0 times
...
Teresita
2 months ago
I'm a bit confused by the differences between the policy options. I'll need to review the details of each one to decide which makes the most sense here. Maybe I'll start by ruling out the options that don't seem to directly address the issue.
upvoted 0 times
...
Malinda
2 months ago
Okay, let's see here. The key seems to be finding a way to handle those surges in demand without just rejecting requests. I'm thinking option C, removing the Rate Limiting and adding a Spike Control policy, could be a good approach to try.
upvoted 0 times
...
Timothy
2 months ago
I think option D is the best. It balances protection and client needs.
upvoted 0 times
...
Jolanda
2 months ago
Option D seems like the best choice to me. SLA-based Spike Control policy should help manage the surges in demand while still protecting the back-end system.
upvoted 0 times
...
Terrilyn
3 months ago
I remember studying rate limiting and how it can help manage traffic, but I'm not sure if adding a Client ID Enforcement policy would really solve the issue.
upvoted 0 times
...
Asha
3 months ago
Haha, I bet the API clients are really feeling the "rate limiting" pain. Option D is definitely the way to fix that.
upvoted 0 times
...
Ardella
3 months ago
I agree with Jolanda. Option D is the way to go. Keeping the Rate Limiting policy and adding the SLA-based Spike Control policy is a smart move.
upvoted 0 times
...
Twana
3 months ago
Hmm, the question is asking about how to reduce errors while still protecting the back-end. I'm leaning towards option D - keeping the Rate Limiting policy and adding an SLA-based Spike Control policy. That seems like it could strike a good balance.
upvoted 0 times
...
Lettie
3 months ago
This looks like a tricky one. I'm not sure if I fully understand the implications of the different policy options. I'll need to think it through carefully.
upvoted 0 times
...
