An organization wants to make sure only known partners can invoke the organization's APIs. To achieve this security goal, the organization wants to enforce a Client ID Enforcement policy in API Manager so that only registered partner applications can invoke the organization's APIs. In what type of API implementation does MuleSoft recommend adding an API proxy to enforce the Client ID Enforcement policy, rather than embedding the policy directly in the application's JVM?
Correct Answer: A Non-Mule application
*****************************************
>> All types of Mule applications (Mule 3, Mule 4, with APIkit, with custom Java code, etc.) running on Mule runtimes support embedded policy enforcement.
>> The only option that does not support embedded policy enforcement, and therefore requires an API proxy, is a non-Mule application.
So, Non-Mule application is the right answer.
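For context, here is a minimal sketch of how a Mule 4 application enables embedded policy enforcement (the apiId and flow name below are illustrative placeholders, not values from the question):

<mule xmlns="http://www.mulesoft.org/schema/mule/core"
      xmlns:api-gateway="http://www.mulesoft.org/schema/mule/api-gateway"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="
        http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
        http://www.mulesoft.org/schema/mule/api-gateway http://www.mulesoft.org/schema/mule/api-gateway/current/mule-api-gateway.xsd">

  <!-- Links this application to an API instance in API Manager.    -->
  <!-- Once linked, policies such as Client ID Enforcement are      -->
  <!-- enforced inside this application's own JVM; no proxy needed. -->
  <api-gateway:autodiscovery apiId="${api.id}" flowRef="order-api-main-flow" />
</mule>

A non-Mule application has no equivalent mechanism, which is why an API proxy must be deployed in front of it to enforce the policy.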
The implementation of a Process API must change.
What is a valid approach that minimizes the impact of this change on API clients?
Correct Answer: Implement required changes to the Process API implementation so that, whenever possible, the Process API's RAML definition remains unchanged.
*****************************************
Key requirement in the question is:
>> Approach that minimizes the impact of this change on API clients
Based on the above:
>> Updating the RAML definition could impact API clients if the changes introduce anything mandatory on the client side, so it should be avoided unless truly necessary.
>> Implementing the changes as a completely different API and redirecting clients with a 3xx status code is poor design and heavily impacts the API clients.
>> Organizations cannot simply postpone required changes until all API consumers confirm they are ready to migrate to a new Process API or API version; that is unrealistic.
The best way to handle such changes is to implement them in the API implementation so that, whenever possible, the API's RAML definition remains unchanged.
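When a RAML change is unavoidable, client impact can still be minimized by keeping the change backward compatible. A hedged RAML sketch (type and property names are invented for this example): adding an optional property lets the implementation return new data without breaking existing clients:

#%RAML 1.0
title: Process API
version: v1

types:
  Order:
    properties:
      orderId: string
      status: string
      # New optional property (trailing "?"). Existing clients that
      # do not know about it continue to work unchanged.
      trackingNumber?: string

/orders/{orderId}:
  get:
    responses:
      200:
        body:
          application/json:
            type: Order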
The asset version 2.0.0 of the Order API is successfully published in Exchange and configured in API Manager, with the Autodiscovery API ID correctly linked to the API implementation. A new GET method is added to the existing API specification and, after the update, the asset version of the Order API is 2.0.1.
What happens to the Autodiscovery API ID when the new asset version is updated in API Manager?
Understanding API Autodiscovery in MuleSoft:
API Autodiscovery links an API implementation in Anypoint Platform with its configuration in API Manager. This is controlled by the API ID which is set in the API Autodiscovery element in the Mule application.
The API ID remains consistent across minor updates to the API asset version in Exchange (e.g., from 2.0.0 to 2.0.1) as long as it is the same API.
Effect of Asset Version Update on API Autodiscovery:
When the asset version is updated (e.g., from 2.0.0 to 2.0.1), the API ID remains the same. Therefore, no changes are needed in the Autodiscovery configuration within the Mule application. The Autodiscovery will continue to link the API implementation to the latest version in API Manager.
Evaluating the Options:
Option A: Incorrect, as the API ID does not automatically change with minor asset version updates.
Option B: Incorrect, as the API ID remains the same, so no update is needed in the API implementation.
Option C (Correct Answer): The API ID does not change, so no changes are necessary in the API implementation for the new asset version.
Option D: Incorrect, as there is no need to update the API implementation in the Autodiscovery global element for minor version changes.
Conclusion:
Option C is the correct answer, as the API ID remains unchanged with minor version updates, and no changes are needed in the API Autodiscovery configuration.
Refer to MuleSoft documentation on API Autodiscovery and version management for more details.
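For illustration, a minimal sketch of the relevant Autodiscovery configuration (the apiId value and flow name are placeholders): the apiId refers to the API instance in API Manager, not to the Exchange asset version, so publishing 2.0.1 requires no change here:

<!-- apiId identifies the API instance in API Manager; it is not   -->
<!-- tied to the Exchange asset version (2.0.0 vs. 2.0.1), so this -->
<!-- element stays exactly as it was after the minor update.       -->
<api-gateway:autodiscovery apiId="18927461" flowRef="order-api-main-flow" />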
A Rate Limiting policy is applied to an API implementation to protect the back-end system. Recently, there have been surges in demand that cause some API client
POST requests to the API implementation to be rejected with policy-related errors, causing delays and complications to the API clients.
How should the API policies that are applied to the API implementation be changed to reduce the frequency of errors returned to API clients, while still protecting the back-end
system?
When managing high traffic to an API, especially with POST requests, it is crucial to ensure the API's policies both protect the back-end systems and provide a smooth client experience. Here's the approach to reducing errors:
Rate Limiting Policy: This policy enforces a limit on the number of requests within a defined time period. However, rate limiting alone may cause clients to hit limits during demand surges, leading to errors.
Adding an SLA-based Spike Control Policy:
Spike Control is designed to handle sudden increases in traffic by smoothing out bursts of requests, which is particularly useful during high-demand periods.
By configuring SLA-based Spike Control, you can define thresholds for specific client tiers. For instance, premium clients might have higher limits or more flexibility in traffic bursts than standard clients.
Why Option D is Correct:
Keeping the Rate Limiting policy continues to provide baseline protection for the back-end.
Adding the SLA-based Spike Control policy allows for differentiated control, where requests are queued or delayed during bursts rather than outright rejected. This approach significantly reduces error responses to clients while still controlling overall traffic.
Explanation of Incorrect Options:
Option A (adding Client ID Enforcement) would not reduce errors related to traffic surges.
Option B (HTTP Caching) is not applicable as caching is generally ineffective for non-idempotent requests like POST.
Option C (only Spike Control without Rate Limiting) may leave the back-end system vulnerable to sustained high traffic levels, reducing protection.
Reference: For more information on configuring Rate Limiting and SLA-based Spike Control policies, refer to the MuleSoft documentation on API policies and rate limiting.
A retail company is using an Order API to accept new orders. The Order API uses a JMS queue to submit orders to a backend order management service. The normal load for orders is being handled using two (2) CloudHub workers, each configured with 0.2 vCore. The CPU load of each CloudHub worker normally runs well below 70%. However, several times during the year the Order API gets four times (4x) the average number of orders. This causes the CloudHub worker CPU load to exceed 90% and the order submission time to exceed 30 seconds. The cause, however, is NOT the backend order management service, which still responds fast enough to meet the response SLA for the Order API. What is the MOST resource-efficient way to configure the Mule application's CloudHub deployment to help the company cope with this performance challenge?
Correct Answer: Use a horizontal CloudHub autoscaling policy that triggers on CPU utilization greater than 70%
*****************************************
The scenario clearly states that the usual traffic throughout the year is handled well by the existing worker configuration, with CPU running well below 70%. The problem occurs only occasionally, when there is a spike in the number of incoming orders.
Based on the above, we need neither a permanent increase in the size of each worker nor a permanent increase in the number of workers; outside of those occasional spikes, the extra resources would sit idle and be wasted.
That leaves two options: a horizontal CloudHub autoscaling policy that automatically increases the number of workers, or a vertical CloudHub autoscaling policy that automatically increases the vCore size of each worker.
Here, we need to take two things into consideration:
1. CPU
2. Order Submission Rate to JMS Queue
>> From the CPU perspective, both options (horizontal and vertical scaling) solve the issue; both bring usage back below 90%.
>> However, with vertical scaling, the application is still load-balanced across only two workers, so from the order submission rate perspective there may be little improvement in the request processing rate or the rate of order submission to the JMS queue. Throughput stays roughly the same; only CPU utilization comes down.
>> With horizontal scaling, new workers are spawned and load-balanced alongside the existing ones, increasing throughput. This addresses both CPU utilization and the order submission rate.
Hence, a horizontal CloudHub autoscaling policy is the best answer.