A retail company is using an Order API to accept new orders. The Order API uses a JMS queue to submit orders to a backend order management service. The normal load for orders is being handled using two (2) CloudHub workers, each configured with 0.2 vCore. The CPU load of each CloudHub worker normally runs well below 70%. However, several times during the year the Order API gets four times (4x) the average number of orders. This causes the CloudHub worker CPU load to exceed 90% and the order submission time to exceed 30 seconds. The cause, however, is NOT the backend order management service, which still responds fast enough to meet the response SLA for the Order API. What is the MOST resource-efficient way to configure the Mule application's CloudHub deployment to help the company cope with this performance challenge?
Correct Answer: Use a horizontal CloudHub autoscaling policy that triggers on CPU utilization greater than 70%
*****************************************
The scenario clearly states that the usual traffic throughout the year is handled well by the existing worker configuration, with CPU running well below 70%. The problem occurs only occasionally, when there is a spike in the number of incoming orders.
So, based on the above, we need neither to permanently increase the size of each worker nor to permanently increase the number of workers. That would be wasteful, because outside those occasional spikes the extra resources would sit idle.
This leaves two options: use a horizontal CloudHub autoscaling policy to automatically increase the number of workers, or use a vertical CloudHub autoscaling policy to automatically increase the vCore size of each worker.
Here, we need to take two things into consideration:
1. CPU
2. Order Submission Rate to JMS Queue
>> From the CPU perspective, both options (horizontal and vertical scaling) solve the issue; either one brings utilization back below 90%.
>> However, if we go with vertical scaling, the application is still load balanced across only two workers, so from the order submission rate perspective there may not be much improvement in the incoming request processing rate or the submission rate to the JMS queue. Throughput stays roughly the same as before; only CPU utilization comes down.
>> If we go with horizontal scaling, however, new workers are spawned and added to the load balancer, lending an extra hand that increases throughput. This addresses both the CPU utilization and the order submission rate.
Hence, a horizontal CloudHub autoscaling policy is the correct and most resource-efficient answer.
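>> For illustration, such a horizontal autoscaling policy is just a small piece of configuration: a metric (CPU), a threshold (70%), and worker bounds. The Python sketch below posts a hypothetical policy definition to the CloudHub API; the endpoint path, field names, and headers are assumptions for illustration only, not the exact CloudHub API contract.

```python
import requests

ANYPOINT = "https://anypoint.mulesoft.com"
TOKEN = "<bearer-token>"      # placeholder Anypoint access token
ORG_ID = "<organization-id>"  # placeholder
ENV_ID = "<environment-id>"   # placeholder

# Hypothetical payload for a horizontal autoscaling policy: scale OUT by
# one worker when CPU stays above 70% for 5 minutes, scale back IN when
# CPU stays below 30%. Keep at least the 2 workers that handle normal
# load, and cap at 8 workers to absorb the seasonal 4x spike.
policy = {
    "name": "order-api-scale-out",
    "scaleType": "WORKER_COUNT",  # horizontal: vary worker count, not vCores
    "metric": "CPU",
    "scaleUp": {"threshold": 70, "periodMinutes": 5, "workers": 1},
    "scaleDown": {"threshold": 30, "periodMinutes": 5, "workers": 1},
    "minWorkers": 2,
    "maxWorkers": 8,
}

resp = requests.post(
    f"{ANYPOINT}/cloudhub/api/organizations/{ORG_ID}/autoscalepolicies",
    json=policy,
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "X-ANYPNT-ENV-ID": ENV_ID,  # assumed environment header
    },
)
resp.raise_for_status()
print("Created policy:", resp.json())
```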
An organization has built an application network following the API-led connectivity approach recommended by MuleSoft. To protect the application network against attacks from malicious external API clients, the organization plans to apply JSON Threat Protection policies.
To which API-led connectivity layer should the JSON Threat Protection policies most commonly be applied?
Correct Answer: The Experience layer
Understanding JSON Threat Protection Policies:
JSON Threat Protection policies are used to protect APIs from attacks that exploit JSON payloads, such as oversized payloads, deeply nested objects, and excessive array elements. This helps prevent Denial of Service (DoS) attacks and other malicious payload-related threats.
These policies are typically applied to safeguard APIs that are directly exposed to external clients, where the risk of receiving malicious payloads is highest.
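For context, the policy enforces structural limits on incoming JSON before it reaches the API implementation. The snippet below sketches representative limits as a Python dictionary; the parameter names follow the general shape of the Mule JSON Threat Protection policy configuration, but exact names vary by policy version, so treat this as illustrative rather than a definitive configuration.

```python
# Representative structural limits enforced by a JSON Threat Protection
# policy (parameter names are indicative; check your policy version).
json_threat_protection_config = {
    "maxContainerDepth": 10,         # reject deeply nested objects/arrays
    "maxStringValueLength": 1024,    # reject oversized string values
    "maxObjectEntryNameLength": 64,  # limit key-name length
    "maxObjectEntryCount": 100,      # limit number of keys per object
    "maxArrayElementCount": 500,     # limit number of elements per array
}
```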
API-led Connectivity Layers:
Experience Layer: This layer is designed to expose APIs to end-users or external API clients, often acting as the interface that interacts with users or applications.
Process Layer: This layer is used for orchestration and aggregation of data from various System APIs, typically operating within a trusted environment and not directly exposed to external clients.
System Layer: This layer provides access to backend systems and databases, often within the organization's secure environment and not directly accessible to external clients.
Evaluating the Options:
Option A (All layers): While JSON Threat Protection can technically be applied to all layers, it is most commonly applied at the Experience layer, where APIs are exposed to external traffic and are more vulnerable to attacks.
Option B (System layer): The System layer is generally not exposed to external clients directly, so JSON Threat Protection is less critical here.
Option C (Process layer): Similar to the System layer, the Process layer is typically internal and not exposed directly to external clients, so JSON Threat Protection is less commonly applied.
Option D (Correct Answer): The Experience layer is the correct answer because it is the layer that directly interacts with external clients, making it the primary target for malicious payloads. Applying JSON Threat Protection here effectively protects the application network from external threats.
Conclusion:
Option D is the correct answer, as the Experience layer is the most common layer for applying JSON Threat Protection policies to protect against external attacks.
For further reference, consult MuleSoft's documentation on API security policies and best practices for securing APIs at the Experience layer.
An API implementation is deployed to CloudHub.
What conditions can be alerted on using the default Anypoint Platform functionality, where the alert conditions depend on the end-to-end request processing of the API implementation?
Correct Answer: When the response time of API invocations exceeds a threshold
*****************************************
>> Alerts can be set up for all the given options using default Anypoint Platform functionality.
>> However, the question asks for an alert whose condition depends on the end-to-end request processing of the API implementation.
>> The alert on 'response time' is the only one that requires the request to be processed end to end before the platform can determine whether the threshold has been exceeded.
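>> As a rough illustration, such an alert combines a metric (response time), a threshold, and an evaluation window. The Python sketch below posts a hypothetical alert definition; the endpoint path and field names are assumptions for illustration and do not reflect the exact Anypoint alerts API.

```python
import requests

ANYPOINT = "https://anypoint.mulesoft.com"
TOKEN = "<bearer-token>"  # placeholder
ORG_ID, ENV_ID, API_ID = "<org-id>", "<env-id>", "<api-instance-id>"

# Hypothetical alert: fire when the average response time of API
# invocations exceeds 30 seconds over a 5-minute window. Response time
# is only known once a request has been processed end to end, which is
# exactly the condition the question is pointing at.
alert = {
    "name": "order-api-slow-responses",
    "type": "response-time",
    "threshold": 30000,    # milliseconds
    "periodMinutes": 5,
    "severity": "Critical",
    "recipients": ["ops-team@example.com"],
}

resp = requests.post(
    f"{ANYPOINT}/apimanager/api/v1/organizations/{ORG_ID}"
    f"/environments/{ENV_ID}/apis/{API_ID}/alerts",
    json=alert,
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
```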
When can CloudHub Object Store v2 be used?
CloudHub Object Store v2 is a managed key-value store provided by MuleSoft to support various use cases where temporary data storage is required. Here's why Option D is correct:
Key Length Support: Object Store v2 allows storage of keys with a length of up to 300 characters, making it suitable for applications needing flexible and descriptive keys.
Limitations on Size:
Object Store v2 is not intended for large payload storage; each stored value should stay below roughly 10 MB. Payloads of 15 MB or more exceed this guidance and are better suited to a file storage system or database.
Option B is incorrect because storing payloads above 15 MB exceeds Object Store v2's intended usage.
Key-Value Limits: Object Store v2 is designed for moderate, transient storage needs and does not offer unlimited storage, so Option A is incorrect.
Backward Compatibility: Object Store v2 does not support Mule 4 applications running Object Store v1; Option C is incorrect because Object Store v1 and v2 are distinct stores.
Reference: For more on CloudHub Object Store v2, refer to the MuleSoft documentation on Object Store limits and configuration.
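To make these limits concrete, here is a small illustrative Python helper that enforces the constraints described above (300-character keys, values below roughly 10 MB) before handing data to a store. The `store.put` call is a placeholder for whatever Object Store v2 client you use, not a real API.

```python
MAX_KEY_CHARS = 300                  # Object Store v2 key length limit
MAX_VALUE_BYTES = 10 * 1024 * 1024   # keep each value below ~10 MB

def safe_put(store, key: str, value: bytes) -> None:
    """Validate a key/value pair against Object Store v2 limits.

    `store` is a placeholder for an Object Store v2 client; only the
    validation logic reflects the limits discussed above.
    """
    if len(key) > MAX_KEY_CHARS:
        raise ValueError(f"key is {len(key)} chars; limit is {MAX_KEY_CHARS}")
    if len(value) > MAX_VALUE_BYTES:
        raise ValueError("value exceeds ~10 MB; use a file store or database")
    store.put(key, value)  # hypothetical client call
```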
What is a key performance indicator (KPI) that measures the success of a typical C4E that is immediately apparent in responses from the Anypoint Platform APIs?
Correct Answer: The number of API specifications in RAML or OAS format published to Anypoint Exchange
*****************************************
>> The success of a C4E is measured by its contribution to the number of reusable assets it has helped build and publish to Anypoint Exchange.
>> It is NOT measured by factors such as the number of outages, manual vs. CI/CD deployments, or publicly accessible HTTP endpoints.
>> The Anypoint Platform APIs make it easy to query how many RAML/OAS assets have been published to Anypoint Exchange, so the count of assets returned in the response directly reflects how successful a C4E team has been.
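>> As an illustration, this KPI can be pulled with a single query against Anypoint Exchange. The Python sketch below uses an assumed Exchange assets search endpoint with assumed query parameters; the exact path, parameters, and asset type names may differ, so treat it as a sketch rather than a definitive client.

```python
import requests

ANYPOINT = "https://anypoint.mulesoft.com"
TOKEN = "<bearer-token>"      # placeholder
ORG_ID = "<organization-id>"  # placeholder

# Assumed Exchange search endpoint: list REST API assets (RAML/OAS
# specifications) published by this organization.
resp = requests.get(
    f"{ANYPOINT}/exchange/api/v2/assets",
    params={"organizationId": ORG_ID, "types": "rest-api", "limit": 100},
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
assets = resp.json()

# The KPI is immediately apparent in the response: the number of
# published API specifications (first page only in this sketch).
print(f"Published API specifications: {len(assets)}")
```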