
MuleSoft Exam MCIA-Level-1-Maintenance Topic 7 Question 25 Discussion

Actual exam question for MuleSoft's MCIA-Level-1-Maintenance exam
Question #: 25
Topic #: 7

An organization is successfully using API-led connectivity. However, as the application network grows, the manually performed tasks to publish, share, discover, register, apply policies to, and deploy an API are becoming repetitive, driving the organization to automate this process using an efficient CI/CD pipeline. Considering Anypoint Platform's capabilities, how should the organization approach automating its API lifecycle?

Suggested Answer: C
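For reference, the usual automation approach is to script the build and deployment in a CI job using the Mule Maven plugin (with Exchange publication and policy application handled by the plugin configuration or platform APIs). Below is a minimal dry-run sketch: the commands are only composed and echoed, not executed. The `-Dmule.env` property and the `Sandbox` environment name are illustrative assumptions, not values from the question; credentials and the plugin's deployment settings are assumed to live in the project's `pom.xml` and CI environment.

```shell
# Hedged CI/CD sketch (dry run): compose the Maven commands a pipeline stage
# would run for a Mule application. Nothing here contacts Anypoint Platform.
set -e

MULE_ENV="Sandbox"   # hypothetical target environment name

# Stage 1: build and unit-test the Mule application package.
BUILD_CMD="mvn clean package"

# Stage 2: deploy via the Mule Maven plugin; -DmuleDeploy triggers the
# plugin's deployment during the Maven deploy phase.
DEPLOY_CMD="mvn deploy -DmuleDeploy -Dmule.env=${MULE_ENV}"

# Dry run: print the composed commands instead of executing them.
echo "$BUILD_CMD"
echo "$DEPLOY_CMD"
```

In a real pipeline the `echo` lines would be replaced by the commands themselves, with one such stage per environment promotion step.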

Contribute your Thoughts:

Christiane
1 year ago
That makes sense. It ensures reliability for processing files while optimizing performance for batch job scope.
upvoted 0 times
...
Nan
1 year ago
I believe option C is the most suitable: using CloudHub persistent VM queues for the FTPS files and disabling the VM queue for the batch job scope.
upvoted 0 times
...
Christiane
1 year ago
So, which option do you think is the best for this scenario?
upvoted 0 times
...
Nan
1 year ago
I agree, we need to consider the reliability requirements for FTPS files.
upvoted 0 times
...
Christiane
1 year ago
I think VM queues should be configured differently for FTPS file processing and batch job scope.
upvoted 0 times
...
Leatha
1 year ago
I suggest using CloudHub persistent queues for the FTPS files and disabling VM queues for the batch job scope
upvoted 0 times
...
Arthur
1 year ago
That could be a good point, but we need to consider the batch job scope as well
upvoted 0 times
...
Malinda
1 year ago
But wouldn't using CloudHub persistent VM queues be better for FTPS file processing?
upvoted 0 times
...
Vallie
1 year ago
I agree with Arthur, it seems like the most reliable option
upvoted 0 times
...
Arthur
1 year ago
I think we should use CloudHub persistent queues for FTPS file processing
upvoted 0 times
Kenneth
1 year ago
Yes, there is no need to configure VM queues for the batch job scope, as it uses the worker's disk by default
upvoted 0 times
...
Kenneth
1 year ago
But what about the batch job scope? Should we disable VM queues for that?
upvoted 0 times
...
Kenneth
1 year ago
I agree, CloudHub persistent queues would be the best option for FTPS file processing
upvoted 0 times
...
...
Theola
1 year ago
Exactly, Lorrine. Persistent queues will make sure the files are not lost if there are any issues during processing.
upvoted 0 times
...
Lorrine
1 year ago
Okay, that makes sense. Now, for the FTPS file processing, it seems we need to use persistent VM queues on CloudHub to ensure reliable processing.
upvoted 0 times
...
Paz
1 year ago
You're right, Lonna. The question mentions that the batch job scope uses the worker's disk by default, so we don't need to configure any VM queues for that part.
upvoted 0 times
...
Lonna
1 year ago
Hmm, I'm not sure about the batch job part. Isn't the default behavior for Mule batch jobs to use the worker's disk for VM queueing? We might need to double-check that.
upvoted 0 times
...
Nancey
1 year ago
I agree, this is a tricky one. We need to think about how to configure the VM queues to handle the FTPS file processing and the batch job scope effectively.
upvoted 0 times
...
Kyoko
1 year ago
This question seems to be testing our understanding of Mule application design and deployment on Cloudhub. The requirements mention FTPS file processing and a batch job, so we need to consider the reliability and performance aspects for each.
upvoted 0 times
...
