
Salesforce Exam B2C Commerce Architect Topic 5 Question 63 Discussion

Actual exam question for Salesforce's B2C Commerce Architect exam
Question #: 63
Topic #: 5

A client receives multiple feeds from third parties on the same SFTP location:

* Product prices (sftp: prod/prices)

* Stores information (sftp: prod/stores)

* Product information (sftp: prod/catalog)

* Categories information (sftp: prod/marketing)

* Content (sftp: prod/marketing)

Some of the feeds are placed on the SFTP location multiple times a day, as the information is updated in the source system.

The Architect decides to have only two jobs:

* One that checks and downloads available feeds every hour

* One that imports the files from WebDAV once a day, before data replication, using the standard steps available in the Job Framework

Which design is correct for the import job, taking the step scope into consideration?
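
The hourly download job in this scenario is typically built as a small custom job step that connects to the third-party SFTP server and copies any new feed files into the instance's IMPEX folder, where the daily import job later picks them up. The sketch below is purely conceptual: the SftpLike interface only mimics the shape of dw.net.SFTPClient, and the folder layout and function name are illustrative assumptions, not part of the platform API.

```typescript
// Conceptual sketch of the hourly "check and download" job. Not the B2C
// Commerce API: SftpLike only mimics dw.net.SFTPClient, and the folder
// layout is an assumption based on the feed paths listed in the question.
interface RemoteFile {
  name: string;
  directory: boolean;
}

interface SftpLike {
  connect(host: string, user: string, password: string): boolean;
  cd(path: string): boolean;
  list(): RemoteFile[];
  getBinary(remoteName: string, localTarget: string): boolean;
  disconnect(): void;
}

// The distinct remote folders from the scenario.
const FEED_FOLDERS = ["prod/prices", "prod/stores", "prod/catalog", "prod/marketing"];

export function checkAndDownloadFeeds(sftp: SftpLike, impexRoot: string): string[] {
  const downloaded: string[] = [];
  for (const folder of FEED_FOLDERS) {
    if (!sftp.cd("/" + folder)) {
      continue; // feed folder not present (yet); try again next hour
    }
    for (const file of sftp.list()) {
      if (file.directory) {
        continue; // only plain files are feeds
      }
      // Mirror the remote folder structure under IMPEX so the daily import
      // job can map each folder to the matching standard import step.
      const target = `${impexRoot}/src/feeds/${folder}/${file.name}`;
      if (sftp.getBinary(file.name, target)) {
        downloaded.push(target);
      }
    }
  }
  sftp.disconnect();
  return downloaded;
}
```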

Suggested Answer: B

This design maximizes efficiency and concurrency. Running the product, store, price, and content imports in parallel flows lets the job handle multiple data streams simultaneously and reduces total processing time. Executing the category import followed by the reindex in a final sequential flow ensures that all new and updated information is imported before it is indexed and made available to the site, after the more frequently updated feeds have finished importing. This order respects the dependencies between steps and follows best practices for complex data workflows in B2C Commerce.
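
As a rough illustration of that ordering (not actual Job Framework configuration, which is assembled from the standard steps in Business Manager; the flow and step names below are assumptions), the job can be modelled as four sibling flows that run concurrently, followed by one final flow whose steps run in sequence:

```typescript
// Conceptual model of the import job's layout in option B. Flow and step
// names are illustrative; in the real job they map to standard Job Framework
// steps configured in Business Manager with the appropriate scope.
interface Flow {
  name: string;
  steps: string[]; // steps inside one flow always run in sequence
}

const parallelFlows: Flow[] = [
  { name: "ImportProducts", steps: ["import product catalog"] },
  { name: "ImportStores", steps: ["import stores"] },
  { name: "ImportPrices", steps: ["import price books"] },
  { name: "ImportContent", steps: ["import content library"] },
];

const finalFlow: Flow = {
  name: "CategoriesAndReindex",
  // Categories are imported only after the parallel imports have finished,
  // and the reindex runs last so it sees all newly imported data.
  steps: ["import categories", "reindex"],
};

async function runFlow(flow: Flow): Promise<void> {
  for (const step of flow.steps) {
    console.log(`[${flow.name}] ${step}`);
  }
}

async function runJob(): Promise<void> {
  await Promise.all(parallelFlows.map(runFlow)); // sibling flows run in parallel
  await runFlow(finalFlow);                      // final flow starts only afterwards
}

runJob();
```

The key point the model captures is that the reindex is a single step at the very end of the only sequential flow, so it never races against any of the imports it depends on.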


Contribute your Thoughts:

Mammie
16 days ago
Option B looks good to me. Handling the categories separately in a sequential flow seems like a sensible approach, especially if it's a more complex operation.
upvoted 0 times
...
Elena
29 days ago
I think the correct design for the import job is option A.
upvoted 0 times
...
Myrtie
29 days ago
The four parallel flows for the key data entities make sense, but I'm not sure about the last sequential flow handling categories and reindex. Shouldn't categories be part of the parallel flows as well?
upvoted 0 times
Lenna
2 days ago
B) - four sibling flows execute steps in parallel: import products, stores, prices, content - last flow executes steps in sequence: import categories, reindex
upvoted 0 times
...
Jarod
15 days ago
A) - four sibling flows execute steps in parallel: import products, stores, prices, content - fifth flow executes: import categories - last flow executes steps in sequence: reindex
upvoted 0 times
...
...
