
Google Associate Data Practitioner Exam: Topic 2, Question 11 Discussion

Actual exam question for Google's Associate Data Practitioner exam
Question #: 11
Topic #: 2

You have an existing weekly Storage Transfer Service transfer job from Amazon S3 to a Nearline Cloud Storage bucket in Google Cloud. Each week, the job moves a large number of relatively small files. As the number of files to be transferred each week has grown over time, you are at risk of no longer completing the transfer in the allocated time frame. You need to decrease the total transfer time by replacing the process. Your solution should minimize costs where possible. What should you do?

A) Create a transfer job using the Google Cloud CLI, and specify the Standard storage class with the --custom-storage-class flag.
B) Create parallel transfer jobs using include and exclude prefixes.
C) Create a batch Dataflow job that is scheduled weekly to migrate the data from Amazon S3 to Cloud Storage.
D) Create an agent-based transfer job that utilizes multiple transfer agents on Compute Engine instances.

Suggested Answer: B

Comprehensive and Detailed Explanation:

Why B is correct: Creating parallel transfer jobs by using include and exclude prefixes splits the data into smaller, non-overlapping sets that are transferred concurrently. This can significantly increase throughput and reduce the overall transfer time, while staying on the managed Storage Transfer Service, so no additional infrastructure costs are incurred.

Why the other options are incorrect:

A: Specifying the Standard storage class only changes how the data is stored; it does not improve transfer speed, and Standard storage is more expensive than Nearline for infrequently accessed data.

C: Dataflow is an unnecessarily complex and more costly solution for a simple file-transfer task.

D: Agent-based transfers are intended for cases such as very large files or constrained networks and require running transfer agents on Compute Engine instances, which adds cost; they are not a good fit for a large number of small files.
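
For illustration, here is a minimal sketch of how parallel prefix-based jobs could be set up with the Google Cloud CLI. The bucket names, prefixes, and credentials file below are placeholders, and the exact flags should be verified against the current gcloud transfer jobs create reference:

    # Sketch only: split the weekly S3-to-Nearline transfer into two jobs,
    # each covering a disjoint key prefix, so they run in parallel.
    gcloud transfer jobs create s3://example-source-bucket gs://example-nearline-bucket \
        --source-creds-file=aws-creds.json \
        --include-prefixes=data/2024/ \
        --schedule-repeats-every=7d

    gcloud transfer jobs create s3://example-source-bucket gs://example-nearline-bucket \
        --source-creds-file=aws-creds.json \
        --include-prefixes=data/2025/ \
        --schedule-repeats-every=7d

Because each job covers a non-overlapping prefix, the weekly window is shared across concurrent jobs instead of one serial transfer, and no extra Compute Engine resources are needed.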


Contribute your Thoughts:

Ruthann
1 month ago
A for 'Ah, the good old days when we had to do everything manually.' B for 'Bam, let's get this done with parallel power!'
Becky
1 month ago
Just throw more servers at it, that'll fix it! Or, you know, use option D and let the machines do the work.
Edison
6 days ago
Option D sounds like the best solution to speed up the transfer process.
Viki
1 month ago
I'm leaning towards C. Dataflow seems like the right tool to handle this kind of large-scale data migration in a scalable way.
Benedict
20 days ago
A) Create a transfer job using the Google Cloud CLI, and specify the Standard storage class with the --custom-storage-class flag.
Adela
23 days ago
B) Create parallel transfer jobs using include and exclude prefixes.
Janey
25 days ago
C) Create a batch Dataflow job that is scheduled weekly to migrate the data from Amazon S3 to Cloud Storage.
Jolene
2 months ago
I'm leaning towards option C with a batch Dataflow job for a more automated solution.
Rosann
2 months ago
Option D seems intriguing, but I'm not sure if the added cost of Compute Engine instances is worth it. Might be overkill for this use case.
Gerald
1 month ago
C) Create a batch Dataflow job that is scheduled weekly to migrate the data from Amazon S3 to Cloud Storage.
Tess
2 months ago
B) Create parallel transfer jobs using include and exclude prefixes.
Merilyn
2 months ago
A) Create a transfer job using the Google Cloud CLI, and specify the Standard storage class with the --custom-storage-class flag.
Melodie
2 months ago
I disagree, I believe option D with multiple transfer agents would be more efficient.
Karima
3 months ago
I think option B is the way to go. Using parallel transfer jobs with prefixes sounds like the most efficient way to get this done.
Judy
1 month ago
D) Create an agent-based transfer job that utilizes multiple transfer agents on Compute Engine instances.
Carin
1 month ago
I agree, that seems like the best way to speed up the transfer process.
Sheridan
2 months ago
B) Create parallel transfer jobs using include and exclude prefixes.
Maybelle
3 months ago
I think option B sounds like a good idea to speed up the transfer.
