Google Associate Data Practitioner Exam - Topic 2 Question 11 Discussion

Actual exam question for Google's Associate Data Practitioner exam
Question #: 11
Topic #: 2

You have an existing weekly Storage Transfer Service transfer job from Amazon S3 to a Nearline Cloud Storage bucket in Google Cloud. Each week, the job moves a large number of relatively small files. As the number of files to be transferred each week has grown over time, you are at risk of no longer completing the transfer in the allocated time frame. You need to decrease the total transfer time by replacing the process. Your solution should minimize costs where possible. What should you do?

A. Create a transfer job using the Google Cloud CLI, and specify the Standard storage class with the --custom-storage-class flag.
B. Create parallel transfer jobs using include and exclude prefixes.
C. Create a batch Dataflow job that is scheduled weekly to migrate the data from Amazon S3 to Cloud Storage.
D. Create an agent-based transfer job that utilizes multiple transfer agents on Compute Engine instances.

Suggested Answer: B

Comprehensive and Detailed Explanation:

Why B is correct: Creating parallel transfer jobs by using include and exclude prefixes lets you split the object namespace into smaller, non-overlapping chunks and transfer them concurrently. Running several managed Storage Transfer Service jobs in parallel can significantly increase aggregate throughput and reduce the overall transfer time without provisioning any additional infrastructure (a command-line sketch follows the explanation below).

Why the other options are incorrect:

A: Changing the storage class to Standard affects only storage pricing once the objects are in Cloud Storage; it does not improve transfer speed.

C: Dataflow is an unnecessarily complex (and costlier) solution for a straightforward file-transfer task that Storage Transfer Service already handles.

D: Agent-based transfer is suited to very large files or environments with network limitations, and the required Compute Engine instances add cost; it is not the right fit for a large number of small files.
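For illustration, here is a minimal sketch of option B using the gcloud transfer CLI. The bucket names, key prefixes, and schedule values are hypothetical placeholders, and the exact flags and accepted formats should be verified against the current Storage Transfer Service documentation; AWS credentials would also need to be supplied (for example, via a credentials file).

# Sketch: split one weekly S3 -> Nearline transfer into two parallel jobs,
# each covering a disjoint slice of the object key namespace.
# (Hypothetical bucket names and prefixes; AWS credentials supplied separately.)

# Job 1: keys beginning with data/0 through data/4
gcloud transfer jobs create s3://example-source-bucket gs://example-nearline-bucket \
  --description="weekly-s3-to-gcs-part-1" \
  --include-prefixes=data/0,data/1,data/2,data/3,data/4 \
  --schedule-repeats-every=7d

# Job 2: keys beginning with data/5 through data/9
gcloud transfer jobs create s3://example-source-bucket gs://example-nearline-bucket \
  --description="weekly-s3-to-gcs-part-2" \
  --include-prefixes=data/5,data/6,data/7,data/8,data/9 \
  --schedule-repeats-every=7d

Because each job only sees its own prefixes, the jobs can run at the same time without copying any object twice, and transferred objects inherit the destination bucket's default (Nearline) storage class, so no extra compute or storage cost is introduced.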


Contribute your Thoughts:

Aja
6 days ago
I’m leaning towards option D with the agent-based transfer job. It seems like using multiple agents could really speed things up, but I’m not confident about the cost implications.
upvoted 0 times
...
Viola
12 days ago
I think option C, the Dataflow job, could be a good choice since it can handle large data transfers efficiently. We practiced a similar scenario, but I’m a bit unclear on the scheduling part.
upvoted 0 times
...
Merri
17 days ago
I remember we discussed the importance of parallel processing in our last study session. Option B sounds like it could help with the transfer time, but I'm not entirely sure how to set up the include and exclude prefixes correctly.
upvoted 0 times
...
Dusti
23 days ago
This seems straightforward. I'd go with the Dataflow job option - it should be able to handle the large number of files and provide a reliable, scheduled transfer process.
upvoted 0 times
...
Ricarda
28 days ago
Okay, I think I have a strategy here. Creating parallel transfer jobs with include and exclude prefixes could help speed up the process while minimizing costs. I'll make sure to consider the other options as well.
upvoted 0 times
...
Azalee
1 month ago
Hmm, I'm a bit confused by the question. I'll need to review the details on the different storage classes and transfer options to determine the best approach.
upvoted 0 times
...
Asha
1 month ago
This looks like a tricky one. I'll need to think through the different options carefully to find the most efficient and cost-effective solution.
upvoted 0 times
...
Ruthann
5 months ago
A for 'Ah, the good old days when we had to do everything manually.' B for 'Bam, let's get this done with parallel power!'
upvoted 0 times
...
Becky
5 months ago
Just throw more servers at it, that'll fix it! Or, you know, use option D and let the machines do the work.
upvoted 0 times
Deangelo
3 months ago
I agree, let's go with option D and let the machines handle the heavy lifting.
upvoted 0 times
...
Gregoria
4 months ago
Throwing more servers at it might work temporarily, but option D is a more efficient and cost-effective solution in the long run.
upvoted 0 times
...
Elliot
4 months ago
Yeah, using multiple transfer agents on Compute Engine instances should definitely help decrease the total transfer time.
upvoted 0 times
...
Edison
4 months ago
Option D sounds like the best solution to speed up the transfer process.
upvoted 0 times
...
...
Viki
5 months ago
I'm leaning towards C. Dataflow seems like the right tool to handle this kind of large-scale data migration in a scalable way.
upvoted 0 times
Benedict
5 months ago
A) Create a transfer job using the Google Cloud CLI, and specify the Standard storage class with the --custom-storage-class flag.
upvoted 0 times
...
Adela
5 months ago
B) Create parallel transfer jobs using include and exclude prefixes.
upvoted 0 times
...
Janey
5 months ago
C) Create a batch Dataflow job that is scheduled weekly to migrate the data from Amazon S3 to Cloud Storage.
upvoted 0 times
...
...
Jolene
6 months ago
I'm leaning towards option C with a batch Dataflow job for a more automated solution.
upvoted 0 times
...
Rosann
6 months ago
Option D seems intriguing, but I'm not sure if the added cost of Compute Engine instances is worth it. Might be overkill for this use case.
upvoted 0 times
Gerald
5 months ago
C) Create a batch Dataflow job that is scheduled weekly to migrate the data from Amazon S3 to Cloud Storage.
upvoted 0 times
...
Tess
5 months ago
B) Create parallel transfer jobs using include and exclude prefixes.
upvoted 0 times
...
Merilyn
5 months ago
A) Create a transfer job using the Google Cloud CLI, and specify the Standard storage class with the --custom-storage-class flag.
upvoted 0 times
...
...
Melodie
6 months ago
I disagree, I believe option D with multiple transfer agents would be more efficient.
upvoted 0 times
...
Karima
6 months ago
I think option B is the way to go. Using parallel transfer jobs with prefixes sounds like the most efficient way to get this done.
upvoted 0 times
Judy
5 months ago
D) Create an agent-based transfer job that utilizes multiple transfer agents on Compute Engine instances.
upvoted 0 times
...
Carin
5 months ago
I agree, that seems like the best way to speed up the transfer process.
upvoted 0 times
...
Sheridan
5 months ago
B) Create parallel transfer jobs using include and exclude prefixes.
upvoted 0 times
...
...
Maybelle
6 months ago
I think option B sounds like a good idea to speed up the transfer.
upvoted 0 times
...
