
Google Professional Machine Learning Engineer Exam - Topic 5 Question 80 Discussion

Actual exam question for Google's Professional Machine Learning Engineer exam
Question #: 80
Topic #: 5

You recently developed a deep learning model using Keras, and now you are experimenting with different training strategies. First, you trained the model using a single GPU, but the training process was too slow. Next, you distributed the training across 4 GPUs using tf.distribute.MirroredStrategy (with no other changes), but you did not observe a decrease in training time. What should you do?

Suggested Answer: D
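
Several commenters below identify option D as increasing the batch size. That matches how tf.distribute.MirroredStrategy works: Keras treats the dataset's batch size as the global batch size and splits it across replicas, so reusing the single-GPU batch size leaves each of the 4 GPUs with a quarter of the work plus synchronization overhead. A minimal sketch of that change, with a toy dataset and model standing in for the originals from the question:

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
num_replicas = strategy.num_replicas_in_sync  # 4 GPUs in this scenario

PER_GPU_BATCH = 64                            # batch size that worked on one GPU
GLOBAL_BATCH = PER_GPU_BATCH * num_replicas   # larger global batch keeps every replica busy

# Toy stand-in for the real training data.
x = tf.random.normal((4096, 32))
y = tf.random.uniform((4096,), maxval=10, dtype=tf.int32)
train_ds = (tf.data.Dataset.from_tensor_slices((x, y))
            .shuffle(4096)
            .batch(GLOBAL_BATCH)
            .prefetch(tf.data.AUTOTUNE))

with strategy.scope():
    # Stand-in for the original Keras model.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )

model.fit(train_ds, epochs=2)
```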

Contribute your Thoughts:

Cherilyn
3 months ago
TPUs are cool, but not always necessary for speed.
upvoted 0 times
...
Denise
3 months ago
Definitely try distributing the dataset first.
upvoted 0 times
...
Nadine
4 months ago
I disagree, a custom training loop might be more effective.
upvoted 0 times
...
Britt
4 months ago
Surprised that 4 GPUs didn't help at all!
upvoted 0 times
...
Myra
4 months ago
Increasing the batch size can help speed things up!
upvoted 0 times
...
Anastacia
4 months ago
Using a TPU sounds interesting, but I think it might be overkill for this situation. I’d lean towards option A or D instead.
upvoted 0 times
...
Mary
5 months ago
I practiced a similar question, and I feel like a custom training loop could give more control, but I’m not confident that’s necessary here.
upvoted 0 times
...
Luz
5 months ago
I'm not entirely sure, but I think increasing the batch size could help utilize the GPUs better. That might be option D?
upvoted 0 times
...
Corazon
5 months ago
I remember reading that distributing the dataset can help with performance, so maybe option A is the right choice?
upvoted 0 times
...
Viva
5 months ago
This seems like a good opportunity to try out a custom training loop. I've heard that can really help optimize the training process, especially when working with multiple GPUs. I'm going to go with option B.
upvoted 0 times
...
Wynell
5 months ago
I'm a bit confused here. Distributing the dataset seems like a logical step, but the question says that didn't work. Maybe I should consider creating a custom training loop or using a TPU instead?
upvoted 0 times
...
Marge
5 months ago
Okay, let's think this through. If the training time didn't improve with 4 GPUs, then the issue might not be with the hardware. A custom training loop could help optimize the process, so I'm leaning towards option B.
upvoted 0 times
...
Raina
6 months ago
Hmm, this is a tricky one. I'm not sure why the training time didn't decrease with the 4 GPUs. Maybe I need to look into how to properly distribute the dataset.
upvoted 0 times
...
Sharmaine
11 months ago
Ah, the age-old dilemma of training a deep learning model - GPUs, TPUs, and batch sizes, oh my! I say we just throw the whole thing in the microwave and see what happens. *chuckles*
upvoted 0 times
...
Rebbecca
11 months ago
Well, this is a tough one. I'm leaning towards option A - distributing the dataset with tf.distribute.Strategy.experimental_distribute_dataset. Seems like the most straightforward approach to me.
upvoted 0 times
Trinidad
10 months ago
Let's give it a try and see if it makes a difference.
upvoted 0 times
...
Carol
10 months ago
I agree, distributing the dataset might be the key to improving training time.
upvoted 0 times
...
Verona
10 months ago
I think option A is a good choice. It could help speed up the training process.
upvoted 0 times
...
...
Nada
11 months ago
Oh, I bet option D is the way to go! Increasing the batch size might just do the trick. After all, who needs GPUs when you have big batches, right? *wink*
upvoted 0 times
Curt
10 months ago
I agree, let's give it a shot and see if it makes a difference.
upvoted 0 times
...
Dorathy
10 months ago
Yeah, that could be a good solution to try out.
upvoted 0 times
...
Lourdes
10 months ago
I think increasing the batch size might help speed up the training process.
upvoted 0 times
...
...
Micheal
11 months ago
Interesting question. I think option C looks promising - using a TPU with tf.distribute.TPUStrategy could really speed up the training process.
upvoted 0 times
Leoma
9 months ago
C) Use a TPU with tf.distribute.TPUStrategy.
upvoted 0 times
...
Chi
10 months ago
B) Create a custom training loop.
upvoted 0 times
...
Dominque
10 months ago
A) Distribute the dataset with tf.distribute.Strategy.experimental_distribute_dataset
upvoted 0 times
...
...
Omega
11 months ago
Hmm, I would say option B. Creating a custom training loop can help you fine-tune the distribution of the training process and potentially improve the performance.
upvoted 0 times
...
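
A rough sketch of the custom training loop Omega describes (option B), following the standard tf.distribute pattern. The tiny model and random data are placeholders, not part of the original question:

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
GLOBAL_BATCH = 64 * strategy.num_replicas_in_sync

# Placeholder data and model.
x = tf.random.normal((1024, 32))
y = tf.random.uniform((1024,), maxval=10, dtype=tf.int32)
dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(GLOBAL_BATCH)
dist_dataset = strategy.experimental_distribute_dataset(dataset)

with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
    optimizer = tf.keras.optimizers.Adam()
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(
        from_logits=True, reduction="none")

def train_step(inputs):
    features, labels = inputs
    with tf.GradientTape() as tape:
        logits = model(features, training=True)
        per_example_loss = loss_fn(labels, logits)
        # Average over the *global* batch so gradients are scaled correctly.
        loss = tf.nn.compute_average_loss(per_example_loss,
                                          global_batch_size=GLOBAL_BATCH)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

@tf.function
def distributed_train_step(inputs):
    per_replica_losses = strategy.run(train_step, args=(inputs,))
    return strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None)

for step, batch in enumerate(dist_dataset):
    loss = distributed_train_step(batch)
    if step % 10 == 0:
        print(f"step {step}: loss {float(loss):.4f}")
```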
Gail
12 months ago
I think we should also consider using a TPU with tf.distribute.TPUStrategy for faster training.
upvoted 0 times
...
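
For reference, a hedged sketch of the TPU route Gail mentions (option C). It assumes a Cloud TPU is actually reachable (the empty tpu="" argument is what a TPU VM or Colab runtime typically uses; other setups pass a TPU name or gRPC address), and the model and data are again toy placeholders:

```python
import tensorflow as tf

resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

GLOBAL_BATCH = 128 * strategy.num_replicas_in_sync

x = tf.random.normal((4096, 32))
y = tf.random.uniform((4096,), maxval=10, dtype=tf.int32)
train_ds = (tf.data.Dataset.from_tensor_slices((x, y))
            .batch(GLOBAL_BATCH, drop_remainder=True))  # TPUs want static shapes

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )

model.fit(train_ds, epochs=2)
```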
Pok
12 months ago
I agree with Ellsworth. That might help improve the training time.
upvoted 0 times
...
Ellsworth
12 months ago
I think we should try distributing the dataset with tf.distribute.Strategy.experimental_distribute_dataset.
upvoted 0 times
...
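
A small sketch of what Ellsworth suggests (option A): experimental_distribute_dataset splits each global batch into per-replica slices, which a custom loop then consumes via strategy.run. The range dataset here is purely illustrative:

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
GLOBAL_BATCH = 8 * strategy.num_replicas_in_sync

dataset = tf.data.Dataset.range(64).batch(GLOBAL_BATCH)
dist_dataset = strategy.experimental_distribute_dataset(dataset)

# Each element of dist_dataset is a per-replica value: every GPU gets its own
# slice of the global batch, so all replicas stay busy on different data.
for batch in dist_dataset:
    print(strategy.experimental_local_results(batch))
    break
```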
