
Google Exam Professional Machine Learning Engineer Topic 5 Question 80 Discussion

Actual exam question for Google's Professional Machine Learning Engineer exam
Question #: 80
Topic #: 5

You recently developed a deep learning model using Keras, and now you are experimenting with different training strategies. First, you trained the model using a single GPU, but the training process was too slow. Next, you distributed the training across 4 GPUs using tf.distribute.MirroredStrategy (with no other changes), but you did not observe a decrease in training time. What should you do?

A. Distribute the dataset with tf.distribute.Strategy.experimental_distribute_dataset
B. Create a custom training loop.
C. Use a TPU with tf.distribute.TPUStrategy.
D. Increase the batch size.
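For reference, the setup the question describes corresponds roughly to the sketch below: model building and compilation are wrapped in a MirroredStrategy scope while the input pipeline keeps the original single-GPU batch size. The data, model architecture, and batch size here are placeholders, not details from the question.

```python
import numpy as np
import tensorflow as tf

# Synthetic data stands in for the real training set (not part of the question).
x = np.random.rand(4096, 32).astype("float32")
y = np.random.rand(4096, 1).astype("float32")

# Placeholder: the batch size that was used for the single-GPU run.
BATCH_SIZE = 64
dataset = tf.data.Dataset.from_tensor_slices((x, y)).shuffle(4096).batch(BATCH_SIZE)

# Wrapping model creation in MirroredStrategy with "no other changes":
# the global batch size stays at 64, so each of the 4 GPUs processes only
# 16 examples per step while still paying full gradient-sync overhead.
strategy = tf.distribute.MirroredStrategy()
print("Number of replicas:", strategy.num_replicas_in_sync)

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

model.fit(dataset, epochs=2)
```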

Suggested Answer: D
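Why D: with MirroredStrategy, the batch size used by the input pipeline is the global batch size, which is split evenly across replicas. Reusing the single-GPU batch size means each of the 4 GPUs processes only a quarter of a batch per step while still paying full synchronization cost, so wall-clock time per epoch barely moves. A minimal sketch of the fix, with placeholder numbers (64 per replica; the learning-rate scaling shown in the comment is a common companion change, not something the question states):

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()

# Placeholder: the batch size that worked for the single-GPU run.
PER_REPLICA_BATCH_SIZE = 64

# Scale the *global* batch size by the replica count so every GPU still
# sees a full-sized batch each step (256 in total on 4 GPUs).
GLOBAL_BATCH_SIZE = PER_REPLICA_BATCH_SIZE * strategy.num_replicas_in_sync
print("Global batch size:", GLOBAL_BATCH_SIZE)

# Rebuild the input pipeline and model with the larger batch; the names below
# (x, y, build_model) are hypothetical stand-ins.
# dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(GLOBAL_BATCH_SIZE)
# with strategy.scope():
#     model = build_model()
#     # Optionally scale the learning rate along with the batch size.
#     model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")
# model.fit(dataset, epochs=10)
```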

Contribute your Thoughts:

Sharmaine
27 days ago
Ah, the age-old dilemma of training a deep learning model - GPUs, TPUs, and batch sizes, oh my! I say we just throw the whole thing in the microwave and see what happens. *chuckles*
upvoted 0 times
...
Rebbecca
28 days ago
Well, this is a tough one. I'm leaning towards option A - distributing the dataset with tf.distribute.Strategy.experimental_distribute_dataset. Seems like the most straightforward approach to me.
upvoted 0 times
Verona
9 days ago
I think option A is a good choice. It could help speed up the training process.
upvoted 0 times
...
...
Nada
1 month ago
Oh, I bet option D is the way to go! Increasing the batch size might just do the trick. After all, who needs GPUs when you have big batches, right? *wink*
upvoted 0 times
Yeah, that could be a good solution to try out.
upvoted 0 times
...
Lourdes
8 days ago
I think increasing the batch size might help speed up the training process.
upvoted 0 times
...
...
Micheal
1 month ago
Interesting question. I think option C looks promising - using a TPU with tf.distribute.TPUStrategy could really speed up the training process.
upvoted 0 times
Dominque
9 days ago
A) Distribute the dataset with tf.distribute.Strategy.experimental_distribute_dataset
upvoted 0 times
...
...
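On the TPU suggestion (option C): tf.distribute.TPUStrategy only applies when a Cloud TPU is actually attached (for example a TPU VM or a Colab-style TPU runtime), so it is a change of hardware rather than a fix for the existing 4-GPU setup. A rough sketch of what that would look like, assuming such a runtime is available and using a placeholder model:

```python
import tensorflow as tf

# Assumes a Cloud TPU is attached; tpu="local" targets a TPU VM, while an
# environment that exposes the TPU address automatically may use tpu="".
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="local")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

strategy = tf.distribute.TPUStrategy(resolver)
print("TPU replicas:", strategy.num_replicas_in_sync)

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
```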
Omega
1 months ago
Hmm, I would say option B. Creating a custom training loop can help you fine-tune the distribution of the training process and potentially improve the performance.
upvoted 0 times
...
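For completeness, options A and B usually go together: tf.distribute.Strategy.experimental_distribute_dataset is the API you reach for when writing a custom training loop under a strategy. The sketch below shows that pattern with placeholder data, model, and batch sizing; it gives finer control over the loop, but by itself it does not address the unchanged batch size, which is why D remains the suggested answer.

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
GLOBAL_BATCH_SIZE = 64 * strategy.num_replicas_in_sync  # placeholder sizing

with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    optimizer = tf.keras.optimizers.SGD(0.01)
    # Per-example losses; reduction across the global batch is done manually.
    loss_fn = tf.keras.losses.MeanSquaredError(reduction="none")

# Placeholder data in place of the real training set.
x = tf.random.normal((1024, 8))
y = tf.random.normal((1024, 1))
dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(GLOBAL_BATCH_SIZE)

# Option A's API: let the strategy shard each global batch across replicas.
dist_dataset = strategy.experimental_distribute_dataset(dataset)

def train_step(inputs):
    features, labels = inputs
    with tf.GradientTape() as tape:
        preds = model(features, training=True)
        # Average over the global batch so per-replica gradients sum correctly.
        loss = tf.nn.compute_average_loss(
            loss_fn(labels, preds), global_batch_size=GLOBAL_BATCH_SIZE)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

@tf.function
def distributed_train_step(inputs):
    per_replica_losses = strategy.run(train_step, args=(inputs,))
    return strategy.reduce(
        tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None)

for batch in dist_dataset:
    loss = distributed_train_step(batch)
```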
Gail
2 months ago
I think we should also consider using a TPU with tf.distribute.TPUStrategy for faster training.
upvoted 0 times
...
Pok
2 months ago
I agree with Ellsworth. That might help improve the training time.
upvoted 0 times
...
Ellsworth
2 months ago
I think we should try distributing the dataset with tf.distribute.Strategy.experimental_distribute_dataset.
upvoted 0 times
...
