Welcome to Pass4Success


Google Exam Professional Machine Learning Engineer Topic 5 Question 80 Discussion

Actual exam question for Google's Professional Machine Learning Engineer exam
Question #: 80
Topic #: 5

You recently developed a deep learning model using Keras, and now you are experimenting with different training strategies. First, you trained the model using a single GPU, but the training process was too slow. Next, you distributed the training across 4 GPUs using tf.distribute.MirroredStrategy (with no other changes), but you did not observe a decrease in training time. What should you do?

Suggested Answer: D
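The answer options are not reproduced on this page, but a common reason MirroredStrategy shows no speedup is that the global batch size was left unchanged: each step's batch is split across the replicas, so with the single-GPU batch size every GPU is under-utilised and per-step synchronisation overhead dominates. A minimal sketch, assuming the fix is scaling the global batch size by the number of replicas (the layer sizes and batch size below are illustrative, not from the question):

```python
import tensorflow as tf

# MirroredStrategy uses all visible GPUs; it falls back to one CPU
# replica if no GPU is present, so this sketch runs anywhere.
strategy = tf.distribute.MirroredStrategy()

PER_REPLICA_BATCH = 64  # the batch size that worked on a single GPU
# Scale the global batch so each replica still processes a full batch.
global_batch = PER_REPLICA_BATCH * strategy.num_replicas_in_sync

with strategy.scope():  # variables created here are mirrored per replica
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(10,)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# Illustrative random data; Model.fit splits each global batch
# across the replicas automatically.
x = tf.random.normal((512, 10))
y = tf.random.normal((512, 1))
model.fit(x, y, batch_size=global_batch, epochs=1, verbose=0)
```

When the learning rate was tuned for the single-GPU batch size, it is often scaled alongside the batch size as well.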

Contribute your Thoughts:

Gail
6 days ago
I think we should also consider using a TPU with tf.distribute.TPUStrategy for faster training.
upvoted 0 times
Pok
10 days ago
I agree with Ellsworth. That might help improve the training time.
upvoted 0 times
Ellsworth
13 days ago
I think we should try distributing the dataset with tf.distribute.Strategy.experimental_distribute_dataset.
upvoted 0 times
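For context on the suggestion above: `experimental_distribute_dataset` matters mainly for custom training loops, since `Model.fit` already distributes a `tf.data` pipeline for you. A minimal sketch of how it is used (model, shapes, and batch size are illustrative):

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
GLOBAL_BATCH = 64 * strategy.num_replicas_in_sync

# Batch by the GLOBAL batch size; the strategy splits each batch
# across replicas when the dataset is distributed.
dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal((256, 10)), tf.random.normal((256, 1)))
).batch(GLOBAL_BATCH).prefetch(tf.data.AUTOTUNE)
dist_dataset = strategy.experimental_distribute_dataset(dataset)

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    optimizer = tf.keras.optimizers.SGD()
    # Keep per-example losses; averaging is done manually below so the
    # division uses the global batch size, not the per-replica size.
    loss_fn = tf.keras.losses.MeanSquaredError(reduction="none")

@tf.function
def train_step(inputs):
    x, y = inputs
    with tf.GradientTape() as tape:
        per_example_loss = loss_fn(y, model(x, training=True))
        loss = tf.nn.compute_average_loss(
            per_example_loss, global_batch_size=GLOBAL_BATCH)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

final_loss = None
for batch in dist_dataset:
    per_replica_loss = strategy.run(train_step, args=(batch,))
    final_loss = strategy.reduce(
        tf.distribute.ReduceOp.MEAN, per_replica_loss, axis=None)
```

Note that distributing the dataset alone does not speed anything up if each replica's batch stays small; it pairs with scaling the global batch size.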
