You recently developed a deep learning model using Keras, and now you are experimenting with different training strategies. First, you trained the model using a single GPU, but the training process was too slow. Next, you distributed the training across 4 GPUs using tf.distribute.MirroredStrategy (with no other changes), but you did not observe a decrease in training time. What should you do?
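The setup described above can be sketched as follows. This is a minimal illustration, not the question's official answer: the layer sizes, batch size, and synthetic data are assumptions. One frequent cause of the behavior described is that the global batch size was left unchanged, so each of the 4 replicas processes only a quarter of the original batch and synchronization overhead offsets the parallelism; the sketch scales the batch by `num_replicas_in_sync` to keep per-GPU work constant.

```python
import tensorflow as tf

# MirroredStrategy replicates the model on every visible GPU
# (on a machine with no GPU it falls back to a single CPU replica).
strategy = tf.distribute.MirroredStrategy()

# Assumption: 64 was the batch size used in the single-GPU run.
# Scaling it by the replica count keeps the per-replica batch at 64.
per_replica_batch = 64
global_batch = per_replica_batch * strategy.num_replicas_in_sync

with strategy.scope():
    # Variables must be created inside the strategy scope so they
    # are mirrored (and gradient updates synchronized) across replicas.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# Illustrative synthetic data; model.fit shards each global batch
# across the replicas automatically.
x = tf.random.normal((1024, 32))
y = tf.random.normal((1024, 1))
model.fit(x, y, batch_size=global_batch, epochs=1, verbose=0)
```

With the larger global batch, it is also common to scale the learning rate accordingly and to make sure the input pipeline (e.g. `tf.data` with prefetching) can feed all replicas fast enough, since a starved pipeline produces the same "no speedup" symptom.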