You recently developed a deep learning model using Keras, and now you are experimenting with different training strategies. First, you trained the model using a single GPU, but the training process was too slow. Next, you distributed the training across 4 GPUs using tf.distribute.MirroredStrategy (with no other changes), but you did not observe a decrease in training time. What should you do?
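A common explanation for this symptom is that in Keras the `batch_size` passed to `fit` is the *global* batch size: under `tf.distribute.MirroredStrategy`, that batch is split across replicas, so with no other changes each of the 4 GPUs processes only a quarter of the original batch and sits underutilized. The usual remedy is to scale the global batch size by the number of replicas. A minimal sketch of the arithmetic, assuming a hypothetical single-GPU batch size of 64:

```python
# Assumed setup: strategy = tf.distribute.MirroredStrategy() over 4 GPUs.
# Keras treats batch_size as the GLOBAL batch, split across replicas.
num_replicas = 4                  # strategy.num_replicas_in_sync on 4 GPUs
single_gpu_batch = 64             # hypothetical batch size from the 1-GPU run

# Unchanged batch size: each GPU sees only 16 examples per step,
# so per-step time barely drops and launch/sync overhead dominates.
per_replica_batch = single_gpu_batch // num_replicas

# Fix: scale the global batch so every replica keeps its original load
# (often with a matching learning-rate scale).
scaled_global_batch = single_gpu_batch * num_replicas

print(per_replica_batch, scaled_global_batch)
```

With the scaled batch, each replica again processes 64 examples per step, so the same number of epochs takes roughly a quarter of the steps.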