You recently developed a deep learning model using Keras, and now you are experimenting with different training strategies. First, you trained the model using a single GPU, but the training process was too slow. Next, you distributed the training across 4 GPUs using tf.distribute.MirroredStrategy (with no other changes), but you did not observe a decrease in training time. What should you do?
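The question hinges on how MirroredStrategy splits work across replicas: each per-step batch is divided among the GPUs, so if the batch size is left unchanged, each GPU processes only a fraction of the original batch and per-step overhead (data loading, gradient all-reduce) dominates. A commonly cited remedy is to scale the global batch size by the number of replicas. Below is a minimal sketch assuming TensorFlow 2.x; the layer sizes and batch numbers are illustrative, not from the question.

```python
import tensorflow as tf

# MirroredStrategy replicates the model onto every visible GPU and
# splits each incoming batch across the replicas. With no GPUs it
# falls back to a single device, so this sketch also runs on CPU.
strategy = tf.distribute.MirroredStrategy()

# Scale the global batch size by the replica count so each GPU still
# receives a full per-replica batch (64 here is a hypothetical value).
PER_REPLICA_BATCH = 64
global_batch = PER_REPLICA_BATCH * strategy.num_replicas_in_sync

# Model variables and optimizer state must be created inside the scope
# so they are mirrored across replicas.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(8,)),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# Pass the *global* batch size when batching the dataset; Keras shards
# each batch across the replicas automatically, e.g.:
#   model.fit(dataset.batch(global_batch), epochs=...)
```

If scaling the batch size alone does not help, the input pipeline itself is often the bottleneck (e.g. it cannot feed 4 GPUs fast enough), which is worth profiling separately.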