Multi-GPU Mirrored Strategy
In this ungraded lab, you'll go through how to set up a Multi-GPU Mirrored Strategy. The lab environment only has a CPU, but we placed the code here in case you want to try this out for yourself on a multi-GPU device.
Notes:
- If you are running this on Coursera, you'll see a warning about the absence of GPU devices.
- If you are running this in Colab, make sure you have selected your runtime to be GPU.
- In both cases, you'll see that only one device is available. One device is sufficient for helping you understand these distribution strategies.
Imports
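A minimal sketch of the imports, assuming TensorFlow 2.x with `tensorflow_datasets` for the input pipeline:

```python
import tensorflow as tf
import tensorflow_datasets as tfds
```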
Setup Distribution Strategy
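One way to set this up (a sketch, not necessarily the lab's exact cell): with no arguments, `tf.distribute.MirroredStrategy` mirrors the model across every GPU it detects, and reports a single replica on a CPU-only machine.

```python
# With no arguments, MirroredStrategy uses all GPUs it can find;
# on a CPU-only machine it reports a single replica.
strategy = tf.distribute.MirroredStrategy()
print(f'Number of devices: {strategy.num_replicas_in_sync}')
```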
Prepare the Data
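A sketch of the input pipeline, assuming MNIST loaded through `tensorflow_datasets` (the dataset choice here is illustrative). The distribution-specific steps are scaling the batch size by the number of replicas and wrapping each dataset with `strategy.experimental_distribute_dataset` so every replica receives its own slice of each batch.

```python
BUFFER_SIZE = 10000
BATCH_SIZE_PER_REPLICA = 64
# The global batch size scales with the number of replicas in sync.
GLOBAL_BATCH_SIZE = BATCH_SIZE_PER_REPLICA * strategy.num_replicas_in_sync

def scale(image, label):
    # Normalize pixel values to [0, 1].
    image = tf.cast(image, tf.float32) / 255.0
    return image, label

datasets, info = tfds.load('mnist', with_info=True, as_supervised=True)
train_dataset = (datasets['train']
                 .map(scale).cache()
                 .shuffle(BUFFER_SIZE).batch(GLOBAL_BATCH_SIZE))
test_dataset = datasets['test'].map(scale).batch(GLOBAL_BATCH_SIZE)

# Distribute the datasets so each replica gets its own shard of every batch.
train_dist_dataset = strategy.experimental_distribute_dataset(train_dataset)
test_dist_dataset = strategy.experimental_distribute_dataset(test_dataset)
```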
Define the Model
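A small convolutional network serves as an illustrative model; the exact architecture is not what matters for the distribution strategy.

```python
def create_model():
    # A compact CNN for 28x28 grayscale images, emitting 10-class logits.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation='relu'),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(10)
    ])
```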
Configure custom training
Instead of `model.compile()`, we're going to do custom training, so let's do that within a strategy scope.
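A sketch of what that scope can contain, building on the names assumed above. The loss is created with `Reduction.NONE` so we can average the per-example losses over the global batch size ourselves via `tf.nn.compute_average_loss`; creating the model, optimizer, and metrics inside `strategy.scope()` ensures their variables are mirrored across replicas.

```python
with strategy.scope():
    # Reduction.NONE: keep per-example losses so we can divide by the
    # global batch size, not the per-replica batch size.
    loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
        from_logits=True,
        reduction=tf.keras.losses.Reduction.NONE)

    def compute_loss(labels, predictions):
        per_example_loss = loss_object(labels, predictions)
        return tf.nn.compute_average_loss(
            per_example_loss, global_batch_size=GLOBAL_BATCH_SIZE)

    test_loss = tf.keras.metrics.Mean(name='test_loss')
    train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')
    test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='test_accuracy')

    model = create_model()
    optimizer = tf.keras.optimizers.Adam()
```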
Train and Test Steps Functions
Let's define a few utilities to facilitate the training.
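One way to sketch these utilities: a per-replica `train_step` and `test_step`, plus `tf.function`-wrapped dispatchers that run them on every replica via `strategy.run` and reduce the per-replica losses.

```python
def train_step(inputs):
    images, labels = inputs
    with tf.GradientTape() as tape:
        predictions = model(images, training=True)
        loss = compute_loss(labels, predictions)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    train_accuracy.update_state(labels, predictions)
    return loss

def test_step(inputs):
    images, labels = inputs
    predictions = model(images, training=False)
    test_loss.update_state(loss_object(labels, predictions))
    test_accuracy.update_state(labels, predictions)

@tf.function
def distributed_train_step(dataset_inputs):
    # Run the step on each replica, then sum the per-replica losses
    # (each is already scaled by the global batch size).
    per_replica_losses = strategy.run(train_step, args=(dataset_inputs,))
    return strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None)

@tf.function
def distributed_test_step(dataset_inputs):
    strategy.run(test_step, args=(dataset_inputs,))
```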
Training Loop
We can now start training the model.
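A sketch of the loop, assuming the distributed datasets and step functions above. Since `distributed_train_step` already returns a loss averaged over the global batch, the epoch loss is simply the mean over batches.

```python
EPOCHS = 10
for epoch in range(EPOCHS):
    total_loss, num_batches = 0.0, 0
    for batch in train_dist_dataset:
        total_loss += distributed_train_step(batch)
        num_batches += 1
    train_loss = total_loss / num_batches

    for batch in test_dist_dataset:
        distributed_test_step(batch)

    print(f'Epoch {epoch + 1}, '
          f'Loss: {train_loss}, '
          f'Accuracy: {train_accuracy.result() * 100}, '
          f'Test Loss: {test_loss.result()}, '
          f'Test Accuracy: {test_accuracy.result() * 100}')

    # Reset metric state between epochs.
    test_loss.reset_states()
    train_accuracy.reset_states()
    test_accuracy.reset_states()
```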