Horse or Human? In-graph training loop Assignment
This assignment lets you practice training a Keras model on the horses_or_humans dataset, with the entire training process performed in graph mode. The training process includes these steps:
loading batches
calculating gradients
updating parameters
calculating validation accuracy
repeating the loop until convergence
Setup
Import TensorFlow 2.0:
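A minimal setup sketch; the notebook's provided setup cell may import additional helpers. matplotlib is assumed here because it is used for plotting in the Evaluation section later.

```python
import tensorflow as tf
import tensorflow_datasets as tfds
import matplotlib.pyplot as plt
```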
Prepare the dataset
Load the horses_or_humans dataset, using 80% for the training set and 20% for the test set.
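One possible way to do this with the tfds split API is sketched below. The exact split strings are an assumption, and the provided cell may also slice out the validation_examples set that is used later.

```python
# Sketch: 80/20 split of the 'train' split via tfds slicing (exact splits are an assumption)
splits = ['train[:80%]', 'train[80%:]']
(train_examples, test_examples), info = tfds.load(
    'horses_or_humans', split=splits, as_supervised=True, with_info=True)

# total examples in the original 'train' split (used later as a shuffle buffer size)
num_examples = info.splits['train'].num_examples
```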
Pre-process an image (please complete this section)
You'll define a mapping function that resizes each image to 224 by 224 and normalizes the pixel values to the range 0 to 1. Note that raw pixel values range from 0 to 255.
You'll use the following function: tf.image.resize, passing in the (height, width) as a tuple (or list).
To normalize, divide by a floating-point value so that the pixel range changes from [0, 255] to [0, 1].
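One way to complete this, following the hints above (the name map_fn comes from the instructions; the body is a sketch):

```python
def map_fn(img, label):
    image_height = 224
    image_width = 224
    # resize the image to (height, width) = (224, 224)
    img = tf.image.resize(img, (image_height, image_width))
    # normalize pixel values from [0, 255] to [0, 1]
    img /= 255.0
    return img, label
```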
Expected Output:
Apply pre-processing to the datasets (please complete this section)
Apply the following steps to the training_examples:
Apply the map_fn to the training_examples.
Shuffle the training data using .shuffle(buffer_size=) and set the buffer size to the number of examples.
Group these into batches using .batch() and set the batch size given by the parameter.
Hint: You can look at how validation_examples and test_examples are pre-processed to get a sense of how to chain together multiple function calls.
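A sketch of the chained calls, assuming a hypothetical helper named prepare_dataset and the map_fn defined above; the handling of the validation and test sets mirrors what the hint describes:

```python
def prepare_dataset(train_examples, validation_examples, test_examples,
                    num_examples, map_fn, batch_size):
    # map -> shuffle (buffer = number of examples) -> batch for the training set
    train_ds = train_examples.map(map_fn).shuffle(buffer_size=num_examples).batch(batch_size)
    # the validation and test sets are only mapped and batched
    valid_ds = validation_examples.map(map_fn).batch(batch_size)
    test_ds = test_examples.map(map_fn).batch(batch_size)
    return train_ds, valid_ds, test_ds
```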
Expected Output:
Define the model
Define optimizer: (please complete these sections)
Define the Adam optimizer that is in the tf.keras.optimizers module.
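A minimal sketch; the function name set_adam_optimizer is an assumption:

```python
def set_adam_optimizer():
    # Adam optimizer with default settings
    return tf.keras.optimizers.Adam()
```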
Expected Output:
Define the loss function (please complete this section)
Define the loss function as the sparse categorical cross entropy that's in the tf.keras.losses module. Use the same function for both training and validation.
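A sketch, assuming the function returns one loss object for training and one for validation (the function name is an assumption; train_loss and val_loss are the names used later in the notebook):

```python
def set_sparse_cat_crossentropy_loss():
    # the same sparse categorical cross-entropy loss for training and validation
    train_loss = tf.keras.losses.SparseCategoricalCrossentropy()
    val_loss = tf.keras.losses.SparseCategoricalCrossentropy()
    return train_loss, val_loss
```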
Expected Output:
Define the accuracy function (please complete this section)
Define the accuracy function as the sparse categorical accuracy that's contained in the tf.keras.metrics module. Use the same function for both training and validation.
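A sketch mirroring the loss section above (the function name is an assumption; train_accuracy and val_accuracy are the names used later in the notebook):

```python
def set_sparse_cat_crossentropy_accuracy():
    # the same sparse categorical accuracy metric for training and validation
    train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
    val_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
    return train_accuracy, val_accuracy
```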
Expected Output:
Call the three functions that you defined to set the optimizer, loss and accuracy
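Using the sketch functions above, the calls could look like this:

```python
optimizer = set_adam_optimizer()
train_loss, val_loss = set_sparse_cat_crossentropy_loss()
train_accuracy, val_accuracy = set_sparse_cat_crossentropy_accuracy()
```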
Define the training loop (please complete this section)
In the training loop:
Get the model predictions: use the model, passing in the input x.
Get the training loss: call train_loss, passing in the true y and the predicted y.
Calculate the gradient of the loss with respect to the model's variables: use tape.gradient and pass in the loss and the model's trainable_variables.
Optimize the model variables using the gradients: call optimizer.apply_gradients and pass in a zip() of the two lists: the gradients and the model's trainable_variables.
Calculate accuracy: call train_accuracy, passing in the true y and the predicted y.
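A sketch of train_one_step that follows the steps above; the argument order is an assumption:

```python
def train_one_step(model, optimizer, x, y, train_loss, train_accuracy):
    with tf.GradientTape() as tape:
        # get the model predictions for the input x
        predictions = model(x)
        # training loss between the true y and the predicted y
        loss = train_loss(y, predictions)
    # gradient of the loss with respect to the model's trainable variables
    grads = tape.gradient(loss, model.trainable_variables)
    # apply the gradients to update the trainable variables
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    # update the running training accuracy
    train_accuracy(y, predictions)
    return loss
```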
Expected Output:
You will see a Tensor with the same shape and dtype. The value might be different.
Define the 'train' function (please complete this section)
You'll first loop through the training batches to train the model. (Please complete these sections)
The train function will use a for loop to iteratively call the train_one_step function that you just defined.
You'll use tf.print to print the step number, loss, and train_accuracy.result() at each step. Remember to use tf.print when you plan to generate autograph code.
Next, you'll loop through the batches of the validation set to calculate the validation loss and validation accuracy. (This code is provided for you.) At each iteration of the loop:
Use the model to predict on x, where x is the input from the validation set.
Use val_loss to calculate the validation loss between the true validation 'y' and predicted y.
Use val_accuracy to calculate the accuracy of the predicted y compared to the true y.
Finally, you'll print the validation loss and accuracy using tf.print. (Please complete this section)
Print the final loss, which is the validation loss calculated by the last loop through the validation dataset.
Also print the val_accuracy.result().
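A sketch of train, decorated with tf.function so the whole loop runs in graph mode; the exact signature, epoch handling, and print formatting are assumptions:

```python
@tf.function
def train(model, optimizer, epochs, train_ds, train_loss, train_accuracy,
          valid_ds, val_loss, val_accuracy):
    step = 0
    loss = 0.0
    for epoch in range(epochs):
        # training pass: one call to train_one_step per batch
        for x, y in train_ds:
            step += 1
            loss = train_one_step(model, optimizer, x, y, train_loss, train_accuracy)
            tf.print('Step', step, ': train loss', loss,
                     '; train accuracy', train_accuracy.result())
        # validation pass (provided in the notebook)
        for x, y in valid_ds:
            y_pred = model(x)
            loss = val_loss(y, y_pred)
            val_accuracy(y, y_pred)
        # final validation loss from the last batch, plus the running accuracy
        tf.print('val loss', loss, '; val accuracy', val_accuracy.result())
    return step
```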
HINT: If you submit your assignment and see this error in your stderr output:
Please check your calls to train_accuracy and val_accuracy to make sure that you pass in the true and predicted values in the correct order (check the documentation to verify the order of parameters).
Run the train function to train your model! You should see the loss generally decreasing and the accuracy increasing.
Note: Please let the training finish before submitting and do not modify the next cell. It is required for grading. This will take around 5 minutes to run.
Evaluation
You can now see how your model performs on test images. First, let's load the test dataset and generate predictions:
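A sketch of how the predictions could be collected; the variable names are assumptions, and test_ds is the batched test set prepared earlier:

```python
test_imgs = []
test_labels = []
predictions = []
for images, labels in test_ds:
    preds = model(images)
    # convert per-class scores to a predicted class index per image
    preds = tf.argmax(preds, axis=-1)
    predictions.extend(preds.numpy())
    test_imgs.extend(images.numpy())
    test_labels.extend(labels.numpy())
```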
Let's define a utility function for plotting an image and its prediction.
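One possible version of such a utility; the class-name order and the function signature are assumptions:

```python
class_names = ['horse', 'human']  # assumed label order for horses_or_humans

def plot_image(i, predictions, true_labels, img):
    # show the image without axis ticks
    plt.imshow(img)
    plt.xticks([])
    plt.yticks([])
    # blue label if the prediction matches the true class, red otherwise
    color = 'blue' if predictions[i] == true_labels[i] else 'red'
    plt.xlabel('pred: {} | true: {}'.format(
        class_names[predictions[i]], class_names[true_labels[i]]), color=color)
```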
Plot the result of a single image
Choose an index and display the model's prediction for that image.
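For example, using the sketch utility above (the index value is arbitrary):

```python
index = 8  # any valid index into the test set
plt.figure(figsize=(3, 3))
plot_image(index, predictions, test_labels, test_imgs[index])
plt.show()
```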