Vanishing Gradients: Ungraded Lecture Notebook
In this notebook you'll take another look at vanishing gradients, from an intuitive standpoint.
Background
Adding layers to a neural network introduces multiplicative effects in both forward and backward propagation. Backpropagation in particular presents a problem, as the gradients of activation functions can be very small. Multiplied together across many layers, their product can be vanishingly small! This results in the weights of the front layers barely being updated and training not progressing.
Gradients of the sigmoid function, for example, lie in the range 0 to 0.25. To calculate gradients for the front layers of a neural network, the chain rule is used: these small values are multiplied together starting at the last layer and working backwards to the first, so the gradients shrink exponentially at each step. For instance, ten layers each contributing a gradient of at most 0.25 leave a factor of at most 0.25^10 ≈ 9.5 × 10^-7.
Imports
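The import cell isn't shown in this extract. A minimal set that supports the sketches below (assuming NumPy and Matplotlib) would be:

```python
import numpy as np               # numerical arrays and math
import matplotlib.pyplot as plt  # plotting
```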
Data, Activation & Gradient
Data
I'll start by creating some data; nothing special going on here, just some values spread evenly across the interval -5 to 5.
Try changing the range of values in the data to see how it impacts the plots that follow.
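The data cell itself isn't included in this extract; a minimal sketch (the number of points is an assumption) would be:

```python
# 100 evenly spaced values across the interval -5 to 5
x = np.linspace(-5, 5, 100)
```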
Activation
The activation used here is sigmoid(), which squishes the data x into the interval 0 to 1.
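A sketch of the activation cell, using the standard logistic sigmoid:

```python
def sigmoid(x):
    # Logistic sigmoid: maps any real input into the interval (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

activations = sigmoid(x)
```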
Gradient
This is the derivative of the sigmoid() activation function. It has a maximum of 0.25 at x = 0, the steepest point on the sigmoid plot.
Try changing the x value for finding the tangent line in the plot.
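A sketch of the gradient cell, together with the tangent-line calculation the prompt above refers to (x_tan = 0 is an assumed example point; change it to move the tangent line):

```python
def sigmoid_gradient(x):
    # Derivative of the sigmoid: sigma(x) * (1 - sigma(x)),
    # which peaks at 0.25 when x = 0
    s = sigmoid(x)
    return s * (1 - s)

gradients = sigmoid_gradient(x)

# Tangent line to the sigmoid at x_tan
x_tan = 0
slope = sigmoid_gradient(x_tan)
tangent = sigmoid(x_tan) + slope * (x - x_tan)
```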
Plots
Sub Plots
Data values run along the x-axis of each plot, on the interval chosen for x, -5 to 5. The three subplots (a sketch of the plotting code follows the list):
x vs x
sigmoid of x
gradient of sigmoid
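A minimal sketch of the plotting cell (figure size and styling are assumptions):

```python
fig, axs = plt.subplots(1, 3, figsize=(15, 4))

axs[0].plot(x, x)
axs[0].set_title('x vs x')

axs[1].plot(x, activations)
axs[1].set_title('sigmoid of x')

axs[2].plot(x, gradients)
axs[2].set_title('gradient of sigmoid')

for ax in axs:
    ax.set_xlabel('x')

plt.show()
```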
Notice how the y-axis keeps compressing from the left plot to the right plot: the range shrinks from 10 to 1 to 0.25. How did this happen? As |x| gets larger, the sigmoid approaches its asymptotes at 0 and 1, and the sigmoid gradient shrinks towards 0.
Try changing the range of values in the code block above to see how it impacts the plots.
Single Plot
Putting all 3 series on a single plot can help visualize the compression. Notice how hard the plot is to interpret: sigmoid and its gradient are tiny compared to the scale of the input data x.
Try changing the plot ylim to zoom in.
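A sketch of the single-plot cell; the commented-out ylim line is where you can zoom in:

```python
plt.figure(figsize=(8, 5))
plt.plot(x, x, label='x')
plt.plot(x, activations, label='sigmoid(x)')
plt.plot(x, gradients, label='sigmoid gradient')
plt.xlabel('x')
plt.legend()
# plt.ylim(-0.5, 1.5)  # un-comment to zoom in on the two smaller series
plt.show()
```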
Numerical Impact
Multiplication & Decay
Multiplying numbers smaller than 1 together results in smaller and smaller numbers. Below is an example that finds the gradient for an input x = 0 and multiplies it over n steps. Look how quickly it 'vanishes' to almost zero. Yet sigmoid(x = 0) = 0.5, and its gradient of 0.25 happens to be the largest sigmoid gradient possible!
(Note: This is NOT an implementation of back propagation.)
Try changing the number of steps n.
Try changing the input value x. Consider the impact on sigmoid and sigmoid gradient.
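A minimal sketch of this cell (the variable names and n = 6 are assumptions; it simply raises the gradient to successive powers, not an implementation of back propagation):

```python
x0 = 0   # input value; a fresh name so the data array x above is untouched
n = 6    # number of multiplication steps

grad = sigmoid_gradient(x0)
print(f'sigmoid({x0}) = {sigmoid(x0):.4f}')
print(f'gradient of sigmoid at {x0} = {grad:.4f}')

# Repeatedly multiply by the gradient, as if it were passed back
# through n identical layers (NOT back propagation)
product = 1.0
for step in range(1, n + 1):
    product *= grad
    print(f'step {step}: product = {product:.10f}')
```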
Solution
One solution is to use activation functions that don't have tiny gradients. Other solutions involve more sophisticated model design. But they're both discussions for another time.