Building your Recurrent Neural Network - Step by Step
Welcome to Course 5's first assignment! In this assignment, you will implement your first Recurrent Neural Network in numpy.
Recurrent Neural Networks (RNN) are very effective for Natural Language Processing and other sequence tasks because they have "memory". They can read inputs (such as words) one at a time, and remember some information/context through the hidden layer activations that get passed from one time-step to the next. This allows a uni-directional RNN to take information from the past to process later inputs. A bidirectional RNN can take context from both the past and the future.
Notation:
- Superscript $[l]$ denotes an object associated with the $l^{th}$ layer.
    - Example: $a^{[4]}$ is the $4^{th}$ layer activation. $W^{[5]}$ and $b^{[5]}$ are the $5^{th}$ layer parameters.
- Superscript $(i)$ denotes an object associated with the $i^{th}$ example.
    - Example: $x^{(i)}$ is the $i^{th}$ training example input.
- Superscript $\langle t \rangle$ denotes an object at the $t^{th}$ time-step.
    - Example: $x^{\langle t \rangle}$ is the input x at the $t^{th}$ time-step. $x^{(i)\langle t \rangle}$ is the input at the $t^{th}$ time-step of example $i$.
- Subscript $i$ denotes the $i^{th}$ entry of a vector.
    - Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the activations in layer $l$.
We assume that you are already familiar with numpy
and/or have completed the previous courses of the specialization. Let's get started!
Let's first import all the packages that you will need during this assignment.
1 - Forward propagation for the basic Recurrent Neural Network
Later this week, you will generate music using an RNN. The basic RNN that you will implement has the structure below. In this example, $T_x = T_y$.
*Figure 1: Basic RNN model*
Here's how you can implement an RNN:
Steps:
Implement the calculations needed for one time-step of the RNN.
Implement a loop over time-steps in order to process all the inputs, one at a time.
Let's go!
1.1 - RNN cell
A Recurrent neural network can be seen as the repetition of a single cell. You are first going to implement the computations for a single time-step. The following figure describes the operations for a single time-step of an RNN cell.
*Figure 2: Basic RNN cell*
Exercise: Implement the RNN-cell described in Figure (2).
Instructions:
1. Compute the hidden state with tanh activation: $a^{\langle t \rangle} = \tanh(W_{aa} a^{\langle t-1 \rangle} + W_{ax} x^{\langle t \rangle} + b_a)$.
2. Using your new hidden state $a^{\langle t \rangle}$, compute the prediction $\hat{y}^{\langle t \rangle} = \text{softmax}(W_{ya} a^{\langle t \rangle} + b_y)$. We provided you the function `softmax`.
3. Store $(a^{\langle t \rangle}, a^{\langle t-1 \rangle}, x^{\langle t \rangle}, \text{parameters})$ in `cache`.
4. Return $a^{\langle t \rangle}$, $\hat{y}^{\langle t \rangle}$ and `cache`.

We will vectorize over $m$ examples. Thus, $x^{\langle t \rangle}$ will have dimension $(n_x, m)$, and $a^{\langle t \rangle}$ will have dimension $(n_a, m)$.
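For reference, here is a minimal sketch of what `rnn_cell_forward` could look like, using the parameter names suggested by the dimensions above (`Wax`, `Waa`, `Wya`, `ba`, `by`); the `softmax` shown is only a stand-in for the helper provided with the notebook:

```python
import numpy as np

def softmax(x):
    # Column-wise softmax; stand-in for the helper provided in the notebook.
    e_x = np.exp(x - np.max(x, axis=0, keepdims=True))
    return e_x / e_x.sum(axis=0, keepdims=True)

def rnn_cell_forward(xt, a_prev, parameters):
    # Retrieve the parameters (assumed dictionary keys)
    Wax, Waa, Wya = parameters["Wax"], parameters["Waa"], parameters["Wya"]
    ba, by = parameters["ba"], parameters["by"]

    # a<t> = tanh(Waa a<t-1> + Wax x<t> + ba)
    a_next = np.tanh(np.dot(Waa, a_prev) + np.dot(Wax, xt) + ba)
    # yhat<t> = softmax(Wya a<t> + by)
    yt_pred = softmax(np.dot(Wya, a_next) + by)

    # Cache the values needed for the backward pass
    cache = (a_next, a_prev, xt, parameters)
    return a_next, yt_pred, cache
```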
Expected Output:
| | |
| --- | --- |
| **a_next[4]**: | [ 0.59584544 0.18141802 0.61311866 0.99808218 0.85016201 0.99980978 -0.18887155 0.99815551 0.6531151 0.82872037] |
| **a_next.shape**: | (5, 10) |
| **yt[1]**: | [ 0.9888161 0.01682021 0.21140899 0.36817467 0.98988387 0.88945212 0.36920224 0.9966312 0.9982559 0.17746526] |
| **yt.shape**: | (2, 10) |
1.2 - RNN forward pass
You can see an RNN as the repetition of the cell you've just built. If your input sequence of data is carried over 10 time steps, then you will copy the RNN cell 10 times. Each cell takes as input the hidden state from the previous cell ($a^{\langle t-1 \rangle}$) and the current time-step's input data ($x^{\langle t \rangle}$). It outputs a hidden state ($a^{\langle t \rangle}$) and a prediction ($y^{\langle t \rangle}$) for this time-step.
*Figure 3: Basic RNN forward pass*
Exercise: Code the forward propagation of the RNN described in Figure (3).
Instructions:
1. Create a vector of zeros ($a$) that will store all the hidden states computed by the RNN.
2. Initialize the "next" hidden state as $a_0$ (the initial hidden state).
3. Start looping over each time step; your incremental index is $t$:
    - Update the "next" hidden state and the cache by running `rnn_cell_forward`.
    - Store the "next" hidden state in $a$ (at the $t^{th}$ position).
    - Store the prediction in $y$.
    - Append the cache to the list of caches.
4. Return $a$, $y$ and `caches`.
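Building on that cell, a sketch of `rnn_forward` might look as follows (assuming `x` has shape $(n_x, m, T_x)$ and the caches are returned as `(list_of_caches, x)`):

```python
def rnn_forward(x, a0, parameters):
    caches = []
    n_x, m, T_x = x.shape
    n_y, n_a = parameters["Wya"].shape

    # Storage for all hidden states and predictions
    a = np.zeros((n_a, m, T_x))
    y_pred = np.zeros((n_y, m, T_x))

    a_next = a0  # initial hidden state
    for t in range(T_x):
        # One step of the RNN, then store its outputs
        a_next, yt_pred, cache = rnn_cell_forward(x[:, :, t], a_next, parameters)
        a[:, :, t] = a_next
        y_pred[:, :, t] = yt_pred
        caches.append(cache)

    caches = (caches, x)
    return a, y_pred, caches
```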
Expected Output:
| | |
| --- | --- |
| **a[4][1]**: | [-0.99999375 0.77911235 -0.99861469 -0.99833267] |
| **a.shape**: | (5, 10, 4) |
| **y[1][3]**: | [ 0.79560373 0.86224861 0.11118257 0.81515947] |
| **y.shape**: | (2, 10, 4) |
| **cache[1][1][3]**: | [-1.1425182 -0.34934272 -0.20889423 0.58662319] |
| **len(cache)**: | 2 |
Congratulations! You've successfully built the forward propagation of a recurrent neural network from scratch. This will work well enough for some applications, but it suffers from vanishing gradient problems. So it works best when each output $y^{\langle t \rangle}$ can be estimated using mainly "local" context (meaning information from inputs $x^{\langle t' \rangle}$ where $t'$ is not too far from $t$).
In the next part, you will build a more complex LSTM model, which is better at addressing vanishing gradients. The LSTM will be better able to remember a piece of information and keep it saved for many timesteps.
2 - Long Short-Term Memory (LSTM) network
The following figure shows the operations of an LSTM-cell.
*Figure 4: LSTM cell*
Similar to the RNN example above, you will start by implementing the LSTM cell for a single time-step. Then you can iteratively call it from inside a for-loop to have it process an input with $T_x$ time-steps.
About the gates
- Forget gate
For the sake of this illustration, let's assume we are reading words in a piece of text, and want to use an LSTM to keep track of grammatical structures, such as whether the subject is singular or plural. If the subject changes from a singular word to a plural word, we need a way to get rid of our previously stored memory value of the singular/plural state. In an LSTM, the forget gate lets us do this:

$$\Gamma_f^{\langle t \rangle} = \sigma(W_f[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_f)\tag{1}$$

Here, $W_f$ are weights that govern the forget gate's behavior. We concatenate $[a^{\langle t-1 \rangle}, x^{\langle t \rangle}]$ and multiply by $W_f$. The equation above results in a vector $\Gamma_f^{\langle t \rangle}$ with values between 0 and 1. This forget gate vector will be multiplied element-wise by the previous cell state $c^{\langle t-1 \rangle}$. So if one of the values of $\Gamma_f^{\langle t \rangle}$ is 0 (or close to 0), the LSTM should remove that piece of information (e.g. the singular subject) in the corresponding component of $c^{\langle t-1 \rangle}$. If one of the values is 1, it will keep the information.
- Update gate
Once we forget that the subject being discussed is singular, we need a way to update our memory to reflect that the new subject is now plural. Here is the formula for the update gate:

$$\Gamma_u^{\langle t \rangle} = \sigma(W_u[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_u)\tag{2}$$

Similar to the forget gate, here $\Gamma_u^{\langle t \rangle}$ is again a vector of values between 0 and 1. This will be multiplied element-wise with $\tilde{c}^{\langle t \rangle}$, in order to compute $c^{\langle t \rangle}$.
- Updating the cell
To update the new subject, we need to create a new vector of numbers that we can add to our previous cell state. The equation we use is:

$$\tilde{c}^{\langle t \rangle} = \tanh(W_c[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_c)\tag{3}$$

Finally, the new cell state is:

$$c^{\langle t \rangle} = \Gamma_f^{\langle t \rangle} * c^{\langle t-1 \rangle} + \Gamma_u^{\langle t \rangle} * \tilde{c}^{\langle t \rangle}\tag{4}$$
- Output gate
To decide which outputs we will use, we use the following two formulas:

$$\Gamma_o^{\langle t \rangle} = \sigma(W_o[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_o)\tag{5}$$
$$a^{\langle t \rangle} = \Gamma_o^{\langle t \rangle} * \tanh(c^{\langle t \rangle})\tag{6}$$

In equation 5 you decide what to output using a sigmoid function, and in equation 6 you multiply that by the $\tanh$ of the updated cell state.
2.1 - LSTM cell
Exercise: Implement the LSTM cell described in the figure above.
Instructions:
1. Concatenate $a^{\langle t-1 \rangle}$ and $x^{\langle t \rangle}$ in a single matrix: $concat = \begin{bmatrix} a^{\langle t-1 \rangle} \\ x^{\langle t \rangle} \end{bmatrix}$
2. Compute all the formulas 1-6. You can use `sigmoid()` (provided) and `np.tanh()`.
3. Compute the prediction $y^{\langle t \rangle}$. You can use `softmax()` (provided).
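A sketch of `lstm_cell_forward` along these lines, using the parameter names suggested by the expected output below (`Wf`, `bf`, `Wi`, `bi`, `Wc`, `bc`, `Wo`, `bo`, `Wy`, `by`); `sigmoid` here is a stand-in for the provided helper, and `softmax` is assumed from the earlier sketch:

```python
def sigmoid(x):
    # Stand-in for the notebook's provided sigmoid helper.
    return 1 / (1 + np.exp(-x))

def lstm_cell_forward(xt, a_prev, c_prev, parameters):
    # Retrieve the parameters (assumed dictionary keys)
    Wf, bf = parameters["Wf"], parameters["bf"]
    Wi, bi = parameters["Wi"], parameters["bi"]
    Wc, bc = parameters["Wc"], parameters["bc"]
    Wo, bo = parameters["Wo"], parameters["bo"]
    Wy, by = parameters["Wy"], parameters["by"]

    n_x, m = xt.shape
    n_a, _ = a_prev.shape

    # Stack a<t-1> and x<t> into one matrix
    concat = np.zeros((n_a + n_x, m))
    concat[:n_a, :] = a_prev
    concat[n_a:, :] = xt

    # Equations 1-6
    ft = sigmoid(np.dot(Wf, concat) + bf)       # forget gate
    it = sigmoid(np.dot(Wi, concat) + bi)       # update gate
    cct = np.tanh(np.dot(Wc, concat) + bc)      # candidate cell state
    c_next = ft * c_prev + it * cct             # new cell state
    ot = sigmoid(np.dot(Wo, concat) + bo)       # output gate
    a_next = ot * np.tanh(c_next)               # new hidden state

    # Prediction for this time-step
    yt_pred = softmax(np.dot(Wy, a_next) + by)

    cache = (a_next, c_next, a_prev, c_prev, ft, it, cct, ot, xt, parameters)
    return a_next, c_next, yt_pred, cache
```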
Expected Output:
| | |
| --- | --- |
| **a_next[4]**: | [-0.66408471 0.0036921 0.02088357 0.22834167 -0.85575339 0.00138482 0.76566531 0.34631421 -0.00215674 0.43827275] |
| **a_next.shape**: | (5, 10) |
| **c_next[2]**: | [ 0.63267805 1.00570849 0.35504474 0.20690913 -1.64566718 0.11832942 0.76449811 -0.0981561 -0.74348425 -0.26810932] |
| **c_next.shape**: | (5, 10) |
| **yt[1]**: | [ 0.79913913 0.15986619 0.22412122 0.15606108 0.97057211 0.31146381 0.00943007 0.12666353 0.39380172 0.07828381] |
| **yt.shape**: | (2, 10) |
| **cache[1][3]**: | [-0.16263996 1.03729328 0.72938082 -0.54101719 0.02752074 -0.30821874 0.07651101 -1.03752894 1.41219977 -0.37647422] |
| **len(cache)**: | 10 |
2.2 - Forward pass for LSTM
Now that you have implemented one step of an LSTM, you can iterate it over $T_x$ time-steps using a for-loop to process a sequence of inputs.
*Figure: LSTM forward pass over multiple time-steps*
Exercise: Implement `lstm_forward()` to run an LSTM over $T_x$ time-steps.

Note: $c^{\langle 0 \rangle}$ is initialized with zeros.
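A sketch of `lstm_forward` built on that cell (again only a sketch, assuming the same shapes as in the RNN forward pass):

```python
def lstm_forward(x, a0, parameters):
    caches = []
    n_x, m, T_x = x.shape
    n_y, n_a = parameters["Wy"].shape

    # Storage for hidden states, cell states and predictions
    a = np.zeros((n_a, m, T_x))
    c = np.zeros((n_a, m, T_x))
    y = np.zeros((n_y, m, T_x))

    a_next = a0
    c_next = np.zeros((n_a, m))   # c<0> is initialized with zeros

    for t in range(T_x):
        a_next, c_next, yt, cache = lstm_cell_forward(x[:, :, t], a_next, c_next, parameters)
        a[:, :, t] = a_next
        c[:, :, t] = c_next
        y[:, :, t] = yt
        caches.append(cache)

    caches = (caches, x)
    return a, y, c, caches
```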
Expected Output:
| | |
| --- | --- |
| **a[4][3][6]** = | 0.172117767533 |
| **a.shape** = | (5, 10, 7) |
| **y[1][4][3]** = | 0.95087346185 |
| **y.shape** = | (2, 10, 7) |
| **caches[1][1][1]** = | [ 0.82797464 0.23009474 0.76201118 -0.22232814 -0.20075807 0.18656139 0.41005165] |
Congratulations! You have now implemented the forward passes for the basic RNN and the LSTM. When using a deep learning framework, implementing the forward pass is sufficient to build systems that achieve great performance.
The rest of this notebook is optional, and will not be graded.
3 - Backpropagation in recurrent neural networks (OPTIONAL / UNGRADED)
In modern deep learning frameworks, you only have to implement the forward pass, and the framework takes care of the backward pass, so most deep learning engineers do not need to bother with the details of the backward pass. If however you are an expert in calculus and want to see the details of backprop in RNNs, you can work through this optional portion of the notebook.
In an earlier course, when you implemented a simple (fully connected) neural network, you used backpropagation to compute derivatives with respect to the cost in order to update the parameters. Similarly, in recurrent neural networks you can calculate the derivatives with respect to the cost in order to update the parameters. The backprop equations are quite complicated and we did not derive them in lecture. However, we will briefly present them below.
3.1 - Basic RNN backward pass
We will start by computing the backward pass for the basic RNN-cell.
Deriving the one step backward functions:
To compute `rnn_cell_backward` you need to compute the following equations. It is a good exercise to derive them by hand.
The derivative of $\tanh$ is $1 - \tanh(x)^2$. You can find the complete proof here. Note that: $\text{sech}(x)^2 = 1 - \tanh(x)^2$.

Similarly for $\frac{\partial a^{\langle t \rangle}}{\partial W_{ax}}$, $\frac{\partial a^{\langle t \rangle}}{\partial W_{aa}}$ and $\frac{\partial a^{\langle t \rangle}}{\partial b_a}$, the derivative of $\tanh(u)$ is $(1-\tanh(u)^2)du$.

The final two equations also follow the same rule and are derived using the $\tanh$ derivative. Note that the arrangement is done in a way to get the dimensions to match.
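Translating these derivatives into code, a sketch of `rnn_cell_backward` could look like this (assuming the cache layout `(a_next, a_prev, xt, parameters)` from the forward sketch):

```python
def rnn_cell_backward(da_next, cache):
    a_next, a_prev, xt, parameters = cache
    Wax, Waa = parameters["Wax"], parameters["Waa"]

    # a_next = tanh(...), so d(tanh) = (1 - a_next^2) * upstream gradient
    dtanh = (1 - a_next ** 2) * da_next

    # Gradients of this cell's inputs and parameters
    dxt = np.dot(Wax.T, dtanh)
    dWax = np.dot(dtanh, xt.T)
    da_prev = np.dot(Waa.T, dtanh)
    dWaa = np.dot(dtanh, a_prev.T)
    dba = np.sum(dtanh, axis=1, keepdims=True)

    return {"dxt": dxt, "da_prev": da_prev, "dWax": dWax, "dWaa": dWaa, "dba": dba}
```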
Expected Output:
**gradients["dxt"][1][2]** = | -0.460564103059 |
**gradients["dxt"].shape** = | (3, 10) |
**gradients["da_prev"][2][3]** = | 0.0842968653807 |
**gradients["da_prev"].shape** = | (5, 10) |
**gradients["dWax"][3][1]** = | 0.393081873922 |
**gradients["dWax"].shape** = | (5, 3) |
**gradients["dWaa"][1][2]** = | -0.28483955787 |
**gradients["dWaa"].shape** = | (5, 5) |
**gradients["dba"][4]** = | [ 0.80517166] |
**gradients["dba"].shape** = | (5, 1) |
Backward pass through the RNN
Computing the gradients of the cost with respect to $a^{\langle t \rangle}$ at every time-step $t$ is useful because it is what helps the gradient backpropagate to the previous RNN-cell. To do so, you need to iterate through all the time steps starting at the end, and at each step, you increment the overall $db_a$, $dW_{aa}$, $dW_{ax}$ and you store $dx$.
Instructions:
Implement the `rnn_backward` function. Initialize the return variables with zeros first, then loop through all the time steps, calling `rnn_cell_backward` at each time step and updating the other variables accordingly.
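A sketch of `rnn_backward` along those lines (assuming `da` has shape $(n_a, m, T_x)$ and `caches` comes from `rnn_forward` above):

```python
def rnn_backward(da, caches):
    (caches_list, x) = caches
    (a1, a0, x1, parameters) = caches_list[0]

    n_a, m, T_x = da.shape
    n_x, _ = x1.shape

    # Initialize the accumulated gradients with zeros
    dx = np.zeros((n_x, m, T_x))
    dWax = np.zeros((n_a, n_x))
    dWaa = np.zeros((n_a, n_a))
    dba = np.zeros((n_a, 1))
    da_prevt = np.zeros((n_a, m))

    # Walk backward through time, adding the gradient flowing in from the future
    for t in reversed(range(T_x)):
        gradients = rnn_cell_backward(da[:, :, t] + da_prevt, caches_list[t])
        dx[:, :, t] = gradients["dxt"]
        da_prevt = gradients["da_prev"]
        dWax += gradients["dWax"]
        dWaa += gradients["dWaa"]
        dba += gradients["dba"]

    da0 = da_prevt
    return {"dx": dx, "da0": da0, "dWax": dWax, "dWaa": dWaa, "dba": dba}
```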
Expected Output:
**gradients["dx"][1][2]** = | [-2.07101689 -0.59255627 0.02466855 0.01483317] |
**gradients["dx"].shape** = | (3, 10, 4) |
**gradients["da0"][2][3]** = | -0.314942375127 |
**gradients["da0"].shape** = | (5, 10) |
**gradients["dWax"][3][1]** = | 11.2641044965 |
**gradients["dWax"].shape** = | (5, 3) |
**gradients["dWaa"][1][2]** = | 2.30333312658 |
**gradients["dWaa"].shape** = | (5, 5) |
**gradients["dba"][4]** = | [-0.74747722] |
**gradients["dba"].shape** = | (5, 1) |
3.2 - LSTM backward pass
3.2.1 One Step backward
The LSTM backward pass is slightly more complicated than the forward one. We have provided you with all the equations for the LSTM backward pass below. (If you enjoy calculus exercises, feel free to try deriving these from scratch yourself.)
3.2.2 gate derivatives
3.2.3 parameter derivatives
To calculate $db_f$, $db_u$, $db_c$, $db_o$ you just need to sum across the horizontal axis (`axis=1`) on $d\Gamma_f^{\langle t \rangle}$, $d\Gamma_u^{\langle t \rangle}$, $d\tilde{c}^{\langle t \rangle}$, $d\Gamma_o^{\langle t \rangle}$ respectively. Note that you should use the `keepdims=True` option.
Finally, you will compute the derivative with respect to the previous hidden state, previous memory state, and input.
Here, the weights for equation 13 are the first $n_a$ columns (i.e. $W_f[:, :n_a]$, etc.), whereas the weights for equation 15 are the columns from $n_a$ to the end (i.e. $W_f[:, n_a:]$, etc.).
Exercise: Implement `lstm_cell_backward` by implementing the gate, parameter, and state derivatives described above. Good luck! 😃
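Since the numbered equations did not survive in this copy, here is one standard derivation of the LSTM cell's backward pass written directly as code; it is a sketch and may group the same terms differently from the notebook's equations:

```python
def lstm_cell_backward(da_next, dc_next, cache):
    (a_next, c_next, a_prev, c_prev, ft, it, cct, ot, xt, parameters) = cache
    n_a, m = a_next.shape

    # Total gradient into c_next: from c<t+1> and through a_next = ot * tanh(c_next)
    dc = dc_next + da_next * ot * (1 - np.tanh(c_next) ** 2)

    # Gate derivatives (sigmoid' = s(1-s), tanh' = 1 - tanh^2)
    dot = da_next * np.tanh(c_next) * ot * (1 - ot)
    dcct = dc * it * (1 - cct ** 2)
    dit = dc * cct * it * (1 - it)
    dft = dc * c_prev * ft * (1 - ft)

    # Parameter derivatives: every gate saw the same concatenated input [a_prev; xt]
    concat = np.concatenate((a_prev, xt), axis=0)
    dWf = np.dot(dft, concat.T)
    dWi = np.dot(dit, concat.T)
    dWc = np.dot(dcct, concat.T)
    dWo = np.dot(dot, concat.T)
    dbf = np.sum(dft, axis=1, keepdims=True)
    dbi = np.sum(dit, axis=1, keepdims=True)
    dbc = np.sum(dcct, axis=1, keepdims=True)
    dbo = np.sum(dot, axis=1, keepdims=True)

    # Derivatives w.r.t. previous hidden state, previous memory state, and input:
    # the first n_a columns of each W multiply a_prev, the remaining columns multiply xt
    dconcat = (np.dot(parameters["Wf"].T, dft) + np.dot(parameters["Wi"].T, dit)
               + np.dot(parameters["Wc"].T, dcct) + np.dot(parameters["Wo"].T, dot))
    da_prev = dconcat[:n_a, :]
    dxt = dconcat[n_a:, :]
    dc_prev = dc * ft

    return {"dxt": dxt, "da_prev": da_prev, "dc_prev": dc_prev,
            "dWf": dWf, "dbf": dbf, "dWi": dWi, "dbi": dbi,
            "dWc": dWc, "dbc": dbc, "dWo": dWo, "dbo": dbo}
```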
Expected Output:
**gradients["dxt"][1][2]** = | 3.23055911511 |
**gradients["dxt"].shape** = | (3, 10) |
**gradients["da_prev"][2][3]** = | -0.0639621419711 |
**gradients["da_prev"].shape** = | (5, 10) |
**gradients["dc_prev"][2][3]** = | 0.797522038797 |
**gradients["dc_prev"].shape** = | (5, 10) |
**gradients["dWf"][3][1]** = | -0.147954838164 |
**gradients["dWf"].shape** = | (5, 8) |
**gradients["dWi"][1][2]** = | 1.05749805523 |
**gradients["dWi"].shape** = | (5, 8) |
**gradients["dWc"][3][1]** = | 2.30456216369 |
**gradients["dWc"].shape** = | (5, 8) |
**gradients["dWo"][1][2]** = | 0.331311595289 |
**gradients["dWo"].shape** = | (5, 8) |
**gradients["dbf"][4]** = | [ 0.18864637] |
**gradients["dbf"].shape** = | (5, 1) |
**gradients["dbi"][4]** = | [-0.40142491] |
**gradients["dbi"].shape** = | (5, 1) |
**gradients["dbc"][4]** = | [ 0.25587763] |
**gradients["dbc"].shape** = | (5, 1) |
**gradients["dbo"][4]** = | [ 0.13893342] |
**gradients["dbo"].shape** = | (5, 1) |
3.3 Backward pass through the LSTM RNN
This part is very similar to the `rnn_backward` function you implemented above. You will first create variables of the same dimension as your return variables. You will then iterate over all the time steps starting from the end and call the one-step function you implemented for the LSTM at each iteration. You will then update the parameters by summing them individually. Finally, return a dictionary with the new gradients.

Instructions: Implement the `lstm_backward` function. Create a for-loop starting from $T_x$ and going backward. For each step, call `lstm_cell_backward` and update your old gradients by adding the new gradients to them. Note that `dxt` is not updated but is stored.
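A sketch of `lstm_backward` along these lines (assuming the cache layout from the `lstm_cell_forward` sketch above):

```python
def lstm_backward(da, caches):
    (caches_list, x) = caches
    (a1, c1, a0, c0, f1, i1, cc1, o1, x1, parameters) = caches_list[0]

    n_a, m, T_x = da.shape
    n_x, _ = x1.shape

    # Initialize the accumulated gradients with zeros
    dx = np.zeros((n_x, m, T_x))
    da_prevt = np.zeros((n_a, m))
    dc_prevt = np.zeros((n_a, m))
    dWf = np.zeros((n_a, n_a + n_x)); dbf = np.zeros((n_a, 1))
    dWi = np.zeros((n_a, n_a + n_x)); dbi = np.zeros((n_a, 1))
    dWc = np.zeros((n_a, n_a + n_x)); dbc = np.zeros((n_a, 1))
    dWo = np.zeros((n_a, n_a + n_x)); dbo = np.zeros((n_a, 1))

    # Walk backward through time
    for t in reversed(range(T_x)):
        grads = lstm_cell_backward(da[:, :, t] + da_prevt, dc_prevt, caches_list[t])
        dx[:, :, t] = grads["dxt"]           # dxt is stored, not accumulated
        da_prevt = grads["da_prev"]
        dc_prevt = grads["dc_prev"]
        dWf += grads["dWf"]; dbf += grads["dbf"]
        dWi += grads["dWi"]; dbi += grads["dbi"]
        dWc += grads["dWc"]; dbc += grads["dbc"]
        dWo += grads["dWo"]; dbo += grads["dbo"]

    da0 = da_prevt
    return {"dx": dx, "da0": da0, "dWf": dWf, "dbf": dbf, "dWi": dWi, "dbi": dbi,
            "dWc": dWc, "dbc": dbc, "dWo": dWo, "dbo": dbo}
```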
Expected Output:
**gradients["dx"][1][2]** = | [-0.00173313 0.08287442 -0.30545663 -0.43281115] |
**gradients["dx"].shape** = | (3, 10, 4) |
**gradients["da0"][2][3]** = | -0.095911501954 |
**gradients["da0"].shape** = | (5, 10) |
**gradients["dWf"][3][1]** = | -0.0698198561274 |
**gradients["dWf"].shape** = | (5, 8) |
**gradients["dWi"][1][2]** = | 0.102371820249 |
**gradients["dWi"].shape** = | (5, 8) |
**gradients["dWc"][3][1]** = | -0.0624983794927 |
**gradients["dWc"].shape** = | (5, 8) |
**gradients["dWo"][1][2]** = | 0.0484389131444 |
**gradients["dWo"].shape** = | (5, 8) |
**gradients["dbf"][4]** = | [-0.0565788] |
**gradients["dbf"].shape** = | (5, 1) |
**gradients["dbi"][4]** = | [-0.06997391] |
**gradients["dbi"].shape** = | (5, 1) |
**gradients["dbc"][4]** = | [-0.27441821] |
**gradients["dbc"].shape** = | (5, 1) |
**gradients["dbo"][4]** = | [ 0.16532821] |
**gradients["dbo"].shape** = | (5, 1) |
Congratulations!
Congratulations on completing this assignment. You now understand how recurrent neural networks work!
Let's go on to the next exercise, where you'll use an RNN to build a character-level language model.