#!/usr/bin/env python
# coding: utf-8

# # Neural Machine Translation
#
# Welcome to your first programming assignment for this week!
#
# * You will build a Neural Machine Translation (NMT) model to translate human-readable dates ("25th of June, 2009") into machine-readable dates ("2009-06-25").
# * You will do this using an attention model, one of the most sophisticated sequence-to-sequence models.
#
# This notebook was produced together with NVIDIA's Deep Learning Institute.

# ## Table of Contents
#
# - [Packages](#0)
# - [1 - Translating Human Readable Dates Into Machine Readable Dates](#1)
#     - [1.1 - Dataset](#1-1)
# - [2 - Neural Machine Translation with Attention](#2)
#     - [2.1 - Attention Mechanism](#2-1)
#         - [Exercise 1 - one_step_attention](#ex-1)
#         - [Exercise 2 - modelf](#ex-2)
#         - [Exercise 3 - Compile the Model](#ex-3)
# - [3 - Visualizing Attention (Optional / Ungraded)](#3)
#     - [3.1 - Getting the Attention Weights From the Network](#3-1)
# <a name='0'></a>
# ## Packages

# In[ ]:


from tensorflow.keras.layers import Bidirectional, Concatenate, Permute, Dot, Input, LSTM, Multiply
from tensorflow.keras.layers import RepeatVector, Dense, Activation, Lambda
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.models import load_model, Model
import tensorflow.keras.backend as K
import tensorflow as tf
import numpy as np

from faker import Faker
import random
from tqdm import tqdm
from babel.dates import format_date
from nmt_utils import *
import matplotlib.pyplot as plt
get_ipython().run_line_magic('matplotlib', 'inline')

# <a name='1'></a>
# ## 1 - Translating Human Readable Dates Into Machine Readable Dates
#
# * The model you will build here could be used to translate from one language to another, such as translating from English to Hindi.
# * However, language translation requires massive datasets and usually takes days of training on GPUs.
# * To give you a place to experiment with these models without using massive datasets, we will perform a simpler "date translation" task.
# * The network will input a date written in a variety of possible formats (*e.g. "the 29th of August 1958", "03/30/1968", "24 JUNE 1987"*).
# * The network will translate them into standardized, machine readable dates (*e.g. "1958-08-29", "1968-03-30", "1987-06-24"*).
# * We will have the network learn to output dates in the common machine-readable format YYYY-MM-DD.
#
# <!--
# Take a look at [nmt_utils.py](./nmt_utils.py) to see all the formatting. Count and figure out how the formats work; you will need this knowledge later. -->
# <a name='1-1'></a>
# ### 1.1 - Dataset
#
# We will train the model on a dataset of 10,000 human readable dates and their equivalent, standardized, machine readable dates. Let's run the following cells to load the dataset and print some examples.

# In[ ]:


m = 10000
dataset, human_vocab, machine_vocab, inv_machine_vocab = load_dataset(m)


# In[ ]:


dataset[:10]

# You've loaded:
# - `dataset`: a list of tuples of (human readable date, machine readable date).
# - `human_vocab`: a python dictionary mapping all characters used in the human readable dates to an integer-valued index.
# - `machine_vocab`: a python dictionary mapping all characters used in machine readable dates to an integer-valued index.
#     - **Note**: These indices are not necessarily consistent with `human_vocab`.
# - `inv_machine_vocab`: the inverse dictionary of `machine_vocab`, mapping from indices back to characters.
#
# Let's preprocess the data and map the raw text data into the index values.
# - We will set Tx=30
#     - We assume Tx is the maximum length of the human readable date.
#     - If we get a longer input, we would have to truncate it.
# - We will set Ty=10
#     - "YYYY-MM-DD" is 10 characters long.
# In[ ]:


Tx = 30
Ty = 10
X, Y, Xoh, Yoh = preprocess_data(dataset, human_vocab, machine_vocab, Tx, Ty)

print("X.shape:", X.shape)
print("Y.shape:", Y.shape)
print("Xoh.shape:", Xoh.shape)
print("Yoh.shape:", Yoh.shape)

# You now have:
# - `X`: a processed version of the human readable dates in the training set.
#     - Each character in X is replaced by an index (integer) mapped to the character using `human_vocab`.
#     - Each date is padded to ensure a length of $T_x$ using a special character (`<pad>`).
#     - `X.shape = (m, Tx)` where m is the number of training examples in a batch.
# - `Y`: a processed version of the machine readable dates in the training set.
#     - Each character is replaced by the index (integer) it is mapped to in `machine_vocab`.
#     - `Y.shape = (m, Ty)`.
# - `Xoh`: one-hot version of `X`.
#     - Each index in `X` is converted to its one-hot representation (if the index is 2, the one-hot version has position 2 set to 1, and the remaining positions set to 0).
#     - `Xoh.shape = (m, Tx, len(human_vocab))`
# - `Yoh`: one-hot version of `Y`.
#     - Each index in `Y` is converted to its one-hot representation.
#     - `Yoh.shape = (m, Ty, len(machine_vocab))`.
#     - `len(machine_vocab) = 11` since there are 10 numeric digits (0 to 9) and the `-` symbol.
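# The index-to-one-hot conversion performed by `preprocess_data` can be reproduced with `to_categorical` (already imported above). A minimal sketch; `example_indices` and `example_onehot` are illustrative names, not part of the assignment:
#
# ```Python
# # Take one padded, index-encoded date of length Tx = 30.
# example_indices = X[0]                                                          # shape (Tx,)
# # Expand each index into a one-hot vector over the human vocabulary.
# example_onehot = to_categorical(example_indices, num_classes=len(human_vocab))  # shape (Tx, len(human_vocab))
# ```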
# * Let's also look at some examples of preprocessed training examples.
# * Feel free to play with `index` in the cell below to navigate the dataset and see how source/target dates are preprocessed.

# In[ ]:


index = 0
print("Source date:", dataset[index][0])
print("Target date:", dataset[index][1])
print()
print("Source after preprocessing (indices):", X[index])
print("Target after preprocessing (indices):", Y[index])
print()
print("Source after preprocessing (one-hot):", Xoh[index])
print("Target after preprocessing (one-hot):", Yoh[index])

# <a name='2'></a>
# ## 2 - Neural Machine Translation with Attention
#
# * If you had to translate a book's paragraph from French to English, you would not read the whole paragraph, then close the book and translate.
# * Even during the translation process, you would read/re-read and focus on the parts of the French paragraph corresponding to the parts of the English you are writing down.
# * The attention mechanism tells a Neural Machine Translation model where it should pay attention at each step.
#
# <a name='2-1'></a>
# ### 2.1 - Attention Mechanism
#
# In this part, you will implement the attention mechanism presented in the lecture videos.
# * Here is a figure to remind you how the model works.
# * The diagram on the left shows the attention model.
# * The diagram on the right shows what one "attention" step does to calculate the attention variables $\alpha^{\langle t, t' \rangle}$.
# * The attention variables $\alpha^{\langle t, t' \rangle}$ are used to compute the context variable $context^{\langle t \rangle}$ for each timestep in the output ($t=1, \ldots, T_y$).
#
# <table>
# <td>
# <img src="images/attn_model.png" style="width:500px;height:500px;"> <br>
# </td>
# <td>
# <img src="images/attn_mechanism.png" style="width:500px;height:500px;"> <br>
# </td>
# </table>
# <caption><center> **Figure 1**: Neural machine translation with attention</center></caption>
#
# Here are some properties of the model that you may notice:
#
# #### Pre-attention and Post-attention LSTMs on both sides of the attention mechanism
# - There are two separate LSTMs in this model (see diagram on the left): a pre-attention LSTM and a post-attention LSTM.
# - The *pre-attention* Bi-LSTM, at the bottom of the picture, is a bidirectional LSTM and comes *before* the attention mechanism.
#     - The attention mechanism is shown in the middle of the left-hand diagram.
#     - The pre-attention Bi-LSTM goes through $T_x$ time steps.
# - The *post-attention* LSTM, at the top of the diagram, comes *after* the attention mechanism.
#     - The post-attention LSTM goes through $T_y$ time steps.
#
# - The post-attention LSTM passes the hidden state $s^{\langle t \rangle}$ and cell state $c^{\langle t \rangle}$ from one time step to the next.
# #### An LSTM has both a hidden state and cell state
# * In the lecture videos, we used only a basic RNN for the post-attention sequence model.
# * This means the state captured by that RNN consisted only of the hidden state $s^{\langle t\rangle}$.
# * In this assignment, we are using an LSTM instead of a basic RNN.
# * So the LSTM has both the hidden state $s^{\langle t\rangle}$ and the cell state $c^{\langle t\rangle}$.
# #### Each time step does not use predictions from the previous time step
# * Unlike previous text generation examples earlier in the course, in this model, the post-attention LSTM at time $t$ does not take the previous time step's prediction $y^{\langle t-1 \rangle}$ as input.
# * The post-attention LSTM at time $t$ only takes the hidden state $s^{\langle t\rangle}$ and cell state $c^{\langle t\rangle}$ as input.
# * We have designed the model this way because, unlike language generation (where adjacent characters are highly correlated), there isn't as strong a dependency between the previous character and the next character in a YYYY-MM-DD date.
# #### Concatenation of hidden states from the forward and backward pre-attention LSTMs
# - $\overrightarrow{a}^{\langle t \rangle}$: hidden state of the forward-direction, pre-attention LSTM.
# - $\overleftarrow{a}^{\langle t \rangle}$: hidden state of the backward-direction, pre-attention LSTM.
# - $a^{\langle t \rangle} = [\overrightarrow{a}^{\langle t \rangle}, \overleftarrow{a}^{\langle t \rangle}]$: the concatenation of the activations of both the forward-direction $\overrightarrow{a}^{\langle t \rangle}$ and backward-direction $\overleftarrow{a}^{\langle t \rangle}$ of the pre-attention Bi-LSTM.
# #### Computing "energies" $e^{\langle t, t' \rangle}$ as a function of $s^{\langle t-1 \rangle}$ and $a^{\langle t' \rangle}$
# - Recall in the lesson video "Attention Model", at time 6:45 to 8:16, the definition of "e" as a function of $s^{\langle t-1 \rangle}$ and $a^{\langle t' \rangle}$.
#     - "e" is called the "energies" variable.
#     - $s^{\langle t-1 \rangle}$ is the hidden state of the post-attention LSTM.
#     - $a^{\langle t' \rangle}$ is the hidden state of the pre-attention LSTM.
#     - $s^{\langle t-1 \rangle}$ and $a^{\langle t' \rangle}$ are fed into a simple neural network, which learns the function to output $e^{\langle t, t' \rangle}$.
#     - $e^{\langle t, t' \rangle}$ is then used when computing the attention $\alpha^{\langle t, t' \rangle}$ that $y^{\langle t \rangle}$ should pay to $a^{\langle t' \rangle}$.

# - The diagram on the right of figure 1 uses a `RepeatVector` node to copy $s^{\langle t-1 \rangle}$'s value $T_x$ times.
# - Then it uses `Concatenation` to concatenate $s^{\langle t-1 \rangle}$ and $a^{\langle t' \rangle}$.
# - The concatenation of $s^{\langle t-1 \rangle}$ and $a^{\langle t' \rangle}$ is fed into a "Dense" layer, which computes $e^{\langle t, t' \rangle}$.
# - $e^{\langle t, t' \rangle}$ is then passed through a softmax to compute $\alpha^{\langle t, t' \rangle}$.
# - Note that the diagram doesn't explicitly show the variable $e^{\langle t, t' \rangle}$, but it sits above the Dense layer and below the Softmax layer in the right half of figure 1.
# - We'll explain how to use `RepeatVector` and `Concatenation` in Keras below; a shape walkthrough follows this list.
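# As a quick shape walkthrough of the pipeline just described (a sketch, assuming the sizes used later in this notebook: Tx = 30, n_a = 32, n_s = 64):
#
# ```Python
# # s_prev:   (m, n_s)       --RepeatVector(Tx)-->      (m, Tx, n_s)
# # a:        (m, Tx, 2*n_a) --Concatenate(axis=-1)-->  concat (m, Tx, 2*n_a + n_s)
# # concat:   (m, Tx, 128)   --Dense(10, tanh)-->       e (m, Tx, 10)
# # e:        (m, Tx, 10)    --Dense(1, relu)-->        energies (m, Tx, 1)
# # energies: (m, Tx, 1)     --softmax over axis 1-->   alphas (m, Tx, 1)
# # alphas, a                --Dot(axes=1)-->           context (m, 1, 2*n_a)
# ```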
# #### Implementation Details
#
# Let's implement this neural translator. You will start by implementing two functions: `one_step_attention()` and `modelf()`.
#
# #### one_step_attention
# * The inputs to `one_step_attention` at time step $t$ are:
#     - $[a^{<1>},a^{<2>}, ..., a^{<T_x>}]$: all hidden states of the pre-attention Bi-LSTM.
#     - $s^{<t-1>}$: the previous hidden state of the post-attention LSTM.
# * `one_step_attention` computes:
#     - $[\alpha^{<t,1>},\alpha^{<t,2>}, ..., \alpha^{<t,T_x>}]$: the attention weights.
#     - $context^{ \langle t \rangle }$: the context vector (see the toy numeric check after this section):
#
# $$context^{<t>} = \sum_{t' = 1}^{T_x} \alpha^{<t,t'>}a^{<t'>}\tag{1}$$
#
# ##### Clarifying 'context' and 'c'
# - In the lecture videos, the context was denoted $c^{\langle t \rangle}$.
# - In the assignment, we are calling the context $context^{\langle t \rangle}$.
#     - This is to avoid confusion with the post-attention LSTM's internal memory cell variable, which is also denoted $c^{\langle t \rangle}$.
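# As a concrete numeric check of equation (1), here is a toy example with $T_x = 3$ and hidden states of size 2 (the arrays are made up for illustration):
#
# ```Python
# import numpy as np
#
# alphas = np.array([[0.1], [0.7], [0.2]])      # attention weights, shape (Tx, 1); they sum to 1
# a = np.array([[1., 0.], [0., 1.], [1., 1.]])  # hidden states, shape (Tx, 2)
# context = (alphas * a).sum(axis=0)            # weighted sum over t' -> array([0.3, 0.9])
# ```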
# <a name='ex-1'></a>
# ### Exercise 1 - one_step_attention
#
# Implement `one_step_attention()`.
#
# * The function `modelf()` will call the layers in `one_step_attention()` $T_y$ times using a for-loop.
# * It is important that all $T_y$ copies have the same weights.
#     * It should not reinitialize the weights every time.
#     * In other words, all $T_y$ steps should have shared weights.
# * Here's how you can implement layers with shareable weights in Keras:
#     1. Define the layer objects in a variable scope that is outside of the `one_step_attention` function. For example, defining the objects as global variables would work.
#         - Note that defining these variables inside the scope of the function `modelf` would technically work, since `modelf` will then call the `one_step_attention` function. For the purposes of making grading and troubleshooting easier, we are defining these as global variables. Note that the automatic grader will expect these to be global variables as well.
#     2. Call these objects when propagating the input.
# * We have defined the layers you need as global variables.
#     * Please run the following cells to create them.
#     * Please note that the automatic grader expects these global variables with the given variable names. For grading purposes, please do not rename the global variables.
#     * Please check the Keras documentation to learn more about these layers. The layers are functions. Below are examples of how to call them.
# * [RepeatVector()](https://www.tensorflow.org/api_docs/python/tf/keras/layers/RepeatVector)
# ```Python
# var_repeated = repeat_layer(var1)
# ```
# * [Concatenate()](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Concatenate)
# ```Python
# concatenated_vars = concatenate_layer([var1,var2,var3])
# ```
# * [Dense()](https://keras.io/layers/core/#dense)
# ```Python
# var_out = dense_layer(var_in)
# ```
# * [Activation()](https://keras.io/layers/core/#activation)
# ```Python
# activation = activation_layer(var_in)
# ```
# * [Dot()](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dot)
# ```Python
# dot_product = dot_layer([var1,var2])
# ```
# In[ ]:


# Defined shared layers as global variables
repeator = RepeatVector(Tx)
concatenator = Concatenate(axis=-1)
densor1 = Dense(10, activation = "tanh")
densor2 = Dense(1, activation = "relu")
activator = Activation(softmax, name='attention_weights') # We are using a custom softmax(axis = 1) loaded in this notebook
dotor = Dot(axes = 1)

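# The custom `softmax` passed to `activator` is loaded from `nmt_utils` (imported above with `*`) and normalizes over axis 1 (the Tx time axis) rather than the last axis. A sketch of what such a softmax can look like; the name `softmax_over_time` is illustrative:
#
# ```Python
# import tensorflow.keras.backend as K
#
# def softmax_over_time(x, axis=1):
#     """Softmax normalized over the time axis, so the Tx attention weights sum to 1."""
#     e = K.exp(x - K.max(x, axis=axis, keepdims=True))  # subtract the max for numerical stability
#     return e / K.sum(e, axis=axis, keepdims=True)
# ```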
# In[ ]:


# UNQ_C1 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: one_step_attention

def one_step_attention(a, s_prev):
    """
    Performs one step of attention: Outputs a context vector computed as a dot product of the attention weights
    "alphas" and the hidden states "a" of the Bi-LSTM.

    Arguments:
    a -- hidden state output of the Bi-LSTM, numpy-array of shape (m, Tx, 2*n_a)
    s_prev -- previous hidden state of the (post-attention) LSTM, numpy-array of shape (m, n_s)

    Returns:
    context -- context vector, input of the next (post-attention) LSTM cell
    """

    ### START CODE HERE ###
    # Use repeator to repeat s_prev to be of shape (m, Tx, n_s) so that you can concatenate it with all hidden states "a" (≈ 1 line)
    s_prev = repeator(s_prev)
    # Use concatenator to concatenate a and s_prev on the last axis (≈ 1 line)
    # For grading purposes, please list 'a' first and 's_prev' second, in this order.
    concat = concatenator([a, s_prev])
    # Use densor1 to propagate concat through a small fully-connected neural network to compute the "intermediate energies" variable e. (≈ 1 line)
    e = densor1(concat)
    # Use densor2 to propagate e through a small fully-connected neural network to compute the "energies" variable energies. (≈ 1 line)
    energies = densor2(e)
    # Use "activator" on "energies" to compute the attention weights "alphas" (≈ 1 line)
    alphas = activator(energies)
    # Use dotor together with "alphas" and "a", in this order, to compute the context vector to be given to the next (post-attention) LSTM-cell (≈ 1 line)
    context = dotor([alphas, a])
    ### END CODE HERE ###

    return context

# In[ ]:


# UNIT TEST
def one_step_attention_test(target):

    m = 10
    Tx = 30
    n_a = 32
    n_s = 64
    #np.random.seed(10)
    a = np.random.uniform(1, 0, (m, Tx, 2 * n_a)).astype(np.float32)
    s_prev = np.random.uniform(1, 0, (m, n_s)).astype(np.float32)
    context = target(a, s_prev)

    assert type(context) == tf.python.framework.ops.EagerTensor, "Unexpected type. It should be a Tensor"
    assert tuple(context.shape) == (m, 1, n_s), "Unexpected output shape"
    assert np.all(context.numpy() > 0), "All output values must be > 0 in this example"
    assert np.all(context.numpy() < 1), "All output values must be < 1 in this example"

    #assert np.allclose(context[0][0][0:5].numpy(), [0.50877404, 0.57160693, 0.45448175, 0.50074816, 0.53651875]), "Unexpected values in the result"
    print("\033[92mAll tests passed!")

one_step_attention_test(one_step_attention)

# <a name='ex-2'></a>
# ### Exercise 2 - modelf
#
# Implement `modelf()` as explained in figure 1 and the instructions:
#
# * `modelf` first runs the input through a Bi-LSTM to get $[a^{<1>},a^{<2>}, ..., a^{<T_x>}]$.
# * Then, `modelf` calls `one_step_attention()` $T_y$ times using a `for` loop. At each iteration of this loop:
#     - It gives the computed context vector $context^{<t>}$ to the post-attention LSTM.
#     - It runs the output of the post-attention LSTM through a dense layer with softmax activation.
#     - The softmax generates a prediction $\hat{y}^{<t>}$.
#
# Again, we have defined global layers that will share weights to be used in `modelf()`.
# In[ ]:


n_a = 32 # number of units for the pre-attention, bi-directional LSTM's hidden state 'a'
n_s = 64 # number of units for the post-attention LSTM's hidden state "s"

# Please note, this is the post attention LSTM cell.
post_activation_LSTM_cell = LSTM(n_s, return_state = True) # Please do not modify this global variable.
output_layer = Dense(len(machine_vocab), activation=softmax)

# Now you can use these layers $T_y$ times in a `for` loop to generate the outputs, and their parameters will not be reinitialized. You will have to carry out the following steps:
#
# 1. Propagate the input `X` into a bi-directional LSTM.
#     * [Bidirectional](https://keras.io/layers/wrappers/#bidirectional)
#     * [LSTM](https://keras.io/layers/recurrent/#lstm)
#     * Remember that we want the LSTM to return a full sequence instead of just the last hidden state.
#
# Sample code:
#
# ```Python
# sequence_of_hidden_states = Bidirectional(LSTM(units=..., return_sequences=...))(the_input_X)
# ```
#
# 2. Iterate for $t = 0, \cdots, T_y-1$:
#     1. Call `one_step_attention()`, passing in the sequence of hidden states $[a^{\langle 1 \rangle},a^{\langle 2 \rangle}, ..., a^{ \langle T_x \rangle}]$ from the pre-attention bi-directional LSTM, and the previous hidden state $s^{<t-1>}$ from the post-attention LSTM, to calculate the context vector $context^{<t>}$.
#     2. Give $context^{<t>}$ to the post-attention LSTM cell.
#         - Remember to pass in the previous hidden state $s^{\langle t-1\rangle}$ and cell state $c^{\langle t-1\rangle}$ of this LSTM using `initial_state`.
#         - This outputs the new hidden state $s^{<t>}$ and the new cell state $c^{<t>}$.
#
# Sample code:
# ```Python
# next_hidden_state, _, next_cell_state = post_activation_LSTM_cell(inputs=..., initial_state=[prev_hidden_state, prev_cell_state])
# ```
# Please note that the layer is actually the "post attention LSTM cell". For the purposes of passing the automatic grader, please do not modify the naming of this global variable. This will be fixed when we deploy updates to the automatic grader.
#     3. Apply a dense, softmax layer to $s^{<t>}$ to get the output.
# Sample code:
# ```Python
# output = output_layer(inputs=...)
# ```
#     4. Save the output by adding it to the list of outputs.
#
# 3. Create your Keras model instance.
#     * It should have three inputs:
#         * `X`, the one-hot encoded inputs to the model, of shape $(T_x, \text{humanVocabSize})$
#         * $s^{\langle 0 \rangle}$, the initial hidden state of the post-attention LSTM
#         * $c^{\langle 0 \rangle}$, the initial cell state of the post-attention LSTM
#     * The output is the list of outputs.
# Sample code:
# ```Python
# model = Model(inputs=[...,...,...], outputs=...)
# ```
# In[ ]:


# UNQ_C2 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: model

def modelf(Tx, Ty, n_a, n_s, human_vocab_size, machine_vocab_size):
    """
    Arguments:
    Tx -- length of the input sequence
    Ty -- length of the output sequence
    n_a -- hidden state size of the Bi-LSTM
    n_s -- hidden state size of the post-attention LSTM
    human_vocab_size -- size of the python dictionary "human_vocab"
    machine_vocab_size -- size of the python dictionary "machine_vocab"

    Returns:
    model -- Keras model instance
    """

    # Define the inputs of your model with a shape (Tx, human_vocab_size)
    # Define s0 (initial hidden state) and c0 (initial cell state)
    # for the decoder LSTM with shape (n_s,)
    X = Input(shape=(Tx, human_vocab_size))
    s0 = Input(shape=(n_s,), name='s0')
    c0 = Input(shape=(n_s,), name='c0')
    s = s0
    c = c0

    # Initialize empty list of outputs
    outputs = []

    ### START CODE HERE ###

    # Step 1: Define your pre-attention Bi-LSTM. (≈ 1 line)
    a = Bidirectional(LSTM(n_a, return_sequences=True))(X)

    # Step 2: Iterate for Ty steps
    for t in range(Ty):

        # Step 2.A: Perform one step of the attention mechanism to get back the context vector at step t (≈ 1 line)
        context = one_step_attention(a, s)

        # Step 2.B: Apply the post-attention LSTM cell to the "context" vector.
        # Don't forget to pass: initial_state = [hidden state, cell state] (≈ 1 line)
        s, _, c = post_activation_LSTM_cell(context, initial_state=[s, c])

        # Step 2.C: Apply Dense layer to the hidden state output of the post-attention LSTM (≈ 1 line)
        out = output_layer(s)

        # Step 2.D: Append "out" to the "outputs" list (≈ 1 line)
        outputs.append(out)

    # Step 3: Create model instance taking three inputs and returning the list of outputs. (≈ 1 line)
    model = Model(inputs=[X, s0, c0], outputs=outputs)

    ### END CODE HERE ###

    return model

# In[ ]:


# UNIT TEST
from test_utils import *

def modelf_test(target):
    m = 10
    Tx = 30
    n_a = 32
    n_s = 64
    len_human_vocab = 37
    len_machine_vocab = 11

    model = target(Tx, Ty, n_a, n_s, len_human_vocab, len_machine_vocab)

    print(summary(model))

    expected_summary = [['InputLayer', [(None, 30, 37)], 0],
                        ['InputLayer', [(None, 64)], 0],
                        ['Bidirectional', (None, 30, 64), 17920],
                        ['RepeatVector', (None, 30, 64), 0, 30],
                        ['Concatenate', (None, 30, 128), 0],
                        ['Dense', (None, 30, 10), 1290, 'tanh'],
                        ['Dense', (None, 30, 1), 11, 'relu'],
                        ['Activation', (None, 30, 1), 0],
                        ['Dot', (None, 1, 64), 0],
                        ['InputLayer', [(None, 64)], 0],
                        ['LSTM', [(None, 64), (None, 64), (None, 64)], 33024, [(None, 1, 64), (None, 64), (None, 64)], 'tanh'],
                        ['Dense', (None, 11), 715, 'softmax']]

    assert len(model.outputs) == 10, f"Wrong output shape. Expected 10 != {len(model.outputs)}"

    comparator(summary(model), expected_summary)


modelf_test(modelf)

# Run the following cell to create your model.

# In[ ]:


model = modelf(Tx, Ty, n_a, n_s, len(human_vocab), len(machine_vocab))


# #### Troubleshooting Note
# * If you are getting repeated errors after an initially incorrect implementation of "model", but believe that you have corrected the error, you may still see error messages when building your model.
# * A solution is to save your work, restart the kernel (or shut down and then restart your notebook), and re-run the cells.
# Let's get a summary of the model to check if it matches the expected output.

# In[ ]:


model.summary()

# **Expected Output**:
#
# Here is the summary you should see:
#
# <table>
# <tr><td>**Total params:**</td><td>52,960</td></tr>
# <tr><td>**Trainable params:**</td><td>52,960</td></tr>
# <tr><td>**Non-trainable params:**</td><td>0</td></tr>
# <tr><td>**bidirectional_1's output shape**</td><td>(None, 30, 64)</td></tr>
# <tr><td>**repeat_vector_1's output shape**</td><td>(None, 30, 64)</td></tr>
# <tr><td>**concatenate_1's output shape**</td><td>(None, 30, 128)</td></tr>
# <tr><td>**attention_weights's output shape**</td><td>(None, 30, 1)</td></tr>
# <tr><td>**dot_1's output shape**</td><td>(None, 1, 64)</td></tr>
# <tr><td>**dense_3's output shape**</td><td>(None, 11)</td></tr>
# </table>
#
# <a name='ex-3'></a>
# ### Exercise 3 - Compile the Model
#
# * After creating your model in Keras, you need to compile it and define the loss function, optimizer, and metrics you want to use.
#     * Loss function: 'categorical_crossentropy'.
#     * Optimizer: [Adam](https://keras.io/optimizers/#adam) [optimizer](https://keras.io/optimizers/#usage-of-optimizers)
#         - learning rate = 0.005
#         - $\beta_1 = 0.9$
#         - $\beta_2 = 0.999$
#         - decay = 0.01
#     * Metric: 'accuracy'
#
# Sample code:
# ```Python
# optimizer = Adam(lr=..., beta_1=..., beta_2=..., decay=...)
# model.compile(optimizer=..., loss=..., metrics=[...])
# ```
# In[ ]:


### START CODE HERE ### (≈2 lines)
opt = Adam(lr=0.005, beta_1=0.9, beta_2=0.999, decay=0.01)
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
### END CODE HERE ###

# In[ ]:


# UNIT TESTS
assert opt.lr == 0.005, "Set the lr parameter to 0.005"
assert opt.beta_1 == 0.9, "Set the beta_1 parameter to 0.9"
assert opt.beta_2 == 0.999, "Set the beta_2 parameter to 0.999"
assert opt.decay == 0.01, "Set the decay parameter to 0.01"
assert model.loss == "categorical_crossentropy", "Wrong loss. Use 'categorical_crossentropy'"
assert model.optimizer == opt, "Use the optimizer that you have instantiated"
assert model.compiled_metrics._user_metrics[0] == 'accuracy', "set metrics to ['accuracy']"

print("\033[92mAll tests passed!")

# #### Define inputs and outputs, and fit the model
# The last step is to define all your inputs and outputs to fit the model:
# - You have input `Xoh` of shape $(m = 10000, T_x = 30, human\_vocab = 37)$ containing the training examples.
# - You need to create `s0` and `c0` to initialize your `post_attention_LSTM_cell` with zeros.
# - Given the `modelf()` you coded, you need the "outputs" to be a list of 10 elements, one per output character, each of shape $(m, len(machine\_vocab))$.
#     - The list `outputs[0][i], ..., outputs[Ty-1][i]` represents the true labels (characters) corresponding to the $i^{th}$ training example (`Xoh[i]`).
#     - `outputs[t][i]` is the true label of the $t^{th}$ character in the $i^{th}$ training example.
#
# A toy check of the `swapaxes` trick used in the next cell follows.
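# This sketch shows why `swapaxes(0, 1)` produces the list-of-targets format the model expects; the array `Yoh_toy` is illustrative:
#
# ```Python
# Yoh_toy = np.zeros((5, 10, 11))          # (m, Ty, len(machine_vocab))
# targets = list(Yoh_toy.swapaxes(0, 1))   # a list of Ty arrays, each of shape (m, len(machine_vocab))
# print(len(targets), targets[0].shape)    # -> 10 (5, 11)
# ```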
# In[ ]:


s0 = np.zeros((m, n_s))             # initial hidden state for the post-attention LSTM
c0 = np.zeros((m, n_s))             # initial cell state for the post-attention LSTM
outputs = list(Yoh.swapaxes(0, 1))  # Ty arrays, each of shape (m, len(machine_vocab))


# Let's now fit the model and run it for one epoch.

# In[ ]:


model.fit([Xoh, s0, c0], outputs, epochs=1, batch_size=100)

# While training you can see the loss as well as the accuracy on each of the 10 positions of the output. The table below gives you an example of what the accuracies could be if the batch had 2 examples:
#
# <img src="images/table.png" style="width:700px;height:200px;"> <br>
# <caption><center>Thus, `dense_2_acc_8: 0.89` means that you are predicting the 7th character of the output correctly 89% of the time in the current batch of data. </center></caption>
#
#
# We have run this model for longer, and saved the weights. Run the next cell to load our weights. (By training a model for several minutes, you should be able to obtain a model of similar accuracy, but loading our model will save you time.)
# In[ ]:


model.load_weights('models/model.h5')


# You can now see the results on new examples.

# In[ ]:

EXAMPLES = ['3 May 1979', '5 April 09', '21th of August 2016', 'Tue 10 Jul 2007', 'Saturday May 9 2018', 'March 3 2001', 'March 3rd 2001', '1 March 2001']
s00 = np.zeros((1, n_s))
c00 = np.zeros((1, n_s))
for example in EXAMPLES:
    source = string_to_int(example, Tx, human_vocab)
    #print(source)
    # One-hot encode the index sequence: shape (Tx, len(human_vocab))
    source = np.array(list(map(lambda x: to_categorical(x, num_classes=len(human_vocab)), source)))
    # Add a batch dimension: (1, Tx, len(human_vocab))
    source = np.expand_dims(source, axis=0)
    prediction = model.predict([source, s00, c00])
    prediction = np.argmax(prediction, axis = -1)
    output = [inv_machine_vocab[int(i)] for i in prediction]
    print("source:", example)
    print("output:", ''.join(output), "\n")

# You can also change these examples to test with your own examples. The next part will give you a better sense of what the attention mechanism is doing--i.e., what part of the input the network is paying attention to when generating a particular output character.

# <a name='3'></a>
# ## 3 - Visualizing Attention (Optional / Ungraded)
#
# Since the problem has a fixed output length of 10, it is also possible to carry out this task using 10 different softmax units to generate the 10 characters of the output. But one advantage of the attention model is that each part of the output (such as the month) knows it needs to depend only on a small part of the input (the characters in the input giving the month). We can visualize which part of the input each part of the output is looking at.
#
# Consider the task of translating "Saturday 9 May 2018" to "2018-05-09". If we visualize the computed $\alpha^{\langle t, t' \rangle}$ we get this:
#
# <img src="images/date_attention.png" style="width:600px;height:300px;"> <br>
# <caption><center> **Figure 2**: Full Attention Map</center></caption>
#
# Notice how the output ignores the "Saturday" portion of the input: none of the output timesteps pay much attention to that portion. We also see that 9 has been translated as 09 and May has been correctly translated into 05, with the output paying attention to the parts of the input it needs in order to make the translation. The year mostly requires the model to pay attention to the input's "18" in order to generate "2018."
# <a name='3-1'></a>
# ### 3.1 - Getting the Attention Weights From the Network
#
# Let's now visualize the attention values in your network. We'll propagate an example through the network, then visualize the values of $\alpha^{\langle t, t' \rangle}$.
#
# To figure out where the attention values are located, let's start by printing a summary of the model.

# In[ ]:


model.summary()

# Navigate through the output of `model.summary()` above. You can see that the layer named `attention_weights` outputs the `alphas` of shape (m, 30, 1) before `dot_2` computes the context vector for every time step $t = 0, \ldots, T_y-1$. Let's get the attention weights from this layer.
#
# The function `attention_map()` pulls out the attention values from your model and plots them.
#
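# If you would like to pull the attention weights out of the model yourself, one possible approach is to build a sub-model that exposes each of the $T_y$ outputs of the shared `attention_weights` layer (a sketch; `plot_attention_map` in `nmt_utils` handles this for you):
#
# ```Python
# # The 'attention_weights' Activation layer is called Ty times, so it has Ty output nodes.
# layer = model.get_layer('attention_weights')
# attn_model = Model(inputs=model.inputs,
#                    outputs=[layer.get_output_at(t) for t in range(Ty)])
# # attn_model.predict([source, s00, c00]) then returns Ty arrays of shape (1, Tx, 1).
# ```
#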
# **Note**: We are aware that you might run into an error running the cell below despite a valid implementation for Exercise 2 - `modelf` above. If you get the error, kindly report it on this [Topic](https://discourse.deeplearning.ai/t/error-in-optional-ungraded-part-of-neural-machine-translation-w3a1/1096) on [Discourse](https://discourse.deeplearning.ai) as it'll help us improve our content.
#
# If you haven't joined our Discourse community you can do so by clicking on the link: http://bit.ly/dls-discourse
#
# And don't worry about the error, it will not affect the grading for this assignment.
# In[ ]:


attention_map = plot_attention_map(model, human_vocab, inv_machine_vocab, "Tuesday 09 Oct 1993", num = 7, n_s = 64);

# On the generated plot you can observe the values of the attention weights for each character of the predicted output. Examine this plot and check that the places where the network is paying attention make sense to you.
#
# In the date translation application, you will observe that most of the time attention helps predict the year, and doesn't have much impact on predicting the day or month.

# ### Congratulations!
#
#
# You have come to the end of this assignment!
#
# #### Here's what you should remember
#
# - Machine translation models can be used to map from one sequence to another. They are useful not just for translating human languages (like French->English) but also for tasks like date format translation.
# - An attention mechanism allows a network to focus on the most relevant parts of the input when producing a specific part of the output.
# - A network using an attention mechanism can translate from inputs of length $T_x$ to outputs of length $T_y$, where $T_x$ and $T_y$ can be different.
# - You can visualize attention weights $\alpha^{\langle t,t' \rangle}$ to see what the network is paying attention to while generating each output.

# Congratulations on finishing this assignment! You are now able to implement an attention model and use it to learn complex mappings from one sequence to another.