The Three Ways of Attention and Dot Product Attention: Ungraded Lab Notebook
In this notebook you'll explore the three ways of attention (encoder-decoder attention, causal attention, and bi-directional self-attention) and how to implement the latter two with dot product attention.
Background
As you learned last week, attention models constitute powerful tools in the NLP practitioner's toolkit. Like LSTMs, they learn which words are most important to phrases, sentences, paragraphs, and so on. Moreover, they mitigate the vanishing gradient problem even better than LSTMs. You've already seen how to combine attention with LSTMs to build encoder-decoder models for applications such as machine translation.
This week, you'll see how to integrate attention into transformers. Because transformers are not sequence models, they are much easier to parallelize and accelerate. Beyond machine translation, applications of transformers include:
Auto-completion
Named Entity Recognition
Chatbots
Question-Answering
And more!
Along with embedding, positional encoding, dense layers, and residual connections, attention is a crucial component of transformers. At the heart of any attention scheme used in a transformer is dot product attention, of which the figures below display a simplified picture:
With basic dot product attention, you capture the interactions between every word (embedding) in your query and every word in your key. If the queries and keys belong to the same sentences, this constitutes bi-directional self-attention. In some situations, however, it's more appropriate to consider only words which have come before the current one. Such cases, particularly when the queries and keys come from the same sentences, fall into the category of causal attention.
For causal attention, we add a mask to the argument of our softmax function, as illustrated below:
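To make the idea concrete, here is what such an additive causal mask could look like for a length-3 sequence (a hypothetical example, not the notebook's own figure): entries above the diagonal hold a large negative number, so the softmax assigns those positions near-zero weight.

```python
import numpy as np

# Additive causal mask for a length-3 sequence: entry (i, j) is 0 if
# position i may attend to position j (j <= i) and -1e9 otherwise.
mask = np.array([[0.0, -1e9, -1e9],
                 [0.0,  0.0, -1e9],
                 [0.0,  0.0,  0.0]])
```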
Now let's see how to implement attention with NumPy. When you integrate attention into a transformer network defined with Trax, you'll have to use trax.fastmath.numpy instead, since Trax's arrays are based on JAX DeviceArrays. Fortunately, the function interfaces are often identical.
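For example, the backend swap inside a Trax model could look like this (a sketch, assuming a recent Trax installation; the alias jnp is just a convention):

```python
from trax import fastmath

# trax.fastmath.numpy mirrors the NumPy API but operates on JAX arrays.
jnp = fastmath.numpy  # use jnp.dot, jnp.exp, ... in place of np.*
```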
Imports
Here are some helper functions that will help you create tensors and display useful information:
create_tensor() creates a numpy array from a list of lists.
display_tensor() prints out the shape and the actual tensor.
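A minimal sketch of these helpers in plain NumPy; the exact signatures here (in particular the name argument of display_tensor()) are assumptions based on the descriptions above:

```python
import numpy as np

def create_tensor(t):
    """Create a numpy array (tensor) from a list of lists."""
    return np.array(t)

def display_tensor(t, name):
    """Print the shape and the contents of a tensor."""
    print(f'{name} shape: {t.shape}\n')
    print(f'{t}\n')
```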
Create some tensors and display their shapes. Feel free to experiment with your own tensors. Keep in mind, though, that the query, key, and value arrays must all have the same embedding dimensions (number of columns), and the mask array must have the same shape as np.dot(query, key.T).
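For example (the specific numbers below are illustrative, not taken from the notebook):

```python
q = create_tensor([[1, 0, 0], [0, 1, 0]])
display_tensor(q, 'query')

k = create_tensor([[1, 2, 3], [4, 5, 6]])
display_tensor(k, 'key')

v = create_tensor([[0, 1, 0], [1, 0, 1]])
display_tensor(v, 'value')

# The mask must have the same shape as np.dot(q, k.T), here (2, 2);
# the -1e9 entry blocks query position 0 from attending to key position 1.
m = create_tensor([[0, -1e9], [0, 0]])
display_tensor(m, 'mask')
```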
Dot product attention
Here we come to the crux of this lab, in which we compute $\textrm{softmax}\left(\frac{Q K^T}{\sqrt{d}} + M\right) V$, where $M$ is the (optional) mask and the (optional, but default) scaling factor $\sqrt{d}$ is the square root of the embedding dimension.
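A sketch of this computation in plain NumPy, usable with the tensors above; the function name, argument order, and additive masking convention are assumptions, so the notebook's own implementation may differ in detail:

```python
import numpy as np

def dot_product_attention(query, key, value, mask=None, scale=True):
    """Dot product attention: softmax(Q K^T / sqrt(d) + M) V.

    query, key, value: arrays of shape (seq_len, d) with matching d.
    mask: optional array of shape (len_q, len_k), added to the softmax
        argument; large negative entries block positions.
    scale: if True, divide the scores by sqrt(d).
    """
    assert query.shape[-1] == key.shape[-1] == value.shape[-1], (
        "query, key, and value must share the embedding dimension")

    # Scaling factor: the square root of the embedding dimension.
    depth = np.sqrt(query.shape[-1]) if scale else 1.0

    # Scores between every query position and every key position.
    dots = np.matmul(query, np.swapaxes(key, -1, -2)) / depth

    # Add the (optional) mask to the softmax argument.
    if mask is not None:
        dots = dots + mask

    # Numerically stable softmax over the key axis.
    dots = np.exp(dots - np.max(dots, axis=-1, keepdims=True))
    dots = dots / np.sum(dots, axis=-1, keepdims=True)

    # Attention-weighted combination of the values.
    return np.matmul(dots, value)
```

With the illustrative tensors above, dot_product_attention(q, k, v, m) returns a (2, 3) array of attention-weighted values.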
Now let's implement masked dot product self-attention (at the heart of causal attention) as a special case of dot product attention.
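One way to express this as a special case of the dot_product_attention() sketch above is to build a lower-triangular additive mask and delegate (again, the name and signature are assumptions):

```python
def causal_dot_product_attention(query, key, value, scale=True):
    """Masked (causal) self-attention: position i attends only to positions 0..i."""
    seq_len = query.shape[-2]

    # 0 where attention is allowed, -1e9 where the key position lies in
    # the future of the query position.
    allowed = np.tril(np.ones((seq_len, seq_len), dtype=bool))
    mask = np.where(allowed, 0.0, -1e9)

    return dot_product_attention(query, key, value, mask, scale=scale)
```

With the illustrative q, k, and v above, the first output row equals the first row of value, since position 0 can only attend to itself.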