English-to-Spanish translation with KerasHub
Author: Abheesht Sharma
Date created: 2022/05/26
Last modified: 2024/04/30
Description: Use KerasHub to train a sequence-to-sequence Transformer model on the machine translation task.
Introduction
KerasHub provides building blocks for NLP (model layers, tokenizers, metrics, etc.) and makes it convenient to construct NLP pipelines.
In this example, we'll use KerasHub layers to build an encoder-decoder Transformer model, and train it on the English-to-Spanish machine translation task.
This example is based on the English-to-Spanish NMT example by fchollet. The original example is more low-level and implements layers from scratch, whereas this example uses KerasHub to show some more advanced approaches, such as subword tokenization and using metrics to compute the quality of generated translations.
You'll learn how to:
- Tokenize text using keras_hub.tokenizers.WordPieceTokenizer.
- Implement a sequence-to-sequence Transformer model using KerasHub's keras_hub.layers.TransformerEncoder, keras_hub.layers.TransformerDecoder and keras_hub.layers.TokenAndPositionEmbedding layers, and train it.
- Use keras_hub.samplers to generate translations of unseen input sentences using the top-p decoding strategy!
Don't worry if you aren't familiar with KerasHub. This tutorial will start with the basics. Let's dive right in!
Setup
Before we start implementing the pipeline, let's import all the libraries we need.
Let's also define our parameters/hyperparameters.
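As a rough sketch, the setup could look like this. The batch size of 64 and sequence length of 40 match the shapes discussed later in this example; the vocabulary sizes, embedding dimension, intermediate dimension, and number of heads are illustrative assumptions.

```python
import pathlib
import random

import keras
import keras_hub
import tensorflow as tf  # used for the tf.data input pipelines

BATCH_SIZE = 64
EPOCHS = 1  # should be at least 10 for the model to converge
MAX_SEQUENCE_LENGTH = 40

# Assumed values -- tune these to taste.
ENG_VOCAB_SIZE = 15000
SPA_VOCAB_SIZE = 15000
EMBED_DIM = 256
INTERMEDIATE_DIM = 2048
NUM_HEADS = 8
```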
Downloading the data
We'll be working with an English-to-Spanish translation dataset provided by Anki. Let's download it:
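A minimal sketch of the download step using keras.utils.get_file; the URL is the one used by the original fchollet example and should be treated as an assumption here, and depending on your Keras version the returned path may point at the archive rather than the extracted directory.

```python
zip_path = keras.utils.get_file(
    fname="spa-eng.zip",
    origin="http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip",
    extract=True,
)
# The archive extracts to a `spa-eng/spa.txt` file next to the downloaded zip.
text_file = pathlib.Path(zip_path).parent / "spa-eng" / "spa.txt"
```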
Parsing the data
Each line contains an English sentence and its corresponding Spanish sentence. The English sentence is the source sequence and the Spanish one is the target sequence. Before adding the text to a list, we convert it to lowercase.
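A sketch of the parsing step, assuming the tab-separated spa.txt file located above:

```python
with open(text_file, encoding="utf-8") as f:
    lines = f.read().split("\n")[:-1]

text_pairs = []
for line in lines:
    eng, spa = line.split("\t")
    # Lowercase both sides before adding the pair to the list.
    text_pairs.append((eng.lower(), spa.lower()))
```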
Here's what our sentence pairs look like:
Now, let's split the sentence pairs into a training set, a validation set, and a test set.
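One way to do the split (the 70/15/15 ratio is an assumption; any similar ratio works):

```python
random.shuffle(text_pairs)
num_val_samples = int(0.15 * len(text_pairs))
num_train_samples = len(text_pairs) - 2 * num_val_samples

train_pairs = text_pairs[:num_train_samples]
val_pairs = text_pairs[num_train_samples : num_train_samples + num_val_samples]
test_pairs = text_pairs[num_train_samples + num_val_samples :]

print(f"{len(text_pairs)} total pairs")
print(f"{len(train_pairs)} training pairs")
print(f"{len(val_pairs)} validation pairs")
print(f"{len(test_pairs)} test pairs")
```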
Tokenizing the data
We'll define two tokenizers - one for the source language (English), and the other for the target language (Spanish). We'll be using keras_hub.tokenizers.WordPieceTokenizer to tokenize the text. keras_hub.tokenizers.WordPieceTokenizer takes a WordPiece vocabulary and has functions for tokenizing the text, and detokenizing sequences of tokens.

Before we define the two tokenizers, we first need to train them on the dataset we have. The WordPiece tokenization algorithm is a subword tokenization algorithm; training it on a corpus gives us a vocabulary of subwords. A subword tokenizer is a compromise between word tokenizers (word tokenizers need very large vocabularies for good coverage of input words), and character tokenizers (characters don't really encode meaning like words do). Luckily, KerasHub makes it very simple to train WordPiece on a corpus with the keras_hub.tokenizers.compute_word_piece_vocabulary utility.
Every vocabulary has a few special, reserved tokens. We have four such tokens:
- "[PAD]" - Padding token. Padding tokens are appended to the input sequence when it is shorter than the maximum sequence length.
- "[UNK]" - Unknown token.
- "[START]" - Token that marks the start of the input sequence.
- "[END]" - Token that marks the end of the input sequence.
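Here is a sketch of training the two WordPiece vocabularies with these reserved tokens. The train_word_piece helper and its batching details are assumptions; keras_hub.tokenizers.compute_word_piece_vocabulary is the utility mentioned above.

```python
def train_word_piece(text_samples, vocab_size, reserved_tokens):
    # Stream the raw text through tf.data and learn a WordPiece vocabulary from it.
    word_piece_ds = tf.data.Dataset.from_tensor_slices(text_samples)
    vocab = keras_hub.tokenizers.compute_word_piece_vocabulary(
        word_piece_ds.batch(1000).prefetch(2),
        vocabulary_size=vocab_size,
        reserved_tokens=reserved_tokens,
    )
    return vocab


reserved_tokens = ["[PAD]", "[UNK]", "[START]", "[END]"]

eng_samples = [pair[0] for pair in train_pairs]
eng_vocab = train_word_piece(eng_samples, ENG_VOCAB_SIZE, reserved_tokens)

spa_samples = [pair[1] for pair in train_pairs]
spa_vocab = train_word_piece(spa_samples, SPA_VOCAB_SIZE, reserved_tokens)
```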
Let's see some tokens!
Now, let's define the tokenizers. We will configure the tokenizers with the vocabularies trained above.
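A sketch of the tokenizer definitions; lowercase=False because the text was already lowercased during parsing:

```python
eng_tokenizer = keras_hub.tokenizers.WordPieceTokenizer(
    vocabulary=eng_vocab, lowercase=False
)
spa_tokenizer = keras_hub.tokenizers.WordPieceTokenizer(
    vocabulary=spa_vocab, lowercase=False
)
```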
Let's try and tokenize a sample from our dataset! To verify whether the text has been tokenized correctly, we can also detokenize the list of tokens back to the original text.
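For example, round-tripping one English sample could look like this (variable names follow the sketches above):

```python
eng_input_ex = text_pairs[0][0]
eng_tokens_ex = eng_tokenizer.tokenize(eng_input_ex)
print("English sentence: ", eng_input_ex)
print("Tokens: ", eng_tokens_ex)
print("Recovered text after detokenizing: ", eng_tokenizer.detokenize(eng_tokens_ex))
```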
Format datasets
Next, we'll format our datasets.
At each training step, the model will seek to predict target words N+1 (and beyond) using the source sentence and the target words 0 to N.
As such, the training dataset will yield a tuple (inputs, targets), where:
- inputs is a dictionary with the keys encoder_inputs and decoder_inputs. encoder_inputs is the tokenized source sentence and decoder_inputs is the target sentence "so far", that is to say, the words 0 to N used to predict word N+1 (and beyond) in the target sentence.
- target is the target sentence offset by one step: it provides the next words in the target sentence -- what the model will try to predict.
We will add special tokens, "[START]" and "[END]", to the input Spanish sentence after tokenizing the text. We will also pad the input to a fixed length. This can be easily done using keras_hub.layers.StartEndPacker.
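A sketch of this formatting step built around keras_hub.layers.StartEndPacker and tf.data; the helper names and pipeline details are assumptions, but the slicing follows the description above (the decoder input is the packed Spanish sequence without its last token, and the target is the same sequence shifted one step ahead):

```python
def preprocess_batch(eng, spa):
    eng = eng_tokenizer(eng)
    spa = spa_tokenizer(spa)

    # Pad the English sequence to MAX_SEQUENCE_LENGTH.
    eng_start_end_packer = keras_hub.layers.StartEndPacker(
        sequence_length=MAX_SEQUENCE_LENGTH,
        pad_value=eng_tokenizer.token_to_id("[PAD]"),
    )
    eng = eng_start_end_packer(eng)

    # Add "[START]" and "[END]" to the Spanish sequence, then pad it.
    spa_start_end_packer = keras_hub.layers.StartEndPacker(
        sequence_length=MAX_SEQUENCE_LENGTH + 1,
        start_value=spa_tokenizer.token_to_id("[START]"),
        end_value=spa_tokenizer.token_to_id("[END]"),
        pad_value=spa_tokenizer.token_to_id("[PAD]"),
    )
    spa = spa_start_end_packer(spa)

    return (
        {"encoder_inputs": eng, "decoder_inputs": spa[:, :-1]},
        spa[:, 1:],
    )


def make_dataset(pairs):
    eng_texts, spa_texts = zip(*pairs)
    dataset = tf.data.Dataset.from_tensor_slices((list(eng_texts), list(spa_texts)))
    dataset = dataset.batch(BATCH_SIZE)
    dataset = dataset.map(preprocess_batch, num_parallel_calls=tf.data.AUTOTUNE)
    return dataset.shuffle(2048).prefetch(16).cache()


train_ds = make_dataset(train_pairs)
val_ds = make_dataset(val_pairs)
```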
Let's take a quick look at the sequence shapes (we have batches of 64 pairs, and all sequences are 40 steps long):
Building the model
Now, let's move on to the exciting part - defining our model! We first need an embedding layer, i.e., a vector for every token in our input sequence. This embedding layer can be initialised randomly. We also need a positional embedding layer which encodes the word order in the sequence. The convention is to add these two embeddings. KerasHub has a keras_hub.layers.TokenAndPositionEmbedding layer which does all of the above steps for us.
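In isolation, the encoder-side embedding could be written as below (hyperparameter names from the assumed settings earlier); the same layer reappears in the full model sketch later in this section.

```python
# Sums a learned token embedding and a learned position embedding at each position.
encoder_inputs = keras.Input(shape=(None,), name="encoder_inputs")
x = keras_hub.layers.TokenAndPositionEmbedding(
    vocabulary_size=ENG_VOCAB_SIZE,
    sequence_length=MAX_SEQUENCE_LENGTH,
    embedding_dim=EMBED_DIM,
)(encoder_inputs)
```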
Our sequence-to-sequence Transformer consists of a keras_hub.layers.TransformerEncoder layer and a keras_hub.layers.TransformerDecoder layer chained together.

The source sequence will be passed to keras_hub.layers.TransformerEncoder, which will produce a new representation of it. This new representation will then be passed to the keras_hub.layers.TransformerDecoder, together with the target sequence so far (target words 0 to N). The keras_hub.layers.TransformerDecoder will then seek to predict the next words in the target sequence (N+1 and beyond).

A key detail that makes this possible is causal masking. The keras_hub.layers.TransformerDecoder sees the entire sequence at once, and thus we must make sure that it only uses information from target tokens 0 to N when predicting token N+1 (otherwise, it could use information from the future, which would result in a model that cannot be used at inference time). Causal masking is enabled by default in keras_hub.layers.TransformerDecoder.

We also need to mask the padding tokens ("[PAD]"). For this, we can set the mask_zero argument of the keras_hub.layers.TokenAndPositionEmbedding layer to True. This will then be propagated to all subsequent layers.
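Putting the pieces together, here is a sketch of the model assembly under the assumed hyperparameters from earlier. The wiring follows the description in this section, while the variable names and the dropout rate are illustrative.

```python
# Encoder
encoder_inputs = keras.Input(shape=(None,), name="encoder_inputs")
x = keras_hub.layers.TokenAndPositionEmbedding(
    vocabulary_size=ENG_VOCAB_SIZE,
    sequence_length=MAX_SEQUENCE_LENGTH,
    embedding_dim=EMBED_DIM,
    mask_zero=True,  # mask "[PAD]" tokens, as described above
)(encoder_inputs)
encoder_outputs = keras_hub.layers.TransformerEncoder(
    intermediate_dim=INTERMEDIATE_DIM, num_heads=NUM_HEADS
)(inputs=x)
encoder = keras.Model(encoder_inputs, encoder_outputs)

# Decoder
decoder_inputs = keras.Input(shape=(None,), name="decoder_inputs")
encoded_seq_inputs = keras.Input(shape=(None, EMBED_DIM), name="decoder_state_inputs")
x = keras_hub.layers.TokenAndPositionEmbedding(
    vocabulary_size=SPA_VOCAB_SIZE,
    sequence_length=MAX_SEQUENCE_LENGTH,
    embedding_dim=EMBED_DIM,
    mask_zero=True,
)(decoder_inputs)
x = keras_hub.layers.TransformerDecoder(
    intermediate_dim=INTERMEDIATE_DIM, num_heads=NUM_HEADS
)(decoder_sequence=x, encoder_sequence=encoded_seq_inputs)
x = keras.layers.Dropout(0.5)(x)
decoder_outputs = keras.layers.Dense(SPA_VOCAB_SIZE, activation="softmax")(x)
decoder = keras.Model([decoder_inputs, encoded_seq_inputs], decoder_outputs)

# End-to-end model: source tokens + target tokens so far -> next-token probabilities.
decoder_outputs = decoder([decoder_inputs, encoder_outputs])
transformer = keras.Model(
    [encoder_inputs, decoder_inputs], decoder_outputs, name="transformer"
)
transformer.summary()
```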
Training our model
We'll use accuracy as a quick way to monitor training progress on the validation data. Note that machine translation typically uses BLEU scores as well as other metrics, rather than accuracy. However, in order to use metrics like ROUGE, BLEU, etc. we will have to decode the probabilities and generate the text. Text generation is computationally expensive, and performing this during training is not recommended.
Here we only train for 1 epoch, but to get the model to actually converge you should train for at least 10 epochs.
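A sketch of the training step; rmsprop with sparse categorical cross-entropy is a reasonable choice, but treat the exact optimizer as an assumption:

```python
transformer.compile(
    optimizer="rmsprop",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
transformer.fit(train_ds, epochs=EPOCHS, validation_data=val_ds)
```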
Decoding test sentences (qualitative analysis)
Finally, let's demonstrate how to translate brand new English sentences. We simply feed into the model the tokenized English sentence as well as the target token "[START]". The model outputs probabilities of the next token. We then repeatedly generate the next token conditioned on the tokens generated so far, until we hit the token "[END]".
For decoding, we will use the keras_hub.samplers module from KerasHub. Greedy decoding is a text decoding method which outputs the most likely next token at each time step, i.e., the token with the highest probability.
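Below is a sketch of a decode_sequences helper built on keras_hub.samplers.GreedySampler. The sampler's call signature (notably stop_token_ids, which older KerasNLP releases called end_token_id) has changed across versions, so check it against your installed version; the helper reuses the tokenizers and model from the earlier sketches.

```python
def decode_sequences(input_sentences):
    batch_size = 1

    # Tokenize the English input and pad it to MAX_SEQUENCE_LENGTH,
    # mirroring the training-time preprocessing.
    encoder_input_tokens = keras_hub.layers.StartEndPacker(
        sequence_length=MAX_SEQUENCE_LENGTH,
        pad_value=eng_tokenizer.token_to_id("[PAD]"),
    )(eng_tokenizer(input_sentences))

    # Callback used by the sampler: return the logits for the token at `index`.
    def next(prompt, cache, index):
        logits = transformer([encoder_input_tokens, prompt])[:, index - 1, :]
        hidden_states = None  # only needed for contrastive search
        return logits, hidden_states, cache

    # Prompt: a "[START]" token followed by "[PAD]" tokens.
    start = tf.fill((batch_size, 1), spa_tokenizer.token_to_id("[START]"))
    pad = tf.fill(
        (batch_size, MAX_SEQUENCE_LENGTH - 1), spa_tokenizer.token_to_id("[PAD]")
    )
    prompt = tf.concat((start, pad), axis=-1)

    generated_tokens = keras_hub.samplers.GreedySampler()(
        next,
        prompt,
        stop_token_ids=[spa_tokenizer.token_to_id("[END]")],
        index=1,  # start sampling right after the "[START]" token
    )
    return spa_tokenizer.detokenize(generated_tokens)


# Example usage on a couple of test sentences:
test_eng_texts = [pair[0] for pair in test_pairs]
for i in range(2):
    input_sentence = random.choice(test_eng_texts)
    translated = decode_sequences([input_sentence])
    print(f"** Example {i} **")
    print(input_sentence)
    print(translated)
```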
Evaluating our model (quantitative analysis)
There are many metrics which are used for text generation tasks. Here, to evaluate translations generated by our model, let's compute the ROUGE-1 and ROUGE-2 scores. Essentially, ROUGE-N is a score based on the number of common n-grams between the reference text and the generated text. ROUGE-1 and ROUGE-2 use the number of common unigrams and bigrams, respectively.
We will calculate the score over 30 test samples (since decoding is an expensive process).
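A sketch of the ROUGE computation with keras_hub.metrics.RougeN, reusing the decode_sequences helper sketched above:

```python
rouge_1 = keras_hub.metrics.RougeN(order=1)
rouge_2 = keras_hub.metrics.RougeN(order=2)

for test_pair in test_pairs[:30]:
    input_sentence = test_pair[0]
    reference_sentence = test_pair[1]

    translated_sentence = decode_sequences([input_sentence])
    # Strip special tokens from the detokenized output before scoring.
    translated_sentence = (
        translated_sentence.numpy()[0]
        .decode("utf-8")
        .replace("[PAD]", "")
        .replace("[START]", "")
        .replace("[END]", "")
        .strip()
    )

    rouge_1(reference_sentence, translated_sentence)
    rouge_2(reference_sentence, translated_sentence)

print("ROUGE-1 Score: ", rouge_1.result())
print("ROUGE-2 Score: ", rouge_2.result())
```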
After 10 epochs, the scores are as follows:
|           | ROUGE-1 | ROUGE-2 |
|-----------|---------|---------|
| Precision | 0.568   | 0.374   |
| Recall    | 0.615   | 0.394   |
| F1 Score  | 0.579   | 0.381   |