# -*- coding: utf-8 -*-
"""
NLP From Scratch: Translation with a Sequence to Sequence Network and Attention
*******************************************************************************
**Author**: `Sean Robertson <https://github.com/spro>`_

This is the third and final tutorial on doing "NLP From Scratch", where we
write our own classes and functions to preprocess the data to do our NLP
modeling tasks. We hope after you complete this tutorial that you'll proceed to
learn how `torchtext` can handle much of this preprocessing for you in the
three tutorials immediately following this one.

In this project we will be teaching a neural network to translate from
French to English.

.. code-block:: sh

    [KEY: > input, = target, < output]

    > il est en train de peindre un tableau .
    = he is painting a picture .
    < he is painting a picture .

    > pourquoi ne pas essayer ce vin delicieux ?
    = why not try that delicious wine ?
    < why not try that delicious wine ?

    > elle n est pas poete mais romanciere .
    = she is not a poet but a novelist .
    < she not not a poet but a novelist .

    > vous etes trop maigre .
    = you re too skinny .
    < you re all alone .

... to varying degrees of success.

This is made possible by the simple but powerful idea of the `sequence
to sequence network <https://arxiv.org/abs/1409.3215>`__, in which two
recurrent neural networks work together to transform one sequence to
another. An encoder network condenses an input sequence into a vector,
and a decoder network unfolds that vector into a new sequence.

.. figure:: /_static/img/seq-seq-images/seq2seq.png
   :alt:

To improve upon this model we'll use an `attention
mechanism <https://arxiv.org/abs/1409.0473>`__, which lets the decoder
learn to focus over a specific range of the input sequence.

**Recommended Reading:**

I assume you have at least installed PyTorch, know Python, and
understand Tensors:

-  https://pytorch.org/ For installation instructions
-  :doc:`/beginner/deep_learning_60min_blitz` to get started with PyTorch in general
-  :doc:`/beginner/pytorch_with_examples` for a wide and deep overview
-  :doc:`/beginner/former_torchies_tutorial` if you are a former Lua Torch user

It would also be useful to know about Sequence to Sequence networks and
how they work:

-  `Learning Phrase Representations using RNN Encoder-Decoder for
   Statistical Machine Translation <https://arxiv.org/abs/1406.1078>`__
-  `Sequence to Sequence Learning with Neural
   Networks <https://arxiv.org/abs/1409.3215>`__
-  `Neural Machine Translation by Jointly Learning to Align and
   Translate <https://arxiv.org/abs/1409.0473>`__
-  `A Neural Conversational Model <https://arxiv.org/abs/1506.05869>`__

You will also find the previous tutorials on
:doc:`/intermediate/char_rnn_classification_tutorial`
and :doc:`/intermediate/char_rnn_generation_tutorial`
helpful as those concepts are very similar to the Encoder and Decoder
models, respectively.

**Requirements**
"""
from __future__ import unicode_literals, print_function, division
from io import open
import unicodedata
import re
import random

import torch
import torch.nn as nn
from torch import optim
import torch.nn.functional as F

import numpy as np
from torch.utils.data import TensorDataset, DataLoader, RandomSampler

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

######################################################################
# Loading data files
# ==================
#
# The data for this project is a set of many thousands of English to
# French translation pairs.
#
# `This question on Open Data Stack
# Exchange <https://opendata.stackexchange.com/questions/3888/dataset-of-sentences-translated-into-many-languages>`__
# pointed me to the open translation site https://tatoeba.org/ which has
# downloads available at https://tatoeba.org/eng/downloads - and better
# yet, someone did the extra work of splitting language pairs into
# individual text files here: https://www.manythings.org/anki/
#
# The English to French pairs are too big to include in the repository, so
# download to ``data/eng-fra.txt`` before continuing. The file is a tab
# separated list of translation pairs:
#
# .. code-block:: sh
#
#    I am cold.    J'ai froid.
#
# .. note::
#    Download the data from
#    `here <https://download.pytorch.org/tutorial/data.zip>`_
#    and extract it to the current directory.

######################################################################
# Similar to the character encoding used in the character-level RNN
# tutorials, we will be representing each word in a language as a one-hot
# vector, or giant vector of zeros except for a single one (at the index
# of the word). Compared to the dozens of characters that might exist in a
# language, there are many many more words, so the encoding vector is much
# larger. We will however cheat a bit and trim the data to only use a few
# thousand words per language.
#
# .. figure:: /_static/img/seq-seq-images/word-encoding.png
#    :alt:
#


######################################################################
# We'll need a unique index per word to use as the inputs and targets of
# the networks later. To keep track of all this we will use a helper class
# called ``Lang`` which has word → index (``word2index``) and index → word
# (``index2word``) dictionaries, as well as a count of each word
# ``word2count`` which will be used to replace rare words later.
#

SOS_token = 0
EOS_token = 1

class Lang:
    def __init__(self, name):
        self.name = name
        self.word2index = {}
        self.word2count = {}
        self.index2word = {0: "SOS", 1: "EOS"}
        self.n_words = 2  # Count SOS and EOS

    def addSentence(self, sentence):
        for word in sentence.split(' '):
            self.addWord(word)

    def addWord(self, word):
        if word not in self.word2index:
            self.word2index[word] = self.n_words
            self.word2count[word] = 1
            self.index2word[self.n_words] = word
            self.n_words += 1
        else:
            self.word2count[word] += 1


######################################################################
# The files are all in Unicode, so to simplify we will turn Unicode
# characters to ASCII, make everything lowercase, and trim most
# punctuation.
#

# Turn a Unicode string to plain ASCII, thanks to
# https://stackoverflow.com/a/518232/2809427
def unicodeToAscii(s):
    return ''.join(
        c for c in unicodedata.normalize('NFD', s)
        if unicodedata.category(c) != 'Mn'
    )

# Lowercase, trim, and remove non-letter characters
def normalizeString(s):
    s = unicodeToAscii(s.lower().strip())
    s = re.sub(r"([.!?])", r" \1", s)
    s = re.sub(r"[^a-zA-Z!?]+", r" ", s)
    return s.strip()
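
######################################################################
# As a quick sanity check (not part of the original tutorial), here is what
# the normalization does to a couple of made-up strings; the inputs below are
# illustrative only.

print(normalizeString("J'ai froid !"))   # -> "j ai froid !"  (lowercased, accents and apostrophe stripped)
print(unicodeToAscii("élève"))           # -> "eleve"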

######################################################################
# To read the data file we will split the file into lines, and then split
# lines into pairs. The files are all English → Other Language, so if we
# want to translate from Other Language → English I added the ``reverse``
# flag to reverse the pairs.
#

def readLangs(lang1, lang2, reverse=False):
    print("Reading lines...")

    # Read the file and split into lines
    lines = open('data/%s-%s.txt' % (lang1, lang2), encoding='utf-8').\
        read().strip().split('\n')

    # Split every line into pairs and normalize
    pairs = [[normalizeString(s) for s in l.split('\t')] for l in lines]

    # Reverse pairs, make Lang instances
    if reverse:
        pairs = [list(reversed(p)) for p in pairs]
        input_lang = Lang(lang2)
        output_lang = Lang(lang1)
    else:
        input_lang = Lang(lang1)
        output_lang = Lang(lang2)

    return input_lang, output_lang, pairs


######################################################################
# Since there are a *lot* of example sentences and we want to train
# something quickly, we'll trim the data set to only relatively short and
# simple sentences. Here the maximum length is 10 words (that includes
# ending punctuation) and we're filtering to sentences that translate to
# the form "I am" or "He is" etc. (accounting for apostrophes replaced
# earlier).
#

MAX_LENGTH = 10

eng_prefixes = (
    "i am ", "i m ",
    "he is", "he s ",
    "she is", "she s ",
    "you are", "you re ",
    "we are", "we re ",
    "they are", "they re "
)

def filterPair(p):
    return len(p[0].split(' ')) < MAX_LENGTH and \
        len(p[1].split(' ')) < MAX_LENGTH and \
        p[1].startswith(eng_prefixes)


def filterPairs(pairs):
    return [pair for pair in pairs if filterPair(pair)]


######################################################################
# The full process for preparing the data is:
#
# -  Read text file and split into lines, split lines into pairs
# -  Normalize text, filter by length and content
# -  Make word lists from sentences in pairs
#

def prepareData(lang1, lang2, reverse=False):
    input_lang, output_lang, pairs = readLangs(lang1, lang2, reverse)
    print("Read %s sentence pairs" % len(pairs))
    pairs = filterPairs(pairs)
    print("Trimmed to %s sentence pairs" % len(pairs))
    print("Counting words...")
    for pair in pairs:
        input_lang.addSentence(pair[0])
        output_lang.addSentence(pair[1])
    print("Counted words:")
    print(input_lang.name, input_lang.n_words)
    print(output_lang.name, output_lang.n_words)
    return input_lang, output_lang, pairs

input_lang, output_lang, pairs = prepareData('eng', 'fra', True)
print(random.choice(pairs))
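
######################################################################
# To see what the filter keeps, here is a quick check on two made-up,
# already-normalized pairs (illustrative only, not taken from the dataset).
# With ``reverse=True`` each pair is ``[french, english]``, so the English
# side is ``p[1]``.

print(filterPair(["j ai froid", "i am cold"]))
# True: both sides are short and the English side starts with "i am "
print(filterPair(["il fait tres froid ici", "it is very cold here"]))
# False: the English side does not start with any of eng_prefixes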

######################################################################
# The Seq2Seq Model
# =================
#
# A Recurrent Neural Network, or RNN, is a network that operates on a
# sequence and uses its own output as input for subsequent steps.
#
# A `Sequence to Sequence network <https://arxiv.org/abs/1409.3215>`__, or
# seq2seq network, or `Encoder Decoder
# network <https://arxiv.org/pdf/1406.1078v3.pdf>`__, is a model
# consisting of two RNNs called the encoder and decoder. The encoder reads
# an input sequence and outputs a single vector, and the decoder reads
# that vector to produce an output sequence.
#
# .. figure:: /_static/img/seq-seq-images/seq2seq.png
#    :alt:
#
# Unlike sequence prediction with a single RNN, where every input
# corresponds to an output, the seq2seq model frees us from sequence
# length and order, which makes it ideal for translation between two
# languages.
#
# Consider the sentence ``Je ne suis pas le chat noir`` → ``I am not the
# black cat``. Most of the words in the input sentence have a direct
# translation in the output sentence, but are in slightly different
# orders, e.g. ``chat noir`` and ``black cat``. Because of the ``ne/pas``
# construction there is also one more word in the input sentence. It would
# be difficult to produce a correct translation directly from the sequence
# of input words.
#
# With a seq2seq model the encoder creates a single vector which, in the
# ideal case, encodes the "meaning" of the input sequence into a single
# vector — a single point in some N dimensional space of sentences.
#


######################################################################
# The Encoder
# -----------
#
# The encoder of a seq2seq network is an RNN that outputs some value for
# every word from the input sentence. For every input word the encoder
# outputs a vector and a hidden state, and uses the hidden state for the
# next input word.
#
# .. figure:: /_static/img/seq-seq-images/encoder-network.png
#    :alt:
#
#

class EncoderRNN(nn.Module):
    def __init__(self, input_size, hidden_size, dropout_p=0.1):
        super(EncoderRNN, self).__init__()
        self.hidden_size = hidden_size

        self.embedding = nn.Embedding(input_size, hidden_size)
        self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.dropout = nn.Dropout(dropout_p)

    def forward(self, input):
        embedded = self.dropout(self.embedding(input))
        output, hidden = self.gru(embedded)
        return output, hidden
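
######################################################################
# A quick shape check (not part of the original tutorial): feeding a batch of
# dummy token indices through the encoder gives one output vector per input
# position plus a final hidden state. The batch size and hidden size below
# are made up for illustration.

_enc_check = EncoderRNN(input_lang.n_words, 16).to(device)
_dummy_batch = torch.randint(0, input_lang.n_words, (4, MAX_LENGTH), device=device)
_enc_out, _enc_hid = _enc_check(_dummy_batch)
print(_enc_out.shape, _enc_hid.shape)
# torch.Size([4, 10, 16]) torch.Size([1, 4, 16])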

######################################################################
# The Decoder
# -----------
#
# The decoder is another RNN that takes the encoder output vector(s) and
# outputs a sequence of words to create the translation.
#


######################################################################
# Simple Decoder
# ^^^^^^^^^^^^^^
#
# In the simplest seq2seq decoder we use only the last output of the encoder.
# This last output is sometimes called the *context vector* as it encodes
# context from the entire sequence. This context vector is used as the
# initial hidden state of the decoder.
#
# At every step of decoding, the decoder is given an input token and
# hidden state. The initial input token is the start-of-string ``<SOS>``
# token, and the first hidden state is the context vector (the encoder's
# last hidden state).
#
# .. figure:: /_static/img/seq-seq-images/decoder-network.png
#    :alt:
#
#

class DecoderRNN(nn.Module):
    def __init__(self, hidden_size, output_size):
        super(DecoderRNN, self).__init__()
        self.embedding = nn.Embedding(output_size, hidden_size)
        self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, output_size)

    def forward(self, encoder_outputs, encoder_hidden, target_tensor=None):
        batch_size = encoder_outputs.size(0)
        decoder_input = torch.empty(batch_size, 1, dtype=torch.long, device=device).fill_(SOS_token)
        decoder_hidden = encoder_hidden
        decoder_outputs = []

        for i in range(MAX_LENGTH):
            decoder_output, decoder_hidden = self.forward_step(decoder_input, decoder_hidden)
            decoder_outputs.append(decoder_output)

            if target_tensor is not None:
                # Teacher forcing: Feed the target as the next input
                decoder_input = target_tensor[:, i].unsqueeze(1)  # Teacher forcing
            else:
                # Without teacher forcing: use its own predictions as the next input
                _, topi = decoder_output.topk(1)
                decoder_input = topi.squeeze(-1).detach()  # detach from history as input

        decoder_outputs = torch.cat(decoder_outputs, dim=1)
        decoder_outputs = F.log_softmax(decoder_outputs, dim=-1)
        return decoder_outputs, decoder_hidden, None  # We return `None` for consistency in the training loop

    def forward_step(self, input, hidden):
        output = self.embedding(input)
        output = F.relu(output)
        output, hidden = self.gru(output, hidden)
        output = self.out(output)
        return output, hidden

######################################################################
# I encourage you to train and observe the results of this model, but to
# save space we'll be going straight for the gold and introducing the
# Attention Mechanism.
#
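
######################################################################
# If you do want to poke at the simple decoder first, here is a minimal,
# untrained shape check (illustrative only, with small made-up sizes): the
# decoder should emit ``MAX_LENGTH`` log-probability distributions over the
# output vocabulary for each sentence in the batch.

_dec_check = DecoderRNN(hidden_size=16, output_size=output_lang.n_words).to(device)
_tmp_enc = EncoderRNN(input_lang.n_words, 16).to(device)
_tmp_out, _tmp_hid = _tmp_enc(torch.randint(0, input_lang.n_words, (4, MAX_LENGTH), device=device))
_dec_out, _, _ = _dec_check(_tmp_out, _tmp_hid)
print(_dec_out.shape)  # torch.Size([4, 10, output_lang.n_words])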

######################################################################
# Attention Decoder
# ^^^^^^^^^^^^^^^^^
#
# If only the context vector is passed between the encoder and decoder,
# that single vector carries the burden of encoding the entire sentence.
#
# Attention allows the decoder network to "focus" on a different part of
# the encoder's outputs for every step of the decoder's own outputs. First
# we calculate a set of *attention weights*. These will be multiplied by
# the encoder output vectors to create a weighted combination. The result
# (called ``context`` in the code below) should contain information about
# that specific part of the input sequence, and thus help the decoder
# choose the right output words.
#
# .. figure:: https://i.imgur.com/1152PYf.png
#    :alt:
#
# Calculating the attention weights is done with another feed-forward
# layer (``attention`` in the decoder below), using the decoder's input and
# hidden state as inputs. Because there are sentences of all sizes in the
# training data, to actually create and train this layer we have to choose
# a maximum sentence length (input length, for encoder outputs) that it can
# apply to. Sentences of the maximum length will use all the attention
# weights, while shorter sentences will only use the first few.
#
# .. figure:: /_static/img/seq-seq-images/attention-decoder-network.png
#    :alt:
#
#
# Bahdanau attention, also known as additive attention, is a commonly used
# attention mechanism in sequence-to-sequence models, particularly in neural
# machine translation tasks. It was introduced by Bahdanau et al. in their
# paper titled `Neural Machine Translation by Jointly Learning to Align and Translate <https://arxiv.org/pdf/1409.0473.pdf>`__.
# This attention mechanism employs a learned alignment model to compute attention
# scores between the encoder and decoder hidden states. It utilizes a feed-forward
# neural network to calculate alignment scores.
#
# However, there are alternative attention mechanisms available, such as Luong attention,
# which computes attention scores by taking the dot product between the decoder hidden
# state and the encoder hidden states. It does not involve the non-linear transformation
# used in Bahdanau attention.
#
# In this tutorial, we will be using Bahdanau attention. However, it would be a valuable
# exercise to explore modifying the attention mechanism to use Luong attention.

class BahdanauAttention(nn.Module):
    def __init__(self, hidden_size):
        super(BahdanauAttention, self).__init__()
        self.Wa = nn.Linear(hidden_size, hidden_size)
        self.Ua = nn.Linear(hidden_size, hidden_size)
        self.Va = nn.Linear(hidden_size, 1)

    def forward(self, query, keys):
        scores = self.Va(torch.tanh(self.Wa(query) + self.Ua(keys)))
        scores = scores.squeeze(2).unsqueeze(1)

        weights = F.softmax(scores, dim=-1)
        context = torch.bmm(weights, keys)

        return context, weights

class AttnDecoderRNN(nn.Module):
    def __init__(self, hidden_size, output_size, dropout_p=0.1):
        super(AttnDecoderRNN, self).__init__()
        self.embedding = nn.Embedding(output_size, hidden_size)
        self.attention = BahdanauAttention(hidden_size)
        self.gru = nn.GRU(2 * hidden_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, output_size)
        self.dropout = nn.Dropout(dropout_p)

    def forward(self, encoder_outputs, encoder_hidden, target_tensor=None):
        batch_size = encoder_outputs.size(0)
        decoder_input = torch.empty(batch_size, 1, dtype=torch.long, device=device).fill_(SOS_token)
        decoder_hidden = encoder_hidden
        decoder_outputs = []
        attentions = []

        for i in range(MAX_LENGTH):
            decoder_output, decoder_hidden, attn_weights = self.forward_step(
                decoder_input, decoder_hidden, encoder_outputs
            )
            decoder_outputs.append(decoder_output)
            attentions.append(attn_weights)

            if target_tensor is not None:
                # Teacher forcing: Feed the target as the next input
                decoder_input = target_tensor[:, i].unsqueeze(1)  # Teacher forcing
            else:
                # Without teacher forcing: use its own predictions as the next input
                _, topi = decoder_output.topk(1)
                decoder_input = topi.squeeze(-1).detach()  # detach from history as input

        decoder_outputs = torch.cat(decoder_outputs, dim=1)
        decoder_outputs = F.log_softmax(decoder_outputs, dim=-1)
        attentions = torch.cat(attentions, dim=1)

        return decoder_outputs, decoder_hidden, attentions

    def forward_step(self, input, hidden, encoder_outputs):
        embedded = self.dropout(self.embedding(input))

        query = hidden.permute(1, 0, 2)
        context, attn_weights = self.attention(query, encoder_outputs)
        input_gru = torch.cat((embedded, context), dim=2)

        output, hidden = self.gru(input_gru, hidden)
        output = self.out(output)

        return output, hidden, attn_weights
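
######################################################################
# As a starting point for the Luong-attention exercise mentioned above, here
# is a sketch (not part of the tutorial's model) of a dot-product attention
# module with the same interface as ``BahdanauAttention``. It assumes the
# query and keys share the same hidden size; you could swap it in by replacing
# ``self.attention = BahdanauAttention(hidden_size)`` with ``LuongDotAttention()``.

class LuongDotAttention(nn.Module):
    def forward(self, query, keys):
        # query: (batch, 1, hidden), keys: (batch, seq_len, hidden)
        scores = torch.bmm(query, keys.transpose(1, 2))  # (batch, 1, seq_len)
        weights = F.softmax(scores, dim=-1)
        context = torch.bmm(weights, keys)               # (batch, 1, hidden)
        return context, weights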
Read about "local529# attention" in `Effective Approaches to Attention-based Neural Machine530# Translation <https://arxiv.org/abs/1508.04025>`__.531#532# Training533# ========534#535# Preparing Training Data536# -----------------------537#538# To train, for each pair we will need an input tensor (indexes of the539# words in the input sentence) and target tensor (indexes of the words in540# the target sentence). While creating these vectors we will append the541# EOS token to both sequences.542#543544def indexesFromSentence(lang, sentence):545return [lang.word2index[word] for word in sentence.split(' ')]546547def tensorFromSentence(lang, sentence):548indexes = indexesFromSentence(lang, sentence)549indexes.append(EOS_token)550return torch.tensor(indexes, dtype=torch.long, device=device).view(1, -1)551552def tensorsFromPair(pair):553input_tensor = tensorFromSentence(input_lang, pair[0])554target_tensor = tensorFromSentence(output_lang, pair[1])555return (input_tensor, target_tensor)556557def get_dataloader(batch_size):558input_lang, output_lang, pairs = prepareData('eng', 'fra', True)559560n = len(pairs)561input_ids = np.zeros((n, MAX_LENGTH), dtype=np.int32)562target_ids = np.zeros((n, MAX_LENGTH), dtype=np.int32)563564for idx, (inp, tgt) in enumerate(pairs):565inp_ids = indexesFromSentence(input_lang, inp)566tgt_ids = indexesFromSentence(output_lang, tgt)567inp_ids.append(EOS_token)568tgt_ids.append(EOS_token)569input_ids[idx, :len(inp_ids)] = inp_ids570target_ids[idx, :len(tgt_ids)] = tgt_ids571572train_data = TensorDataset(torch.LongTensor(input_ids).to(device),573torch.LongTensor(target_ids).to(device))574575train_sampler = RandomSampler(train_data)576train_dataloader = DataLoader(train_data, sampler=train_sampler, batch_size=batch_size)577return input_lang, output_lang, train_dataloader578579580######################################################################581# Training the Model582# ------------------583#584# To train we run the input sentence through the encoder, and keep track585# of every output and the latest hidden state. Then the decoder is given586# the ``<SOS>`` token as its first input, and the last hidden state of the587# encoder as its first hidden state.588#589# "Teacher forcing" is the concept of using the real target outputs as590# each next input, instead of using the decoder's guess as the next input.591# Using teacher forcing causes it to converge faster but `when the trained592# network is exploited, it may exhibit593# instability <http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.378.4095&rep=rep1&type=pdf>`__.594#595# You can observe outputs of teacher-forced networks that read with596# coherent grammar but wander far from the correct translation -597# intuitively it has learned to represent the output grammar and can "pick598# up" the meaning once the teacher tells it the first few words, but it599# has not properly learned how to create the sentence from the translation600# in the first place.601#602# Because of the freedom PyTorch's autograd gives us, we can randomly603# choose to use teacher forcing or not with a simple if statement. 
Turn604# ``teacher_forcing_ratio`` up to use more of it.605#606607def train_epoch(dataloader, encoder, decoder, encoder_optimizer,608decoder_optimizer, criterion):609610total_loss = 0611for data in dataloader:612input_tensor, target_tensor = data613614encoder_optimizer.zero_grad()615decoder_optimizer.zero_grad()616617encoder_outputs, encoder_hidden = encoder(input_tensor)618decoder_outputs, _, _ = decoder(encoder_outputs, encoder_hidden, target_tensor)619620loss = criterion(621decoder_outputs.view(-1, decoder_outputs.size(-1)),622target_tensor.view(-1)623)624loss.backward()625626encoder_optimizer.step()627decoder_optimizer.step()628629total_loss += loss.item()630631return total_loss / len(dataloader)632633634######################################################################635# This is a helper function to print time elapsed and estimated time636# remaining given the current time and progress %.637#638639import time640import math641642def asMinutes(s):643m = math.floor(s / 60)644s -= m * 60645return '%dm %ds' % (m, s)646647def timeSince(since, percent):648now = time.time()649s = now - since650es = s / (percent)651rs = es - s652return '%s (- %s)' % (asMinutes(s), asMinutes(rs))653654655######################################################################656# The whole training process looks like this:657#658# - Start a timer659# - Initialize optimizers and criterion660# - Create set of training pairs661# - Start empty losses array for plotting662#663# Then we call ``train`` many times and occasionally print the progress (%664# of examples, time so far, estimated time) and average loss.665#666667def train(train_dataloader, encoder, decoder, n_epochs, learning_rate=0.001,668print_every=100, plot_every=100):669start = time.time()670plot_losses = []671print_loss_total = 0 # Reset every print_every672plot_loss_total = 0 # Reset every plot_every673674encoder_optimizer = optim.Adam(encoder.parameters(), lr=learning_rate)675decoder_optimizer = optim.Adam(decoder.parameters(), lr=learning_rate)676criterion = nn.NLLLoss()677678for epoch in range(1, n_epochs + 1):679loss = train_epoch(train_dataloader, encoder, decoder, encoder_optimizer, decoder_optimizer, criterion)680print_loss_total += loss681plot_loss_total += loss682683if epoch % print_every == 0:684print_loss_avg = print_loss_total / print_every685print_loss_total = 0686print('%s (%d %d%%) %.4f' % (timeSince(start, epoch / n_epochs),687epoch, epoch / n_epochs * 100, print_loss_avg))688689if epoch % plot_every == 0:690plot_loss_avg = plot_loss_total / plot_every691plot_losses.append(plot_loss_avg)692plot_loss_total = 0693694showPlot(plot_losses)695696######################################################################697# Plotting results698# ----------------699#700# Plotting is done with matplotlib, using the array of loss values701# ``plot_losses`` saved while training.702#703704import matplotlib.pyplot as plt705plt.switch_backend('agg')706import matplotlib.ticker as ticker707import numpy as np708709def showPlot(points):710plt.figure()711fig, ax = plt.subplots()712# this locator puts ticks at regular intervals713loc = ticker.MultipleLocator(base=0.2)714ax.yaxis.set_major_locator(loc)715plt.plot(points)716717718######################################################################719# Evaluation720# ==========721#722# Evaluation is mostly the same as training, but there are no targets so723# we simply feed the decoder's predictions back to itself for each step.724# Every time it predicts a word we add it to the output string, and if 

######################################################################
# Evaluation
# ==========
#
# Evaluation is mostly the same as training, but there are no targets so
# we simply feed the decoder's predictions back to itself for each step.
# Every time it predicts a word we add it to the output string, and if it
# predicts the EOS token we stop there. We also store the decoder's
# attention outputs for display later.
#

def evaluate(encoder, decoder, sentence, input_lang, output_lang):
    with torch.no_grad():
        input_tensor = tensorFromSentence(input_lang, sentence)

        encoder_outputs, encoder_hidden = encoder(input_tensor)
        decoder_outputs, decoder_hidden, decoder_attn = decoder(encoder_outputs, encoder_hidden)

        _, topi = decoder_outputs.topk(1)
        decoded_ids = topi.squeeze()

        decoded_words = []
        for idx in decoded_ids:
            if idx.item() == EOS_token:
                decoded_words.append('<EOS>')
                break
            decoded_words.append(output_lang.index2word[idx.item()])
    return decoded_words, decoder_attn


######################################################################
# We can evaluate random sentences from the training set and print out the
# input, target, and output to make some subjective quality judgements:
#

def evaluateRandomly(encoder, decoder, n=10):
    for i in range(n):
        pair = random.choice(pairs)
        print('>', pair[0])
        print('=', pair[1])
        output_words, _ = evaluate(encoder, decoder, pair[0], input_lang, output_lang)
        output_sentence = ' '.join(output_words)
        print('<', output_sentence)
        print('')


######################################################################
# Training and Evaluating
# =======================
#
# With all these helper functions in place (it looks like extra work, but
# it makes it easier to run multiple experiments) we can actually
# initialize a network and start training.
#
# Remember that the input sentences were heavily filtered. For this small
# dataset we can use relatively small networks of 128 hidden nodes and a
# single GRU layer. After about 40 minutes on a MacBook CPU we'll get some
# reasonable results.
#
# .. note::
#    If you run this notebook you can train, interrupt the kernel,
#    evaluate, and continue training later. Comment out the lines where the
#    encoder and decoder are initialized and run ``train`` again.
#

hidden_size = 128
batch_size = 32

input_lang, output_lang, train_dataloader = get_dataloader(batch_size)

encoder = EncoderRNN(input_lang.n_words, hidden_size).to(device)
decoder = AttnDecoderRNN(hidden_size, output_lang.n_words).to(device)

train(train_dataloader, encoder, decoder, 80, print_every=5, plot_every=5)

######################################################################
#
# Set dropout layers to ``eval`` mode
encoder.eval()
decoder.eval()
evaluateRandomly(encoder, decoder)
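
######################################################################
# If you want to stop here and continue later (as the note above suggests),
# one simple way to checkpoint is to save the models' ``state_dict``s with
# ``torch.save``. The file names below are arbitrary examples.

torch.save(encoder.state_dict(), 'encoder.pt')
torch.save(decoder.state_dict(), 'decoder.pt')

# ...later, recreate the models with the same sizes and restore the weights:
encoder.load_state_dict(torch.load('encoder.pt', map_location=device))
decoder.load_state_dict(torch.load('decoder.pt', map_location=device))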
For a better viewing experience we will do the812# extra work of adding axes and labels:813#814815def showAttention(input_sentence, output_words, attentions):816fig = plt.figure()817ax = fig.add_subplot(111)818cax = ax.matshow(attentions.cpu().numpy(), cmap='bone')819fig.colorbar(cax)820821# Set up axes822ax.set_xticklabels([''] + input_sentence.split(' ') +823['<EOS>'], rotation=90)824ax.set_yticklabels([''] + output_words)825826# Show label at every tick827ax.xaxis.set_major_locator(ticker.MultipleLocator(1))828ax.yaxis.set_major_locator(ticker.MultipleLocator(1))829830plt.show()831832833def evaluateAndShowAttention(input_sentence):834output_words, attentions = evaluate(encoder, decoder, input_sentence, input_lang, output_lang)835print('input =', input_sentence)836print('output =', ' '.join(output_words))837showAttention(input_sentence, output_words, attentions[0, :len(output_words), :])838839840evaluateAndShowAttention('il n est pas aussi grand que son pere')841842evaluateAndShowAttention('je suis trop fatigue pour conduire')843844evaluateAndShowAttention('je suis desole si c est une question idiote')845846evaluateAndShowAttention('je suis reellement fiere de vous')847848849######################################################################850# Exercises851# =========852#853# - Try with a different dataset854#855# - Another language pair856# - Human → Machine (e.g. IOT commands)857# - Chat → Response858# - Question → Answer859#860# - Replace the embeddings with pretrained word embeddings such as ``word2vec`` or861# ``GloVe``862# - Try with more layers, more hidden units, and more sentences. Compare863# the training time and results.864# - If you use a translation file where pairs have two of the same phrase865# (``I am test \t I am test``), you can use this as an autoencoder. Try866# this:867#868# - Train as an autoencoder869# - Save only the Encoder network870# - Train a new Decoder for translation from there871#872873874