# -*- coding: utf-8 -*-
r"""
Advanced: Making Dynamic Decisions and the Bi-LSTM CRF
======================================================

Dynamic versus Static Deep Learning Toolkits
--------------------------------------------

PyTorch is a *dynamic* neural network kit. Another example of a dynamic
kit is `DyNet <https://github.com/clab/dynet>`__ (I mention this because
working with PyTorch and DyNet is similar; if you see an example in
DyNet, it will probably help you implement it in PyTorch). The opposite
is the *static* toolkit, which includes Theano, Keras, TensorFlow, etc.
The core difference is the following:

* In a static toolkit, you define a computation graph once, compile it,
  and then stream instances to it.
* In a dynamic toolkit, you define a computation graph *for each
  instance*. It is never compiled; it is executed on the fly.

Without a lot of experience, it is difficult to appreciate the
difference. As an example, suppose we want to build a deep constituency
parser. Suppose our model involves roughly the following steps:

* We build the tree bottom up.
* Tag the leaf nodes (the words of the sentence).
* From there, use a neural network and the embeddings of the words to
  find combinations that form constituents. Whenever you form a new
  constituent, use some sort of technique to get an embedding of the
  constituent.

In this case, our network architecture depends completely on the input
sentence. In the sentence "The green cat scratched the wall", at some
point in the model we will want to combine the span
:math:`(i, j, r) = (1, 3, \text{NP})` (that is, an NP constituent spans
word 1 to word 3, in this case "The green cat").

However, another sentence might be "Somewhere, the big fat cat scratched
the wall". In this sentence, we will want to form the constituent
:math:`(2, 4, \text{NP})` at some point. The constituents we will want
to form depend on the instance. If we just compile the computation graph
once, as in a static toolkit, it will be exceptionally difficult or
impossible to program this logic. In a dynamic toolkit, though, there
isn't just one pre-defined computation graph. There can be a new
computation graph for each instance, so this problem goes away.

Dynamic toolkits also have the advantage of being easier to debug, and
of code that more closely resembles the host language (by that I mean
that PyTorch and DyNet look more like actual Python code than Keras or
Theano).
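
For a concrete (if tiny) picture of what "a graph per instance" means, here
is a minimal sketch of my own (not part of the original tutorial): the number
of operations recorded by autograd depends on the length of the input, so
every input builds its own graph::

    import torch

    token_scores = torch.randn(10, requires_grad=True)  # one score per vocabulary id

    def sentence_score(token_ids):
        # Ordinary Python control flow: the graph gets one addition per token.
        total = torch.zeros(1)
        for tok in token_ids:
            total = total + token_scores[tok]
        return total

    sentence_score([0, 3, 7]).backward()   # a graph with 3 additions
    sentence_score([1, 2]).backward()      # a brand-new graph with 2 additions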

Bi-LSTM Conditional Random Field Discussion
-------------------------------------------

For this section, we will see a full, complicated example of a Bi-LSTM
Conditional Random Field for named-entity recognition. The LSTM tagger
above is typically sufficient for part-of-speech tagging, but a sequence
model like the CRF is really essential for strong performance on NER.
Familiarity with CRFs is assumed. Although the name sounds scary, the
model is simply a CRF in which an LSTM provides the features. This is an
advanced model, though, far more complicated than any earlier model in
this tutorial. If you want to skip it, that is fine. To check whether
you're ready, see if you can:

- Write the recurrence for the Viterbi variable at step i for tag k.
- Modify the above recurrence to compute the forward variables instead.
- Modify the above recurrence again to compute the forward variables in
  log-space (hint: log-sum-exp).

If you can do those three things, you should be able to understand the
code below. Recall that the CRF computes a conditional probability. Let
:math:`y` be a tag sequence and :math:`x` an input sequence of words.
Then we compute

.. math:: P(y|x) = \frac{\exp(\text{Score}(x, y))}{\sum_{y'} \exp(\text{Score}(x, y'))}

where the score is determined by defining some log potentials
:math:`\log \psi_i(x,y)` such that

.. math:: \text{Score}(x,y) = \sum_i \log \psi_i(x,y)

To make the partition function tractable, the potentials must look only
at local features.

In the Bi-LSTM CRF, we define two kinds of potentials: emission and
transition. The emission potential for the word at index :math:`i` comes
from the hidden state of the Bi-LSTM at timestep :math:`i`. The
transition scores are stored in a :math:`|T| \times |T|` matrix
:math:`\textbf{P}`, where :math:`T` is the tag set. In my
implementation, :math:`\textbf{P}_{j,k}` is the score of transitioning
to tag :math:`j` from tag :math:`k`. So:

.. math:: \text{Score}(x,y) = \sum_i \log \psi_\text{EMIT}(y_i \rightarrow x_i) + \log \psi_\text{TRANS}(y_{i-1} \rightarrow y_i)

.. math:: = \sum_i h_i[y_i] + \textbf{P}_{y_i, y_{i-1}}

where in this second expression, we think of the tags as being assigned
unique non-negative indices.

If the above discussion was too brief, you can check out
`this <http://www.cs.columbia.edu/%7Emcollins/crf.pdf>`__ write-up from
Michael Collins on CRFs.
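
As a concrete reference for the third question above (this equation is my
own addition, written to match the code below rather than the Collins
notes), the log-space forward recurrence can be written as

.. math:: \alpha_i(k) = h_i[k] + \log \sum_{k'} \exp\left( \alpha_{i-1}(k') + \textbf{P}_{k, k'} \right)

where :math:`\alpha_0` puts all of its mass on the START tag, and the log
partition function for a sentence of length :math:`n` is
:math:`\log Z = \log \sum_{k} \exp\left( \alpha_n(k) + \textbf{P}_{\text{STOP}, k} \right)`.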

Implementation Notes
--------------------

The example below implements the forward algorithm in log space to
compute the partition function, and the Viterbi algorithm to decode.
Backpropagation will compute the gradients automatically for us. We
don't have to do anything by hand.

The implementation is not optimized. If you understand what is going on,
you'll probably quickly see that iterating over the next tag in the
forward algorithm could instead be done in one big operation (a
vectorized sketch is given after the model class below). I wanted the
code to be more readable. If you want to make the relevant change, you
could probably use this tagger for real tasks.
"""
# Author: Robert Guthrie

import torch
import torch.autograd as autograd
import torch.nn as nn
import torch.optim as optim

torch.manual_seed(1)

#####################################################################
# Helper functions to make the code more readable.


def argmax(vec):
    # Return the argmax as a Python int.
    _, idx = torch.max(vec, 1)
    return idx.item()


def prepare_sequence(seq, to_ix):
    idxs = [to_ix[w] for w in seq]
    return torch.tensor(idxs, dtype=torch.long)


# Compute log sum exp in a numerically stable way for the forward algorithm
def log_sum_exp(vec):
    max_score = vec[0, argmax(vec)]
    max_score_broadcast = max_score.view(1, -1).expand(1, vec.size()[1])
    return max_score + \
        torch.log(torch.sum(torch.exp(vec - max_score_broadcast)))
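
#####################################################################
# As a quick check (my own addition, not part of the original tutorial),
# the hand-rolled ``log_sum_exp`` above should agree with PyTorch's
# built-in ``torch.logsumexp`` on a 1 x N row of scores.


_example_scores = torch.tensor([[1.0, 2.0, 3.0, 4.0, 5.0]])
assert torch.allclose(log_sum_exp(_example_scores),
                      torch.logsumexp(_example_scores, dim=1)[0])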

#####################################################################
# Create model


class BiLSTM_CRF(nn.Module):

    def __init__(self, vocab_size, tag_to_ix, embedding_dim, hidden_dim):
        super(BiLSTM_CRF, self).__init__()
        self.embedding_dim = embedding_dim
        self.hidden_dim = hidden_dim
        self.vocab_size = vocab_size
        self.tag_to_ix = tag_to_ix
        self.tagset_size = len(tag_to_ix)

        self.word_embeds = nn.Embedding(vocab_size, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim // 2,
                            num_layers=1, bidirectional=True)

        # Maps the output of the LSTM into tag space.
        self.hidden2tag = nn.Linear(hidden_dim, self.tagset_size)

        # Matrix of transition parameters. Entry i,j is the score of
        # transitioning *to* i *from* j.
        self.transitions = nn.Parameter(
            torch.randn(self.tagset_size, self.tagset_size))

        # These two statements enforce the constraint that we never transfer
        # to the start tag and we never transfer from the stop tag.
        self.transitions.data[tag_to_ix[START_TAG], :] = -10000
        self.transitions.data[:, tag_to_ix[STOP_TAG]] = -10000

        self.hidden = self.init_hidden()

    def init_hidden(self):
        return (torch.randn(2, 1, self.hidden_dim // 2),
                torch.randn(2, 1, self.hidden_dim // 2))

    def _forward_alg(self, feats):
        # Do the forward algorithm to compute the partition function.
        init_alphas = torch.full((1, self.tagset_size), -10000.)
        # START_TAG has all of the score.
        init_alphas[0][self.tag_to_ix[START_TAG]] = 0.

        # Wrap in a variable so that we will get automatic backprop.
        forward_var = init_alphas

        # Iterate through the sentence.
        for feat in feats:
            alphas_t = []  # The forward tensors at this timestep
            for next_tag in range(self.tagset_size):
                # Broadcast the emission score: it is the same regardless of
                # the previous tag.
                emit_score = feat[next_tag].view(
                    1, -1).expand(1, self.tagset_size)
                # The ith entry of trans_score is the score of transitioning to
                # next_tag from i.
                trans_score = self.transitions[next_tag].view(1, -1)
                # The ith entry of next_tag_var is the value for the
                # edge (i -> next_tag) before we do log-sum-exp.
                next_tag_var = forward_var + trans_score + emit_score
                # The forward variable for this tag is log-sum-exp of all the
                # scores.
                alphas_t.append(log_sum_exp(next_tag_var).view(1))
            forward_var = torch.cat(alphas_t).view(1, -1)
        terminal_var = forward_var + self.transitions[self.tag_to_ix[STOP_TAG]]
        alpha = log_sum_exp(terminal_var)
        return alpha

    def _get_lstm_features(self, sentence):
        self.hidden = self.init_hidden()
        embeds = self.word_embeds(sentence).view(len(sentence), 1, -1)
        lstm_out, self.hidden = self.lstm(embeds, self.hidden)
        lstm_out = lstm_out.view(len(sentence), self.hidden_dim)
        lstm_feats = self.hidden2tag(lstm_out)
        return lstm_feats

    def _score_sentence(self, feats, tags):
        # Gives the score of a provided tag sequence.
        score = torch.zeros(1)
        tags = torch.cat([torch.tensor([self.tag_to_ix[START_TAG]], dtype=torch.long), tags])
        for i, feat in enumerate(feats):
            score = score + \
                self.transitions[tags[i + 1], tags[i]] + feat[tags[i + 1]]
        score = score + self.transitions[self.tag_to_ix[STOP_TAG], tags[-1]]
        return score

    def _viterbi_decode(self, feats):
        backpointers = []

        # Initialize the Viterbi variables in log space.
        init_vvars = torch.full((1, self.tagset_size), -10000.)
        init_vvars[0][self.tag_to_ix[START_TAG]] = 0

        # forward_var at step i holds the Viterbi variables for step i-1.
        forward_var = init_vvars
        for feat in feats:
            bptrs_t = []  # holds the backpointers for this step
            viterbivars_t = []  # holds the Viterbi variables for this step

            for next_tag in range(self.tagset_size):
                # next_tag_var[i] holds the Viterbi variable for tag i at the
                # previous step, plus the score of transitioning
                # from tag i to next_tag.
                # We don't include the emission scores here because the max
                # does not depend on them (we add them in below).
                next_tag_var = forward_var + self.transitions[next_tag]
                best_tag_id = argmax(next_tag_var)
                bptrs_t.append(best_tag_id)
                viterbivars_t.append(next_tag_var[0][best_tag_id].view(1))
            # Now add in the emission scores, and assign forward_var to the set
            # of Viterbi variables we just computed.
            forward_var = (torch.cat(viterbivars_t) + feat).view(1, -1)
            backpointers.append(bptrs_t)

        # Transition to STOP_TAG.
        terminal_var = forward_var + self.transitions[self.tag_to_ix[STOP_TAG]]
        best_tag_id = argmax(terminal_var)
        path_score = terminal_var[0][best_tag_id]

        # Follow the back pointers to decode the best path.
        best_path = [best_tag_id]
        for bptrs_t in reversed(backpointers):
            best_tag_id = bptrs_t[best_tag_id]
            best_path.append(best_tag_id)
        # Pop off the start tag (we don't want to return that to the caller).
        start = best_path.pop()
        assert start == self.tag_to_ix[START_TAG]  # Sanity check
        best_path.reverse()
        return path_score, best_path

    def neg_log_likelihood(self, sentence, tags):
        feats = self._get_lstm_features(sentence)
        forward_score = self._forward_alg(feats)
        gold_score = self._score_sentence(feats, tags)
        return forward_score - gold_score

    def forward(self, sentence):  # don't confuse this with _forward_alg above.
        # Get the emission scores from the BiLSTM.
        lstm_feats = self._get_lstm_features(sentence)

        # Find the best path, given the features.
        score, tag_seq = self._viterbi_decode(lstm_feats)
        return score, tag_seq
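
#####################################################################
# As mentioned in the implementation notes, the inner loop over ``next_tag``
# in ``_forward_alg`` can be folded into one broadcasted operation per
# timestep. The helper below is a sketch of my own (not part of the original
# tutorial); ``forward_alg_vectorized(model, feats)`` should agree with
# ``model._forward_alg(feats)`` up to numerical precision.


def forward_alg_vectorized(model, feats):
    init_alphas = torch.full((1, model.tagset_size), -10000.)
    init_alphas[0][model.tag_to_ix[START_TAG]] = 0.

    forward_var = init_alphas
    for feat in feats:
        # scores[next_tag, prev_tag] =
        #     forward_var[prev_tag] + transitions[next_tag, prev_tag] + feat[next_tag]
        scores = forward_var + model.transitions + feat.view(-1, 1)
        forward_var = torch.logsumexp(scores, dim=1).view(1, -1)
    terminal_var = forward_var + model.transitions[model.tag_to_ix[STOP_TAG]]
    return torch.logsumexp(terminal_var, dim=1)[0]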

#####################################################################
# Run training


START_TAG = "<START>"
STOP_TAG = "<STOP>"
EMBEDDING_DIM = 5
HIDDEN_DIM = 4

# Make up some training data
training_data = [(
    "the wall street journal reported today that apple corporation made money".split(),
    "B I I I O O O B I O O".split()
), (
    "georgia tech is a university in georgia".split(),
    "B I O O O O B".split()
)]

word_to_ix = {}
for sentence, tags in training_data:
    for word in sentence:
        if word not in word_to_ix:
            word_to_ix[word] = len(word_to_ix)

tag_to_ix = {"B": 0, "I": 1, "O": 2, START_TAG: 3, STOP_TAG: 4}

model = BiLSTM_CRF(len(word_to_ix), tag_to_ix, EMBEDDING_DIM, HIDDEN_DIM)
optimizer = optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)

# Check predictions before training
with torch.no_grad():
    precheck_sent = prepare_sequence(training_data[0][0], word_to_ix)
    precheck_tags = torch.tensor([tag_to_ix[t] for t in training_data[0][1]], dtype=torch.long)
    print(model(precheck_sent))

# Make sure prepare_sequence from earlier in the LSTM section is loaded
for epoch in range(300):  # again, normally you would NOT do 300 epochs, it is toy data
    for sentence, tags in training_data:
        # Step 1. Remember that PyTorch accumulates gradients.
        # We need to clear them out before each instance.
        model.zero_grad()

        # Step 2. Get our inputs ready for the network, that is,
        # turn them into Tensors of word indices.
        sentence_in = prepare_sequence(sentence, word_to_ix)
        targets = torch.tensor([tag_to_ix[t] for t in tags], dtype=torch.long)

        # Step 3. Run our forward pass to compute the loss.
        loss = model.neg_log_likelihood(sentence_in, targets)

        # Step 4. Compute the gradients, and update the parameters by
        # calling optimizer.step().
        loss.backward()
        optimizer.step()

# Check predictions after training
with torch.no_grad():
    precheck_sent = prepare_sequence(training_data[0][0], word_to_ix)
    print(model(precheck_sent))
# We got it!


######################################################################
# Exercise: A new loss function for discriminative tagging
# --------------------------------------------------------
#
# It wasn't really necessary for us to create a computation graph when
# doing decoding, since we do not backpropagate from the Viterbi path
# score. Since we have it anyway, try training the tagger where the loss
# function is the difference between the Viterbi path score and the score
# of the gold-standard path. It should be clear that this function is
# non-negative and 0 when the predicted tag sequence is the correct tag
# sequence. This is essentially the *structured perceptron*.
#
# This modification should be short, since Viterbi and score\_sentence are
# already implemented. This is an example of the shape of the computation
# graph *depending on the training instance*. Although I haven't tried
# implementing this in a static toolkit, I imagine that it is possible but
# much less straightforward.
#
# Pick up some real data and do a comparison!
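
######################################################################
# One possible starting point for that exercise (a sketch of my own, not a
# reference solution): reuse ``_viterbi_decode`` and ``_score_sentence`` to
# form the margin between the best-scoring path and the gold path. Note
# that the Viterbi score must be computed *with* gradients enabled for
# this loss to train.


def structured_perceptron_loss(model, sentence_in, targets):
    feats = model._get_lstm_features(sentence_in)
    viterbi_score, _ = model._viterbi_decode(feats)
    gold_score = model._score_sentence(feats, targets)
    return viterbi_score - gold_score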