Logistic Regression in Theano
Credits: Forked from summerschool2015 by mila-udem
This notebook is inspired by the tutorial on logistic regression on deeplearning.net.
In this notebook, we show how Theano can be used to implement the most basic classifier: logistic regression. We start off with a quick primer of the model, which serves both as a refresher and as a way to anchor the notation and show how mathematical expressions are mapped onto Theano graphs.
In the deepest of machine learning traditions, this tutorial will tackle the exciting problem of MNIST digit classification.
Get the data
For now, let's just download a pre-packaged version of MNIST, and load each split of the dataset as NumPy ndarrays.
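Below is a minimal sketch of one way to do this, assuming the `mnist.pkl.gz` pickle distributed with the deeplearning.net tutorials; the URL and file layout are assumptions, not part of this notebook.

```python
import gzip
import os
import pickle
from urllib.request import urlretrieve

url = 'http://deeplearning.net/data/mnist/mnist.pkl.gz'  # assumed location
filename = 'mnist.pkl.gz'
if not os.path.exists(filename):
    urlretrieve(url, filename)

with gzip.open(filename, 'rb') as f:
    # The file is a Python 2 pickle; 'latin1' lets Python 3 read it.
    train_set, valid_set, test_set = pickle.load(f, encoding='latin1')

# Each split is a (data, labels) pair of NumPy ndarrays:
# data has shape (n_examples, 784), labels has shape (n_examples,).
train_set_x, train_set_y = train_set
valid_set_x, valid_set_y = valid_set
test_set_x, test_set_y = test_set
```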
The model
Logistic regression is a probabilistic, linear classifier. It is parametrized by a weight matrix $W$ and a bias vector $b$. Classification is done by projecting an input vector onto a set of hyperplanes, each of which corresponds to a class. The distance from the input to a hyperplane reflects the probability that the input is a member of the corresponding class.
Mathematically, the probability that an input vector $x$ is a member of a class $i$, a value of a stochastic variable $Y$, can be written as:

$$P(Y=i \mid x, W, b) = \mathrm{softmax}_i(W x + b) = \frac{e^{W_i x + b_i}}{\sum_j e^{W_j x + b_j}}$$
The model's prediction $y_{\mathrm{pred}}$ is the class whose probability is maximal, specifically:

$$y_{\mathrm{pred}} = \operatorname{argmax}_i P(Y=i \mid x, W, b)$$
Now, let us define our input variables. First, we need to define the dimensions of our tensors:

- `n_in` is the length of each training vector,
- `n_out` is the number of classes.
Our variables will be:

- `x` is a matrix, where each row contains a different example of the dataset. Its shape is `(batch_size, n_in)`, but `batch_size` does not have to be specified in advance, and can change during training.
- `W` is a shared matrix, of shape `(n_in, n_out)`, initialized with zeros. Column `k` of `W` represents the separation hyperplane for class `k`.
- `b` is a shared vector, of length `n_out`, initialized with zeros. Element `k` of `b` represents the free parameter of hyperplane `k`.
Now, we can build a symbolic expression for the matrix of class-membership probabilities (`p_y_given_x`), and for the class whose probability is maximal (`y_pred`).
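As a minimal sketch (assuming standard Theano), these variables and expressions could be declared as follows:

```python
import numpy as np
import theano
import theano.tensor as T

n_in = 28 * 28  # MNIST images are 28x28 pixels, flattened into vectors
n_out = 10      # ten digit classes

# Symbolic matrix of examples; the batch size is left unspecified.
x = T.matrix('x')

# Shared parameters, initialized with zeros.
W = theano.shared(np.zeros((n_in, n_out), dtype=theano.config.floatX),
                  name='W')
b = theano.shared(np.zeros(n_out, dtype=theano.config.floatX),
                  name='b')

# Matrix of class-membership probabilities, one row per example.
p_y_given_x = T.nnet.softmax(T.dot(x, W) + b)

# Predicted class for each example: the one with maximal probability.
y_pred = T.argmax(p_y_given_x, axis=1)
```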
Defining a loss function
Learning optimal model parameters involves minimizing a loss function. In the case of multi-class logistic regression, it is very common to use the negative log-likelihood as the loss. This is equivalent to maximizing the likelihood of the data set $\mathcal{D}$ under the model parameterized by $\theta = \{W, b\}$. Let us first start by defining the likelihood $\mathcal{L}$ and loss $\ell$:

$$\mathcal{L}(\theta, \mathcal{D}) = \sum_{i=0}^{|\mathcal{D}|} \log P(Y = y^{(i)} \mid x^{(i)}, W, b)$$

$$\ell(\theta, \mathcal{D}) = -\mathcal{L}(\theta, \mathcal{D})$$
Again, we will express those expressions using Theano. We have one additional input, the actual target class `y`:

- `y` is an input vector of integers, of length `batch_size` (which will have to match the length of `x` at runtime). The length of `y` can be symbolically expressed by `y.shape[0]`.
- `log_prob` is a `(batch_size, n_out)` matrix containing the log probabilities of class membership for each example.
- `arange(y.shape[0])` is a symbolic vector which will contain `[0, 1, 2, ..., batch_size - 1]`.
- `log_likelihood` is a vector containing the log probability of the target class for each example.
- `loss` is the mean of the negative `log_likelihood` over the examples in the minibatch.
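A sketch of these expressions, building on the variables defined above:

```python
# Target classes, one integer per example in the minibatch.
y = T.ivector('y')

# Log probabilities of class membership for each example.
log_prob = T.log(p_y_given_x)

# For each row i, pick the log probability of the target class y[i].
log_likelihood = log_prob[T.arange(y.shape[0]), y]

# Mean negative log-likelihood over the minibatch.
loss = -log_likelihood.mean()
```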
Training procedure
This notebook will use the method of stochastic gradient descent with mini-batches (MSGD) to find values of `W` and `b` that minimize the loss.

We can let Theano compute symbolic expressions for the gradient of the loss with respect to `W` and `b`.
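For instance, as a sketch using `theano.grad` on the `loss` expression above:

```python
# Symbolic gradients of the loss with respect to the shared parameters.
g_W, g_b = theano.grad(loss, [W, b])
```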
`g_W` and `g_b` are symbolic variables, which can be used as part of a computation graph. In particular, let us define the expressions for one step of gradient descent for `W` and `b`, for a fixed learning rate.
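A sketch of such a step, with an arbitrary fixed learning rate chosen for illustration:

```python
learning_rate = 0.5  # illustrative value, not tuned

# Expressions for the parameter values after one gradient descent step.
new_W = W - learning_rate * g_W
new_b = b - learning_rate * g_b
```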
We can then define update expressions, or pairs of (shared variable, expression for its update), that we will use when compiling the Theano function. The updates will be performed each time the function gets called.
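In terms of the expressions above, the update pairs could look like this:

```python
# Pairs of (shared variable, expression for its new value); these
# assignments are applied each time the compiled function is called.
updates = [(W, new_W),
           (b, new_b)]
```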
The following function, `train_model`, returns the loss on the current minibatch, then changes the values of the shared variables according to the update rules. It needs to be passed `x` and `y` as inputs, but not the shared variables, which are implicit inputs.
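A minimal sketch of the compiled function; `allow_input_downcast` is a convenience assumption added here so the raw NumPy arrays can be fed directly:

```python
train_model = theano.function(inputs=[x, y],
                              outputs=loss,
                              updates=updates,
                              allow_input_downcast=True)
```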
The entire learning algorithm thus consists of looping over all examples in the dataset, considering all the examples in one minibatch at a time, and repeatedly calling the `train_model` function.
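A sketch of one such pass over the training set, assuming the arrays loaded earlier and an illustrative `batch_size`:

```python
batch_size = 500  # illustrative minibatch size
n_train_batches = train_set_x.shape[0] // batch_size

for minibatch_index in range(n_train_batches):
    start = minibatch_index * batch_size
    stop = start + batch_size
    minibatch_loss = train_model(train_set_x[start:stop],
                                 train_set_y[start:stop])
```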
Testing the model
When testing the model, we are interested in the number of misclassified examples (and not only in the likelihood). Here, we build a symbolic expression for retrieving the number of misclassified examples in a minibatch.
This will also be useful on the validation and test sets, in order to monitor the progress of the model during training, and to do early stopping.
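A sketch of this expression and of a compiled `test_model` function (again with `allow_input_downcast` as a convenience assumption):

```python
# Number of misclassified examples in the minibatch.
misclass_nb = T.neq(y_pred, y).sum()

test_model = theano.function(inputs=[x, y],
                             outputs=misclass_nb,
                             allow_input_downcast=True)
```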
Training the model
Here is the main training loop of the algorithm:
- For each epoch, or pass through the training set:
  - split the training set into minibatches, and call `train_model` on each minibatch
  - split the validation set into minibatches, and call `test_model` on each minibatch to measure the misclassification rate
  - if the misclassification rate has not improved in a while, stop training
- Measure performance on the test set
The early stopping procedure is what decides whether the performance has improved enough. There are many variants, and we will not go into the details of this one here.
We first need to define a few parameters for the training loop and the early stopping procedure.
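The values below are illustrative assumptions, and the loop is only a sketch of one possible early-stopping implementation built on the functions compiled above:

```python
n_epochs = 1000                # maximum number of passes over the training set
batch_size = 500               # illustrative minibatch size
patience = 5000                # look at at least this many minibatches
patience_increase = 2          # how much longer to wait after a new best
improvement_threshold = 0.995  # relative improvement considered significant

n_train_batches = train_set_x.shape[0] // batch_size
n_valid_batches = valid_set_x.shape[0] // batch_size
validation_frequency = min(n_train_batches, patience // 2)

best_validation_error = np.inf
iteration = 0
done_looping = False

for epoch in range(n_epochs):
    if done_looping:
        break
    for minibatch_index in range(n_train_batches):
        start = minibatch_index * batch_size
        train_model(train_set_x[start:start + batch_size],
                    train_set_y[start:start + batch_size])
        iteration += 1

        if iteration % validation_frequency == 0:
            # Misclassification rate over the whole validation set.
            n_errors = sum(
                test_model(valid_set_x[i * batch_size:(i + 1) * batch_size],
                           valid_set_y[i * batch_size:(i + 1) * batch_size])
                for i in range(n_valid_batches))
            validation_error = float(n_errors) / (n_valid_batches * batch_size)
            print('epoch %i, validation error %.2f%%'
                  % (epoch, validation_error * 100))

            if validation_error < best_validation_error * improvement_threshold:
                # Significant improvement: be willing to train longer.
                patience = max(patience, iteration * patience_increase)
            if validation_error < best_validation_error:
                best_validation_error = validation_error

        if iteration > patience:
            done_looping = True
            break

# Final performance on the test set.
n_test_batches = test_set_x.shape[0] // batch_size
test_errors = sum(
    test_model(test_set_x[i * batch_size:(i + 1) * batch_size],
               test_set_y[i * batch_size:(i + 1) * batch_size])
    for i in range(n_test_batches))
print('test error: %.2f%%'
      % (100.0 * test_errors / (n_test_batches * batch_size)))
```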
Visualization
You can visualize the columns of `W`, which correspond to the separation hyperplanes for each class.
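A minimal plotting sketch, using matplotlib and assuming the 28x28 MNIST inputs:

```python
import matplotlib.pyplot as plt

W_value = W.get_value()  # (784, 10) NumPy array

fig, axes = plt.subplots(1, n_out, figsize=(15, 2))
for k, ax in enumerate(axes):
    # Column k of W, reshaped back into a 28x28 image.
    ax.imshow(W_value[:, k].reshape(28, 28), cmap='gray')
    ax.set_title(str(k))
    ax.axis('off')
plt.show()
```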