# -*- coding: utf-8 -*-
"""
Training a Classifier
=====================

This is it. You have seen how to define neural networks, compute loss and make
updates to the weights of the network.

Now you might be thinking,

What about data?
----------------

Generally, when you have to deal with image, text, audio or video data,
you can use standard Python packages that load the data into a numpy array.
Then you can convert this array into a ``torch.*Tensor``.

- For images, packages such as Pillow and OpenCV are useful
- For audio, packages such as scipy and librosa
- For text, either raw Python or Cython-based loading, or NLTK and
  SpaCy are useful

Specifically for vision, we have created a package called
``torchvision``, which has data loaders for common datasets such as
ImageNet, CIFAR10 and MNIST, as well as data transformers for images,
namely ``torchvision.datasets`` and ``torch.utils.data.DataLoader``.

This provides a huge convenience and avoids writing boilerplate code.

For this tutorial, we will use the CIFAR10 dataset.
It has the classes: ‘airplane’, ‘automobile’, ‘bird’, ‘cat’, ‘deer’,
‘dog’, ‘frog’, ‘horse’, ‘ship’, ‘truck’. The images in CIFAR-10 are of
size 3x32x32, i.e. 3-channel color images of 32x32 pixels in size.

.. figure:: /_static/img/cifar10.png
   :alt: cifar10

   cifar10


Training an image classifier
----------------------------

We will do the following steps in order:

1. Load and normalize the CIFAR10 training and test datasets using
   ``torchvision``
2. Define a Convolutional Neural Network
3. Define a loss function
4. Train the network on the training data
5. Test the network on the test data

1. Load and normalize CIFAR10
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Using ``torchvision``, it’s extremely easy to load CIFAR10.
"""
import torch
import torchvision
import torchvision.transforms as transforms

########################################################################
# The output of torchvision datasets is PILImage images of range [0, 1].
# We transform them to Tensors of normalized range [-1, 1].

########################################################################
# .. note::
#     If running on Windows and you get a BrokenPipeError, try setting
#     the ``num_workers`` of ``torch.utils.data.DataLoader()`` to 0.

transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
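
########################################################################
# As a quick sanity check (illustrative only, not needed for training):
# ``Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))`` applies
# ``output = (input - 0.5) / 0.5``, i.e. ``2 * input - 1``, per channel,
# so pixel values 0.0 and 1.0 map to -1.0 and 1.0 respectively.

pixel_values = torch.tensor([0.0, 0.5, 1.0])
print((pixel_values - 0.5) / 0.5)  # tensor([-1., 0., 1.])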

batch_size = 4

trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size,
                                          shuffle=True, num_workers=2)

testset = torchvision.datasets.CIFAR10(root='./data', train=False,
                                       download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size,
                                         shuffle=False, num_workers=2)

classes = ('plane', 'car', 'bird', 'cat',
           'deer', 'dog', 'frog', 'horse', 'ship', 'truck')

########################################################################
# Let us show some of the training images, for fun.

import matplotlib.pyplot as plt
import numpy as np

# function to show an image


def imshow(img):
    img = img / 2 + 0.5     # unnormalize
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
    plt.show()


# get some random training images
dataiter = iter(trainloader)
images, labels = next(dataiter)

# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join(f'{classes[labels[j]]:5s}' for j in range(batch_size)))

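########################################################################
# ``make_grid`` above stitched the batch of [3, 32, 32] images into one
# [3, H, W] image tensor, which is why a single ``imshow`` call could
# display all four images side by side.
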
########################################################################
# 2. Define a Convolutional Neural Network
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# Copy the neural network from the earlier Neural Networks section and
# modify it to take 3-channel images (instead of the 1-channel images it
# was originally defined for).

import torch.nn as nn
import torch.nn.functional as F


class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = torch.flatten(x, 1)  # flatten all dimensions except batch
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x


net = Net()

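########################################################################
# Why ``16 * 5 * 5`` inputs for ``fc1``? The arithmetic follows from the
# layers above: a 32x32 image becomes 28x28 after ``conv1`` (kernel 5,
# no padding), 14x14 after the first pool, 10x10 after ``conv2``, and
# 5x5 after the second pool, with 16 channels. A quick sanity check you
# can run:

print(net(torch.zeros(1, 3, 32, 32)).shape)  # torch.Size([1, 10])
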
########################################################################
# 3. Define a Loss function and optimizer
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# Let's use a Classification Cross-Entropy loss and SGD with momentum.

import torch.optim as optim

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

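########################################################################
# Note that ``nn.CrossEntropyLoss`` expects raw, unnormalized scores
# (logits) and applies log-softmax internally, which is why ``Net`` has
# no final softmax layer. A tiny illustrative check:

dummy_logits = torch.randn(1, 10)  # one sample, 10 class scores
dummy_target = torch.tensor([3])   # ground-truth class index
print(criterion(dummy_logits, dummy_target))  # a scalar loss
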
########################################################################
# 4. Train the network
# ^^^^^^^^^^^^^^^^^^^^
#
# This is when things start to get interesting.
# We simply have to loop over our data iterator, feed the inputs to the
# network, and optimize.

for epoch in range(2):  # loop over the dataset multiple times

    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:    # print every 2000 mini-batches
            print(f'[{epoch + 1}, {i + 1:5d}] loss: {running_loss / 2000:.3f}')
            running_loss = 0.0

print('Finished Training')

########################################################################
# Let's quickly save our trained model:

PATH = './cifar_net.pth'
torch.save(net.state_dict(), PATH)

########################################################################
# See `here <https://pytorch.org/docs/stable/notes/serialization.html>`_
# for more details on saving PyTorch models.
#
# 5. Test the network on the test data
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
#
# We have trained the network for 2 passes over the training dataset.
# But we need to check whether the network has learnt anything at all.
#
# We will check this by predicting the class label that the neural network
# outputs, and checking it against the ground truth. If the prediction is
# correct, we add the sample to the list of correct predictions.
#
# Okay, first step. Let us display an image from the test set to get familiar.

dataiter = iter(testloader)
images, labels = next(dataiter)

# print images
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join(f'{classes[labels[j]]:5s}' for j in range(4)))

########################################################################
# Next, let's load back in our saved model (note: saving and re-loading
# the model wasn't necessary here; we only did it to illustrate how to
# do so):

net = Net()
net.load_state_dict(torch.load(PATH, weights_only=True))

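########################################################################
# ``load_state_dict`` expects a model with the same architecture as the
# one that was saved, which is why we instantiate a fresh ``Net()``
# before loading the weights into it.
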
########################################################################
# Okay, now let us see what the neural network thinks these examples above are:

outputs = net(images)

########################################################################
# The outputs are energies for the 10 classes.
# The higher the energy for a class, the more the network
# thinks that the image is of that particular class.
# So, let's get the index of the highest energy:
_, predicted = torch.max(outputs, 1)

print('Predicted: ', ' '.join(f'{classes[predicted[j]]:5s}'
                              for j in range(4)))

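########################################################################
# If you want actual probabilities rather than raw energies, you can
# apply a softmax over the class dimension (an illustrative aside, not
# needed for picking the argmax):

probs = F.softmax(outputs, dim=1)
print(probs.sum(dim=1))  # each row sums to 1
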
########################################################################
# The results seem pretty good.
#
# Let us look at how the network performs on the whole dataset.

correct = 0
total = 0
# since we're not training, we don't need to calculate the gradients for our outputs
with torch.no_grad():
    for data in testloader:
        images, labels = data
        # calculate outputs by running images through the network
        outputs = net(images)
        # the class with the highest energy is what we choose as prediction
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print(f'Accuracy of the network on the 10000 test images: {100 * correct // total} %')

########################################################################
# That looks way better than chance, which is 10% accuracy (randomly picking
# a class out of 10 classes).
# Seems like the network learnt something.
#
# Hmmm, which classes performed well, and which did not?

# prepare to count predictions for each class
correct_pred = {classname: 0 for classname in classes}
total_pred = {classname: 0 for classname in classes}

# again no gradients needed
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predictions = torch.max(outputs, 1)
        # collect the correct predictions for each class
        for label, prediction in zip(labels, predictions):
            if label == prediction:
                correct_pred[classes[label]] += 1
            total_pred[classes[label]] += 1


# print accuracy for each class
for classname, correct_count in correct_pred.items():
    accuracy = 100 * float(correct_count) / total_pred[classname]
    print(f'Accuracy for class: {classname:5s} is {accuracy:.1f} %')

########################################################################
# Okay, so what next?
#
# How do we run these neural networks on the GPU?
#
# Training on GPU
# ----------------
# Just as you transfer a Tensor onto the GPU, you transfer the neural
# net onto the GPU.
#
# Let's first define our device as the first visible cuda device if we
# have CUDA available:

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

# Assuming that we are on a CUDA machine, this should print a CUDA device:

print(device)

########################################################################
# The rest of this section assumes that ``device`` is a CUDA device.
#
# Then these methods will recursively go over all modules and convert their
# parameters and buffers to CUDA tensors:
#
# .. code:: python
#
#     net.to(device)
#
#
# Remember that you will have to send the inputs and targets at every step
# to the GPU too:
#
# .. code:: python
#
#     inputs, labels = data[0].to(device), data[1].to(device)
#
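# Putting the two together, one device-aware pass over the training data
# would look like this (a minimal sketch reusing the ``net``,
# ``criterion`` and ``optimizer`` defined earlier):
#
# .. code:: python
#
#     net.to(device)
#     for data in trainloader:
#         inputs, labels = data[0].to(device), data[1].to(device)
#         optimizer.zero_grad()
#         loss = criterion(net(inputs), labels)
#         loss.backward()
#         optimizer.step()
#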
# Why don't I notice a MASSIVE speedup compared to CPU? Because your network
# is really small.
#
# **Exercise:** Try increasing the width of your network (argument 2 of
# the first ``nn.Conv2d``, and argument 1 of the second ``nn.Conv2d`` –
# they need to be the same number) and see what kind of speedup you get;
# one possible widening is sketched below.
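#
# For example (one possible widening, chosen here purely for
# illustration), doubling the intermediate channel count from 6 to 12
# means changing both layers consistently; ``fc1`` is unaffected because
# ``conv2`` still outputs 16 channels:
#
# .. code:: python
#
#     self.conv1 = nn.Conv2d(3, 12, 5)   # was nn.Conv2d(3, 6, 5)
#     self.conv2 = nn.Conv2d(12, 16, 5)  # was nn.Conv2d(6, 16, 5)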
#
# **Goals achieved**:
#
# - Understand PyTorch's Tensor library and neural networks at a high level.
# - Train a small neural network to classify images
#
# Training on multiple GPUs
# -------------------------
# If you want to see an even more MASSIVE speedup using all of your GPUs,
# please check out :doc:`data_parallel_tutorial`.
#
# Where do I go next?
# -------------------
#
# - :doc:`Train neural nets to play video games </intermediate/reinforcement_q_learning>`
# - `Train a state-of-the-art ResNet network on imagenet`_
# - `Train a face generator using Generative Adversarial Networks`_
# - `Train a word-level language model using Recurrent LSTM networks`_
# - `More examples`_
# - `More tutorials`_
# - `Discuss PyTorch on the Forums`_
# - `Chat with other users on Slack`_
#
# .. _Train a state-of-the-art ResNet network on imagenet: https://github.com/pytorch/examples/tree/master/imagenet
# .. _Train a face generator using Generative Adversarial Networks: https://github.com/pytorch/examples/tree/master/dcgan
# .. _Train a word-level language model using Recurrent LSTM networks: https://github.com/pytorch/examples/tree/master/word_language_model
# .. _More examples: https://github.com/pytorch/examples
# .. _More tutorials: https://github.com/pytorch/tutorials
# .. _Discuss PyTorch on the Forums: https://discuss.pytorch.org/
# .. _Chat with other users on Slack: https://pytorch.slack.com/messages/beginner/

# %%%%%%INVISIBLE_CODE_BLOCK%%%%%%
del dataiter
# %%%%%%INVISIBLE_CODE_BLOCK%%%%%%