GitHub Repository: pytorch/tutorials
Path: blob/main/beginner_source/blitz/cifar10_tutorial.py
# -*- coding: utf-8 -*-
"""
Training a Classifier
=====================

This is it. You have seen how to define neural networks, compute loss and make
updates to the weights of the network.

Now you might be thinking,

What about data?
----------------

Generally, when you have to deal with image, text, audio or video data,
you can use standard Python packages that load the data into a NumPy array.
Then you can convert this array into a ``torch.*Tensor``.

- For images, packages such as Pillow and OpenCV are useful
- For audio, packages such as SciPy and librosa
- For text, either raw Python or Cython based loading, or NLTK and
  SpaCy are useful

Specifically for vision, we have created a package called
``torchvision``, that has data loaders for common datasets such as
ImageNet, CIFAR10, MNIST, etc. and data transformers for images, viz.,
``torchvision.datasets`` and ``torch.utils.data.DataLoader``.

This provides a huge convenience and avoids writing boilerplate code.

For this tutorial, we will use the CIFAR10 dataset.
It has the classes: ‘airplane’, ‘automobile’, ‘bird’, ‘cat’, ‘deer’,
‘dog’, ‘frog’, ‘horse’, ‘ship’, ‘truck’. The images in CIFAR-10 are of
size 3x32x32, i.e. 3-channel color images of 32x32 pixels in size.

.. figure:: /_static/img/cifar10.png
   :alt: cifar10

   cifar10


Training an image classifier
----------------------------

We will do the following steps in order:

1. Load and normalize the CIFAR10 training and test datasets using
   ``torchvision``
2. Define a Convolutional Neural Network
3. Define a loss function
4. Train the network on the training data
5. Test the network on the test data

1. Load and normalize CIFAR10
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Using ``torchvision``, it’s extremely easy to load CIFAR10.
"""
import torch
import torchvision
from torchvision.transforms import v2

########################################################################
# The outputs of torchvision datasets are PILImage images of range [0, 1].
# We transform them to Tensors of normalized range [-1, 1].

########################################################################
# .. note::
#     If you are running this tutorial on Windows or macOS and encounter a
#     BrokenPipeError or RuntimeError related to multiprocessing, try setting
#     the num_workers argument of torch.utils.data.DataLoader() to 0.

transform = v2.Compose([
    v2.ToImage(),
    v2.ToDtype(torch.float32, scale=True),
    v2.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
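########################################################################
# As a quick sanity check (a sketch, not part of the original tutorial):
# ``Normalize`` with mean 0.5 and std 0.5 maps each pixel value x in
# [0, 1] to (x - 0.5) / 0.5, which is exactly the range [-1, 1].

```python
# Normalize(mean, std) computes (x - mean) / std per channel; with
# mean = std = 0.5 the endpoints of [0, 1] land on -1 and 1.
def normalize(x, mean=0.5, std=0.5):
    return (x - mean) / std

print(normalize(0.0))   # -1.0 (darkest pixel)
print(normalize(0.5))   # 0.0  (mid-gray)
print(normalize(1.0))   # 1.0  (brightest pixel)
```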
batch_size = 4

trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size,
                                          shuffle=True, num_workers=2)

testset = torchvision.datasets.CIFAR10(root='./data', train=False,
                                       download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size,
                                         shuffle=False, num_workers=2)

classes = ('plane', 'car', 'bird', 'cat',
           'deer', 'dog', 'frog', 'horse', 'ship', 'truck')

########################################################################
# Let us show some of the training images, for fun.

import matplotlib.pyplot as plt
import numpy as np

# function to show an image


def imshow(img):
    img = img / 2 + 0.5     # unnormalize
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
    plt.show()


# get some random training images
dataiter = iter(trainloader)
images, labels = next(dataiter)

# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join(f'{classes[labels[j]]:5s}' for j in range(batch_size)))


########################################################################
# 2. Define a Convolutional Neural Network
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# Copy the neural network from the Neural Networks section before and modify it to
# take 3-channel images (instead of 1-channel images as it was defined).

import torch.nn as nn
import torch.nn.functional as F


class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = torch.flatten(x, 1)  # flatten all dimensions except batch
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
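########################################################################
# The ``16 * 5 * 5`` input size of ``fc1`` follows from simple shape
# arithmetic (a quick check, not part of the original tutorial): each 5x5
# convolution with no padding shrinks a side by 4, and each 2x2 max-pool
# halves it, so 32 -> 28 -> 14 -> 10 -> 5 per spatial side.

```python
def conv_out(size, kernel):
    # 'valid' convolution, stride 1, no padding
    return size - kernel + 1

def pool_out(size, window=2):
    # non-overlapping max-pool halves each spatial side
    return size // window

side = 32                            # CIFAR-10 images are 32x32
side = pool_out(conv_out(side, 5))   # conv1 + pool: 32 -> 28 -> 14
side = pool_out(conv_out(side, 5))   # conv2 + pool: 14 -> 10 -> 5
print(side, 16 * side * side)        # 5 400, matching nn.Linear(16 * 5 * 5, 120)
```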


net = Net()

########################################################################
# 3. Define a Loss function and optimizer
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# Let's use a Classification Cross-Entropy loss and SGD with momentum.

import torch.optim as optim

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
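########################################################################
# As a brief aside (a sketch, not part of the original tutorial):
# ``nn.CrossEntropyLoss`` combines log-softmax and negative log-likelihood,
# i.e. for raw scores z and true class y it computes -log(softmax(z)[y]).
# The made-up scores below are purely illustrative.

```python
import math

def cross_entropy(logits, target):
    # -log(softmax(logits)[target]), via the log-sum-exp trick for stability
    m = max(logits)
    log_sum_exp = m + math.log(sum(math.exp(z - m) for z in logits))
    return log_sum_exp - logits[target]

# made-up scores for a 3-class problem; class 0 is the true class
print(round(cross_entropy([2.0, 0.5, -1.0], 0), 4))  # 0.2413
```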

########################################################################
# 4. Train the network
# ^^^^^^^^^^^^^^^^^^^^
#
# This is when things start to get interesting.
# We simply have to loop over our data iterator, and feed the inputs to the
# network and optimize.

for epoch in range(2):  # loop over the dataset multiple times

    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:    # print every 2000 mini-batches
            print(f'[{epoch + 1}, {i + 1:5d}] loss: {running_loss / 2000:.3f}')
            running_loss = 0.0

print('Finished Training')

########################################################################
# Let's quickly save our trained model:

PATH = './cifar_net.pt'
torch.save(net.state_dict(), PATH)

########################################################################
# See `here <https://pytorch.org/docs/stable/notes/serialization.html>`_
# for more details on saving PyTorch models.
#
# 5. Test the network on the test data
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
#
# We have trained the network for 2 passes over the training dataset.
# But we need to check if the network has learnt anything at all.
#
# We will check this by predicting the class label that the neural network
# outputs, and checking it against the ground-truth. If the prediction is
# correct, we add the sample to the list of correct predictions.
#
# Okay, first step. Let us display an image from the test set to get familiar.

dataiter = iter(testloader)
images, labels = next(dataiter)

# print images
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join(f'{classes[labels[j]]:5s}' for j in range(4)))

########################################################################
# Next, let's load back in our saved model (note: saving and re-loading the model
# wasn't necessary here, we only did it to illustrate how to do so):

net = Net()
net.load_state_dict(torch.load(PATH, weights_only=True))

########################################################################
# Okay, now let us see what the neural network thinks these examples above are:

outputs = net(images)

########################################################################
# The outputs are energies for the 10 classes.
# The higher the energy for a class, the more the network
# thinks that the image is of the particular class.
# So, let's get the index of the highest energy:
_, predicted = torch.max(outputs, 1)

print('Predicted: ', ' '.join(f'{classes[predicted[j]]:5s}'
                              for j in range(4)))
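########################################################################
# If you prefer probability-like scores over raw energies, you can push the
# outputs through a softmax; the winning index is unchanged because softmax
# is monotonic. A small sketch (not part of the original tutorial; the
# ``scores`` list is a made-up stand-in for one row of ``outputs``):

```python
import math

def softmax(energies):
    # subtract the max for numerical stability; the result sums to 1
    m = max(energies)
    exps = [math.exp(e - m) for e in energies]
    total = sum(exps)
    return [e / total for e in exps]

# made-up energies for one image over 10 classes
scores = [0.1, 2.0, -1.3, 0.4, 0.0, 0.9, -0.2, 1.1, 0.3, -0.5]
probs = softmax(scores)
print(max(range(10), key=probs.__getitem__))  # 1, same index as the max energy
```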

########################################################################
# The results seem pretty good.
#
# Let us look at how the network performs on the whole dataset.

correct = 0
total = 0
# since we're not training, we don't need to calculate the gradients for our outputs
with torch.no_grad():
    for data in testloader:
        images, labels = data
        # calculate outputs by running images through the network
        outputs = net(images)
        # the class with the highest energy is what we choose as prediction
        _, predicted = torch.max(outputs, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print(f'Accuracy of the network on the 10000 test images: {100 * correct // total} %')

########################################################################
# That looks way better than chance, which is 10% accuracy (randomly picking
# a class out of 10 classes).
# Seems like the network learnt something.
#
# Hmmm, what are the classes that performed well, and the classes that did
# not perform well:

# prepare to count predictions for each class
correct_pred = {classname: 0 for classname in classes}
total_pred = {classname: 0 for classname in classes}

# again no gradients needed
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predictions = torch.max(outputs, 1)
        # collect the correct predictions for each class
        for label, prediction in zip(labels, predictions):
            if label == prediction:
                correct_pred[classes[label]] += 1
            total_pred[classes[label]] += 1


# print accuracy for each class
for classname, correct_count in correct_pred.items():
    accuracy = 100 * float(correct_count) / total_pred[classname]
    print(f'Accuracy for class: {classname:5s} is {accuracy:.1f} %')

########################################################################
# Okay, so what next?
#
# How do we run these neural networks on the GPU?
#
# Training on GPU
# ----------------
# Just like how you transfer a Tensor onto the GPU, you transfer the neural
# net onto the GPU.
#
# Let's first define our device as the first visible cuda device if we have
# CUDA available:

device = torch.device(torch.accelerator.current_accelerator().type if torch.accelerator.is_available() else 'cpu')

# Assuming that we are on a CUDA machine, this should print a CUDA device:

print(device)

########################################################################
# The rest of this section assumes that ``device`` is a CUDA device.
#
# Then these methods will recursively go over all modules and convert their
# parameters and buffers to CUDA tensors:
#
# .. code:: python
#
#     net.to(device)
#
#
# Remember that you will have to send the inputs and targets at every step
# to the GPU too:
#
# .. code:: python
#
#     inputs, labels = data[0].to(device), data[1].to(device)
#
# Why don't I notice MASSIVE speedup compared to CPU? Because your network
# is really small.
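########################################################################
# Putting those two pieces together, a device-agnostic training step looks
# like the sketch below (not part of the original tutorial). It uses a tiny
# ``nn.Linear`` model and made-up data so it runs anywhere, and the classic
# ``torch.cuda.is_available()`` check instead of the accelerator API above.

```python
import torch
import torch.nn as nn

# pick the GPU when present, otherwise fall back to the CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = nn.Linear(4, 2).to(device)       # .to(device) moves every parameter
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# one training step on made-up data; inputs and targets must be moved too
inputs = torch.randn(8, 4).to(device)
targets = torch.randint(0, 2, (8,)).to(device)

optimizer.zero_grad()
loss = criterion(model(inputs), targets)
loss.backward()
optimizer.step()

# model parameters and data now live on the same device
print(next(model.parameters()).device.type == inputs.device.type)  # True
```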
#
# **Exercise:** Try increasing the width of your network (argument 2 of
# the first ``nn.Conv2d``, and argument 1 of the second ``nn.Conv2d`` –
# they need to be the same number), see what kind of speedup you get.
#
# **Goals achieved**:
#
# - Understanding PyTorch's Tensor library and neural networks at a high level.
# - Train a small neural network to classify images
#
# Training on multiple GPUs
# -------------------------
# If you want to see even more MASSIVE speedup using all of your GPUs,
# please check out :doc:`data_parallel_tutorial`.
#
# Where do I go next?
# -------------------
#
# - :doc:`Train neural nets to play video games </intermediate/reinforcement_q_learning>`
# - `Train a state-of-the-art ResNet network on imagenet`_
# - `Train a face generator using Generative Adversarial Networks`_
# - `Train a word-level language model using Recurrent LSTM networks`_
# - `More examples`_
# - `More tutorials`_
# - `Discuss PyTorch on the Forums`_
# - `Chat with other users on Slack`_
#
# .. _Train a state-of-the-art ResNet network on imagenet: https://github.com/pytorch/examples/tree/main/imagenet
# .. _Train a face generator using Generative Adversarial Networks: https://github.com/pytorch/examples/tree/main/dcgan
# .. _Train a word-level language model using Recurrent LSTM networks: https://github.com/pytorch/examples/tree/main/word_language_model
# .. _More examples: https://github.com/pytorch/examples
# .. _More tutorials: https://github.com/pytorch/tutorials
# .. _Discuss PyTorch on the Forums: https://discuss.pytorch.org/
# .. _Chat with other users on Slack: https://pytorch.slack.com/messages/beginner/

# %%%%%%INVISIBLE_CODE_BLOCK%%%%%%
del dataiter
# %%%%%%INVISIBLE_CODE_BLOCK%%%%%%