"""
`Introduction <introyt1_tutorial.html>`_ ||
`Tensors <tensors_deeper_tutorial.html>`_ ||
`Autograd <autogradyt_tutorial.html>`_ ||
`Building Models <modelsyt_tutorial.html>`_ ||
**TensorBoard Support** ||
`Training Models <trainingyt.html>`_ ||
`Model Understanding <captumyt.html>`_

PyTorch TensorBoard Support
===========================

Follow along with the video below or on `youtube <https://www.youtube.com/watch?v=6CEld3hZgqc>`__.

.. raw:: html

   <div style="margin-top:10px; margin-bottom:10px;">
     <iframe width="560" height="315" src="https://www.youtube.com/embed/6CEld3hZgqc" frameborder="0" allow="accelerometer; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
   </div>

Before You Start
----------------

To run this tutorial, you’ll need to install PyTorch, TorchVision,
Matplotlib, and TensorBoard.

With ``conda``:

.. code-block:: sh

   conda install pytorch torchvision -c pytorch
   conda install matplotlib tensorboard

With ``pip``:

.. code-block:: sh

   pip install torch torchvision matplotlib tensorboard

Once the dependencies are installed, restart this notebook in the Python
environment where you installed them.
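
As a quick, optional sanity check, you can confirm that the restarted
kernel sees the packages you just installed. The snippet below assumes
only that each package exposes a ``__version__`` attribute:

.. code-block:: python

   import torch, torchvision, matplotlib, tensorboard
   print(torch.__version__, torchvision.__version__,
         matplotlib.__version__, tensorboard.__version__)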


Introduction
------------

In this notebook, we’ll be training a variant of LeNet-5 against the
Fashion-MNIST dataset. Fashion-MNIST is a set of image tiles depicting
various garments, with ten class labels indicating the type of garment
depicted.

"""

# PyTorch model and training necessities
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

# Image datasets and image manipulation
import torchvision
import torchvision.transforms as transforms

# Image display
import matplotlib.pyplot as plt
import numpy as np

# PyTorch TensorBoard support
from torch.utils.tensorboard import SummaryWriter

# In case you are using an environment that has TensorFlow installed,
# such as Google Colab, uncomment the following code to avoid
# a bug with saving embeddings to your TensorBoard directory

# import tensorflow as tf
# import tensorboard as tb
# tf.io.gfile = tb.compat.tensorflow_stub.io.gfile

######################################################################
# Showing Images in TensorBoard
# -----------------------------
#
# Let’s start by adding sample images from our dataset to TensorBoard:
#

# Gather datasets and prepare them for consumption
transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5,), (0.5,))])

# Store separate training and validation splits in ./data
training_set = torchvision.datasets.FashionMNIST('./data',
                                                 download=True,
                                                 train=True,
                                                 transform=transform)
validation_set = torchvision.datasets.FashionMNIST('./data',
                                                   download=True,
                                                   train=False,
                                                   transform=transform)

training_loader = torch.utils.data.DataLoader(training_set,
                                              batch_size=4,
                                              shuffle=True,
                                              num_workers=2)

validation_loader = torch.utils.data.DataLoader(validation_set,
                                                batch_size=4,
                                                shuffle=False,
                                                num_workers=2)

# Class labels
classes = ('T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
           'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle Boot')

# Helper function for inline image display
def matplotlib_imshow(img, one_channel=False):
    if one_channel:
        img = img.mean(dim=0)
    img = img / 2 + 0.5     # unnormalize
    npimg = img.numpy()
    if one_channel:
        plt.imshow(npimg, cmap="Greys")
    else:
        plt.imshow(np.transpose(npimg, (1, 2, 0)))

# Extract a batch of 4 images
dataiter = iter(training_loader)
images, labels = next(dataiter)

# Create a grid from the images and show them
img_grid = torchvision.utils.make_grid(images)
matplotlib_imshow(img_grid, one_channel=True)

########################################################################
# Above, we used TorchVision and Matplotlib to create a visual grid of a
# minibatch of our input data. Below, we use the ``add_image()`` call on
# ``SummaryWriter`` to log the image for consumption by TensorBoard, and
# we also call ``flush()`` to make sure it’s written to disk right away.
#

# Default log_dir argument is "runs" - but it's good to be specific
# torch.utils.tensorboard.SummaryWriter is imported above
writer = SummaryWriter('runs/fashion_mnist_experiment_1')

# Write image data to TensorBoard log dir
writer.add_image('Four Fashion-MNIST Images', img_grid)
writer.flush()

# To view, start TensorBoard on the command line with:
#   tensorboard --logdir=runs
# ...and open a browser tab to http://localhost:6006/
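
# If you are running this tutorial as a notebook (in Jupyter or Colab, for
# example), you can also display TensorBoard inline rather than in a separate
# browser tab. This is an optional aside; it assumes the ``tensorboard``
# package's notebook magics are available in your environment. Uncomment to
# try it:
#
# %load_ext tensorboard
# %tensorboard --logdir runs
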

##########################################################################
# If you start TensorBoard at the command line and open it in a new
# browser tab (usually at `localhost:6006 <http://localhost:6006>`__), you should
# see the image grid under the IMAGES tab.
#
# Graphing Scalars to Visualize Training
# --------------------------------------
#
# TensorBoard is useful for tracking the progress and efficacy of your
# training. Below, we’ll run a training loop, track some metrics, and save
# the data for TensorBoard’s consumption.
#
# Let’s define a model to categorize our image tiles, and an optimizer and
# loss function for training:
#

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 4 * 4, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 4 * 4)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x


net = Net()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

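
#########################################################################
# (Optional aside) If you end up comparing runs with different
# hyperparameters, ``SummaryWriter`` also provides ``add_hparams()``, which
# logs a set of hyperparameters together with summary metrics so that runs
# can be compared in TensorBoard's HPARAMS tab. A minimal sketch using the
# learning rate and momentum chosen above and a placeholder metric value;
# uncomment to try it:
#
# writer.add_hparams({'lr': 0.001, 'momentum': 0.9},
#                    {'hparam/final_loss': 0.0})
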
##########################################################################
# Now let’s train a single epoch, and evaluate the training vs. validation
# set losses every 1000 batches:
#

print(len(validation_loader))
for epoch in range(1):  # loop over the dataset multiple times
    running_loss = 0.0

    for i, data in enumerate(training_loader, 0):
        # basic training loop
        inputs, labels = data
        optimizer.zero_grad()
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()
        if i % 1000 == 999:    # Every 1000 mini-batches...
            print('Batch {}'.format(i + 1))
            # Check against the validation set
            running_vloss = 0.0

            # In evaluation mode some model-specific operations can be omitted, e.g. dropout layers
            net.train(False)  # Switch to evaluation mode, e.g. turning off regularisation
            for j, vdata in enumerate(validation_loader, 0):
                vinputs, vlabels = vdata
                voutputs = net(vinputs)
                vloss = criterion(voutputs, vlabels)
                running_vloss += vloss.item()
            net.train(True)  # Switch back to training mode, e.g. turning on regularisation

            avg_loss = running_loss / 1000
            avg_vloss = running_vloss / len(validation_loader)

            # Log the running loss averaged per batch
            writer.add_scalars('Training vs. Validation Loss',
                               {'Training': avg_loss, 'Validation': avg_vloss},
                               epoch * len(training_loader) + i)

            running_loss = 0.0
print('Finished Training')

writer.flush()

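
#########################################################################
# As an aside, ``add_scalars()`` groups several related curves under one
# heading. If you only want a single curve, ``SummaryWriter`` also has the
# singular ``add_scalar(tag, value, step)``. A minimal sketch that logs just
# the last averaged training loss computed above (the tag name here is only
# illustrative); uncomment to try it:
#
# writer.add_scalar('Loss/train', avg_loss, epoch * len(training_loader) + i)
# writer.flush()
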
#########################################################################
# Switch to your open TensorBoard and have a look at the SCALARS tab.
#
# Visualizing Your Model
# ----------------------
#
# TensorBoard can also be used to examine the data flow within your model.
# To do this, call the ``add_graph()`` method with a model and sample
# input:
#

# Again, grab a single mini-batch of images
dataiter = iter(training_loader)
images, labels = next(dataiter)

# add_graph() will trace the sample input through your model,
# and render it as a graph.
writer.add_graph(net, images)
writer.flush()

#########################################################################
# When you switch over to TensorBoard, you should see a GRAPHS tab.
# Double-click the “NET” node to see the layers and data flow within your
# model.
#
# Visualizing Your Dataset with Embeddings
# ----------------------------------------
#
# The 28-by-28 image tiles we’re using can be modeled as 784-dimensional
# vectors (28 \* 28 = 784). It can be instructive to project this to a
# lower-dimensional representation. The ``add_embedding()`` method does
# this automatically, projecting a set of data onto the three dimensions
# with highest variance and displaying the result as an interactive 3D chart.
#
# Below, we’ll take a sample of our data, and generate such an embedding:
#

# Select a random subset of data and corresponding labels
def select_n_random(data, labels, n=100):
    assert len(data) == len(labels)

    perm = torch.randperm(len(data))
    return data[perm][:n], labels[perm][:n]

# Extract a random subset of data
images, labels = select_n_random(training_set.data, training_set.targets)

# get the class labels for each image
class_labels = [classes[label] for label in labels]

# log embeddings
features = images.view(-1, 28 * 28)
writer.add_embedding(features,
                     metadata=class_labels,
                     label_img=images.unsqueeze(1))
writer.flush()
writer.close()

#######################################################################
# Now if you switch to TensorBoard and select the PROJECTOR tab, you
# should see a 3D representation of the projection. You can rotate and
# zoom the model. Examine it at large and small scales, and see whether
# you can spot patterns in the projected data and the clustering of
# labels.
#
# For better visibility, it’s recommended to:
#
# - Select “label” from the “Color by” drop-down on the left.
# - Toggle the Night Mode icon along the top to place the
#   light-colored images on a dark background.
#
# Other Resources
# ---------------
#
# For more information, have a look at:
#
# - PyTorch documentation on `torch.utils.tensorboard.SummaryWriter <https://pytorch.org/docs/stable/tensorboard.html?highlight=summarywriter>`__
# - TensorBoard tutorial content in the `PyTorch.org Tutorials <https://pytorch.org/tutorials/>`__
# - The `TensorBoard documentation <https://www.tensorflow.org/tensorboard>`__