"""
2
PyTorch Profiler
3
====================================
4
This recipe explains how to use PyTorch profiler and measure the time and
5
memory consumption of the model's operators.
6
7
Introduction
8
------------
9
PyTorch includes a simple profiler API that is useful when user needs
10
to determine the most expensive operators in the model.
11
12
In this recipe, we will use a simple Resnet model to demonstrate how to
13
use profiler to analyze model performance.
14
15
Setup
16
-----
17
To install ``torch`` and ``torchvision`` use the following command:
18
19
.. code-block:: sh
20
21
pip install torch torchvision
22
23
24
"""
25
26
27

######################################################################
# Steps
# -----
#
# 1. Import all necessary libraries
# 2. Instantiate a simple Resnet model
# 3. Using profiler to analyze execution time
# 4. Using profiler to analyze memory consumption
# 5. Using tracing functionality
# 6. Examining stack traces
# 7. Using profiler to analyze long-running jobs
#
# 1. Import all necessary libraries
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# In this recipe we will use ``torch``, ``torchvision.models``
# and ``profiler`` modules:
#

import torch
import torchvision.models as models
from torch.profiler import profile, record_function, ProfilerActivity

######################################################################
# 2. Instantiate a simple Resnet model
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# Let's create an instance of a Resnet model and prepare an input
# for it:
#

model = models.resnet18()
inputs = torch.randn(5, 3, 224, 224)

######################################################################
# 3. Using profiler to analyze execution time
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# PyTorch profiler is enabled through the context manager and accepts
# a number of parameters; some of the most useful are:
#
# - ``activities`` - a list of activities to profile:
#
#   - ``ProfilerActivity.CPU`` - PyTorch operators, TorchScript functions and
#     user-defined code labels (see ``record_function`` below);
#   - ``ProfilerActivity.CUDA`` - on-device CUDA kernels;
#   - ``ProfilerActivity.XPU`` - on-device XPU kernels;
#
# - ``record_shapes`` - whether to record shapes of the operator inputs;
# - ``profile_memory`` - whether to report the amount of memory consumed by
#   the model's Tensors;
#
# Note: when using CUDA, profiler also shows the runtime CUDA events
# occurring on the host.

######################################################################
# Let's see how we can use profiler to analyze the execution time:

with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
    with record_function("model_inference"):
        model(inputs)

######################################################################
# Note that we can use the ``record_function`` context manager to label
# arbitrary code ranges with user-provided names
# (``model_inference`` is used as a label in the example above).
#
# Profiler allows one to check which operators were called during the
# execution of a code range wrapped with a profiler context manager.
# If multiple profiler ranges are active at the same time (e.g. in
# parallel PyTorch threads), each profiling context manager tracks only
# the operators of its corresponding range.
# Profiler also automatically profiles asynchronous tasks launched
# with ``torch.jit._fork`` and, in case of a backward pass,
# the backward-pass operators launched with the ``backward()`` call.
#
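
######################################################################
# For instance, here is a minimal sketch that labels two phases of a run
# separately (the ``preprocess`` and ``inference`` names are arbitrary
# labels chosen for this illustration, not a PyTorch API):

with profile(activities=[ProfilerActivity.CPU]) as prof_ranges:
    with record_function("preprocess"):
        batch = inputs * 1.0  # stand-in for real preprocessing work
    with record_function("inference"):
        model(batch)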

######################################################################
# Let's print out the stats for the ``model_inference`` execution profiled above:

print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))

######################################################################
# The output will look like (omitting some columns):

# ---------------------------------  ------------  ------------  ------------  ------------
#                              Name      Self CPU     CPU total  CPU time avg    # of Calls
# ---------------------------------  ------------  ------------  ------------  ------------
#                   model_inference       5.509ms      57.503ms      57.503ms             1
#                      aten::conv2d     231.000us      31.931ms       1.597ms            20
#                 aten::convolution     250.000us      31.700ms       1.585ms            20
#                aten::_convolution     336.000us      31.450ms       1.573ms            20
#          aten::mkldnn_convolution      30.838ms      31.114ms       1.556ms            20
#                  aten::batch_norm     211.000us      14.693ms     734.650us            20
#      aten::_batch_norm_impl_index     319.000us      14.482ms     724.100us            20
#           aten::native_batch_norm       9.229ms      14.109ms     705.450us            20
#                        aten::mean     332.000us       2.631ms     125.286us            21
#                      aten::select       1.668ms       2.292ms       8.988us           255
# ---------------------------------  ------------  ------------  ------------  ------------
# Self CPU time total: 57.549ms
#

######################################################################
# Here we see that, as expected, most of the time is spent in convolution
# (and specifically in ``mkldnn_convolution`` for PyTorch compiled with
# ``MKL-DNN`` support).
# Note the difference between self CPU time and CPU time: operators can call
# other operators; self CPU time excludes time spent in child operator calls,
# while total CPU time includes it. You can sort by the self CPU time by passing
# ``sort_by="self_cpu_time_total"`` into the ``table`` call, as shown below.
#
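
######################################################################
# For example, sorting the same averaged stats by self CPU time:

print(prof.key_averages().table(sort_by="self_cpu_time_total", row_limit=10))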

######################################################################
# To get a finer granularity of results and include operator input shapes,
# pass ``group_by_input_shape=True`` (note: this requires running the
# profiler with ``record_shapes=True``):

print(prof.key_averages(group_by_input_shape=True).table(sort_by="cpu_time_total", row_limit=10))

########################################################################################
# The output might look like this (omitting some columns):
#
# .. code-block:: sh
#
#    ---------------------------------  ------------  ---------------------------------------------
#                                 Name     CPU total                                   Input Shapes
#    ---------------------------------  ------------  ---------------------------------------------
#                      model_inference      57.503ms                                             []
#                         aten::conv2d       8.008ms      [[5,64,56,56], [64,64,3,3], [], ..., []]
#                    aten::convolution       7.956ms      [[5,64,56,56], [64,64,3,3], [], ..., []]
#                   aten::_convolution       7.909ms      [[5,64,56,56], [64,64,3,3], [], ..., []]
#             aten::mkldnn_convolution       7.834ms      [[5,64,56,56], [64,64,3,3], [], ..., []]
#                         aten::conv2d       6.332ms     [[5,512,7,7], [512,512,3,3], [], ..., []]
#                    aten::convolution       6.303ms     [[5,512,7,7], [512,512,3,3], [], ..., []]
#                   aten::_convolution       6.273ms     [[5,512,7,7], [512,512,3,3], [], ..., []]
#             aten::mkldnn_convolution       6.233ms     [[5,512,7,7], [512,512,3,3], [], ..., []]
#                         aten::conv2d       4.751ms   [[5,256,14,14], [256,256,3,3], [], ..., []]
#    ---------------------------------  ------------  ---------------------------------------------
#    Self CPU time total: 57.549ms
#

######################################################################
# Note the occurrence of ``aten::convolution`` twice with different input shapes.

######################################################################
# Profiler can also be used to analyze the performance of models executed
# on GPUs and XPUs; users can switch among CPU, CUDA, and XPU:

if torch.cuda.is_available():
    device = 'cuda'
elif torch.xpu.is_available():
    device = 'xpu'
else:
    print('Neither CUDA nor XPU devices are available to demonstrate profiling on acceleration devices')
    import sys
    sys.exit(0)

activities = [ProfilerActivity.CPU, ProfilerActivity.CUDA, ProfilerActivity.XPU]
sort_by_keyword = device + "_time_total"

model = models.resnet18().to(device)
inputs = torch.randn(5, 3, 224, 224).to(device)

with profile(activities=activities, record_shapes=True) as prof:
    with record_function("model_inference"):
        model(inputs)

print(prof.key_averages().table(sort_by=sort_by_keyword, row_limit=10))

######################################################################
# (Note: the first use of CUDA profiling may incur extra overhead.)

######################################################################
# The resulting table output (omitting some columns):
#
# .. code-block:: sh
#
#    -------------------------------------------------------  ------------  ------------
#                                                       Name     Self CUDA    CUDA total
#    -------------------------------------------------------  ------------  ------------
#                                            model_inference       0.000us      11.666ms
#                                               aten::conv2d       0.000us      10.484ms
#                                          aten::convolution       0.000us      10.484ms
#                                         aten::_convolution       0.000us      10.484ms
#                                 aten::_convolution_nogroup       0.000us      10.484ms
#                                          aten::thnn_conv2d       0.000us      10.484ms
#                                  aten::thnn_conv2d_forward      10.484ms      10.484ms
#    void at::native::im2col_kernel<float>(long, float co...       3.844ms       3.844ms
#                                          sgemm_32x32x32_NN       3.206ms       3.206ms
#                                      sgemm_32x32x32_NN_vec       3.093ms       3.093ms
#    -------------------------------------------------------  ------------  ------------
#    Self CPU time total: 23.015ms
#    Self CUDA time total: 11.666ms
#

######################################################################
# (Note: the first use of XPU profiling may incur extra overhead.)

######################################################################
# The resulting table output (omitting some columns):
#
# .. code-block:: sh
#
#    -------------------------------------------------------  ------------  ------------  ------------  ------------  ------------
#                                                       Name      Self XPU    Self XPU %     XPU total  XPU time avg    # of Calls
#    -------------------------------------------------------  ------------  ------------  ------------  ------------  ------------
#                                            model_inference       0.000us         0.00%       2.567ms       2.567ms             1
#                                               aten::conv2d       0.000us         0.00%       1.871ms      93.560us            20
#                                          aten::convolution       0.000us         0.00%       1.871ms      93.560us            20
#                                         aten::_convolution       0.000us         0.00%       1.871ms      93.560us            20
#                             aten::convolution_overrideable       1.871ms        72.89%       1.871ms      93.560us            20
#                                                   gen_conv       1.484ms        57.82%       1.484ms      74.216us            20
#                                           aten::batch_norm       0.000us         0.00%     432.640us      21.632us            20
#                               aten::_batch_norm_impl_index       0.000us         0.00%     432.640us      21.632us            20
#                                    aten::native_batch_norm     432.640us        16.85%     432.640us      21.632us            20
#                                               conv_reorder     386.880us        15.07%     386.880us       6.448us            60
#    -------------------------------------------------------  ------------  ------------  ------------  ------------  ------------
#    Self CPU time total: 712.486ms
#    Self XPU time total: 2.567ms
#

######################################################################
# Note the occurrence of on-device kernels in the output (e.g. ``sgemm_32x32x32_NN``).

######################################################################
# 4. Using profiler to analyze memory consumption
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# PyTorch profiler can also show the amount of memory (used by the model's tensors)
# that was allocated (or released) during the execution of the model's operators.
# In the output below, 'self' memory corresponds to the memory allocated (released)
# by the operator, excluding the children calls to the other operators.
# To enable memory profiling functionality pass ``profile_memory=True``.

model = models.resnet18()
inputs = torch.randn(5, 3, 224, 224)

with profile(activities=[ProfilerActivity.CPU],
        profile_memory=True, record_shapes=True) as prof:
    model(inputs)

print(prof.key_averages().table(sort_by="self_cpu_memory_usage", row_limit=10))

# (omitting some columns)
# ---------------------------------  ------------  ------------  ------------
#                              Name       CPU Mem  Self CPU Mem    # of Calls
# ---------------------------------  ------------  ------------  ------------
#                       aten::empty      94.79 Mb      94.79 Mb           121
#     aten::max_pool2d_with_indices      11.48 Mb      11.48 Mb             1
#                       aten::addmm      19.53 Kb      19.53 Kb             1
#               aten::empty_strided         572 b         572 b            25
#                     aten::resize_         240 b         240 b             6
#                         aten::abs         480 b         240 b             4
#                         aten::add         160 b         160 b            20
#               aten::masked_select         120 b         112 b             1
#                          aten::ne         122 b          53 b             6
#                          aten::eq          60 b          30 b             2
# ---------------------------------  ------------  ------------  ------------
# Self CPU time total: 53.064ms

print(prof.key_averages().table(sort_by="cpu_memory_usage", row_limit=10))

#############################################################################
# The output might look like this (omitting some columns):
#
# .. code-block:: sh
#
#    ---------------------------------  ------------  ------------  ------------
#                                 Name       CPU Mem  Self CPU Mem    # of Calls
#    ---------------------------------  ------------  ------------  ------------
#                          aten::empty      94.79 Mb      94.79 Mb           121
#                     aten::batch_norm      47.41 Mb           0 b            20
#         aten::_batch_norm_impl_index      47.41 Mb           0 b            20
#              aten::native_batch_norm      47.41 Mb           0 b            20
#                         aten::conv2d      47.37 Mb           0 b            20
#                    aten::convolution      47.37 Mb           0 b            20
#                   aten::_convolution      47.37 Mb           0 b            20
#             aten::mkldnn_convolution      47.37 Mb           0 b            20
#                     aten::max_pool2d      11.48 Mb           0 b             1
#        aten::max_pool2d_with_indices      11.48 Mb      11.48 Mb             1
#    ---------------------------------  ------------  ------------  ------------
#    Self CPU time total: 53.064ms
#
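
######################################################################
# The averaged stats can also be inspected programmatically instead of
# through the table; a minimal sketch (assuming the ``key`` and
# ``self_cpu_memory_usage`` fields of the averaged events) that finds the
# top self-memory allocator:

top_mem = max(prof.key_averages(), key=lambda evt: evt.self_cpu_memory_usage)
print(top_mem.key, top_mem.self_cpu_memory_usage)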

######################################################################
# 5. Using tracing functionality
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# Profiling results can be output as a ``.json`` trace file.
# To trace CUDA or XPU kernels, users can switch between ``cuda`` and ``xpu``:
device = 'cuda'

activities = [ProfilerActivity.CPU, ProfilerActivity.CUDA, ProfilerActivity.XPU]

model = models.resnet18().to(device)
inputs = torch.randn(5, 3, 224, 224).to(device)

with profile(activities=activities) as prof:
    model(inputs)

prof.export_chrome_trace("trace.json")
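
######################################################################
# A quick sanity check of the exported file; this is a minimal sketch
# assuming the standard Chrome trace format with a top-level
# ``traceEvents`` list:

import json

with open("trace.json") as f:
    num_events = len(json.load(f)["traceEvents"])
print(f"trace.json contains {num_events} events")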

######################################################################
# You can examine the sequence of profiled operators and CUDA/XPU kernels
# in Chrome trace viewer (``chrome://tracing``):
#
# .. image:: ../../_static/img/trace_img.png
#    :scale: 25 %

######################################################################
# 6. Examining stack traces
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# Profiler can be used to analyze Python and TorchScript stack traces:
sort_by_keyword = "self_" + device + "_time_total"

with profile(
    activities=activities,
    with_stack=True,
) as prof:
    model(inputs)

# Print aggregated stats
print(prof.key_averages(group_by_stack_n=5).table(sort_by=sort_by_keyword, row_limit=2))

#################################################################################
# The output might look like this (omitting some columns):
#
# .. code-block:: sh
#
#    -------------------------  -----------------------------------------------------------
#    Name                       Source Location
#    -------------------------  -----------------------------------------------------------
#    aten::thnn_conv2d_forward  .../torch/nn/modules/conv.py(439): _conv_forward
#                               .../torch/nn/modules/conv.py(443): forward
#                               .../torch/nn/modules/module.py(1051): _call_impl
#                               .../site-packages/torchvision/models/resnet.py(63): forward
#                               .../torch/nn/modules/module.py(1051): _call_impl
#
#    aten::thnn_conv2d_forward  .../torch/nn/modules/conv.py(439): _conv_forward
#                               .../torch/nn/modules/conv.py(443): forward
#                               .../torch/nn/modules/module.py(1051): _call_impl
#                               .../site-packages/torchvision/models/resnet.py(59): forward
#                               .../torch/nn/modules/module.py(1051): _call_impl
#
#    -------------------------  -----------------------------------------------------------
#    Self CPU time total: 34.016ms
#    Self CUDA time total: 11.659ms
#

######################################################################
# Note the two convolutions and the two call sites in the ``torchvision/models/resnet.py`` script.
#
# (Warning: stack tracing adds an extra profiling overhead.)
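
######################################################################
# The collected stacks can also be exported for external tools such as
# FlameGraph; a minimal sketch using ``export_stacks`` with the self CPU
# time metric:

prof.export_stacks("/tmp/profiler_stacks.txt", "self_cpu_time_total")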

######################################################################
# 7. Using profiler to analyze long-running jobs
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# PyTorch profiler offers an additional API to handle long-running jobs
# (such as training loops). Tracing all of the execution can be
# slow and result in very large trace files. To avoid this, use optional
# arguments:
#
# - ``schedule`` - specifies a function that takes an integer argument (step number)
#   as an input and returns an action for the profiler; the best way to use this parameter
#   is to use the ``torch.profiler.schedule`` helper function that can generate a schedule for you;
# - ``on_trace_ready`` - specifies a function that takes a reference to the profiler as
#   an input and is called by the profiler each time a new trace is ready.
#
# To illustrate how the API works, let's first consider the following example with
# the ``torch.profiler.schedule`` helper function:

from torch.profiler import schedule

my_schedule = schedule(
    skip_first=10,
    wait=5,
    warmup=1,
    active=3,
    repeat=2)

######################################################################
# Profiler assumes that the long-running job is composed of steps, numbered
# starting from zero. The example above defines the following sequence of actions
# for the profiler:
#
# 1. Parameter ``skip_first`` tells profiler that it should ignore the first 10 steps
#    (default value of ``skip_first`` is zero);
# 2. After the first ``skip_first`` steps, profiler starts executing profiler cycles;
# 3. Each cycle consists of three phases:
#
#    - idling (``wait=5`` steps), during this phase profiler is not active;
#    - warming up (``warmup=1`` steps), during this phase profiler starts tracing, but
#      the results are discarded; this phase is used to discard the samples obtained by
#      the profiler at the beginning of the trace since they are usually skewed by an extra
#      overhead;
#    - active tracing (``active=3`` steps), during this phase profiler traces and records data;
#
# 4. An optional ``repeat`` parameter specifies an upper bound on the number of cycles.
#    By default (zero value), profiler will execute cycles as long as the job runs.
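
######################################################################
# Because the object returned by ``torch.profiler.schedule`` is a plain
# callable mapping a step number to a profiler action, a quick sketch to
# see the schedule above in action is to print the action for the first
# few steps:

for step in range(20):
    print(step, my_schedule(step))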

######################################################################
# Thus, in the example above, profiler will skip the first 15 steps, spend the next step on the warm up,
# actively record the next 3 steps, skip another 5 steps, spend the next step on the warm up, and actively
# record another 3 steps. Since the ``repeat=2`` parameter value is specified, the profiler will stop
# the recording after the first two cycles.
#
# At the end of each cycle profiler calls the specified ``on_trace_ready`` function and passes itself as
# an argument. This function is used to process the new trace - either by obtaining the table output or
# by saving the output on disk as a trace file.
#
# To signal the profiler that the next step has started, call the ``prof.step()`` function.
# The current profiler step is stored in ``prof.step_num``.
#
# The following example shows how to use all of the concepts above for CUDA and XPU kernels:

sort_by_keyword = "self_" + device + "_time_total"

def trace_handler(p):
    output = p.key_averages().table(sort_by=sort_by_keyword, row_limit=10)
    print(output)
    p.export_chrome_trace("/tmp/trace_" + str(p.step_num) + ".json")

with profile(
    activities=activities,
    schedule=torch.profiler.schedule(
        wait=1,
        warmup=1,
        active=2),
    on_trace_ready=trace_handler
) as p:
    for idx in range(8):
        model(inputs)
        p.step()
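
######################################################################
# Instead of a custom handler, the built-in
# ``torch.profiler.tensorboard_trace_handler`` can be passed as
# ``on_trace_ready`` to save each trace in a format that the TensorBoard
# profiler plugin can read; a minimal sketch (the ``./log/resnet18``
# directory is an arbitrary choice):

with profile(
    activities=activities,
    schedule=torch.profiler.schedule(wait=1, warmup=1, active=2),
    on_trace_ready=torch.profiler.tensorboard_trace_handler("./log/resnet18")
) as p:
    for idx in range(8):
        model(inputs)
        p.step()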

######################################################################
# Learn More
# ----------
#
# Take a look at the following recipes/tutorials to continue your learning:
#
# - `PyTorch Benchmark <https://pytorch.org/tutorials/recipes/recipes/benchmark.html>`_
# - `Visualizing models, data, and training with TensorBoard <https://pytorch.org/tutorials/intermediate/tensorboard_tutorial.html>`_ tutorial
#