GitHub Repository: pytorch/tutorials
Path: blob/main/recipes_source/recipes/tuning_guide.py
"""
2
Performance Tuning Guide
3
*************************
4
**Author**: `Szymon Migacz <https://github.com/szmigacz>`_
5
6
Performance Tuning Guide is a set of optimizations and best practices which can
7
accelerate training and inference of deep learning models in PyTorch. Presented
8
techniques often can be implemented by changing only a few lines of code and can
9
be applied to a wide range of deep learning models across all domains.
10
11
General optimizations
12
---------------------
13
"""
14
15
###############################################################################
# Enable asynchronous data loading and augmentation
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# `torch.utils.data.DataLoader <https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader>`_
# supports asynchronous data loading and data augmentation in separate worker
# subprocesses. The default setting for ``DataLoader`` is ``num_workers=0``,
# which means that the data loading is synchronous and done in the main process.
# As a result, the main training process has to wait for the data to be available
# before it can continue execution.
#
# Setting ``num_workers > 0`` enables asynchronous data loading and overlap
# between the training and data loading. ``num_workers`` should be tuned
# depending on the workload, CPU, GPU, and location of training data.
#
# ``DataLoader`` accepts a ``pin_memory`` argument, which defaults to ``False``.
# When using a GPU, it's better to set ``pin_memory=True``; this instructs
# ``DataLoader`` to use pinned memory and enables faster, asynchronous memory
# copies from the host to the GPU.

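###############################################################################
# For example, a minimal sketch with a made-up in-memory dataset (the dataset,
# batch size, and worker count below are illustrative, not prescriptive):

import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(1000, 3, 224, 224), torch.randint(0, 10, (1000,)))
loader = DataLoader(
    dataset,
    batch_size=32,
    num_workers=4,    # load and augment batches in 4 background worker processes
    pin_memory=True,  # use page-locked memory for faster, async host-to-GPU copies
)
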
###############################################################################
# Disable gradient calculation for validation or inference
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# PyTorch saves intermediate buffers from all operations which involve tensors
# that require gradients. Typically gradients aren't needed for validation or
# inference.
# The
# `torch.no_grad() <https://pytorch.org/docs/stable/generated/torch.no_grad.html#torch.no_grad>`_
# context manager can be applied to disable gradient calculation within a
# specified block of code; this accelerates execution and reduces the amount of
# required memory.
# `torch.no_grad() <https://pytorch.org/docs/stable/generated/torch.no_grad.html#torch.no_grad>`_
# can also be used as a function decorator.

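###############################################################################
# For example, a minimal sketch (the model and data below are illustrative):

import torch
import torch.nn as nn

model = nn.Linear(128, 10)
data = torch.randn(32, 128)

# as a context manager around validation or inference code
with torch.no_grad():
    predictions = model(data)  # no intermediate buffers are saved for backward

# or as a function decorator
@torch.no_grad()
def evaluate(model, data):
    return model(data)

predictions = evaluate(model, data)
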
###############################################################################
# Disable bias for convolutions directly followed by a batch norm
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# `torch.nn.Conv2d() <https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html#torch.nn.Conv2d>`_
# has a ``bias`` parameter, which defaults to ``True`` (the same is true for
# `Conv1d <https://pytorch.org/docs/stable/generated/torch.nn.Conv1d.html#torch.nn.Conv1d>`_
# and
# `Conv3d <https://pytorch.org/docs/stable/generated/torch.nn.Conv3d.html#torch.nn.Conv3d>`_
# ).
#
# If a ``nn.Conv2d`` layer is directly followed by a ``nn.BatchNorm2d`` layer,
# then the bias in the convolution is not needed; use
# ``nn.Conv2d(..., bias=False, ...)`` instead. The bias is not needed because in
# the first step ``BatchNorm`` subtracts the mean, which effectively cancels out
# the effect of the bias.
#
# This is also applicable to 1d and 3d convolutions as long as ``BatchNorm`` (or
# another normalization layer) normalizes over the same dimension as the
# convolution's bias.
#
# Models available from `torchvision <https://github.com/pytorch/vision>`_
# already implement this optimization.

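###############################################################################
# For example, an illustrative block (the layer sizes are made up for the example):

import torch.nn as nn

block = nn.Sequential(
    # the BatchNorm below subtracts the mean, so a convolution bias would be redundant
    nn.Conv2d(3, 64, kernel_size=3, padding=1, bias=False),
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
)
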
###############################################################################
# Use parameter.grad = None instead of model.zero_grad() or optimizer.zero_grad()
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Instead of calling:
model.zero_grad()
# or
optimizer.zero_grad()

###############################################################################
# to zero out gradients, use the following method instead:

for param in model.parameters():
    param.grad = None

###############################################################################
# The second code snippet does not zero the memory of each individual parameter;
# in addition, the subsequent backward pass uses assignment instead of addition
# to store gradients, which reduces the number of memory operations.
#
# Setting the gradients to ``None`` has slightly different numerical behavior than
# setting them to zero; for more details refer to the
# `documentation <https://pytorch.org/docs/master/optim.html#torch.optim.Optimizer.zero_grad>`_.
#
# Alternatively, starting from PyTorch 1.7, call ``model.zero_grad(set_to_none=True)``
# or ``optimizer.zero_grad(set_to_none=True)``.

###############################################################################
# Fuse operations
# ~~~~~~~~~~~~~~~~~~~~~~~~~
# Pointwise operations such as elementwise addition, multiplication, and math
# functions like ``sin()``, ``cos()``, ``sigmoid()``, etc., can be combined into a
# single kernel. This fusion helps reduce memory access and kernel launch times.
# Typically, pointwise operations are memory-bound; PyTorch eager mode initiates
# a separate kernel for each operation, which involves loading data from memory,
# executing the operation (often not the most time-consuming step), and writing
# the results back to memory.
#
# By using a fused operator, only one kernel is launched for multiple pointwise
# operations, and data is loaded and stored just once. This efficiency is
# particularly beneficial for activation functions, optimizers, custom RNN cells, etc.
#
# PyTorch 2 introduces a compile mode facilitated by TorchInductor, an underlying compiler
# that automatically fuses kernels. TorchInductor extends its capabilities beyond simple
# element-wise operations, enabling advanced fusion of eligible pointwise and reduction
# operations for optimized performance.
#
# In the simplest case fusion can be enabled by applying the
# `torch.compile <https://pytorch.org/docs/stable/generated/torch.compile.html>`_
# decorator to the function definition, for example:

@torch.compile
def gelu(x):
    return x * 0.5 * (1.0 + torch.erf(x / 1.41421))

###############################################################################
# Refer to
# `Introduction to torch.compile <https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html>`_
# for more advanced use cases.

###############################################################################
# Enable channels_last memory format for computer vision models
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# PyTorch 1.5 introduced support for the ``channels_last`` memory format for
# convolutional networks. This format is meant to be used in conjunction with
# `AMP <https://pytorch.org/docs/stable/amp.html>`_ to further accelerate
# convolutional neural networks with
# `Tensor Cores <https://www.nvidia.com/en-us/data-center/tensor-cores/>`_.
#
# Support for ``channels_last`` is experimental, but it's expected to work for
# standard computer vision models (e.g. ResNet-50, SSD). To convert models to the
# ``channels_last`` format, follow the
# `Channels Last Memory Format Tutorial <https://pytorch.org/tutorials/intermediate/memory_format_tutorial.html>`_.
# The tutorial includes a section on
# `converting existing models <https://pytorch.org/tutorials/intermediate/memory_format_tutorial.html#converting-existing-models>`_.

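###############################################################################
# For example, a minimal sketch following the tutorial linked above (resnet50 and
# the input shape are illustrative):

import torch
import torchvision

model = torchvision.models.resnet50()
model = model.to(memory_format=torch.channels_last)  # convert the model's weights

x = torch.randn(8, 3, 224, 224).to(memory_format=torch.channels_last)
output = model(x)  # operators that support channels_last keep the NHWC layout
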
###############################################################################
# Checkpoint intermediate buffers
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Buffer checkpointing is a technique to mitigate the memory capacity burden of
# model training. Instead of storing inputs of all layers to compute upstream
# gradients in backward propagation, it stores the inputs of a few layers and
# recomputes the others during the backward pass. The reduced memory
# requirement enables increasing the batch size, which can improve utilization.
#
# Checkpointing targets should be selected carefully: ideally, do not store
# large layer outputs that have a small recomputation cost. Example target
# layers are activation functions (e.g. ``ReLU``, ``Sigmoid``, ``Tanh``),
# up/down sampling, and matrix-vector operations with small accumulation depth.
#
# PyTorch supports a native
# `torch.utils.checkpoint <https://pytorch.org/docs/stable/checkpoint.html>`_
# API to automatically perform checkpointing and recomputation.

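###############################################################################
# For example, a minimal sketch (the module sizes and names below are
# illustrative, not prescriptive):

import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

# Checkpoint the memory-hungry first stage: its intermediate activations are not
# stored during the forward pass and are recomputed during the backward pass.
stage1 = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 4096), nn.ReLU())
stage2 = nn.Linear(4096, 10)

x = torch.randn(64, 1024, requires_grad=True)
out = checkpoint(stage1, x, use_reentrant=False)
loss = stage2(out).sum()
loss.backward()  # stage1 is re-executed here to rebuild the buffers it needs
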
###############################################################################
# Disable debugging APIs
# ~~~~~~~~~~~~~~~~~~~~~~
# Many PyTorch APIs are intended for debugging and should be disabled for
# regular training runs:
#
# * anomaly detection:
#   `torch.autograd.detect_anomaly <https://pytorch.org/docs/stable/autograd.html#torch.autograd.detect_anomaly>`_
#   or
#   `torch.autograd.set_detect_anomaly(True) <https://pytorch.org/docs/stable/autograd.html#torch.autograd.set_detect_anomaly>`_
# * profiler related:
#   `torch.autograd.profiler.emit_nvtx <https://pytorch.org/docs/stable/autograd.html#torch.autograd.profiler.emit_nvtx>`_,
#   `torch.autograd.profiler.profile <https://pytorch.org/docs/stable/autograd.html#torch.autograd.profiler.profile>`_
# * autograd ``gradcheck``:
#   `torch.autograd.gradcheck <https://pytorch.org/docs/stable/autograd.html#torch.autograd.gradcheck>`_
#   or
#   `torch.autograd.gradgradcheck <https://pytorch.org/docs/stable/autograd.html#torch.autograd.gradgradcheck>`_
#

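###############################################################################
# For example, an illustrative one-liner: make sure anomaly detection is not
# left enabled for regular training runs.

import torch

torch.autograd.set_detect_anomaly(False)
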
###############################################################################
# CPU specific optimizations
# --------------------------

###############################################################################
# Utilize Non-Uniform Memory Access (NUMA) Controls
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# NUMA (non-uniform memory access) is a memory layout used in data center machines that takes advantage of memory locality in multi-socket machines with multiple memory controllers and blocks. Generally speaking, all deep learning workloads, training or inference, perform better without accessing hardware resources across NUMA nodes. Thus, inference can be run with multiple instances, each bound to one socket, to raise throughput. For training tasks on a single node, distributed training is recommended so that each training process runs on one socket.
#
# In general, the following command executes a PyTorch script on the cores of the Nth node only and avoids cross-socket memory access, reducing memory access overhead.
#
# .. code-block:: sh
#
#    numactl --cpunodebind=N --membind=N python <pytorch_script>

###############################################################################
# More detailed descriptions can be found `here <https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/performance_tuning/tuning_guide.html>`_.

###############################################################################
# Utilize OpenMP
# ~~~~~~~~~~~~~~
# OpenMP is utilized to bring better performance for parallel computation tasks.
# ``OMP_NUM_THREADS`` is the easiest switch that can be used to accelerate computations. It determines the number of threads used for OpenMP computations.
# CPU affinity settings control how workloads are distributed over multiple cores. They affect communication overhead, cache line invalidation overhead, and page thrashing; thus a proper CPU affinity setting brings performance benefits. ``GOMP_CPU_AFFINITY`` or ``KMP_AFFINITY`` determines how to bind OpenMP threads to physical processing units. Detailed information can be found `here <https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/performance_tuning/tuning_guide.html>`_.

###############################################################################
# With the following command, PyTorch runs the task on N OpenMP threads.
#
# .. code-block:: sh
#
#    export OMP_NUM_THREADS=N

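###############################################################################
# The size of PyTorch's intra-op thread pool can also be set from Python with
# ``torch.set_num_threads`` (shown here as an illustrative alternative; note that
# environment variables such as ``OMP_NUM_THREADS`` must be set before the
# process starts).

import torch

torch.set_num_threads(4)  # use 4 threads for intra-op parallelism on CPU
print(torch.get_num_threads())
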
###############################################################################
# Typically, the following environment variables are used to set CPU affinity with the GNU OpenMP implementation. ``OMP_PROC_BIND`` specifies whether threads may be moved between processors. Setting it to CLOSE keeps OpenMP threads close to the primary thread in contiguous place partitions. ``OMP_SCHEDULE`` determines how OpenMP threads are scheduled. ``GOMP_CPU_AFFINITY`` binds threads to specific CPUs.
# An important tuning parameter is core pinning, which prevents threads from migrating between CPUs; this enhances data locality and minimizes inter-core communication.
#
# .. code-block:: sh
#
#    export OMP_SCHEDULE=STATIC
#    export OMP_PROC_BIND=CLOSE
#    export GOMP_CPU_AFFINITY="N-M"

###############################################################################
# Intel OpenMP Runtime Library (``libiomp``)
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# By default, PyTorch uses GNU OpenMP (GNU ``libgomp``) for parallel computation. On Intel platforms, the Intel OpenMP Runtime Library (``libiomp``) provides OpenMP API specification support. It sometimes brings more performance benefits compared to ``libgomp``. The environment variable ``LD_PRELOAD`` can be used to switch the OpenMP library to ``libiomp``:
#
# .. code-block:: sh
#
#    export LD_PRELOAD=<path>/libiomp5.so:$LD_PRELOAD

###############################################################################
# Similar to CPU affinity settings in GNU OpenMP, environment variables are provided in ``libiomp`` to control CPU affinity settings.
# ``KMP_AFFINITY`` binds OpenMP threads to physical processing units. ``KMP_BLOCKTIME`` sets the time, in milliseconds, that a thread should wait, after completing the execution of a parallel region, before sleeping. In most cases, setting ``KMP_BLOCKTIME`` to 1 or 0 yields good performance.
# The following commands show common settings with the Intel OpenMP Runtime Library.
#
# .. code-block:: sh
#
#    export KMP_AFFINITY=granularity=fine,compact,1,0
#    export KMP_BLOCKTIME=1

###############################################################################
# Switch Memory allocator
# ~~~~~~~~~~~~~~~~~~~~~~~
# For deep learning workloads, ``Jemalloc`` or ``TCMalloc`` can achieve better performance than the default ``malloc`` function by reusing memory as much as possible. `Jemalloc <https://github.com/jemalloc/jemalloc>`_ is a general purpose ``malloc`` implementation that emphasizes fragmentation avoidance and scalable concurrency support. `TCMalloc <https://google.github.io/tcmalloc/overview.html>`_ also features a couple of optimizations to speed up program execution. One of them is holding memory in caches to speed up access to commonly-used objects. Holding such caches even after deallocation also helps avoid costly system calls if such memory is later re-allocated.
# Use the environment variable ``LD_PRELOAD`` to take advantage of one of them.
#
# .. code-block:: sh
#
#    export LD_PRELOAD=<jemalloc.so/tcmalloc.so>:$LD_PRELOAD

###############################################################################
# Use oneDNN Graph with TorchScript for inference
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# oneDNN Graph can significantly boost inference performance. It fuses compute-intensive operations such as convolution and matmul with their neighboring operations.
# In PyTorch 2.0, it is supported as a beta feature for the ``Float32`` & ``BFloat16`` data types.
# oneDNN Graph receives the model's graph and identifies candidates for operator fusion with respect to the shape of the example input.
# A model should be JIT-traced using an example input.
# Speed-up would then be observed after a couple of warm-up iterations for inputs with the same shape as the example input.
# The example code snippets below are for resnet50, but they can be extended to use oneDNN Graph with custom models as well.

import torch
import torchvision

# Only this extra line of code is required to use oneDNN Graph
torch.jit.enable_onednn_fusion(True)

###############################################################################
# Using the oneDNN Graph API requires just one extra line of code for inference with Float32.
# If you are using oneDNN Graph, please avoid calling ``torch.jit.optimize_for_inference``.

# sample input should be of the same shape as expected inputs
sample_input = [torch.rand(32, 3, 224, 224)]
# Using resnet50 from torchvision in this example for illustrative purposes,
# but the line below can indeed be modified to use custom models as well.
model = getattr(torchvision.models, "resnet50")().eval()
# Tracing the model with example input
traced_model = torch.jit.trace(model, sample_input)
# Invoking torch.jit.freeze
traced_model = torch.jit.freeze(traced_model)

###############################################################################
# Once a model is JIT-traced with a sample input, it can then be used for inference after a couple of warm-up runs.

with torch.no_grad():
    # a couple of warm-up runs
    traced_model(*sample_input)
    traced_model(*sample_input)
    # speedup would be observed after warm-up runs
    traced_model(*sample_input)

###############################################################################
# While the JIT fuser for oneDNN Graph also supports inference with the ``BFloat16`` datatype,
# the performance benefit with oneDNN Graph is only exhibited by machines with the AVX512_BF16
# instruction set architecture (ISA).
# The following code snippets serve as an example of using the ``BFloat16`` datatype for inference with oneDNN Graph:

# AMP for JIT mode is enabled by default, and is divergent with its eager mode counterpart
torch._C._jit_set_autocast_mode(False)

with torch.no_grad(), torch.cpu.amp.autocast(cache_enabled=False, dtype=torch.bfloat16):
    # Conv-BatchNorm folding for CNN-based Vision Models should be done with ``torch.fx.experimental.optimization.fuse`` when AMP is used
    import torch.fx.experimental.optimization as optimization
    # Please note that optimization.fuse need not be called when AMP is not used
    model = optimization.fuse(model)
    model = torch.jit.trace(model, sample_input)
    model = torch.jit.freeze(model)
    # a couple of warm-up runs
    model(*sample_input)
    model(*sample_input)
    # speedup would be observed in subsequent runs.
    model(*sample_input)

###############################################################################
# Train a model on CPU with PyTorch ``DistributedDataParallel`` (DDP) functionality
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# For small-scale models or memory-bound models, such as DLRM, training on CPU is also a good choice. On a machine with multiple sockets, distributed training brings highly efficient hardware resource usage to accelerate the training process. `Torch-ccl <https://github.com/intel/torch-ccl>`_, optimized with the Intel(R) ``oneCCL`` (collective communications library) for efficient distributed deep learning training implementing collectives such as ``allreduce``, ``allgather``, and ``alltoall``, implements the PyTorch C10D ``ProcessGroup`` API and can be dynamically loaded as an external ``ProcessGroup``. On top of the optimizations implemented in the PyTorch DDP module, ``torch-ccl`` accelerates communication operations. Besides the optimizations made to communication kernels, ``torch-ccl`` also features simultaneous computation-communication functionality.

###############################################################################
# GPU specific optimizations
# --------------------------

###############################################################################
# Enable Tensor cores
# ~~~~~~~~~~~~~~~~~~~~~~~
# Tensor cores are specialized hardware designed to compute matrix-matrix multiplication
# operations, primarily utilized in deep learning and AI workloads. Tensor cores have
# specific precision requirements which can be adjusted manually or via the Automatic
# Mixed Precision API.
#
# In particular, float32 matrix multiplications can take advantage of lower-precision
# internals, which can be controlled via ``torch.set_float32_matmul_precision``.
# The default setting is ``'highest'``, which uses the full float32 data type.
# PyTorch also offers the alternative settings ``'high'`` and ``'medium'``;
# these options prioritize computational speed over numerical precision.

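###############################################################################
# For example, an illustrative one-liner (assumes a Tensor Core capable GPU):

import torch

# allow lower-precision internals (e.g. TensorFloat32) for float32 matmuls
torch.set_float32_matmul_precision("high")  # default is "highest"; "medium" trades more precision for speed
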
###############################################################################
# Use CUDA Graphs
# ~~~~~~~~~~~~~~~~~~~~~~~
# When using a GPU, work first must be launched from the CPU, and in some cases
# the context switch between CPU and GPU can lead to poor resource utilization.
# CUDA graphs are a way to keep computation within the GPU without paying the
# extra cost of kernel launches and host synchronization.

# It can be enabled using
torch.compile(m, mode="reduce-overhead")
# or
torch.compile(m, mode="max-autotune")

###############################################################################
# Support for CUDA graphs is in development, and their usage can incur increased
# device memory consumption, and some models might not compile.

###############################################################################
# Enable cuDNN auto-tuner
# ~~~~~~~~~~~~~~~~~~~~~~~
# `NVIDIA cuDNN <https://developer.nvidia.com/cudnn>`_ supports many algorithms
# to compute a convolution. The auto-tuner runs a short benchmark and selects the
# kernel with the best performance on a given hardware for a given input size.
#
# For convolutional networks (other types are currently not supported), enable the
# cuDNN auto-tuner before launching the training loop by setting:

torch.backends.cudnn.benchmark = True
###############################################################################
#
# * the auto-tuner decisions may be non-deterministic; a different algorithm may
#   be selected for different runs. For more details see
#   `PyTorch: Reproducibility <https://pytorch.org/docs/stable/notes/randomness.html?highlight=determinism>`_
# * in some rare cases, such as with highly variable input sizes, it's better
#   to run convolutional networks with the auto-tuner disabled to avoid the overhead
#   associated with algorithm selection for each input size.
#

###############################################################################
# Avoid unnecessary CPU-GPU synchronization
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Avoid unnecessary synchronizations, and let the CPU run ahead of the
# accelerator as much as possible to make sure that the accelerator work queue
# contains many operations.
#
# When possible, avoid operations which require synchronization, for example:
#
# * ``print(cuda_tensor)``
# * ``cuda_tensor.item()``
# * memory copies: ``tensor.cuda()``, ``cuda_tensor.cpu()`` and equivalent
#   ``tensor.to(device)`` calls
# * ``cuda_tensor.nonzero()``
# * Python control flow which depends on the results of operations performed on CUDA
#   tensors, e.g. ``if (cuda_tensor != 0).all()``
#

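###############################################################################
# For example, an illustrative sketch: accumulate the running loss on the device
# and call ``item()`` once at the end, instead of synchronizing on every step.

import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
running_loss = torch.zeros((), device=device)
for step in range(100):
    # in real training the per-step loss would come from the model
    loss = torch.rand((), device=device)
    running_loss += loss.detach()  # stays on the device, no host synchronization
print(running_loss.item())  # a single synchronization at the very end
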
###############################################################################
# Create tensors directly on the target device
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Instead of calling ``torch.rand(size).cuda()`` to generate a random tensor,
# produce the output directly on the target device:
# ``torch.rand(size, device='cuda')``.
#
# This is applicable to all functions which create new tensors and accept a
# ``device`` argument:
# `torch.rand() <https://pytorch.org/docs/stable/generated/torch.rand.html#torch.rand>`_,
# `torch.zeros() <https://pytorch.org/docs/stable/generated/torch.zeros.html#torch.zeros>`_,
# `torch.full() <https://pytorch.org/docs/stable/generated/torch.full.html#torch.full>`_
# and similar.

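###############################################################################
# For example, an illustrative comparison (assumes a CUDA device is available):

import torch

size = (1024, 1024)
# slower: the tensor is created on the CPU and then copied to the GPU
a = torch.rand(size).cuda()
# faster: the tensor is created directly on the GPU
b = torch.rand(size, device="cuda")
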
###############################################################################
# Use mixed precision and AMP
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Mixed precision leverages
# `Tensor Cores <https://www.nvidia.com/en-us/data-center/tensor-cores/>`_
# and offers up to 3x overall speedup on Volta and newer GPU architectures. To
# use Tensor Cores, AMP should be enabled and matrix/tensor dimensions should
# satisfy requirements for calling kernels that use Tensor Cores.
#
# To use Tensor Cores:
#
# * set sizes to multiples of 8 (to map onto dimensions of Tensor Cores)
#
#   * see
#     `Deep Learning Performance Documentation
#     <https://docs.nvidia.com/deeplearning/performance/index.html#optimizing-performance>`_
#     for more details and guidelines specific to layer type
#   * if layer size is derived from other parameters rather than fixed, it can
#     still be explicitly padded, e.g. vocabulary size in NLP models
#
# * enable AMP (as shown in the sketch after this list)
#
#   * Introduction to Mixed Precision Training and AMP:
#     `video <https://www.youtube.com/watch?v=jF4-_ZK_tyc&feature=youtu.be>`_,
#     `slides <https://nvlabs.github.io/eccv2020-mixed-precision-tutorial/files/dusan_stosic-training-neural-networks-with-tensor-cores.pdf>`_
#   * native PyTorch AMP is available starting from PyTorch 1.6:
#     `documentation <https://pytorch.org/docs/stable/amp.html>`_,
#     `examples <https://pytorch.org/docs/stable/notes/amp_examples.html#amp-examples>`_,
#     `tutorial <https://pytorch.org/tutorials/recipes/recipes/amp_recipe.html>`_
#

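###############################################################################
# A minimal sketch of a native AMP training step, following the AMP examples
# linked above (the model, data, and optimizer are illustrative; assumes a CUDA
# device is available):

import torch
import torch.nn as nn

model = nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler()

data = torch.randn(64, 1024, device="cuda")
target = torch.randn(64, 1024, device="cuda")

optimizer.zero_grad(set_to_none=True)
with torch.cuda.amp.autocast():
    output = model(data)  # the forward pass runs in mixed precision
    loss = nn.functional.mse_loss(output, target)
scaler.scale(loss).backward()  # scale the loss to avoid float16 gradient underflow
scaler.step(optimizer)         # unscales gradients and runs the optimizer step
scaler.update()
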
###############################################################################
# Preallocate memory in case of variable input length
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Models for speech recognition or for NLP are often trained on input tensors
# with variable sequence length. Variable length can be problematic for the PyTorch
# caching allocator and can lead to reduced performance or to unexpected
# out-of-memory errors. If a batch with a short sequence length is followed by
# another batch with a longer sequence length, then PyTorch is forced to
# release intermediate buffers from the previous iteration and to re-allocate new
# buffers. This process is time consuming and causes fragmentation in the
# caching allocator which may result in out-of-memory errors.
#
# A typical solution is to implement preallocation. It consists of the
# following steps (see the sketch after this list):
#
# #. generate a (usually random) batch of inputs with maximum sequence length
#    (either corresponding to max length in the training dataset or to some
#    predefined threshold)
# #. execute a forward and a backward pass with the generated batch, do not
#    execute an optimizer or a learning rate scheduler, this step preallocates
#    buffers of maximum size, which can be reused in subsequent
#    training iterations
# #. zero out gradients
# #. proceed to regular training
#

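###############################################################################
# A minimal sketch of the preallocation steps above (the model, shapes, and
# sequence lengths are illustrative; assumes a CUDA device is available):

import torch
import torch.nn as nn

max_seq_len, batch_size, features = 512, 32, 256
model = nn.LSTM(features, features, batch_first=True).cuda()

# 1. a (random) batch with the maximum expected sequence length
warmup_batch = torch.randn(batch_size, max_seq_len, features, device="cuda")
# 2. forward and backward pass to preallocate buffers of maximum size
out, _ = model(warmup_batch)
out.sum().backward()
# 3. zero out gradients without freeing the preallocated memory
for param in model.parameters():
    param.grad = None
# 4. proceed to regular training
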
###############################################################################
# Distributed optimizations
# -------------------------

###############################################################################
# Use efficient data-parallel backend
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# PyTorch has two ways to implement data-parallel training:
#
# * `torch.nn.DataParallel <https://pytorch.org/docs/stable/generated/torch.nn.DataParallel.html#torch.nn.DataParallel>`_
# * `torch.nn.parallel.DistributedDataParallel <https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html#torch.nn.parallel.DistributedDataParallel>`_
#
# ``DistributedDataParallel`` offers much better performance and scaling to
# multiple GPUs. For more information refer to the
# `relevant section of CUDA Best Practices <https://pytorch.org/docs/stable/notes/cuda.html#use-nn-parallel-distributeddataparallel-instead-of-multiprocessing-or-nn-dataparallel>`_
# in the PyTorch documentation.

###############################################################################
# Skip unnecessary all-reduce if training with ``DistributedDataParallel`` and gradient accumulation
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# By default
# `torch.nn.parallel.DistributedDataParallel <https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html#torch.nn.parallel.DistributedDataParallel>`_
# executes gradient all-reduce after every backward pass to compute the average
# gradient over all workers participating in the training. If training uses
# gradient accumulation over N steps, then all-reduce is not necessary after
# every training step; it's only required to perform all-reduce after the last
# call to backward, just before the execution of the optimizer.
#
# ``DistributedDataParallel`` provides the
# `no_sync() <https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html#torch.nn.parallel.DistributedDataParallel.no_sync>`_
# context manager which disables gradient all-reduce for a particular iteration.
# ``no_sync()`` should be applied to the first ``N-1`` iterations of gradient
# accumulation; the last iteration should follow the default execution and
# perform the required gradient all-reduce.

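###############################################################################
# A runnable single-process sketch (``gloo`` backend, ``world_size=1``) purely to
# illustrate the ``no_sync()`` pattern; the model, data, and accumulation length
# are illustrative:

import contextlib
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group("gloo", rank=0, world_size=1)

ddp_model = DDP(nn.Linear(8, 8))
optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
N = 4  # gradient accumulation steps

for i in range(16):
    data, target = torch.randn(4, 8), torch.randn(4, 8)
    # skip the gradient all-reduce for the first N-1 micro-batches of each cycle
    maybe_no_sync = ddp_model.no_sync() if (i + 1) % N != 0 else contextlib.nullcontext()
    with maybe_no_sync:
        loss = nn.functional.mse_loss(ddp_model(data), target)
        loss.backward()
    if (i + 1) % N == 0:
        optimizer.step()  # the all-reduce already happened during the last backward
        optimizer.zero_grad(set_to_none=True)

dist.destroy_process_group()
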
###############################################################################
# Match the order of layers in constructors and during the execution if using ``DistributedDataParallel(find_unused_parameters=True)``
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# `torch.nn.parallel.DistributedDataParallel <https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html#torch.nn.parallel.DistributedDataParallel>`_
# with ``find_unused_parameters=True`` uses the order of layers and parameters
# from model constructors to build buckets for ``DistributedDataParallel``
# gradient all-reduce. ``DistributedDataParallel`` overlaps all-reduce with the
# backward pass. All-reduce for a particular bucket is asynchronously triggered
# only when all gradients for parameters in a given bucket are available.
#
# To maximize the amount of overlap, the order in model constructors should
# roughly match the order during the execution. If the order doesn't match, then
# all-reduce for the entire bucket waits for the gradient which is the last to
# arrive; this may reduce the overlap between the backward pass and all-reduce,
# and the all-reduce may end up being exposed, which slows down training.
#
# ``DistributedDataParallel`` with ``find_unused_parameters=False`` (which is
# the default setting) relies on automatic bucket formation based on the order of
# operations encountered during the backward pass. With
# ``find_unused_parameters=False`` it's not necessary to reorder layers or
# parameters to achieve optimal performance.

###############################################################################
# Load-balance workload in a distributed setting
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Load imbalance typically happens for models processing sequential data
# (speech recognition, translation, language models, etc.). If one device
# receives a batch of data with a sequence length longer than the sequence lengths
# for the remaining devices, then all devices wait for the worker which finishes
# last. The backward pass functions as an implicit synchronization point in a
# distributed setting with the
# `DistributedDataParallel <https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html#torch.nn.parallel.DistributedDataParallel>`_
# backend.
#
# There are multiple ways to solve the load balancing problem. The core idea is
# to distribute the workload over all workers as uniformly as possible within each
# global batch. For example, Transformer solves the imbalance by forming batches with
# an approximately constant number of tokens (and a variable number of sequences in a
# batch); other models solve the imbalance by bucketing samples with similar
# sequence length or even by sorting the dataset by sequence length.