"""1Changing default device2=======================34It is common practice to write PyTorch code in a device-agnostic way,5and then switch between CPU and CUDA depending on what hardware is available.6Typically, to do this you might have used if-statements and ``cuda()`` calls7to do this:89.. note::10This recipe requires PyTorch 2.0.0 or later.1112"""13import torch1415USE_CUDA = False1617mod = torch.nn.Linear(20, 30)18if USE_CUDA:19mod.cuda()2021device = 'cpu'22if USE_CUDA:23device = 'cuda'24inp = torch.randn(128, 20, device=device)25print(mod(inp).device)2627###################################################################28# PyTorch now also has a context manager which can take care of the29# device transfer automatically. Here is an example:3031with torch.device('cuda'):32mod = torch.nn.Linear(20, 30)33print(mod.weight.device)34print(mod(torch.randn(128, 20)).device)3536#########################################37# You can also set it globally like this:3839torch.set_default_device('cuda')4041mod = torch.nn.Linear(20, 30)42print(mod.weight.device)43print(mod(torch.randn(128, 20)).device)4445################################################################46# This function imposes a slight performance cost on every Python47# call to the torch API (not just factory functions). If this48# is causing problems for you, please comment on49# `this issue <https://github.com/pytorch/pytorch/issues/92701>`__505152