Path: blob/main/beginner_source/examples_autograd/polynomial_autograd.py
"""1PyTorch: Tensors and autograd2-------------------------------34A third order polynomial, trained to predict :math:`y=\sin(x)` from :math:`-\pi`5to :math:`\pi` by minimizing squared Euclidean distance.67This implementation computes the forward pass using operations on PyTorch8Tensors, and uses PyTorch autograd to compute gradients.91011A PyTorch Tensor represents a node in a computational graph. If ``x`` is a12Tensor that has ``x.requires_grad=True`` then ``x.grad`` is another Tensor13holding the gradient of ``x`` with respect to some scalar value.14"""15import torch16import math1718# We want to be able to train our model on an `accelerator <https://pytorch.org/docs/stable/torch.html#accelerators>`__19# such as CUDA, MPS, MTIA, or XPU. If the current accelerator is available, we will use it. Otherwise, we use the CPU.2021dtype = torch.float22device = torch.accelerator.current_accelerator().type if torch.accelerator.is_available() else "cpu"23print(f"Using {device} device")24torch.set_default_device(device)2526# Create Tensors to hold input and outputs.27# By default, requires_grad=False, which indicates that we do not need to28# compute gradients with respect to these Tensors during the backward pass.29x = torch.linspace(-math.pi, math.pi, 2000, dtype=dtype)30y = torch.sin(x)3132# Create random Tensors for weights. For a third order polynomial, we need33# 4 weights: y = a + b x + c x^2 + d x^334# Setting requires_grad=True indicates that we want to compute gradients with35# respect to these Tensors during the backward pass.36a = torch.randn((), dtype=dtype, requires_grad=True)37b = torch.randn((), dtype=dtype, requires_grad=True)38c = torch.randn((), dtype=dtype, requires_grad=True)39d = torch.randn((), dtype=dtype, requires_grad=True)4041learning_rate = 1e-642for t in range(2000):43# Forward pass: compute predicted y using operations on Tensors.44y_pred = a + b * x + c * x ** 2 + d * x ** 34546# Compute and print loss using operations on Tensors.47# Now loss is a Tensor of shape (1,)48# loss.item() gets the scalar value held in the loss.49loss = (y_pred - y).pow(2).sum()50if t % 100 == 99:51print(t, loss.item())5253# Use autograd to compute the backward pass. This call will compute the54# gradient of loss with respect to all Tensors with requires_grad=True.55# After this call a.grad, b.grad. c.grad and d.grad will be Tensors holding56# the gradient of the loss with respect to a, b, c, d respectively.57loss.backward()5859# Manually update weights using gradient descent. Wrap in torch.no_grad()60# because weights have requires_grad=True, but we don't need to track this61# in autograd.62with torch.no_grad():63a -= learning_rate * a.grad64b -= learning_rate * b.grad65c -= learning_rate * c.grad66d -= learning_rate * d.grad6768# Manually zero the gradients after updating weights69a.grad = None70b.grad = None71c.grad = None72d.grad = None7374print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')757677