Real-time collaboration for Jupyter Notebooks, Linux Terminals, LaTeX, VS Code, R IDE, and more,
all in one place.
Path: blob/main/beginner_source/examples_autograd/polynomial_autograd.py
# -*- coding: utf-8 -*-
"""
PyTorch: Tensors and autograd
-------------------------------

A third order polynomial, trained to predict :math:`y=\sin(x)` from :math:`-\pi`
to :math:`\pi` by minimizing squared Euclidean distance.

This implementation computes the forward pass using operations on PyTorch
Tensors, and uses PyTorch autograd to compute gradients.

A PyTorch Tensor represents a node in a computational graph. If ``x`` is a
Tensor that has ``x.requires_grad=True`` then ``x.grad`` is another Tensor
holding the gradient of ``x`` with respect to some scalar value.
"""
import torch
import math

dtype = torch.float
device = "cuda" if torch.cuda.is_available() else "cpu"
torch.set_default_device(device)

# Create Tensors to hold input and outputs.
# By default, requires_grad=False, which indicates that we do not need to
# compute gradients with respect to these Tensors during the backward pass.
x = torch.linspace(-math.pi, math.pi, 2000, dtype=dtype)
y = torch.sin(x)

# Create random Tensors for weights. For a third order polynomial, we need
# 4 weights: y = a + b x + c x^2 + d x^3
# Setting requires_grad=True indicates that we want to compute gradients with
# respect to these Tensors during the backward pass.
a = torch.randn((), dtype=dtype, requires_grad=True)
b = torch.randn((), dtype=dtype, requires_grad=True)
c = torch.randn((), dtype=dtype, requires_grad=True)
d = torch.randn((), dtype=dtype, requires_grad=True)

learning_rate = 1e-6
for t in range(2000):
    # Forward pass: compute predicted y using operations on Tensors.
    y_pred = a + b * x + c * x ** 2 + d * x ** 3

    # Compute and print loss using operations on Tensors.
    # loss is a zero-dimensional Tensor (a scalar);
    # loss.item() gets the Python number it holds.
    loss = (y_pred - y).pow(2).sum()
    if t % 100 == 99:
        print(t, loss.item())

    # Use autograd to compute the backward pass. This call will compute the
    # gradient of loss with respect to all Tensors with requires_grad=True.
    # After this call a.grad, b.grad, c.grad and d.grad will be Tensors holding
    # the gradient of the loss with respect to a, b, c, d respectively.
    loss.backward()

    # Manually update weights using gradient descent. Wrap in torch.no_grad()
    # because the weights have requires_grad=True, but we don't need to track
    # these updates in autograd.
    with torch.no_grad():
        a -= learning_rate * a.grad
        b -= learning_rate * b.grad
        c -= learning_rate * c.grad
        d -= learning_rate * d.grad

        # Manually zero the gradients after updating the weights.
        a.grad = None
        b.grad = None
        c.grad = None
        d.grad = None

print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')
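# ---------------------------------------------------------------------------
# A minimal, self-contained sketch (not part of the tutorial file above) that
# illustrates the autograd behaviour described in the docstring: if a Tensor
# has requires_grad=True, calling backward() on a scalar computed from it
# fills in its .grad attribute. The names ``x_demo`` and ``s`` are introduced
# here purely for illustration.
x_demo = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
s = (x_demo ** 2).sum()   # scalar: s = sum_i x_i^2
s.backward()              # populates x_demo.grad with ds/dx = 2 * x
print(x_demo.grad)        # expected values: 2., 4., 6.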
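# ---------------------------------------------------------------------------
# Optional usage sketch (also not part of the original file): once the training
# loop above has finished, the fitted coefficients can be evaluated on the same
# grid to check how closely the learned cubic tracks sin(x). ``fit`` and
# ``max_err`` are illustrative names introduced here.
with torch.no_grad():
    fit = a + b * x + c * x ** 2 + d * x ** 3
    max_err = (fit - y).abs().max().item()
print(f'Max absolute error of the fitted cubic on [-pi, pi]: {max_err:.4f}')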