GitHub Repository: pytorch/tutorials
Path: blob/main/beginner_source/examples_autograd/polynomial_autograd.py
# -*- coding: utf-8 -*-
"""
PyTorch: Tensors and autograd
-----------------------------

A third order polynomial, trained to predict :math:`y=\sin(x)` from :math:`-\pi`
to :math:`\pi` by minimizing squared Euclidean distance.

This implementation computes the forward pass using operations on PyTorch
Tensors, and uses PyTorch autograd to compute gradients.

A PyTorch Tensor represents a node in a computational graph. If ``x`` is a
Tensor that has ``x.requires_grad=True``, then ``x.grad`` is another Tensor
holding the gradient of some scalar value with respect to ``x``.
"""
17
import torch
18
import math
19
20
dtype = torch.float
21
device = "cuda" if torch.cuda.is_available() else "cpu"
22
torch.set_default_device(device)
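
# A minimal illustrative sketch of the autograd behavior described in the
# docstring above: calling backward() on a scalar fills in .grad on every leaf
# Tensor created with requires_grad=True. ``w`` is a throwaway example value
# and is not used in the polynomial fit below.
w = torch.tensor(2.0, dtype=dtype, requires_grad=True)
out = w ** 3                                      # d(out)/dw = 3 * w^2
out.backward()                                    # populates w.grad
assert torch.isclose(w.grad, torch.tensor(12.0))  # 3 * 2.0^2 = 12.0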

# Create Tensors to hold input and outputs.
# By default, requires_grad=False, which indicates that we do not need to
# compute gradients with respect to these Tensors during the backward pass.
x = torch.linspace(-math.pi, math.pi, 2000, dtype=dtype)
y = torch.sin(x)

# Create random Tensors for weights. For a third order polynomial, we need
# 4 weights: y = a + b x + c x^2 + d x^3
# Setting requires_grad=True indicates that we want to compute gradients with
# respect to these Tensors during the backward pass.
a = torch.randn((), dtype=dtype, requires_grad=True)
b = torch.randn((), dtype=dtype, requires_grad=True)
c = torch.randn((), dtype=dtype, requires_grad=True)
d = torch.randn((), dtype=dtype, requires_grad=True)

learning_rate = 1e-6
for t in range(2000):
    # Forward pass: compute predicted y using operations on Tensors.
    y_pred = a + b * x + c * x ** 2 + d * x ** 3

    # Compute and print loss using operations on Tensors.
    # Now loss is a zero-dimensional (scalar) Tensor;
    # loss.item() gets the scalar value held in the loss.
    loss = (y_pred - y).pow(2).sum()
    if t % 100 == 99:
        print(t, loss.item())

    # Use autograd to compute the backward pass. This call will compute the
    # gradient of loss with respect to all Tensors with requires_grad=True.
    # After this call a.grad, b.grad, c.grad and d.grad will be Tensors holding
    # the gradient of the loss with respect to a, b, c, d respectively.
    loss.backward()
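
    # For reference, with loss = sum((y_pred - y)^2) the gradients autograd
    # fills in here are equivalent to the manual chain-rule expressions:
    #   grad_y_pred = 2.0 * (y_pred - y)
    #   a.grad = grad_y_pred.sum()
    #   b.grad = (grad_y_pred * x).sum()
    #   c.grad = (grad_y_pred * x ** 2).sum()
    #   d.grad = (grad_y_pred * x ** 3).sum()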

    # Manually update weights using gradient descent. Wrap in torch.no_grad()
    # because weights have requires_grad=True, but we don't need to track this
    # in autograd.
    with torch.no_grad():
        a -= learning_rate * a.grad
        b -= learning_rate * b.grad
        c -= learning_rate * c.grad
        d -= learning_rate * d.grad

        # Manually zero the gradients after updating weights
        a.grad = None
        b.grad = None
        c.grad = None
        d.grad = None

print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')
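
# A minimal sketch of an alternative to the manual update above: a torch.optim
# optimizer performs the same vanilla gradient-descent step and gradient
# clearing. The name ``optimizer`` is illustrative only.
#
#   optimizer = torch.optim.SGD([a, b, c, d], lr=learning_rate)
#   for t in range(2000):
#       y_pred = a + b * x + c * x ** 2 + d * x ** 3
#       loss = (y_pred - y).pow(2).sum()
#       loss.backward()
#       optimizer.step()       # a -= lr * a.grad, etc.
#       optimizer.zero_grad()  # reset gradients before the next iteration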