GitHub Repository: pytorch/tutorials
Path: blob/main/beginner_source/examples_nn/polynomial_nn.py
# -*- coding: utf-8 -*-
"""
PyTorch: nn
-----------

A third order polynomial, trained to predict :math:`y=\sin(x)` from :math:`-\pi`
to :math:`\pi` by minimizing squared Euclidean distance.

This implementation uses the nn package from PyTorch to build the network.
PyTorch autograd makes it easy to define computational graphs and take gradients,
but raw autograd can be a bit too low-level for defining complex neural networks;
this is where the nn package can help. The nn package defines a set of Modules,
each of which you can think of as a neural network layer that produces output from
input and may have some trainable weights.
"""
import torch
import math


# Create Tensors to hold input and outputs.
x = torch.linspace(-math.pi, math.pi, 2000)
y = torch.sin(x)

# For this example, the output y is a linear function of (x, x^2, x^3), so
# we can consider it as a linear layer neural network. Let's prepare the
# tensor (x, x^2, x^3).
p = torch.tensor([1, 2, 3])
xx = x.unsqueeze(-1).pow(p)

# In the above code, x.unsqueeze(-1) has shape (2000, 1) and p has shape
# (3,); in this case, broadcasting semantics apply to obtain a tensor
# of shape (2000, 3).
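
# A quick sanity check (illustrative addition, not part of the original script):
# confirm the broadcasted shapes described above.
assert x.unsqueeze(-1).shape == (2000, 1)
assert xx.shape == (2000, 3)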

# Use the nn package to define our model as a sequence of layers. nn.Sequential
# is a Module which contains other Modules, and applies them in sequence to
# produce its output. The Linear Module computes output from input using a
# linear function, and holds internal Tensors for its weight and bias.
# The Flatten layer flattens the output of the linear layer to a 1D tensor,
# to match the shape of `y`.
model = torch.nn.Sequential(
    torch.nn.Linear(3, 1),
    torch.nn.Flatten(0, 1)
)
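
# Illustrative addition (not part of the original script): applying the untrained
# model to `xx` already yields a 1D tensor with one prediction per sample;
# without Flatten(0, 1) the Linear layer would return shape (2000, 1) instead.
with torch.no_grad():
    assert model(xx).shape == y.shape  # both are torch.Size([2000])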

# The nn package also contains definitions of popular loss functions; in this
# case we will use Mean Squared Error (MSE) as our loss function.
loss_fn = torch.nn.MSELoss(reduction='sum')
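
# Note (illustrative addition): with reduction='sum' the loss is the total squared
# error over all 2000 points, i.e. ((y_pred - y) ** 2).sum(), rather than the mean;
# this is one reason the learning rate below is so small.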

learning_rate = 1e-6
for t in range(2000):

    # Forward pass: compute predicted y by passing x to the model. Module objects
    # override the __call__ operator so you can call them like functions. When
    # doing so you pass a Tensor of input data to the Module and it produces
    # a Tensor of output data.
    y_pred = model(xx)

    # Compute and print loss. We pass Tensors containing the predicted and true
    # values of y, and the loss function returns a Tensor containing the
    # loss.
    loss = loss_fn(y_pred, y)
    if t % 100 == 99:
        print(t, loss.item())
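
    # Illustrative note (not part of the original script): `loss` is a
    # 0-dimensional Tensor, and .item() extracts its value as a Python float.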

    # Zero the gradients before running the backward pass.
    model.zero_grad()

    # Backward pass: compute gradient of the loss with respect to all the learnable
    # parameters of the model. Internally, the parameters of each Module are stored
    # in Tensors with requires_grad=True, so this call will compute gradients for
    # all learnable parameters in the model.
    loss.backward()

    # Update the weights using gradient descent. Each parameter is a Tensor, so
    # we can access its gradients like we did before.
    with torch.no_grad():
        for param in model.parameters():
            param -= learning_rate * param.grad
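
# Illustrative note (not part of the original script): the manual update above is
# exactly what plain stochastic gradient descent does; the same training step could
# be written with the optim package, e.g.:
#
#     optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
#     ...inside the loop...
#     optimizer.zero_grad()
#     loss.backward()
#     optimizer.step()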

# You can access the first layer of `model` like accessing the first item of a list.
linear_layer = model[0]

# For the linear layer, the parameters are stored as `weight` and `bias`.
print(f'Result: y = {linear_layer.bias.item()} + {linear_layer.weight[:, 0].item()} x + {linear_layer.weight[:, 1].item()} x^2 + {linear_layer.weight[:, 2].item()} x^3')
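
# Illustrative addition (not part of the original script): evaluate how well the
# fitted polynomial approximates sin(x) over the training interval.
with torch.no_grad():
    final_loss = loss_fn(model(xx), y).item()
    print(f'Final sum-of-squares error over 2000 points: {final_loss:.4f}')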