
GitHub Repository: pytorch/tutorials
Path: blob/main/beginner_source/examples_nn/polynomial_optim.py
# -*- coding: utf-8 -*-
"""
PyTorch: optim
--------------

A third-order polynomial, trained to predict :math:`y=\sin(x)` from :math:`-\pi`
to :math:`\pi` by minimizing squared Euclidean distance.

This implementation uses the nn package from PyTorch to build the network.

Rather than manually updating the weights of the model as we have been doing,
we use the optim package to define an Optimizer that will update the weights
for us. The optim package defines many optimization algorithms that are commonly
used for deep learning, including SGD+momentum, RMSprop, Adam, etc.
"""
import torch
import math

# Create Tensors to hold input and outputs.
x = torch.linspace(-math.pi, math.pi, 2000)
y = torch.sin(x)

# Prepare the input tensor (x, x^2, x^3).
p = torch.tensor([1, 2, 3])
xx = x.unsqueeze(-1).pow(p)
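# Note: x.unsqueeze(-1) has shape (2000, 1); raising it to the powers in p
# broadcasts to a (2000, 3) tensor whose columns are x, x^2, and x^3.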

# Use the nn package to define our model and loss function.
model = torch.nn.Sequential(
    torch.nn.Linear(3, 1),
    torch.nn.Flatten(0, 1)
)
loss_fn = torch.nn.MSELoss(reduction='sum')
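# Linear(3, 1) maps each row of xx (the features x, x^2, x^3) to a single value,
# producing a (2000, 1) tensor; Flatten(0, 1) collapses that to shape (2000,) so
# the prediction matches the shape of y in the loss.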

# Use the optim package to define an Optimizer that will update the weights of
# the model for us. Here we will use RMSprop; the optim package contains many
# other optimization algorithms. The first argument to the RMSprop constructor
# tells the optimizer which Tensors it should update.
learning_rate = 1e-3
optimizer = torch.optim.RMSprop(model.parameters(), lr=learning_rate)
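# Any other optimizer from torch.optim could be swapped in here; for example
# (the learning rates below are illustrative, not tuned for this problem):
#   optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
#   optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)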
for t in range(2000):
    # Forward pass: compute predicted y by passing x to the model.
    y_pred = model(xx)

    # Compute and print loss.
    loss = loss_fn(y_pred, y)
    if t % 100 == 99:
        print(t, loss.item())

    # Before the backward pass, use the optimizer object to zero all of the
    # gradients for the variables it will update (which are the learnable
    # weights of the model). This is because, by default, gradients are
    # accumulated in buffers (i.e., not overwritten) whenever .backward()
    # is called. Check out the docs of torch.autograd.backward for more details.
    optimizer.zero_grad()
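    # Since the optimizer was constructed with all of model.parameters(), calling
    # model.zero_grad() here would have the same effect.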

    # Backward pass: compute gradient of the loss with respect to the model
    # parameters.
    loss.backward()

    # Calling the step function on an Optimizer makes an update to its
    # parameters.
    optimizer.step()

linear_layer = model[0]
print(f'Result: y = {linear_layer.bias.item()} + {linear_layer.weight[:, 0].item()} x + {linear_layer.weight[:, 1].item()} x^2 + {linear_layer.weight[:, 2].item()} x^3')
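# Because sin is odd and the inputs are symmetric about zero, the learned bias and
# x^2 coefficient should come out close to zero; the x and x^3 terms do the work of
# approximating the sine curve on [-pi, pi].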