GitHub Repository: pytorch/tutorials
Path: blob/main/beginner_source/basics/tensorqs_tutorial.py
"""
`Learn the Basics <intro.html>`_ ||
`Quickstart <quickstart_tutorial.html>`_ ||
**Tensors** ||
`Datasets & DataLoaders <data_tutorial.html>`_ ||
`Transforms <transforms_tutorial.html>`_ ||
`Build Model <buildmodel_tutorial.html>`_ ||
`Autograd <autogradqs_tutorial.html>`_ ||
`Optimization <optimization_tutorial.html>`_ ||
`Save & Load Model <saveloadrun_tutorial.html>`_

Tensors
==========================

Tensors are a specialized data structure, very similar to arrays and matrices.
In PyTorch, we use tensors to encode the inputs and outputs of a model, as well as the model’s parameters.

Tensors are similar to `NumPy’s <https://numpy.org/>`_ ndarrays, except that tensors can run on GPUs or other
hardware accelerators. In fact, tensors and NumPy arrays can often share the same underlying memory,
eliminating the need to copy data (see :ref:`bridge-to-np-label`). Tensors are also optimized for automatic
differentiation (we'll see more about that later in the `Autograd <autogradqs_tutorial.html>`__ section).
If you’re familiar with ndarrays, you’ll be right at home with the Tensor API. If not, follow along!
"""

import torch
import numpy as np

######################################################################
# Initializing a Tensor
# ~~~~~~~~~~~~~~~~~~~~~
#
# Tensors can be initialized in various ways. Take a look at the following examples:
#
# **Directly from data**
#
# Tensors can be created directly from data. The data type is automatically inferred.

data = [[1, 2], [3, 4]]
x_data = torch.tensor(data)

######################################################################
# **From a NumPy array**
#
# Tensors can be created from NumPy arrays (and vice versa - see :ref:`bridge-to-np-label`).
np_array = np.array(data)
x_np = torch.from_numpy(np_array)


###############################################################
# **From another tensor:**
#
# The new tensor retains the properties (shape, datatype) of the argument tensor, unless explicitly overridden.

x_ones = torch.ones_like(x_data) # retains the properties of x_data
print(f"Ones Tensor: \n {x_ones} \n")

x_rand = torch.rand_like(x_data, dtype=torch.float) # overrides the datatype of x_data
print(f"Random Tensor: \n {x_rand} \n")


######################################################################
# **With random or constant values:**
#
# ``shape`` is a tuple of tensor dimensions. In the functions below, it determines the dimensionality of the output tensor.

shape = (2, 3,)
rand_tensor = torch.rand(shape)
ones_tensor = torch.ones(shape)
zeros_tensor = torch.zeros(shape)

print(f"Random Tensor: \n {rand_tensor} \n")
print(f"Ones Tensor: \n {ones_tensor} \n")
print(f"Zeros Tensor: \n {zeros_tensor}")


######################################################################
# --------------
#

######################################################################
# Attributes of a Tensor
# ~~~~~~~~~~~~~~~~~~~~~~
#
# Tensor attributes describe a tensor's shape, datatype, and the device on which it is stored.

tensor = torch.rand(3, 4)

print(f"Shape of tensor: {tensor.shape}")
print(f"Datatype of tensor: {tensor.dtype}")
print(f"Device tensor is stored on: {tensor.device}")


######################################################################
# --------------
#

######################################################################
# Operations on Tensors
# ~~~~~~~~~~~~~~~~~~~~~~~
#
# Over 100 tensor operations, including arithmetic, linear algebra, matrix manipulation (transposing,
# indexing, slicing), sampling, and more are comprehensively described
# `here <https://pytorch.org/docs/stable/torch.html>`__.
#
# Each of these operations can be run on the GPU (typically at higher speeds than on a
# CPU). If you're using Colab, allocate a GPU by going to Runtime > Change runtime type > GPU.
#
# By default, tensors are created on the CPU. We need to explicitly move tensors to the GPU using
# the ``.to`` method (after checking for GPU availability). Keep in mind that copying large tensors
# across devices can be expensive in terms of time and memory!

# We move our tensor to the GPU if available
if torch.cuda.is_available():
    tensor = tensor.to("cuda")
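
######################################################################
# A small sketch (an addition to the original tutorial): creating a tensor on
# the target device directly avoids the separate CPU-to-GPU copy.

device = "cuda" if torch.cuda.is_available() else "cpu"
tensor_on_device = torch.rand(3, 4, device=device)  # hypothetical name, for illustration
print(f"Created on: {tensor_on_device.device}")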


######################################################################
# Try out some of the operations from the list.
# If you're familiar with the NumPy API, you'll find the Tensor API a breeze to use.
#

###############################################################
# **Standard numpy-like indexing and slicing:**

tensor = torch.ones(4, 4)
print(f"First row: {tensor[0]}")
print(f"First column: {tensor[:, 0]}")
print(f"Last column: {tensor[..., -1]}")
tensor[:, 1] = 0
print(tensor)

######################################################################
# **Joining tensors** You can use ``torch.cat`` to concatenate a sequence of tensors along a given dimension.
# See also `torch.stack <https://pytorch.org/docs/stable/generated/torch.stack.html>`__,
# another tensor joining operator that is subtly different from ``torch.cat`` (a short sketch follows below).
t1 = torch.cat([tensor, tensor, tensor], dim=1)
print(t1)
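
######################################################################
# A brief sketch of that difference (an addition to the original tutorial):
# ``torch.cat`` joins along an existing dimension, while ``torch.stack``
# inserts a new one.

stacked = torch.stack([tensor, tensor, tensor], dim=0)
print(f"cat shape: {t1.shape}")         # torch.Size([4, 12])
print(f"stack shape: {stacked.shape}")  # torch.Size([3, 4, 4])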


######################################################################
# **Arithmetic operations**

# This computes the matrix multiplication between two tensors. y1, y2, y3 will have the same value.
# ``tensor.T`` returns the transpose of a tensor.
y1 = tensor @ tensor.T
y2 = tensor.matmul(tensor.T)

y3 = torch.rand_like(y1)
torch.matmul(tensor, tensor.T, out=y3)


# This computes the element-wise product. z1, z2, z3 will have the same value.
z1 = tensor * tensor
z2 = tensor.mul(tensor)

z3 = torch.rand_like(tensor)
torch.mul(tensor, tensor, out=z3)
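
######################################################################
# A short aside (an addition to the original tutorial): element-wise
# operations also broadcast, NumPy-style, when shapes are compatible.

row = torch.tensor([1.0, 2.0, 3.0, 4.0])  # shape (4,)
print(tensor * row)  # the row is broadcast across each of tensor's 4 rows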


######################################################################
# **Single-element tensors** If you have a one-element tensor, for example by aggregating all
# values of a tensor into one value, you can convert it to a Python
# numerical value using ``item()``:

agg = tensor.sum()
agg_item = agg.item()
print(agg_item, type(agg_item))
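
######################################################################
# A related note (an addition to the original tutorial): ``item()`` only works
# on one-element tensors; ``tolist()`` converts a tensor of any size to a
# (nested) Python list.

print(tensor.tolist())  # nested list of Python floats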


######################################################################
# **In-place operations**
# Operations that store the result into the operand are called in-place. They are denoted by a ``_`` suffix.
# For example: ``x.copy_(y)`` and ``x.t_()`` will change ``x``.

print(f"{tensor} \n")
tensor.add_(5)
print(tensor)

######################################################################
# .. note::
#      In-place operations save some memory, but can be problematic when computing derivatives because of an immediate loss
#      of history. Hence, their use is discouraged.
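
######################################################################
# A minimal sketch of that failure mode (an addition to the original
# tutorial): modifying a tensor that autograd saved for the backward pass
# raises a ``RuntimeError``.

x = torch.ones(3, requires_grad=True)
y = x.exp()   # the backward formula for exp reuses its output y
y.add_(1)     # the in-place update invalidates that saved value
try:
    y.sum().backward()
except RuntimeError as e:
    print(f"RuntimeError: {e}")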


######################################################################
# --------------
#


######################################################################
# .. _bridge-to-np-label:
#
# Bridge with NumPy
# ~~~~~~~~~~~~~~~~~
# Tensors on the CPU and NumPy arrays can share their underlying memory
# locations, and changing one will change the other.


######################################################################
# Tensor to NumPy array
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
t = torch.ones(5)
print(f"t: {t}")
n = t.numpy()
print(f"n: {n}")

######################################################################
# A change in the tensor reflects in the NumPy array.

t.add_(1)
print(f"t: {t}")
print(f"n: {n}")


######################################################################
# NumPy array to Tensor
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
n = np.ones(5)
t = torch.from_numpy(n)

######################################################################
# Changes in the NumPy array reflect in the tensor.
np.add(n, 1, out=n)
print(f"t: {t}")
print(f"n: {n}")