"""1`Learn the Basics <intro.html>`_ ||2`Quickstart <quickstart_tutorial.html>`_ ||3**Tensors** ||4`Datasets & DataLoaders <data_tutorial.html>`_ ||5`Transforms <transforms_tutorial.html>`_ ||6`Build Model <buildmodel_tutorial.html>`_ ||7`Autograd <autogradqs_tutorial.html>`_ ||8`Optimization <optimization_tutorial.html>`_ ||9`Save & Load Model <saveloadrun_tutorial.html>`_1011Tensors12==========================1314Tensors are a specialized data structure that are very similar to arrays and matrices.15In PyTorch, we use tensors to encode the inputs and outputs of a model, as well as the model’s parameters.1617Tensors are similar to `NumPy’s <https://numpy.org/>`_ ndarrays, except that tensors can run on GPUs or other hardware accelerators. In fact, tensors and18NumPy arrays can often share the same underlying memory, eliminating the need to copy data (see :ref:`bridge-to-np-label`). Tensors19are also optimized for automatic differentiation (we'll see more about that later in the `Autograd <autogradqs_tutorial.html>`__20section). If you’re familiar with ndarrays, you’ll be right at home with the Tensor API. If not, follow along!21"""2223import torch24import numpy as np252627######################################################################28# Initializing a Tensor29# ~~~~~~~~~~~~~~~~~~~~~30#31# Tensors can be initialized in various ways. Take a look at the following examples:32#33# **Directly from data**34#35# Tensors can be created directly from data. The data type is automatically inferred.3637data = [[1, 2],[3, 4]]38x_data = torch.tensor(data)3940######################################################################41# **From a NumPy array**42#43# Tensors can be created from NumPy arrays (and vice versa - see :ref:`bridge-to-np-label`).44np_array = np.array(data)45x_np = torch.from_numpy(np_array)464748###############################################################49# **From another tensor:**50#51# The new tensor retains the properties (shape, datatype) of the argument tensor, unless explicitly overridden.5253x_ones = torch.ones_like(x_data) # retains the properties of x_data54print(f"Ones Tensor: \n {x_ones} \n")5556x_rand = torch.rand_like(x_data, dtype=torch.float) # overrides the datatype of x_data57print(f"Random Tensor: \n {x_rand} \n")585960######################################################################61# **With random or constant values:**62#63# ``shape`` is a tuple of tensor dimensions. 
######################################################################
# **With random or constant values:**
#
# ``shape`` is a tuple of tensor dimensions. In the functions below, it determines the dimensionality of the output tensor.

shape = (2, 3,)
rand_tensor = torch.rand(shape)
ones_tensor = torch.ones(shape)
zeros_tensor = torch.zeros(shape)

print(f"Random Tensor: \n {rand_tensor} \n")
print(f"Ones Tensor: \n {ones_tensor} \n")
print(f"Zeros Tensor: \n {zeros_tensor}")


######################################################################
# --------------
#

######################################################################
# Attributes of a Tensor
# ~~~~~~~~~~~~~~~~~~~~~~
#
# Tensor attributes describe their shape, datatype, and the device on which they are stored.

tensor = torch.rand(3, 4)

print(f"Shape of tensor: {tensor.shape}")
print(f"Datatype of tensor: {tensor.dtype}")
print(f"Device tensor is stored on: {tensor.device}")


######################################################################
# --------------
#

######################################################################
# Operations on Tensors
# ~~~~~~~~~~~~~~~~~~~~~~~
#
# Over 100 tensor operations, including arithmetic, linear algebra, matrix manipulation (transposing,
# indexing, slicing), sampling, and more are
# comprehensively described `here <https://pytorch.org/docs/stable/torch.html>`__.
#
# Each of these operations can be run on the GPU (typically at higher speeds than on a
# CPU). If you're using Colab, allocate a GPU by going to Runtime > Change runtime type > GPU.
#
# By default, tensors are created on the CPU. We need to explicitly move tensors to the GPU using
# the ``.to`` method (after checking for GPU availability). Keep in mind that copying large tensors
# across devices can be expensive in terms of time and memory!

# We move our tensor to the GPU if available
if torch.cuda.is_available():
    tensor = tensor.to("cuda")


######################################################################
# Try out some of the operations from the list.
# If you're familiar with the NumPy API, you'll find the Tensor API a breeze to use.
#

######################################################################
# **Standard numpy-like indexing and slicing:**

tensor = torch.ones(4, 4)
print(f"First row: {tensor[0]}")
print(f"First column: {tensor[:, 0]}")
print(f"Last column: {tensor[..., -1]}")
tensor[:, 1] = 0
print(tensor)

######################################################################
# **Joining tensors** You can use ``torch.cat`` to concatenate a sequence of tensors along a given dimension.
# See also `torch.stack <https://pytorch.org/docs/stable/generated/torch.stack.html>`__,
# another tensor joining operator that is subtly different from ``torch.cat``.
t1 = torch.cat([tensor, tensor, tensor], dim=1)
print(t1)

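######################################################################
# For a concrete sense of the difference: ``torch.cat`` joins tensors along
# an *existing* dimension, while ``torch.stack`` inserts a *new* dimension.
# Compare the resulting shapes:

print(torch.cat([tensor, tensor], dim=0).shape)    # torch.Size([8, 4])
print(torch.stack([tensor, tensor], dim=0).shape)  # torch.Size([2, 4, 4])
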
######################################################################
# **Arithmetic operations**

# This computes the matrix multiplication between two tensors. y1, y2, y3 will have the same value.
# ``tensor.T`` returns the transpose of a tensor.
y1 = tensor @ tensor.T
y2 = tensor.matmul(tensor.T)

y3 = torch.rand_like(y1)
torch.matmul(tensor, tensor.T, out=y3)


# This computes the element-wise product. z1, z2, z3 will have the same value.
z1 = tensor * tensor
z2 = tensor.mul(tensor)

z3 = torch.rand_like(tensor)
torch.mul(tensor, tensor, out=z3)


######################################################################
# **Single-element tensors** If you have a one-element tensor, for example by aggregating all
# values of a tensor into one value, you can convert it to a Python
# numerical value using ``item()``:

agg = tensor.sum()
agg_item = agg.item()
print(agg_item, type(agg_item))


######################################################################
# **In-place operations**
# Operations that store the result into the operand are called in-place. They are denoted by a ``_`` suffix.
# For example, ``x.copy_(y)`` and ``x.t_()`` will change ``x``.

print(f"{tensor} \n")
tensor.add_(5)
print(tensor)

######################################################################
# .. note::
#      In-place operations save some memory, but can be problematic when computing derivatives because of an immediate loss
#      of history. Hence, their use is discouraged.


######################################################################
# --------------
#

######################################################################
# .. _bridge-to-np-label:
#
# Bridge with NumPy
# ~~~~~~~~~~~~~~~~~
# Tensors on the CPU and NumPy arrays can share their underlying memory
# locations, and changing one will change the other.


######################################################################
# Tensor to NumPy array
# ^^^^^^^^^^^^^^^^^^^^^
t = torch.ones(5)
print(f"t: {t}")
n = t.numpy()
print(f"n: {n}")

######################################################################
# A change in the tensor reflects in the NumPy array.

t.add_(1)
print(f"t: {t}")
print(f"n: {n}")


######################################################################
# NumPy array to Tensor
# ^^^^^^^^^^^^^^^^^^^^^
n = np.ones(5)
t = torch.from_numpy(n)

######################################################################
# Changes in the NumPy array reflect in the tensor.
np.add(n, 1, out=n)
print(f"t: {t}")
print(f"n: {n}")

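######################################################################
# One caveat worth noting: this memory sharing only applies to tensors on
# the CPU. A tensor living on an accelerator has to be copied back to host
# memory first (for example with ``.cpu()``), and that copy no longer shares
# memory with the original tensor. A minimal sketch, assuming a CUDA device
# is available:

if torch.cuda.is_available():
    t_gpu = torch.ones(5, device="cuda")
    n_copy = t_gpu.cpu().numpy()  # copies the data back to the CPU
    t_gpu.add_(1)
    print(f"t_gpu: {t_gpu}")
    print(f"n_copy: {n_copy}")    # unchanged: the copy does not share memory
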