"""1Tensors2========34Tensors are a specialized data structure that are very similar to arrays5and matrices. In PyTorch, we use tensors to encode the inputs and6outputs of a model, as well as the model’s parameters.78Tensors are similar to NumPy’s ndarrays, except that tensors can run on9GPUs or other specialized hardware to accelerate computing. If you’re familiar with ndarrays, you’ll10be right at home with the Tensor API. If not, follow along in this quick11API walkthrough.1213"""1415import torch16import numpy as np171819######################################################################20# Tensor Initialization21# ~~~~~~~~~~~~~~~~~~~~~22#23# Tensors can be initialized in various ways. Take a look at the following examples:24#25# **Directly from data**26#27# Tensors can be created directly from data. The data type is automatically inferred.2829data = [[1, 2], [3, 4]]30x_data = torch.tensor(data)3132######################################################################33# **From a NumPy array**34#35# Tensors can be created from NumPy arrays (and vice versa - see :ref:`bridge-to-np-label`).36np_array = np.array(data)37x_np = torch.from_numpy(np_array)383940###############################################################41# **From another tensor:**42#43# The new tensor retains the properties (shape, datatype) of the argument tensor, unless explicitly overridden.4445x_ones = torch.ones_like(x_data) # retains the properties of x_data46print(f"Ones Tensor: \n {x_ones} \n")4748x_rand = torch.rand_like(x_data, dtype=torch.float) # overrides the datatype of x_data49print(f"Random Tensor: \n {x_rand} \n")505152######################################################################53# **With random or constant values:**54#55# ``shape`` is a tuple of tensor dimensions. In the functions below, it determines the dimensionality of the output tensor.5657shape = (2, 3,)58rand_tensor = torch.rand(shape)59ones_tensor = torch.ones(shape)60zeros_tensor = torch.zeros(shape)6162print(f"Random Tensor: \n {rand_tensor} \n")63print(f"Ones Tensor: \n {ones_tensor} \n")64print(f"Zeros Tensor: \n {zeros_tensor}")6566676869######################################################################70# --------------71#727374######################################################################75# Tensor Attributes76# ~~~~~~~~~~~~~~~~~77#78# Tensor attributes describe their shape, datatype, and the device on which they are stored.7980tensor = torch.rand(3, 4)8182print(f"Shape of tensor: {tensor.shape}")83print(f"Datatype of tensor: {tensor.dtype}")84print(f"Device tensor is stored on: {tensor.device}")858687######################################################################88# --------------89#909192######################################################################93# Tensor Operations94# ~~~~~~~~~~~~~~~~~95#96# Over 100 tensor operations, including transposing, indexing, slicing,97# mathematical operations, linear algebra, random sampling, and more are98# comprehensively described99# `here <https://pytorch.org/docs/stable/torch.html>`__.100#101# Each of them can be run on the GPU (at typically higher speeds than on a102# CPU). 


######################################################################
# --------------
#


######################################################################
# Tensor Operations
# ~~~~~~~~~~~~~~~~~
#
# Over 100 tensor operations, including transposing, indexing, slicing,
# mathematical operations, linear algebra, random sampling, and more are
# comprehensively described
# `here <https://pytorch.org/docs/stable/torch.html>`__.
#
# Each of them can be run on the GPU (at typically higher speeds than on a
# CPU). If you’re using Colab, allocate a GPU by going to Edit > Notebook
# Settings.
#

# We move our tensor to the GPU if available
if torch.cuda.is_available():
    tensor = tensor.to('cuda')
    print(f"Device tensor is stored on: {tensor.device}")


######################################################################
# Try out some of the operations from the list.
# If you're familiar with the NumPy API, you'll find the Tensor API a breeze to use.
#

###############################################################
# **Standard numpy-like indexing and slicing:**

tensor = torch.ones(4, 4)
tensor[:, 1] = 0
print(tensor)

######################################################################
# **Joining tensors** You can use ``torch.cat`` to concatenate a sequence of tensors along a given dimension.
# See also `torch.stack <https://pytorch.org/docs/stable/generated/torch.stack.html>`__,
# another tensor joining op that is subtly different from ``torch.cat``.
t1 = torch.cat([tensor, tensor, tensor], dim=1)
print(t1)

######################################################################
# **Multiplying tensors**

# This computes the element-wise product
print(f"tensor.mul(tensor) \n {tensor.mul(tensor)} \n")
# Alternative syntax:
print(f"tensor * tensor \n {tensor * tensor}")

######################################################################
#
# This computes the matrix multiplication between two tensors
print(f"tensor.matmul(tensor.T) \n {tensor.matmul(tensor.T)} \n")
# Alternative syntax:
print(f"tensor @ tensor.T \n {tensor @ tensor.T}")


######################################################################
# **In-place operations**
# Operations that have a ``_`` suffix are in-place. For example, ``x.copy_(y)`` and ``x.t_()`` change ``x``.

print(tensor, "\n")
tensor.add_(5)
print(tensor)

######################################################################
# .. note::
#      In-place operations save some memory, but can be problematic when computing derivatives because of an immediate loss
#      of history. Hence, their use is discouraged.

######################################################################
# --------------
#


######################################################################
# .. _bridge-to-np-label:
#
# Bridge with NumPy
# ~~~~~~~~~~~~~~~~~
# Tensors on the CPU and NumPy arrays can share their underlying memory
# locations, and changing one will change the other.


######################################################################
# Tensor to NumPy array
# ^^^^^^^^^^^^^^^^^^^^^
t = torch.ones(5)
print(f"t: {t}")
n = t.numpy()
print(f"n: {n}")

######################################################################
# A change in the tensor reflects in the NumPy array.

t.add_(1)
print(f"t: {t}")
print(f"n: {n}")


######################################################################
# NumPy array to Tensor
# ^^^^^^^^^^^^^^^^^^^^^
n = np.ones(5)
t = torch.from_numpy(n)

######################################################################
# Changes in the NumPy array reflect in the tensor.
np.add(n, 1, out=n)
print(f"t: {t}")
print(f"n: {n}")
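
######################################################################
# One closing contrast between the two constructors used in this section:
# ``torch.from_numpy`` shares memory with the source array, while
# ``torch.tensor`` copies its input, so the copy does not see later
# changes to the array. A minimal sketch (``n2`` and ``t_copy`` are
# illustrative names):

n2 = np.ones(5)
t_copy = torch.tensor(n2)  # copies the data; no shared memory
np.add(n2, 1, out=n2)      # mutate the array in place
print(f"t_copy: {t_copy}")  # unchanged: still all ones
print(f"n2: {n2}")          # now all twos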