# -*- coding: utf-8 -*-

"""
(Prototype) MaskedTensor Advanced Semantics
===========================================
"""

######################################################################
#
# Before working on this tutorial, please make sure to review our
# `MaskedTensor Overview tutorial <https://pytorch.org/tutorials/prototype/maskedtensor_overview.html>`__.
#
# The purpose of this tutorial is to help users understand how some of the advanced semantics work
# and how they came to be. We will focus on two in particular:
#
# *. Differences between MaskedTensor and `NumPy's MaskedArray <https://numpy.org/doc/stable/reference/maskedarray.html>`__
# *. Reduction semantics
#
# Preparation
# -----------
#

import torch
from torch.masked import masked_tensor
import numpy as np
import warnings

# Disable prototype warnings and such
warnings.filterwarnings(action='ignore', category=UserWarning)

######################################################################
# MaskedTensor vs NumPy's MaskedArray
# -----------------------------------
#
# NumPy's ``MaskedArray`` has a few fundamental semantic differences from MaskedTensor.
#
# *. Their factory function and basic definition inverts the mask (similar to ``torch.nn.MHA``); that is, MaskedTensor
#    uses ``True`` to denote "specified" and ``False`` to denote "unspecified", or "valid"/"invalid",
#    whereas NumPy does the opposite. We believe that our mask definition is not only more intuitive,
#    but it also aligns better with the existing semantics in PyTorch as a whole.
# *. Intersection semantics.
#    In NumPy, if one of the two elements being combined is masked out, the resulting element will be
#    masked out as well -- in practice, they
#    `apply the logical_or operator <https://github.com/numpy/numpy/blob/68299575d8595d904aff6f28e12d21bf6428a4ba/numpy/ma/core.py#L1016-L1024>`__.
#

data = torch.arange(5.)
mask = torch.tensor([True, True, False, True, False])
npm0 = np.ma.masked_array(data.numpy(), (~mask).numpy())
npm1 = np.ma.masked_array(data.numpy(), (mask).numpy())

print("npm0:\n", npm0)
print("npm1:\n", npm1)
print("npm0 + npm1:\n", npm0 + npm1)

######################################################################
# Meanwhile, MaskedTensor does not support addition or other binary operators with masks that don't match --
# to understand why, please see the :ref:`section on reductions <reduction-semantics>`.
#

mt0 = masked_tensor(data, mask)
mt1 = masked_tensor(data, ~mask)
print("mt0:\n", mt0)
print("mt1:\n", mt1)

try:
    mt0 + mt1
except ValueError as e:
    print("mt0 + mt1 failed. Error:", e)

######################################################################
# However, if this behavior is desired, MaskedTensor does support these semantics by giving access to the data and masks
# and conveniently converting a MaskedTensor to a Tensor with masked values filled in using :func:`to_tensor`.
# For example:
#

t0 = mt0.to_tensor(0)
t1 = mt1.to_tensor(0)
mt2 = masked_tensor(t0 + t1, mt0.get_mask() & mt1.get_mask())

print("t0:\n", t0)
print("t1:\n", t1)
print("mt2 (t0 + t1):\n", mt2)

######################################################################
# Note that the mask is ``mt0.get_mask() & mt1.get_mask()`` since :class:`MaskedTensor`'s mask is the inverse of NumPy's.
#
# .. _reduction-semantics:
#
# Reduction Semantics
# -------------------
#
# Recall in `MaskedTensor's Overview tutorial <https://pytorch.org/tutorials/prototype/maskedtensor_overview.html>`__
# we discussed "Implementing missing torch.nan* ops".
# Those are examples of reductions -- operators that remove one
# (or more) dimensions from a Tensor and then aggregate the result. In this section, we will use reduction semantics
# to motivate our strict requirements around matching masks from above.
#
# Fundamentally, :class:`MaskedTensor`\ s perform the same reduction operation while ignoring the masked out
# (unspecified) values. By way of example:
#

data = torch.arange(12, dtype=torch.float).reshape(3, 4)
mask = torch.randint(2, (3, 4), dtype=torch.bool)
mt = masked_tensor(data, mask)

print("data:\n", data)
print("mask:\n", mask)
print("mt:\n", mt)

######################################################################
# Now, the different reductions (all on dim=1):
#

print("torch.sum:\n", torch.sum(mt, 1))
print("torch.mean:\n", torch.mean(mt, 1))
print("torch.prod:\n", torch.prod(mt, 1))
print("torch.amin:\n", torch.amin(mt, 1))
print("torch.amax:\n", torch.amax(mt, 1))

######################################################################
# Of note, the value under a masked out element is not guaranteed to have any specific value, especially if the
# row or column is entirely masked out (the same is true for normalizations).
# For more details on masked semantics, see this `RFC <https://github.com/pytorch/rfcs/pull/27>`__.
#
# Now, we can revisit the question: why do we enforce the invariant that masks must match for binary operators?
# In other words, why don't we use the same semantics as ``np.ma.masked_array``?
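######################################################################
# One reason this invariant matters: reductions only behave like "fill the
# unspecified values with an identity element" for certain operators. Below is
# a minimal sketch using plain Tensors (an illustration only, not the
# MaskedTensor implementation): ``sum`` can fill unspecified slots with the
# additive identity 0, whereas ``mean`` has no such fill value and needs an
# explicit per-row count instead.
#

```python
import torch

data = torch.arange(12, dtype=torch.float).reshape(3, 4)
mask = torch.tensor([[True, False, True, True],
                     [False, True, True, False],
                     [True, True, False, True]])

# sum: fill unspecified values with 0 (the additive identity), then reduce
masked_sum = torch.where(mask, data, torch.zeros_like(data)).sum(1)

# mean: no single fill value works; divide the masked sum by the number of
# specified elements in each row instead
masked_mean = masked_sum / mask.sum(1)

print(masked_sum)   # per-row sums over specified elements only
print(masked_mean)  # per-row means over specified elements only
```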
# Consider the following example:
#

data0 = torch.arange(10.).reshape(2, 5)
data1 = torch.arange(10.).reshape(2, 5) + 10
mask0 = torch.tensor([[True, True, False, False, False], [False, False, False, True, True]])
mask1 = torch.tensor([[False, False, False, True, True], [True, True, False, False, False]])
npm0 = np.ma.masked_array(data0.numpy(), (mask0).numpy())
npm1 = np.ma.masked_array(data1.numpy(), (mask1).numpy())

print("npm0:", npm0)
print("npm1:", npm1)

######################################################################
# Now, let's try addition:
#

print("(npm0 + npm1).sum(0):\n", (npm0 + npm1).sum(0))
print("npm0.sum(0) + npm1.sum(0):\n", npm0.sum(0) + npm1.sum(0))

######################################################################
# Sum and addition should clearly be associative, but with NumPy's semantics, they are not,
# which can certainly be confusing for the user.
#
# :class:`MaskedTensor`, on the other hand, will simply not allow this operation since ``mask0 != mask1``.
# That being said, if the user wishes, there are ways around this
# (for example, filling in the MaskedTensor's undefined elements with 0 values using :func:`to_tensor`
# as shown below), but the user must now be more explicit with their intentions.
#

mt0 = masked_tensor(data0, ~mask0)
mt1 = masked_tensor(data1, ~mask1)

(mt0.to_tensor(0) + mt1.to_tensor(0)).sum(0)

######################################################################
# Conclusion
# ----------
#
# In this tutorial, we have learned about the different design decisions behind MaskedTensor and
# NumPy's MaskedArray, as well as reduction semantics.
# In general, MaskedTensor is designed to avoid ambiguity and confusing semantics (for example, we try to preserve
# the associative property amongst binary operations), which in turn can necessitate the user
# to be more intentional with their code at times, but we believe this to
# be the better move.
# If you have any thoughts on this, please `let us know <https://github.com/pytorch/pytorch/issues>`__!
#
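######################################################################
# To close the loop on the associativity argument, here is a short sketch
# (assuming the prototype ``torch.masked`` API behaves as in the examples
# above) showing that once the masks *do* match, addition and sum interchange
# as expected:
#

```python
import torch
from torch.masked import masked_tensor

data0 = torch.arange(10.).reshape(2, 5)
data1 = torch.arange(10.).reshape(2, 5) + 10
# One shared mask; every column has at least one specified element,
# so the reduced results are fully specified
mask = torch.tensor([[True, True, False, True, True],
                     [False, True, True, True, True]])

mt0 = masked_tensor(data0, mask)
mt1 = masked_tensor(data1, mask)

# With matching masks, both orders of (add, then sum) agree
lhs = torch.sum(mt0 + mt1, 0)
rhs = torch.sum(mt0, 0) + torch.sum(mt1, 0)
print(lhs)
print(rhs)
```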