GitHub Repository: pytorch/tutorials
Path: blob/main/prototype_source/maskedtensor_sparsity.py
# -*- coding: utf-8 -*-

"""
(Prototype) MaskedTensor Sparsity
=================================
"""

######################################################################
# Before working on this tutorial, please make sure to review our
# `MaskedTensor Overview tutorial <https://pytorch.org/tutorials/prototype/maskedtensor_overview.html>`__.
#
# Introduction
# ------------
#
# Sparsity has been an area of rapid growth and importance within PyTorch; if any sparsity terms are confusing below,
# please refer to the `sparsity tutorial <https://pytorch.org/docs/stable/sparse.html>`__ for additional details.
#
# Sparse storage formats have been proven to be powerful in a variety of ways. As a primer, the first use case
# most practitioners think about is when the majority of elements are equal to zero (a high degree of sparsity),
# but even in cases of lower sparsity, certain formats (e.g. BSR) can take advantage of substructures within a matrix.
#
# .. note::
#
#     At the moment, MaskedTensor supports COO and CSR tensors with plans to support additional formats
#     (such as BSR and CSC) in the future. If you have any requests for additional formats,
#     please file a feature request `here <https://github.com/pytorch/pytorch/issues>`__!
#
# Principles
# ----------
#
# When creating a :class:`MaskedTensor` with sparse tensors, there are a few principles that must be observed:
#
# 1. ``data`` and ``mask`` must have the same storage format, whether that's :attr:`torch.strided`, :attr:`torch.sparse_coo`, or :attr:`torch.sparse_csr`
# 2. ``data`` and ``mask`` must have the same size, indicated by :func:`size()`
#
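######################################################################
# As a quick sketch of these two principles, the checks can be written out
# explicitly before construction (``check_masked_inputs`` is a hypothetical
# helper for illustration, not part of the MaskedTensor API):

```python
import torch

def check_masked_inputs(data, mask):
    # Principle 1: data and mask must share a storage format (layout).
    assert data.layout == mask.layout, "data and mask must have the same storage format"
    # Principle 2: data and mask must share a size.
    assert data.size() == mask.size(), "data and mask must have the same size"

data = torch.tensor([[0, 0, 3], [4, 0, 5]]).to_sparse()
mask = torch.tensor([[False, False, True], [False, False, True]]).to_sparse()
check_masked_inputs(data, mask)  # passes: both sparse COO, both of size (2, 3)
```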
# .. _sparse-coo-tensors:
#
# Sparse COO tensors
# ------------------
#
# In accordance with Principle #1, a sparse COO MaskedTensor is created by passing in two sparse COO tensors,
# which can be initialized by any of its constructors, for example :func:`torch.sparse_coo_tensor`.
#
# As a recap of `sparse COO tensors <https://pytorch.org/docs/stable/sparse.html#sparse-coo-tensors>`__, the COO format
# stands for "coordinate format", where the specified elements are stored as tuples of their indices and the
# corresponding values. That is, the following are provided:
#
# * ``indices``: array of size ``(ndim, nse)`` and dtype ``torch.int64``
# * ``values``: array of size ``(nse,)`` with any integer or floating point dtype
#
# where ``ndim`` is the dimensionality of the tensor and ``nse`` is the number of specified elements.
#
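######################################################################
# As a small sketch of these shapes (written with plain ``torch`` calls,
# independent of MaskedTensor): a 2-D tensor with three specified elements has
# ``indices`` of shape ``(ndim, nse) = (2, 3)`` and ``values`` of shape
# ``(nse,) = (3,)``.

```python
import torch

i = [[0, 1, 1],   # row indices
     [2, 0, 2]]   # column indices
v = [3.0, 4.0, 5.0]

s = torch.sparse_coo_tensor(i, v, (2, 3)).coalesce()

print(s.indices().shape)  # torch.Size([2, 3]) -- (ndim, nse)
print(s.values().shape)   # torch.Size([3])    -- (nse,)
```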
# For both sparse COO and CSR tensors, you can construct a :class:`MaskedTensor` by doing either:
#
# 1. ``masked_tensor(sparse_tensor_data, sparse_tensor_mask)``
# 2. ``dense_masked_tensor.to_sparse_coo()`` or ``dense_masked_tensor.to_sparse_csr()``
#
# The second method is easier to illustrate so we've shown that below, but for more on the first and the nuances behind
# the approach, please read the :ref:`Sparse COO Appendix <sparse-coo-appendix>`.
#

import torch
from torch.masked import masked_tensor
import warnings

# Disable prototype warnings and such
warnings.filterwarnings(action='ignore', category=UserWarning)

values = torch.tensor([[0, 0, 3], [4, 0, 5]])
mask = torch.tensor([[False, False, True], [False, False, True]])
mt = masked_tensor(values, mask)
sparse_coo_mt = mt.to_sparse_coo()

print("mt:\n", mt)
print("mt (sparse coo):\n", sparse_coo_mt)
print("mt data (sparse coo):\n", sparse_coo_mt.get_data())
######################################################################
# Sparse CSR tensors
# ------------------
#
# Similarly, :class:`MaskedTensor` also supports the
# `CSR (Compressed Sparse Row) <https://pytorch.org/docs/stable/sparse.html#sparse-csr-tensor>`__
# sparse tensor format. Instead of storing the tuples of the indices like sparse COO tensors, sparse CSR tensors
# aim to decrease the memory requirements by storing compressed row indices.
# In particular, a CSR sparse tensor consists of three 1-D tensors:
#
# * ``crow_indices``: array of compressed row indices with size ``(size[0] + 1,)``. This array indicates which row
#   a given entry in ``values`` lives in. The last element is the number of specified elements,
#   while ``crow_indices[i+1] - crow_indices[i]`` indicates the number of specified elements in row ``i``.
# * ``col_indices``: array of size ``(nnz,)``. Indicates the column indices for each value.
# * ``values``: array of size ``(nnz,)``. Contains the values of the CSR tensor.
#
# Of note, both sparse COO and CSR tensors are in a `beta <https://pytorch.org/docs/stable/index.html>`__ state.
#
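######################################################################
# As a sketch of these three fields (again with plain ``torch`` calls),
# note how ``crow_indices[i + 1] - crow_indices[i]`` recovers the number
# of specified elements in row ``i``:

```python
import torch

crow_indices = torch.tensor([0, 2, 4])     # size (size[0] + 1,) = (3,)
col_indices = torch.tensor([0, 2, 0, 1])   # size (nnz,) = (4,)
csr_values = torch.tensor([10.0, 20.0, 30.0, 40.0])

csr = torch.sparse_csr_tensor(crow_indices, col_indices, csr_values, size=(2, 3))

# Number of specified elements per row: crow_indices[i+1] - crow_indices[i]
row_counts = crow_indices[1:] - crow_indices[:-1]
print(row_counts)  # tensor([2, 2]): two specified elements in each row
print(csr.to_dense())
```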
# By way of example:
#

mt_sparse_csr = mt.to_sparse_csr()

print("mt (sparse csr):\n", mt_sparse_csr)
print("mt data (sparse csr):\n", mt_sparse_csr.get_data())

######################################################################
# Supported Operations
# --------------------
#
# Unary
# ^^^^^
# All `unary operators <https://pytorch.org/docs/master/masked.html#unary-operators>`__ are supported, e.g.:
#

mt.sin()
######################################################################
# Binary
# ^^^^^^
# `Binary operators <https://pytorch.org/docs/master/masked.html#binary-operators>`__ are also supported, but the
# input masks from the two masked tensors must match. For more information on why this decision was made, please
# refer to our `MaskedTensor: Advanced Semantics tutorial <https://pytorch.org/tutorials/prototype/maskedtensor_advanced_semantics.html>`__.
#
# Please find an example below:
#

i = [[0, 1, 1],
     [2, 0, 2]]
v1 = [3, 4, 5]
v2 = [20, 30, 40]
m = torch.tensor([True, False, True])

s1 = torch.sparse_coo_tensor(i, v1, (2, 3))
s2 = torch.sparse_coo_tensor(i, v2, (2, 3))
mask = torch.sparse_coo_tensor(i, m, (2, 3))

mt1 = masked_tensor(s1, mask)
mt2 = masked_tensor(s2, mask)

print("mt1:\n", mt1)
print("mt2:\n", mt2)

######################################################################
#

print("torch.div(mt2, mt1):\n", torch.div(mt2, mt1))
print("torch.mul(mt1, mt2):\n", torch.mul(mt1, mt2))
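######################################################################
# Conversely, here is a sketch of what we expect when the input masks do
# *not* match: the binary operation should be rejected (the exact error
# type and message may vary across versions).

```python
import torch
from torch.masked import masked_tensor

i = [[0, 1, 1],
     [2, 0, 2]]
v = [3, 4, 5]

s = torch.sparse_coo_tensor(i, v, (2, 3))
mask_a = torch.sparse_coo_tensor(i, torch.tensor([True, False, True]), (2, 3))
mask_b = torch.sparse_coo_tensor(i, torch.tensor([True, True, False]), (2, 3))

mta = masked_tensor(s, mask_a)
mtb = masked_tensor(s, mask_b)

rejected = False
try:
    torch.mul(mta, mtb)  # masks differ, so this should be rejected
except Exception:
    rejected = True
print("binary op with mismatched masks rejected:", rejected)
```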
######################################################################
# Reductions
# ^^^^^^^^^^
# Finally, `reductions <https://pytorch.org/docs/master/masked.html#reductions>`__ are supported:
#

mt

######################################################################
#

print("mt.sum():\n", mt.sum())
print("mt.sum(dim=1):\n", mt.sum(dim=1))
print("mt.amin():\n", mt.amin())
######################################################################
# MaskedTensor Helper Methods
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^
# For convenience, :class:`MaskedTensor` has a number of methods to help convert between the different layouts
# and identify the current layout:
#
# Setup:
#

v = [[3, 0, 0],
     [0, 4, 5]]
m = [[True, False, False],
     [False, True, True]]

mt = masked_tensor(torch.tensor(v), torch.tensor(m))
mt

######################################################################
# :meth:`MaskedTensor.to_sparse_coo` / :meth:`MaskedTensor.to_sparse_csr` / :meth:`MaskedTensor.to_dense`
# help convert between the different layouts.
#

mt_sparse_coo = mt.to_sparse_coo()
mt_sparse_csr = mt.to_sparse_csr()
mt_dense = mt_sparse_coo.to_dense()
######################################################################
# :meth:`MaskedTensor.is_sparse` -- this will check if the :class:`MaskedTensor`'s layout
# matches any of the supported sparse layouts (currently COO and CSR).
#

print("mt_dense.is_sparse: ", mt_dense.is_sparse)
print("mt_sparse_coo.is_sparse: ", mt_sparse_coo.is_sparse)
print("mt_sparse_csr.is_sparse: ", mt_sparse_csr.is_sparse)

######################################################################
# :meth:`MaskedTensor.is_sparse_coo`
#

print("mt_dense.is_sparse_coo(): ", mt_dense.is_sparse_coo())
print("mt_sparse_coo.is_sparse_coo(): ", mt_sparse_coo.is_sparse_coo())
print("mt_sparse_csr.is_sparse_coo(): ", mt_sparse_csr.is_sparse_coo())

######################################################################
# :meth:`MaskedTensor.is_sparse_csr`
#

print("mt_dense.is_sparse_csr(): ", mt_dense.is_sparse_csr())
print("mt_sparse_coo.is_sparse_csr(): ", mt_sparse_coo.is_sparse_csr())
print("mt_sparse_csr.is_sparse_csr(): ", mt_sparse_csr.is_sparse_csr())
######################################################################
# Appendix
# --------
#
# .. _sparse-coo-appendix:
#
# Sparse COO Construction
# ^^^^^^^^^^^^^^^^^^^^^^^
#
# Recall in our :ref:`original example <sparse-coo-tensors>`, we created a :class:`MaskedTensor`
# and then converted it to a sparse COO MaskedTensor with :meth:`MaskedTensor.to_sparse_coo`.
#
# Alternatively, we can also construct a sparse COO MaskedTensor directly by passing in two sparse COO tensors:
#

values = torch.tensor([[0, 0, 3], [4, 0, 5]]).to_sparse()
mask = torch.tensor([[False, False, True], [False, False, True]]).to_sparse()
mt = masked_tensor(values, mask)

print("values:\n", values)
print("mask:\n", mask)
print("mt:\n", mt)
######################################################################
# Instead of using :meth:`torch.Tensor.to_sparse`, we can also create the sparse COO tensors directly,
# which brings us to a warning:
#
# .. warning::
#
#     When using a function like :meth:`MaskedTensor.to_sparse_coo` (analogous to :meth:`Tensor.to_sparse`),
#     if the user does not specify the indices like in the above example,
#     then the 0 values will be "unspecified" by default.
#
# Below, we explicitly specify the 0's:
#

i = [[0, 1, 1],
     [2, 0, 2]]
v = [3, 4, 5]
m = torch.tensor([True, False, True])
values = torch.sparse_coo_tensor(i, v, (2, 3))
mask = torch.sparse_coo_tensor(i, m, (2, 3))
mt2 = masked_tensor(values, mask)

print("values:\n", values)
print("mask:\n", mask)
print("mt2:\n", mt2)
######################################################################
# Note that ``mt`` and ``mt2`` look identical on the surface, and in the vast majority of operations, will yield the same
# result. But this brings us to a detail on the implementation:
#
# ``data`` and ``mask`` -- only for sparse MaskedTensors -- can have a different number of elements (:func:`nnz`)
# **at creation**, but the indices of ``mask`` must then be a subset of the indices of ``data``. In this case,
# ``data`` will assume the shape of ``mask`` by ``data = data.sparse_mask(mask)``; in other words, any of the elements
# in ``data`` that are not ``True`` in ``mask`` (that is, not specified) will be thrown away.
#
# Therefore, under the hood, the data looks slightly different; ``mt2`` has the "4" value masked out and ``mt``
# is completely without it. Their underlying data has different shapes,
# which would make operations like ``mt + mt2`` invalid.
#

print("mt data:\n", mt.get_data())
print("mt2 data:\n", mt2.get_data())
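######################################################################
# The pruning step can be sketched directly with :meth:`torch.Tensor.sparse_mask`,
# which keeps only the entries of a strided tensor at the specified indices of a
# sparse tensor; everything else is thrown away.

```python
import torch

dense_data = torch.tensor([[0., 0., 3.], [4., 0., 5.]])
# A sparse tensor whose specified indices play the role of the mask
pattern = torch.tensor([[0., 0., 1.], [0., 0., 1.]]).to_sparse()

# Keep only the entries of dense_data at pattern's specified indices
pruned = dense_data.sparse_mask(pattern)
print(pruned.coalesce().values())  # tensor([3., 5.])
```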
######################################################################
# .. _sparse-csr-appendix:
#
# Sparse CSR Construction
# ^^^^^^^^^^^^^^^^^^^^^^^
#
# We can also construct a sparse CSR MaskedTensor using sparse CSR tensors,
# and like the example above, this results in a similar treatment under the hood.
#

crow_indices = torch.tensor([0, 2, 4])
col_indices = torch.tensor([0, 1, 0, 1])
values = torch.tensor([1, 2, 3, 4])
mask_values = torch.tensor([True, False, False, True])

csr = torch.sparse_csr_tensor(crow_indices, col_indices, values, dtype=torch.double)
mask = torch.sparse_csr_tensor(crow_indices, col_indices, mask_values, dtype=torch.bool)
mt = masked_tensor(csr, mask)

print("mt:\n", mt)
print("mt data:\n", mt.get_data())
######################################################################
# Conclusion
# ----------
# In this tutorial, we have introduced how to use :class:`MaskedTensor` with sparse COO and CSR formats and
# discussed some of the subtleties under the hood in case users decide to access the underlying data structures
# directly. Sparse storage formats and masked semantics indeed have strong synergies, so much so that they are
# sometimes used as proxies for each other (as we will see in the next tutorial). In the future, we plan
# to continue investing and developing in this direction.
#
# Further Reading
# ---------------
#
# To continue learning more, you can find our
# `Efficiently writing "sparse" semantics for Adagrad with MaskedTensor tutorial <https://pytorch.org/tutorials/prototype/maskedtensor_adagrad.html>`__
# to see an example of how MaskedTensor can simplify existing workflows with native masking semantics.
#