<?xml version="1.0" encoding="UTF-8"?>

<!DOCTYPE Book SYSTEM "gapdoc.dtd"
[ <!ENTITY see '<Alt Only="LaTeX">$\to$</Alt><Alt Not="LaTeX">--></Alt>'>
<!ENTITY C "<Package>C</Package>">
<!ENTITY Gauss "<Package>Gauss</Package>">
<!ENTITY GaussForHomalg "<Package>GaussForHomalg</Package>">
<!ENTITY homalg "<Package>homalg</Package>">
<!ENTITY RingsForHomalg "<Package>RingsForHomalg</Package>">
<!ENTITY SCO "<Package>SCO</Package>">
<!ENTITY GAPDoc "<Package>GAPDoc</Package>">
]>

<Book Name="Gauss">

<TitlePage>
<Title>The &Gauss; Package Manual</Title>
<Subtitle>Extended Gauss Functionality for &GAP;</Subtitle>
<Version>
Version <#Include SYSTEM "../VERSION">
</Version>
<Author>Simon Goertzen<Alt Only="LaTeX"><Br/></Alt>
<Email>[email protected]</Email>
<Homepage>http://wwwb.math.rwth-aachen.de/goertzen/</Homepage>
<Address>
Lehrstuhl B für Mathematik<Br/>
Templergraben 64<Br/>
52062 Aachen<Br/>
(Germany)
</Address>
</Author>
<Date>March 2013</Date>
<Abstract>This document explains the primary uses of the &Gauss; package.
Included is a documented list of the most important methods
and functions needed to work with sparse matrices and the
algorithms provided by the &Gauss; package.
</Abstract>
<Copyright>&copyright; 2007-2013 by Simon Goertzen<P/>
This package may be distributed under the terms and conditions of
the GNU General Public License, version 2.
</Copyright>
<Acknowledgements>The &Gauss; package would not have been possible without the helpful contributions by
<List>
<Item>Max Neunhöffer, University of St Andrews, and</Item>
<Item>Mohamed Barakat, Lehrstuhl B für Mathematik, RWTH Aachen.</Item>
</List>
Many thanks to these two and the Lehrstuhl B für Mathematik in general.
It should be noted that the &GAP; algorithms for
<C>SemiEchelonForm</C> and other methods formed an important and
informative basis for the development of the extended Gaussian
algorithms. This manual was created with the help of the &GAPDoc;
package by F.
Lübeck and M. Neunhöffer <Cite Key="GAPDoc"/>.
</Acknowledgements>
</TitlePage>

<TableOfContents/>

<Body>

<Chapter Label="chap:intro"><Heading>Introduction</Heading>

<Section Label="sec:overview">
<Heading>Overview of this manual</Heading>

Chapter <Ref Chap="chap:intro"/> is concerned with the technical details of
installing and running this package. Chapter <Ref Chap="chap:EGF"/>
explains why and how the &GAP; functionality concerning a
sparse matrix type and Gaussian algorithms was extended. The following
chapters are concerned with the workings of the sparse matrix type
(<Ref Chap="chap:SM"/>) and the sparse Gaussian algorithms (<Ref
Chap="chap:Gauss"/>). Included is a documented list of the most
important methods and functions needed to work with sparse matrices
and the algorithms provided by the &Gauss; package. Anyone interested
in the source code should check out the files in the
<F>gap/pkg/Gauss/gap/</F> folder (&see; Appendix <Ref Label="FileOverview"/>).

</Section>

<#Include SYSTEM "install.xml"/>

</Chapter>

<Chapter Label="chap:EGF"><Heading>Extending Gauss Functionality</Heading>

<Section Label="sec:need"><Heading>The need for extended functionality</Heading>

&GAP; has a lot of functionality for row echelon forms of
matrices, available via <C>SemiEchelonForm</C> and
similar commands. All of these work for the &GAP; matrix type over
fields. However, these algorithms are not capable of computing a
reduced row echelon form (RREF) of a matrix; there is no way to
"Gauss upwards". While this is not necessary for things like rank
or kernel computations, it was one of a number of missing features
important for the development of the &GAP; package &homalg; by
M.
Barakat <Cite Key="homalg-package"/>.<P/><P/>

Parallel to this development I worked on &SCO; <Cite Key="SCO"/>,
a package for creating simplicial sets and computing the
cohomology of orbifolds, based on the paper "Simplicial Cohomology
of Orbifolds" by I. Moerdijk and D. A. Pronk <Cite
Key="MP_SCO"/>. Very early on it became clear that the cohomology
matrices (with entries in &ZZ; or finite quotients of &ZZ;) would
grow exponentially in size with the cohomology degree. At one
point, for example, a 50651 x 1133693 matrix had to be
handled.<P/><P/>

It should be quite clear that there was a need for a sparse matrix
data type and corresponding Gaussian algorithms. After an
unfruitful search for a computer algebra system capable of this
task, the &Gauss; package was born - to provide not only the
missing RREF algorithms, but also to support a new data type,
enabling &GAP; to handle sparse matrices of almost arbitrary
size.<P/><P/>

I am proud to report that, thanks to optimizing the algorithms
for matrices over GF(2), it was possible to compute the GF(2) rank
of the matrix mentioned above in less than 20 minutes with a
memory usage of about 3 GB.

</Section>

<Section Label="sec:app"><Heading>The applications of the &Gauss; package algorithms</Heading>

Please refer to <Cite Key="homalg-project"/> to find out more about the
&homalg; project and its related packages. Most of the motivation
for the algorithms in the &Gauss; package can be found there. If
you are interested in this project, you might also want to check
out my &GaussForHomalg; <Cite Key="GaussForHomalg"/> package,
which, just as &RingsForHomalg; <Cite Key="RingsForHomalg"/> does
for external rings, serves as the connection between &homalg;
and &Gauss;.
By allowing &homalg; to delegate computational tasks
to &Gauss;, this small package extends &homalg;'s capabilities to
dense and sparse matrices over fields and rings of the form
<M>&ZZ; / \langle p^n \rangle</M>.<P/>

For those unfamiliar with the &homalg; project let me explain a
couple of points. As outlined in <Cite Key="BR"/> by D. Robertz
and M. Barakat, homological computations can be reduced to three
basic tasks:<P/>

<List>
<Item>Computing a row basis of a module (<C>BasisOfRowModule</C>).</Item>
<Item>Reducing a module with a basis (<C>DecideZeroRows</C>).</Item>
<Item>Computing the relations between module elements (<C>SyzygiesGeneratorsOfRows</C>).</Item>
</List>

In addition to these tasks only relatively simple tools for matrix
manipulation are needed, ranging from addition and multiplication
to finding the zero rows in a matrix. However, to reduce the need for
communication it might be helpful to supply &homalg; with some
more advanced procedures.<P/><P/>

While the above tasks can be quite difficult when, for example,
working in noncommutative polynomial rings, in the &Gauss; case
they can all be carried out as long as you can compute a reduced row
echelon form. This is clear for <C>BasisOfRowModule</C>, as the
rows of the RREF of the matrix already form a basis of the
module.
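As a concrete illustration of this point, the following sketch shows why the nonzero rows of an RREF form a row basis. This is plain Python over the rationals, not the &Gauss; package's GAP code, and the names <C>rref</C> and <C>basis_of_row_module</C> are invented for this example:

```python
from fractions import Fraction

def rref(mat):
    """Reduced row echelon form over the rationals (Gauss-Jordan,
    i.e. eliminating above the pivots as well as below)."""
    m = [[Fraction(x) for x in row] for row in mat]
    rows, cols = len(m), len(m[0])
    pivot_row = 0
    for j in range(cols):
        # find a row at or below pivot_row with a nonzero entry in column j
        pivots = [i for i in range(pivot_row, rows) if m[i][j] != 0]
        if not pivots:
            continue
        i = pivots[0]
        m[pivot_row], m[i] = m[i], m[pivot_row]
        inv = 1 / m[pivot_row][j]
        m[pivot_row] = [inv * x for x in m[pivot_row]]
        for k in range(rows):  # k runs over all rows: this "Gausses upwards" too
            if k != pivot_row and m[k][j] != 0:
                f = m[k][j]
                m[k] = [a - f * b for a, b in zip(m[k], m[pivot_row])]
        pivot_row += 1
        if pivot_row == rows:
            break
    return m

def basis_of_row_module(mat):
    """The nonzero rows of the RREF generate the same row space as the
    input and are linearly independent, hence a basis of the row module."""
    return [row for row in rref(mat) if any(x != 0 for x in row)]
```

The nonzero RREF rows are linearly independent because each starts with a pivot in a column where all other rows are zero, which is exactly what <C>BasisOfRowModule</C> needs.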
<Ref Meth="EchelonMat"/> is used to compute RREFs, based
on the &GAP; internal method <C>SemiEchelonMat</C> for Row Echelon
Forms.<P/><P/>

Let us look at the second point, the basic function
<C>DecideZeroRows</C>: when you face the task of reducing a module
<M>A</M> with a given basis <M>B</M>, you can compute the RREF of
the following block matrix:
<Table Align="|c|c|">
<HorLine/>
<Row>
<Item><Alt Not="LaTeX">Id</Alt>
<Alt Only="LaTeX"><![CDATA[
$\begin{array}{ccc}
1&\\
&\ddots&\\
&&1\\
\end{array}$
]]></Alt></Item>
<Item>A</Item>
</Row>
<HorLine/>
<Row>
<Item>0</Item>
<Item>B</Item>
</Row>
<HorLine/>
</Table>
By computing the RREF (notice how important "Gaussing upwards" is
here) <M>A</M> is reduced with <M>B</M>. However, the left side of
the matrix just serves the single purpose of tricking the Gaussian
algorithms into doing what we want. Therefore, it was a logical
step to implement <Ref Meth="ReduceMat"/>, which does the same
thing but without needing the unnecessary columns.<P/>

Note: when, much later, it became clear that it was important to compute
the transformation matrices of the reduction, <Ref
Meth="ReduceMatTransformation"/> was born, similar to <Ref
Meth="EchelonMatTransformation"/>. This corresponds to the
&homalg; procedure <C>DecideZeroRowsEffectively</C>.<P/><P/>

The third procedure, <C>SyzygiesGeneratorsOfRows</C>, is concerned with the
relations between the rows of a matrix, each row representing a module
element. Over a field these relations are exactly the kernel of
the matrix. One can easily see that they can be computed by taking
the matrix
<Table Align="|c|c|">
<HorLine/>
<Row>
<Item>A</Item>
<Item><Alt Not="LaTeX">Id</Alt>
<Alt Only="LaTeX"><![CDATA[
$\begin{array}{ccc}
1&\\
&\ddots&\\
&&1\\
\end{array}$
]]></Alt></Item>
</Row>
<HorLine/>
</Table>
and computing its Row Echelon Form.
Then the row relations are
generated by the rows to the right of the zero rows of the
REF. There are two problems with this approach: the computation
diagonalizes the kernel, which might not be wanted, and, much
worse, it does not work at all for rings with zero divisors. For
example, the <M>1 \times 1</M> matrix <M>[2 + 8&ZZ;]</M> has a row
relation <M>[4 + 8&ZZ;]</M> which would not have been found by
this method.<P/>

Approaching this problem led to the method <Ref
Meth="EchelonMatTransformation"/>, which additionally computes the
transformation matrix <M>T</M>, such that RREF <M>= T \cdot M</M>.
Similar to <C>SemiEchelonMatTransformation</C>, <M>T</M> is split
up into the rows needed to create the basis vectors of the RREF,
and the relations that led to zero rows. Focusing on the
computations over fields, it was an easy step to write <Ref
Meth="KernelMat"/>, which terminates after the REF and returns the
kernel generators.<P/>

The syzygy computation over <M>&ZZ; / \langle p^n \rangle</M> was solved by
carefully keeping track of basis vectors with a zero-divisor
head: if, for <M> v = (0,\ldots,0,h,*,\ldots,*), h \neq 0,</M>
there exists <M>g \neq 0</M> such that <M>g \cdot h = 0</M>, the
vector <M>g \cdot v</M> is regarded as an additional row vector
which has to be reduced and can be reduced with. After some more
work this allowed for the implementation of <Ref
Meth="KernelMat"/> for matrices over <M>&ZZ; / \langle p^n \rangle</M>.<P/>

This concludes the explanation of the so-called basic tasks
&Gauss; has to handle when called by &homalg; to do matrix
calculations.
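To make the kernel construction over a field concrete, here is a small illustrative Python sketch (not the package's GAP code; the function name is invented) of the augmented-matrix trick described above: row-reduce the block matrix <M>[ A | Id ]</M> and read off the Id-part of every row whose A-part became zero. As explained above, this only works over fields; over <M>&ZZ; / \langle 8 \rangle</M> it would miss the relation <M>[4]</M> of the matrix <M>[2]</M>.

```python
from fractions import Fraction

def kernel_via_augmentation(mat):
    """Row-reduce the block matrix [ A | Id ]; whenever the A-part of a
    row becomes zero, the Id-part holds a row relation, i.e. a kernel
    vector of the row module. Works over a field (here: the rationals)."""
    rows, cols = len(mat), len(mat[0])
    # augment each row of A with the corresponding row of the identity
    m = [[Fraction(x) for x in row] + [Fraction(int(i == k)) for k in range(rows)]
         for i, row in enumerate(mat)]
    pivot_row = 0
    for j in range(cols):               # eliminate only in the A-columns
        pivots = [i for i in range(pivot_row, rows) if m[i][j] != 0]
        if not pivots:
            continue
        i = pivots[0]
        m[pivot_row], m[i] = m[i], m[pivot_row]
        for k in range(pivot_row + 1, rows):
            if m[k][j] != 0:
                f = m[k][j] / m[pivot_row][j]
                m[k] = [a - f * b for a, b in zip(m[k], m[pivot_row])]
        pivot_row += 1
    # rows whose A-part is zero contribute their Id-part as kernel generators
    return [row[cols:] for row in m if all(x == 0 for x in row[:cols])]
```

For instance, for the three rows <M>(1,2), (2,4), (1,1)</M> the only relation (up to scaling) is <M>-2 \cdot r_1 + r_2 = 0</M>, which the sketch finds as the single generator <M>[-2, 1, 0]</M>.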
Here is a tabular overview of the current
capabilities of &Gauss; (<M>p</M> is a prime, <M>n \in &NN;</M>):<P/>

<Table Align="|c||c|c|c|c|c|">
<HorLine/>
<Row>
<Item>Matrix Type:</Item>
<Item>Dense</Item>
<Item>Dense</Item>
<Item>Sparse</Item>
<Item>Sparse</Item>
<Item>Sparse</Item>
</Row>
<HorLine/>
<Row>
<Item>Base Ring:</Item>
<Item>Field</Item>
<Item><M>&ZZ; / \langle p^n \rangle</M></Item>
<Item>Field</Item>
<Item>GF(2)</Item>
<Item><M>&ZZ; / \langle p^n \rangle</M></Item>
</Row>
<HorLine/>
<HorLine/>
<Row>
<Item>RankMat</Item>
<Item>&GAP;</Item>
<Item>n.a.</Item>
<Item>+</Item>
<Item>++</Item>
<Item>n.a.</Item>
</Row>
<HorLine/>
<Row>
<Item>EchelonMat</Item>
<Item>+</Item>
<Item>-</Item>
<Item>+</Item>
<Item>++</Item>
<Item>+</Item>
</Row>
<HorLine/>
<Row>
<Item>EchelonMatTransf.</Item>
<Item>+</Item>
<Item>-</Item>
<Item>+</Item>
<Item>++</Item>
<Item>+</Item>
</Row>
<HorLine/>
<Row>
<Item>ReduceMat</Item>
<Item>+</Item>
<Item>-</Item>
<Item>+</Item>
<Item>++</Item>
<Item>+</Item>
</Row>
<HorLine/>
<Row>
<Item>ReduceMatTransf.</Item>
<Item>+</Item>
<Item>-</Item>
<Item>+</Item>
<Item>++</Item>
<Item>+</Item>
</Row>
<HorLine/>
<Row>
<Item>KernelMat</Item>
<Item>+</Item>
<Item>-</Item>
<Item>+</Item>
<Item>++</Item>
<Item>+</Item>
</Row>
<HorLine/>
</Table>

As you can see, the development of Hermite algorithms was not
continued for dense matrices.
There are two reasons for that:
&GAP; already has very good algorithms for &ZZ;, and for small
matrices the disadvantage of computing over &ZZ;, potentially
leading to coefficient explosion, is marginal.

</Section>

</Chapter>

<Chapter Label="chap:SM"><Heading>The Sparse Matrix Data Type</Heading>

<Section Label="sec:workings"><Heading>The inner workings of &Gauss;
sparse matrices</Heading>

When doing any kind of computation there is a constant conflict
between memory load and speed. On the one hand, memory usage is
bounded by the total available memory; on the other hand,
computation time should also not exceed certain
proportions. Memory usage and CPU time are generally
inversely proportional, because the computer needs more time to
perform operations on a compactified data structure. The
idea of sparse matrices mirrors exactly the need for a smaller
memory load, so it is natural that sparse algorithms take more
time than dense ones. However, if the matrix is sufficiently large
and sparse at the same time, sparse algorithms can easily be
faster than dense ones while maintaining minimal memory load.<P/>

It should be noted that, although matrices that appear naturally
in homological algebra are almost always sparse, they do not have
to stay sparse under (R)REF algorithms, especially when the
computation is concerned with transformation matrices. Therefore,
in a perfect world there should be ways implemented not only to
decide which data structure to use, but also at what point to
convert from one to the other.
This was, however, not the aim of
the &Gauss; package and is just one of many points in which this
package could be optimized or extended.

Take a look at this matrix <M>M</M>:

<Table Align="|ccccc|">
<HorLine/>
<Row>
<Item>0</Item><Item>0</Item><Item>2</Item><Item>9</Item><Item>0</Item>
</Row>
<Row>
<Item>0</Item><Item>5</Item><Item>0</Item><Item>0</Item><Item>0</Item>
</Row>
<Row>
<Item>0</Item><Item>0</Item><Item>0</Item><Item>1</Item><Item>0</Item>
</Row>
<HorLine/>
</Table>

The matrix <M>M</M> carries the same information as the following table,
provided you know how many rows and columns the matrix
has. (There is also the matter of the base ring, but this is not
important for now.)

<Table Align="|cc|">
<HorLine/>
<Row><Item>(i,j)</Item><Item>Entry</Item></Row>
<HorLine/>
<Row><Item>(1,3)</Item><Item>2</Item></Row>
<Row><Item>(1,4)</Item><Item>9</Item></Row>
<Row><Item>(2,2)</Item><Item>5</Item></Row>
<Row><Item>(3,4)</Item><Item>1</Item></Row>
<HorLine/>
</Table>

This table relates each index tuple to its nonzero entry; all
other matrix entries are defined to be zero. This only works for
known dimensions of the matrix, as otherwise trailing zero rows and
columns could get lost (notice how the table gives no hint of
the existence of a 5th column). To convert the above table into a
sparse data structure, one could list the table entries in this
way:<P/>

<Table Align="c">
<Row><Item><M>[ [ 1, 3, 2 ], [ 1, 4, 9 ], [ 2, 2, 5 ], [ 3, 4, 1 ] ]</M></Item></Row>
</Table>

However, this data structure would not be very efficient. Whenever
you are interested in a row <M>i</M> of <M>M</M> (which happens all the time
when performing Gaussian elimination) the whole list would have
to be searched for 3-tuples of the form <M>[ i, *, * ]</M>.
This is why I tried to manage the row index by putting the
tuples into the corresponding list entry:<Br/>

<Table Align = "l">
<Row><Item><M>[ [ [ 3, 2 ], [ 4, 9 ] ],</M></Item></Row>
<Row><Item><M>[ [ 2, 5 ] ],</M></Item></Row>
<Row><Item><M>[ [ 4, 1 ] ] ]</M></Item></Row>
</Table>

As you can see, this looks fairly complicated. However, the same
information can be stored in the following form, which became the
final data structure for &Gauss; sparse matrices:

<Table Align = "clcl">
<Row><Item>indices :=</Item><Item>[ [ 3, 4 ],</Item><Item>entries :=</Item><Item>[ [ 2, 9 ],</Item></Row>
<Row><Item></Item><Item> [ 2 ],</Item><Item></Item><Item> [ 5 ],</Item></Row>
<Row><Item></Item><Item> [ 4 ] ]</Item><Item></Item><Item> [ 1 ] ]</Item></Row>
</Table>

Although the number of rows now equals the Length of both
`indices' and `entries', it is still stored in the sparse
matrix. Here is the full data structure (&see;
<Ref Func="SparseMatrix" Label="constructor using gap matrices"/>):

<Listing Type="from SparseMatrix.gi">
DeclareRepresentation( "IsSparseMatrixRep",
        IsSparseMatrix, [ "nrows", "ncols", "indices", "entries", "ring" ] );
</Listing>

As you can see, the matrix stores its ring to be on the safe side.
This is especially important for zero matrices, as there is no way
to determine the base ring from the sparse matrix structure alone. For
further information on sparse matrix construction and conversion,
refer to <Ref Func="SparseMatrix" Label="constructor using gap matrices"/>.

<Subsection Label="sub:gf2"><Heading>A special case: GF(2)</Heading>

<Listing Type="from SparseMatrix.gi">
DeclareRepresentation( "IsSparseMatrixGF2Rep",
        IsSparseMatrix, [ "nrows", "ncols", "indices", "ring" ] );
</Listing>

Because the nonzero entries of a matrix over GF(2) are all "1",
the entries of <M>M</M> are not stored at all.
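The indices/entries layout described above can be mimicked in a few lines of Python (an illustrative sketch, not the package's GAP code; <C>sparse_matrix</C> and <C>get_entry</C> are invented names). For the GF(2) representation one would simply drop the <C>entries</C> component:

```python
def sparse_matrix(nrows, ncols, dense):
    """Build the indices/entries representation sketched above:
    row i is stored as a list of column indices and a parallel
    list of the corresponding nonzero entries."""
    indices = []
    entries = []
    for row in dense:
        cols = [j + 1 for j, x in enumerate(row) if x != 0]  # 1-based, like GAP
        indices.append(cols)
        entries.append([row[j - 1] for j in cols])
    # the dimensions and (in the real package) the base ring are stored as well
    return {"nrows": nrows, "ncols": ncols,
            "indices": indices, "entries": entries}

def get_entry(sm, i, j):
    """Look up entry (i, j); positions absent from `indices' are zero."""
    row_cols = sm["indices"][i - 1]
    if j in row_cols:
        return sm["entries"][i - 1][row_cols.index(j)]
    return 0
```

Running this on the 3 x 5 matrix <M>M</M> from above reproduces exactly the `indices' and `entries' lists shown in the table, including the invisible 5th column that only survives because <C>ncols</C> is stored.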
It is of course crucial
that all operations and algorithms make absolutely sure that any
zero entries appearing during a computation are deleted from the
`indices' as well as the `entries' list as they arise.

</Subsection>

</Section>

<Section Label="sec:mfSM"><Heading>Methods and functions for sparse matrices</Heading>
<#Include Label="SparseMatrix">
<#Include Label="ConvertSparseMatrixToMatrix">
<#Include Label="CopyMat">
<#Include Label="GetEntry">
<#Include Label="SetEntry">
<#Include Label="AddToEntry">
<#Include Label="SparseZeroMatrix">
<#Include Label="SparseIdentityMatrix">
<#Include Label="TransposedSparseMat">
<#Include Label="CertainRows">
<#Include Label="CertainColumns">
<#Include Label="UnionOfRows">
<#Include Label="UnionOfColumns">
<#Include Label="SparseDiagMat">
<#Include Label="Nrows">
<#Include Label="Ncols">
<#Include Label="IndicesOfSparseMatrix">
<#Include Label="EntriesOfSparseMatrix">
<#Include Label="RingOfDefinition">
</Section>

</Chapter>

<Chapter Label="chap:Gauss"><Heading>Gaussian Algorithms</Heading>

<Section Label="sec:list"><Heading>A list of the available algorithms</Heading>

As described earlier, the main functions of &Gauss; are <Ref
Meth="EchelonMat"/> and <Ref Meth="EchelonMatTransformation"/>,
<Ref Meth="ReduceMat"/> and <Ref Meth="ReduceMatTransformation"/>,
<Ref Meth="KernelMat"/> and, additionally, <Ref Meth="Rank"/>.

These are all documented in the next section, but of course they rely on
specific algorithms depending on the base ring of the matrix.
These
are not fully documented, but it should be easy to see how
they work from the documentation of the main functions.

<Table Align="lll">
<Row><Item>EchelonMat</Item></Row>
<Row><Item></Item><Item>Field:</Item><Item><C>EchelonMatDestructive</C></Item></Row>
<Row><Item></Item><Item>Ring:</Item><Item><C>HermiteMatDestructive</C></Item></Row>
<Row><Item>EchelonMatTransformation</Item></Row>
<Row><Item></Item><Item>Field:</Item><Item><C>EchelonMatTransformationDestructive</C></Item></Row>
<Row><Item></Item><Item>Ring:</Item><Item><C>HermiteMatTransformationDestructive</C></Item></Row>
<Row><Item>ReduceMat</Item></Row>
<Row><Item></Item><Item>Field:</Item><Item><C>ReduceMatWithEchelonMat</C></Item></Row>
<Row><Item></Item><Item>Ring:</Item><Item><C>ReduceMatWithHermiteMat</C></Item></Row>
<Row><Item>ReduceMatTransformation</Item></Row>
<Row><Item></Item><Item>Field:</Item><Item><C>ReduceMatWithEchelonMatTransformation</C></Item></Row>
<Row><Item></Item><Item>Ring:</Item><Item><C>ReduceMatWithHermiteMatTransformation</C></Item></Row>
<Row><Item>KernelMat</Item></Row>
<Row><Item></Item><Item>Field:</Item><Item><C>KernelEchelonMatDestructive</C></Item></Row>
<Row><Item></Item><Item>Ring:</Item><Item><C>KernelHermiteMatDestructive</C></Item></Row>
<Row><Item>Rank</Item></Row>
<Row><Item></Item><Item>Field (dense):</Item><Item><C>Rank</C> (&GAP; method)</Item></Row>
<Row><Item></Item><Item>Field (sparse):</Item><Item><C>RankDestructive</C></Item></Row>
<Row><Item></Item><Item>GF(2) (sparse):</Item><Item><C>RankOfIndicesListList</C></Item></Row>
<Row><Item></Item><Item>Ring:</Item><Item>n.a.</Item></Row>
</Table>

</Section>

<Section Label="sec:mfGauss"><Heading>Methods and Functions for &Gauss;ian algorithms</Heading>
<#Include Label="EchelonMat">
<#Include Label="EchelonMatTransformation">
<#Include Label="ReduceMat">
<#Include Label="ReduceMatTransformation">
<#Include Label="KernelMat">
<#Include Label="Rank">
</Section>

</Chapter>

</Body>

<Appendix Label="FileOverview">
<Heading>An Overview of the &Gauss; package source code</Heading>
<Table Align="l|l">
<Caption><E>The &Gauss; package files.</E></Caption>
<Row><Item>Filename</Item><Item>Content</Item></Row>
<HorLine/>
<Row><Item>SparseMatrix.gi</Item><Item>Definitions and methods for
the sparse matrix type</Item></Row>
<Row><Item>SparseMatrixGF2.gi</Item><Item>Special case GF(2): no
matrix entries needed</Item></Row>
<Row><Item>GaussDense.gi</Item><Item>Gaussian elimination for &GAP;
matrices over fields</Item></Row>
<Row><Item>Sparse.gi</Item><Item>Documentation and forking depending
on the base ring</Item></Row>
<Row><Item>GaussSparse.gi</Item><Item>Gaussian elimination for sparse
matrices over fields</Item></Row>
<Row><Item>HermiteSparse.gi</Item><Item>Hermite elimination for sparse
matrices over <M>&ZZ; / \langle p^n \rangle</M></Item></Row>
</Table>
</Appendix>

<Bibliography Databases="GaussBib.xml"/>

<TheIndex/>

</Book>