📚 The CoCalc Library - books, templates and other resources
License: OTHER
<?xml version="1.0" encoding="UTF-8" ?>12<!-- Sage and Linear Algebra Worksheets -->3<!-- Robert A. Beezer -->4<!-- Copyright 2017-2019 License: CC BY-SA -->5<!-- See COPYING for more information -->67<pretext xmlns:xi="http://www.w3.org/2001/XInclude">89<xi:include href="../worksheetinfo.xml" />1011<article xml:id="PDM">12<title>Sage and Linear Algebra Worksheet</title>13<subtitle>FCLA Section PDM</subtitle>1415<!-- header inclusion needs -xinclude switch on xsltproc -->16<frontmatter>17<xi:include href="../header.xml" />18</frontmatter>1920<section>21<title>LU Decomposition, Triangular Form</title>2223<p>This is a topic not covered in our text. You <em>can</em> find a discussion in <pubtitle>A Second Course in Linear Algebra</pubtitle> at <url href="http://linear.ups.edu/scla/html/index.html" />.</p>2425<p>Our goal is to row-reduce a matrix with elementary matrices, track the changes, and arrive at an expression for a square matrix <m>A</m> as a product of a lower-triangular matrix, <m>L</m>, and an upper-triangular matrix, <m>U</m>, that is <me>A=LU</me> the so-called <term>LU decomposition</term>. I sometimes prefer to call it <term>triangular form</term>.</p>2627<p>There are no exercises in this worksheet, but instead there is a careful and detailed exposition of using elementary matrices (row operations) to arrive at a <term>matrix decomposition</term>. There are many kinds of matrix decompositions, such as the <term>singular value decomposition</term> (SVD). Five or six such decompositions form a central part of the linear algebra canon. Again, see <pubtitle>A Second Course in Linear Algebra</pubtitle> for details on these.</p>2829<p>We decompose a <m>5\times 5</m> matrix. It is most natural to describe an LU decomposition of a square matrix, but the decomposition can be generalized to rectangular matrices.</p>3031<sage><input>32A = matrix(QQ, [[-6, -10, 0, 10, 14],33[ 2, 3, 0, -4, -3],34[ 0, -2, -3, 1, 8],35[ 5, 6, -3, -7, -3],36[-1, 1, 6, -1, -8]])37A38</input></sage>3940<p>Elementary matrices to <q>do</q> row operations in the first column.</p>4142<sage><input>43actionA = elementary_matrix(QQ, 5, row1=1, row2=0, scale=-2)*elementary_matrix(QQ, 5, row1=3, row2=0, scale=-5)*elementary_matrix(QQ, 5, row1=4, row2=0, scale=1)*elementary_matrix(QQ, 5, row1=0, scale=-1/6)44B = actionA*A45B46</input></sage>4748<p>Now in second column, moving to <term>row-echelon form</term> (<ie /> <em>not</em> <term>reduced row-echelon form</term>).</p>4950<sage><input>51actionB = elementary_matrix(QQ, 5, row1=2, row2=1, scale=2)*elementary_matrix(QQ, 5, row1=3, row2=1, scale=7/3)*elementary_matrix(QQ, 5, row1=4, row2=1, scale=-8/3)*elementary_matrix(QQ, 5, row1=1, scale=-3)52C = actionB*B53C54</input></sage>5556<p>The <q>bottom</q> of the third column.</p>5758<sage><input>59actionC = elementary_matrix(QQ, 5, row1=3, row2=2, scale=3)*elementary_matrix(QQ, 5, row1=4, row2=2, scale=-6)*elementary_matrix(QQ, 5, row1=2, scale=-1/3)60D = actionC*C61D62</input></sage>6364<p>And now the penultimate column.</p>6566<sage><input>67actionD = elementary_matrix(QQ, 5, row1=4, row2=3, scale=-2)*elementary_matrix(QQ, 5, row1=3, scale=1)68E = actionD*D69E70</input></sage>7172<p>And done.</p>7374<sage><input>75actionE = elementary_matrix(QQ, 5, row1=4, scale=1)76F = actionE*E77F78</input></sage>798081<p>Clearly, <c>F</c> has determinant 1, since it is an upper triangular matrix with diagonal entries equal to <m>1</m>. By tracking the effect of the above manipulations (tantamount to performing row operations) we expect that <me>\det(A) = \left(\frac{1}{-1/6}\right)\left(\frac{1}{-3}\right)\left(\frac{1}{-1/3}\right)\left(\frac{1}{1}\right)\left(\frac{1}{1}\right)\det(F) = -6.</me> Let's check.</p>8283<sage><input>84A.determinant()85</input></sage>8687<p>Yep. But it gets better. <c>F</c> is the product of the <q>action</q> matrices on the left of <c>A</c>.</p>8889<sage><input>90total_action = prod([actionE, actionD, actionC, actionB, actionA])91total_action92</input></sage>9394<p>Notice that the elementary matrices we used are all lower triangular (because we just formed zeros below the diagonal of the original matrix as we brought it to row-echelon form, and there were no row swaps). Hence their product is again lower triangular. Now check that we have the correct matrix.</p>9596<sage><input>97F == total_action * A98</input></sage>99100<p>The <q>total action</q> matrix is a product of elementary matrices, which are individually nonsingular. So their product is nonsingular. Futhermore, the inverse is again lower triangular.</p>101102<sage><input>103ta_inv = total_action.inverse()104ta_inv105</input></sage>106107<p>We reach our goal by rearranging the equality above, writing <c>A</c> as a product of a lower-triangular matrix with an upper-triangular matrix.</p>108109<sage><input>110A == ta_inv * F111</input></sage>112113<p>Yes! So we have decomposed the original matrix (<c>A</c>) into the product of a lower triangular matrix (inverse of the total action matrix) and an upper triangular matrix with all ones on the diagonal (<c>F</c>, the original matrix in row-echelon form).</p>114115<sage><input>116A, ta_inv, F117</input></sage>118119<p>This decomposition (the <term>LU decomposition</term>) can be useful for solving systems quickly. You <term>forward solve</term> with <m>L</m>, then <term>back solve</term> with <m>U</m>.</p>120121<p>More specifically, suppose you want to solve <m>A\mathbf{x}=\mathbf{b}</m> for <m>\mathbf{x}</m>, and you have a decomposition <m>A=LU</m>. First solve the intermediate system, <m>L\mathbf{y}=\mathbf{b}</m> for <m>\mathbf{y}</m>, which can be accomplished easily by determining the entries of <m>\mathbf{y}</m> in order, exploiting the lower triangular nature of <m>L</m>. This is what is meant by the term <term>forward solve</term>.</p>122123<p>With a solution for <m>\mathbf{y}</m>, form the system <m>U\mathbf{x}=\mathbf{y}</m>. You can check that a solution, <m>\mathbf{x}</m>, to this system is also a solution to the original system <m>A\mathbf{x}=\mathbf{b}</m>. Further, this solution can be found easily by determining the entries of <m>\mathbf{x}</m> in reverse order, exploiting the upper triangular nature of <m>U</m>. This is what is meant by the term <term>back solve</term>.</p>124125<p>We solve <em>two</em> simple systems, but only do half as many row-operations as if we went fully to reduced row-echelon form. If you count the opertions carefully, you will see that this is a big win, roughly reducing computation time by a factor of half for large systems.</p>126</section>127128<xi:include href="../legal.xml" />129130</article>131</pretext>132133134