<?xml version="1.0" encoding="UTF-8" ?>

<!-- Sage and Linear Algebra Worksheets -->
<!-- Robert A. Beezer -->
<!-- Copyright 2017-2019 License: CC BY-SA -->
<!-- See COPYING for more information -->

<pretext xmlns:xi="http://www.w3.org/2001/XInclude">

<xi:include href="../worksheetinfo.xml" />

<article xml:id="PDM">
<title>Sage and Linear Algebra Worksheet</title>
<subtitle>FCLA Section PDM</subtitle>

<!-- header inclusion needs -xinclude switch on xsltproc -->
<frontmatter>
<xi:include href="../header.xml" />
</frontmatter>

<section>
<title>LU Decomposition, Triangular Form</title>

<p>This is a topic not covered in our text. You <em>can</em> find a discussion in <pubtitle>A Second Course in Linear Algebra</pubtitle> at <url href="http://linear.ups.edu/scla/html/index.html" />.</p>

<p>Our goal is to row-reduce a matrix with elementary matrices, track the changes, and arrive at an expression for a square matrix <m>A</m> as a product of a lower-triangular matrix, <m>L</m>, and an upper-triangular matrix, <m>U</m>; that is, <me>A=LU</me>, the so-called <term>LU decomposition</term>. I sometimes prefer to call it <term>triangular form</term>.</p>

<p>There are no exercises in this worksheet; instead there is a careful and detailed exposition of using elementary matrices (row operations) to arrive at a <term>matrix decomposition</term>. There are many kinds of matrix decompositions, such as the <term>singular value decomposition</term> (SVD). Five or six such decompositions form a central part of the linear algebra canon. Again, see <pubtitle>A Second Course in Linear Algebra</pubtitle> for details on these.</p>

<p>We decompose a <m>5\times 5</m> matrix. It is most natural to describe an LU decomposition of a square matrix, but the decomposition can be generalized to rectangular matrices.</p>

<sage><input>
A = matrix(QQ, [[-6, -10,  0, 10, 14],
                [ 2,   3,  0, -4, -3],
                [ 0,  -2, -3,  1,  8],
                [ 5,   6, -3, -7, -3],
                [-1,   1,  6, -1, -8]])
A
</input></sage>

<p>Elementary matrices to <q>do</q> row operations in the first column.</p>

<sage><input>
actionA = (elementary_matrix(QQ, 5, row1=1, row2=0, scale=-2)
           * elementary_matrix(QQ, 5, row1=3, row2=0, scale=-5)
           * elementary_matrix(QQ, 5, row1=4, row2=0, scale=1)
           * elementary_matrix(QQ, 5, row1=0, scale=-1/6))
B = actionA*A
B
</input></sage>

<p>Now in second column, moving to <term>row-echelon form</term> (<ie /> <em>not</em> <term>reduced row-echelon form</term>).</p>

<sage><input>
actionB = (elementary_matrix(QQ, 5, row1=2, row2=1, scale=2)
           * elementary_matrix(QQ, 5, row1=3, row2=1, scale=7/3)
           * elementary_matrix(QQ, 5, row1=4, row2=1, scale=-8/3)
           * elementary_matrix(QQ, 5, row1=1, scale=-3))
C = actionB*B
C
</input></sage>

<p>The <q>bottom</q> of the third column.</p>

<sage><input>
actionC = (elementary_matrix(QQ, 5, row1=3, row2=2, scale=3)
           * elementary_matrix(QQ, 5, row1=4, row2=2, scale=-6)
           * elementary_matrix(QQ, 5, row1=2, scale=-1/3))
D = actionC*C
D
</input></sage>

<p>And now the penultimate column.</p>

<sage><input>
actionD = (elementary_matrix(QQ, 5, row1=4, row2=3, scale=-2)
           * elementary_matrix(QQ, 5, row1=3, scale=1))
E = actionD*D
E
</input></sage>

<p>And done.</p>

<sage><input>
actionE = elementary_matrix(QQ, 5, row1=4, scale=1)
F = actionE*E
F
</input></sage>

<p>Clearly, <c>F</c> has determinant 1, since it is an upper triangular matrix with diagonal entries equal to <m>1</m>. By tracking the effect of the above manipulations (tantamount to performing row operations) we expect that <me>\det(A) = \left(\frac{1}{-1/6}\right)\left(\frac{1}{-3}\right)\left(\frac{1}{-1/3}\right)\left(\frac{1}{1}\right)\left(\frac{1}{1}\right)\det(F) = -6.</me> Let's check.</p>
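
<p>As a cross-check (an aside, not part of the original exposition): each <c>action</c> matrix is a product of row-addition elementary matrices, each with determinant <m>1</m>, and at most one row-scaling, so its determinant is just that scale factor. The product of the five determinants should therefore be <m>\left(-\frac{1}{6}\right)(-3)\left(-\frac{1}{3}\right)=-\frac{1}{6}</m>, whose reciprocal is <m>\det(A)</m>.</p>

<sage><input>
d = prod([m.det() for m in [actionA, actionB, actionC, actionD, actionE]])
d, 1/d
</input></sage>
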

<sage><input>
A.determinant()
</input></sage>

<p>Yep. But it gets better. <c>F</c> is the product of the <q>action</q> matrices on the left of <c>A</c>.</p>

<sage><input>
total_action = prod([actionE, actionD, actionC, actionB, actionA])
total_action
</input></sage>

<p>Notice that the elementary matrices we used are all lower triangular (because we just formed zeros below the diagonal of the original matrix as we brought it to row-echelon form, and there were no row swaps). Hence their product is again lower triangular. Now check that we have the correct matrix.</p>
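
<p>We can verify this claim directly (a quick sanity check, not in the original text): every entry of <c>total_action</c> above the diagonal should be zero.</p>

<sage><input>
all(total_action[i, j] == 0 for i in range(5) for j in range(5) if j > i)
</input></sage>
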

<sage><input>
F == total_action * A
</input></sage>

<p>The <q>total action</q> matrix is a product of elementary matrices, which are individually nonsingular, so their product is nonsingular. Furthermore, the inverse is again lower triangular.</p>

<sage><input>
ta_inv = total_action.inverse()
ta_inv
</input></sage>

<p>We reach our goal by rearranging the equality above, writing <c>A</c> as a product of a lower-triangular matrix with an upper-triangular matrix.</p>

<sage><input>
A == ta_inv * F
</input></sage>

<p>Yes! So we have decomposed the original matrix (<c>A</c>) into the product of a lower triangular matrix (inverse of the total action matrix) and an upper triangular matrix with all ones on the diagonal (<c>F</c>, the original matrix in row-echelon form).</p>
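
<p>As an aside, Sage has a built-in <c>LU()</c> method for matrices. It uses partial pivoting, and it places the ones on the diagonal of the <em>lower</em> triangular factor, so its factors will generally differ from the ones computed here; still, it returns matrices <m>P</m>, <m>L</m>, <m>U</m> with <m>A=PLU</m>.</p>

<sage><input>
P, L, U = A.LU()
A == P*L*U
</input></sage>
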

<sage><input>
A, ta_inv, F
</input></sage>

<p>This decomposition (the <term>LU decomposition</term>) can be useful for solving systems quickly. You <term>forward solve</term> with <m>L</m>, then <term>back solve</term> with <m>U</m>.</p>
<p>More specifically, suppose you want to solve <m>A\mathbf{x}=\mathbf{b}</m> for <m>\mathbf{x}</m>, and you have a decomposition <m>A=LU</m>. First solve the intermediate system, <m>L\mathbf{y}=\mathbf{b}</m> for <m>\mathbf{y}</m>, which can be accomplished easily by determining the entries of <m>\mathbf{y}</m> in order, exploiting the lower triangular nature of <m>L</m>. This is what is meant by the term <term>forward solve</term>.</p>
<p>With a solution for <m>\mathbf{y}</m>, form the system <m>U\mathbf{x}=\mathbf{y}</m>. You can check that a solution, <m>\mathbf{x}</m>, to this system is also a solution to the original system <m>A\mathbf{x}=\mathbf{b}</m>. Further, this solution can be found easily by determining the entries of <m>\mathbf{x}</m> in reverse order, exploiting the upper triangular nature of <m>U</m>. This is what is meant by the term <term>back solve</term>.</p>
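
<p>We can illustrate the two-stage process with the factors computed above, using an arbitrary right-hand side chosen for the demonstration. (Sage's general-purpose <c>solve_right()</c> is used here for convenience; a dedicated implementation would exploit the triangular structure of each factor.)</p>

<sage><input>
b = vector(QQ, [1, 2, 3, 4, 5])
y = ta_inv.solve_right(b)   # forward solve L y = b
x = F.solve_right(y)        # back solve U x = y
A*x == b
</input></sage>
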
<p>We solve <em>two</em> simple systems, but perform only about half as many row operations as a full reduction to reduced row-echelon form. If you count the operations carefully, you will see that this is a big win, roughly halving the computation time for large systems.</p>
</section>

<xi:include href="../legal.xml" />

</article>
</pretext>