📚 The CoCalc Library - books, templates and other resources
License: OTHER
Abstract Algebra: An Interactive Approach, 2e
©2015 This notebook is provided with the textbook, "Abstract Algebra: An Interactive Approach, 2nd Ed." by William Paulsen. Users of this notebook are encouraged to buy the textbook.
Chapter 11
Integral Domains and Fields
Initialization: This cell MUST be evaluated first:
Polynomial Rings
In this chapter we will consider some useful ways of forming integral domains and fields from smaller rings. These provide us with useful examples for experimentation in hopes of finding properties of general integral domains and fields.
One major source of integral domains are the polynomial rings. We can construct a polynomial ring from any ring, but the polynomial rings with the familiar properties are formed either from fields or integral domains.
DEFINITION 11.1
Let K be a commutative ring. We define the set of polynomials in x over K, denoted K[x], to be the set of all expressions of the form
k0 + k1 x + k2 x2 + k3 x3 + ⋯
where the coefficients kn are elements of K, and only a finite number of the coefficients are nonzero. If kd is the last nonzero coefficient, then d is called the degree of the polynomial. Notice that if d = 0, we essentially obtain elements of K. Thus, the nonzero elements of K are the polynomials of degree 0, which are referred to as constant polynomials. The degree of the zero polynomial
0 + 0 x + 0 x2 + 0 x3 + ⋯
is not defined. By convention, the terms with zero coefficients are omitted when writing polynomials. Thus, the second degree polynomial in ℤ[x]
1 + 0 x + 3 x2 + 0 x3 + ⋯
would be written 1 + 3 x2. The one exception to this convention is the zero polynomial, which is simply written as 0. We can define the sum and product of two polynomials in the familiar way. If
A = a0 + a1 x + a2 x2 + a3 x3 + ⋯, and
B = b0 + b1 x + b2 x2 + b3 x3 + ⋯, then
A + B = (a0 + b0) + (a1 + b1) x + (a2 + b2) x2 + (a3 + b3) x3 + ⋯, and
A·B = ∑i=0…∞ ∑j=0…∞ (ai·bj) xi+j.
Although this looks like a double infinite sum, bear in mind that only a finite number of the terms will be nonzero. Thus, we do not need to be concerned about such issues as "convergence." In fact, this product could be written as
A·B = a0·b0 + (a0·b1 + a1·b0) x+ (a0·b2 + a1·b1 + a2·b0 ) x2 + (a0·b3 + a1·b2 + a2·b1 + a3·b0) x3 + ⋯
so each coefficient is determined by a finite sum. Before we prove that the set of polynomials forms a ring, we need the following lemma:
Let A and B be two nonzero polynomials in x over K of degree m and n respectively, where K has no zero divisors. Then A·B is a polynomial of degree m + n, and A + B is a polynomial of degree no greater than the larger of m or n.
This lemma verifies that both the sum and product of two polynomials are again polynomials. The remaining ring properties are not difficult to verify.
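The convolution formula above is easy to sketch in plain Python (the book's own examples use Sage; this standalone snippet simply represents a polynomial by its list of coefficients, with names of my own choosing):

```python
def poly_add(a, b):
    # a[i], b[i] are the coefficients of x^i; pad the shorter list with zeros
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    return [x + y for x, y in zip(a, b)]

def poly_mul(a, b):
    # the coefficient of x^k in A*B is the finite sum of a[i]*b[j] over i + j = k
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] += ai * bj
    return prod

# (1 + 3x^2)(2 + x) = 2 + x + 6x^2 + 3x^3: the degrees add, as the lemma asserts
print(poly_mul([1, 0, 3], [2, 1]))   # [2, 1, 6, 3]
```

Over the integers (no zero divisors) the last coefficient of the product is never zero, illustrating the degree claim of the lemma.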
Although this proposition holds for polynomials defined over an integral domain, there is no reason why we cannot have Sage work with polynomials defined over any commutative ring. However, we will discover that the familiar properties of polynomials change radically!
Let us consider the commutative ring of order 8 that we worked with in Chapter 9.
We form a polynomial ring over R by defining a new symbol x, which is an indeterminate over the ring R.
We can now enter a typical polynomial.
We can experiment with powers of this first degree polynomial.
It seems as though (a·x + b) satisfies the simplistic property:
(u + v)n = un + vn.
Does this hold for all polynomials in this ring?

EXPERIMENT:
Try to find a first degree polynomial (u·x + v), where u and v are in R, which does not satisfy the above property for some value of n.
In the process of doing this experiment, you may have noticed a property that is even more bizarre: Sometimes the square of a first degree polynomial is not a second degree polynomial! In fact, consider
which yields the identity element in R. Thus, (2 a·x + a + b) is its own multiplicative inverse! To further complicate matters, notice that polynomials may be "factored" in more than one way. Note that
yield the same quadratic polynomial.
EXPERIMENT:
See if you can find a third polynomial whose product with (b + 2 a·x) is the same as the two above.
Such properties of polynomials are enough to drive any beginning algebra student crazy. Because of the bizarre properties of polynomials over general rings, we mainly focus our attention on the polynomial rings K[x], where K is an integral domain or field. We will discover in Chapter 12 the exact conditions for which every polynomial in K[x] will have a unique factorization.
In order to find new integral domains and fields, we will use a simple property that will classify all rings.
DEFINITION 11.2
Let R be a ring. We define the characteristic of R to be the smallest positive number n such that n x = 0 for all elements x of R. If no such positive number exists, we say the ring has characteristic zero.
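For a finite ring given by its addition operation, the characteristic can be found by brute force directly from this definition. The helper names below (`characteristic`, `n_fold_sum`) are my own, not commands from the book; the example checks the ring Zm, whose characteristic is m:

```python
def n_fold_sum(x, n, add, zero):
    # compute x + x + ... + x (n times)
    total = zero
    for _ in range(n):
        total = add(total, x)
    return total

def characteristic(elements, add, zero, limit=1000):
    # smallest n > 0 with n*x == zero for every x; report 0 if none found
    elements = list(elements)
    for n in range(1, limit + 1):
        if all(n_fold_sum(x, n, add, zero) == zero for x in elements):
            return n
    return 0

mod12 = lambda a, b: (a + b) % 12
print(characteristic(range(12), mod12, 0))   # 12
```

For Z12 every element x satisfies 12·x ≡ 0 (mod 12), and no smaller n works for x = 1, so the search returns 12.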
For integral domains or fields, the characteristic plays an extremely important role, as the next proposition illustrates.
Let R be a nontrivial ring with no zero-divisors. If the characteristic is 0, then for n an integer and x a nonzero element of R, n x = 0 only if n = 0. If the characteristic is positive then it is a prime number p, and for nonzero x, n x = 0 if, and only if, n is a multiple of p.
Characteristics are important because they give us a way of finding domains and fields in Sage. We begin by telling Sage the characteristic p
of the ring we want to define with the command InitDomain(p). Because of Proposition 11.2, we see that p must either be 0 or a
prime number. This command does three things. First, it tells Sage that the ring to be defined is commutative. Sage also defines the identity element
to be 1. Finally, Sage assumes that the ring to be defined has no zero-divisors, and may later report an error if zero divisors are found.
For example, let us find a new ring with characteristic 3. We begin with the command
This actually defines the field Z3, as we can see with the command
We can create polynomials over this new domain by the AddFieldVar command.
Now we can do computations in the polynomial ring Z3[i]:
Let us try imitating the complex numbers, and tell Sage that i2 = −1.
This now defines a new ring! We will use the command ListField to see the new ring.
This gives us a ring with nine elements. Let us look at the addition and multiplication tables.
The multiplication table shows that there are no zero divisors. By Corollary 9.1, K is a field. (You might have guessed this, since we used the ListField command.) We could call K the field of "complex numbers modulo 3".
What if we tried to define complex numbers modulo 5?
Sage complains about a modulus that is not irreducible. The problem is that the polynomial x2 + 1 is not irreducible in Z5[x]. If we tried to define this as a ring,
we find that there are zero divisors.
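Modeling "complex numbers modulo 5" as pairs (a, b) with i2 = −1, a short exhaustive search in plain Python (independent of the book's Sage commands) confirms that zero divisors exist:

```python
p = 5
elements = [(a, b) for a in range(p) for b in range(p)]   # a + b*i mod 5

def mul(x, y):
    a, b = x
    c, d = y
    # (a + b i)(c + d i) = (ac - bd) + (ad + bc) i, reduced mod 5
    return ((a * c - b * d) % p, (a * d + b * c) % p)

zero_divisors = sorted({x for x in elements if x != (0, 0)
                          for y in elements if y != (0, 0)
                          if mul(x, y) == (0, 0)})
print((2, 1) in zero_divisors)   # True: (2 + i)(2 - i) = 4 + 1 = 5 = 0 mod 5
```

The culprit is exactly the factorization x2 + 1 = (x + 2)(x + 3) in Z5[x]: substituting i for x turns each factor into a zero divisor.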
EXPERIMENT:
Try defining a ring using InitDomain(5), but have i2 be some integer other than −1. Does this new ring have zero divisors? Is this a field?
Sage can now work with polynomials over an integral domain or field. We can add an additional parameter for the InitDomain command that will tell Sage the name of the polynomial variable, usually "x". For example, with the command
we can now form polynomials in Z3[x]:
If we continue to expand the field to "complex numbers modulo 3",
the variable x automatically promotes to a variable of the larger field. Thus, we can form polynomials like
We can form products as we would with standard polynomials.
Sage can factor polynomials defined over any finite field. In the next chapter we will prove that such factorizations are unique. (We will also see an example of an integral domain, not a field, for which factorizations are not unique.) If Sage tries to factor x2 + 1 in the standard way (using the ring ℤ),
it finds the polynomial is irreducible. But if we factor the polynomial over the field of "complex numbers modulo 3",
we find that it does factor.
EXPERIMENT:
Try looking at other quadratic polynomials which do not factor over the integers. Do they factor using the "complex numbers modulo 3"? What about cubic polynomials?
The polynomial rings defined over integral domains give us some good examples of integral domains. In the next chapter we will find other ways of forming integral domains, some of which have some unusual properties. But even these are based on polynomial rings. So polynomials are the basic building blocks that are used for forming new integral domains and fields.
The Field of Quotients
In the last section, we found a way to form integral domains by imitating the familiar polynomials from high school algebra. In this section we will show how we can form a field from an integral domain. In fact we will imitate mathematics from grade school—the fractions.
We can view a fraction as one integer divided by another. Thus, the properties of the rational numbers stem entirely from the properties of the integers, ℤ.
Although the integers have some special properties, some of which we will discover later in this chapter, perhaps the most important property of the integers is that they form a simple example of an integral domain. This raises the question: Can we form fractions out of any integral domain? The answer is "yes", but before we can proceed, let us review the basic definitions of addition, multiplication, and equality of the rational numbers ℚ.
First of all, there is a restriction that we put on the rational number p⁄q: that q cannot be 0. It is reasonable to expect that we will have to make a similar restriction for general integral domains.
We can have Sage show us how two rational numbers are added:
Since neither y nor v is 0, the denominator of the sum is not zero, so this is a rational number. Similarly, the product
is a rational number. However, there is a twist in how we view two rational numbers as being equal. Since fourth grade, we view
2⁄4 = 3⁄6,
even though these are two different fractions: both the numerators and the denominators are different. What we really mean is that these two fractions are equivalent, and in fact we define

x⁄y ≡ u⁄v if, and only if, x·v = u·y.
This forms an equivalence relation on the set of fractions x⁄y. We have already seen equivalence relations while working with cosets of a group. What we call a rational number is really a set of fractions of the form x⁄y which are all equivalent. In other words, 1⁄2 really represents a set of fractions
{1⁄2, -1⁄-2, 2⁄4, -2⁄-4, 3⁄6, -3⁄-6, …}
with each element of this set equivalent to any other. Since 2⁄4 and 3⁄6 are both in this set, we consider them to be equal to 1⁄2. We can define a similar equivalence relation for any integral domain K.
DEFINITION 11.3
Let K be an integral domain, and let P denote the set of all ordered pairs (x, y) of elements of K, with y nonzero:
P = { (x, y) | x, y ∈ K and y ≠ 0 }.
We define a relation on P by

(x, y) ≡ (u, v) if x·v = y·u.
The first step is to show that this relation is an equivalence relation.
This equivalence relation becomes the key for defining our field of quotients, as the next definition shows.
DEFINITION 11.4
Let K be an integral domain, let P denote the set
P = { (x, y) | x, y ∈ K and y ≠ 0 },
and let the equivalence relation on P be

(x, y) ≡ (u, v) if x·v = y·u.
For each (x, y) in P, let (x⁄y) denote the equivalence class of P that contains (x, y). Let Q denote the set of all equivalence classes (a⁄b). The set Q is called the set of quotients for K. This definition allows us to replace an equivalence of two expressions with an equality. We now have that
(x⁄y) = (u⁄v) if, and only if, x·v = u·y.
For example, (x·z⁄y·z) = (x⁄y) for any nonzero z, since x·z·y = y·z·x. The next step is to define addition and multiplication on our set of quotients Q. Once again, we will use the rational numbers to guide us in the definition.
LEMMA 11.3
Let K be an integral domain, and let Q be the set of quotients for K. The addition and multiplication of two equivalence classes in Q, defined by
(x⁄y) + (u⁄v) = (x·v + u·y⁄y·v)
and

(x⁄y)·(u⁄v) = (x·u⁄y·v)
are both well defined operations on Q. That is, the sum and product do not depend on the choice of the representative elements (x, y) and (u, v) of the equivalence classes. Now that we have defined addition and multiplication on the set of equivalence classes Q, we are ready to show that Q is in fact a field.
Let K be an integral domain, and let Q be the set of quotients for K. Then Q forms a field using the above definitions of addition and multiplication. The field Q is called the field of quotients for K.
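A minimal sketch of this construction in plain Python, taking K = ℤ (any integral domain whose elements support +, · and == would do; the class name `Quot` is my own, not the book's):

```python
class Quot:
    """An element (x/y) of the field of quotients, sketched for K = Z."""

    def __init__(self, x, y):
        if y == 0:
            raise ValueError("denominator must be nonzero")
        self.x, self.y = x, y

    def __eq__(self, other):
        # (x/y) = (u/v) if, and only if, x*v == y*u
        return self.x * other.y == self.y * other.x

    def __add__(self, other):
        # (x/y) + (u/v) = (x*v + u*y) / (y*v)
        return Quot(self.x * other.y + other.x * self.y, self.y * other.y)

    def __mul__(self, other):
        # (x/y) * (u/v) = (x*u) / (y*v)
        return Quot(self.x * other.x, self.y * other.y)

assert Quot(2, 4) == Quot(3, 6)              # equivalent fractions are equal
assert Quot(1, 2) + Quot(1, 3) == Quot(5, 6)
assert Quot(1, 2) * Quot(2, 3) == Quot(1, 3)
print("quotient-field operations spot-checked")
```

Notice that equality is tested by cross-multiplication alone; no reduction to lowest terms is ever needed, exactly as in Definition 11.4.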
Notice that in the construction of the field Q, we never used the identity element of K. Hence, if we started with a commutative ring without zero divisors instead of an integral domain, the construction would still produce a field. We can mention this as a corollary.
COROLLARY 11.1
Let K be any commutative ring with no zero divisors. Then the set of quotients Q defined above forms a field.
Although the field of quotients was designed from the way we formed rational numbers from the set of integers, we can apply the field of quotients to any other
integral domain. The only other integral domains that we have worked with so far are the polynomial rings from the last section. What happens if we form a
field of quotients for the polynomial ring K[x]?
Let us first consider the most familiar polynomial ring ℤ[x]—the polynomials with integer coefficients. An element in the field of quotients would be of the form p(x)⁄q(x), where p(x) and q(x) are polynomials with integer coefficients. But we consider two such fractions
p(x)⁄q(x) and r(x)⁄s(x)
to be equivalent if p(x)·s(x) = r(x)·q(x). Thus, if there are any common factors between p(x) and q(x), we can cancel them out to produce a simpler, equivalent fraction. For example, the fractions
and
can be seen to be equivalent, since
and
are the same.
EXPERIMENT:
Can you find a simpler polynomial that is equivalent to A and B?
We call the field of quotients for the polynomials ℤ[x] the field of rational functions in x, denoted ℤ(x).
Here are two other ways to verify that the two rational functions are equivalent:
The first command obviously shows that A and B are equal. The second command shows that the quotient of the two polynomials is the identity element, hence A and B must represent the same element in the field.
It should be mentioned that a rational function, in this context, is not a function! The rational functions A and B are merely elements of ℤ(x), which may in turn be arguments for some homomorphism. To say that "A is undefined when x = −2" or "B is undefined at x = 1" is meaningless, since x is not a variable for which numbers can be plugged in. Rather, x is merely a symbol that is used as a place holder. This is why we can say that A and B are truly equal, even though the "graphs" would disagree at two points.
We can form rational functions from any integral domain K. This produces the field K(x), the rational functions in x over K.
EXAMPLE:
Using the field of order 9 that was discovered in the last section,
simplify the following rational function.
Can Sage factor this over a finite field K? Let us see.
We see that Sage automatically simplifies the rational function for us. However, if we consider the simpler looking rational function
we find that they are equal
Sage shows us that these two expressions are the same rational function in K(x)!
As you can see from this experiment, the definition of the quotient field does not depend on whether elements in the integral domain can be factored uniquely.
However, unique factorization is an important property that we will study in depth in Chapter 12. We will learn that the polynomial ring K[x] used in the above example really does have a type of unique factorization, after we have studied the true definition of what a unique factorization is. But before we go into this, let us look closely at some of the more familiar fields: the rational numbers, the real numbers, and the complex numbers. These fields will be the basis for defining many other fields, so it is natural to learn their properties before going on to study more complicated fields.
Complex Numbers
In this section we will explore the field of complex numbers. Although we will develop this field algebraically, the results of this section have applications in many different fields, including differential equations.
We are somewhat familiar with complex numbers from algebra. These are numbers of the form a + b i, where i represents the "square root of negative one." Sage can perform all algebraic operations on complex numbers as easily as it can with real numbers.
However, in this definition it is not at all clear where the "i " came from. This gives the complex numbers a rather mysterious quality that is compounded by their common misnomer, "imaginary numbers."
We would like to define the complex numbers in a way that is more realistic. Instead of considering quantities of the form (a + b i), we will consider ordered pairs (a, b). This identifies the complex numbers with points in the plane, which makes them easier to visualize.
Let us declare the following properties for ordered pairs of real numbers:
1) (a, b) = (c, d) if, and only if, a = c and b = d.
2) (a, b) + (c, d) = (a + c, b + d).
3) (a, b)·(c, d) = (a·c − b·d, a·d + b·c).
We define ℂ to be the set of all ordered pairs of real numbers. Using these three properties, it is not hard to prove the following:
The set ℂ forms a field, called the field of complex numbers. This field contains a subfield isomorphic to the real numbers.
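The three declared properties translate directly into code. A quick check with plain Python tuples (no Sage needed; the function names are mine) shows that the pair (0, 1) squares to (−1, 0) purely as a consequence of the product rule:

```python
def c_add(p, q):
    # (a, b) + (c, d) = (a + c, b + d)
    return (p[0] + q[0], p[1] + q[1])

def c_mul(p, q):
    # (a, b)·(c, d) = (a·c − b·d, a·d + b·c)
    a, b = p
    c, d = q
    return (a * c - b * d, a * d + b * c)

i = (0, 1)
print(c_mul(i, i))               # (-1, 0): "i squared is -1"
print(c_mul((0, -1), (0, -1)))   # (-1, 0) as well: the other square root
```

No mysterious symbol is introduced anywhere; the square roots of (−1, 0) simply fall out of the multiplication rule, as Lemma 11.4 asserts.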
The purpose of constructing the complex numbers was to produce a field for which we can take the square root of negative one. We can now show that we have succeeded in doing this.
There are exactly two solutions to the equation x2 = (−1, 0) in the field ℂ, given by (0, ±1).
By defining the complex numbers as ordered pairs, we have taken some of the mystery out of the complex numbers. Lemma 11.4 shows that the square root of negative one
comes as a natural consequence of the way we defined the product.
We can now convert ordered pairs to the customary notation by defining i = (0,1), and identifying the identity element (1,0) with 1. Then any complex number (a, b) can be written
(a, b) = (a, 0) + (0, b) = a·(1, 0) + b· (0, 1) = a + b i.
We can rewrite the rules for addition and multiplication in ℂ as follows:

(a + b i) + (c + d i) = (a + c) + (b + d) i.
(a + b i)·(c + d i) = (a·c − b·d) + (b·c + a·d) i.
DEFINITION 11.5
A ring automorphism is a one-to-one and onto ring homomorphism that maps a ring to itself.
The natural question that arises is to determine the group of ring automorphisms of ℂ.
This is in fact a difficult question to answer in general, but if we only consider the automorphisms that
send each real number to itself, the question becomes easy to answer.
PROPOSITION 11.4
Besides the identity automorphism, there is another automorphism on ℂ, given by
ϕ(a + b i) = a − b i.
In fact, these are the only automorphisms for which ϕ(x) = x for all real numbers x. The automorphism found in Proposition 11.4 is called the conjugate. The conjugate of z is generally denoted by z̄. That is, if z = a + b i, then

z̄ = ϕ(z) = a − b i.
We say an element z of ℂ is real if z = z̄. Clearly the real elements of ℂ are

{ x + 0 i | x ∈ ℝ }.
These elements are, of course, the image of the mapping found in Proposition 11.3 of the reals into ℂ. The conjugate automorphism is defined in Sage as
so the conjugate of 3 + 4i is given by:
EXPERIMENT:
Try multiplying an element by its conjugate.
Try this with several different elements of ℂ. What do you observe?
It is an easy computation to prove that the observation found by this experiment holds for all complex numbers:
z·z̄ = (a + b i)·(a − b i) = a2 − a·b·i + a·b·i − b2·i2 = a2 + b2.
Not only is z·z̄ always a real number, but it is also non-negative. Moreover, there is a geometric significance to the formula a2 + b2. By the Pythagorean theorem, this is the square of the distance from the point (a, b) to the origin. Thus, it makes sense to give a special name to the square root of z·z̄.

DEFINITION 11.6
We say the absolute value of a complex number z = a + b i is
|z| = √( z·z̄ ).
Since z·z̄ is a non-negative real number for all elements of ℂ, |z| is well defined and non-negative. The geometric interpretation of |z| is the distance from z to the origin. In Sage, the function abs(z) gives the absolute value for both real and complex numbers.
The familiar property for the absolute value of real numbers holds for all complex numbers as well.
For any two elements x and y in ℂ, |x·y| = |x|·|y|.
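Python's built-in complex type follows the same definitions, so both the formula z·z̄ = a2 + b2 and the proposition can be spot-checked directly:

```python
import math

z = 3 + 4j
# z times its conjugate is the non-negative real number a^2 + b^2
assert z * z.conjugate() == 25 + 0j
# abs(z) is its square root: the distance from (3, 4) to the origin
assert abs(z) == 5.0

# |x*y| = |x|*|y|, checked up to floating-point rounding
x, y = 1 - 2j, -3 + 1j
assert math.isclose(abs(x * y), abs(x) * abs(y))
print("all checks passed")
```

The multiplicative property holds exactly in ℂ; the `isclose` comparison only guards against rounding in the square roots.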
Since there is a geometric interpretation of the absolute value, this proposition suggests that there is also a geometric interpretation to the product of two
complex numbers.
From polar coordinates it is known that any point in the plane can be located by knowing its distance r from the origin, and its angle θ from the positive x-axis. Execute the following command to see a picture.
Since r is the absolute value of (x + y i), perhaps the angle θ also is significant to the complex number. By using trigonometry in the above figure, we have that
x = r cos θ, and y = r sin θ.
Thus,

x + y i = r(cos θ + i sin θ).
This form is called the polar form of the complex number x + y i. We have already seen that r is the absolute value of x + y i. The angle θ is called the argument of x + y i. We can find the argument of a complex number in Sage with the command arg(z). This angle is approximately
Sage puts the answer in radians, and we will soon understand why radians should always be used in expressing the angle θ.
Although the absolute value of a complex number is unique, the argument is not uniquely determined. This is because for every angle θ, we consider the angles
…, θ − 6 π, θ − 4 π, θ − 2 π, θ, θ + 2 π, θ + 4 π, θ + 6 π, ….
All of these angles have the same sine and cosine, and hence are interchangeable in the polar coordinate system. We call these angles coterminal. The set of angles coterminal to θ can be written

{θ + 2 π n | n ∈ ℤ}.
Sage always picks θ to be between −π and π, but there are times when we need to consider the angles coterminal to θ as well. For example, let us use Sage to help us find the polar form of −√3 − i. The absolute value is given by
while the argument is given by
So Sage has converted the point (−√3, −1) into polar coordinates, and found that
−√3 − i = 2 (− √3⁄2 − i⁄2) = 2 ( cos(−5π⁄6) + i sin(−5π⁄6) ).
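Python's cmath module performs the same conversion; `cmath.polar` plays the role of Sage's abs and arg here:

```python
import cmath
import math

z = complex(-math.sqrt(3), -1)
r, theta = cmath.polar(z)
print(r)        # 2.0 (up to rounding): the absolute value
print(theta)    # about -2.618, i.e. -5*pi/6: the argument

# rebuilding z from its polar form recovers the original number
w = r * complex(math.cos(theta), math.sin(theta))
assert cmath.isclose(w, z)
```

Like Sage, `cmath.polar` returns the principal argument in the interval (−π, π].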
However, we could have used any coterminal angle instead of the one Sage gave us. Thus, the following are also polar forms of −√3 − i:

2 ( cos(7π⁄6) + i sin(7π⁄6) ), 2 ( cos(19π⁄6) + i sin(19π⁄6) ), ….
There will actually be an infinite number of such expressions. The usefulness of the polar form of a complex number is hinted at by the next lemma. This lemma makes use of the trigonometric identities
cos(A + B) = cos(A) cos(B) − sin(A) sin(B), and sin(A + B) = sin(A) cos(B) + cos(A) sin(B).
We can now use induction to prove the following important theorem:
If n is an integer, and
z = r ( cos θ + i sin θ )
is a nonzero complex number in polar form, then

zn = rn ( cos(n θ) + i sin(n θ) ).
De Moivre's Theorem (11.2) allows us to quickly raise a complex number to an integer power.
EXAMPLE:
Compute (−√3 − i)5.
Since
r = √( (−√3)2 + (−1)2 ) = 2,
(−√3 − i)5 = 25( cos(−25π⁄6) + i sin(−25π⁄6) ) = 32(√3⁄2 − i⁄2) = 16 √3 − 16 i.
We can check this result with Sage:
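A parallel check can be run in plain Python, comparing a direct fifth power with De Moivre's formula (the variable names are my own):

```python
import cmath
import math

z = complex(-math.sqrt(3), -1)
r, theta = cmath.polar(z)

direct = z ** 5
# De Moivre: z^n = r^n (cos(n*theta) + i sin(n*theta))
moivre = (r ** 5) * complex(math.cos(5 * theta), math.sin(5 * theta))

assert cmath.isclose(direct, moivre)
# both agree with the hand computation 16*sqrt(3) - 16 i
assert cmath.isclose(direct, complex(16 * math.sqrt(3), -16))
print(direct)
```

The floating-point results differ from 16√3 − 16 i only by rounding error, which `cmath.isclose` absorbs.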
EXAMPLE:
Compute (√2⁄2 − √2⁄2 i)27.
(√2⁄2 − √2⁄2 i)27 = ( 1 ( cos(−π⁄4) + i sin(−π⁄4) ) )27 = 127( cos(−27π⁄4) + i sin(−27π⁄4) ) = − √2⁄2 − √2⁄2 i.
This can also be verified using Sage:

We can also use De Moivre's theorem to find the nth roots of 1. We first define
ωn = cos(2π⁄n) + i sin(2π⁄n).
For example, ω1 = 1, ω2 = −1, ω3 = (−1 + i√3)/2, and ω4 = i. From De Moivre's theorem (11.2),

(ωn)n = cos(2π) + i sin(2π) = 1.
So ωn is an nth root of 1. In fact, all nth roots of 1 are given by the numbers ωn, ωn2, ωn3, … up to ωnn = 1.

EXAMPLE:
The eighth root of unity, ω8, is given by
ω8 = (1⁄2 + i⁄2)√2.
We can enter this into Sage with the command

Then
simplifies to just i.
We can now consider the group generated by ω8:
This gives the eight eighth roots of unity, and shows that these elements form a group. In fact, the nth roots of unity always form a cyclic group isomorphic to Zn.
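In plain Python the eighth roots of unity can only be approximated with cmath (exact arithmetic would need Sage), but closure under multiplication, and with it the cyclic group structure, can still be checked up to rounding:

```python
import cmath

n = 8
w = cmath.exp(2j * cmath.pi / n)      # w = cos(2*pi/8) + i sin(2*pi/8)
roots = [w ** k for k in range(n)]

def nearest_index(z):
    # which of the n roots is z, up to floating-point error?
    return min(range(n), key=lambda k: abs(z - roots[k]))

# closure: the product of the k-th and m-th roots is the (k+m mod n)-th root,
# so multiplication of roots mirrors addition in Z8
for k in range(n):
    for m in range(n):
        assert nearest_index(roots[k] * roots[m]) == (k + m) % n
print("closed under multiplication")
```

The index bookkeeping k + m mod n is precisely the isomorphism with the additive group Z8.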
Let us rearrange the elements of G
so that the elements appear in the circle graph as they do in the complex plane. For example, the circle graph which multiplies every element by ω8 is given by
We are mainly interested in those elements of this subgroup that are generators.
DEFINITION 11.7:
A complex number z is called a primitive nth root of unity if the powers of z produce all n solutions to the equation xn = 1.
It is clear that ωn is a primitive nth root of unity, but (ωn)k is also a primitive nth root of unity whenever k and n are coprime.
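The coprimality criterion is easy to confirm computationally. For n = 8, the following sketch (using index arithmetic rather than complex numbers, so the test is exact) finds exactly the exponents k coprime to 8:

```python
from math import gcd

n = 8

def is_primitive_power(k, n):
    # w_n^k is primitive iff its powers w_n^(k*j) hit all n distinct roots,
    # i.e. the multiples of k cover every residue mod n
    return len({(k * j) % n for j in range(n)}) == n

primitive = [k for k in range(1, n + 1) if is_primitive_power(k, n)]
print(primitive)                       # [1, 3, 5, 7]
assert primitive == [k for k in range(1, n + 1) if gcd(k, n) == 1]
```

This matches the group-theoretic fact that the generators of a cyclic group of order n are the elements of order n, indexed by exponents coprime to n.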
EXPERIMENT:
By trial and error, determine which elements of G are primitive eighth roots of unity. Note that ω82 is not a primitive eighth root, since the circle graph
shows that ω82 does not generate the group. Replace the 2 in the above example until all primitive eighth roots are found.
EXAMPLE:
One twelfth root of unity is given by
ω12 = √3⁄2 + i⁄2.
Form a circle graph of the twelfth roots of unity.

This graph shows that ω123 = i, and hence this is not a primitive twelfth root of unity. But it is not hard to find primitive twelfth roots from this circle graph.
EXPERIMENT:
Try changing the power in the last command to find all primitive twelfth roots of unity.
Thus, De Moivre's Theorem (11.2) gives us a very fast means of raising complex numbers to integer powers. It is not difficult to imagine that this formula could be used to raise a complex number to any real number as well. But what if we want to consider raising a number to a complex power? Could we somehow use De Moivre's Theorem (11.2) to help us find, for example, 2i?
First, we need to clarify exactly what we mean by raising an element to the power of an element. Since this is not a standard field operation, it is not surprising to learn that in most fields, raising an element to the power of an element is absurd. However, since the complex numbers contain a copy of the real numbers, such a concept seems plausible. If we examine how we take powers in the real number system, we will discover that we utilize the exponential function ex to compute quantities such as 2√2. We use the fact that
2 = eln 2, so 2√2 = (eln 2)√2 = e(ln 2)√2,
which is approximately

It is clear that if we know how to take the exponential function of a complex number, we will be able to raise any positive number to a complex power.
The key algebraic property of the exponential function is that
ex+y = ex·ey for all x, y ∈ ℝ.
This indicates that the exponential function is a group homomorphism mapping the additive group of real numbers to the multiplicative group of real numbers. The reason we can consider raising an element of the real numbers to the power of an element stems from the existence of a group homomorphism between the additive structure and the multiplicative structure. Using this observation about the real numbers, let us rephrase the question. Can we use De Moivre's theorem (11.2) to extend the exponential function into a group homomorphism from the additive structure of ℂ (denoted ℂ+) to the multiplicative structure ℂ*, thereby allowing us to compute expressions such as 2i?
If such a group homomorphism exists, then
ea+b i = ea·(ei)b
so using De Moivre's theorem we can compute the exponential of any complex number (a + b i), where b is an integer, provided we know the value of ei. Can Sage give us a hint as to what ei should be? Let's try it.

That didn't help much. Let us try looking at the approximation.
This at least tells us that ei is a complex number, but the decimals don't look familiar. What we are really interested in is the absolute value and the argument.
EXPERIMENT:
Using Sage, try to determine the exact value for the absolute value and the argument of the number ei.
Although the absolute value and argument of ei seem simple enough, there is no way to algebraically prove that these values are correct. The reason is simple: there are many ways to extend the exponential function to become a group homomorphism from ℂ+ to ℂ*. So typically an additional property of the homomorphism is imposed, such as requiring the new function to equal its own derivative. However, such additional properties are not algebraic: they all involve calculus in some way. In fact, we cannot determine the value of ei without calculus. Problems 11.21 through 11.23 show three ways of using calculus to determine the value of ei.
For now, let us just use our observations from the experiment and assume that
ei = cos( 1 radian ) + i sin( 1 radian ).
From this, we have by De Moivre's theorem (11.2) that

ea+b i = ea·(ei)b = ea·1b·( cos (b·1) + i sin(b·1) ) = ea·( cos b + i sin b )
whenever b is an integer. It makes sense to define this as the exponential function for all complex numbers. Notice that radian measure must be used in this formula! This allows us another way of expressing ωn. Notice that
e2πi/n = cos(2π⁄n) + i sin(2π⁄n) = ωn.
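The formula e^(a+b i) = ea(cos b + i sin b) is exactly what Python's cmath.exp computes, so both it and the expression for ωn can be verified numerically (with b in radians, as the text insists):

```python
import cmath
import math

a, b = 1.5, 2.0
lhs = cmath.exp(complex(a, b))
# e^a * (cos b + i sin b), with b measured in radians
rhs = math.exp(a) * complex(math.cos(b), math.sin(b))
assert cmath.isclose(lhs, rhs)

# e^(2*pi*i/n) reproduces the root of unity w_n, here for n = 8
w8 = cmath.exp(2j * cmath.pi / 8)
assert cmath.isclose(w8, complex(math.cos(math.pi / 4), math.sin(math.pi / 4)))
print(lhs)
```

The two sides agree to machine precision for any choice of a and b.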
So we now have a more succinct way of defining the nth roots of 1. The real exponential function is one-to-one, but it is not onto, since there is no real number for which ex = −1. However, the complex exponential function is onto, since for every nonzero complex number in polar form,
z = r( cos θ + i sin θ ),
there is a complex number whose exponential is z, namely ln(r) + i θ. This can be seen by observing that

e(ln r+i θ) = eln r( cos θ + i sin θ ) = r( cos θ + i sin θ ) = z.
The drawback of the complex exponential function is that it is not one-to-one! Note that

gives the same result as e0. This suggests that we should look at the kernel of the group homomorphism, that is, the set of all z = x + i y such that
ez = ex+y i = 1.
It is apparent that x must be 0, cos(y) = 1, and sin(y) = 0. Thus, the kernel of the complex exponential function is the set

N = {2 k π i | k ∈ ℤ }.
This is, of course, a normal subgroup of the additive group ℂ+. We can denote this set by N = ƒ−1(1). We can similarly define the inverse exponential for any nonzero number.
DEFINITION 11.8
For any nonzero complex number z, we define the complex logarithm of z, denoted log(z), to be the set of elements x such that ex = z.
Notice that we use the function ln(x) to denote the real logarithm, while we use log(z) to denote the complex logarithm.
We have already observed that when z is written in polar form,
z = r( cos θ + i sin θ ),
that one value of x which satisfies the equation is x = ln(r) + θ i. We also know that ƒ−1(z) will be a coset of the kernel of ƒ. Thus, we have

log(z) = ln(r) + θ i + N.
For example, log(−1) is the set

{ π i + 2 k π i | k ∈ ℤ } = {…, −5 π i, −3 π i, −π i, π i, 3 π i, 5 π i, …}.
The Sage log function works for complex numbers, but only gives one element of the set. Thus, we must add the kernel N to this result to obtain the set given by log(z).
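Python's cmath.log likewise returns only the principal value; adding multiples of 2πi produces the rest of the set log(z), and each branch exponentiates back to z:

```python
import cmath

z = -1 + 0j
principal = cmath.log(z)                  # pi*i, the single value reported
branches = [principal + 2j * cmath.pi * k for k in range(-2, 3)]

for w in branches:
    # every element of the coset ln(r) + theta*i + N maps back to z
    assert cmath.isclose(cmath.exp(w), z, abs_tol=1e-9)
print(principal)
```

Here the principal value π i is exactly the representative named in the text, and the loop walks through five elements of the coset π i + N.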
It is informative to see the graphs of the complex exponential and complex logarithm. Even though these graphs are really surfaces in four dimensions, we can look at the real part and the imaginary part of the functions separately, giving us two three-dimensional graphs. For example, the real part of the exponential function is given by:
The graph of the imaginary part is very similar.
The real part of the logarithm function does not depend on which number of the set we pick. Therefore we can draw a graph of the real part of the logarithm function.
This surface looks like a tornado funnel. Even though there are many values for the imaginary part of the logarithm function, we could consider graphing all possible values at once. The result looks like the following graph.
This graph looks like a multi-level parking garage. Notice that for any ordered pair (x, y), there will be an infinite number of points for z. Yet one can travel on this surface from any point to any other point. The different "levels" of the parking garage correspond to different values of the logarithm.
Once the logarithm is defined, we can actually define raising any nonzero complex number to any complex power! We simply use the laws of exponents and say that
x^z = (e^log(x))^z = e^(z·log(x)).
Notice that this gives a set of numbers, not just a single number. In other words, we take the set log(x), multiply each element by z, and take the exponential of this collection to get a collection of answers. Although there will at times be an infinite number of elements in the set x^z, this will not always be so.
EXPERIMENT:
Use Sage to find some of the elements of the set (8 i)1/3. Consider 5 different values for the logarithm. You may have to use N(_) to see an approximation of your answers. What do you observe about the solutions?
The duplications observed in this experiment are not a coincidence. Whenever z is a rational number, there will be only a finite number of possible values for x^z, as the next proposition shows.
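The experiment above can be mimicked in plain Python (an assumption here, in place of the Sage session): take five different branches of log(8i), and exponentiate each one divided by 3.

```python
import cmath

x = 8j
# five different values of log(8i), differing by elements of the kernel
logs = [cmath.log(x) + 2 * cmath.pi * 1j * k for k in range(5)]
# x^(1/3) = e^(log(x)/3) for each branch of the logarithm
roots = [cmath.exp(w / 3) for w in logs]
# rounding exposes the duplicates: only three distinct cube roots occur
distinct = {(round(r.real, 9), round(r.imag, 9)) for r in roots}
print(len(distinct))  # 3
```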
This last proposition is very useful for finding square roots and cube roots of complex numbers, which turns out to have some important applications in finding the roots of real polynomials! In fact, complex numbers and the functions we have defined in this section have many applications in the real world. The complex exponential function was fundamental to the invention of the short-wave radio. The complex logarithm can be used in solving real-valued differential equations. So even though these numbers are labeled "imaginary," they are by no means just a figment of someone's imagination.
Ordered Commutative Rings
The integers, the rational numbers, and the real numbers all have one property that most rings do not have—these three rings have an ordering. That is, given two different elements in the ring, we say that one of them is greater than the other. Most rings do not have such an ordering, but we will find that some rings can be ordered in more than one way! The ordering of a ring can give us new insight into the structure of the ring.
We begin by making a formal definition of an ordered ring R. If there is a way to tell whether one element is greater than another, we should be able to distinguish those elements that are greater than zero, called the positive elements P. The main properties of positive numbers that we want to mimic are that the sum and the product of two positive numbers are again positive. Thus we have the following definition.
DEFINITION 11.9
A commutative ring R is ordered if there exists a set P such that the following three properties hold:
P is closed under addition.
P is closed under multiplication.
For each x in R, one and only one of the following statements is true:
x ∈ P, x = 0, or −x ∈ P.
The third property is sometimes called the law of trichotomy. With this law, we can define what it means for one element to be greater than another.
DEFINITION 11.10
We say that x is greater than y, denoted x > y, if x − y ∈ P. Likewise, we say that x is smaller than y, denoted x < y, if y − x ∈ P.
By the law of trichotomy, either
x > y, x < y, or x = y.
This notation keeps us from having to constantly refer to the set P. Instead of writing x ∈ P, we can merely write x > 0. We begin by proving some simple properties of the "greater than" sign.
If x > y, then x + z > y + z.
If x > y and z > 0, then x·z > y·z.
If x > y and y > z, then x > z.
Given a ring that has an ordering, one of the great challenges is determining the set of positive elements P. Are there any elements of an ordered ring that we know must be in P? Yes, there are at least some elements given by the next proposition.
An immediate consequence of this is that if the ring has an identity e, then e > 0, since e = e². We can prove even more.
The standard examples of ordered rings are the integers, the rationals, and the real numbers. It should be noted that the complex numbers do not form an ordered ring, since i² = −1 < 0, yet by Proposition 11.8, any square must be positive.
EXAMPLE
Consider the set
S = {x + y√2 | x, y ∈ ℤ}.
We saw in §10.1 that this is a subring of ℝ. We will call this ring ℤ[√2], the ring formed by adjoining the square root of 2 to ℤ. Find a non-standard ordering on this ring.
We can verify that the product of two elements of S yields a number of the same form. By Proposition 0.5, this ring has no zero divisors, so this is an integral domain.
Since all elements of ℤ[√2] are real, we could use the standard ordering, letting P consist of all numbers that, when viewed as a real number, are positive. But we could order these numbers in a non-standard way! By Corollary 11.2, the positive integers must be in P, but there is no way of proving that the square root of 2 is in P. (There is no number in ℤ[√2] whose square is √2.) Thus, there is nothing preventing us from stating that √2 is negative, and therefore −√2 would be in P! The rest of the elements would be identified as either positive or negative accordingly.
For example, 1 + √2 would be negative, since
(1 + √2)·(1 − √2) = −1 < 0,
and 1 − √2 is the sum of two positive numbers, so this term is positive.
EXPERIMENT:
If √2 < 0, is 3 + 2√2 positive or negative?
(Hint: multiply by the "conjugate" as we did in the last example.)
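The conjugate trick can be sketched in plain Python (Sage is not needed here): declaring √2 to be negative amounts to saying that x + y√2 counts as positive exactly when its conjugate x − y√2 is positive as a real number.

```python
import math

SQRT2 = math.sqrt(2)

def conj(x, y):
    """The map x + y*sqrt(2) -> x - y*sqrt(2)."""
    return (x, -y)

def positive_nonstandard(x, y):
    """Membership in the non-standard positive set: x + y*sqrt(2) counts as
    positive exactly when its conjugate is positive as a real number."""
    cx, cy = conj(x, y)
    return cx + cy * SQRT2 > 0

print(positive_nonstandard(1, 1))   # 1 + sqrt(2) is negative in the new ordering
print(positive_nonstandard(1, -1))  # 1 - sqrt(2) is positive in the new ordering
```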
To see what is really going on in this example, it is helpful to look at the ring automorphisms, which were introduced in the last section. The automorphism of particular interest is as follows:
ƒ : ℤ[√2] → ℤ[√2],
ƒ(x + y√2) = x − y√2.
This automorphism can be defined in Sage. Since Sage already knows that sqrt(2)·sqrt(2) = 2, we do not need to tell Sage anything to define the ring ℤ[√2]! We can now define the homomorphism. Since this is a field homomorphism on an infinite set of objects, the format is slightly different.
We do not have to indicate the domain and target, since the domain will be the currently defined field, which in this case is a subset of the real numbers. The command InitDomain(0) merely clears out any previous fields that have been defined, such as the "complex numbers modulo 3".
Since F(1) must be 1, this is all we need to define the homomorphism. We can check that this is a homomorphism with the command
We can try this homomorphism out on different numbers.
What is the relationship between this homomorphism and the unusual ordering that we gave the ring ℤ[√2]? If we let P denote the set of positive elements using the "standard" ordering, and let P′ be the set of positive elements under the unusual ordering we saw above, then
P′ = ƒ(P).
In fact, for any automorphism ϕ on an ordered ring, we can construct an alternative way to order the ring by using ϕ(P) instead of P for the set of positive elements. While we are working with the integral domain ℤ[√2], we might mention what happens if we consider the field of quotients of this ring. Certainly this must include all numbers of the form
x + y√2, with x, y ∈ ℚ.
But are there other elements? We need to check that all nonzero elements of this form have a multiplicative inverse. For example, is the reciprocal of such an element also of this form? The Together command will also rationalize the denominator, so we find that
produces a number of the correct form. In fact, by defining two variables a and b, we can find how to take the inverse of a + b√2, by multiplying by a − b√2.
Since this will be rational whenever a and b are rational, we see that
1⁄(a + b√2) = (a − b√2)⁄(a² − 2b²).
By Proposition 0.5, the denominator will not be zero for rational a and b, so this is a field. We will call this field ℚ[√2].
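This inverse formula can be checked with exact rational arithmetic in plain Python (a stand-in for the Sage computation), storing an element a + b√2 of ℚ[√2] as the pair (a, b).

```python
from fractions import Fraction

def inverse(a, b):
    """1/(a + b*sqrt(2)) = (a - b*sqrt(2)) / (a^2 - 2*b^2)."""
    denom = a * a - 2 * b * b  # rational, and nonzero unless a = b = 0
    return (a / denom, -b / denom)

a, b = Fraction(3), Fraction(1, 2)
c, d = inverse(a, b)
# (a + b*sqrt2)*(c + d*sqrt2) = (a*c + 2*b*d) + (a*d + b*c)*sqrt2, which should equal 1
assert a * c + 2 * b * d == 1
assert a * d + b * c == 0
```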
Now that ℚ[√2] is defined in Sage, the automorphism ƒ extends to an automorphism on the field ℚ[√2]. Thus, the unusual ordering that we gave to ℤ[√2] should also extend to the field of quotients. In fact this generally happens, as seen in the next proposition.
Let R be an ordered integral domain, with P the set of positive elements. If Q is the field of quotients of R, then the ordering on R can be extended in a unique way to an ordering on Q. That is, there is a unique set P ′ that forms an ordering on Q, with
p ∈ P ⟹ (p⁄1) ∈ P ′.
EXAMPLE:
What if we had considered ℤ[∛2] instead of ℤ[√2]? We can write ∛2 as 2^(1/3) in Sage, and try the Together command to see what happens.
Sage was able to rationalize the denominator, but the result involved 2^(2/3). No wonder, since
2^(1/3)·2^(1/3) = 2^(2/3),
which yields a term we didn't consider. Thus, we have to include all numbers of the form
x + y ∛2 + z ∛4 ∈ ℤ[∛2].
In Sage, we would enter this as:
We can check if such expressions are now closed under multiplication:
This is a mess, but it is clear that the product is in ℤ[∛2]. Let us see if we can take inverses.
It seems like we can take inverses, but it is harder to prove that we can always rationalize the denominator. Notice that
will produce a rational number. Thus,
1⁄(a + b∛2 + c∛4) = (a² − 2 b c + (2 c² − a b)∛2 + (b² − a c)∛4)⁄(a³ + 2 b³ + 4 c³ − 6 a b c).
It takes a bit more work to show that the denominator will never be zero when a, b, and c are rational (see Problem 15.) Once this has been proven, we see that ℚ[∛2] is a field.
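The same kind of check can be done for ℚ[∛2] in plain Python, storing a + b∛2 + c∛4 as a triple (a, b, c) and reducing powers with ∛2·∛4 = 2.

```python
from fractions import Fraction as F

def mul(p, q):
    """Multiply (a, b, c) and (d, e, f), where (a, b, c) = a + b*2^(1/3) + c*2^(2/3)."""
    a, b, c = p
    d, e, f = q
    return (a*d + 2*b*f + 2*c*e,  # 2^(1/3)*2^(2/3) = 2
            a*e + b*d + 2*c*f,    # 2^(2/3)*2^(2/3) = 2*2^(1/3)
            a*f + b*e + c*d)

def inverse(p):
    """The rationalized inverse formula from the text."""
    a, b, c = p
    denom = a**3 + 2*b**3 + 4*c**3 - 6*a*b*c
    return ((a*a - 2*b*c) / denom, (2*c*c - a*b) / denom, (b*b - a*c) / denom)

p = (F(1), F(2), F(-3))
assert mul(p, inverse(p)) == (1, 0, 0)
```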
Does this field have an unusual ordering, as the field ℚ[√2] did? Note that in this field,
2 ∛2 = (∛4)² > 0.
So by Proposition 11.8, the cube root of 2 must be positive. Likewise, 2^(2/3) must be positive, since it is a perfect square. It seems that there is little chance of finding an unusual ordering for this field. To eliminate this chance altogether, note that
(4 ∛2 − 5)·(16 ∛4 + 20 ∛2 + 25) = 128 − 125 = 3 > 0.
Hence, 4 ∛2 − 5 must be positive, even though numerically it is very close to 0. The reason there is only one way to order this field stems from the fact that there is only one automorphism on this field: the trivial one. For if ƒ is an automorphism of ℚ[∛2], then ƒ(∛2) must be a number whose cube is 2. But there is only one such real number, and all elements in this field are real. Thus,
ƒ(∛2) = ∛2,
and therefore ƒ(x) = x for all x in the field.
EXAMPLE:
Consider the set
S = {x + y cos(π⁄9) + z cos(2π⁄9) | x, y, z ∈ ℚ}.
Find several possible ways of defining an ordering on this field.
To multiply two such numbers together, we need to use some trigonometric identities:
cos²(x) = 1⁄2( 1 + cos(2x) ),
cos(x)·cos(y) = 1⁄2( cos(x + y) + cos(x − y) ), and
cos(4π⁄9) = cos(π⁄9) − cos(2π⁄9).
EXPERIMENT:
Can you derive the last trig identity from the other two?
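The third identity is easy to confirm numerically (a floating-point spot check, not a derivation):

```python
import math

lhs = math.cos(4 * math.pi / 9)
rhs = math.cos(math.pi / 9) - math.cos(2 * math.pi / 9)
assert abs(lhs - rhs) < 1e-12  # cos(4*pi/9) = cos(pi/9) - cos(2*pi/9)
```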
Sage is already familiar with these trigonometric identities. In fact, the command TrigReduce will use the above identities to simplify a trigonometric expression. Thus, we can compute the product of two elements of S as follows:
As messy as this is, one can see that it is an element of S, so S is closed under multiplication. In fact, the set S is a field, but it is very difficult to show this.
Since the elements of this field are all real, there is a natural ordering of the elements of S. Are there other ways to order this field? We want to look for automorphisms on the field S. But consider the following:
Is this an automorphism? Let's check.
Apparently it is! Furthermore, we could consider the homomorphism ƒ²(x) = ƒ(ƒ(x)):
This shows that ƒ² is yet another automorphism on S.
EXPERIMENT:
If we considered the automorphism ƒ³(x) = ƒ(ƒ(ƒ(x))), would we get yet another automorphism?
Are there any automorphisms on the field S besides these? We can show that there are not. We will take advantage of the trig identity
cos(3x) = 4 cos³(x) − 3 cos(x),
from which we get
1⁄2 = cos(3π⁄9) = 4 cos³(π⁄9) − 3 cos(π⁄9).
Thus, cos(π⁄9) satisfies the polynomial equation
4 x³ − 3 x = 1⁄2.
Because ƒ is an automorphism, ƒ(cos(π⁄9)) must satisfy the same polynomial equation. But a cubic equation has only three roots, and so there are only three possible values for ƒ(cos(π⁄9)). Each of these three solutions produces a unique automorphism on S, since
ƒ(cos(2π⁄9)) = ƒ(2 cos²(π⁄9) − 1) = 2 ƒ(cos(π⁄9))² − 1.
By Lemma 11.5, we see that the group of automorphisms of this ring must be isomorphic to ℤ3. In fact, we can compute the triple composition and see that ƒ(ƒ(ƒ(x))) = x for all x.
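This can be illustrated numerically in plain Python. Representing x + y cos(π⁄9) + z cos(2π⁄9) as a triple (x, y, z), one concrete choice for the nontrivial automorphism (an assumption made here for illustration) sends cos(π⁄9) to the root cos(7π⁄9) = −cos(2π⁄9), which works out to (x, y, z) ↦ (x, z, −y − z):

```python
import math

C1, C2 = math.cos(math.pi / 9), math.cos(2 * math.pi / 9)

def val(e):
    """Numerical value of x + y*cos(pi/9) + z*cos(2*pi/9)."""
    x, y, z = e
    return x + y * C1 + z * C2

def f(e):
    """Automorphism determined by cos(pi/9) -> -cos(2*pi/9) = cos(7*pi/9)."""
    x, y, z = e
    return (x, z, -y - z)

# f sends cos(pi/9) to another root of 4t^3 - 3t = 1/2
t = val(f((0, 1, 0)))
assert abs(4 * t**3 - 3 * t - 0.5) < 1e-12
# applying f three times returns every element to itself
assert f(f(f((2, 5, -7)))) == (2, 5, -7)
```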
The three automorphisms give us three ways to define an ordering on the field S:
a >₁ b if a is larger than b as real numbers.
a >₂ b if ƒ(a) >₁ ƒ(b).
a >₃ b if ƒ(ƒ(a)) >₁ ƒ(ƒ(b)).
EXPERIMENT:
Consider the numbers
Are these numbers positive or negative in the three ordering systems? Can you find an irrational number in S which is positive regardless of which ordering is used?
We can actually use the automorphisms to prove that S is a field. If we let
we can consider what happens if we multiply the three automorphisms together.
We discover that this product will be a rational number! See Problem 17 for an explanation. With this, we can see that
1⁄x = 8 ƒ(x)·ƒ(ƒ(x)) ⁄ (8a³ − 6ab² + b³ − 6abc + 6b²c − 6ac² + 3bc² − c³).
Since we can compute
we find that
1⁄x = ( (8a² − 2b² − 2bc − 2c²) + (8ab + 8bc + 4c²) cos(π⁄9) + (4b² − 8ac − 4c²) cos(2π⁄9) ) ⁄ (8a³ − 6ab² + b³ − 6abc + 6b²c − 6ac² + 3bc² − c³).
We have seen that some fields may have many ways of assigning an order to the elements, while others have only one. The key is the number of ring automorphisms. These ring automorphisms will play a major role in the following chapters.
Proofs:
Proof of Lemma 11.1:
Let A be a polynomial of degree m,
A = a₀ + a₁ x + a₂ x² + a₃ x³ + ⋯ + aₘ xᵐ,
and B be a polynomial of degree n,
B = b₀ + b₁ x + b₂ x² + b₃ x³ + ⋯ + bₙ xⁿ.
Here, aₘ and bₙ are nonzero elements of K. The product is determined by
A·B = ∑_{i=0..∞} ∑_{j=0..∞} aᵢ·bⱼ x^(i+j).
Note that aᵢ and bⱼ are zero for i > m and j > n. If i + j > m + n, then either i > m or j > n, and in either case aᵢ·bⱼ = 0. Thus, there are no nonzero terms in A·B of degree larger than m + n.
However, if i + j = m + n, the only nonzero term would be the one coming from i = m and j = n, giving
aₘ·bₙ x^(m+n).
Since there are no zero divisors in K, aₘ·bₙ is nonzero, so A·B is a polynomial of degree m + n. Next we turn our attention to A + B. We may assume without loss of generality that m is no more than n. Then the sum of A and B can be expressed as
(a₀ + b₀) + (a₁ + b₁) x + (a₂ + b₂) x² + ⋯ + (aₘ + bₘ) xᵐ + bₘ₊₁ x^(m+1) + ⋯ + bₙ xⁿ.
If m < n, this clearly is a polynomial of degree n. Even if m = n, this still gives a polynomial whose degree cannot be more than n.
Proof of Proposition 11.1:
We have seen that K[x] is closed under addition and multiplication. By the commutativity of K, addition and multiplication are obviously commutative. It is also clear that the zero polynomial acts as the additive identity in K[x]. Also, the additive inverse of
A = a₀ + a₁ x + a₂ x² + a₃ x³ + ⋯
is given by
−A = (−a₀) + (−a₁) x + (−a₂) x² + (−a₃) x³ + ⋯,
since the sum of these two polynomials is
A + (−A) = 0 + 0 x + 0 x² + 0 x³ + ⋯ = 0.
The polynomial with b₀ = 1, and bⱼ = 0 for all positive j,
I = 1 + 0 x + 0 x² + 0 x³ + ⋯,
acts as the multiplicative identity, since
I·A = A·I = ∑_{i=0..∞} ∑_{j=0..∞} aᵢ·bⱼ x^(i+j) = ∑_{i=0..∞} aᵢ·1 xⁱ = A.
To check associativity of addition and multiplication, we need three polynomials
A = a₀ + a₁ x + a₂ x² + a₃ x³ + ⋯,
B = b₀ + b₁ x + b₂ x² + b₃ x³ + ⋯, and
C = c₀ + c₁ x + c₂ x² + c₃ x³ + ⋯.
Then
(A + B) + C = ((a₀ + b₀) + c₀) + ((a₁ + b₁) + c₁) x + ((a₂ + b₂) + c₂) x² + ⋯ = (a₀ + (b₀ + c₀)) + (a₁ + (b₁ + c₁)) x + (a₂ + (b₂ + c₂)) x² + ⋯ = A + (B + C).
Likewise,
A·(B·C) = A·( ∑_{j=0..∞} ∑_{k=0..∞} bⱼ·cₖ x^(j+k) ) = ∑_{i=0..∞} ∑_{j=0..∞} ∑_{k=0..∞} aᵢ·(bⱼ·cₖ) x^(i+j+k) = ∑_{i=0..∞} ∑_{j=0..∞} ∑_{k=0..∞} (aᵢ·bⱼ)·cₖ x^(i+j+k) = (A·B)·C.
The two distributive laws are also easy to verify using the summation notation.
A·(B + C) = A·( ∑_{j=0..∞} (bⱼ + cⱼ) xʲ ) = ∑_{i=0..∞} ∑_{j=0..∞} aᵢ·(bⱼ + cⱼ) x^(i+j) = ∑_{i=0..∞} ∑_{j=0..∞} (aᵢ·bⱼ + aᵢ·cⱼ) x^(i+j) = ∑_{i=0..∞} ∑_{j=0..∞} aᵢ·bⱼ x^(i+j) + ∑_{i=0..∞} ∑_{j=0..∞} aᵢ·cⱼ x^(i+j) = A·B + A·C.
We can use the fact that multiplication is commutative to show that (A + B)·C = A·C + B·C. Thus, K[x] is a commutative ring with identity.
Next, let us show that K[x] has no zero divisors. Suppose that A·B = 0, with both A and B being nonzero polynomials. Say that A has degree m and B has degree n. Then by Lemma 11.1, A·B has degree m + n, which is impossible if either m or n is positive. But if A and B are constant polynomials, then A·B = a₀·b₀ = 0, which would indicate that either a₀ or b₀ is 0, since K has no zero divisors. Thus, either A or B would have to be 0, so we have that K[x] has no zero divisors.
Finally, let us show that K[x] is not a field by showing that the polynomial (1 + x) is not invertible. Suppose that there were a polynomial A such that A·(1 + x) = 1. Then A is not 0, so suppose A has degree m. Then by Lemma 11.1, we have m + 1 = 0, telling us m = −1, which is impossible. Thus, (1 + x) has no inverse in K[x], and therefore K[x] is an integral domain that is not a field.
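The degree bookkeeping in these proofs can be checked concretely in plain Python, with polynomials stored as coefficient lists over ℤ (an integral domain, so degrees add under multiplication):

```python
def degree(p):
    """Degree of a coefficient list [p0, p1, ...]; None for the zero polynomial."""
    d = None
    for i, c in enumerate(p):
        if c != 0:
            d = i
    return d

def poly_mul(p, q):
    """Convolution of coefficient lists, matching the double-sum definition of A*B."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

A = [1, 0, 3]  # 1 + 3x^2, degree 2
B = [2, 5]     # 2 + 5x,  degree 1
assert degree(poly_mul(A, B)) == degree(A) + degree(B)  # degrees add: 3
```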
Proof of Proposition 11.2:
Suppose that n x = 0 for some nonzero x in R. Then for any other nonzero element y of R,
0 = (n x)·y = n (x·y) = x·(n y).
But x is nonzero, and the ring has no zero-divisors, so we have n y = 0. This argument works in both ways, so
(*)  n x = 0 ⟺ n y = 0, whenever x ≠ 0 and y ≠ 0.
Suppose the ring has characteristic 0, and that n x = 0 with x ≠ 0 and n ≠ 0. Then by (*), n y = 0 for every y in R, so |n| would be a positive integer with |n|·y = 0 for all y, contradicting the characteristic being 0. Hence, if the ring has characteristic 0, then n x = 0 implies that either x = 0 or n = 0.
Now suppose that the ring has positive characteristic, and let x be any nonzero element of R. Let p be the smallest positive integer for which p x = 0. If p were not prime, then p = a·b, with 0 < a < p and 0 < b < p. But then
(a x)·(b x) = (a b)·(x2) = (p x)·x = 0·x = 0.
Since the ring has no zero divisors, either a x = 0 or b x = 0. But this contradicts the fact that p was the smallest number such that p x = 0. Thus, p is prime. By (*) we have that p y = 0 for every element y in R, and since this cannot be true for any smaller positive integer, we have that the characteristic of the ring is the prime number p. If n is a multiple of p, then n = c p for some integer c. Thus, for any element x in R,
n x = (c p)·x = c·(p x) = c·0 = 0.
Suppose that n x = 0 for some n that is not a multiple of p. Then gcd(n, p) must be 1, and so by the greatest common divisor theorem (0.4), there are integers u and v such that u n + v p = 1. But then
x = 1 x = (u n + v p) x = u (n x) + v (p x) = u·0 + v·0 = 0.
So for nonzero x, n x = 0 if, and only if, n is a multiple of p.
Proof of Lemma 11.2:
We need to show that the relation is reflexive, symmetric, and transitive. Let (x, y), (u, v), and (s, t) be arbitrary elements of P.
Reflexive:
(x, y) ≡ (x, y)
is equivalent to saying x·y = x·y, which is, of course, true. So this relation is reflexive.
Symmetric:
(x, y) ≡ (u, v) ⟹ x·v = y·u ⟹ u·y = v·x ⟹ (u, v) ≡ (x, y),
so this relation is also symmetric.
Transitive:
If (x, y) ≡ (u, v) and (u, v) ≡ (s, t), then
(x, y) ≡ (u, v) ⟹ x·v = y·u ⟹ x·v·t = y·u·t, and
(u, v) ≡ (s, t) ⟹ u·t = v·s ⟹ u·t·y = v·s·y.
Combining these, x·t·v = y·u·t = y·s·v, and since v is nonzero and there are no zero divisors, we can cancel v to obtain
x·t = y·s ⟹ (x, y) ≡ (s, t),
so the transitive law holds. Therefore, this relation is an equivalence relation.
Proof of Lemma 11.3:
The first observation we need to make is that the formulas for the sum and product both form valid elements of Q, since y·v is nonzero as long as y and v are both nonzero.
Next let us work to show that addition does not depend on the choice of representative elements (x, y) and (u, v). That is, if (x⁄y) = (a⁄b) and (u⁄v) = (c⁄d), we need to show that
(x⁄y) + (u⁄v) = (a⁄b) + (c⁄d).
That is, we have to prove that
(x·v + u·y)⁄(y·v) = (a·d + c·b)⁄(b·d).
Since (x⁄y) = (a⁄b) and (u⁄v) = (c⁄d), we have x·b = a·y and u·d = c·v. Multiplying the first equation by v·d and the second by y·b, we get
x·b·v·d = a·y·v·d and u·d·y·b = c·v·y·b.
Adding these two equations together and factoring, we get
(x·v + u·y)·b·d = (a·d + c·b)·y·v.
This gives us
(x·v + u·y)⁄(y·v) = (a·d + c·b)⁄(b·d),
which is what we wanted. We also need to show that multiplication is well defined, that is,
(x⁄y)·(u⁄v) = (a⁄b)·(c⁄d).
But since x·b = a·y and u·d = c·v, we can multiply these two equations together to get
x·b·u·d = a·y·c·v,
or
(x·u)·(b·d) = (a·c)·(y·v).
Therefore,
(x·u)⁄(y·v) = (a·c)⁄(b·d),
so multiplication is also well defined.
Proof of Theorem 11.1:
We have already noted that addition and multiplication are closed in Q. We next want to look at the properties of addition. From the definition,
(x⁄y) + (u⁄v) = (x·v + u·y)⁄(y·v) = (u⁄v) + (x⁄y),
we see that addition is commutative. Let z be any nonzero element of K. Then (0⁄z) acts as the additive identity:
(u⁄v) + (0⁄z) = (0⁄z) + (u⁄v) = (0·v + u·z)⁄(z·v) = (u·z)⁄(v·z) = (u⁄v).
Likewise, (−u⁄v) is the additive inverse of (u⁄v):
(u⁄v) + (−u⁄v) = (−u⁄v) + (u⁄v) = (−u·v + u·v)⁄(v·v) = 0⁄(v·v) = (0⁄z).
The associativity of addition is straightforward:
((x⁄y) + (u⁄v)) + (a⁄b) = (x·v + u·y)⁄(y·v) + (a⁄b) = (x·v·b + u·y·b + a·y·v)⁄(y·v·b),
while
(x⁄y) + ((u⁄v) + (a⁄b)) = (x⁄y) + (u·b + a·v)⁄(v·b) = (x·v·b + u·b·y + a·v·y)⁄(y·v·b).
So Q forms a group with respect to addition. Next we look at the properties of multiplication. Multiplication is obviously commutative, since
(x⁄y)·(u⁄v) = (x·u)⁄(y·v) = (u·x)⁄(v·y) = (u⁄v)·(x⁄y).
We also have associativity for multiplication:
((x⁄y)·(u⁄v))·(a⁄b) = ((x·u)⁄(y·v))·(a⁄b) = (x·u·a)⁄(y·v·b) = (x⁄y)·((u·a)⁄(v·b)) = (x⁄y)·((u⁄v)·(a⁄b)).
The element (z⁄z) acts as the multiplicative identity for any z ≠ 0:
(z⁄z)·(x⁄y) = (x⁄y)·(z⁄z) = (x·z)⁄(y·z) = (x⁄y).
If x = 0, then (x⁄y) = (0⁄z). Otherwise, the multiplicative inverse of (x⁄y) is (y⁄x), since
(x⁄y)·(y⁄x) = (x·y)⁄(y·x) = (z⁄z).
Thus, every nonzero element of Q has a multiplicative inverse. Finally, we have to prove the two distributive laws. Because of the commutativity of multiplication, we only need to check one. Since
((u⁄v) + (a⁄b))·(x⁄y) = ((u·b + a·v)⁄(v·b))·(x⁄y) = (u·b·x + a·v·x)⁄(v·b·y),
while
(u⁄v)·(x⁄y) + (a⁄b)·(x⁄y) = (u·x)⁄(v·y) + (a·x)⁄(b·y) = (u·x·b·y + a·x·v·y)⁄(v·y·b·y) = (u·x·b + a·x·v)⁄(v·b·y),
the distributive laws hold, and therefore Q is a field.
Proof of Proposition 11.3:
Because the real numbers are closed with respect to both addition and multiplication, it is clear that both (a + c, b + d) and (a·c − b·d, a·d + b·c) are defined for all real numbers a, b, c, and d. Thus, ℂ is closed with respect to both addition and multiplication. Furthermore, since
(c, d) + (a, b) = (c + a, d + b) = (a + c, b + d) = (a, b) + (c, d)
and
(c, d)·(a, b) = (c·a − d·b, c·b + d·a) = (a·c − b·d, a·d + b·c) = (a, b)·(c, d),
we see that both addition and multiplication are commutative. The element (0, 0) acts as the zero element, since
(0, 0) + (a, b) = (a, b).
The additive inverse of (a, b) is (−a, −b), since
(a, b) + (−a, −b) = (0, 0).
Note that the order on the last two sums is irrelevant, since addition has already been shown to be commutative. To show that addition is associative, we note that
(a, b) + [ (c, d) + (e, f) ] = (a, b) + (c + e, d + f) = (a + c + e, b + d + f) = (a + c, b + d) + (e, f) = [ (a, b) + (c, d) ] + (e, f).
To show that multiplication is associative is a little more complicated. We have
(a, b)·[ (c, d)·(e, f) ] = (a, b)·(c·e − d·f, c·f + d·e) = (a·c·e − a·d·f − b·c·f − b·d·e, a·c·f + a·d·e + b·c·e − b·d·f),
and
[ (a, b)·(c, d) ]·(e, f) = (a·c − b·d, a·d + b·c)·(e, f) = (a·c·e − b·d·e − a·d·f − b·c·f, a·c·f − b·d·f + a·d·e + b·c·e).
By comparing these two, we see that they are equal, so multiplication is associative. We need to test the distributive laws next. The left distributive law we can get by expanding:
(a, b)·[ (c, d) + (e, f) ] = (a, b)·(c + e, d + f) = (a·c + a·e − b·d − b·f, a·d + a·f + b·c + b·e) = (a·c − b·d, a·d + b·c) + (a·e − b·f, a·f + b·e) = (a, b)·(c, d) + (a, b)·(e, f).
The right distributive law follows from commutativity:
[ (a, b) + (c, d) ]·(e, f) = (e, f)·[ (a, b) + (c, d) ] = (e, f)·(a, b) + (e, f)·(c, d) = (a, b)·(e, f) + (c, d)·(e, f).
We have now shown that the set ℂ forms a commutative ring. To show that this ring has a multiplicative identity, we consider the element (1, 0). Since the ring is commutative, we only need to check
(1, 0)·(a, b) = (1·a − 0·b, 1·b + 0·a) = (a, b).
Finally, we need to show that every nonzero element has an inverse. If (a, b) is nonzero, then a² + b² will be a positive number. Hence
(a, b)·( a⁄(a² + b²), −b⁄(a² + b²) ) = ( (a² + b²)⁄(a² + b²), (−a·b + a·b)⁄(a² + b²) ) = (1, 0),
so
(a, b)⁻¹ = ( a⁄(a² + b²), −b⁄(a² + b²) ),
since multiplication is commutative. Therefore, the set ℂ forms a field.
The second part of this proposition is to show that ℂ contains a copy of the real numbers as a subfield. Consider the mapping ƒ, which maps real numbers to ℂ, given by
ƒ(x) = (x, 0).
To check that ƒ is a homomorphism, we check that
ƒ(x) + ƒ(y) = (x, 0) + (y, 0) = (x + y, 0) = ƒ(x + y)
and
ƒ(x)·ƒ(y) = (x, 0)·(y, 0) = (x·y − 0, 0 + 0) = (x·y, 0) = ƒ(x·y).
Thus, ƒ is a homomorphism from the reals to ℂ. It is clear that ƒ is one-to-one, since (x, 0) = (y, 0) if, and only if, x = y. Thus the image of ƒ,
{ (x, 0) | x ∈ ℝ },
is isomorphic to the real numbers. Hence we have found a subring of ℂ isomorphic to ℝ.
Proof of Lemma 11.4:
If (a, b) solves the equation x² = (−1, 0), we have that
(a, b)² = (a² − b², 2 a b) = (−1, 0).
Thus, a and b must satisfy the two equations
a² − b² = −1, and 2 a b = 0.
The second equation implies that either a or b must be 0. But if b = 0, then the first equation becomes a² = −1, which has no real solutions. Thus, a = 0, and −b² = −1. There are exactly two real solutions for b, namely ±1. Thus,
(0, 1) and (0, −1)
both solve the equations for a and b, and so
(0, 1)² = (0, −1)² = (−1, 0).
Proof of Lemma 11.5:
We first note that if ƒ(x) is an automorphism of a ring R, then ƒ⁻¹(x) is well defined, since ƒ(x) is both one-to-one and onto. We see that
ƒ(ƒ⁻¹(x) + ƒ⁻¹(y)) = ƒ(ƒ⁻¹(x)) + ƒ(ƒ⁻¹(y)) = x + y,
so ƒ⁻¹(x + y) = ƒ⁻¹(x) + ƒ⁻¹(y). Also,
ƒ(ƒ⁻¹(x)·ƒ⁻¹(y)) = ƒ(ƒ⁻¹(x))·ƒ(ƒ⁻¹(y)) = x·y,
so ƒ⁻¹(x·y) = ƒ⁻¹(x)·ƒ⁻¹(y). Thus, ƒ⁻¹ is a ring homomorphism. Since ƒ was both one-to-one and onto, ƒ⁻¹ is both one-to-one and onto. Therefore, ƒ⁻¹ is a ring automorphism. If ƒ and ϕ are two ring automorphisms, then
ƒ(ϕ(x + y)) = ƒ(ϕ(x) + ϕ(y)) = ƒ(ϕ(x)) + ƒ(ϕ(y)),
and
ƒ(ϕ(x·y)) = ƒ(ϕ(x)·ϕ(y)) = ƒ(ϕ(x))·ƒ(ϕ(y)).
The composition ƒ(ϕ(x)) is also one-to-one and onto, so this product, which we can denote ƒ·ϕ, is a ring automorphism. Since the set of all ring automorphisms contains the identity and is closed with respect to composition and inverses, this set is a group.
Proof of Proposition 11.4:
We check that
ϕ(a + b i) + ϕ(c + d i) = (a − b i) + (c − d i) = (a + c) − (b + d)i = ϕ((a + c) + (b + d)i) = ϕ((a + b i) + (c + d i)).
ϕ(a + b i)·ϕ(c + d i) = (a − b i)·(c − d i) = (a·c − b·d) − (a·d + b·c)i = ϕ((a·c − b·d) + (a·d + b·c)i) = ϕ((a + b i)·(c + d i)).
Thus, ϕ is a homomorphism. Since a − b i = 0 if, and only if, a and b are both 0, the kernel of ϕ is just {0}, and so ϕ is one-to-one. Also, ϕ is onto, since ϕ(a − b i) = a + b i. Therefore, ϕ is an automorphism.
To show that there are exactly two such automorphisms, suppose that ƒ(x) is an automorphism of ℂ for which ƒ(x) = x for all real numbers x. Then
ƒ(i)² = ƒ(i²) = ƒ(−1) = −1,
so by Lemma 11.4, ƒ(i) = ±i. If ƒ(i) = i, then ƒ(x) = x for all x ∈ ℂ, and if ƒ(i) = −i, then ƒ(x) = ϕ(x) for all x.
Proof of Proposition 11.5:
We have
|x·y|² = (x·y)·(x̄·ȳ) = (x·x̄)·(y·ȳ) = |x|²·|y|².
Taking square roots, |x·y| = |x|·|y|.
Proof of Lemma 11.6:
We note that
z₁·z₂ = r₁( cos θ₁ + i sin θ₁ )·r₂( cos θ₂ + i sin θ₂ ) = r₁·r₂( (cos θ₁·cos θ₂ − sin θ₁·sin θ₂) + i (cos θ₁·sin θ₂ + sin θ₁·cos θ₂) ).
Using the trigonometric sum identities, this simplifies to
z₁·z₂ = r₁·r₂( cos(θ₁ + θ₂) + i sin(θ₁ + θ₂) ).
Proof of Theorem 11.2:
Let us first prove the theorem for positive values of n. For n = 1, the statement is obvious. Let us assume that the statement is true for the previous case. That is,
z^(n−1) = r^(n−1)( cos((n − 1)θ) + i sin((n − 1)θ) ).
We want to prove that the theorem holds for n as well. Using Lemma 11.6, we have
zⁿ = z^(n−1)·z = r^(n−1)( cos((n − 1)θ) + i sin((n − 1)θ) )·[r( cos θ + i sin θ )] = rⁿ( cos((n − 1)θ + θ) + i sin((n − 1)θ + θ) ) = rⁿ( cos(n θ) + i sin(n θ) ).
Thus, the theorem is true for n if we assume it is true for n − 1, and hence by induction it is true whenever n is positive. If z is nonzero, then letting n = 0 gives
r⁰( cos(0·θ) + i sin(0·θ) ) = 1(1 + i·0) = 1 = z⁰,
so the theorem holds for n = 0. Also, if z is nonzero, then r > 0, and so
[r⁻ⁿ( cos(−n θ) + i sin(−n θ) )]·[rⁿ( cos(n θ) + i sin(n θ) )] = r⁰( cos(−n θ + n θ) + i sin(−n θ + n θ) ) = r⁰( cos 0 + i sin 0 ) = 1.
Now, if n < 0, then the theorem holds for −n, and so
z⁻ⁿ·[rⁿ( cos(n θ) + i sin(n θ) )] = 1,
hence rⁿ( cos(n θ) + i sin(n θ) ) = zⁿ even when n < 0.
Proof of Proposition 11.6:
If z₁ = a₁ + b₁ i and z₂ = a₂ + b₂ i, we observe that
ƒ(z₁ + z₂) = e^(a₁+a₂)( cos(b₁ + b₂) + i sin(b₁ + b₂) ).
By Lemma 11.6, this equals
e^(a₁)( cos(b₁) + i sin(b₁) )·e^(a₂)( cos(b₂) + i sin(b₂) ) = ƒ(z₁)·ƒ(z₂).
Thus, ƒ is a group homomorphism from ℂ+ to ℂ*.
Proof of Proposition 11.7:
Let z have the polar form
z = r( cos θ + i sin θ ).
Then log(z) is the set
{ ln(r) + θ i + 2 k π i | k ∈ ℤ }.
Thus, log(z)/n is given by the set
{ ln(r)⁄n + (θ + 2kπ) i⁄n | k ∈ ℤ }.
The exponential function applied to the elements of this set gives
{ e^(ln(r)/n)·( cos((θ + 2kπ)⁄n) + i sin((θ + 2kπ)⁄n) ) | k ∈ ℤ } = { r^(1/n)·( cos((θ + 2kπ)⁄n) + i sin((θ + 2kπ)⁄n) ) | k ∈ ℤ }.
Notice that for two values of k that differ by n, the arguments of the cosine and sine differ by 2π. Hence, we only have to consider the values of k from 0 to n − 1. This gives us the set
{ r^(1/n)·( cos((θ + 2kπ)⁄n) + i sin((θ + 2kπ)⁄n) ) | k = 0, 1, 2, …, n − 1 }.
Furthermore, these n solutions have arguments that differ by less than 2π, so these n solutions are distinct. Finally, we must show that x is an element of z^(1/n) if, and only if, x solves the equation xⁿ = z. For any element of the above set, we have
xⁿ = r^(n/n)·( cos(n(θ + 2kπ)⁄n) + i sin(n(θ + 2kπ)⁄n) ) = r( cos(θ + 2kπ) + i sin(θ + 2kπ) ) = r( cos θ + i sin θ ) = z.
Likewise, if xⁿ = z, we can raise both sides to the (1/n)th power to get that the two sets (xⁿ)^(1/n) and z^(1/n) are equal. Since the element x is certainly in the first set, it must also be in the set z^(1/n) that we have just computed.
Proof of Lemma 11.7:
To prove the first statement, note that since x > y, we have that x − y ∈ P. But then
(x + z) − (y + z) ∈ P,
and so x + z > y + z.
For the second statement, we have that x > y and z > 0, and so (x − y) ∈ P and z ∈ P. Since P is closed under multiplication, we have that
(x − y)·z = x·z − y·z ∈ P,
and so x·z > y·z.
Finally, if x > y and y > z, then both x − y ∈ P and y − z ∈ P. Since P is closed under addition, we have that
(x − y) + (y − z) = x − z ∈ P,
and so x > z.
Proof of Proposition 11.8:
Since x is nonzero, by the law of trichotomy either x > 0 or −x > 0. If x > 0, then
x² = x·x > 0.
On the other hand, if −x > 0, then
x² = (−x)·(−x) > 0.
Thus, in either case x² is in P.
Proof of Corollary 11.2:
Since 1² = 1, we have from Proposition 11.8 that 1 > 0. Proceeding by induction, let us assume that (n − 1)·1 > 0, and show that n·1 > 0. But this is easy, since
n·1 = (n − 1)·1 + 1·1 = (n − 1)·1 + 1 > 0.
Thus, we have that n·1 > 0 for every positive integer n. This immediately implies that the characteristic is zero, for if R had a positive characteristic p, then p·1 = 0, and we would have 0 > 0, a contradiction.
Proof of Proposition 11.9:
We will begin by showing that the ordering is uniquely determined. Since for any p in P, we have
(1⁄p)·(p⁄1) = (p⁄p) = (1⁄1) = 1 ∈ P,
(1⁄p) must be considered to be positive in the new ordering. But then (n⁄p) must be positive whenever n and p are in P. Thus P ′ contains at least those elements of the form (n⁄p), where n and p are in P. Note that every nonzero element in the field of quotients Q must be of one of the four forms(n⁄p), (−n⁄p), (n⁄−p), (−n⁄−p),
where n and p are in P. But the first and the last expressions are equivalent, and the middle two are also equivalent. Thus, for every nonzero element of Q, either that element or its negative is of the form (n⁄p), with n and p in P. Thus, P′ cannot contain any elements besides those of the form (n⁄p), and hence P′ is uniquely determined.

Now, suppose we consider the set P′ of elements that can be expressed in the form (n⁄p), where n and p are in P. Does this form an ordering on Q? We have already seen that P′ satisfies the law of trichotomy. All we need to show is that P′ is closed under addition and multiplication. But this is clear from the formulas
(x⁄y) + (u⁄v) = (x·v+u·y⁄y·v)
and

(x⁄y)·(u⁄v) = (x·u⁄y·v).
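Both formulas can be confirmed with Python's exact Fraction type; when x, y, u, and v are all positive, the resulting numerators and denominators are positive as well, which is the closure being claimed. (The sample values below are an assumption for illustration.)

```python
from fractions import Fraction

x, y, u, v = 3, 5, 2, 7   # sample elements of P (positive integers)
# sum and product computed from the displayed formulas
s = Fraction(x * v + u * y, y * v)
p = Fraction(x * u, y * v)
assert s == Fraction(x, y) + Fraction(u, v)
assert p == Fraction(x, y) * Fraction(u, v)
# numerators and denominators stay positive, so both results lie in P'
assert x * v + u * y > 0 and x * u > 0 and y * v > 0
```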
Thus, P′ forms an ordering on Q, and is an extension of the ordering P.

Sage Interactive Problems
§11.1 #24)
In the field of "complex numbers modulo 3":
Factor the polynomials x³ + 1, x³ + 2, x³ + i, and x³ + 2 i. What do you notice about the factorizations? Knowing how real polynomials factor, explain what is happening.
§11.1 #25)
Explain why the ring "complex numbers modulo 5"
does not form a field. Can you determine a pattern as to which integers n make "complex numbers modulo n" a field?
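As a starting point for this problem (not a full solution), one can search for zero divisors directly; a finite commutative ring with identity and no zero divisors is a field. A plain-Python sketch, where the pair (a, b) stands for a + b i modulo n:

```python
def is_field_zn_i(n):
    """Check whether 'complex numbers mod n' has no zero divisors
    (for a finite commutative ring with 1, that makes it a field)."""
    elems = [(a, b) for a in range(n) for b in range(n) if (a, b) != (0, 0)]
    for a, b in elems:
        for c, d in elems:
            # (a + bi)(c + di) = (ac - bd) + (ad + bc)i  (mod n)
            if (a * c - b * d) % n == 0 and (a * d + b * c) % n == 0:
                return False  # found a pair of zero divisors
    return True

# e.g. modulo 3 the ring is a field, but modulo 5 it is not,
# since (2 + i)(2 - i) = 5 = 0 mod 5.
assert is_field_zn_i(3)
assert not is_field_zn_i(5)
```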
§11.2 #18)
Have Sage simplify the following rational function in Z2(x):

(x⁴ + x³ + x + 1)⁄(x³ + x² + x + 1).
§11.2 #19)
Try squaring different elements of Z2(x). What do you observe? Any explanations?
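As a hint for experimenting outside Sage, representing polynomials over Z2 as coefficient lists makes the pattern easy to see. This sketch is a plain-Python stand-in for the Sage computation:

```python
# Polynomials over Z2 as coefficient lists (index = power of x).
def poly_mul_mod2(a, b):
    """Multiply two Z2[x] polynomials given as coefficient lists."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % 2
    return out

# Squaring x + 1 over Z2: (x + 1)^2 = x^2 + 1, since 2x = 0.
assert poly_mul_mod2([1, 1], [1, 1]) == [1, 0, 1]
# More generally a(x)^2 = a(x^2): the coefficients just spread out.
a = [1, 1, 0, 1]                    # 1 + x + x^3
assert poly_mul_mod2(a, a) == [1, 0, 1, 0, 0, 0, 1]
```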
§11.2 #20)
Have Sage compute the following operation in the rational function field K(x).
( (1 + i)x + 2 )⁄( x² + 2i x + 2 + i ) + ( 2x + 1 + i )⁄( x² + (2 + i)x + 2 ).
§11.2 #21)
It was mentioned that the definition of the quotient field does not depend on whether elements in the integral domain have unique factorization. An example of such a domain is ℤ[√−5], which we can enter in Sage as follows:
Show that the two fractions

(3x + 3a)⁄((1 + a)x)   and   ((1 − a)x + 5 + a)⁄(2x)

are in fact equal, even though neither can be simplified.
§11.3 #19)
Find the twelfth roots of unity, and arrange them in such a way that the circle graph puts the elements in the correct place in the complex plane, as was done for the eighth roots of unity.
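The roots themselves can be generated numerically before arranging the circle graph; here is a short sketch with Python's cmath (the circle-graph arrangement itself is left to Sage):

```python
import cmath

n = 12
# the twelfth roots of unity: e^(2 pi i k / 12) for k = 0, ..., 11
roots = [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]
assert all(abs(z ** n - 1) < 1e-9 for z in roots)
# sorting by argument lists the roots counterclockwise around the unit circle
roots.sort(key=cmath.phase)
```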
§11.3 #20)
Use Sage to plot the real part of log(x + i y), the companion to Figure 11.3. Would this surface be multi-valued, as Figure 11.3 was?
§11.4 #19)
Follow the example of ℤ[∛2] to define the integral domain ℤ[√5] in Sage. Then define F to be a nontrivial ring automorphism for this domain.
§11.4 #20)
Use Sage to show that the set of numbers of the form

x + y cos(π/7) + z cos(2π/7)

is closed under multiplication, using TrigReduce. Assuming that this forms a field, find a nontrivial ring automorphism on this field.