
Section 8.4 Efficient Decoding


We are now at the stage where we are able to generate linear codes that detect and correct errors fairly easily, but it is still a time-consuming process to decode a received $n$-tuple and determine the closest codeword, because the received $n$-tuple must be compared to each possible codeword to determine the proper decoding. This can be a serious impediment if the code is very large.

Example 8.35

Given the binary matrix

\begin{equation*} H = \begin{pmatrix} 1 & 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 & 1 \end{pmatrix} \end{equation*}

and the 5-tuples ${\mathbf x} = (11011)^{\rm t}$ and ${\mathbf y} = (01011)^{\rm t}$, we can compute

\begin{equation*} H{\mathbf x} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \qquad \text{and} \qquad H{\mathbf y} = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}. \end{equation*}

Hence, ${\mathbf x}$ is a codeword and ${\mathbf y}$ is not, since ${\mathbf x}$ is in the null space and ${\mathbf y}$ is not. Notice that $H{\mathbf y}$ is identical to the first column of $H$. In fact, this is where the error occurred. If we flip the first bit in ${\mathbf y}$ from 0 to 1, then we obtain ${\mathbf x}$.
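The computation in this example can be checked directly. Below is a minimal sketch in plain Python (no external libraries, not from the text itself) that computes $H{\mathbf x}$ and $H{\mathbf y}$ over ${\mathbb Z}_2$; the matrix and vectors are those of the example.

```python
# The parity-check matrix H and the received 5-tuples from Example 8.35.
H = [[1, 1, 1, 0, 0],
     [0, 1, 0, 1, 0],
     [1, 0, 0, 0, 1]]

def syndrome(H, v):
    """Compute Hv over Z_2: each entry is a dot product reduced mod 2."""
    return [sum(h * a for h, a in zip(row, v)) % 2 for row in H]

x = [1, 1, 0, 1, 1]
y = [0, 1, 0, 1, 1]

print(syndrome(H, x))  # [0, 0, 0] -- x is in the null space, so a codeword
print(syndrome(H, y))  # [1, 0, 1] -- equals the first column of H
```

The nonzero syndrome of ${\mathbf y}$ matching the first column of $H$ is exactly the observation the example makes about where the error occurred.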

If $H$ is an $m \times n$ matrix and ${\mathbf x} \in {\mathbb Z}_2^n$, then we say that the syndrome of ${\mathbf x}$ is $H{\mathbf x}$. The following proposition allows the quick detection and correction of errors.

Proposition 8.36

Let the $m \times n$ binary matrix $H$ determine a linear code and let ${\mathbf x}$ be the received $n$-tuple. Write ${\mathbf x}$ as ${\mathbf x} = {\mathbf c} + {\mathbf e}$, where ${\mathbf c}$ is the transmitted codeword and ${\mathbf e}$ is the transmission error. Then the syndrome $H{\mathbf x}$ of the received codeword ${\mathbf x}$ is also the syndrome of the error ${\mathbf e}$.

Proof

The proof follows from the fact that

\begin{equation*} H{\mathbf x} = H({\mathbf c} + {\mathbf e}) = H{\mathbf c} + H{\mathbf e} = {\mathbf 0} + H{\mathbf e} = H{\mathbf e}. \end{equation*}

This proposition tells us that the syndrome of a received word depends solely on the error and not on the transmitted codeword. The proof of the following theorem follows immediately from Proposition 8.36 and from the fact that if ${\mathbf e}$ has a single 1 in its $i$th coordinate, then $H{\mathbf e}$ is the $i$th column of the matrix $H$.

Theorem 8.37

Let $H \in {\mathbb M}_{m \times n}({\mathbb Z}_2)$ and suppose that the linear code corresponding to $H$ is single error-correcting. Let ${\mathbf r}$ be a received $n$-tuple that was transmitted with at most one error. If the syndrome of ${\mathbf r}$ is ${\mathbf 0}$, then no error has occurred; otherwise, if the syndrome of ${\mathbf r}$ is equal to some column of $H$, say the $i$th column, then the error has occurred in the $i$th bit.

Example 8.38

Consider the matrix

\begin{equation*} H = \begin{pmatrix} 1 & 0 & 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 & 1 & 0 \\ 1 & 1 & 1 & 0 & 0 & 1 \end{pmatrix} \end{equation*}

and suppose that the 6-tuples ${\mathbf x} = (111110)^{\rm t}$, ${\mathbf y} = (111111)^{\rm t}$, and ${\mathbf z} = (010111)^{\rm t}$ have been received. Then

\begin{equation*} H{\mathbf x} = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}, \qquad H{\mathbf y} = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}, \qquad H{\mathbf z} = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}. \end{equation*}

Hence, ${\mathbf x}$ has an error in the third bit and ${\mathbf z}$ has an error in the fourth bit. The transmitted codewords for ${\mathbf x}$ and ${\mathbf z}$ must have been $(110110)$ and $(010011)$, respectively. The syndrome of ${\mathbf y}$ does not occur in any of the columns of the matrix $H$, so multiple errors must have occurred to produce ${\mathbf y}$.
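The decoding rule of Theorem 8.37 can be expressed compactly in code. The sketch below (an illustration, not from the text) computes the syndrome of a received word, compares it against the columns of $H$, and flips the offending bit; it reproduces the three cases of Example 8.38.

```python
def correct_single_error(H, r):
    """Apply the rule of Theorem 8.37: zero syndrome means no error;
    a syndrome equal to column i of H means an error in bit i;
    any other syndrome indicates more than one error (return None)."""
    s = [sum(h * a for h, a in zip(row, r)) % 2 for row in H]
    if all(bit == 0 for bit in s):
        return r  # no error occurred
    for i in range(len(r)):
        if [row[i] for row in H] == s:       # syndrome matches column i
            return r[:i] + [1 - r[i]] + r[i + 1:]  # flip bit i
    return None  # syndrome matches no column: multiple errors

# The matrix and received 6-tuples of Example 8.38.
H = [[1, 0, 1, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 1, 1, 0, 0, 1]]

print(correct_single_error(H, [1, 1, 1, 1, 1, 0]))  # [1, 1, 0, 1, 1, 0]
print(correct_single_error(H, [0, 1, 0, 1, 1, 1]))  # [0, 1, 0, 0, 1, 1]
print(correct_single_error(H, [1, 1, 1, 1, 1, 1]))  # None (multiple errors)
```

The first two calls recover the codewords $(110110)$ and $(010011)$ named in the example; the third detects that ${\mathbf y}$ suffered more than one error.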

Subsection Coset Decoding


We can use group theory to obtain another way of decoding messages. A linear code $C$ is a subgroup of ${\mathbb Z}_2^n$. Coset decoding (or standard decoding) uses the cosets of $C$ in ${\mathbb Z}_2^n$ to implement maximum-likelihood decoding. Suppose that $C$ is an $(n,m)$-linear code. A coset of $C$ in ${\mathbb Z}_2^n$ is written in the form ${\mathbf x} + C$, where ${\mathbf x} \in {\mathbb Z}_2^n$. By Lagrange's Theorem (Theorem 6.10), there are $2^{n-m}$ distinct cosets of $C$ in ${\mathbb Z}_2^n$.

Example 8.39

Let $C$ be the $(5,2)$-linear code given by the parity-check matrix

\begin{equation*} H = \begin{pmatrix} 0 & 1 & 1 & 0 & 0 \\ 1 & 0 & 0 & 1 & 0 \\ 1 & 1 & 0 & 0 & 1 \end{pmatrix}. \end{equation*}

The code consists of the codewords

\begin{equation*} (00000) \quad (01101) \quad (10011) \quad (11110). \end{equation*}

There are $2^{5-2} = 2^3$ cosets of $C$ in ${\mathbb Z}_2^5$, each with order $2^2 = 4$. These cosets are listed in Table 8.40.

Coset Representative    Coset
$C$                     (00000) (01101) (10011) (11110)
$(10000) + C$           (10000) (11101) (00011) (01110)
$(01000) + C$           (01000) (00101) (11011) (10110)
$(00100) + C$           (00100) (01001) (10111) (11010)
$(00010) + C$           (00010) (01111) (10001) (11100)
$(00001) + C$           (00001) (01100) (10010) (11111)
$(10100) + C$           (00111) (01010) (10100) (11001)
$(00110) + C$           (00110) (01011) (10101) (11000)

Table 8.40 Cosets of $C$
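Table 8.40 can be generated mechanically. The sketch below (an illustration using plain Python; addition in ${\mathbb Z}_2^5$ is bitwise XOR) enumerates all cosets of the code $C$ of Example 8.39 and confirms the counts of Lagrange's Theorem.

```python
from itertools import product

# The four codewords of C from Example 8.39.
codewords = [(0, 0, 0, 0, 0), (0, 1, 1, 0, 1), (1, 0, 0, 1, 1), (1, 1, 1, 1, 0)]

def coset(x):
    """The coset x + C as a frozenset; addition over Z_2 is componentwise XOR."""
    return frozenset(tuple(a ^ b for a, b in zip(x, c)) for c in codewords)

# Collect the distinct cosets over all 2^5 = 32 elements of Z_2^5.
cosets = {coset(x) for x in product([0, 1], repeat=5)}

print(len(cosets))                          # 8 distinct cosets
print(all(len(c) == 4 for c in cosets))     # True: each coset has order 4
```

This matches the table: $2^3 = 8$ cosets, each containing $2^2 = 4$ elements, partitioning the 32 elements of ${\mathbb Z}_2^5$.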

Our task is to find out how knowing the cosets might help us to decode a message. Suppose that ${\mathbf x}$ was the original codeword sent and that ${\mathbf r}$ is the $n$-tuple received. If ${\mathbf e}$ is the transmission error, then ${\mathbf r} = {\mathbf e} + {\mathbf x}$ or, equivalently, ${\mathbf x} = {\mathbf e} + {\mathbf r}$. However, this is exactly the statement that ${\mathbf r}$ is an element in the coset ${\mathbf e} + C$. In maximum-likelihood decoding we expect the error ${\mathbf e}$ to be as small as possible; that is, ${\mathbf e}$ will have the least weight. An $n$-tuple of least weight in a coset is called a coset leader. Once we have determined a coset leader for each coset, the decoding process becomes a task of calculating ${\mathbf r} + {\mathbf e}$ to obtain ${\mathbf x}$.

Example 8.41

In Table 8.40, notice that we have chosen a representative of the least possible weight for each coset. These representatives are coset leaders. Now suppose that ${\mathbf r} = (01111)$ is the received word. To decode ${\mathbf r}$, we find that it is in the coset $(00010) + C$; hence, the originally transmitted codeword must have been $(01101) = (01111) + (00010)$.

A potential problem with this method of decoding is that we might have to examine every coset for the received codeword. The following proposition gives a method of implementing coset decoding. It states that we can associate a syndrome with each coset; hence, we can make a table that designates a coset leader corresponding to each syndrome. Such a list is called a decoding table.

Syndrome    Coset Leader
(000)       (00000)
(001)       (00001)
(010)       (00010)
(011)       (10000)
(100)       (00100)
(101)       (01000)
(110)       (00110)
(111)       (10100)

Table 8.42 Syndromes for each coset
Proposition 8.43

Let $C$ be an $(n,k)$-linear code given by the matrix $H$ and suppose that ${\mathbf x}$ and ${\mathbf y}$ are in ${\mathbb Z}_2^n$. Then ${\mathbf x}$ and ${\mathbf y}$ are in the same coset of $C$ if and only if $H{\mathbf x} = H{\mathbf y}$. That is, two $n$-tuples are in the same coset if and only if their syndromes are the same.

Proof

Two $n$-tuples ${\mathbf x}$ and ${\mathbf y}$ are in the same coset of $C$ exactly when ${\mathbf x} - {\mathbf y} \in C$; however, this is equivalent to $H({\mathbf x} - {\mathbf y}) = 0$ or $H{\mathbf x} = H{\mathbf y}$.

Example 8.44

Table 8.42 is a decoding table for the code $C$ given in Example 8.39. If ${\mathbf x} = (01111)$ is received, then its syndrome can be computed to be

\begin{equation*} H {\mathbf x} = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}. \end{equation*}

Examining the decoding table, we determine that the coset leader is $(00010)$. It is now easy to decode the received codeword.
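The whole procedure of Example 8.44 can be sketched in code: build a decoding table by recording a minimum-weight word for each syndrome, then decode by adding the coset leader to the received word. This brute-force construction over all $2^5$ words (an illustration, feasible only for small codes; ties between equal-weight leaders may be broken differently than in Table 8.42) recovers the same decoding.

```python
from itertools import product

# The parity-check matrix H from Example 8.39.
H = [[0, 1, 1, 0, 0],
     [1, 0, 0, 1, 0],
     [1, 1, 0, 0, 1]]

def syndrome(v):
    return tuple(sum(h * a for h, a in zip(row, v)) % 2 for row in H)

# Decoding table: for each syndrome, keep a lowest-weight word producing it.
table = {}
for v in product([0, 1], repeat=5):
    s = syndrome(v)
    if s not in table or sum(v) < sum(table[s]):
        table[s] = v

# Decode the received word r = (01111) as in Example 8.44.
r = (0, 1, 1, 1, 1)
leader = table[syndrome(r)]
decoded = tuple(a ^ b for a, b in zip(r, leader))
print(leader)   # (0, 0, 0, 1, 0) -- the coset leader
print(decoded)  # (0, 1, 1, 0, 1) -- the transmitted codeword
```

The syndrome $(010)$ looks up the coset leader $(00010)$, and adding it to $(01111)$ yields the codeword $(01101)$, in agreement with Example 8.41.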

Given an $(n,k)$-block code, the question arises of whether or not coset decoding is a manageable scheme. A decoding table requires a list of cosets and syndromes, one for each of the $2^{n-k}$ cosets of $C$. Suppose that we have a $(32, 24)$-block code. We have a huge number of codewords, $2^{24}$, yet there are only $2^{32-24} = 2^8 = 256$ cosets.
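The storage comparison is easy to verify numerically; the short check below (an illustration) contrasts the number of codewords with the number of decoding-table entries for the $(32, 24)$ case.

```python
# A (32, 24)-block code: n-bit codewords encoding k-bit messages.
n, k = 32, 24

print(2 ** k)        # 16777216 codewords -- far too many to search directly
print(2 ** (n - k))  # 256 cosets, so only 256 decoding-table entries
```

So coset decoding replaces a search over more than sixteen million codewords with a lookup in a 256-entry table.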