
5. Gauss Jordan Elimination

Gauss Jordan elimination is very similar to Gaussian elimination,

except that one "keeps going". To apply Gauss Jordan elimination,

first apply Gaussian elimination until A is in echelon form. Then pick

the pivot furthest to the right (which is the last pivot created). If there

is a non-zero entry lying above the pivot (after all, by definition of

echelon form there are no non-zero entries below the pivot) then apply

elimination to create a zero in that spot, in the usual way. Repeat this

until there are no more non-zero entries in that column. Once we are

finished with this pivot then move left to the next closest pivot and

keep going. An example will hopefully make this clear. Let us start

with a matrix in echelon form:

1 -1 2 3
0 0 1 -2 .
0 0 0 1

The pivot furthest to the right is the (3, 4) entry. So we take the third
row, multiply it by 2 and by -3, and add the results to the second and
first rows, to get

1 -1 2 0
0 0 1 0 .
0 0 0 1

So much for the pivot furthest to the right. The next pivot (reading

right to left) is at position (2, 3). Multiply the second row by -2 and

add it to the first row, to get

1 -1 0 0
0 0 1 0 .
0 0 0 1

This completes Gauss Jordan elimination.
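The upward pass just described (clear the entries above each pivot, starting from the rightmost pivot and working left) can be sketched in code. This is a minimal Python sketch, not part of the notes; the function name and the use of exact fractions are my own choices.

```python
from fractions import Fraction

def back_eliminate(M):
    """Given a matrix already in echelon form, clear the entries
    above each pivot, working from the rightmost pivot leftwards
    (the Gauss Jordan phase described in the text)."""
    for i in range(len(M) - 1, -1, -1):
        # locate the pivot of row i: its first nonzero entry
        pivot_col = next((j for j, v in enumerate(M[i]) if v != 0), None)
        if pivot_col is None:
            continue  # a zero row has no pivot
        # normalise the pivot to 1 (harmless if it already is 1)
        p = M[i][pivot_col]
        M[i] = [v / p for v in M[i]]
        # subtract a multiple of row i from every row above it
        for k in range(i):
            f = M[k][pivot_col]
            M[k] = [a - f * b for a, b in zip(M[k], M[i])]
    return M

# the echelon-form matrix from the text
M = [[Fraction(x) for x in row] for row in
     [[1, -1, 2, 3], [0, 0, 1, -2], [0, 0, 0, 1]]]
back_eliminate(M)
```

Running this on the example reproduces the matrix obtained above by hand.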

Definition 5.1. Let A be an m × n matrix.

We say that A is in reduced row echelon form if A is in echelon
form and in addition

• every other entry of a column which contains a pivot is zero.

The end product of Gauss Jordan elimination is a matrix in reduced

row echelon form. Note that if one has a matrix in reduced row echelon

form, then it is very easy to solve equations. One can read off the

solutions with almost no work.
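For instance, if the augmented matrix of a square system is in reduced row echelon form with a pivot in every column of the coefficient part, the solution is simply the last column. A tiny illustration (the system here is a hypothetical example, not from the notes):

```python
# augmented matrix (A|b) already in reduced row echelon form;
# the pivots say x1 = 2, x2 = -1, x3 = 3 directly
rref = [[1, 0, 0, 2],
        [0, 1, 0, -1],
        [0, 0, 1, 3]]
solution = [row[-1] for row in rref]  # read off the last column
```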

We can exploit this fact to come up with a very pretty way to compute
the inverse of a matrix. First an easy (but important) lemma:


Lemma 5.2. Let A be a square matrix.

If A is invertible then every equation Ax = b has a unique solution.

Proof. Let B be the inverse of A. Suppose that v is a solution of the

equation Ax = b. Then

b = Av.

Multiply both sides by B, to get

Bb = B(Av)

= (BA)v

= In v = v.

Thus v = Bb. It follows that there is at most one solution to the

equation Ax = b. On the other hand let v = Bb. Then

Av = A(Bb) = (AB)b = In b = b,

so that v = Bb is a solution to the equation.
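A quick numerical check of the lemma, using a small invertible matrix of my own choosing (not from the notes):

```python
# A and its inverse B, found by hand: AB = BA = I2
A = [[2, 1], [1, 1]]
B = [[1, -1], [-1, 2]]

def matvec(M, v):
    """Multiply a matrix by a column vector."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

b = [3, 2]
v = matvec(B, b)          # the candidate solution v = Bb
assert matvec(A, v) == b  # v really solves Ax = b
# uniqueness: any solution w satisfies w = (BA)w = B(Aw) = Bb = v
```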

Suppose that we are given a square n × n invertible matrix A and

we want to compute its inverse matrix B. We can view B as be-

ing composed of n column vectors, say v1, v2, . . . , vn ∈ Rn. Suppose

that we also view the identity matrix as being composed of columns

e1, e2, . . . , en. Then ei is the column vector whose ith entry is 1 and

every other entry is zero. The way that matrix multiplication works is

that

Avi = ei.

Since we are trying to determine vi, we can think of vi as being the

solution to the linear equation

Ax = ei.

How can we solve this equation? Well, create the augmented matrix

(A|ei).

If we apply Gaussian elimination then we get to a matrix U in echelon

form

(U |fi).

Here fi is the result of applying the steps of Gaussian elimination to ei.

Now if we continue we get a matrix, call it R, in reduced row echelon

form and another column vector gi,

(R|gi).

Now let us make three clever observations. The first is that the ma-

trices U and R do not depend on i. That is to say the steps of both


Gaussian elimination and Gauss Jordan elimination only depend on

the coefficient matrix A and not on ei. The second is that the matrix

R must be the identity matrix. Indeed we cannot get a row of zeroes

when we apply Gaussian elimination, since we know that every equa-

tion has a solution. It follows that every row contains a pivot and so

every column contains a pivot (R is a square matrix). Since R is a

square matrix in reduced row echelon form it follows that R = In. But

now if we apply back substitution the equations immediately tell us

that vi = gi.
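In code, each column vi of the inverse can be recovered by solving Ax = ei. A Python sketch (the solver routine and the 2 × 2 example matrix are my own, not from the notes):

```python
from fractions import Fraction

def solve(A, b):
    """Solve Ax = b by Gauss Jordan elimination on the augmented
    matrix (A|b); assumes A is square and invertible."""
    n = len(A)
    M = [[Fraction(x) for x in A[i]] + [Fraction(b[i])] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)  # pivot row
        M[col], M[piv] = M[piv], M[col]                         # swap it up
        M[col] = [v / M[col][col] for v in M[col]]              # pivot -> 1
        for r in range(n):                                      # clear column
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [a - f * p for a, p in zip(M[r], M[col])]
    return [row[n] for row in M]

# hypothetical 2x2 example: this A has inverse [[1, -1], [-1, 2]]
A = [[2, 1], [1, 1]]
n = 2
# column i of the inverse is the solution of Ax = e_i
cols = [solve(A, [1 if j == i else 0 for j in range(n)]) for i in range(n)]
B = [[cols[j][i] for j in range(n)] for i in range(n)]  # columns -> matrix
```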

So in principle we could compute the inverse matrix B, column by

column, by solving n sets of linear equations. But this sounds like a lot

of hard work. The final observation is that for each one of these n sets

of linear equations the coefficient matrix is the same, namely A, and

the steps of Gauss Jordan are the same. So why not put all n sets of

equations into one super augmented matrix? If one puts the n column

vectors e1, e2, . . . , en together side by side, then one will automatically

get the identity matrix In,

(A|In).

Let us see an example. Suppose that

     1  2 -3
A =  2  5 -8 .
    -3 -5  8

First we make the super augmented matrix

 1  2 -3 | 1 0 0
 2  5 -8 | 0 1 0 .
-3 -5  8 | 0 0 1

Now apply Gaussian elimination. Multiply the first row by -2 and
3 and add them to the second and third rows, to get

1 2 -3 |  1 0 0
0 1 -2 | -2 1 0 .
0 1 -1 |  3 0 1

Now multiply the second row by -1 and add it to the third row to get

1 2 -3 |  1  0 0
0 1 -2 | -2  1 0 .
0 0  1 |  5 -1 1

This completes the Gaussian elimination. Now we continue with Gauss
Jordan elimination. Multiply the third row by 2 and 3 and add it to
the second and first rows.
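The whole computation on the super augmented matrix can be sketched in code. This Python sketch uses the matrix A from the example; the elimination routine itself is my own, not the authors'.

```python
from fractions import Fraction

A = [[1, 2, -3], [2, 5, -8], [-3, -5, 8]]   # the matrix from the example
n = len(A)
# build the super augmented matrix (A | I3)
M = [[Fraction(x) for x in A[i]]
     + [Fraction(1 if j == i else 0) for j in range(n)]
     for i in range(n)]

# Gauss Jordan: for each column, bring up a nonzero pivot, scale it
# to 1, then clear every other entry in that column.
for col in range(n):
    piv = next(r for r in range(col, n) if M[r][col] != 0)
    M[col], M[piv] = M[piv], M[col]
    M[col] = [v / M[col][col] for v in M[col]]
    for r in range(n):
        if r != col:
            f = M[r][col]
            M[r] = [a - f * p for a, p in zip(M[r], M[col])]

inverse = [row[n:] for row in M]   # right half; left half is now I3
```

When the left half has been reduced to the identity, the right half is the inverse of A, column by column, exactly as the three observations above predict.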

