# Laplace Expansion

#### 1.1 Example

In this permutation expansion

$$
\begin{vmatrix}
t_{1,1} & t_{1,2} & t_{1,3} \\
t_{2,1} & t_{2,2} & t_{2,3} \\
t_{3,1} & t_{3,2} & t_{3,3}
\end{vmatrix}
=
\begin{aligned}[t]
&t_{1,1}t_{2,2}t_{3,3}\begin{vmatrix}1&0&0\\0&1&0\\0&0&1\end{vmatrix}
+ t_{1,1}t_{2,3}t_{3,2}\begin{vmatrix}1&0&0\\0&0&1\\0&1&0\end{vmatrix}
+ t_{1,2}t_{2,1}t_{3,3}\begin{vmatrix}0&1&0\\1&0&0\\0&0&1\end{vmatrix} \\
&+ t_{1,2}t_{2,3}t_{3,1}\begin{vmatrix}0&1&0\\0&0&1\\1&0&0\end{vmatrix}
+ t_{1,3}t_{2,1}t_{3,2}\begin{vmatrix}0&0&1\\1&0&0\\0&1&0\end{vmatrix}
+ t_{1,3}t_{2,2}t_{3,1}\begin{vmatrix}0&0&1\\0&1&0\\1&0&0\end{vmatrix}
\end{aligned}
$$

we can, for instance, factor out the entries from the first row

$$
\begin{aligned}
={}& t_{1,1}\Bigl(t_{2,2}t_{3,3}\begin{vmatrix}1&0&0\\0&1&0\\0&0&1\end{vmatrix}
+ t_{2,3}t_{3,2}\begin{vmatrix}1&0&0\\0&0&1\\0&1&0\end{vmatrix}\Bigr) \\
&+ t_{1,2}\Bigl(t_{2,1}t_{3,3}\begin{vmatrix}0&1&0\\1&0&0\\0&0&1\end{vmatrix}
+ t_{2,3}t_{3,1}\begin{vmatrix}0&1&0\\0&0&1\\1&0&0\end{vmatrix}\Bigr) \\
&+ t_{1,3}\Bigl(t_{2,1}t_{3,2}\begin{vmatrix}0&0&1\\1&0&0\\0&1&0\end{vmatrix}
+ t_{2,2}t_{3,1}\begin{vmatrix}0&0&1\\0&1&0\\1&0&0\end{vmatrix}\Bigr)
\end{aligned}
$$

and swap rows in the permutation matrices to get this.

$$
\begin{aligned}
={}& t_{1,1}\Bigl(t_{2,2}t_{3,3}\begin{vmatrix}1&0&0\\0&1&0\\0&0&1\end{vmatrix}
+ t_{2,3}t_{3,2}\begin{vmatrix}1&0&0\\0&0&1\\0&1&0\end{vmatrix}\Bigr) \\
&- t_{1,2}\Bigl(t_{2,1}t_{3,3}\begin{vmatrix}1&0&0\\0&1&0\\0&0&1\end{vmatrix}
+ t_{2,3}t_{3,1}\begin{vmatrix}1&0&0\\0&0&1\\0&1&0\end{vmatrix}\Bigr) \\
&+ t_{1,3}\Bigl(t_{2,1}t_{3,2}\begin{vmatrix}1&0&0\\0&1&0\\0&0&1\end{vmatrix}
+ t_{2,2}t_{3,1}\begin{vmatrix}1&0&0\\0&0&1\\0&1&0\end{vmatrix}\Bigr)
\end{aligned}
$$

The point of the swapping (one swap to each of the permutation matrices on the second line and two swaps to each on the third line) is that the three lines simplify to three terms.

$$
= t_{1,1}\begin{vmatrix}t_{2,2}&t_{2,3}\\t_{3,2}&t_{3,3}\end{vmatrix}
- t_{1,2}\begin{vmatrix}t_{2,1}&t_{2,3}\\t_{3,1}&t_{3,3}\end{vmatrix}
+ t_{1,3}\begin{vmatrix}t_{2,1}&t_{2,2}\\t_{3,1}&t_{3,2}\end{vmatrix}
$$

The formula given in Theorem 1.5, which generalizes this example, is a recurrence: the determinant is expressed as a combination of determinants. This formula isn't circular because, as here, the determinant is expressed in terms of determinants of matrices of smaller size.
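The recurrence translates directly into code. Below is a minimal Python sketch (the function name `det_laplace` is ours, not from the text) that expands along the first row: each entry is multiplied by the determinant of a smaller matrix, with alternating signs.

```python
def det_laplace(m):
    """Determinant of a square matrix (list of lists) by Laplace
    expansion along the first row."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        # Delete the first row and column j to form the minor.
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        # Cofactor signs alternate +, -, +, ... along the first row.
        total += (-1) ** j * m[0][j] * det_laplace(minor)
    return total
```

For instance, `det_laplace([[1, 2], [3, 4]])` returns `-2`.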

#### 1.2 Definition

For any $n\times n$ matrix $T$, the $(n-1)\times(n-1)$ matrix formed by deleting row $i$ and column $j$ of $T$ is the $i,j$ *minor* of $T$. The $i,j$ *cofactor* $T_{i,j}$ of $T$ is $(-1)^{i+j}$ times the determinant of the $i,j$ minor of $T$.
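The definition translates into two short functions. A sketch, assuming zero-based indexing (so the text's $i,j$ minor is `minor(t, i-1, j-1)`); `det` is any determinant routine the caller supplies:

```python
def minor(t, i, j):
    """The matrix formed by deleting row i and column j of t
    (indices are zero-based here)."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(t) if k != i]

def cofactor(t, i, j, det):
    """(-1)**(i+j) times the determinant of the i,j minor of t."""
    return (-1) ** (i + j) * det(minor(t, i, j))
```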

#### 1.3 Example

The $1,2$ cofactor of the matrix from Example 1.1 is the negative of the second $2\times 2$ determinant.

$$
T_{1,2} = (-1)^{1+2}\cdot\begin{vmatrix}t_{2,1}&t_{2,3}\\t_{3,1}&t_{3,3}\end{vmatrix}
= -\begin{vmatrix}t_{2,1}&t_{2,3}\\t_{3,1}&t_{3,3}\end{vmatrix}
$$

#### 1.4 Example

Where

$$
T=\begin{pmatrix}1&2&3\\4&5&6\\7&8&9\end{pmatrix}
$$

these are the $1,2$ and $2,2$ cofactors.

$$
T_{1,2}=(-1)^{1+2}\begin{vmatrix}4&6\\7&9\end{vmatrix}=6
\qquad
T_{2,2}=(-1)^{2+2}\begin{vmatrix}1&3\\7&9\end{vmatrix}=-12
$$

#### 1.5 Theorem[Laplace Expansion of Determinants]

Where $T$ is an $n\times n$ matrix, the determinant can be found by expanding by cofactors on any row $i$ or column $j$.

$$
|T| = t_{i,1}T_{i,1} + t_{i,2}T_{i,2} + \cdots + t_{i,n}T_{i,n}
    = t_{1,j}T_{1,j} + t_{2,j}T_{2,j} + \cdots + t_{n,j}T_{n,j}
$$

##### PROOF:

Left as an exercise. QED

#### 1.6 Example

We can compute the determinant

$$
|T|=\begin{vmatrix}1&2&3\\4&5&6\\7&8&9\end{vmatrix}
$$

by expanding along the first row, using the cofactors from Example 1.4.

$$
|T| = 1\cdot(+1)\begin{vmatrix}5&6\\8&9\end{vmatrix}
    + 2\cdot(-1)\begin{vmatrix}4&6\\7&9\end{vmatrix}
    + 3\cdot(+1)\begin{vmatrix}4&5\\7&8\end{vmatrix}
    = -3 + 12 - 9 = 0
$$

Alternatively, we can expand down the second column.

$$
|T| = 2\cdot(-1)\begin{vmatrix}4&6\\7&9\end{vmatrix}
    + 5\cdot(+1)\begin{vmatrix}1&3\\7&9\end{vmatrix}
    + 8\cdot(-1)\begin{vmatrix}1&3\\4&6\end{vmatrix}
    = 12 - 60 + 48 = 0
$$
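The theorem's claim that every row and every column yields the same value can be spot-checked numerically. A minimal sketch (the names `expand_row` and `expand_col` are ours):

```python
def det(m):
    """Reference determinant, by expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

def expand_row(m, i):
    """Laplace expansion along row i (zero-based)."""
    return sum((-1) ** (i + j) * m[i][j]
               * det([r[:j] + r[j + 1:] for k, r in enumerate(m) if k != i])
               for j in range(len(m)))

def expand_col(m, j):
    """Laplace expansion down column j (zero-based)."""
    return sum((-1) ** (i + j) * m[i][j]
               * det([r[:j] + r[j + 1:] for k, r in enumerate(m) if k != i])
               for i in range(len(m)))
```

For any square matrix, every choice of row or column gives the same number.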

#### 1.7 Example

A row or column with many zeroes suggests a Laplace expansion along that row or column, since each zero entry contributes nothing to the sum.

We finish by applying this result to derive a new formula for the inverse of a matrix. By Theorem 1.5, the determinant of an $n\times n$ matrix $T$ can be calculated by taking a linear combination of the entries from a row and their associated cofactors.

$$
t_{i,1}T_{i,1} + t_{i,2}T_{i,2} + \cdots + t_{i,n}T_{i,n} = |T|
$$

Recall that a matrix with two identical rows has a zero determinant. Thus, for any matrix $T$, weighing the cofactors by entries from the wrong row (row $k$ with $k\neq i$) gives zero

$$
t_{k,1}T_{i,1} + t_{k,2}T_{i,2} + \cdots + t_{k,n}T_{i,n} = 0
$$

because it represents the expansion along row $i$ of a matrix whose row $i$ equals its row $k$. This equation summarizes the two preceding displayed equations.

$$
\begin{pmatrix}
t_{1,1} & t_{1,2} & \cdots & t_{1,n}\\
t_{2,1} & t_{2,2} & \cdots & t_{2,n}\\
 & \vdots & \\
t_{n,1} & t_{n,2} & \cdots & t_{n,n}
\end{pmatrix}
\begin{pmatrix}
T_{1,1} & T_{2,1} & \cdots & T_{n,1}\\
T_{1,2} & T_{2,2} & \cdots & T_{n,2}\\
 & \vdots & \\
T_{1,n} & T_{2,n} & \cdots & T_{n,n}
\end{pmatrix}
=
\begin{pmatrix}
|T| & 0 & \cdots & 0\\
0 & |T| & \cdots & 0\\
 & \vdots & \\
0 & 0 & \cdots & |T|
\end{pmatrix}
$$

Note that the order of the subscripts in the matrix of cofactors is opposite to the order of subscripts in the other matrix; e.g., along the first row of the matrix of cofactors the subscripts are $1,1$ then $2,1$, etc.

#### 1.8 Definition

The *matrix adjoint* to the square matrix $T$ is

$$
\operatorname{adj}(T)=
\begin{pmatrix}
T_{1,1} & T_{2,1} & \cdots & T_{n,1}\\
T_{1,2} & T_{2,2} & \cdots & T_{n,2}\\
 & \vdots & \\
T_{1,n} & T_{2,n} & \cdots & T_{n,n}
\end{pmatrix}
$$

where $T_{j,i}$ is the $j,i$ cofactor.
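The transposed placement of subscripts can be made explicit in code. A sketch (function names are ours): entry $(i,j)$ of the result is the $j,i$ cofactor.

```python
def det(m):
    """Determinant by first-row Laplace expansion."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

def adjugate(t):
    """The matrix adjoint to t: entry (i, j) is the j,i cofactor
    (zero-based indices)."""
    n = len(t)
    def cof(i, j):
        m = [r[:j] + r[j + 1:] for k, r in enumerate(t) if k != i]
        return (-1) ** (i + j) * det(m)
    # Note the swapped indices: row i of adj(t) collects cof(0,i), cof(1,i), ...
    return [[cof(j, i) for j in range(n)] for i in range(n)]
```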

#### 1.9 Theorem

Where $T$ is a square matrix,

$$
T\cdot\operatorname{adj}(T)=\operatorname{adj}(T)\cdot T=|T|\cdot I.
$$

##### PROOF:

This restates the two equations displayed before Definition 1.8: expanding along a row gives $|T|$, while weighing the cofactors by entries from a different row gives $0$. QED

#### 1.10 Example

If

$$
T=\begin{pmatrix}1&0&4\\2&1&-1\\1&0&1\end{pmatrix}
\qquad
\operatorname{adj}(T)=\begin{pmatrix}T_{1,1}&T_{2,1}&T_{3,1}\\T_{1,2}&T_{2,2}&T_{3,2}\\T_{1,3}&T_{2,3}&T_{3,3}\end{pmatrix}
=\begin{pmatrix}1&0&-4\\-3&-3&9\\-1&0&1\end{pmatrix}
$$

then taking the product gives the diagonal matrix $|T|\cdot I$.

$$
\begin{pmatrix}1&0&4\\2&1&-1\\1&0&1\end{pmatrix}
\begin{pmatrix}1&0&-4\\-3&-3&9\\-1&0&1\end{pmatrix}
=\begin{pmatrix}-3&0&0\\0&-3&0\\0&0&-3\end{pmatrix}
$$

#### 1.11 Corollary

If $|T|\neq 0$ then

$$
T^{-1}=\frac{1}{|T|}\operatorname{adj}(T).
$$
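The corollary can be sketched in code using exact rational arithmetic (`fractions.Fraction`), so the check $T\,T^{-1}=I$ is exact rather than approximate; the function name is ours.

```python
from fractions import Fraction

def det(m):
    """Determinant by first-row Laplace expansion."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

def inverse_by_adjugate(t):
    """T**-1 = adj(T) / |T|, valid only when |T| != 0."""
    n = len(t)
    d = det(t)
    if d == 0:
        raise ValueError("matrix is singular")
    def cof(i, j):
        m = [r[:j] + r[j + 1:] for k, r in enumerate(t) if k != i]
        return (-1) ** (i + j) * det(m)
    # Entry (i, j) of the inverse is the j,i cofactor over |T|.
    return [[Fraction(cof(j, i), d) for j in range(n)] for i in range(n)]
```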

#### 1.12 Example

The inverse of the matrix from Example 1.10 is $\operatorname{adj}(T)$ divided by the determinant.

$$
T^{-1}=\frac{1}{-3}\begin{pmatrix}1&0&-4\\-3&-3&9\\-1&0&1\end{pmatrix}
=\begin{pmatrix}-1/3&0&4/3\\1&1&-3\\1/3&0&-1/3\end{pmatrix}
$$

The formulas from this section are often used for by-hand calculation and are sometimes useful with special types of matrices. However, they are not the best choice for computation with arbitrary matrices because they require more arithmetic than, for instance, the Gauss-Jordan method.
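The arithmetic-cost claim can be made concrete. Cofactor expansion of an $n\times n$ determinant does $n$ multiplications plus $n$ subproblems of size $n-1$, so the multiplication count satisfies the recurrence $M(1)=0$, $M(n)=n\,(1+M(n-1))$, which grows like $n!$; Gaussian reduction needs on the order of $n^3$ multiplications. A sketch comparing the two counts (the Gaussian count here is a rough model, one of several reasonable ways to tally it):

```python
def mults_laplace(n):
    """Multiplications used by cofactor expansion of an n x n determinant:
    n terms, each costing one multiplication plus an (n-1)-size determinant."""
    return 0 if n == 1 else n * (1 + mults_laplace(n - 1))

def mults_gauss(n):
    """Rough multiplication count for Gaussian reduction: at each step,
    k rows below the pivot are updated across k+1 entries."""
    return sum(k * (k + 1) for k in range(1, n))
```

For $n=10$ the cofactor count is already in the millions, while elimination needs only a few hundred multiplications.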