In this permutation expansion we can, for instance, factor out the entries from the first row and then swap rows in the permutation matrices. The point of the swapping (one swap for each of the permutation matrices on the second line and two swaps for each on the third line) is that the three lines then simplify to three terms, each a first-row entry times a signed $2\times 2$ determinant.
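As a sketch of the resulting three-term form, for a generic $3\times 3$ matrix with entries $t_{i,j}$:

$$|T| = t_{1,1}\begin{vmatrix} t_{2,2} & t_{2,3} \\ t_{3,2} & t_{3,3} \end{vmatrix}
 - t_{1,2}\begin{vmatrix} t_{2,1} & t_{2,3} \\ t_{3,1} & t_{3,3} \end{vmatrix}
 + t_{1,3}\begin{vmatrix} t_{2,1} & t_{2,2} \\ t_{3,1} & t_{3,2} \end{vmatrix}$$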
The formula given in the theorem below, which generalizes this example, is a recurrence: the determinant is expressed as a combination of determinants. This formula isn't circular because, as here, the determinant is expressed in terms of determinants of matrices of smaller size.
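As a sketch of the recurrence, here is the cofactor expansion along the first row written as a recursive routine (the function names `det` and `minor` are illustrative, not from the text):

```python
# A sketch of the recurrence: an n-by-n determinant written as a
# combination of (n-1)-by-(n-1) determinants.

def minor(T, i, j):
    """T with row i and column j deleted."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(T) if k != i]

def det(T):
    """Determinant by cofactor expansion along the first row."""
    if len(T) == 1:
        return T[0][0]
    return sum((-1) ** j * T[0][j] * det(minor(T, 0, j))
               for j in range(len(T)))
```

The recursion bottoms out at $1\times 1$ matrices, so the formula is not circular: each call works on a strictly smaller matrix.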
For instance, one cofactor of the matrix from the example above is the negative of the second determinant; the signed minors that appear in the expansion are its cofactors.
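Using the standard definition (the notation $M_{i,j}$ is assumed here, not fixed by the excerpt): the $i,j$ minor $M_{i,j}$ of $T$ is the matrix obtained by deleting row $i$ and column $j$ of $T$, and the $i,j$ cofactor attaches an alternating sign:

$$T_{i,j} = (-1)^{i+j}\,|M_{i,j}|$$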
1.5 Theorem (Laplace Expansion of Determinants)
Where $T$ is an $n\times n$ matrix, the determinant can be found by expanding by cofactors on any row $i$ or column $j$:

$$|T| = t_{i,1}T_{i,1} + t_{i,2}T_{i,2} + \cdots + t_{i,n}T_{i,n}
      = t_{1,j}T_{1,j} + t_{2,j}T_{2,j} + \cdots + t_{n,j}T_{n,j}$$
We can compute the determinant by expanding along the first row, as in the example above.
Alternatively, we can expand down the second column.
A row or column with many zeroes suggests a Laplace expansion.
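A small sketch of that point: expanding the determinant below down its second column, which has two zeroes, requires only one $2\times 2$ cofactor (the matrix and function names here are made up for illustration):

```python
# Expanding a 3x3 determinant down a chosen column j; entries equal
# to zero contribute nothing, so a zero-rich column means less work.

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def minor(T, i, j):
    return [row[:j] + row[j + 1:] for k, row in enumerate(T) if k != i]

def expand_column(T, j):
    return sum((-1) ** (i + j) * T[i][j] * det2(minor(T, i, j))
               for i in range(3) if T[i][j] != 0)

T = [[1, 5, 2],
     [2, 0, 3],
     [4, 0, 1]]
# Down column 1 only the 5 contributes: -5 * det2([[2, 3], [4, 1]]) = 50.
```

Expanding down any other column (or along any row) gives the same value, as the theorem asserts.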
We finish by applying this result to derive a new formula for the inverse of a matrix. By the theorem above, the determinant of an $n\times n$ matrix can be calculated by taking linear combinations of entries from a row and their associated cofactors:

$$t_{i,1}T_{i,1} + t_{i,2}T_{i,2} + \cdots + t_{i,n}T_{i,n} = |T|$$
Recall that a matrix with two identical rows has a zero determinant. Thus, for any matrix $T$, weighting the cofactors by entries from the wrong row (row $k$ with $k \neq i$) gives zero

$$t_{k,1}T_{i,1} + t_{k,2}T_{i,2} + \cdots + t_{k,n}T_{i,n} = 0$$

because it represents the expansion along row $i$ of a matrix whose row $i$ equals row $k$. This matrix equation summarizes both the expansion equation and the zero equation:

$$\begin{pmatrix}
t_{1,1} & t_{1,2} & \cdots & t_{1,n} \\
t_{2,1} & t_{2,2} & \cdots & t_{2,n} \\
 & \vdots & \\
t_{n,1} & t_{n,2} & \cdots & t_{n,n}
\end{pmatrix}
\begin{pmatrix}
T_{1,1} & T_{2,1} & \cdots & T_{n,1} \\
T_{1,2} & T_{2,2} & \cdots & T_{n,2} \\
 & \vdots & \\
T_{1,n} & T_{2,n} & \cdots & T_{n,n}
\end{pmatrix}
=
\begin{pmatrix}
|T| & 0 & \cdots & 0 \\
0 & |T| & \cdots & 0 \\
 & \vdots & \\
0 & 0 & \cdots & |T|
\end{pmatrix}$$
Note that the order of the subscripts in the matrix of cofactors is opposite to the order of subscripts in the other matrix; e.g., along the first row of the matrix of cofactors the subscripts are $1,1$ then $2,1$, etc.
This is the displayed matrix equation above, which combines the expansion equation and the zero equation. QED
For a square matrix $T$, the adjoint $\operatorname{adj}(T)$ is the transpose of the matrix of cofactors (its $i,j$ entry is the cofactor $T_{j,i}$), and taking the product with $T$ gives the diagonal matrix with $|T|$ in every diagonal entry.
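A sketch of that product, with the adjoint built as the transpose of the matrix of cofactors; the $3\times 3$ matrix `T` below is made up for illustration:

```python
def minor(T, i, j):
    """T with row i and column j deleted."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(T) if k != i]

def det(T):
    """Determinant by cofactor expansion along the first row."""
    if len(T) == 1:
        return T[0][0]
    return sum((-1) ** j * T[0][j] * det(minor(T, 0, j))
               for j in range(len(T)))

def adjoint(T):
    # Entry i,j of the adjoint is the j,i cofactor: subscripts transposed.
    n = len(T)
    return [[(-1) ** (i + j) * det(minor(T, j, i)) for j in range(n)]
            for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

T = [[1, 0, 4],
     [2, 1, -1],
     [1, 0, 1]]
# matmul(T, adjoint(T)) is the diagonal matrix with det(T) = -3
# in every diagonal entry.
```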
The inverse of the matrix from the example above, like that of any matrix with nonzero determinant, is its adjoint divided by its determinant: $T^{-1} = (1/|T|)\operatorname{adj}(T)$.
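This formula can be sketched with exact rational arithmetic (the matrix below is made up; `Fraction` keeps the division exact):

```python
# Inverse as adjoint-over-determinant, computed exactly with Fractions.
from fractions import Fraction

def minor(T, i, j):
    return [row[:j] + row[j + 1:] for k, row in enumerate(T) if k != i]

def det(T):
    if len(T) == 1:
        return T[0][0]
    return sum((-1) ** j * T[0][j] * det(minor(T, 0, j))
               for j in range(len(T)))

def inverse(T):
    d = det(T)  # must be nonzero
    n = len(T)
    # Entry i,j of the inverse is the j,i cofactor over the determinant.
    return [[Fraction((-1) ** (i + j) * det(minor(T, j, i)), d)
             for j in range(n)] for i in range(n)]
```

For example, `inverse([[2, 1], [7, 4]])` gives the matrix with rows `[4, -1]` and `[-7, 2]`, since the determinant is $1$.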
The formulas from this section are often used for by-hand calculation and are sometimes useful with special types of matrices. However, they are not the best choice for computation with arbitrary matrices because they require more arithmetic than, for instance, the Gauss-Jordan method: a naive cofactor expansion takes on the order of $n!$ operations, versus on the order of $n^3$ for row reduction.