5. Inverses and Determinants

Cofactor Expansion

Cofactor Expansion in Determinants and Inverses

Students, one of the most important ideas in Linear Algebra is that a complicated matrix can sometimes be handled by breaking it into smaller pieces. Cofactor expansion does exactly that. It gives us a way to compute determinants by expanding along a row or column, and those determinants help us understand whether a matrix is invertible, how to find an inverse, and how matrix systems behave. 🔍

What Cofactor Expansion Means

A determinant is a single number built from a square matrix. That number tells us important things, especially whether a matrix has an inverse. For a square matrix $A$, if $\det(A) \neq 0$, then $A$ is invertible. If $\det(A) = 0$, then $A$ is not invertible.

Cofactor expansion is a method for finding $\det(A)$ by using smaller determinants. It works by choosing one row or one column, then combining each entry with a signed smaller determinant called a cofactor.

Suppose the matrix is $A = [a_{ij}]$. The minor of an entry $a_{ij}$ is the determinant of the matrix you get by deleting row $i$ and column $j$. The cofactor is

$$
C_{ij} = (-1)^{i+j} M_{ij},
$$

where $M_{ij}$ is the minor of $a_{ij}$. The sign $(-1)^{i+j}$ creates the alternating checkerboard pattern of plus and minus signs.

For a $3 \times 3$ matrix, the signs look like this:

$$
\begin{bmatrix}
+ & - & + \\
- & + & - \\
+ & - & +
\end{bmatrix}
$$

This pattern is why some terms get added and others get subtracted. It is not random; it comes directly from the definition of the determinant.
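In code, the checkerboard can be generated directly from the sign factor. Here is a minimal Python sketch (the helper name `sign_pattern` is illustrative, not standard):

```python
# Illustrative sketch: generate the checkerboard of cofactor signs.
# Python indices are 0-based, but (-1)**(i + j) gives the same alternation
# as the 1-based formula, since shifting both indices by 1 preserves parity.

def sign_pattern(n):
    """Return the n x n matrix of signs (-1)^(i+j), entries +1 or -1."""
    return [[(-1) ** (i + j) for j in range(n)] for i in range(n)]

for row in sign_pattern(3):
    print(row)  # [1, -1, 1], then [-1, 1, -1], then [1, -1, 1]
```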

The Cofactor Expansion Formula

If you expand along row $i$, the determinant is

$$
\det(A) = \sum_{j=1}^{n} a_{ij} C_{ij}.
$$

If you expand along column $j$, the determinant is

$$
\det(A) = \sum_{i=1}^{n} a_{ij} C_{ij}.
$$

These formulas mean the same determinant can be found from any row or any column. That flexibility is very useful. In practice, people often choose a row or column with many zeros because that makes the work easier. ✨

Let’s see why that matters. If an entry is $0$, then its whole term in the sum becomes $0$, so it disappears. That means fewer minors to compute and less arithmetic.
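The expansion formula can be sketched as a short recursive Python function. This is for understanding, not efficiency (full cofactor expansion does $O(n!)$ work), and the helper names `minor` and `det` are illustrative assumptions:

```python
# Illustrative sketch of cofactor expansion along the first row.
# Recursion bottoms out at a 1x1 matrix, whose determinant is its entry.

def minor(A, i, j):
    """The submatrix left after deleting row i and column j."""
    return [row[:j] + row[j + 1:] for r, row in enumerate(A) if r != i]

def det(A):
    """Determinant by cofactor expansion along row 0; zero entries skipped."""
    if len(A) == 1:
        return A[0][0]
    total = 0
    for j, a in enumerate(A[0]):
        if a == 0:
            continue  # a zero entry contributes nothing to the sum
        total += (-1) ** j * a * det(minor(A, 0, j))
    return total

print(det([[1, 2], [3, 4]]))  # -2, matching ad - bc
```

Skipping zero entries in the loop is exactly the "choose a row with zeros" strategy: those terms never generate a recursive call.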

A Worked Example with a $3 \times 3$ Matrix

Consider

$$
A = \begin{bmatrix}
2 & 0 & 1 \\
3 & -1 & 2 \\
4 & 1 & 0
\end{bmatrix}.
$$

A smart choice is to expand along the first row because it contains a zero. Using row $1$,

$$
\det(A) = 2C_{11} + 0 \cdot C_{12} + 1 \cdot C_{13}.
$$

Now compute the cofactors.

For $C_{11}$, delete row $1$ and column $1$:

$$
M_{11} = \det\begin{bmatrix} -1 & 2 \\ 1 & 0 \end{bmatrix} = (-1)(0) - (2)(1) = -2.
$$

Since $(-1)^{1+1} = 1$,

$$
C_{11} = -2.
$$

For $C_{13}$, delete row $1$ and column $3$:

$$
M_{13} = \det\begin{bmatrix} 3 & -1 \\ 4 & 1 \end{bmatrix} = (3)(1) - (-1)(4) = 7.
$$

Since $(-1)^{1+3} = 1$,

$$
C_{13} = 7.
$$

So

$$
\det(A) = 2(-2) + 1(7) = 3.
$$

Because $\det(A) \neq 0$, the matrix is invertible. That is a major connection between cofactors and inverses: the determinant tells us whether an inverse exists at all.
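The worked example can be checked numerically. This sketch mirrors the steps above; `det2` is an illustrative helper for the $2 \times 2$ base case $ad - bc$:

```python
# Checking the worked example step by step.

def det2(a, b, c, d):
    """2x2 determinant ad - bc."""
    return a * d - b * c

M11 = det2(-1, 2, 1, 0)        # delete row 1, column 1 of A
M13 = det2(3, -1, 4, 1)        # delete row 1, column 3 of A
C11 = (+1) * M11               # sign (-1)^(1+1) = +1
C13 = (+1) * M13               # sign (-1)^(1+3) = +1
det_A = 2 * C11 + 0 + 1 * C13  # the zero entry's term vanishes
print(M11, M13, det_A)  # -2 7 3
```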

Why Cofactor Expansion Works

Cofactor expansion may look like a trick, but it is actually built into the structure of determinants. Determinants have properties like linearity in each row, sign changes when rows are swapped, and being zero when two rows are equal. Cofactor expansion is one way to organize those properties into a calculation method.

The important idea is that each term in the expansion isolates one entry of the row or column and pairs it with a smaller determinant from the remaining matrix. In this way, a large determinant is reduced to smaller ones. If you keep applying cofactor expansion to the smaller determinants, you eventually reach $2 \times 2$ determinants, which are easy to compute directly.

For a $2 \times 2$ matrix

$$
\begin{bmatrix} a & b \\ c & d \end{bmatrix},
$$

the determinant is

$$
ad - bc.
$$

This is the base case that makes the expansion process possible for larger matrices.

Choosing the Best Row or Column

Students, when using cofactor expansion, the choice of row or column matters for efficiency. The determinant will be the same no matter which row or column you choose, but the amount of work can change a lot.

A good strategy is to pick a row or column with:

  • many zeros,
  • simple numbers like $1$ or $-1$,
  • or a structure that makes the minors easy to compute.

For example, if a matrix has a column like

$$
\begin{bmatrix} 0 \\ 5 \\ 0 \\ 0 \end{bmatrix},
$$

then expanding along that column is efficient because only one term survives. This is especially helpful in $4 \times 4$ or larger matrices, where direct determinant formulas are not practical.

The calculation still requires care with signs. A common mistake is forgetting the factor $(-1)^{i+j}$. Another common mistake is deleting the wrong row or column when finding the minor.
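The zero-counting heuristic can be sketched in a few lines. The function name `best_line` is illustrative, not standard terminology:

```python
# Sketch of the heuristic: scan every row and column and report the
# line with the most zero entries, preferring rows on ties.

def best_line(A):
    """Return ('row', i) or ('col', j) for the line with the most zeros."""
    n = len(A)
    best = ('row', 0, -1)  # (kind, index, zero count)
    for i in range(n):
        zeros = sum(1 for x in A[i] if x == 0)
        if zeros > best[2]:
            best = ('row', i, zeros)
    for j in range(n):
        zeros = sum(1 for i in range(n) if A[i][j] == 0)
        if zeros > best[2]:
            best = ('col', j, zeros)
    return best[0], best[1]

# A column like [0, 5, 0, 0] from the text would win with three zeros.
print(best_line([[2, 0, 1], [3, -1, 2], [4, 1, 0]]))  # ('row', 0)
```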

Connection to Inverses

Cofactor expansion connects directly to inverses in two important ways.

First, it helps compute the determinant, and the determinant tells us whether a matrix has an inverse. The rule is:

$$
A \text{ is invertible} \iff \det(A) \neq 0.
$$

Second, cofactor matrices are used in the formula for the inverse of a matrix. If $A$ is an invertible square matrix, then

$$
A^{-1} = \frac{1}{\det(A)} \operatorname{adj}(A),
$$

where $\operatorname{adj}(A)$ is the adjugate matrix, which is the transpose of the cofactor matrix.

This means cofactors are not just for computing determinants. They are also part of a formula for finding the inverse itself. That is why cofactor expansion belongs in the larger unit on Inverses and Determinants.
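The adjugate formula can be turned into a short, self-contained sketch. This illustrative implementation assumes integer entries and uses exact fractions to avoid rounding; the helper names are assumptions, and cofactor expansion here is not efficient for large matrices:

```python
# Sketch of A^{-1} = adj(A) / det(A) for integer matrices.
from fractions import Fraction

def minor(A, i, j):
    """The submatrix left after deleting row i and column j."""
    return [row[:j] + row[j + 1:] for r, row in enumerate(A) if r != i]

def det(A):
    """Determinant by cofactor expansion along row 0."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j))
               for j in range(len(A)))

def inverse(A):
    """Invert A via the transpose of its cofactor matrix (the adjugate)."""
    d = det(A)
    if d == 0:
        raise ValueError("det(A) = 0, so A has no inverse")
    n = len(A)
    C = [[(-1) ** (i + j) * det(minor(A, i, j)) for j in range(n)]
         for i in range(n)]                 # cofactor matrix
    return [[Fraction(C[j][i], d)           # transpose, then divide by det
             for j in range(n)] for i in range(n)]
```

Multiplying the example matrix $A$ by `inverse(A)` returns the identity, and a singular matrix raises an error instead of producing a bogus result.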

Let’s connect this to the earlier example. Since

$$
\det(A) = 3,
$$

we know $A^{-1}$ exists. If $\det(A)$ had been $0$, there would be no inverse, and the adjugate formula would not produce a valid inverse.

A Quick Example of a Zero Determinant

Now consider

$$
B = \begin{bmatrix}
1 & 2 & 3 \\
2 & 4 & 6 \\
0 & 1 & 5
\end{bmatrix}.
$$

The second row is exactly $2$ times the first row. That means the rows are dependent, so the determinant must be $0$. Cofactor expansion would confirm this result.

If you expand along the first row, the minors are $14$, $10$, and $2$, and the terms cancel exactly: $1(14) - 2(10) + 3(2) = 0$. This gives

$$
\det(B) = 0.
$$

This shows an important practical idea: determinants detect dependence among rows or columns. When rows are dependent, the matrix cannot be inverted.
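This can be verified with a short self-contained sketch of cofactor expansion (helper names illustrative, same expansion idea as in the text):

```python
# Dependent rows force a zero determinant: row 2 of B is 2 * row 1.

def minor(A, i, j):
    """The submatrix left after deleting row i and column j."""
    return [row[:j] + row[j + 1:] for r, row in enumerate(A) if r != i]

def det(A):
    """Determinant by cofactor expansion along row 0."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j))
               for j in range(len(A)))

B = [[1, 2, 3], [2, 4, 6], [0, 1, 5]]
print(det(B))  # 0
```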

Common Errors to Avoid

Cofactor expansion is straightforward once the pattern is learned, but several errors happen often:

  • forgetting the sign pattern,
  • using the wrong row or column in the minor,
  • mixing up a minor with a cofactor,
  • arithmetic mistakes in small determinants,
  • stopping too early before simplifying correctly.

A reliable method is to write each step clearly:

  1. Choose a row or column.
  2. Write the sign for each term.
  3. Delete the correct row and column.
  4. Compute each minor.
  5. Multiply by the entry and the sign.
  6. Add all terms carefully.

This step-by-step process helps keep work organized and makes it easier to check mistakes. 🧠

Conclusion

Cofactor expansion is a powerful way to compute determinants by breaking a matrix into smaller parts. The method uses minors, cofactors, and alternating signs to turn a large determinant into simpler calculations. It is especially useful when a row or column contains zeros.

Most importantly, cofactor expansion is connected to the bigger ideas in Inverses and Determinants. It helps determine whether $A^{-1}$ exists, and it appears in the formula for the inverse through the adjugate matrix. Understanding cofactor expansion gives students a stronger foundation for solving systems, studying invertibility, and working with matrix formulas.

Study Notes

  • A minor $M_{ij}$ is the determinant found by deleting row $i$ and column $j$.
  • A cofactor is $C_{ij} = (-1)^{i+j} M_{ij}$.
  • Cofactor expansion along row $i$ is $\det(A) = \sum_{j=1}^{n} a_{ij} C_{ij}$.
  • Cofactor expansion along column $j$ is $\det(A) = \sum_{i=1}^{n} a_{ij} C_{ij}$.
  • The sign pattern alternates like a checkerboard: $+$, $-$, $+$, and so on.
  • Choose a row or column with many zeros to reduce work.
  • For a $2 \times 2$ matrix $\begin{bmatrix} a & b \\ c & d \end{bmatrix}$, the determinant is $ad - bc$.
  • A matrix is invertible exactly when $\det(A) \neq 0$.
  • The inverse formula uses cofactors through $A^{-1} = \frac{1}{\det(A)}\operatorname{adj}(A)$.
  • Cofactor expansion is useful for determinants, inverses, and understanding matrix dependence.
