Diagonalization Criteria
Students, imagine a complicated matrix as a machine with many moving parts 🤖. Sometimes the machine is hard to understand in its original form, but if we can change our point of view, the same machine may become much easier to analyze. In linear algebra, that idea is called diagonalization. This lesson explains when a matrix can be diagonalized and why that matters.
What diagonalization means
A square matrix $A$ is diagonalizable if it can be written in the form
$$A=PDP^{-1}$$
where $P$ is an invertible matrix and $D$ is a diagonal matrix. A diagonal matrix has all its off-diagonal entries equal to $0$.
Why is this useful? Because diagonal matrices are much easier to work with. For example, if $D$ is diagonal, then powers like $D^5$ are simple to compute: you just raise each diagonal entry to the fifth power. This is helpful in many areas, including computer graphics, differential equations, and systems of repeated transformations 📈.
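As a quick sketch in NumPy (the diagonal entries $2$ and $3$ are an illustrative choice), entrywise fifth powers of the diagonal agree with repeated matrix multiplication:

```python
import numpy as np

# Illustrative diagonal matrix D with entries 2 and 3.
D = np.diag([2.0, 3.0])

# Power of a diagonal matrix: raise each diagonal entry to the fifth power.
D5_fast = np.diag(np.diag(D) ** 5)

# Compare with ordinary repeated matrix multiplication.
D5_slow = np.linalg.matrix_power(D, 5)
```

Both routes give the same matrix, but the entrywise version needs only two scalar powers instead of four matrix products.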
The key idea is that diagonalization uses eigenvalues and eigenvectors. The columns of $P$ are eigenvectors of $A$, and the diagonal entries of $D$ are the matching eigenvalues.
The central criteria for diagonalization
The most important question is: when is a matrix diagonalizable? The main criterion is this:
A square $n\times n$ matrix $A$ is diagonalizable if and only if it has $n$ linearly independent eigenvectors.
That statement is the heart of diagonalization criteria.
Let’s unpack it carefully.
- A matrix can only be diagonalized if there are enough eigenvectors to form a basis of $\mathbb{R}^n$ or $\mathbb{C}^n$.
- If you can find $n$ independent eigenvectors, you can place them as the columns of $P$.
- Then the diagonal matrix $D$ contains the eigenvalues in the same order as those eigenvectors.
In short, diagonalization is possible when eigenvectors give a full coordinate system for the space.
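The recipe above can be sketched numerically. Here `np.linalg.eig` returns the eigenvectors as the columns of $P$; the symmetric test matrix is an illustrative choice with eigenvalues $3$ and $1$:

```python
import numpy as np

# Illustrative symmetric 2x2 matrix; its eigenvalues are 3 and 1.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and a matrix P whose
# columns are the corresponding eigenvectors.
eigenvalues, P = np.linalg.eig(A)
D = np.diag(eigenvalues)

# With n independent eigenvectors, P is invertible and A = P D P^{-1}.
A_reconstructed = P @ D @ np.linalg.inv(P)
```

Reconstructing $A$ as $PDP^{-1}$ and getting the original matrix back confirms that the eigenvectors formed a full coordinate system.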
A practical test using eigenspaces
Another useful way to check diagonalizability is to compare algebraic multiplicity and geometric multiplicity.
- The algebraic multiplicity of an eigenvalue is how many times it appears as a root of the characteristic polynomial.
- The geometric multiplicity is the dimension of its eigenspace, meaning the number of independent eigenvectors for that eigenvalue.
For a matrix to be diagonalizable, the sum of the dimensions of all eigenspaces must equal $n$.
Equivalently, for every eigenvalue, the geometric multiplicity must equal the algebraic multiplicity, so that the independent eigenvectors from all eigenspaces add up to $n$.
A very important fact is that for each eigenvalue,
$$1 \leq \text{geometric multiplicity} \leq \text{algebraic multiplicity}$$
If an eigenvalue appears many times but does not have enough independent eigenvectors, diagonalization fails.
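The two multiplicities can be compared numerically. In this sketch (the $3\times 3$ matrix is an illustrative choice), the geometric multiplicity is computed as $n - \operatorname{rank}(M-\lambda I)$:

```python
import numpy as np

# Illustrative 3x3 matrix: eigenvalue 2 repeats on the diagonal, but the
# off-diagonal 1 leaves it with only one independent eigenvector.
M = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
lam = 2.0
n = M.shape[0]

# Geometric multiplicity = dim null(M - lam*I) = n - rank(M - lam*I).
geom_mult = n - np.linalg.matrix_rank(M - lam * np.eye(n))

# Algebraic multiplicity: count eigenvalues numerically equal to lam.
alg_mult = int(np.sum(np.isclose(np.linalg.eigvals(M), lam)))
```

Here the geometric multiplicity ($1$) falls short of the algebraic multiplicity ($2$), so this matrix is not diagonalizable.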
How repeated eigenvalues affect diagonalization
Students, repeated eigenvalues often cause confusion, but repetition alone does not decide the answer. A matrix may still be diagonalizable even if an eigenvalue repeats.
The key is whether there are enough independent eigenvectors.
Example of a diagonalizable matrix with a repeated eigenvalue
Consider
$$A=\begin{pmatrix}2&0\\0&2\end{pmatrix}$$
This matrix is already diagonal. Its only eigenvalue is $2$, and it has algebraic multiplicity $2$. The eigenspace is all of $\mathbb{R}^2$, so the geometric multiplicity is also $2$.
Because there are $2$ independent eigenvectors, the matrix is diagonalizable.
Example of a matrix that is not diagonalizable
Consider
$$B=\begin{pmatrix}1&1\\0&1\end{pmatrix}$$
The characteristic polynomial is
$$\det(B-\lambda I)=(1-\lambda)^2$$
So the only eigenvalue is $\lambda=1$, with algebraic multiplicity $2$.
Now solve
$$\left(B-I\right)\mathbf{x}=\mathbf{0}$$
which gives
$$\begin{pmatrix}0&1\\0&0\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix}=\begin{pmatrix}0\\0\end{pmatrix}$$
This means $y=0$, while $x$ is free. So the eigenspace has dimension $1$.
There is only one independent eigenvector, not two. Therefore, $B$ is not diagonalizable.
This example shows an important rule: a repeated eigenvalue is not a problem by itself; the problem is having too few independent eigenvectors.
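The eigenspace computation for $B$ can be checked numerically; as above, its dimension is $n - \operatorname{rank}(B - I)$:

```python
import numpy as np

# The non-diagonalizable example B from the text.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Eigenspace for lambda = 1 is the null space of B - I;
# its dimension is n - rank(B - I).
n = B.shape[0]
eigenspace_dim = n - np.linalg.matrix_rank(B - np.eye(n))
```

The eigenspace has dimension $1$, one short of the $2$ needed, matching the hand computation.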
Ways to know a matrix is diagonalizable
There are several helpful criteria students can use to recognize diagonalizable matrices.
Criterion 1: Enough independent eigenvectors
This is the main test. If an $n\times n$ matrix has $n$ linearly independent eigenvectors, then it is diagonalizable.
Criterion 2: Distinct eigenvalues
If an $n\times n$ matrix has $n$ distinct eigenvalues, then it is automatically diagonalizable.
Why? Because eigenvectors belonging to distinct eigenvalues are linearly independent. So if all $n$ eigenvalues are different, you immediately get $n$ independent eigenvectors.
For example, if a $3\times 3$ matrix has eigenvalues $1$, $2$, and $5$, then it is diagonalizable.
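This can be seen in NumPy with an illustrative upper-triangular matrix whose eigenvalues are the diagonal entries $1$, $2$, and $5$ (the off-diagonal entries are arbitrary):

```python
import numpy as np

# Illustrative upper-triangular matrix; its eigenvalues are the
# diagonal entries 1, 2, 5, which are all distinct.
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 2.0, 4.0],
              [0.0, 0.0, 5.0]])

eigenvalues = np.linalg.eigvals(A)
# Distinct eigenvalues: all three are pairwise different
# (rounding guards against floating-point noise).
distinct = len(np.unique(np.round(eigenvalues, 8))) == 3
```

Three distinct eigenvalues guarantee three independent eigenvectors, so diagonalizability follows with no further work.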
Criterion 3: Multiplicity check
If an eigenvalue has algebraic multiplicity $m$, then it must contribute $m$ independent eigenvectors; that is, its geometric multiplicity must equal $m$. When this holds for every eigenvalue, the sum of the dimensions of all eigenspaces is $n$, and the matrix is diagonalizable.
This is often checked by finding the null space of $A-\lambda I$ for each eigenvalue $\lambda$.
Why diagonalization matters in real life
Diagonalization is not just a theoretical idea. It makes difficult problems easier by turning a matrix into a diagonal one.
Suppose a matrix $A$ is diagonalizable as
$$A=PDP^{-1}$$
Then powers of $A$ become
$$A^k=PD^kP^{-1}$$
This matters because $D^k$ is easy to compute. In a repeated process, such as modeling population growth, electrical networks, or transformations in computer animation, this saves a lot of work.
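The identity $A^k = PD^kP^{-1}$ can be verified directly. The matrix below is an illustrative choice with eigenvalues $3$ and $-1$:

```python
import numpy as np

# Illustrative 2x2 matrix with eigenvalues 3 and -1.
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])

eigenvalues, P = np.linalg.eig(A)
k = 10

# A^k = P D^k P^{-1}; D^k needs only entrywise powers of the eigenvalues.
A_k_fast = P @ np.diag(eigenvalues ** k) @ np.linalg.inv(P)

# Compare against k-fold matrix multiplication.
A_k_direct = np.linalg.matrix_power(A, k)
```

For a repeated process, the eigendecomposition is computed once, after which any power $k$ costs only scalar exponentiation and two fixed matrix products.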
For example, if a transformation stretches space in one direction by factor $3$ and in another by factor $-1$, then the eigenvectors show the special directions that do not change direction, only scale. Those directions are the “best coordinates” for understanding the transformation 🧭.
Diagonalization also helps with solving systems of linear differential equations, because the system can often be separated into simpler independent parts after changing to the eigenvector basis.
Common mistakes to avoid
Students, here are some errors to watch out for:
- Thinking that having repeated eigenvalues means a matrix is not diagonalizable. That is false.
- Thinking that every square matrix is diagonalizable. That is also false.
- Confusing eigenvalues with eigenvectors. Eigenvalues are numbers, eigenvectors are nonzero vectors.
- Forgetting that the eigenvectors must be linearly independent.
- Assuming that one eigenvector for each eigenvalue is always enough. If an eigenvalue repeats, you may need several eigenvectors for that same value.
A good habit is to check diagonalizability in this order:
- Find the eigenvalues.
- Determine their algebraic multiplicities.
- Find each eigenspace by solving $\left(A-\lambda I\right)\mathbf{x}=\mathbf{0}$.
- Count the independent eigenvectors.
- Decide whether the total is $n$.
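The checking habit above can be sketched as a small function. The tolerance used to group nearby eigenvalues and to compute ranks is a numerical assumption, not part of the mathematical criterion:

```python
import numpy as np

def is_diagonalizable(A, tol=1e-8):
    """Check diagonalizability by counting independent eigenvectors.

    Follows the steps above: find the eigenvalues, compute each
    eigenspace dimension as n - rank(A - lambda*I), and decide
    whether the dimensions sum to n.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    eigenvalues = np.linalg.eigvals(A)

    # Group numerically close eigenvalues into one distinct value each.
    distinct = []
    for lam in eigenvalues:
        if not any(abs(lam - mu) < tol for mu in distinct):
            distinct.append(lam)

    # Sum the geometric multiplicities over the distinct eigenvalues.
    total = sum(n - np.linalg.matrix_rank(A - lam * np.eye(n), tol=tol)
                for lam in distinct)
    return total == n
```

Applied to the lesson's examples, this returns `True` for the diagonal matrix $2I$ and `False` for the shear matrix $B$ above.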
A full example of the criterion
Let
$$A=\begin{pmatrix}4&1\\0&2\end{pmatrix}$$
The characteristic polynomial is
$$\det(A-\lambda I)=(4-\lambda)(2-\lambda)$$
So the eigenvalues are $\lambda=4$ and $\lambda=2$, which are distinct.
Because a $2\times 2$ matrix with two distinct eigenvalues has two linearly independent eigenvectors, this matrix is diagonalizable.
If we found an eigenvector $\mathbf{v}_1$ for $\lambda=4$ and an eigenvector $\mathbf{v}_2$ for $\lambda=2$, then we could form
$$P=\begin{pmatrix}\,| & |\\ \mathbf{v}_1 & \mathbf{v}_2\\ | & |\end{pmatrix}$$
and
$$D=\begin{pmatrix}4&0\\0&2\end{pmatrix}$$
Then
$$A=PDP^{-1}$$
This example shows how the criterion works in practice: distinct eigenvalues make the diagonalization process straightforward.
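The worked example can be confirmed numerically; `np.linalg.eig` produces the eigenvectors $\mathbf{v}_1, \mathbf{v}_2$ as the columns of $P$:

```python
import numpy as np

# The worked example from the text.
A = np.array([[4.0, 1.0],
              [0.0, 2.0]])

eigenvalues, P = np.linalg.eig(A)
D = np.diag(eigenvalues)

# The eigenvalues 4 and 2 are distinct, so P is invertible
# and A = P D P^{-1} holds.
A_check = P @ D @ np.linalg.inv(P)
```

Recovering $A$ from $PDP^{-1}$ verifies the diagonalization end to end.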
Conclusion
Diagonalization criteria tell us when a matrix can be rewritten in a much simpler diagonal form. The most important rule is that a square matrix is diagonalizable exactly when it has enough linearly independent eigenvectors to form a basis. Distinct eigenvalues guarantee diagonalization, but repeated eigenvalues require careful checking of eigenspace dimensions.
Students, this topic connects directly to eigenvalues and eigenvectors because diagonalization is built from them. If you understand how to find eigenvalues, solve for eigenspaces, and count independent eigenvectors, you have the main tools needed to decide whether a matrix is diagonalizable and to use that fact in bigger linear algebra problems ✅.
Study Notes
- A matrix is diagonalizable if it can be written as $A=PDP^{-1}$.
- The columns of $P$ are eigenvectors, and the diagonal entries of $D$ are the matching eigenvalues.
- A matrix is diagonalizable if and only if it has $n$ linearly independent eigenvectors.
- Eigenvectors belonging to distinct eigenvalues are always linearly independent.
- Repeated eigenvalues do not prevent diagonalization, but there must still be enough independent eigenvectors.
- The algebraic multiplicity of an eigenvalue is its multiplicity as a root of the characteristic polynomial.
- The geometric multiplicity is the dimension of its eigenspace.
- For diagonalization, the total number of independent eigenvectors must equal the size $n$ of the matrix.
- To check diagonalizability, find eigenvalues, solve $\left(A-\lambda I\right)\mathbf{x}=\mathbf{0}$ for each eigenvalue, and count the independent eigenvectors.
- Diagonalization makes powers like $A^k$ much easier to compute.
