Determining Diagonalizability
Students, in this lesson you will learn how to tell whether a matrix can be diagonalized and why that matters in linear algebra 📘. Diagonalizability is one of the most useful ideas in the study of eigenvalues and eigenvectors because it can turn a difficult matrix problem into a much simpler one. By the end of this lesson, you should be able to explain the key terms, test whether a matrix is diagonalizable, and use eigenvectors to justify your answer.
What diagonalizability means
A square matrix $A$ is diagonalizable if it can be written in the form
$$A=PDP^{-1}$$
where $P$ is an invertible matrix and $D$ is a diagonal matrix. The diagonal entries of $D$ are the eigenvalues of $A$, and the columns of $P$ are corresponding eigenvectors of $A$: the $i$th column of $P$ is an eigenvector for the $i$th diagonal entry of $D$.
This matters because diagonal matrices are much easier to work with than most matrices. For example, if $A=PDP^{-1}$, then powers of $A$ can be found using $A^n=PD^nP^{-1}$, and powers of a diagonal matrix are simple because each diagonal entry is just raised to the $n$th power. This is useful in areas such as population growth models, repeated transformations in computer graphics, and systems of differential equations 💡.
The big idea is simple: a matrix is diagonalizable when it has enough independent eigenvectors to form a basis for the space.
Key terminology
- Eigenvalue: a number $\lambda$ such that $Av=\lambda v$ for some nonzero vector $v$.
- Eigenvector: a nonzero vector $v$ that satisfies $Av=\lambda v$ for some scalar $\lambda$.
- Eigenspace: the set of all eigenvectors for a given eigenvalue, together with the zero vector.
- Algebraic multiplicity: the number of times an eigenvalue appears as a root of the characteristic polynomial.
- Geometric multiplicity: the dimension of the eigenspace for that eigenvalue.
Students, these terms are important because diagonalizability depends on how many independent eigenvectors a matrix has, not just on whether it has eigenvalues.
How to test whether a matrix is diagonalizable
There are several reliable ways to determine diagonalizability. The most common method uses eigenvalues and eigenvectors.
Method 1: Compare the number of independent eigenvectors to the size of the matrix
An $n\times n$ matrix is diagonalizable if and only if it has $n$ linearly independent eigenvectors. That means you need enough eigenvectors to make a basis for the whole space.
If you can find $n$ independent eigenvectors, then you can place them as columns of $P$. The corresponding eigenvalues go on the diagonal of $D$, and the matrix is diagonalizable.
If you cannot find enough independent eigenvectors, then the matrix is not diagonalizable.
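Method 1 can be sketched numerically. Below is a minimal NumPy check (the example matrices and the tolerance are illustrative choices, not part of the lesson): numerical routines return approximate eigenvectors, so we count how many are independent by taking the rank of the matrix whose columns are those eigenvectors.

```python
import numpy as np

def num_independent_eigenvectors(A, tol=1e-10):
    """Count linearly independent eigenvectors of A by taking the rank
    of the eigenvector matrix returned by numpy (a numerical sketch)."""
    _, P = np.linalg.eig(A)               # columns of P are eigenvectors
    return np.linalg.matrix_rank(P, tol=tol)

A = np.array([[2.0, 0.0], [0.0, 3.0]])    # diagonal, diagonalizable
B = np.array([[1.0, 1.0], [0.0, 1.0]])    # shear, not diagonalizable

print(num_independent_eigenvectors(A))    # 2 -> diagonalizable
print(num_independent_eigenvectors(B))    # 1 -> not diagonalizable
```

For $B$, NumPy returns two eigenvectors that are numerically parallel, so the rank is $1$ and the test correctly reports too few independent directions.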
Method 2: Use algebraic and geometric multiplicities
For each eigenvalue $\lambda$:
- the geometric multiplicity is always less than or equal to the algebraic multiplicity,
- a matrix is diagonalizable if and only if, for every eigenvalue, the geometric multiplicity equals the algebraic multiplicity,
- the sum of the dimensions of all eigenspaces must equal $n$.
A helpful fact is that if a matrix has $n$ distinct eigenvalues, then it is automatically diagonalizable. Why? Because eigenvectors belonging to distinct eigenvalues are linearly independent.
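This shortcut is easy to check in code. Here is a hedged NumPy sketch (the matrices and tolerance are illustrative): if all eigenvalues are pairwise distinct, the matrix is automatically diagonalizable, but a repeated eigenvalue makes the test inconclusive rather than negative.

```python
import numpy as np

def has_distinct_eigenvalues(A, tol=1e-8):
    """True if all eigenvalues are pairwise distinct (up to a tolerance).
    Distinct eigenvalues guarantee diagonalizability; the converse fails."""
    lam = np.sort_complex(np.linalg.eigvals(A))
    return bool(np.all(np.abs(np.diff(lam)) > tol))

A = np.array([[2.0, 0.0], [0.0, 3.0]])
C = np.array([[4.0, 0.0], [0.0, 4.0]])

print(has_distinct_eigenvalues(A))  # True  -> automatically diagonalizable
print(has_distinct_eigenvalues(C))  # False -> inconclusive (C is still diagonalizable)
```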
Method 3: Use the characteristic polynomial and eigenspaces
First, find the characteristic polynomial
$$\det(A-\lambda I)=0$$
Then find each eigenvalue and its eigenspace by solving
$$\left(A-\lambda I\right)v=0$$
for each eigenvalue $\lambda$. Count the number of independent eigenvectors you get. If the total is $n$, then the matrix is diagonalizable.
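Method 3 maps directly onto exact symbolic computation. A minimal sketch using SymPy (the matrix is an illustrative choice): `eigenvects()` returns each eigenvalue together with its algebraic multiplicity and a basis for its eigenspace, so counting independent eigenvectors is just summing the basis sizes.

```python
from sympy import Matrix

B = Matrix([[1, 1],
            [0, 1]])   # illustrative matrix with a repeated eigenvalue

total = 0
# eigenvects() yields tuples: (eigenvalue, algebraic multiplicity, eigenspace basis)
for eigenvalue, alg_mult, basis in B.eigenvects():
    geo_mult = len(basis)        # dimension of that eigenspace
    total += geo_mult

print(total)                     # 1, which is less than n = 2
print(B.is_diagonalizable())     # False
```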
Worked examples
Example 1: A diagonalizable matrix
Consider
$$A=\begin{bmatrix}2&0\\0&3\end{bmatrix}$$
This matrix is already diagonal, so it is diagonalizable. In fact, it is diagonalized by the identity matrix:
$$A=I\,A\,I^{-1}$$
Here the eigenvalues are $2$ and $3$, and the standard basis vectors are eigenvectors. Because a $2\times2$ matrix needs $2$ independent eigenvectors and this matrix has them, it is diagonalizable ✅.
This example is simple, but it shows the definition clearly. A diagonal matrix is always diagonalizable, because it already has the desired form.
Example 2: A matrix that is not diagonalizable
Consider
$$B=\begin{bmatrix}1&1\\0&1\end{bmatrix}$$
To check diagonalizability, first find the eigenvalues. The characteristic polynomial is
$$\det(B-\lambda I)=\det\begin{bmatrix}1-\lambda&1\\0&1-\lambda\end{bmatrix}=(1-\lambda)^2$$
So the only eigenvalue is $\lambda=1$, and its algebraic multiplicity is $2$.
Now find the eigenspace:
$$\left(B-I\right)v=0$$
which gives
$$\begin{bmatrix}0&1\\0&0\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix}=\begin{bmatrix}0\\0\end{bmatrix}$$
This forces $y=0$, while $x$ can be any number. So the eigenvectors are all multiples of
$$\begin{bmatrix}1\\0\end{bmatrix}$$
The eigenspace has dimension $1$, so the geometric multiplicity is $1$. Since the algebraic multiplicity is $2$ but the geometric multiplicity is only $1$, the matrix is not diagonalizable.
This is a classic example of a matrix with too few independent eigenvectors. It has an eigenvalue, but not enough directions to build a basis.
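The eigenspace count above can be verified numerically. A short NumPy sketch (the tolerance is an illustrative choice): the geometric multiplicity is the dimension of the null space of $B-I$, which equals the number of near-zero singular values.

```python
import numpy as np

B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
M = B - np.eye(2)                        # B - I for the eigenvalue 1

# Null-space dimension = number of (near-)zero singular values of M.
singular_values = np.linalg.svd(M, compute_uv=False)
geo_mult = int(np.sum(singular_values < 1e-10))
print(geo_mult)   # 1: only one independent eigenvector, so B is not diagonalizable
```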
Example 3: A matrix with repeated eigenvalues that is still diagonalizable
Now consider
$$C=\begin{bmatrix}4&0\\0&4\end{bmatrix}$$
This matrix has one eigenvalue, $\lambda=4$, with algebraic multiplicity $2$. But its eigenspace is all of $\mathbb{R}^2$, because every nonzero vector satisfies
$$Cv=4v$$
So the geometric multiplicity is also $2$. Since the two multiplicities match and there are $2$ independent eigenvectors, the matrix is diagonalizable.
This example is important because it shows that repeated eigenvalues do not automatically mean a matrix is not diagonalizable. What matters is whether there are enough eigenvectors.
Why the idea is useful
Diagonalizability is not just a theoretical test. It makes calculations easier and reveals the structure of a transformation.
When $A$ is diagonalizable, repeated multiplication becomes manageable:
$$A^n=PD^nP^{-1}$$
This is much easier than multiplying $A$ by itself $n$ times. For example, if a transformation models a situation that happens over many steps, diagonalization can simplify long-term predictions.
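This speedup is easy to demonstrate. A minimal NumPy sketch (the matrix and exponent are illustrative): compute $A^n$ once through the eigendecomposition and compare the result with direct repeated multiplication.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])              # illustrative diagonalizable matrix
eigenvalues, P = np.linalg.eig(A)       # columns of P are eigenvectors

n = 8
# A^n = P D^n P^{-1}; D^n just raises each diagonal entry to the n-th power.
A_n = P @ np.diag(eigenvalues ** n) @ np.linalg.inv(P)

direct = np.linalg.matrix_power(A, n)   # repeated multiplication for comparison
print(np.allclose(A_n, direct))         # True
```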
Diagonalizability also helps explain a matrix geometrically. The eigenvectors are directions that do not change direction under the transformation. If there are enough such directions to create a basis, then the matrix can be understood as scaling along independent axes after a change of coordinates.
This connection is one reason diagonalizability is central in eigenvalues and eigenvectors. It links algebraic information, such as the characteristic polynomial, with geometric information, such as invariant directions.
Common mistakes to avoid
Students, here are a few common mistakes to avoid:
- Thinking that every matrix with eigenvalues is diagonalizable. This is false.
- Confusing algebraic multiplicity with geometric multiplicity. They are related, but not the same.
- Forgetting that diagonalizability requires enough independent eigenvectors, not just enough eigenvalues.
- Assuming that repeated eigenvalues prevent diagonalization. This is also false.
- Not checking linear independence of eigenvectors when building $P$.
A good habit is to always ask two questions: How many eigenvalues does the matrix have, and how many independent eigenvectors can I actually find?
A quick process to follow
Here is a practical checklist you can use:
- Find the characteristic polynomial using
$$\det(A-\lambda I)=0$$
- Solve for the eigenvalues.
- For each eigenvalue, find the eigenspace by solving
$$\left(A-\lambda I\right)v=0$$
- Count the number of independent eigenvectors.
- If the total equals the size of the matrix, then the matrix is diagonalizable.
- If not, it is not diagonalizable.
This process works well for most textbook problems and gives a clear reason for your conclusion.
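The checklist above translates almost line for line into code. Here is a hedged numerical sketch (the tolerances are illustrative, and grouping floating-point eigenvalues is approximate): find the eigenvalues, measure each eigenspace as the null space of $A-\lambda I$, and compare the total with $n$.

```python
import numpy as np

def is_diagonalizable(A, tol=1e-10):
    """Numerical sketch of the checklist: sum the eigenspace dimensions
    and compare the total with the size of the matrix."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    eigenvalues = np.linalg.eigvals(A)

    # Group numerically equal eigenvalues so each eigenspace is counted once.
    distinct = []
    for lam in eigenvalues:
        if not any(abs(lam - mu) < 1e-8 for mu in distinct):
            distinct.append(lam)

    total = 0
    for lam in distinct:
        # Eigenspace dimension = number of near-zero singular values of A - lam*I.
        s = np.linalg.svd(A - lam * np.eye(n), compute_uv=False)
        total += int(np.sum(s < tol))
    return total == n

print(is_diagonalizable([[2, 0], [0, 3]]))  # True
print(is_diagonalizable([[1, 1], [0, 1]]))  # False
```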
Conclusion
Diagonalizability tells us whether a matrix can be rewritten in the simpler form $A=PDP^{-1}$. A matrix is diagonalizable exactly when it has enough independent eigenvectors to form a basis. Students, the main tools are eigenvalues, eigenspaces, and the comparison of algebraic and geometric multiplicities. Diagonalizability is powerful because it simplifies computation and helps reveal the structure of a linear transformation. Once you can determine whether a matrix is diagonalizable, you are using one of the most important ideas in the study of eigenvalues and eigenvectors 🌟.
Study Notes
- A matrix $A$ is diagonalizable if $A=PDP^{-1}$ for some invertible matrix $P$ and diagonal matrix $D$.
- The columns of $P$ are eigenvectors, and the diagonal entries of $D$ are eigenvalues.
- An $n\times n$ matrix is diagonalizable if and only if it has $n$ linearly independent eigenvectors.
- If a matrix has $n$ distinct eigenvalues, it is diagonalizable.
- For each eigenvalue, the geometric multiplicity is the dimension of its eigenspace.
- For every eigenvalue, geometric multiplicity is less than or equal to algebraic multiplicity.
- A matrix is diagonalizable when the sum of the dimensions of its eigenspaces is $n$.
- Repeated eigenvalues do not automatically mean a matrix is not diagonalizable.
- The characteristic polynomial is found from $\det(A-\lambda I)=0$.
- Diagonalizability makes powers of matrices easier to compute using $A^n=PD^nP^{-1}$.
