Eigenspaces
Students, by the end of this lesson you will be able to explain what an eigenspace is, find one from a matrix, and connect it to eigenvalues and eigenvectors 🔍. You will see that an eigenspace is not just a new word to memorize: it is a way to organize all the vectors that behave the same way under a linear transformation.
Objectives
- Explain the main ideas and terminology behind eigenspaces.
- Apply linear algebra procedures related to eigenspaces.
- Connect eigenspaces to eigenvalues and eigenvectors.
- Summarize how eigenspaces fit into the bigger picture of linear algebra.
- Use examples to show what eigenspaces look like in practice.
Imagine a machine that changes vectors by stretching, shrinking, or flipping them. Most vectors end up pointing in a different direction. But some special vectors keep their direction after the transformation; those are eigenvectors. The collection of all eigenvectors tied to the same eigenvalue, together with the zero vector, forms an eigenspace. This idea is powerful because it groups together all the vectors that share one stretching factor.
What is an eigenspace?
For a square matrix $A$, an eigenvector is a nonzero vector $\mathbf{v}$ such that $A\mathbf{v}=\lambda\mathbf{v}$ for some scalar $\lambda$. The scalar $\lambda$ is called an eigenvalue. The eigenspace for the eigenvalue $\lambda$ is the set of all vectors that satisfy that equation, plus the zero vector.
More formally, the eigenspace of $A$ for $\lambda$ is
$$E_\lambda=\{\mathbf{v}\mid A\mathbf{v}=\lambda\mathbf{v}\}.$$
This set includes the zero vector, since $A\mathbf{0}=\lambda\mathbf{0}$, and that is part of what makes $E_\lambda$ a subspace; the zero vector itself, however, is not an eigenvector. Every nonzero vector in $E_\lambda$ is an eigenvector with eigenvalue $\lambda$.
Why is this called a space? Because it is a subspace of the original vector space. That means it is closed under vector addition and scalar multiplication. If $\mathbf{u}$ and $\mathbf{w}$ are in $E_\lambda$, then $\mathbf{u}+\mathbf{w}$ is also in $E_\lambda$, and if $c$ is any scalar, then $c\mathbf{u}$ is in $E_\lambda$ as well. This is one of the big reasons eigenspaces are useful: they are not just random sets of vectors, but structured geometric objects.
A quick way to think about it is this 🎯: an eigenvalue gives the stretch factor, eigenvectors give the directions, and the eigenspace gives the whole collection of vectors that behave that way.
How do you find an eigenspace?
To find the eigenspace for a given eigenvalue $\lambda$, start with the equation
$$A\mathbf{v}=\lambda\mathbf{v}.$$
Then move everything to one side:
$$A\mathbf{v}-\lambda\mathbf{v}=\mathbf{0}.$$
Since $\lambda\mathbf{v}=\lambda I\mathbf{v}$, where $I$ is the identity matrix, this becomes
$$\left(A-\lambda I\right)\mathbf{v}=\mathbf{0}.$$
So the eigenspace is the null space of $A-\lambda I$:
$$E_\lambda=\text{Null}(A-\lambda I).$$
That means finding an eigenspace is really a system-solving problem. You subtract $\lambda I$ from $A$, then solve the homogeneous system.
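The subtract-and-solve procedure can be sketched in code. Below is a minimal pure-Python sketch for $2\times 2$ matrices (the helper name `eigenspace_basis_2x2` is ours, not a standard library routine); it returns a basis of $\text{Null}(A-\lambda I)$, assuming $\lambda$ really is an eigenvalue:

```python
def eigenspace_basis_2x2(A, lam, tol=1e-9):
    """Basis of Null(A - lam*I) for a 2x2 matrix A (list of rows).

    Assumes lam really is an eigenvalue, so A - lam*I is singular."""
    # Step 1: subtract lam*I from A.
    M = [[A[0][0] - lam, A[0][1]],
         [A[1][0],       A[1][1] - lam]]
    # Step 2: solve the homogeneous system M v = 0.
    rows = [r for r in M if abs(r[0]) > tol or abs(r[1]) > tol]
    if not rows:                 # M is the zero matrix: E_lam is all of R^2
        return [(1, 0), (0, 1)]
    p, q = rows[0]               # det(M) = 0, so the rows are parallel and
    return [(-q, p)]             # one equation p*x + q*y = 0 fixes the line
```

For the matrix of Example 1 below, `eigenspace_basis_2x2([[4, 1], [0, 2]], 4)` returns `[(-1, 0)]`, which spans the same line as $(1,0)^T$.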
Example 1
Let
$$A=\begin{pmatrix}4 & 1\\0 & 2\end{pmatrix}.$$
Suppose we want the eigenspace for $\lambda=4$.
First compute
$$A-4I=\begin{pmatrix}4 & 1\\0 & 2\end{pmatrix}-\begin{pmatrix}4 & 0\\0 & 4\end{pmatrix}=\begin{pmatrix}0 & 1\\0 & -2\end{pmatrix}.$$
Now solve
$$\begin{pmatrix}0 & 1\\0 & -2\end{pmatrix}\mathbf{v}=\mathbf{0}.$$
If $\mathbf{v}=\begin{pmatrix}x\\y\end{pmatrix}$, then the equations are
$$y=0$$
and
$$-2y=0,$$
which gives the same result. So $x$ is free and $y=0$. Therefore all vectors in the eigenspace have the form
$$\begin{pmatrix}x\\0\end{pmatrix}=x\begin{pmatrix}1\\0\end{pmatrix}.$$
So
$$E_4=\text{span}\left\{\begin{pmatrix}1\\0\end{pmatrix}\right\}.$$
This means the eigenspace is the $x$-axis. Every vector on that line is stretched by a factor of $4$ under the transformation. Nice and clean ✨.
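As a quick numerical sanity check (a small sketch, not part of the derivation), we can confirm that every vector on the $x$-axis is stretched by exactly $4$:

```python
def matvec(A, v):
    """Multiply a 2x2 matrix (list of rows) by a vector (pair)."""
    return (A[0][0]*v[0] + A[0][1]*v[1],
            A[1][0]*v[0] + A[1][1]*v[1])

A = [[4, 1], [0, 2]]
for x in (1, -3, 0.5):
    v = (x, 0)                       # an arbitrary vector on the x-axis
    assert matvec(A, v) == (4*x, 0)  # A stretches it by exactly 4
```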
What does an eigenspace look like geometrically?
Geometrically, an eigenspace is a line, plane, or higher-dimensional subspace through the origin. Every vector in it is sent to a scalar multiple of itself, so the transformation carries the whole subspace onto itself.
In $\mathbb{R}^2$, an eigenspace is usually one of these:
- a line through the origin, or
- the whole plane $\mathbb{R}^2$ in special cases.
In $\mathbb{R}^3$, an eigenspace can be:
- a line through the origin,
- a plane through the origin, or
- all of $\mathbb{R}^3$ in a special case.
The dimension of the eigenspace is called the geometric multiplicity of the eigenvalue. If the eigenspace is a line, its dimension is $1$. If it is a plane, its dimension is $2$. If every vector is an eigenvector for the same eigenvalue, the eigenspace is the whole space.
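For $2\times 2$ matrices, the geometric multiplicity can be computed with the rank–nullity theorem: $\dim E_\lambda = 2-\operatorname{rank}(A-\lambda I)$. Here is a minimal sketch (the helper name `geo_mult_2x2` is ours):

```python
def geo_mult_2x2(A, lam, tol=1e-9):
    """Geometric multiplicity of lam: dim Null(A - lam*I) = 2 - rank(A - lam*I)."""
    M = [[A[0][0] - lam, A[0][1]],
         [A[1][0],       A[1][1] - lam]]
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    if abs(det) > tol:
        rank = 2      # A - lam*I is invertible: lam is not an eigenvalue at all
    elif all(abs(M[i][j]) <= tol for i in range(2) for j in range(2)):
        rank = 0      # the zero matrix: every vector solves the system
    else:
        rank = 1      # one independent row: the eigenspace is a line
    return 2 - rank
```

For the matrix of Example 1, `geo_mult_2x2([[4, 1], [0, 2]], 4)` gives `1` (a line), while for the identity matrix `geo_mult_2x2([[1, 0], [0, 1]], 1)` gives `2` (the whole plane).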
Example 2
Consider the identity matrix
$$I=\begin{pmatrix}1 & 0\\0 & 1\end{pmatrix}.$$
For the eigenvalue $\lambda=1$,
$$I-1\cdot I=\begin{pmatrix}0 & 0\\0 & 0\end{pmatrix}.$$
Then
$$\left(I-I\right)\mathbf{v}=\mathbf{0}$$
is true for every vector $\mathbf{v}$. So
$$E_1=\mathbb{R}^2.$$
This means every vector is an eigenvector of the identity matrix with eigenvalue $1$. The identity transformation changes nothing, so every direction is preserved. That is an extreme case, but it helps show what an eigenspace can look like.
Why eigenspaces matter in eigenvalues and eigenvectors
Eigenspaces connect individual eigenvectors into a bigger structure. Instead of listing many separate eigenvectors one by one, eigenspaces collect all of them for a fixed eigenvalue.
This matters because one eigenvalue can have infinitely many eigenvectors. For example, if $\mathbf{v}$ is an eigenvector for $\lambda$, then any nonzero multiple $c\mathbf{v}$ is also an eigenvector for the same eigenvalue. All those vectors lie in the same eigenspace.
That is why the eigenspace is more informative than just one eigenvector. It tells you the whole set of directions that behave the same way under the matrix.
Eigenspaces are also important when a matrix is diagonalizable. An $n\times n$ matrix is diagonalizable when it has $n$ linearly independent eigenvectors, enough to form a basis of the space. In that case, bases of the individual eigenspaces combine into a basis of the whole space. This makes the matrix easier to work with, because diagonal matrices are much simpler than general matrices.
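To make the diagonalization point concrete, here is a small numerical sketch using the matrix $A$ from Example 1, whose eigenvectors $(1,0)^T$ (for $\lambda=4$) and $(-1,2)^T$ (for $\lambda=2$) form a basis of $\mathbb{R}^2$; with those eigenvectors as the columns of $P$, the product $P^{-1}AP$ comes out diagonal:

```python
def matmul(X, Y):
    """Product of two 2x2 matrices stored as lists of rows."""
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[4, 1], [0, 2]]
P = [[1, -1],          # columns of P are eigenvectors:
     [0,  2]]          # (1, 0) for lambda = 4 and (-1, 2) for lambda = 2
det = P[0][0]*P[1][1] - P[0][1]*P[1][0]
P_inv = [[ P[1][1]/det, -P[0][1]/det],
         [-P[1][0]/det,  P[0][0]/det]]
D = matmul(P_inv, matmul(A, P))
print(D)   # [[4.0, 0.0], [0.0, 2.0]] -- diagonal, with the eigenvalues
```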
A real-world example is vibration analysis 🎵. In a structure like a bridge or building, certain eigenvectors represent vibration modes. The corresponding eigenspaces represent all motions that share the same natural frequency. Engineers study these spaces to understand how a system responds to force.
Another worked example
Let
$$B=\begin{pmatrix}2 & 0\\0 & 5\end{pmatrix}.$$
Find the eigenspace for $\lambda=2$.
Compute
$$B-2I=\begin{pmatrix}2 & 0\\0 & 5\end{pmatrix}-\begin{pmatrix}2 & 0\\0 & 2\end{pmatrix}=\begin{pmatrix}0 & 0\\0 & 3\end{pmatrix}.$$
Now solve
$$\begin{pmatrix}0 & 0\\0 & 3\end{pmatrix}\mathbf{v}=\mathbf{0}.$$
Let $\mathbf{v}=\begin{pmatrix}x\\y\end{pmatrix}$. Then
$$3y=0,$$
so
$$y=0.$$
The variable $x$ is free, so the eigenspace is
$$E_2=\left\{\begin{pmatrix}x\\0\end{pmatrix}:x\in\mathbb{R}\right\}=\text{span}\left\{\begin{pmatrix}1\\0\end{pmatrix}\right\}.$$
Now look at $\lambda=5$.
$$B-5I=\begin{pmatrix}-3 & 0\\0 & 0\end{pmatrix},$$
so solving
$$\begin{pmatrix}-3 & 0\\0 & 0\end{pmatrix}\mathbf{v}=\mathbf{0}$$
gives $x=0$ and $y$ free. Thus
$$E_5=\text{span}\left\{\begin{pmatrix}0\\1\end{pmatrix}\right\}.$$
These two eigenspaces are different lines through the origin. Together they show that the matrix stretches the $x$-direction by $2$ and the $y$-direction by $5$.
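We can verify both eigenspaces of $B$ directly (a small check, in the same style as the earlier sketch):

```python
def matvec(B, v):
    """Multiply a 2x2 matrix (list of rows) by a vector (pair)."""
    return (B[0][0]*v[0] + B[0][1]*v[1],
            B[1][0]*v[0] + B[1][1]*v[1])

B = [[2, 0], [0, 5]]
for x in (1, -4):
    assert matvec(B, (x, 0)) == (2*x, 0)   # x-axis: stretched by 2
for y in (1, 3.5):
    assert matvec(B, (0, y)) == (0, 5*y)   # y-axis: stretched by 5
```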
Common ideas and mistakes to remember
A few facts are especially important:
- The eigenspace always includes the zero vector.
- Eigenvectors are the nonzero vectors in the eigenspace.
- The eigenspace for $\lambda$ is the null space of $A-\lambda I$.
- Different eigenvalues have different eigenspaces, and their intersection is only the zero vector.
- Not every vector in the space is an eigenvector; only vectors in an eigenspace have that special property.
A common mistake is forgetting to subtract $\lambda I$ before solving. Another mistake is calling the zero vector an eigenvector. It is not. It belongs to the eigenspace, but it does not count as an eigenvector because eigenvectors must be nonzero.
Conclusion
Students, eigenspaces are the organized collections of all eigenvectors for a given eigenvalue, plus the zero vector. They are found by solving $\left(A-\lambda I\right)\mathbf{v}=\mathbf{0}$, which turns the problem into finding a null space. Geometrically, an eigenspace can be a line, a plane, or even the whole space. Conceptually, eigenspaces help explain how a matrix transforms space by grouping together all vectors that are stretched by the same factor. That makes them a central idea in eigenvalues and eigenvectors, and a major tool for understanding linear transformations.
Study Notes
- An eigenspace for eigenvalue $\lambda$ is $E_\lambda=\{\mathbf{v}\mid A\mathbf{v}=\lambda\mathbf{v}\}$.
- The zero vector is always in an eigenspace, but it is not an eigenvector.
- To find an eigenspace, solve $\left(A-\lambda I\right)\mathbf{v}=\mathbf{0}$.
- So, $E_\lambda=\text{Null}(A-\lambda I)$.
- Every nonzero vector in an eigenspace is an eigenvector for that eigenvalue.
- In $\mathbb{R}^2$, an eigenspace is usually a line through the origin or all of $\mathbb{R}^2$.
- In $\mathbb{R}^3$, an eigenspace can be a line, a plane, or all of $\mathbb{R}^3$.
- The dimension of an eigenspace is the geometric multiplicity of the eigenvalue.
- Eigenspaces help show how a matrix stretches space in special directions.
- If a matrix is diagonalizable, its eigenspaces help build a basis of eigenvectors.
