9. Linear Transformations

Rank and Nullity in Linear Transformations

Students, imagine a machine that takes an input vector and turns it into an output vector 🎯. In Linear Algebra, this machine is called a linear transformation. Some inputs get sent to different outputs, some inputs collapse to the same output, and some inputs may even get sent to zero. The ideas of rank and nullity measure exactly how much information a transformation keeps and how much it loses.

What rank and nullity mean

For a linear transformation $T: V \to W$, two important subspaces tell the story:

  • The kernel of $T$, written $\ker(T)$, is the set of all vectors in $V$ that are sent to the zero vector in $W$.
  • The range or image of $T$, written $\operatorname{Im}(T)$, is the set of all outputs that actually occur.

The nullity of $T$ is the dimension of the kernel:

$$\operatorname{nullity}(T)=\dim(\ker(T))$$

The rank of $T$ is the dimension of the image:

$$\operatorname{rank}(T)=\dim(\operatorname{Im}(T))$$

These are not just words to memorize. They describe how a transformation behaves. If the nullity is large, many different inputs are crushed into zero. If the rank is large, the transformation reaches many directions in the output space.

A simple example is the projection $T(x,y)=(x,0)$ from $\mathbb{R}^2$ to $\mathbb{R}^2$. The output always lies on the $x$-axis. So the image is all vectors of the form $(a,0)$, which is 1-dimensional, so the rank is $1$. The kernel consists of all vectors $(0,y)$, which is also 1-dimensional, so the nullity is $1$.
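These two numbers can be checked numerically. A minimal NumPy sketch, using the matrix of the projection above and the standard `numpy.linalg.matrix_rank` function:

```python
import numpy as np

# Matrix of the projection T(x, y) = (x, 0)
A = np.array([[1.0, 0.0],
              [0.0, 0.0]])

rank = np.linalg.matrix_rank(A)   # dimension of the image (the x-axis)
nullity = A.shape[1] - rank       # rank-nullity: nullity = n - rank

print(rank, nullity)  # 1 1
```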

Why rank and nullity matter

Rank and nullity help us understand what a linear transformation does without checking every input one by one 📌. They answer two big questions:

  1. How much of the output space is reached? That is the rank.
  2. How many independent directions disappear into zero? That is the nullity.

This matters in real situations. For example, in computer graphics, a transformation might flatten a 3D object onto a 2D screen. The rank tells us how many output directions remain visible, while the nullity tells us how many dimensions were lost. In data science, a matrix transformation may combine variables. Rank helps show whether the transformation preserves enough information to recover the original data.
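The graphics example is itself a matrix map: dropping the depth coordinate is a projection. A short sketch (the matrix `P` here is a hypothetical drop-the-$z$ projection, not from any particular graphics library):

```python
import numpy as np

# A simple "camera" that flattens 3D points onto a 2D screen by dropping z
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

rank = np.linalg.matrix_rank(P)   # visible screen directions
nullity = P.shape[1] - rank       # lost depth directions

print(rank, nullity)  # 2 1
```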

A useful connection is that vectors in the kernel are exactly the inputs that the transformation cannot distinguish from zero. If two vectors $\mathbf{u}$ and $\mathbf{v}$ satisfy $T(\mathbf{u})=T(\mathbf{v})$, then

$$T(\mathbf{u}-\mathbf{v})=\mathbf{0}$$

so $\mathbf{u}-\mathbf{v}\in \ker(T)$. This means different inputs can become the same output whenever their difference lies in the kernel.
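This kernel fact is easy to see in a concrete case. A small sketch, reusing the projection onto the $x$-axis and two hypothetical inputs that share an $x$-coordinate:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 0.0]])        # projection onto the x-axis

u = np.array([3.0, 5.0])
v = np.array([3.0, -2.0])

# Same output...
assert np.allclose(A @ u, A @ v)
# ...because the difference (0, 7) lies in the kernel and maps to zero
assert np.allclose(A @ (u - v), 0)
```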

The Rank-Nullity Theorem

One of the most important results in Linear Algebra is the Rank-Nullity Theorem:

$$\dim(V)=\operatorname{rank}(T)+\operatorname{nullity}(T)$$

for any linear transformation $T:V\to W$ where $V$ is finite-dimensional.

This theorem says the dimension of the input space is split into two parts:

  • the part that survives as meaningful output directions, and
  • the part that collapses to zero.

Think of a classroom with $20$ students. If a survey groups students by only two categories, there may be many different students, but the final summary has fewer independent pieces of information. Rank-nullity is the math version of that idea: the input dimension is divided between what remains visible and what gets lost.

Here is the key intuition:

  • The rank counts independent output directions.
  • The nullity counts independent input directions that vanish.
  • Together they always add up to the full dimension of the domain.

For a transformation $T:\mathbb{R}^n\to W$, the theorem becomes

$$n=\operatorname{rank}(T)+\operatorname{nullity}(T)$$

This formula is extremely useful because if you know one quantity, you can find the other.

Working with matrices

Most linear transformations are represented by matrices. If a matrix $A$ represents a transformation $T(\mathbf{x})=A\mathbf{x}$, then:

  • the rank of $A$ is the dimension of its column space,
  • the nullity of $A$ is the dimension of its null space, the solution space of $A\mathbf{x}=\mathbf{0}$.

The column space is the set of all linear combinations of the columns of $A$. The number of pivot columns after row reduction gives the rank. The number of free variables in the equation $A\mathbf{x}=\mathbf{0}$ gives the nullity.

Example: let

$$A=\begin{bmatrix}1 & 2 & 3\\ 2 & 4 & 6\end{bmatrix}$$

The second row is just $2$ times the first row, so there is only one pivot. Therefore,

$$\operatorname{rank}(A)=1$$

Since $A$ has $3$ columns, the Rank-Nullity Theorem gives

$$3=1+\operatorname{nullity}(A)$$

so

$$\operatorname{nullity}(A)=2$$

This means the equation $A\mathbf{x}=\mathbf{0}$ has two independent directions of solutions. In other words, many different input vectors collapse to the zero vector.

A real-world style interpretation is that this matrix compresses three input features into only one independent output direction. Two pieces of input information are lost.
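The numbers in this example can be verified directly. A sketch using NumPy; the two kernel vectors are read off from the single independent equation $x + 2y + 3z = 0$:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])

rank = np.linalg.matrix_rank(A)   # 1: the second row is twice the first
nullity = A.shape[1] - rank       # 3 - 1 = 2 by rank-nullity

# Two independent kernel directions solving x + 2y + 3z = 0
k1 = np.array([2.0, -1.0, 0.0])
k2 = np.array([3.0, 0.0, -1.0])
assert np.allclose(A @ k1, 0) and np.allclose(A @ k2, 0)

print(rank, nullity)  # 1 2
```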

How to find rank and nullity in practice

Here is a standard procedure students can use when given a matrix:

  1. Row-reduce the matrix to echelon form or reduced echelon form.
  2. Count the pivot columns. That number is the rank.
  3. Count the free variables in the homogeneous system $A\mathbf{x}=\mathbf{0}$. That number is the nullity.
  4. Check with Rank-Nullity using the number of columns $n$:

$$n=\operatorname{rank}(A)+\operatorname{nullity}(A)$$
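The four-step procedure above can be sketched as a small pivot-counting routine. This is a teaching sketch (plain Gauss-Jordan elimination with a fixed tolerance), not a numerically robust implementation:

```python
import numpy as np

def rank_and_nullity(A, tol=1e-10):
    """Row-reduce a copy of A, count pivot columns (rank); nullity = n - rank."""
    M = np.array(A, dtype=float)
    rows, cols = M.shape
    pivot_row = 0
    for col in range(cols):
        # Step 1: find a nonzero entry in this column, at or below pivot_row
        candidates = [r for r in range(pivot_row, rows) if abs(M[r, col]) > tol]
        if not candidates:
            continue                               # free column: no pivot here
        r = candidates[0]
        M[[pivot_row, r]] = M[[r, pivot_row]]      # swap the pivot row into place
        M[pivot_row] /= M[pivot_row, col]          # scale the pivot to 1
        for other in range(rows):
            if other != pivot_row:
                M[other] -= M[other, col] * M[pivot_row]  # clear the column
        pivot_row += 1
        if pivot_row == rows:
            break
    rank = pivot_row                               # Step 2: pivots found
    return rank, cols - rank                       # Steps 3-4: nullity = n - rank

print(rank_and_nullity([[1, 2, 3], [2, 4, 6]]))   # (1, 2)
```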

Example with a $3\times 3$ matrix:

$$B=\begin{bmatrix}1 & 0 & 2\\ 2 & 1 & 5\\ 0 & 1 & 2\end{bmatrix}$$

Row reduction ($R_2 \to R_2 - 2R_1$, then $R_3 \to R_3 - R_2$) leaves a pivot in every column, so there are $3$ pivots. Then

$$\operatorname{rank}(B)=3$$

Since there are $3$ columns,

$$3=3+\operatorname{nullity}(B)$$

so

$$\operatorname{nullity}(B)=0$$

This means only the zero vector maps to zero. The transformation is injective, or one-to-one.

By contrast, if a matrix has rank smaller than the number of columns, then the nullity is positive, and nonzero vectors exist in the kernel. That means the transformation is not one-to-one.
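This injectivity test is one comparison once the rank is known. A minimal sketch (the helper name `is_injective` is ours, chosen for illustration):

```python
import numpy as np

def is_injective(A):
    """The map x -> A x is one-to-one iff nullity is 0, i.e. rank equals n."""
    A = np.asarray(A, dtype=float)
    return np.linalg.matrix_rank(A) == A.shape[1]

print(is_injective(np.eye(3)))                        # full rank: one-to-one
print(is_injective([[1.0, 2.0, 3.0],
                    [2.0, 4.0, 6.0]]))                # rank 1 < 3 columns
```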

Connections to injective and surjective transformations

Rank and nullity also explain whether a transformation is one-to-one or onto.

  • A transformation $T$ is injective if different inputs always give different outputs.
  • A transformation $T$ is surjective if every vector in the codomain is reached by some input.

A key fact is:

$$T \text{ is injective } \iff \ker(T)=\{\mathbf{0}\}$$

So $T$ is injective exactly when

$$\operatorname{nullity}(T)=0$$

If $T:V\to W$ and $\dim(V)=\dim(W)$, then injective and surjective are equivalent for linear transformations. Rank helps detect surjectivity because if the image fills all of $W$, then

$$\operatorname{rank}(T)=\dim(W)$$

If the rank is smaller than $\dim(W)$, then the transformation misses some output directions.

Example: the transformation $T(x,y,z)=(x+y,\, y+z)$ maps $\mathbb{R}^3$ to $\mathbb{R}^2$. Since the output space has dimension $2$, the rank can be at most $2$. The two components $x+y$ and $y+z$ are linearly independent, so the rank is exactly $2$, and then by rank-nullity,

$$3=2+\operatorname{nullity}(T)$$

so

$$\operatorname{nullity}(T)=1$$

That means one independent direction in the input gets lost.
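The lost direction can be exhibited explicitly: $x+y=0$ and $y+z=0$ force $y=-x$ and $z=x$, so the kernel is spanned by $(1,-1,1)$. A quick numerical check:

```python
import numpy as np

# Matrix of T(x, y, z) = (x + y, y + z)
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])

rank = np.linalg.matrix_rank(A)   # 2: the two rows are independent
nullity = A.shape[1] - rank       # 3 - 2 = 1

# The one lost direction: (1, -1, 1) solves x + y = 0 and y + z = 0
assert np.allclose(A @ np.array([1.0, -1.0, 1.0]), 0)
print(rank, nullity)  # 2 1
```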

Conclusion

Rank and nullity give a clear picture of how linear transformations work. Rank measures how many independent output directions are reached, and nullity measures how many independent input directions disappear into zero. The Rank-Nullity Theorem ties them together with the dimension of the domain:

$$\dim(V)=\operatorname{rank}(T)+\operatorname{nullity}(T)$$

For students, the most important skill is to use this theorem to move confidently between matrices, kernels, images, and dimensions. Once you can find rank and nullity, you can tell whether a transformation preserves information, loses information, is one-to-one, or reaches the whole output space ✅.

Study Notes

  • Rank is the dimension of the image: $$\operatorname{rank}(T)=\dim(\operatorname{Im}(T))$$
  • Nullity is the dimension of the kernel: $$\operatorname{nullity}(T)=\dim(\ker(T))$$
  • Rank and nullity are linked by the Rank-Nullity Theorem:

$$\dim(V)=\operatorname{rank}(T)+\operatorname{nullity}(T)$$

  • For a matrix $A$, rank is the number of pivot columns.
  • For a matrix $A$, nullity is the number of free variables in $A\mathbf{x}=\mathbf{0}$.
  • If $\operatorname{nullity}(T)=0$, then $T$ is injective.
  • If $\operatorname{rank}(T)=\dim(W)$, then $T$ is surjective.
  • Rank shows how many independent output directions are reached.
  • Nullity shows how many independent input directions are sent to zero.
  • Rank and nullity are central tools for understanding linear transformations in matrices, geometry, and applications.
