13. Inner Products and Orthogonality

Orthonormal Bases

Imagine trying to describe a location on a map using the cleanest possible directions 📍. Instead of messy arrows that overlap or point in awkward ways, what if we had directions that were perfectly independent and all the same length? That is the idea behind an orthonormal basis. In Linear Algebra, orthonormal bases make calculations easier, clearer, and more reliable.

By the end of this lesson, you should be able to:

  • explain what an orthonormal basis is,
  • describe how it relates to inner products and orthogonality,
  • use an orthonormal basis to find coordinates of vectors,
  • recognize why orthonormal bases are so useful in real problems.

These ideas appear in computer graphics, signal processing, physics, and data science. A smart choice of basis can turn a hard problem into a simple one ✨.

What Makes a Basis Orthonormal?

Start with the word basis. A basis for a vector space is a set of vectors that does two things:

  1. spans the space,
  2. is linearly independent.

That means every vector in the space can be written in exactly one way as a combination of the basis vectors.

Now add the prefix ortho-, which means perpendicular, and normal, which means unit length. An orthonormal set is a set of vectors where:

  • each vector has length $1$,
  • every pair of different vectors is orthogonal.

If $\mathbf{u}_1, \mathbf{u}_2, \dots, \mathbf{u}_n$ is an orthonormal set, then

$$\mathbf{u}_i \cdot \mathbf{u}_j = 0 \text{ for } i \ne j,$$

and

$$\mathbf{u}_i \cdot \mathbf{u}_i = 1.$$

Here $\mathbf{u}_i \cdot \mathbf{u}_j$ is the inner product. In ordinary Euclidean space, this is the dot product.

If an orthonormal set also spans the whole space, then it is an orthonormal basis.
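The two conditions above translate directly into code. Here is a minimal sketch in plain Python (the helper names `dot` and `is_orthonormal` are just for illustration):

```python
def dot(u, v):
    """Euclidean inner product of two vectors given as lists."""
    return sum(a * b for a, b in zip(u, v))

def is_orthonormal(vectors, tol=1e-9):
    """Check u_i . u_j = 0 for i != j and u_i . u_i = 1, up to rounding."""
    for i, u in enumerate(vectors):
        for j, v in enumerate(vectors):
            target = 1.0 if i == j else 0.0
            if abs(dot(u, v) - target) > tol:
                return False
    return True

e1, e2 = [1.0, 0.0], [0.0, 1.0]
print(is_orthonormal([e1, e2]))  # True
```

The tolerance `tol` matters in practice: with floating-point arithmetic, dot products of normalized vectors rarely come out exactly $0$ or $1$.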

Example in $\mathbb{R}^2$

The vectors

$$\mathbf{e}_1 = \begin{bmatrix}1\\0\end{bmatrix}, \quad \mathbf{e}_2 = \begin{bmatrix}0\\1\end{bmatrix}$$

form an orthonormal basis for $\mathbb{R}^2$.

Check it:

  • $\mathbf{e}_1 \cdot \mathbf{e}_1 = 1$
  • $\mathbf{e}_2 \cdot \mathbf{e}_2 = 1$
  • $\mathbf{e}_1 \cdot \mathbf{e}_2 = 0$

They are perpendicular and both have length $1$. They also span all of $\mathbb{R}^2$.

This basis is so common that it is called the standard basis.

Why Orthogonality Helps

Orthogonality is powerful because it removes interference between directions. Think of two people pushing a cart. If they push in the same direction, their efforts overlap. If they push at right angles, each contribution can be measured separately. That is what orthogonality does for vectors 🔍.

For a general basis, finding coordinates can require solving a system of equations. But for an orthonormal basis, coordinates come from simple inner products.

Suppose $\mathbf{u}_1, \dots, \mathbf{u}_n$ is an orthonormal basis and $\mathbf{x}$ is any vector in the space. Then

$$\mathbf{x} = c_1\mathbf{u}_1 + c_2\mathbf{u}_2 + \cdots + c_n\mathbf{u}_n,$$

where each coefficient is

$$c_i = \mathbf{x} \cdot \mathbf{u}_i.$$

This is one of the most important facts in the topic. The coefficients are found directly by taking dot products.

Why does this work?

Take the dot product of both sides with $\mathbf{u}_k$:

$$\mathbf{x} \cdot \mathbf{u}_k = c_1(\mathbf{u}_1 \cdot \mathbf{u}_k) + \cdots + c_n(\mathbf{u}_n \cdot \mathbf{u}_k).$$

Because the basis is orthonormal, all terms vanish except the one where $i=k$. So,

$$\mathbf{x} \cdot \mathbf{u}_k = c_k.$$

This is why orthonormal bases make coordinate calculations easy and elegant.
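The coefficient formula $c_i = \mathbf{x} \cdot \mathbf{u}_i$ can be checked numerically. Below is a small sketch using an orthonormal basis of $\mathbb{R}^2$; reconstructing $\mathbf{x}$ from the coefficients recovers the original vector:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

s = 1 / math.sqrt(2)
u1, u2 = [s, s], [s, -s]   # an orthonormal basis of R^2
x = [2.0, 3.0]

# c_i = x . u_i  -- no system of equations needed
c = [dot(x, u) for u in (u1, u2)]

# rebuild x as c_1 u_1 + c_2 u_2
recon = [c[0] * u1[k] + c[1] * u2[k] for k in range(2)]
print(c)      # [5/sqrt(2), -1/sqrt(2)], approximately [3.5355, -0.7071]
print(recon)  # recovers [2.0, 3.0] up to rounding
```

For a non-orthonormal basis, finding `c` would require solving a linear system; here each coefficient is a single dot product.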

Real-world connection

In audio compression, a sound wave can be represented using orthogonal building blocks. In computer graphics, object rotations and camera directions often rely on orthonormal vectors so that scaling does not accidentally creep in. Orthogonality keeps the directions clean and stable.

Checking Whether a Set Is an Orthonormal Basis

To test whether a set is an orthonormal basis, use two checks:

  1. Are the vectors orthonormal?
  2. Do they span the space?

If you have exactly $n$ orthonormal vectors in an $n$-dimensional space, they automatically form an orthonormal basis: orthonormal vectors are always linearly independent, and $n$ linearly independent vectors in an $n$-dimensional space must span it.

Example in $\mathbb{R}^3$

Consider

$$\mathbf{u}_1 = \begin{bmatrix}1\\0\\0\end{bmatrix}, \quad \mathbf{u}_2 = \begin{bmatrix}0\\-1\\0\end{bmatrix}, \quad \mathbf{u}_3 = \begin{bmatrix}0\\0\\1\end{bmatrix}.$$

Check the lengths:

$$\mathbf{u}_1 \cdot \mathbf{u}_1 = 1, \quad \mathbf{u}_2 \cdot \mathbf{u}_2 = 1, \quad \mathbf{u}_3 \cdot \mathbf{u}_3 = 1.$$

Check pairwise orthogonality:

$$\mathbf{u}_1 \cdot \mathbf{u}_2 = 0, \quad \mathbf{u}_1 \cdot \mathbf{u}_3 = 0, \quad \mathbf{u}_2 \cdot \mathbf{u}_3 = 0.$$

Since there are $3$ orthonormal vectors in $\mathbb{R}^3$, they form an orthonormal basis.

Non-example

Let

$$\mathbf{v}_1 = \begin{bmatrix}1\\1\end{bmatrix}, \quad \mathbf{v}_2 = \begin{bmatrix}1\\-1\end{bmatrix}.$$

These vectors are orthogonal because

$$\mathbf{v}_1 \cdot \mathbf{v}_2 = 1(1) + 1(-1) = 0.$$

But they are not yet orthonormal, because

$$\|\mathbf{v}_1\| = \sqrt{2}, \quad \|\mathbf{v}_2\| = \sqrt{2}.$$

To make them orthonormal, divide each by its length:

$$\mathbf{u}_1 = \frac{1}{\sqrt{2}}\begin{bmatrix}1\\1\end{bmatrix}, \quad \mathbf{u}_2 = \frac{1}{\sqrt{2}}\begin{bmatrix}1\\-1\end{bmatrix}.$$

Now each has length $1$, and they remain orthogonal.
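The normalization step is a one-line computation. A quick sketch (the helper name `normalize` is just for illustration):

```python
import math

def normalize(v):
    """Divide a vector by its Euclidean length."""
    n = math.sqrt(sum(a * a for a in v))
    return [a / n for a in v]

v1, v2 = [1.0, 1.0], [1.0, -1.0]
u1, u2 = normalize(v1), normalize(v2)
# each result has length 1; orthogonality survives scaling,
# since (av) . (bw) = ab (v . w) = 0 whenever v . w = 0
```

Dividing by the length changes only the magnitude of each vector, never its direction, so the right angle between them is preserved.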

Building Orthonormal Bases from Other Bases

Often you start with a basis that is not orthonormal. A standard method, the Gram-Schmidt process, converts a linearly independent set into an orthogonal set and then normalizes it.

Suppose you have vectors $\mathbf{v}_1, \mathbf{v}_2$. First set

$$\mathbf{u}_1 = \mathbf{v}_1.$$

Then remove the part of $\mathbf{v}_2$ that points in the direction of $\mathbf{u}_1$:

$$\mathbf{u}_2 = \mathbf{v}_2 - \operatorname{proj}_{\mathbf{u}_1}(\mathbf{v}_2),$$

where the projection formula is

$$\operatorname{proj}_{\mathbf{u}}(\mathbf{v}) = \frac{\mathbf{v} \cdot \mathbf{u}}{\mathbf{u} \cdot \mathbf{u}}\mathbf{u}.$$

After that, normalize the vectors:

$$\mathbf{e}_1 = \frac{\mathbf{u}_1}{\|\mathbf{u}_1\|}, \quad \mathbf{e}_2 = \frac{\mathbf{u}_2}{\|\mathbf{u}_2\|}.$$

The result is an orthonormal basis for the same subspace.
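The steps above generalize to any number of vectors: subtract the projections onto all earlier directions, then normalize. A minimal sketch (the function name `gram_schmidt` is just for illustration; normalizing each vector as we go lets us skip the $\mathbf{u} \cdot \mathbf{u}$ denominator, since it equals $1$):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    """Orthonormalize a linearly independent list of vectors."""
    ortho = []
    for v in vectors:
        w = list(v)
        for u in ortho:
            # subtract the projection onto u; u already has length 1,
            # so proj_u(w) = (w . u) u
            c = dot(w, u)
            w = [wi - c * ui for wi, ui in zip(w, u)]
        n = math.sqrt(dot(w, w))
        ortho.append([wi / n for wi in w])
    return ortho

basis = gram_schmidt([[1.0, 1.0], [1.0, 0.0]])
# basis is [(1/sqrt(2), 1/sqrt(2)), (1/sqrt(2), -1/sqrt(2))]
```

Each pass strips away everything the new vector shares with the directions already chosen, so only a genuinely new direction remains.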

Simple geometric idea

Gram-Schmidt is like cleaning up directions in a room. If one vector already points one way, the next vector is adjusted so it does not borrow any of that same direction. That way, each new direction adds something truly new.

Coordinates and Projections in an Orthonormal Basis

One major use of orthonormal bases is finding the best approximation to a vector in a subspace.

If $W$ is a subspace with orthonormal basis $\mathbf{u}_1, \dots, \mathbf{u}_k$, then the projection of $\mathbf{x}$ onto $W$ is

$$\operatorname{proj}_W(\mathbf{x}) = (\mathbf{x} \cdot \mathbf{u}_1)\mathbf{u}_1 + \cdots + (\mathbf{x} \cdot \mathbf{u}_k)\mathbf{u}_k.$$

This formula is simpler than the general projection formula because the basis vectors are orthonormal.
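The projection formula is a direct sum of dot products. A small sketch (the helper name `project` is just for illustration), projecting a vector in $\mathbb{R}^3$ onto the $xy$-plane:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def project(x, basis):
    """proj_W(x) = sum of (x . u_i) u_i over an orthonormal basis of W."""
    p = [0.0] * len(x)
    for u in basis:
        c = dot(x, u)
        p = [pi + c * ui for pi, ui in zip(p, u)]
    return p

# project (2, 3, 1) onto the xy-plane spanned by e1 and e2
e1, e2 = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]
print(project([2.0, 3.0, 1.0], [e1, e2]))  # [2.0, 3.0, 0.0]
```

The result is the closest point in the subspace to $\mathbf{x}$; the component outside the subspace (here, the $z$-coordinate) is dropped.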

Example

Let

$$\mathbf{x} = \begin{bmatrix}2\\3\end{bmatrix}$$

and use the orthonormal basis

$$\mathbf{u}_1 = \frac{1}{\sqrt{2}}\begin{bmatrix}1\\1\end{bmatrix}, \quad \mathbf{u}_2 = \frac{1}{\sqrt{2}}\begin{bmatrix}1\\-1\end{bmatrix}.$$

The coordinates of $\mathbf{x}$ in this basis are

$$c_1 = \mathbf{x} \cdot \mathbf{u}_1 = \frac{2+3}{\sqrt{2}} = \frac{5}{\sqrt{2}},$$

$$c_2 = \mathbf{x} \cdot \mathbf{u}_2 = \frac{2-3}{\sqrt{2}} = -\frac{1}{\sqrt{2}}.$$

So

$$\mathbf{x} = \frac{5}{\sqrt{2}}\mathbf{u}_1 - \frac{1}{\sqrt{2}}\mathbf{u}_2.$$

This expresses the vector using directions that are easy to separate and measure.

Conclusion

Orthonormal bases are a central idea in Inner Products and Orthogonality because they combine two useful features: perpendicular directions and unit length. This makes them easy to work with, easy to interpret, and extremely useful in applications. With an orthonormal basis, coordinates come from inner products, projections become simpler, and geometric structure stays clean and stable 🎯.

When you see an orthonormal basis, think: independent directions, simple coordinates, and reliable calculations. That is why this idea appears throughout Linear Algebra and in many real technologies.

Study Notes

  • A basis spans a space and is linearly independent.
  • An orthonormal set has vectors that are mutually orthogonal and each has length $1$.
  • An orthonormal basis is an orthonormal set that also spans the space.
  • For an orthonormal basis $\mathbf{u}_1, \dots, \mathbf{u}_n$, any vector $\mathbf{x}$ can be written as $\mathbf{x} = \sum_{i=1}^n c_i\mathbf{u}_i$.
  • The coordinates are found by $c_i = \mathbf{x} \cdot \mathbf{u}_i$.
  • A set of $n$ orthonormal vectors in an $n$-dimensional space automatically forms an orthonormal basis.
  • Gram-Schmidt turns a linearly independent set into an orthonormal basis for the same subspace.
  • The projection onto a subspace with orthonormal basis $\mathbf{u}_1, \dots, \mathbf{u}_k$ is $\operatorname{proj}_W(\mathbf{x}) = \sum_{i=1}^k (\mathbf{x} \cdot \mathbf{u}_i)\mathbf{u}_i$.
  • Orthonormal bases simplify computation in geometry, physics, computer graphics, and data analysis.
  • The key connection to inner products is that orthogonality and length are both measured using the inner product.


Orthonormal Bases — Linear Algebra | A-Warded