13. Inner Products and Orthogonality

Building Orthonormal Bases

Imagine trying to navigate a city with a map that is tilted, stretched, and awkward to read. It still shows the streets, but it is not the clearest tool for the job. In linear algebra, an orthonormal basis is like a perfectly cleaned-up map 🗺️: the directions are at right angles, and each direction has length $1$. That makes calculations simpler, checking answers easier, and geometric meaning much clearer.

In this lesson, you will learn how orthonormal bases are built, why they matter, and how they connect to inner products and orthogonality. By the end, you should be able to explain the key ideas, apply the procedure for building one, and recognize why orthonormal bases are such a powerful tool in linear algebra.

What Is an Orthonormal Basis?

A basis for a vector space is a set of vectors that can be used to build every vector in that space. More precisely, the basis vectors must be linearly independent and span the space. If the basis is orthonormal, it has two extra properties:

  • The vectors are orthogonal, meaning the inner product of any two different basis vectors is $0$.
  • Each vector has length $1$, meaning $\|u\|=1$ for every basis vector $u$.

So if $\{u_1,u_2,\dots,u_n\}$ is an orthonormal basis, then

$$
\langle u_i,u_j\rangle =
\begin{cases}
1 & \text{if } i=j,\\
0 & \text{if } i\neq j.
\end{cases}
$$

This condition means the basis vectors are all perpendicular to one another and all normalized to unit length. In $\mathbb{R}^2$, the standard basis vectors $\begin{bmatrix}1\\0\end{bmatrix}$ and $\begin{bmatrix}0\\1\end{bmatrix}$ form an orthonormal basis. In $\mathbb{R}^3$, the standard coordinate vectors do the same.

Why is this useful? Because when basis vectors are orthonormal, coordinates become much easier to compute. Instead of solving a system of equations, you can use inner products directly.
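As a quick numerical sketch of that last point (not part of the lesson proper, and using NumPy purely for illustration), here is how coordinates fall out of inner products with the standard basis of $\mathbb{R}^2$:

```python
import numpy as np

# Standard orthonormal basis of R^2
u1 = np.array([1.0, 0.0])
u2 = np.array([0.0, 1.0])

v = np.array([3.0, -2.0])

# With an orthonormal basis, each coordinate is just an inner product.
c1 = np.dot(v, u1)   # 3.0
c2 = np.dot(v, u2)   # -2.0

# Rebuilding v from its coordinates recovers the original vector.
v_rebuilt = c1 * u1 + c2 * u2
```

No linear system was solved anywhere: two dot products produced the coordinates directly.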

Why Orthonormal Bases Are Important

Orthonormal bases are important because they make vector calculations cleaner and more stable. They are used in geometry, computer graphics, physics, signal processing, and data analysis 📊.

Here are some key benefits:

  • Simple coordinate formulas: If $\{u_1,\dots,u_n\}$ is orthonormal and $v$ is any vector in the space, then

$$
v = \langle v,u_1\rangle u_1 + \langle v,u_2\rangle u_2 + \cdots + \langle v,u_n\rangle u_n.
$$

The coefficient of each basis vector is just an inner product.

  • Easy length calculations: For an orthonormal basis, the norm of a vector can be found using its coordinates. If

$$
v = c_1u_1 + c_2u_2 + \cdots + c_nu_n,
$$

then

$$
\|v\|^2 = c_1^2 + c_2^2 + \cdots + c_n^2.
$$

  • Easy projection: The projection of a vector onto one basis direction is especially simple when the basis is orthonormal.

These features make orthonormal bases one of the most practical ideas in linear algebra.
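The coordinate and length formulas above can be checked numerically; the following sketch (NumPy, illustrative only) uses a non-standard orthonormal basis of $\mathbb{R}^2$:

```python
import numpy as np

# An orthonormal basis of R^2 other than the standard one
u1 = np.array([1.0, 1.0]) / np.sqrt(2)
u2 = np.array([1.0, -1.0]) / np.sqrt(2)

v = np.array([2.0, 3.0])

# Coordinates via inner products -- no system of equations needed.
c = np.array([np.dot(v, u1), np.dot(v, u2)])

# Length from coordinates alone: ||v||^2 = c1^2 + c2^2  (both give sqrt(13))
norm_from_coords = np.sqrt(np.sum(c**2))
norm_direct = np.linalg.norm(v)
```

The two norms agree because the basis is orthonormal; with a merely linearly independent basis, the sum-of-squares shortcut would fail.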

Building an Orthonormal Basis from a Basis

Often, we start with a basis that is not orthonormal and turn it into one. The main tool for this is the Gram-Schmidt process. This process takes a linearly independent set of vectors and transforms it into an orthonormal set that spans the same subspace.

Suppose $\{v_1,v_2,\dots,v_n\}$ is a basis for a subspace. Gram-Schmidt builds vectors $u_1,u_2,\dots,u_n$ step by step.

Step 1: Normalize the first vector

First, normalize $v_1$:

$$
u_1 = \frac{v_1}{\|v_1\|}.
$$

Now $u_1$ has length $1$.

Step 2: Remove the part in the previous direction

For the second vector, subtract its projection onto $u_1$:

$$
w_2 = v_2 - \operatorname{proj}_{u_1}(v_2).
$$

The projection formula is

$$
\operatorname{proj}_{u}(v) = \langle v,u\rangle u,
$$

when $u$ is a unit vector. Then normalize $w_2$:

$$
u_2 = \frac{w_2}{\|w_2\|}.
$$

Now $u_2$ is orthogonal to $u_1$ and has length $1$.

Step 3: Continue the pattern

For the third vector, remove the projections onto both previous unit vectors:

$$
w_3 = v_3 - \operatorname{proj}_{u_1}(v_3) - \operatorname{proj}_{u_2}(v_3),
$$

and then normalize it:

$$
u_3 = \frac{w_3}{\|w_3\|}.
$$

This continues until all vectors are orthonormal.

The main idea is simple: each new vector is stripped of everything that points in a previous direction, leaving only the new independent direction it contributes.
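That stripping-and-normalizing loop translates almost line for line into code. Here is a minimal sketch of classical Gram-Schmidt (NumPy, illustrative; the function name is my own):

```python
import numpy as np

def gram_schmidt(vectors):
    """Turn a linearly independent list of vectors into an orthonormal
    list spanning the same subspace (classical Gram-Schmidt)."""
    basis = []
    for v in vectors:
        w = v.astype(float)
        # Strip off the component along every previously built direction.
        for u in basis:
            w = w - np.dot(w, u) * u
        # What remains points in the genuinely new direction; normalize it.
        basis.append(w / np.linalg.norm(w))
    return basis

# Example: two non-orthogonal vectors in R^3
vs = [np.array([1, 1, 0]), np.array([1, 0, 1])]
u1, u2 = gram_schmidt(vs)
```

One practical caveat: this classical form can lose orthogonality to rounding error on nearly dependent inputs, which is why numerical libraries prefer a reorthogonalized ("modified") variant.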

Worked Example: Making an Orthonormal Basis in $\mathbb{R}^2$

Let’s build an orthonormal basis from the vectors

$$
v_1 = \begin{bmatrix}1\\1\end{bmatrix}, \quad v_2 = \begin{bmatrix}1\\-1\end{bmatrix}.
$$

First, check that they are orthogonal:

$$
\langle v_1,v_2\rangle = 1\cdot 1 + 1\cdot(-1) = 0.
$$

So they are already orthogonal. Now we just normalize them.

For $v_1$:

$$
\|v_1\| = \sqrt{1^2+1^2} = \sqrt{2},
$$

so

$$
u_1 = \frac{1}{\sqrt{2}}\begin{bmatrix}1\\1\end{bmatrix}.
$$

For $v_2$:

$$
\|v_2\| = \sqrt{1^2+(-1)^2} = \sqrt{2},
$$

so

$$
u_2 = \frac{1}{\sqrt{2}}\begin{bmatrix}1\\-1\end{bmatrix}.
$$

Now $\{u_1,u_2\}$ is an orthonormal basis for $\mathbb{R}^2$.

This example is nice because the vectors were already perpendicular. In many problems, the vectors are not orthogonal at first, so Gram-Schmidt is needed.
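The arithmetic above is easy to double-check numerically; a small sketch (NumPy, illustrative only):

```python
import numpy as np

v1 = np.array([1.0, 1.0])
v2 = np.array([1.0, -1.0])

# Already orthogonal, so only normalization is needed.
u1 = v1 / np.linalg.norm(v1)   # (1/sqrt(2)) * [1, 1]
u2 = v2 / np.linalg.norm(v2)   # (1/sqrt(2)) * [1, -1]

dot = np.dot(u1, u2)           # 0.0: still orthogonal after scaling
```

Normalizing never disturbs orthogonality, since scaling a vector only scales its inner products.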

Worked Example: Using Gram-Schmidt in $\mathbb{R}^3$

Suppose we want an orthonormal basis for the subspace spanned by

$$
v_1 = \begin{bmatrix}1\\1\\0\end{bmatrix}, \quad v_2 = \begin{bmatrix}1\\0\\1\end{bmatrix}.
$$

First, let

$$
u_1 = \frac{v_1}{\|v_1\|}.
$$

Since

$$
\|v_1\| = \sqrt{1^2+1^2+0^2} = \sqrt{2},
$$

we get

$$
u_1 = \frac{1}{\sqrt{2}}\begin{bmatrix}1\\1\\0\end{bmatrix}.
$$

Now remove from $v_2$ the part in the direction of $u_1$:

$$
\operatorname{proj}_{u_1}(v_2)=\langle v_2,u_1\rangle u_1.
$$

Compute the inner product:

$$
\langle v_2,u_1\rangle = \begin{bmatrix}1\\0\\1\end{bmatrix}\cdot \frac{1}{\sqrt{2}}\begin{bmatrix}1\\1\\0\end{bmatrix} = \frac{1}{\sqrt{2}}.
$$

So

$$
\operatorname{proj}_{u_1}(v_2)=\frac{1}{\sqrt{2}}u_1=\frac{1}{\sqrt{2}}\cdot\frac{1}{\sqrt{2}}\begin{bmatrix}1\\1\\0\end{bmatrix}=\frac{1}{2}\begin{bmatrix}1\\1\\0\end{bmatrix}.
$$

Subtract this from $v_2$:

$$
w_2 = v_2 - \operatorname{proj}_{u_1}(v_2)=\begin{bmatrix}1\\0\\1\end{bmatrix}-\frac{1}{2}\begin{bmatrix}1\\1\\0\end{bmatrix}=\begin{bmatrix}\frac{1}{2}\\-\frac{1}{2}\\1\end{bmatrix}.
$$

Now normalize $w_2$ to get $u_2$:

$$
\|w_2\|=\sqrt{\left(\frac{1}{2}\right)^2+\left(-\frac{1}{2}\right)^2+1^2}=\sqrt{\frac{3}{2}}.
$$

So

$$
u_2 = \frac{w_2}{\|w_2\|} = \frac{1}{\sqrt{6}}\begin{bmatrix}1\\-1\\2\end{bmatrix}.
$$

Then $\{u_1,u_2\}$ is an orthonormal basis for the subspace spanned by $v_1$ and $v_2$.

This example shows the purpose of Gram-Schmidt: it keeps the same subspace but replaces difficult vectors with cleaner ones.
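The same computation can be replayed numerically to confirm the hand arithmetic (NumPy, illustrative only):

```python
import numpy as np

v1 = np.array([1.0, 1.0, 0.0])
v2 = np.array([1.0, 0.0, 1.0])

u1 = v1 / np.linalg.norm(v1)

# Subtract the projection of v2 onto u1, then normalize what remains.
w2 = v2 - np.dot(v2, u1) * u1   # equals [1/2, -1/2, 1]
u2 = w2 / np.linalg.norm(w2)
```

The intermediate vector `w2` matches the $\begin{bmatrix}\tfrac12 & -\tfrac12 & 1\end{bmatrix}^T$ found by hand, and `u1`, `u2` come out orthonormal.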

How to Check That a Set Is Orthonormal

To verify a set of vectors is orthonormal, check two things:

  1. Each vector has length $1$.
  2. The inner product of every pair of different vectors is $0$.

For vectors $u_1,u_2,\dots,u_n$, you can organize the check using the formula

$$
\langle u_i,u_j\rangle = \delta_{ij},
$$

where $\delta_{ij}$ is $1$ when $i=j$ and $0$ when $i\neq j$.

A quick practical tip: if the vectors are columns of a matrix $Q$, then the set is orthonormal exactly when

$$
Q^TQ = I.
$$

This matrix test is very useful in applications such as numerical computing and transformations.
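The matrix test replaces many pairwise checks with a single comparison; a minimal sketch (NumPy, illustrative only), using the orthonormal basis from the $\mathbb{R}^2$ example:

```python
import numpy as np

# Columns of Q are the orthonormal vectors u1, u2 from the 2D example.
Q = np.column_stack([
    np.array([1.0, 1.0]) / np.sqrt(2),
    np.array([1.0, -1.0]) / np.sqrt(2),
])

# Orthonormal columns  <=>  Q^T Q is the identity matrix
# (entry (i, j) of Q^T Q is exactly the inner product <u_i, u_j>).
is_orthonormal = np.allclose(Q.T @ Q, np.eye(2))
```

The test works because entry $(i,j)$ of $Q^TQ$ is $\langle u_i,u_j\rangle$, so $Q^TQ=I$ encodes the $\delta_{ij}$ condition all at once.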

Why This Fits into Inner Products and Orthogonality

Orthonormal bases are not separate from inner products and orthogonality; they are one of the most important results in that topic. The inner product gives us a way to measure angle, length, and projection. Orthogonality tells us when vectors are perpendicular. An orthonormal basis combines both ideas into a structure that is easy to use.

In particular, Gram-Schmidt depends on projection formulas, and projection formulas depend on the inner product. So the whole process is built from the ideas introduced in inner products and orthogonality. That is why this lesson is a natural part of the larger topic.

Conclusion

Building orthonormal bases means turning a basis into one that is both orthogonal and made of unit vectors. This is usually done using the Gram-Schmidt process, which subtracts projections to remove overlap between directions and then normalizes each vector. Once you understand orthonormal bases, many linear algebra tasks become simpler: coordinates are easier to compute, lengths are easier to find, and geometry becomes more transparent ✨.

Orthonormal bases are a key bridge between abstract vector spaces and practical computation. They show how inner products, orthogonality, and basis concepts work together to make linear algebra powerful and usable.

Study Notes

  • A basis spans a vector space and is linearly independent.
  • An orthonormal basis is a basis whose vectors are orthogonal and each have length $1$.
  • For orthonormal vectors $u_i$ and $u_j$, $\langle u_i,u_j\rangle=0$ when $i\neq j$ and $\langle u_i,u_i\rangle=1$.
  • Any vector $v$ in an orthonormal basis can be written as $v=\sum_{i=1}^n \langle v,u_i\rangle u_i$.
  • The Gram-Schmidt process builds an orthonormal basis from a linearly independent set.
  • Gram-Schmidt works by subtracting projections onto earlier vectors, then normalizing.
  • Projection onto a unit vector $u$ is $\operatorname{proj}_u(v)=\langle v,u\rangle u$.
  • If vectors are columns of a matrix $Q$, orthonormality is checked by $Q^TQ=I$.
  • Orthonormal bases connect directly to inner products, orthogonality, and projections.
