13. Inner Products and Orthogonality

Working With Orthogonal Decompositions

Students, imagine shining a flashlight at an object and looking at the shadow it casts on a wall 🌟. The shadow lies flat within the wall, while the object's offset from the wall points straight out of it. In linear algebra, an orthogonal decomposition splits a vector in much the same way: one part lies in a subspace, and the other part is perpendicular to that subspace. This idea is powerful because it lets us study geometry with algebra and solve many problems cleanly.

What an Orthogonal Decomposition Means

An orthogonal decomposition of a vector $\mathbf{v}$ relative to a subspace $W$ is a way to write

$$\mathbf{v} = \mathbf{w} + \mathbf{z}$$

where $\mathbf{w} \in W$ and $\mathbf{z} \in W^\perp$.

Here, $W^\perp$ means the orthogonal complement of $W$, the set of all vectors perpendicular to every vector in $W$. The key idea is that the two parts do not overlap in direction. The vector $\mathbf{w}$ is the “part inside the subspace,” and $\mathbf{z}$ is the “leftover part” that points straight away from the subspace at a right angle.

This is important because many spaces can be broken into two complementary pieces:

$$\mathbb{R}^n = W \oplus W^\perp$$

The symbol $\oplus$ means a direct sum, which tells us every vector in $\mathbb{R}^n$ can be written in exactly one way as a sum of one vector from $W$ and one vector from $W^\perp$.

Why This Matters

Orthogonal decomposition makes hard problems easier. For example, in computer graphics, a point may need to be projected onto a plane. In engineering, a force can be split into parallel and perpendicular components. In data science, one can separate a signal from noise using subspaces. These are all examples of breaking a vector into meaningful pieces.

The Orthogonal Projection

The most important tool in orthogonal decomposition is the orthogonal projection. The orthogonal projection of a vector $\mathbf{v}$ onto a subspace $W$ is the vector in $W$ that is closest to $\mathbf{v}$.

If we call that projected vector $\mathbf{w}$, then the error vector is

$$\mathbf{z} = \mathbf{v} - \mathbf{w}$$

and this error vector is orthogonal to $W$.

This means the decomposition has the form

$$\mathbf{v} = \operatorname{proj}_W(\mathbf{v}) + \bigl(\mathbf{v} - \operatorname{proj}_W(\mathbf{v})\bigr)$$

If $W$ has an orthonormal basis $\{\mathbf{u}_1, \mathbf{u}_2, \dots, \mathbf{u}_k\}$, then the projection is easy to compute:

$$\operatorname{proj}_W(\mathbf{v}) = (\mathbf{v}\cdot\mathbf{u}_1)\mathbf{u}_1 + (\mathbf{v}\cdot\mathbf{u}_2)\mathbf{u}_2 + \cdots + (\mathbf{v}\cdot\mathbf{u}_k)\mathbf{u}_k$$

This formula works because orthonormal vectors are mutually orthogonal and have length $1$, so the coefficient of each $\mathbf{u}_i$ reduces to the single dot product $\mathbf{v}\cdot\mathbf{u}_i$.
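As a small illustration, here is one way this formula might look in NumPy. This is a minimal sketch: the helper name `project_onto` and the convention of storing the orthonormal basis vectors as the columns of a matrix `U` are choices made for this example, not notation from the text above.

```python
import numpy as np

def project_onto(v, U):
    """Project v onto the subspace spanned by the columns of U.

    Assumes the columns of U form an orthonormal basis, so the
    coefficient of each column u_i is just the dot product v . u_i.
    """
    # U.T @ v computes every dot product v . u_i at once;
    # U @ (...) then adds up the terms (v . u_i) u_i.
    return U @ (U.T @ v)
```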

Example with a Line in $\mathbb{R}^2$

Suppose $W$ is the line spanned by $\mathbf{u} = \left\langle \frac{3}{5}, \frac{4}{5} \right\rangle$, which is a unit vector. Let

$$\mathbf{v} = \langle 2, 1 \rangle$$

Then

$$\mathbf{v}\cdot\mathbf{u} = 2\cdot\frac{3}{5} + 1\cdot\frac{4}{5} = \frac{10}{5} = 2$$

So the projection is

$$\operatorname{proj}_W(\mathbf{v}) = 2\mathbf{u} = \left\langle \frac{6}{5}, \frac{8}{5} \right\rangle$$

The orthogonal part is

$$\mathbf{v} - \operatorname{proj}_W(\mathbf{v}) = \left\langle 2,1 \right\rangle - \left\langle \frac{6}{5}, \frac{8}{5} \right\rangle = \left\langle \frac{4}{5}, -\frac{3}{5} \right\rangle$$

Check the orthogonality:

$$\left\langle \frac{4}{5}, -\frac{3}{5} \right\rangle \cdot \left\langle \frac{3}{5}, \frac{4}{5} \right\rangle = \frac{12}{25} - \frac{12}{25} = 0$$

So the split is correct ✅
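Here is a quick numerical check of this example (a minimal sketch; the variable names are ours):

```python
import numpy as np

u = np.array([3/5, 4/5])   # unit vector spanning the line W
v = np.array([2.0, 1.0])

w = (v @ u) * u            # projection: (v . u) u
z = v - w                  # orthogonal component

print(w)       # [1.2  1.6]  =  <6/5, 8/5>
print(z)       # [0.8 -0.6]  =  <4/5, -3/5>
print(z @ u)   # 0.0 (up to floating-point rounding)
```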

How to Decompose a Vector Step by Step

When the subspace $W$ has an orthonormal basis, the procedure is straightforward:

  1. Identify the orthonormal basis for $W$.
  2. Compute each dot product $\mathbf{v}\cdot\mathbf{u}_i$.
  3. Multiply each basis vector by its coefficient.
  4. Add the results to get $\operatorname{proj}_W(\mathbf{v})$.
  5. Subtract to get the orthogonal component.

If the basis is not orthonormal, the method is still possible, but the calculations are more involved. You may need to first create an orthonormal basis using the Gram-Schmidt process.
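A small Gram-Schmidt routine can produce that orthonormal basis. The sketch below (the function name `gram_schmidt` is our own) orthonormalizes the columns of a matrix so that the projection formula above applies:

```python
import numpy as np

def gram_schmidt(A):
    """Return a matrix whose columns are an orthonormal basis
    for the column space of A (classical Gram-Schmidt sketch)."""
    Q = []
    for a in A.T:                 # iterate over the columns of A
        # Subtract the components along the vectors found so far.
        for q in Q:
            a = a - (a @ q) * q
        norm = np.linalg.norm(a)
        if norm > 1e-12:          # skip (near-)dependent columns
            Q.append(a / norm)
    return np.column_stack(Q)
```

Combined with the hypothetical `project_onto` helper from earlier, `project_onto(v, gram_schmidt(A))` would project $\mathbf{v}$ onto the column space of any matrix $A$.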

Example in $\mathbb{R}^3$

Let $W$ be the subspace with orthonormal basis

$$\mathbf{u}_1 = \left\langle 1,0,0 \right\rangle, \quad \mathbf{u}_2 = \left\langle 0,1,0 \right\rangle$$

and let

$$\mathbf{v} = \left\langle 3,-2,5 \right\rangle$$

Then

$$\operatorname{proj}_W(\mathbf{v}) = (\mathbf{v}\cdot\mathbf{u}_1)\mathbf{u}_1 + (\mathbf{v}\cdot\mathbf{u}_2)\mathbf{u}_2$$

Since

$$\mathbf{v}\cdot\mathbf{u}_1 = 3 \quad \text{and} \quad \mathbf{v}\cdot\mathbf{u}_2 = -2$$

we get

$$\operatorname{proj}_W(\mathbf{v}) = 3\left\langle 1,0,0 \right\rangle - 2\left\langle 0,1,0 \right\rangle = \left\langle 3,-2,0 \right\rangle$$

The orthogonal component is

$$\left\langle 3,-2,5 \right\rangle - \left\langle 3,-2,0 \right\rangle = \left\langle 0,0,5 \right\rangle$$

This leftover vector is perpendicular to the $xy$-plane, which makes sense because the plane contains vectors with third coordinate $0$.

Why Orthogonality Guarantees the Best Approximation

One of the deepest facts about orthogonal decompositions is that the projection gives the best approximation of a vector from a subspace.

If $\mathbf{w} = \operatorname{proj}_W(\mathbf{v})$, then among all vectors $\mathbf{x} \in W$, the distance from $\mathbf{v}$ to $\mathbf{w}$ is the smallest.

In symbols, $\mathbf{w}$ minimizes

$$\|\mathbf{v} - \mathbf{x}\|$$

over all $\mathbf{x} \in W$.

Why does this happen? Because the error $\mathbf{v} - \mathbf{w}$ is orthogonal to the whole subspace. For any other candidate $\mathbf{x} \in W$, write $\mathbf{v} - \mathbf{x} = (\mathbf{v} - \mathbf{w}) + (\mathbf{w} - \mathbf{x})$. The first piece lies in $W^\perp$ and the second lies in $W$, so they form the legs of a right triangle, and the Pythagorean Theorem gives

$$\|\mathbf{v} - \mathbf{x}\|^2 = \|\mathbf{v} - \mathbf{w}\|^2 + \|\mathbf{w} - \mathbf{x}\|^2 \ge \|\mathbf{v} - \mathbf{w}\|^2$$

with equality only when $\mathbf{x} = \mathbf{w}$. The projection therefore gives the shortest distance.

This is why orthogonal projection is used in least squares problems. If a system $A\mathbf{x} = \mathbf{b}$ has no exact solution, we can still find the best approximate solution: project $\mathbf{b}$ onto the column space of $A$ and solve for the $\mathbf{x}$ that produces that projection.
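To see the connection concretely, here is a sketch in NumPy. The small system below is invented for illustration, and the least squares solution comes from the normal equations $A^{\mathsf T}A\mathbf{x} = A^{\mathsf T}\mathbf{b}$, which encode exactly the statement that the residual is orthogonal to the column space.

```python
import numpy as np

# An inconsistent 3x2 system A x = b (no exact solution).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0, 0.0])

# Least squares solution via the normal equations A^T A x = A^T b.
x = np.linalg.solve(A.T @ A, A.T @ b)

# A @ x is the orthogonal projection of b onto Col(A), so the
# residual b - A @ x is orthogonal to every column of A.
residual = b - A @ x
print(A.T @ residual)   # ~ [0, 0]
```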

Connection to Inner Products and Orthogonality

Orthogonal decompositions belong to the bigger topic of inner products and orthogonality because they depend on the dot product. The dot product lets us test whether vectors are perpendicular:

$$\mathbf{a}\cdot\mathbf{b} = 0$$

means $\mathbf{a}$ is orthogonal to $\mathbf{b}$.

In a general inner product space, the dot product may be replaced by another inner product, but the same ideas still work. You can still define orthogonality, orthogonal complements, and projections. So orthogonal decomposition is not just a trick for $\mathbb{R}^2$ and $\mathbb{R}^3$; it is a broad principle in linear algebra.
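As a sketch of that generality, the example below uses a weighted inner product $\langle \mathbf{a}, \mathbf{b} \rangle = \mathbf{a}^{\mathsf T} M \mathbf{b}$ with a positive definite matrix $M$ chosen purely for illustration. The projection recipe is unchanged once dot products are replaced by this inner product.

```python
import numpy as np

M = np.array([[2.0, 0.0],
              [0.0, 3.0]])      # positive definite weight matrix

def inner(a, b):
    # A weighted inner product <a, b> = a^T M b.
    return a @ M @ b

u = np.array([1.0, 1.0])
u = u / np.sqrt(inner(u, u))    # normalize in the new inner product

v = np.array([4.0, 1.0])
w = inner(v, u) * u             # projection of v onto span{u}
z = v - w

print(inner(z, u))              # ~ 0: z is orthogonal to u in <.,.>
```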

The big picture is this:

  • Inner products give a way to measure angles and lengths.
  • Orthogonality describes right-angle relationships.
  • Orthogonal decompositions split vectors into meaningful perpendicular parts.
  • Projections help find the closest vector in a subspace.

These ideas work together to turn geometry into algebra 📐

Common Mistakes and How to Avoid Them

A few errors happen often when working with orthogonal decompositions:

  • Forgetting that the projection must lie in the subspace $W$.
  • Assuming a basis is orthonormal when it is not.
  • Mixing up the projection $\operatorname{proj}_W(\mathbf{v})$ with the orthogonal component $\mathbf{v} - \operatorname{proj}_W(\mathbf{v})$.
  • Not checking orthogonality by computing a dot product.

A reliable habit is to always verify that

$$\bigl(\mathbf{v} - \operatorname{proj}_W(\mathbf{v})\bigr) \cdot \mathbf{u}_i = 0$$

for each basis vector $\mathbf{u}_i$ of $W$.
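In NumPy this habit becomes a one-line assertion. The sketch below reuses the $\mathbb{R}^3$ example from earlier (variable names are ours):

```python
import numpy as np

U = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])      # orthonormal basis of the xy-plane
v = np.array([3.0, -2.0, 5.0])

z = v - U @ (U.T @ v)           # orthogonal component of v

# Verify z . u_i = 0 for every basis vector (column of U).
assert np.allclose(U.T @ z, 0)
```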

Conclusion

Orthogonal decomposition is the process of splitting a vector into two perpendicular parts: one inside a subspace and one inside its orthogonal complement. The key tool is the orthogonal projection, which gives the closest vector in the subspace and makes many problems easier to solve. Students, this topic is central to linear algebra because it connects geometry, dot products, projections, and best approximations. Once you understand orthogonal decompositions, you can see how many real-world tasks reduce to finding the most useful part of a vector and separating out what does not belong.

Study Notes

  • An orthogonal decomposition writes $\mathbf{v} = \mathbf{w} + \mathbf{z}$, where $\mathbf{w} \in W$ and $\mathbf{z} \in W^\perp$.
  • The orthogonal complement $W^\perp$ contains all vectors perpendicular to every vector in $W$.
  • The orthogonal projection $\operatorname{proj}_W(\mathbf{v})$ is the vector in $W$ closest to $\mathbf{v}$.
  • If $\{\mathbf{u}_1, \dots, \mathbf{u}_k\}$ is an orthonormal basis for $W$, then

$$\operatorname{proj}_W(\mathbf{v}) = \sum_{i=1}^k (\mathbf{v}\cdot\mathbf{u}_i)\mathbf{u}_i$$

  • The orthogonal part is always

$$\mathbf{v} - \operatorname{proj}_W(\mathbf{v})$$

  • Orthogonal decomposition gives the best approximation of a vector from a subspace.
  • The idea depends on inner products, orthogonality, and the Pythagorean Theorem.
  • In applications, orthogonal decompositions help with projections, least squares, and separating signals from noise.
