Orthogonality and Projections
Perpendicular vectors and their remarkable properties
Orthogonal Vectors
Two vectors are orthogonal (perpendicular) when their dot product is zero. This is the algebraic definition of perpendicularity.
The dot product measures how much two vectors point in the same direction. When they are perpendicular, they share no common direction—so their "overlap" is zero.
Interactive: Adjust the angle to see when vectors are orthogonal
Dot product is zero—vectors are orthogonal (perpendicular).
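As a quick numerical check, here is a minimal NumPy sketch (the vectors are illustrative, not values from the demo above):

```python
import numpy as np

# Illustrative vectors: v is u rotated 90 degrees.
u = np.array([3.0, 4.0])
v = np.array([-4.0, 3.0])

# Orthogonal exactly when the dot product is zero; compare with a
# tolerance, since floating-point arithmetic is inexact in general.
print(np.dot(u, v))                    # 0.0
print(np.isclose(np.dot(u, v), 0.0))   # True
```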
Orthonormal Bases
An orthonormal basis consists of vectors that are mutually orthogonal (every pair is perpendicular) and each has length 1 (unit vectors).
The standard basis is orthonormal. But there are infinitely many other orthonormal bases—any rotation of the standard basis is also orthonormal.
Why do we care? Because orthonormal bases make computations trivial. To find the coordinates of a vector in an orthonormal basis $\{\mathbf{q}_1, \ldots, \mathbf{q}_n\}$, just take dot products: $\mathbf{v} = (\mathbf{v} \cdot \mathbf{q}_1)\,\mathbf{q}_1 + \cdots + (\mathbf{v} \cdot \mathbf{q}_n)\,\mathbf{q}_n$.
Interactive: Coordinates via dot products in different orthonormal bases
With an orthonormal basis, finding coordinates is just dot products. No matrix inversion needed.
No matrix inversion required. The coordinates are simply dot products with the basis vectors. This is why orthonormal bases are so computationally convenient.
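Here is a short NumPy sketch of the same idea, using a 45-degree rotation of the standard basis as the orthonormal basis (the basis and test vector are chosen for illustration):

```python
import numpy as np

# A 45-degree rotation of the standard basis: still orthonormal.
theta = np.pi / 4
q1 = np.array([np.cos(theta), np.sin(theta)])
q2 = np.array([-np.sin(theta), np.cos(theta)])

v = np.array([2.0, 1.0])

# Coordinates in the basis {q1, q2} are plain dot products: c_i = v . q_i
c1, c2 = v @ q1, v @ q2

# Rebuilding v from those coordinates recovers the original vector.
print(np.allclose(c1 * q1 + c2 * q2, v))  # True
```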
Projection onto a Vector
The projection of a vector $\mathbf{v}$ onto a direction $\mathbf{u}$ is the component of $\mathbf{v}$ that lies along $\mathbf{u}$:

$$\mathrm{proj}_{\mathbf{u}}\,\mathbf{v} = \frac{\mathbf{v} \cdot \mathbf{u}}{\mathbf{u} \cdot \mathbf{u}}\,\mathbf{u}$$

Think of it as the shadow of $\mathbf{v}$ when light shines perpendicular to $\mathbf{u}$.
If $\mathbf{u}$ is already a unit vector, this simplifies to $\mathrm{proj}_{\mathbf{u}}\,\mathbf{v} = (\mathbf{v} \cdot \mathbf{u})\,\mathbf{u}$.
Interactive: Projection as the 'shadow' of a vector
The projection is the "shadow" of v onto the line. The dashed line shows the perpendicular component—what gets discarded.
The vector from the projection back to $\mathbf{v}$ is the perpendicular component. Together, the projection and perpendicular component give you back the original vector.
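A small NumPy sketch of this decomposition, with a hypothetical `project` helper and illustrative vectors:

```python
import numpy as np

def project(v, u):
    """Projection of v onto the line spanned by u (u need not be unit length)."""
    return (np.dot(v, u) / np.dot(u, u)) * u

v = np.array([3.0, 2.0])
u = np.array([2.0, 1.0])

shadow = project(v, u)   # component of v along u
perp = v - shadow        # component of v perpendicular to u

print(np.isclose(np.dot(perp, u), 0.0))  # True: perp is orthogonal to u
print(np.allclose(shadow + perp, v))     # True: the two parts rebuild v
```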
The Gram-Schmidt Process
Given any set of linearly independent vectors, the Gram-Schmidt process constructs an orthonormal basis from them. It works by iteratively removing components that overlap with vectors already processed.
Start with vectors $\mathbf{v}_1, \mathbf{v}_2, \ldots$. Normalize the first to get $\mathbf{q}_1 = \mathbf{v}_1 / \|\mathbf{v}_1\|$. Then subtract from $\mathbf{v}_2$ its projection onto $\mathbf{q}_1$, leaving only the perpendicular part. Normalize that to get $\mathbf{q}_2$.
Interactive: Watch Gram-Schmidt orthogonalize two vectors
Start with two vectors v₁ and v₂ (not orthogonal).
The same process extends to any number of dimensions. Each new vector has its components along all previous orthonormal vectors removed, then gets normalized.
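A compact NumPy sketch of this loop in three dimensions (the `gram_schmidt` function and input vectors are illustrative; it subtracts from the running remainder, the so-called modified form, which is more numerically stable):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors."""
    basis = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for q in basis:                  # remove components along earlier q's
            w = w - np.dot(w, q) * q
        basis.append(w / np.linalg.norm(w))  # normalize what remains
    return basis

vs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]
Q = np.column_stack(gram_schmidt(vs))
print(np.allclose(Q.T @ Q, np.eye(3)))  # True: columns are orthonormal
```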
Projection Matrices
Projection onto a subspace can be represented as a matrix. For projection onto a line through the origin in the direction of unit vector $\mathbf{u}$, the projection matrix is:

$$P = \mathbf{u}\mathbf{u}^T$$
Projection matrices have special properties: they are symmetric ($P^T = P$) and idempotent ($P^2 = P$). Applying a projection twice is the same as applying it once—the shadow of a shadow on the same subspace is unchanged.
Interactive: See projection as a matrix transformation
The projection matrix P projects any vector onto the line. Applying P twice gives the same result as applying it once: P² = P.
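A minimal NumPy sketch that builds such a matrix and checks both properties (the direction vector is chosen for illustration):

```python
import numpy as np

u = np.array([3.0, 4.0])
u = u / np.linalg.norm(u)   # unit direction

P = np.outer(u, u)          # P = u u^T, projects onto the line through u

print(np.allclose(P, P.T))    # True: symmetric, P^T = P
print(np.allclose(P @ P, P))  # True: idempotent, P^2 = P
```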
Why Orthonormal Matrices Are Special
A square matrix $Q$ whose columns form an orthonormal set is called an orthogonal matrix (despite the potentially confusing name). Such matrices have a remarkable property:

$$Q^T Q = I, \quad \text{so} \quad Q^{-1} = Q^T$$
The inverse is just the transpose. Finding the inverse of an orthogonal matrix is trivial—no computation needed beyond rearranging entries.
Orthogonal matrices represent rigid transformations: rotations and reflections. They preserve lengths and angles, acting as the "shape-preserving" transformations.
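As a sketch, a 30-degree rotation matrix in NumPy exhibits both properties (the angle and test vector are illustrative):

```python
import numpy as np

theta = np.radians(30)
Q = np.array([[np.cos(theta), -np.sin(theta)],   # a rotation matrix:
              [np.sin(theta),  np.cos(theta)]])  # its columns are orthonormal

print(np.allclose(Q.T @ Q, np.eye(2)))  # True: Q^T Q = I, so Q^{-1} = Q^T

v = np.array([3.0, 4.0])
print(np.isclose(np.linalg.norm(Q @ v), np.linalg.norm(v)))  # length preserved
```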
Connection to Later Topics
Orthogonality appears throughout linear algebra:
- Eigenvectors: For symmetric matrices, eigenvectors corresponding to different eigenvalues are automatically orthogonal.
- SVD: The singular value decomposition produces orthonormal bases for both the input and output spaces.
- Least Squares: The best approximate solution is found by projecting onto the column space.
The computational convenience of orthonormality—coordinates via dot products, inverses via transposes—makes these concepts foundational for numerical linear algebra.
Key Takeaways
- Orthogonal means perpendicular; the dot product of orthogonal vectors is zero
- An orthonormal basis has mutually perpendicular unit vectors
- In an orthonormal basis, coordinates are found by dot products—no matrix inversion needed
- Projection finds the "shadow" of a vector onto a subspace
- Gram-Schmidt transforms any independent set into an orthonormal basis
- Projection matrices satisfy $P^2 = P$ (idempotent)
- Orthogonal matrices have $Q^{-1} = Q^T$—the inverse is just the transpose