Eigenvectors and Eigenvalues

The axes that do not rotate under transformation

The Special Vectors

When a matrix transforms space, most vectors end up pointing in a completely different direction. They get rotated, sheared, or otherwise twisted around. But some vectors are special: they only get scaled. They stretch or shrink, but they stubbornly refuse to rotate. These are eigenvectors.

Think of it this way: if you apply a transformation to an eigenvector, the result is simply a scaled version of itself. The direction remains unchanged. The scaling factor — how much the vector stretches or shrinks — is called the eigenvalue.

Eigenvectors Stay on Their Line

The blue and red vectors stay on their dashed lines — they only stretch, never rotate.

Gray vectors rotate as well as stretch.

Watch the animation above. The gray vectors rotate as the transformation is applied, but the colored vectors stay along their dashed lines. They are eigenvectors of this transformation — the directions that the matrix does not rotate.

The Defining Equation

This special property, a vector that only gets scaled by a transformation, can be expressed as a simple equation. If A is a matrix and v is an eigenvector with eigenvalue λ (lambda), then:

A\vec{v} = \lambda\vec{v}

On the left, we apply the transformation A to the vector. On the right, we simply scale it by λ. The equation says these give the same result. The transformation does nothing more than scale.
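
This is easy to verify numerically. Below is a minimal sketch with NumPy; the matrix and eigenvector are illustrative choices (they match the explorer matrix later in this page):

```python
import numpy as np

# A shear-and-stretch matrix (chosen for illustration).
A = np.array([[2.0, 1.0],
              [0.0, 1.0]])

# v = (1, 0) is an eigenvector of A with eigenvalue 2:
# applying A only scales it, never rotates it.
v = np.array([1.0, 0.0])
lam = 2.0

print(A @ v)                        # [2. 0.]
print(lam * v)                      # [2. 0.]
print(np.allclose(A @ v, lam * v))  # True: both sides agree
```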

All Vectors vs Eigenvectors

Click to pause. Watch how most vectors rotate, but vectors along the purple dashed lines only scale.

Notice how the circle of vectors deforms. Most vectors change their angle relative to the origin. But the vectors along the eigenvector directions (the purple dashed lines) only change in length, never in direction.

Geometric Meaning

Eigenvectors reveal the natural axes of a transformation. They are the directions along which the transformation acts most simply — by pure scaling. Every other direction experiences some combination of scaling and rotation.

This insight is remarkably powerful. Many complex transformations become simple when viewed from the perspective of their eigenvectors. A complicated shear or stretch becomes just a different scaling along different axes. This is why eigenvectors appear everywhere in science and engineering: they reveal the underlying structure of linear systems.

Interactive: Explore Different Matrices

A = \begin{bmatrix} 2.0 & 1.0 \\ 0.0 & 1.0 \end{bmatrix}

λ₁ = 2.00, λ₂ = 1.00

Adjust the matrix entries above and watch how the eigenvectors change. Notice that the eigenvectors (and their transformed versions) always lie along the same dashed line — confirming they only scale. When the eigenvalues become complex (no real eigenvectors appear), it means the transformation rotates everything — no direction escapes rotation.
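
If you want to check the widget's numbers outside the page, NumPy's np.linalg.eig computes eigenvalues and eigenvectors in one call. A minimal sketch, using the matrix shown above and a rotation matrix to show the complex case:

```python
import numpy as np

# The matrix from the explorer above; its eigenvalues are 2 and 1.
A = np.array([[2.0, 1.0],
              [0.0, 1.0]])

vals, vecs = np.linalg.eig(A)
print(vals)        # [2. 1.]
print(vecs[:, 0])  # column i is the eigenvector paired with vals[i]

# A 90-degree rotation rotates every direction, so it has no real
# eigenvectors and its eigenvalues come out complex.
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])
vals_r, _ = np.linalg.eig(R)
print(vals_r)      # [0.+1.j 0.-1.j]
```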

Eigenvalues: The Scaling Factors

The eigenvalue λ tells us exactly how much an eigenvector gets scaled. Different eigenvalues have different geometric meanings:

  • λ > 1: The eigenvector stretches
  • λ = 1: The eigenvector is unchanged
  • 0 < λ < 1: The eigenvector shrinks
  • λ = 0: The eigenvector collapses to zero
  • λ < 0: The eigenvector flips direction and scales

How Eigenvalues Scale Eigenvectors

With the slider at λ = 2.0, we are in the λ > 1 case: the eigenvector stretches.

A negative eigenvalue is particularly interesting: the eigenvector flips to point in the opposite direction. It still stays on its line, but it reverses. An eigenvalue of zero means the transformation collapses that direction entirely — information is lost along that axis.
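
All of these cases are easy to see numerically. A minimal sketch, using diagonal matrices (chosen for illustration) so that the x-axis is always an eigenvector:

```python
import numpy as np

v = np.array([1.0, 0.0])  # eigenvector along the x-axis for every case below

for lam, label in [(2.0, "stretches"),
                   (1.0, "unchanged"),
                   (0.5, "shrinks"),
                   (0.0, "collapses to zero"),
                   (-1.0, "flips direction")]:
    # diag(lam, 1) scales the x-axis by lam and leaves the y-axis alone.
    A = np.diag([lam, 1.0])
    print(f"lambda = {lam:4}: A @ v = {A @ v}  ({label})")
```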

Finding Eigenvalues

How do we find these special values? We start from the defining equation and rearrange:

A\vec{v} = \lambda\vec{v}
A\vec{v} - \lambda\vec{v} = \vec{0}
(A - \lambda I)\vec{v} = \vec{0}

For a nonzero eigenvector v to satisfy this equation, the matrix (A - λI) must be singular: it must squash some nonzero vector down to zero. This happens exactly when its determinant is zero:

\det(A - \lambda I) = 0

This is the characteristic equation. Solving it gives us the eigenvalues. For a 2×2 matrix, this produces a quadratic equation. For larger matrices, we get higher-degree polynomials.

Finding Eigenvalues: The Characteristic Equation

A = \begin{bmatrix} 3 & 1 \\ 0 & 2 \end{bmatrix}

With the λ slider at 1.0:

A - \lambda I = \begin{bmatrix} 2.0 & 1 \\ 0 & 1.0 \end{bmatrix}, \qquad \det(A - \lambda I) = 2.00

Adjust λ until det(A - λI) = 0 to find eigenvalues (try λ = 2 or λ = 3)

Try adjusting λ until the determinant becomes zero. You will find the eigenvalues at λ = 2 and λ = 3 for this matrix. These are the values where the transformation has special directions that do not rotate.
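
The widget's search can also be scripted. A minimal sketch that scans a few values of λ for the same matrix:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])
I = np.eye(2)

# Scan candidate values of lambda: the determinant hits zero
# exactly at the eigenvalues.
for lam in [1.0, 2.0, 2.5, 3.0]:
    d = np.linalg.det(A - lam * I)
    print(f"lambda = {lam}: det(A - lambda*I) = {d:.2f}")
# lambda = 2.0 and lambda = 3.0 give det = 0: those are the eigenvalues.
```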

The Characteristic Polynomial

For a 2×2 matrix, the characteristic equation expands to a quadratic:

\det\begin{pmatrix} a - \lambda & b \\ c & d - \lambda \end{pmatrix} = 0
(a - \lambda)(d - \lambda) - bc = 0
\lambda^2 - (a + d)\lambda + (ad - bc) = 0

Notice something elegant: the coefficient of λ is the negative of the trace (sum of diagonal entries), and the constant term is the determinant. We can write:

\lambda^2 - \text{tr}(A)\lambda + \det(A) = 0

The trace and determinant — two fundamental properties of a matrix — completely determine its eigenvalues. This connection runs deep throughout linear algebra.
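
For a 2×2 matrix, this means the eigenvalues follow directly from the quadratic formula applied to the trace and determinant. A minimal sketch, cross-checked against NumPy:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

tr = np.trace(A)            # a + d = 5
det_A = np.linalg.det(A)    # ad - bc = 6

# Solve lambda^2 - tr*lambda + det = 0 with the quadratic formula.
# A negative discriminant would signal complex eigenvalues
# (np.sqrt would then need a complex input).
disc = tr**2 - 4 * det_A
lam1 = (tr + np.sqrt(disc)) / 2
lam2 = (tr - np.sqrt(disc)) / 2
print(lam1, lam2)              # 3.0 2.0
print(np.linalg.eigvals(A))    # cross-check: [3. 2.]
```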

Finding Eigenvectors

Once we know an eigenvalue λ, finding its eigenvector means solving:

(A - \lambda I)\vec{v} = \vec{0}

This is a system of linear equations with infinitely many solutions (any scalar multiple of an eigenvector is also an eigenvector). We typically pick a normalized eigenvector — one with length 1 — as the canonical representative.

For example, if A = \begin{bmatrix} 3 & 1 \\ 0 & 2 \end{bmatrix} and λ = 3, we solve:

\begin{bmatrix} 0 & 1 \\ 0 & -1 \end{bmatrix}\begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}

This gives us v₂ = 0, so the eigenvector is any multiple of \begin{bmatrix} 1 \\ 0 \end{bmatrix}. The horizontal direction is preserved by this transformation.
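
Numerically, one way to carry out this step is to take a null-space vector of A - λI, for instance from the SVD. (np.linalg.eig does all of this in one call; the sketch below just makes the step explicit.)

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])
lam = 3.0

# The eigenvector spans the null space of (A - lam*I). The right-singular
# vector paired with the zero singular value spans that null space.
M = A - lam * np.eye(2)
_, s, Vh = np.linalg.svd(M)
v = Vh[-1]  # singular values are sorted descending, so the last row
            # of Vh pairs with the smallest (here zero) singular value

print(s)              # approximately [1.414, 0]: one zero singular value
print(v)              # [1. 0.] up to sign: the horizontal direction
print(A @ v, lam * v) # identical: the defining equation holds
```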

Why Eigenvectors Matter

Eigenvectors and eigenvalues are not just mathematical curiosities. They unlock some of the most powerful techniques in applied mathematics:

Principal Component Analysis (PCA) uses eigenvectors to find the directions of maximum variance in data — the "most important" directions for understanding a dataset.

Vibration analysis in engineering identifies eigenvectors as the natural modes of vibration. A bridge, a guitar string, or an airplane wing all vibrate most easily along their eigenvector directions.

Quantum mechanics is built on eigenvalue problems. The allowed energy levels of an electron are eigenvalues; the corresponding wavefunctions are eigenvectors.

Google's PageRank algorithm finds the dominant eigenvector of a matrix representing the web's link structure. Each page's importance is its component in that eigenvector.
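
As a toy illustration of the PageRank idea (a made-up three-page web, not Google's actual matrix), power iteration converges to the dominant eigenvector:

```python
import numpy as np

# Column-stochastic link matrix for a made-up 3-page web:
# column j lists where page j's links point, with equal weight.
# Page 1 links to pages 2 and 3; page 2 links to page 3;
# page 3 links back to page 1.
L = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0]])

# Power iteration: repeated multiplication converges to the dominant
# eigenvector (eigenvalue 1 for a stochastic matrix).
rank = np.ones(3) / 3
for _ in range(50):
    rank = L @ rank
    rank /= rank.sum()

print(rank)  # approximately [0.4 0.2 0.4]: each page's importance
             # is its component of the dominant eigenvector
```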

Looking Ahead

In the next chapter, Diagonalization, we will see the full payoff of eigenvectors. If we use eigenvectors as our basis, the transformation becomes diagonal—pure scaling along each axis with no mixing. This makes computing matrix powers trivial and reveals the long-term behavior of dynamical systems.
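
Here is a small preview of that payoff (a sketch; the next chapter develops it properly): with the eigenvalues and eigenvectors in hand, computing A^k reduces to powering the eigenvalues.

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

# Eigendecomposition: columns of P are eigenvectors, paired in order
# with the eigenvalues.
eigenvalues, P = np.linalg.eig(A)

k = 10
# A^k = P D^k P^(-1), and powering the diagonal matrix D just means
# powering each eigenvalue individually.
Dk = np.diag(eigenvalues ** k)
Ak = P @ Dk @ np.linalg.inv(P)

print(np.allclose(Ak, np.linalg.matrix_power(A, k)))  # True
```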

Think of a spinning globe: the axis stays fixed while everything else rotates. That axis is an eigenvector of the rotation. Eigenvectors are the "stable directions" that survive repeated transformation.

Key Takeaways

  • Eigenvectors are the special directions that only get scaled (not rotated) by a transformation
  • Eigenvalues are the scaling factors: how much an eigenvector stretches or shrinks
  • The defining equation is Av = λv: transformation equals scaling
  • Eigenvalues are found by solving det(A - λI) = 0, the characteristic equation
  • Eigenvectors reveal the natural axes of a transformation — the directions where it acts most simply
  • Negative eigenvalues flip the eigenvector; zero eigenvalues collapse it; complex eigenvalues mean every direction rotates, so no real eigenvectors exist