Eigenvectors and Eigenvalues
The axes that do not rotate under transformation
The Special Vectors
When a matrix transforms space, most vectors end up pointing in a completely different direction. They get rotated, sheared, or otherwise twisted around. But some vectors are special: they only get scaled. They stretch or shrink, but they stubbornly refuse to rotate. These are eigenvectors.
Think of it this way: if you apply a transformation to an eigenvector, the result is simply a scaled version of itself. The direction remains unchanged. The scaling factor — how much the vector stretches or shrinks — is called the eigenvalue.
Eigenvectors Stay on Their Line
The blue and red vectors stay on their dashed lines — they only stretch, never rotate.
Gray vectors rotate as well as stretch.
Watch the animation above. The gray vectors rotate as the transformation is applied, but the colored vectors stay along their dashed lines. They are eigenvectors of this transformation — the directions that the matrix does not rotate.
The Defining Equation
This special property — a vector that only gets scaled by a transformation — can be expressed as a simple equation. If A is a matrix and v is an eigenvector with eigenvalue λ (lambda), then:

Av = λv

On the left, we apply the transformation to the vector. On the right, we simply scale it by λ. The equation says these give the same result. The transformation does nothing more than scale.
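The defining equation is easy to check numerically. Here is a minimal sketch in Python with NumPy; the matrix and eigenpair are illustrative choices for this example, not values from the widgets in this chapter:

```python
import numpy as np

# Illustrative matrix for this sketch.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# Candidate eigenpair: v = (1, 0) with eigenvalue λ = 2.
v = np.array([1.0, 0.0])
lam = 2.0

# The defining equation: transforming v equals scaling v.
print(np.allclose(A @ v, lam * v))  # True

# A non-eigenvector fails: A @ (0, 1) = (1, 3), which leaves the vertical line.
w = np.array([0.0, 1.0])
print(np.allclose(A @ w, 3.0 * w))  # False
```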
All Vectors vs Eigenvectors
Click to pause. Watch how most vectors rotate, but vectors along the purple dashed lines only scale.
Notice how the circle of vectors deforms. Most vectors change their angle relative to the origin. But the vectors along the eigenvector directions (the purple dashed lines) only change in length, never in direction.
Geometric Meaning
Eigenvectors reveal the natural axes of a transformation. They are the directions along which the transformation acts most simply — by pure scaling. Every other direction experiences some combination of scaling and rotation.
This insight is remarkably powerful. Many complex transformations become simple when viewed from the perspective of their eigenvectors. A complicated shear or stretch becomes just a different scaling along different axes. This is why eigenvectors appear everywhere in science and engineering: they reveal the underlying structure of linear systems.
Interactive: Explore Different Matrices
[Interactive widget readout: λ₁ = 2.00 • λ₂ = 1.00]
Adjust the matrix entries above and watch how the eigenvectors change. Notice that the eigenvectors (and their transformed versions) always lie along the same dashed line — confirming they only scale. When the eigenvalues become complex (no real eigenvectors appear), it means the transformation rotates everything — no direction escapes rotation.
Eigenvalues: The Scaling Factors
The eigenvalue tells us exactly how much an eigenvector gets scaled. Different eigenvalues have different geometric meanings:
- λ > 1: The eigenvector stretches
- λ = 1: The eigenvector is unchanged
- 0 < λ < 1: The eigenvector shrinks
- λ = 0: The eigenvector collapses to zero
- λ < 0: The eigenvector flips direction and scales
How Eigenvalues Scale Eigenvectors
λ > 1: The eigenvector stretches
A negative eigenvalue is particularly interesting: the eigenvector flips to point in the opposite direction. It still stays on its line, but it reverses. An eigenvalue of zero means the transformation collapses that direction entirely — information is lost along that axis.
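Both cases are easy to see numerically. A short NumPy sketch (the matrices are illustrative): a reflection has eigenvalue -1 along the flipped axis, and a projection has eigenvalue 0 along the collapsed axis.

```python
import numpy as np

# Reflection across the y-axis: eigenvalue -1 for the horizontal direction.
R = np.array([[-1.0, 0.0],
              [ 0.0, 1.0]])
vx = np.array([1.0, 0.0])
print(R @ vx)  # [-1.  0.]  same line, reversed direction

# Projection onto the x-axis: eigenvalue 0 for the vertical direction.
P = np.array([[1.0, 0.0],
              [0.0, 0.0]])
vy = np.array([0.0, 1.0])
print(P @ vy)  # [0. 0.]  this direction collapses entirely
```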
Finding Eigenvalues
How do we find these special values? We start from the defining equation and rearrange:

Av = λv → Av - λv = 0 → (A - λI)v = 0

For a nonzero eigenvector v to satisfy this equation, the matrix (A - λI) must be singular — it must squash some nonzero vector down to zero. This happens exactly when its determinant is zero:

det(A - λI) = 0
This is the characteristic equation. Solving it gives us the eigenvalues. For a 2×2 matrix, this produces a quadratic equation. For larger matrices, we get higher-degree polynomials.
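You can watch the determinant hit zero exactly at the eigenvalues, the same computation the interactive performs. A sketch with an assumed example matrix whose eigenvalues are 2 and 3:

```python
import numpy as np

# Assumed example matrix with eigenvalues 2 and 3.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
I = np.eye(2)

# det(A - λI) vanishes only at the eigenvalues λ = 2 and λ = 3.
for lam in [1.0, 2.0, 2.5, 3.0, 4.0]:
    d = np.linalg.det(A - lam * I)
    print(f"λ = {lam}: det(A - λI) = {d:+.2f}")
```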
Finding Eigenvalues: The Characteristic Equation
[Interactive widget: displays the matrix A, the shifted matrix A - λI, and the value of det(A - λI) as you adjust λ]
Adjust λ until det(A - λI) = 0 to find eigenvalues (try λ = 2 or λ = 3)
Try adjusting λ until the determinant becomes zero. You will find the eigenvalues at λ = 2 and λ = 3 for this matrix. These are the values where the transformation has special directions that do not rotate.
The Characteristic Polynomial
For a 2×2 matrix with entries a, b, c, d (rows [a, b] and [c, d]), the characteristic equation expands to a quadratic:

λ² - (a + d)λ + (ad - bc) = 0

Notice something elegant: the coefficient of λ is the negative of the trace (sum of diagonal entries), and the constant term is the determinant. We can write:

λ² - tr(A)·λ + det(A) = 0
The trace and determinant — two fundamental properties of a matrix — completely determine its eigenvalues. This connection runs deep throughout linear algebra.
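That relationship means the eigenvalues of a 2×2 matrix follow from the trace and determinant alone, via the quadratic formula. A sketch with an assumed example matrix:

```python
import numpy as np

# Assumed example matrix; trace = 5, determinant = 6.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
tr = np.trace(A)
det = np.linalg.det(A)

# Roots of λ² - tr·λ + det = 0 via the quadratic formula.
disc = np.sqrt(tr**2 - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
print(lam1, lam2)                      # 3.0 2.0

# Matches the library eigenvalue solver.
print(np.sort(np.linalg.eigvals(A)))   # [2. 3.]
```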
Finding Eigenvectors
Once we know an eigenvalue λ, finding its eigenvector means solving:

(A - λI)v = 0
This is a system of linear equations with infinitely many solutions (any scalar multiple of an eigenvector is also an eigenvector). We typically pick a normalized eigenvector — one with length 1 — as the canonical representative.
For example, if A has rows [2, 1] and [0, 3], and λ = 2, we solve:

(A - 2I)v = 0, where A - 2I has rows [0, 1] and [0, 1]

This gives us y = 0 (with x free), so the eigenvector is any multiple of (1, 0). The horizontal direction is preserved by this transformation.
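NumPy's eig solver returns eigenvalues and eigenvectors together. A sketch with an assumed example matrix (eigenvalues 2 and 3), checking that the λ = 2 eigenvector lies along (1, 0):

```python
import numpy as np

# Assumed example matrix with eigenvalues 2 and 3.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

vals, vecs = np.linalg.eig(A)      # columns of vecs are unit eigenvectors
i = np.argmin(np.abs(vals - 2.0))  # pick the column paired with λ = 2
v = vecs[:, i]                     # equals ±(1, 0), up to sign

print(np.allclose(A @ v, 2.0 * v))  # True: satisfies the defining equation
print(abs(v[1]) < 1e-12)            # True: no vertical component
```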
Why Eigenvectors Matter
Eigenvectors and eigenvalues are not just mathematical curiosities. They unlock some of the most powerful techniques in applied mathematics:
Principal Component Analysis (PCA) uses eigenvectors to find the directions of maximum variance in data — the "most important" directions for understanding a dataset.
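A minimal PCA sketch with NumPy on synthetic data (the data, seed, and stretch direction are all illustrative): the top eigenvector of the covariance matrix recovers the direction the data was stretched along.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D data: stretch x by 3, then rotate everything by 45 degrees.
theta = np.pi / 4
Rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
X = rng.normal(size=(500, 2)) * [3.0, 1.0] @ Rot.T

# Principal components are the eigenvectors of the covariance matrix.
cov = np.cov(X, rowvar=False)
vals, vecs = np.linalg.eigh(cov)   # eigh: symmetric matrices, ascending order
pc1 = vecs[:, -1]                  # eigenvector with the largest eigenvalue

print(pc1)  # close to ±(0.707, 0.707), the 45-degree stretch direction
```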
Vibration analysis in engineering identifies eigenvectors as the natural modes of vibration. A bridge, a guitar string, or an airplane wing all vibrate most easily along their eigenvector directions.
Quantum mechanics is built on eigenvalue problems. The allowed energy levels of an electron are eigenvalues; the corresponding wavefunctions are eigenvectors.
Google's PageRank algorithm finds the eigenvector of a matrix representing the web's link structure. The importance of each page is determined by its eigenvector component.
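The idea can be sketched as power iteration on a toy link matrix. The three-page web below is hypothetical, and this omits the damping factor real PageRank uses; repeated multiplication converges to the dominant eigenvector, whose eigenvalue is 1 for a column-stochastic matrix.

```python
import numpy as np

# Hypothetical 3-page web. Column j holds where page j's links point,
# split evenly, so every column sums to 1 (column-stochastic).
M = np.array([[0.0, 0.5, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 0.5, 0.0]])

# Power iteration: start uniform, multiply repeatedly, renormalize.
r = np.ones(3) / 3
for _ in range(100):
    r = M @ r
    r /= r.sum()

print(np.allclose(M @ r, r))  # True: r is an eigenvector with eigenvalue 1
print(r)                      # each page's importance score
```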
Looking Ahead
In the next chapter, Diagonalization, we will see the full payoff of eigenvectors. If we use eigenvectors as our basis, the transformation becomes diagonal—pure scaling along each axis with no mixing. This makes computing matrix powers trivial and reveals the long-term behavior of dynamical systems.
Think of a spinning globe: the axis stays fixed while everything else rotates. That axis is an eigenvector of the rotation. Eigenvectors are the "stable directions" that survive repeated transformation.
Key Takeaways
- Eigenvectors are the special directions that only get scaled (not rotated) by a transformation
- Eigenvalues are the scaling factors: how much an eigenvector stretches or shrinks
- The defining equation is Av = λv — transformation equals scaling
- Eigenvalues are found by solving det(A - λI) = 0, the characteristic equation
- Eigenvectors reveal the natural axes of a transformation — the directions where it acts most simply
- Negative eigenvalues flip the eigenvector; zero eigenvalues collapse it; complex eigenvalues mean every direction rotates