Fourier Series
Decomposing functions into frequencies
In the previous chapters on the heat and wave equations, we encountered a recurring problem: how do we express an arbitrary initial condition as a sum of simple sine and cosine functions? The answer lies in one of the most beautiful and far-reaching ideas in mathematics: the Fourier series.
The premise is remarkable. Take any periodic function, no matter how complicated, no matter how jagged or smooth. You can write it as an infinite sum of sines and cosines. The square wave, the sawtooth, the triangle wave, even functions that jump discontinuously: all of them are secretly combinations of pure sine waves at different frequencies.
This is not just a mathematical curiosity. It is the foundation of signal processing, the key to solving partial differential equations, and a deep insight into the nature of periodic phenomena.
The Central Idea
Consider a periodic function $f(x)$ with period $2\pi$. The claim is that we can write:

$$f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left( a_n \cos(nx) + b_n \sin(nx) \right)$$
The constants $a_0$, $a_n$, and $b_n$ are called the Fourier coefficients. They tell us how much of each frequency is present in the original function.
The term $a_0/2$ is the average value of the function. The $n = 1$ terms oscillate at the fundamental frequency. The terms with $n \geq 2$ are the harmonics: oscillations at integer multiples of the fundamental frequency.
Interactive: Build a Fourier Approximation
The dashed line is the target function. The solid blue line shows the Fourier approximation. Add more terms to see the approximation improve. Notice the overshoot at discontinuities.
Watch what happens as you add more terms. The approximation gets better and better, capturing more detail of the original function. At discontinuities, something interesting happens: the approximation overshoots, never quite settling down to the jump. This is the Gibbs phenomenon, which we will discuss later.
Why Sines and Cosines?
What makes sines and cosines special? They are the eigenfunctions of the second derivative operator with periodic boundary conditions. When you solve separation of variables for the heat or wave equation, sines and cosines emerge naturally.
But there is a deeper reason. Sines and cosines are orthogonal to each other in a precise sense. Just as perpendicular vectors in space have a dot product of zero, different sine and cosine functions have an integral product of zero over a period.
Orthogonality: The Key Insight
Consider the inner product of two functions $f$ and $g$ over the interval $[-\pi, \pi]$:

$$\langle f, g \rangle = \int_{-\pi}^{\pi} f(x)\, g(x)\, dx$$
The remarkable fact is that the functions $\sin(nx)$ and $\sin(mx)$ satisfy:

$$\int_{-\pi}^{\pi} \sin(nx)\sin(mx)\, dx = \begin{cases} 0 & \text{if } n \neq m \\ \pi & \text{if } n = m \end{cases}$$

The same relation holds for $\cos(nx)$ and $\cos(mx)$, and every sine is orthogonal to every cosine: $\int_{-\pi}^{\pi} \sin(nx)\cos(mx)\, dx = 0$.
Proof of orthogonality for sines: Use the product-to-sum identity:

$$\sin A \sin B = \tfrac{1}{2}\left[ \cos(A - B) - \cos(A + B) \right]$$

With $A = nx$ and $B = mx$:

$$\int_{-\pi}^{\pi} \sin(nx)\sin(mx)\, dx = \frac{1}{2} \int_{-\pi}^{\pi} \left[ \cos\big((n-m)x\big) - \cos\big((n+m)x\big) \right] dx$$

When $n \neq m$, both cosine terms integrate to zero over a full period (the positive and negative areas cancel exactly). When $n = m$, the first term becomes $\cos(0) = 1$, giving $\tfrac{1}{2} \cdot 2\pi = \pi$, while the second term still integrates to zero.
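The orthogonality relations above can be spot-checked numerically. This is a minimal sketch, assuming NumPy is available; `inner` is an illustrative helper, not something defined in the chapter:

```python
import numpy as np

def inner(f, g, n=100_000):
    """Approximate <f, g> = integral of f(x) * g(x) over [-pi, pi] (midpoint rule)."""
    h = 2 * np.pi / n
    x = -np.pi + (np.arange(n) + 0.5) * h
    return np.sum(f(x) * g(x)) * h

# Different frequencies: positive and negative areas cancel, integral is 0.
cross = inner(lambda x: np.sin(2 * x), lambda x: np.sin(3 * x))

# Same frequency: the product sin^2(2x) is non-negative, integral is pi.
same = inner(lambda x: np.sin(2 * x), lambda x: np.sin(2 * x))
```

Because the integrands are smooth and periodic, even a simple midpoint rule over one full period is extremely accurate here.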
Interactive: Orthogonality of Sine Functions
When n and m are different, the green and red shaded areas cancel perfectly, giving an integral of zero. This orthogonality is what makes the Fourier coefficient formulas work: each sine and cosine picks out only its own coefficient from the sum.
When $n \neq m$, the positive and negative areas of the product cancel exactly. When $n = m$, the product $\sin^2(nx)$ is always non-negative, so the integral is positive.
This orthogonality is analogous to orthogonality in linear algebra. The functions $\{1, \cos(nx), \sin(nx)\}$ form an orthogonal basis for the space of periodic functions, just as $\{\mathbf{e}_1, \ldots, \mathbf{e}_n\}$ form an orthogonal basis for $\mathbb{R}^n$.
Deriving the Coefficient Formulas
The orthogonality relations give us a direct method to compute the Fourier coefficients. If we multiply both sides of the Fourier series by $\sin(mx)$ and integrate from $-\pi$ to $\pi$, the orthogonality kills all terms except the one where $n = m$.
Starting with:

$$f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left( a_n \cos(nx) + b_n \sin(nx) \right)$$

Multiply by $\sin(mx)$ and integrate:

$$\int_{-\pi}^{\pi} f(x)\sin(mx)\, dx = b_m \int_{-\pi}^{\pi} \sin^2(mx)\, dx = \pi b_m$$

All other terms vanish due to orthogonality. Solving for $b_m$ and relabeling $m$ as $n$:

$$b_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x)\sin(nx)\, dx$$

Similarly, multiplying by $\cos(mx)$ and integrating:

$$a_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x)\cos(nx)\, dx$$

And integrating the original equation directly gives:

$$a_0 = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x)\, dx$$
These are the Euler-Fourier formulas. They tell us exactly how to decompose any periodic function into its frequency components.
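The Euler-Fourier formulas translate directly into code. The sketch below (NumPy assumed; `fourier_coeffs` is an illustrative name) approximates the integrals with a midpoint rule and checks them against the known expansion $x^2 = \pi^2/3 + 4\sum_{n\geq 1} (-1)^n \cos(nx)/n^2$:

```python
import numpy as np

def fourier_coeffs(f, n_max, samples=100_000):
    """Euler-Fourier formulas on [-pi, pi], evaluated by midpoint quadrature."""
    h = 2 * np.pi / samples
    x = -np.pi + (np.arange(samples) + 0.5) * h
    fx = f(x)
    a0 = np.sum(fx) * h / np.pi
    a = [np.sum(fx * np.cos(n * x)) * h / np.pi for n in range(1, n_max + 1)]
    b = [np.sum(fx * np.sin(n * x)) * h / np.pi for n in range(1, n_max + 1)]
    return a0, a, b

# For f(x) = x^2: a_0 = 2*pi^2/3, a_n = 4*(-1)^n / n^2, and all b_n = 0.
a0, a, b = fourier_coeffs(lambda x: x**2, 5)
```

Since $x^2$ is even, every sine coefficient vanishes by symmetry, which the quadrature reproduces to roundoff.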
Interactive: Computing Fourier Coefficients
The coefficient $b_n$ measures how much of the $\sin(nx)$ component is in $f(x)$. We find it by integrating $f(x)\sin(nx)$ over one period.
The coefficient $b_n$ measures how much the function "resembles" $\sin(nx)$. It is the projection of $f$ onto the $\sin(nx)$ direction in function space. The integral computes this projection by multiplying and integrating, exactly as the dot product does for vectors.
The Square Wave: A Classic Example
The square wave is perhaps the most famous example. Define $f(x) = 1$ for $0 < x < \pi$ and $f(x) = -1$ for $-\pi < x < 0$, with period $2\pi$.
Since the square wave is an odd function ($f(-x) = -f(x)$), all the cosine coefficients vanish: $a_n = 0$ for all $n$.
For the sine coefficients:

$$b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin(nx)\, dx = \frac{2}{\pi}\int_0^{\pi} \sin(nx)\, dx$$

Working out the integral:

$$b_n = \frac{2}{\pi}\left[ -\frac{\cos(nx)}{n} \right]_0^{\pi} = \frac{2}{n\pi}\left( 1 - \cos(n\pi) \right) = \frac{2}{n\pi}\left( 1 - (-1)^n \right)$$

When $n$ is even, $(-1)^n = 1$, so $b_n = 0$.

When $n$ is odd, $(-1)^n = -1$, so $b_n = \dfrac{4}{n\pi}$.
The Fourier series for the square wave is:

$$f(x) = \frac{4}{\pi}\left( \sin x + \frac{\sin 3x}{3} + \frac{\sin 5x}{5} + \cdots \right) = \frac{4}{\pi}\sum_{k=0}^{\infty} \frac{\sin\big((2k+1)x\big)}{2k+1}$$

Only odd harmonics appear, and each has amplitude decreasing as $1/n$.
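These partial sums are easy to evaluate directly. A minimal sketch (NumPy assumed; `square_partial_sum` is an illustrative name):

```python
import numpy as np

def square_partial_sum(x, n_terms):
    """(4/pi) * sum of sin(n x)/n over the first n_terms odd harmonics."""
    total = 0.0
    for k in range(n_terms):
        n = 2 * k + 1
        total += np.sin(n * x) / n
    return 4 / np.pi * total

mid = square_partial_sum(np.pi / 2, 500)   # square wave value here is 1
jump = square_partial_sum(0.0, 500)        # at the jump: every sin term is 0
```

At $x = \pi/2$ the sum is an alternating series, so the error is bounded by the first omitted term; at the jump $x = 0$ every partial sum is exactly zero, the average of the left and right limits $-1$ and $1$.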
Interactive: Square Wave and Gibbs Phenomenon
The Gibbs Phenomenon
At jump discontinuities, the Fourier series always overshoots by about 9%, no matter how many terms we add. The overshoot does not shrink; it just gets narrower and concentrates at the discontinuity. This is a fundamental property of Fourier series at points where the function jumps.
The Gibbs Phenomenon
No matter how many terms we include in the partial sum, the Fourier series never converges uniformly near a discontinuity. The overshoot near a jump is always about 9% of the jump height.
This is not a failure of the theory. The Fourier series does converge to the function at every point where the function is continuous. At discontinuities, it converges to the average of the left and right limits. The 9% overshoot is a fundamental property of trying to approximate a discontinuous function with continuous sine waves.
The Gibbs phenomenon was discovered by Henry Wilbraham in 1848 but is named after Josiah Willard Gibbs, who analyzed it in 1899. The overshoot, as a fraction of the jump height, is:

$$\frac{1}{\pi}\int_0^{\pi} \frac{\sin t}{t}\, dt - \frac{1}{2} \approx 0.0895$$
About 8.95% overshoot above the target value.
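This constant can be computed by quadrature and compared against the actual peak of a long partial sum of the square wave, which jumps by 2 at $x = 0$ and should therefore peak near $1 + 2 \times 0.0895 \approx 1.179$. A minimal numerical sketch (NumPy assumed):

```python
import numpy as np

# Gibbs constant: (1/pi) * integral of sin(t)/t over [0, pi], minus 1/2.
n = 1_000_000
h = np.pi / n
t = (np.arange(n) + 0.5) * h
overshoot = np.sum(np.sin(t) / t) * h / np.pi - 0.5

# Peak of a 200-term partial sum of the square wave, near the jump at x = 0.
x = np.linspace(1e-5, 0.05, 50_000)
s = np.zeros_like(x)
for k in range(200):            # odd harmonics 1, 3, ..., 399
    m = 2 * k + 1
    s += np.sin(m * x) / m
peak = float(np.max(4 / np.pi * s))
```

The peak does not approach 1 as terms are added; it stays pinned near 1.179, only moving closer to the jump.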
The Sawtooth Wave
The sawtooth wave $f(x) = x$ on $(-\pi, \pi)$, extended periodically, provides another instructive example:

$$f(x) = 2\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n}\sin(nx) = 2\left( \sin x - \frac{\sin 2x}{2} + \frac{\sin 3x}{3} - \cdots \right)$$
Unlike the square wave, all harmonics are present, with alternating signs. The amplitude still decays as $1/n$, giving the same slow convergence.
Interactive: Sawtooth Wave Fourier Series
The sawtooth wave uses all harmonics, with alternating signs. The amplitude decreases as 1/n, so convergence is slower than for the triangle wave (which decreases as 1/n²). Notice the Gibbs phenomenon at the jumps.
Convergence: How Good is the Approximation?
Fourier series convergence is a subtle topic with several types of convergence to consider.
Pointwise convergence: For a piecewise smooth $f$, the Fourier series converges to $f(x)$ at every point where $f$ is continuous. At a jump discontinuity, it converges to the average of the left and right limits.
Uniform convergence: If $f$ is continuous and periodic, and if $f'$ is piecewise continuous, then the Fourier series converges uniformly to $f$.
$L^2$ convergence: The Fourier series always converges in the mean-square sense:

$$\lim_{N\to\infty} \int_{-\pi}^{\pi} \left| f(x) - S_N(x) \right|^2 dx = 0$$

where $S_N$ is the $N$-th partial sum. This is called convergence in $L^2$ norm, and it holds for any square-integrable function.
The rate of convergence depends on the smoothness of $f$:
- If $f$ has a jump discontinuity, coefficients decay as $1/n$
- If $f$ is continuous but $f'$ has jumps, coefficients decay as $1/n^2$
- If $f$ has $k$ continuous derivatives, coefficients decay as $1/n^{k+2}$
Smoother functions have Fourier series that converge faster.
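These decay rates can be observed numerically. In the sketch below (NumPy assumed; `coeff` is an illustrative helper), $n\,b_n$ is roughly constant for the square wave (a jump, so $1/n$ decay), while $n^2 a_n$ is roughly constant for the triangle wave $f(x) = |x|$ (continuous with a corner, so $1/n^2$ decay):

```python
import numpy as np

def coeff(f, n, kind, samples=200_000):
    """n-th Fourier coefficient of f on [-pi, pi], by midpoint quadrature."""
    h = 2 * np.pi / samples
    x = -np.pi + (np.arange(samples) + 0.5) * h
    basis = np.sin(n * x) if kind == "sin" else np.cos(n * x)
    return np.sum(f(x) * basis) * h / np.pi

square = lambda x: np.sign(np.sin(x))    # jump discontinuity
triangle = lambda x: np.abs(x)           # continuous, corner at 0

sq = [n * coeff(square, n, "sin") for n in (1, 3, 5, 7)]      # ~ 4/pi each
tr = [n**2 * coeff(triangle, n, "cos") for n in (1, 3, 5, 7)]  # ~ -4/pi each
```

The square wave gives $b_n = 4/(n\pi)$ for odd $n$, and the triangle wave gives $a_n = -4/(n^2\pi)$ for odd $n$, so both rescaled lists are flat.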
Interactive: Watch Partial Sums Converge
Compare the square wave (slow 1/n convergence) with the triangle wave (faster 1/n² convergence). The triangle wave's smoothness means its Fourier series catches up to the target much more quickly.
Connection to PDEs
In the heat equation chapter, we solved:

$$\frac{\partial u}{\partial t} = \alpha \frac{\partial^2 u}{\partial x^2}$$

with $u(0, t) = u(L, t) = 0$ and initial condition $u(x, 0) = f(x)$.

Separation of variables gave us solutions of the form $e^{-\alpha (n\pi/L)^2 t} \sin(n\pi x/L)$. The general solution is a superposition:

$$u(x, t) = \sum_{n=1}^{\infty} b_n\, e^{-\alpha (n\pi/L)^2 t} \sin\!\left( \frac{n\pi x}{L} \right)$$

The coefficients $b_n$ are determined by the initial condition $u(x, 0) = f(x)$. They are precisely the Fourier sine coefficients of $f$:

$$b_n = \frac{2}{L} \int_0^L f(x) \sin\!\left( \frac{n\pi x}{L} \right) dx$$
This is why Fourier series matter for differential equations. They allow us to match arbitrary initial conditions to the eigenfunctions of the differential operator.
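The whole procedure fits in a few lines: compute the sine coefficients of the initial condition, then attach each mode's exponential decay factor. A minimal sketch (NumPy assumed; the function names and the values of `L` and `alpha` are placeholders, not the chapter's):

```python
import numpy as np

L, alpha = 1.0, 0.1

def sine_coeffs(f, n_max, samples=100_000):
    """b_n = (2/L) * integral of f(x) sin(n pi x / L) over [0, L] (midpoint rule)."""
    h = L / samples
    x = (np.arange(samples) + 0.5) * h
    return np.array([2 / L * np.sum(f(x) * np.sin(n * np.pi * x / L)) * h
                     for n in range(1, n_max + 1)])

def heat_solution(f, x, t, n_max=50):
    """u(x,t) = sum_n b_n exp(-alpha (n pi / L)^2 t) sin(n pi x / L)."""
    b = sine_coeffs(f, n_max)
    u = np.zeros_like(np.asarray(x, dtype=float))
    for n in range(1, n_max + 1):
        u += b[n - 1] * np.exp(-alpha * (n * np.pi / L)**2 * t) * np.sin(n * np.pi * x / L)
    return u

# Single-mode initial condition: only b_1 = 1 survives, so the solution
# is exactly exp(-alpha pi^2 t / L^2) * sin(pi x / L).
x = np.linspace(0, L, 11)
u = heat_solution(lambda s: np.sin(np.pi * s / L), x, t=0.5)
exact = np.exp(-alpha * np.pi**2 * 0.5 / L**2) * np.sin(np.pi * x / L)
```

With a single-mode initial condition the series collapses to one term, which makes the result easy to verify against the closed form.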
Parseval's Theorem
A beautiful result connects the energy of a function to its Fourier coefficients:

$$\frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)^2\, dx = \left(\frac{a_0}{2}\right)^2 + \frac{1}{2}\sum_{n=1}^{\infty}\left( a_n^2 + b_n^2 \right)$$

The left side is the average of $f(x)^2$, a measure of the function's "energy." The right side is a sum of squared coefficients. Parseval's theorem says these are equal.
This is analogous to the Pythagorean theorem: the square of the length of a vector equals the sum of squares of its components in an orthonormal basis. The Fourier coefficients are the components of $f$ in the orthogonal basis of sines and cosines.
Beyond Periodic Functions: Fourier Sine and Cosine Series
Not all functions are naturally periodic. For functions defined on a finite interval $[0, L]$, we can use either:

Fourier sine series (for odd extensions):

$$f(x) = \sum_{n=1}^{\infty} b_n \sin\!\left(\frac{n\pi x}{L}\right), \qquad b_n = \frac{2}{L}\int_0^L f(x)\sin\!\left(\frac{n\pi x}{L}\right) dx$$

Fourier cosine series (for even extensions):

$$f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos\!\left(\frac{n\pi x}{L}\right), \qquad a_n = \frac{2}{L}\int_0^L f(x)\cos\!\left(\frac{n\pi x}{L}\right) dx$$

The sine series is appropriate when $f(0) = f(L) = 0$ (Dirichlet boundary conditions). The cosine series is appropriate when $f'(0) = f'(L) = 0$ (Neumann boundary conditions).
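As a quick check of the sine-series formula, take $f(x) = x(L - x)$ on $[0, L]$, which satisfies $f(0) = f(L) = 0$; its sine coefficients work out to $b_n = 8L^2/(n^3\pi^3)$ for odd $n$ and $0$ for even $n$. A minimal sketch (NumPy assumed; `sine_coeff` is an illustrative helper):

```python
import numpy as np

L = 2.0

def sine_coeff(f, n, samples=100_000):
    """b_n = (2/L) * integral of f(x) sin(n pi x / L) over [0, L] (midpoint rule)."""
    h = L / samples
    x = (np.arange(samples) + 0.5) * h
    return 2 / L * np.sum(f(x) * np.sin(n * np.pi * x / L)) * h

f = lambda x: x * (L - x)       # satisfies f(0) = f(L) = 0
b1 = sine_coeff(f, 1)           # expected: 8 L^2 / pi^3
b2 = sine_coeff(f, 2)           # expected: 0 (even coefficients vanish)
```

The even coefficients vanish because $x(L-x)$ is symmetric about $x = L/2$ while $\sin(2\pi x/L)$ is antisymmetric about it.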
Applications
Fourier series appear throughout science and engineering:
Signal processing: Any periodic signal (sound, electromagnetic waves, seismic data) can be decomposed into frequency components. This is the basis of audio compression, noise filtering, and spectrum analysis.
Quantum mechanics: The wave function of a particle in a box is a superposition of sine functions, exactly the Fourier sine series.
Heat transfer: Temperature distributions in periodic geometries are naturally expressed as Fourier series.
Music and acoustics: The timbre of a musical instrument depends on which harmonics are present and their relative amplitudes. A pure sine wave sounds like a tuning fork; a square wave sounds harsh and buzzy.
Image processing: Two-dimensional Fourier analysis is the foundation of image compression (JPEG) and medical imaging (MRI, CT scans).
Key Takeaways
- Any periodic function can be written as a sum of sines and cosines: the Fourier series
- The Fourier coefficients are found by projecting onto each basis function using integration
- Orthogonality of sines and cosines makes the coefficient formulas work: each basis function picks out only its own coefficient
- At discontinuities, Fourier series exhibit the Gibbs phenomenon: about 9% overshoot that never goes away, only gets narrower
- Smoother functions have faster-converging Fourier series; discontinuities cause slow decay
- Fourier series are the key to solving PDEs with arbitrary initial conditions by expressing them in the natural eigenbasis
- Parseval's theorem connects the energy of a function to the sum of squared coefficients
The Journey Completes
This chapter brings our study of differential equations to a close. We began with the simple question of what it means to have an equation involving derivatives. We learned techniques for first-order equations, then second-order equations. We explored systems, phase portraits, transforms, and series solutions. Finally, we stepped into partial differential equations, where functions depend on multiple variables.
Fourier series sit at the intersection of many of these ideas. They use integration (first-order calculus), they arise from eigenvalue problems (second-order equations), they connect to linear algebra through orthogonality, and they are essential for solving PDEs.
The deeper insight is this: sines and cosines are the natural basis for periodic phenomena, just as exponentials are the natural basis for growth and decay. When you decompose a function into frequencies, you are revealing its fundamental structure.
From here, the journey continues in many directions. Numerical methods for PDEs, more complex boundary conditions, nonlinear equations, applications in physics and engineering: the foundations we have built support all of them.
Mathematics has a way of revealing unexpected connections. The same Fourier series that Fourier invented to study heat conduction now underpin everything from music streaming to medical imaging to quantum mechanics. The simple building blocks of sines and cosines, combined with the elegant formulas for their coefficients, create a universal language for describing periodic phenomena.
That is the power of mathematics: simple ideas, carefully developed, reaching far beyond their origins.