Fourier Series

Decomposing functions into frequencies

In the previous chapters on the heat and wave equations, we encountered a recurring problem: how do we express an arbitrary initial condition as a sum of simple sine and cosine functions? The answer lies in one of the most beautiful and far-reaching ideas in mathematics: the Fourier series.

The premise is remarkable. Take any periodic function, no matter how complicated, no matter how jagged or smooth. You can write it as an infinite sum of sines and cosines. The square wave, the sawtooth, the triangle wave, even functions that jump discontinuously: all of them are secretly combinations of pure sine waves at different frequencies.

This is not just a mathematical curiosity. It is the foundation of signal processing, the key to solving partial differential equations, and a deep insight into the nature of periodic phenomena.

The Central Idea

Consider a periodic function f(x) with period 2L. The claim is that we can write:

f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left[ a_n \cos\left(\frac{n\pi x}{L}\right) + b_n \sin\left(\frac{n\pi x}{L}\right) \right]

The constants a_0, a_n, and b_n are called the Fourier coefficients. They tell us how much of each frequency is present in the original function.

The term \frac{a_0}{2} is the average value of the function. The n = 1 terms form the fundamental frequency; the terms with n = 2, 3, 4, \ldots are the harmonics, oscillations at integer multiples of the fundamental frequency.

Interactive: Build a Fourier Approximation

The partial sum of the square-wave series with N terms is

f(x) \approx \frac{4}{\pi}\sum_{k=1}^{N} \frac{\sin((2k-1)x)}{2k-1}

The dashed line is the target function. The solid blue line shows the Fourier approximation. Add more terms to see the approximation improve. Notice the overshoot at discontinuities.

Watch what happens as you add more terms. The approximation gets better and better, capturing more detail of the original function. At discontinuities, something interesting happens: the approximation overshoots, never quite settling down to the jump. This is the Gibbs phenomenon, which we will discuss later.

Why Sines and Cosines?

What makes sines and cosines special? They are the eigenfunctions of the second derivative operator with periodic boundary conditions. When you solve separation of variables for the heat or wave equation, sines and cosines emerge naturally.

But there is a deeper reason. Sines and cosines are orthogonal to each other in a precise sense. Just as perpendicular vectors in space have a dot product of zero, different sine and cosine functions have an integral product of zero over a period.

Orthogonality: The Key Insight

Consider the inner product of two functions over the interval [-L, L]:

\langle f, g \rangle = \int_{-L}^{L} f(x) g(x) \, dx

The remarkable fact is that the functions \sin\left(\frac{n\pi x}{L}\right) and \cos\left(\frac{m\pi x}{L}\right) satisfy:

\int_{-L}^{L} \sin\left(\frac{n\pi x}{L}\right) \sin\left(\frac{m\pi x}{L}\right) dx = \begin{cases} L & \text{if } n = m \\ 0 & \text{if } n \neq m \end{cases}

\int_{-L}^{L} \cos\left(\frac{n\pi x}{L}\right) \cos\left(\frac{m\pi x}{L}\right) dx = \begin{cases} L & \text{if } n = m \neq 0 \\ 2L & \text{if } n = m = 0 \\ 0 & \text{if } n \neq m \end{cases}

\int_{-L}^{L} \sin\left(\frac{n\pi x}{L}\right) \cos\left(\frac{m\pi x}{L}\right) dx = 0 \quad \text{for all } n, m

Proof of orthogonality for sines: Use the product-to-sum identity:

\sin A \sin B = \frac{1}{2}[\cos(A - B) - \cos(A + B)]

With A = \frac{n\pi x}{L} and B = \frac{m\pi x}{L}:

\int_{-L}^{L} \sin\frac{n\pi x}{L} \sin\frac{m\pi x}{L} \, dx = \frac{1}{2} \int_{-L}^{L} \left[ \cos\frac{(n-m)\pi x}{L} - \cos\frac{(n+m)\pi x}{L} \right] dx

When n \neq m, both cosine terms integrate to zero over a full period (the positive and negative areas cancel exactly). When n = m, the first term becomes \cos(0) = 1, giving \frac{1}{2} \cdot 2L = L, while the second term still integrates to zero.
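The orthogonality relations are easy to check numerically. The sketch below is an illustration of my own (not part of the text's derivation): it approximates the inner product with a simple midpoint-rule integral and tests two sine functions on [-\pi, \pi].

```python
import math

def inner(f, g, L=math.pi, steps=20_000):
    """Midpoint-rule approximation of <f, g> = integral of f*g over [-L, L]."""
    dx = 2 * L / steps
    return sum(f(-L + (i + 0.5) * dx) * g(-L + (i + 0.5) * dx)
               for i in range(steps)) * dx

sin2 = lambda x: math.sin(2 * x)
sin3 = lambda x: math.sin(3 * x)

print(inner(sin2, sin3))  # different frequencies: essentially 0
print(inner(sin2, sin2))  # same frequency: essentially L = pi
```

The first integral vanishes to numerical precision, while the second returns L, exactly as the boxed relations predict.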

Interactive: Orthogonality of Sine Functions

As a concrete case, take \sin(2x) and \sin(3x):

\int_{-\pi}^{\pi} \sin(2x)\sin(3x)\,dx = 0

Orthogonal: the positive and negative areas of the product \sin(2x)\sin(3x) cancel exactly.

When n and m are different, the green and red shaded areas cancel perfectly, giving an integral of zero. This orthogonality is what makes the Fourier coefficient formulas work: each sine and cosine picks out only its own coefficient from the sum.

When n \neq m, the positive and negative areas of the product \sin(nx)\sin(mx) cancel exactly. When n = m, the product \sin^2(nx) is always non-negative, so the integral is positive.

This orthogonality is analogous to orthogonality in linear algebra. The functions \{1, \cos(x), \sin(x), \cos(2x), \sin(2x), \ldots\} form an orthogonal basis for the space of periodic functions, just as \{\hat{i}, \hat{j}, \hat{k}\} form an orthogonal basis for \mathbb{R}^3.

Deriving the Coefficient Formulas

The orthogonality relations give us a direct method to compute the Fourier coefficients. If we multiply both sides of the Fourier series by \cos\left(\frac{m\pi x}{L}\right) and integrate from -L to L, the orthogonality kills all terms except the one where n = m.

Starting with:

f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left[ a_n \cos\left(\frac{n\pi x}{L}\right) + b_n \sin\left(\frac{n\pi x}{L}\right) \right]

Multiply by \cos\left(\frac{m\pi x}{L}\right) and integrate:

\int_{-L}^{L} f(x) \cos\left(\frac{m\pi x}{L}\right) dx = a_m \cdot L

All other terms vanish due to orthogonality. Solving for a_m and relabeling the index m as n:

a_n = \frac{1}{L} \int_{-L}^{L} f(x) \cos\left(\frac{n\pi x}{L}\right) dx

Similarly, multiplying by \sin\left(\frac{m\pi x}{L}\right) and integrating:

b_n = \frac{1}{L} \int_{-L}^{L} f(x) \sin\left(\frac{n\pi x}{L}\right) dx

And integrating the original equation directly gives:

a_0 = \frac{1}{L} \int_{-L}^{L} f(x) \, dx

These are the Euler-Fourier formulas. They tell us exactly how to decompose any periodic function into its frequency components.
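The Euler-Fourier formulas translate directly into code. The sketch below (an illustration, with the midpoint-rule integration and the square-wave test function as my own choices) approximates the coefficient integrals numerically; the square wave is convenient because its coefficients are known in closed form, b_n = 4/(n\pi) for odd n, as derived later in this chapter.

```python
import math

def fourier_coefficients(f, n_max, L=math.pi, steps=20_000):
    """Approximate a_0..a_{n_max} and b_1..b_{n_max} via the Euler-Fourier formulas,
    using a midpoint-rule integral over [-L, L]."""
    dx = 2 * L / steps
    xs = [-L + (i + 0.5) * dx for i in range(steps)]
    a = [(1 / L) * sum(f(x) * math.cos(n * math.pi * x / L) for x in xs) * dx
         for n in range(n_max + 1)]
    # b[0] is a placeholder so that b[n] is the coefficient of sin(n pi x / L).
    b = [0.0] + [(1 / L) * sum(f(x) * math.sin(n * math.pi * x / L) for x in xs) * dx
                 for n in range(1, n_max + 1)]
    return a, b

square = lambda x: 1.0 if x > 0 else -1.0
a, b = fourier_coefficients(square, 5)
print([round(v, 4) for v in b[1:]])  # b_1 ~ 4/pi, b_2 ~ 0, b_3 ~ 4/(3 pi), ...
```

Because the square wave is odd, every a_n comes out (numerically) zero, and only the odd-indexed b_n survive.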

Interactive: Computing Fourier Coefficients

For the square wave as the target function, the first sine coefficient is

b_1 = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin(x)\,dx = \frac{4}{\pi} \approx 1.2732

The plot shows the target function f(x), the basis function \sin(x), and their product f(x)\sin(x); integrating the product over one period yields the coefficient.

The coefficient b_n measures how much the function f(x) "resembles" \sin(nx). It is the projection of f onto the \sin(nx) direction in function space. The integral computes this projection by multiplying and integrating, exactly as the dot product does for vectors.

The Square Wave: A Classic Example

The square wave is perhaps the most famous example. Define f(x) = 1 for 0 < x < \pi and f(x) = -1 for -\pi < x < 0, with period 2\pi.

Since the square wave is an odd function (f(-x) = -f(x)), all the cosine coefficients vanish: a_n = 0 for all n.

For the sine coefficients:

b_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \sin(nx) \, dx = \frac{2}{\pi} \int_0^{\pi} \sin(nx) \, dx

Working out the integral:

b_n = \frac{2}{\pi} \left[ -\frac{\cos(nx)}{n} \right]_0^{\pi} = \frac{2}{n\pi} (1 - \cos(n\pi))

When n is even, \cos(n\pi) = 1, so b_n = 0.

When n is odd, \cos(n\pi) = -1, so b_n = \frac{4}{n\pi}.

The Fourier series for the square wave is:

f(x) = \frac{4}{\pi} \left( \sin x + \frac{\sin 3x}{3} + \frac{\sin 5x}{5} + \frac{\sin 7x}{7} + \cdots \right)

Only odd harmonics appear. Each term's amplitude decreases as 1/n.
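As a quick sanity check on this series (a sketch of my own, not part of the derivation), we can evaluate the partial sums at x = \pi/2, where the square wave equals 1:

```python
import math

def square_partial_sum(x, n_terms):
    """S_N(x) = (4/pi) * sum_{k=1}^{N} sin((2k-1)x) / (2k-1)."""
    return (4 / math.pi) * sum(math.sin((2 * k - 1) * x) / (2 * k - 1)
                               for k in range(1, n_terms + 1))

# At x = pi/2 the square wave equals 1; the partial sums close in on that value.
for n in (1, 5, 50):
    print(n, round(square_partial_sum(math.pi / 2, n), 4))
```

At x = \pi/2 the series reduces to the Leibniz series for \pi/4 (scaled by 4/\pi), so the partial sums converge to 1, though only at the slow 1/n rate.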

Interactive: Square Wave and Gibbs Phenomenon

The Gibbs Phenomenon

At jump discontinuities, the partial sums always overshoot, and the overshoot does not shrink as terms are added; it only gets narrower, concentrating at the discontinuity. No matter how many terms we include, the Fourier series never converges uniformly near a jump: the overshoot is always about 9% of the jump height.

This is not a failure of the theory. The Fourier series converges to the function at every point where the function is continuous; at a discontinuity, it converges to the average of the left and right limits. The persistent overshoot is a fundamental consequence of approximating a discontinuous function with continuous sine waves.

The Gibbs phenomenon was first observed by Henry Wilbraham in 1848 but is named after Josiah Willard Gibbs, who analyzed it in 1899. Near a jump, the partial sums peak at

\frac{2}{\pi} \int_0^{\pi} \frac{\sin t}{t} \, dt \approx 1.1790

times half the jump height: an overshoot of about 8.95% of the full jump above the target value.
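The overshoot can be measured directly from the partial sums. The sketch below (my own numerical check, using the square-wave series from earlier in the chapter) scans a fine grid just to the right of the jump at x = 0:

```python
import math

def square_partial_sum(x, n_terms):
    """S_N(x) = (4/pi) * sum_{k=1}^{N} sin((2k-1)x) / (2k-1)."""
    return (4 / math.pi) * sum(math.sin((2 * k - 1) * x) / (2 * k - 1)
                               for k in range(1, n_terms + 1))

n = 200
# The first peak sits near x = pi/(2n); scan a fine grid around it.
grid = [j * math.pi / (100 * n) for j in range(1, 400)]
peak = max(square_partial_sum(x, n) for x in grid)
print(round(peak, 3))  # near 1.179, even though the target value is 1
```

Increasing n moves the peak closer to the jump but does not lower it; it stays pinned near 1.179, which is the Gibbs constant at work.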

The Sawtooth Wave

The sawtooth wave f(x) = \frac{x}{\pi} on (-\pi, \pi) provides another instructive example:

f(x) = \frac{2}{\pi} \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n} \sin(nx)

Unlike the square wave, all harmonics are present, with alternating signs. The amplitude still decays as 1/n, giving the same slow convergence.
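This series is also easy to verify numerically. The sketch below (an illustration of my own) evaluates the partial sums at an interior point and at the jump x = \pi, where the series should return the average of the one-sided limits 1 and -1:

```python
import math

def sawtooth_partial(x, n_terms):
    """Partial sum of (2/pi) * sum (-1)^(n+1) sin(nx)/n, the series for x/pi."""
    return (2 / math.pi) * sum((-1) ** (n + 1) * math.sin(n * x) / n
                               for n in range(1, n_terms + 1))

# Away from the jump the series tracks x/pi; at the jump it converges
# to the average of the two one-sided limits, which is 0.
print(round(sawtooth_partial(1.0, 500), 4))  # close to 1/pi
print(sawtooth_partial(math.pi, 500))        # ~ 0 (up to floating-point error)
```

The value at x = \pi illustrates the general rule quoted later: at a discontinuity, the Fourier series converges to the midpoint of the jump.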

Interactive: Sawtooth Wave Fourier Series

f(x) = \frac{x}{\pi} = \frac{2}{\pi}\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n}\sin(nx)

The first few terms are \frac{2\sin(x)}{\pi}, -\frac{\sin(2x)}{\pi}, \frac{2\sin(3x)}{3\pi}, -\frac{\sin(4x)}{2\pi}, \ldots

The sawtooth wave uses all harmonics, with alternating signs. The amplitude decreases as 1/n, so convergence is slower than for the triangle wave (which decreases as 1/n^2). Notice the Gibbs phenomenon at the jumps.

Convergence: How Good is the Approximation?

Fourier series convergence is a subtle topic with several types of convergence to consider.

Pointwise convergence: At each point where f is continuous, the Fourier series converges to f(x). At a jump discontinuity, it converges to the average of the left and right limits.

Uniform convergence: If f is continuous and periodic, and if f' is piecewise continuous, then the Fourier series converges uniformly to f.

L^2 convergence: The Fourier series always converges in the mean-square sense:

\lim_{N \to \infty} \int_{-L}^{L} \left| f(x) - S_N(x) \right|^2 dx = 0

where S_N(x) is the N-th partial sum. This is called convergence in the L^2 norm, and it holds for any square-integrable function.
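Mean-square convergence can be watched directly. The following sketch (my own illustration, reusing the square wave from earlier) approximates \int_{-\pi}^{\pi} |f - S_N|^2 \, dx with a midpoint rule and checks that it shrinks as N grows:

```python
import math

def square_partial_sum(x, n_terms):
    """S_N(x) = (4/pi) * sum_{k=1}^{N} sin((2k-1)x) / (2k-1)."""
    return (4 / math.pi) * sum(math.sin((2 * k - 1) * x) / (2 * k - 1)
                               for k in range(1, n_terms + 1))

def mean_square_error(n_terms, steps=2_000):
    """Midpoint approximation of the integral over [-pi, pi] of |f - S_N|^2."""
    dx = 2 * math.pi / steps
    total = 0.0
    for i in range(steps):
        x = -math.pi + (i + 0.5) * dx
        f = 1.0 if x > 0 else -1.0
        total += (f - square_partial_sum(x, n_terms)) ** 2 * dx
    return total

errors = [mean_square_error(n) for n in (1, 5, 20)]
print([round(e, 4) for e in errors])  # strictly decreasing toward 0
```

The L^2 error keeps shrinking even though the pointwise Gibbs overshoot never does: the overshoot region gets narrower, so its contribution to the integral vanishes.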

The rate of convergence depends on the smoothness of f:

  • If f has a jump discontinuity, coefficients decay as 1/n
  • If f is continuous but f' has jumps, coefficients decay as 1/n^2
  • If f has k continuous derivatives, coefficients decay as 1/n^{k+1}

Smoother functions have Fourier series that converge faster.
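The decay rates can be seen in two closed-form examples. For the square wave (jump discontinuity) b_n = 4/(n\pi) for odd n; for the triangle wave f(x) = |x| on (-\pi, \pi) (continuous, but with a corner) the cosine coefficients have magnitude 4/(\pi n^2) for odd n. The triangle-wave values are a standard result quoted here for comparison, not derived in this chapter:

```python
import math

# Magnitudes of the nonzero coefficients for the first few odd harmonics.
square = {n: 4 / (n * math.pi) for n in (1, 3, 5, 7)}         # 1/n decay (jump)
triangle = {n: 4 / (math.pi * n ** 2) for n in (1, 3, 5, 7)}  # 1/n^2 decay (corner)

# Going from n = 1 to n = 3 shrinks the square-wave term by 3x,
# but the triangle-wave term by 9x.
print(round(square[1] / square[3], 1), round(triangle[1] / triangle[3], 1))
```

One extra degree of smoothness buys one extra power of n in the decay, which is exactly the pattern in the list above.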

Interactive: Watch Partial Sums Converge

The square wave uses only the odd harmonics (1, 3, 5, ...). Compare the square wave (slow 1/n convergence) with the triangle wave (faster 1/n^2 convergence): the triangle wave's smoothness means its Fourier series catches up to the target much more quickly.

Connection to PDEs

In the heat equation chapter, we solved:

\frac{\partial u}{\partial t} = k \frac{\partial^2 u}{\partial x^2}

with boundary conditions u(0, t) = u(L, t) = 0 and initial condition u(x, 0) = f(x).

Separation of variables gave us solutions of the form \sin\left(\frac{n\pi x}{L}\right) e^{-k(n\pi/L)^2 t}. The general solution is a superposition:

u(x, t) = \sum_{n=1}^{\infty} b_n \sin\left(\frac{n\pi x}{L}\right) e^{-k(n\pi/L)^2 t}

The coefficients b_n are determined by the initial condition f(x). They are precisely the Fourier sine coefficients of f:

b_n = \frac{2}{L} \int_0^L f(x) \sin\left(\frac{n\pi x}{L}\right) dx

This is why Fourier series matter for differential equations. They allow us to match arbitrary initial conditions to the eigenfunctions of the differential operator.
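The recipe can be sketched end to end: compute the sine coefficients of the initial condition, then attach the decaying exponentials. This is an illustration under my own assumptions (midpoint-rule integrals, a single-mode initial condition chosen so the exact answer \sin(\pi x)\,e^{-k\pi^2 t} is known):

```python
import math

def heat_solution(x, t, f, k=1.0, L=1.0, n_modes=50, steps=2_000):
    """u(x, t) = sum_n b_n sin(n pi x / L) exp(-k (n pi / L)^2 t),
    with b_n the Fourier sine coefficients of the initial condition f on [0, L]."""
    dx = L / steps
    xs = [(i + 0.5) * dx for i in range(steps)]
    u = 0.0
    for n in range(1, n_modes + 1):
        bn = (2 / L) * sum(f(s) * math.sin(n * math.pi * s / L) for s in xs) * dx
        u += bn * math.sin(n * math.pi * x / L) * math.exp(-k * (n * math.pi / L) ** 2 * t)
    return u

# Single-mode initial condition: sin(pi x) decays without changing shape,
# so u(x, t) = sin(pi x) * exp(-k pi^2 t).
f = lambda x: math.sin(math.pi * x)
print(round(heat_solution(0.5, 0.1, f), 4))  # close to exp(-pi**2 * 0.1)
```

For this initial condition only b_1 is nonzero, so the sum collapses to one decaying mode and matches the exact solution.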

Parseval's Theorem

A beautiful result connects the energy of a function to its Fourier coefficients:

\frac{1}{L} \int_{-L}^{L} |f(x)|^2 dx = \frac{a_0^2}{2} + \sum_{n=1}^{\infty} (a_n^2 + b_n^2)

The left side is the average of f^2, a measure of the function's "energy." The right side is the sum of squared coefficients. Parseval's theorem says these are equal.

This is analogous to the Pythagorean theorem: the square of the length of a vector equals the sum of squares of its components in an orthonormal basis. The Fourier coefficients are the components of f in the orthogonal basis of sines and cosines.
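Parseval's theorem gives a concrete numeric check for the square wave computed earlier: the left side is \frac{1}{\pi}\int_{-\pi}^{\pi} 1 \, dx = 2, and the right side is \sum b_n^2 with b_n = 4/(n\pi) over odd n. A short sketch (my own check):

```python
import math

lhs = 2.0  # (1/pi) * integral of f(x)^2 over [-pi, pi], since f(x)^2 = 1
# Sum b_n^2 over the odd harmonics n = 1, 3, 5, ...
rhs = sum((4 / (n * math.pi)) ** 2 for n in range(1, 4001, 2))
print(round(rhs, 4))  # partial sums of b_n^2 creep up toward lhs = 2
```

The partial sums approach 2 from below, since every term in the sum is positive; the two sides agree in the limit, as Parseval promises.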

Beyond Periodic Functions: Fourier Sine and Cosine Series

Not all functions are naturally periodic. For functions defined on a finite interval [0, L], we can use either:

Fourier sine series (for odd extensions):

f(x) = \sum_{n=1}^{\infty} b_n \sin\left(\frac{n\pi x}{L}\right)

Fourier cosine series (for even extensions):

f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos\left(\frac{n\pi x}{L}\right)

The sine series is appropriate when f(0) = f(L) = 0 (Dirichlet boundary conditions). The cosine series is appropriate when f'(0) = f'(L) = 0 (Neumann boundary conditions).
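As a sketch of the sine-series case (the test function x(1-x) is my own choice; it vanishes at both endpoints, matching the Dirichlet conditions, and its sine coefficients are known to be 8/(n^3\pi^3) for odd n and 0 for even n):

```python
import math

def sine_coefficient(f, n, L=1.0, steps=20_000):
    """b_n = (2/L) * integral over [0, L] of f(x) sin(n pi x / L) dx (midpoint rule)."""
    dx = L / steps
    return (2 / L) * sum(f((i + 0.5) * dx) * math.sin(n * math.pi * (i + 0.5) * dx / L)
                         for i in range(steps)) * dx

f = lambda x: x * (1 - x)  # vanishes at x = 0 and x = 1: Dirichlet-friendly
print([round(sine_coefficient(f, n), 5) for n in (1, 2, 3)])
```

Note the 1/n^3 decay: the odd extension of x(1-x) is continuous with a continuous first derivative, so the coefficients fall off faster than for the square or sawtooth waves.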

Applications

Fourier series appear throughout science and engineering:

Signal processing: Any periodic signal (sound, electromagnetic waves, seismic data) can be decomposed into frequency components. This is the basis of audio compression, noise filtering, and spectrum analysis.

Quantum mechanics: The wave function of a particle in a box is a superposition of sine functions, exactly the Fourier sine series.

Heat transfer: Temperature distributions in periodic geometries are naturally expressed as Fourier series.

Music and acoustics: The timbre of a musical instrument depends on which harmonics are present and their relative amplitudes. A pure sine wave sounds like a tuning fork; a square wave sounds harsh and buzzy.

Image processing: Two-dimensional Fourier analysis is the foundation of image compression (JPEG) and medical imaging (MRI, CT scans).

Key Takeaways

  • Any periodic function can be written as a sum of sines and cosines: the Fourier series
  • The Fourier coefficients are found by projecting ff onto each basis function using integration
  • Orthogonality of sines and cosines makes the coefficient formulas work: each basis function picks out only its own coefficient
  • At discontinuities, Fourier series exhibit the Gibbs phenomenon: about 9% overshoot that never goes away, only gets narrower
  • Smoother functions have faster-converging Fourier series; discontinuities cause slow 1/n decay
  • Fourier series are the key to solving PDEs with arbitrary initial conditions by expressing them in the natural eigenbasis
  • Parseval's theorem connects the energy of a function to the sum of squared coefficients

The Journey Completes

This chapter brings our study of differential equations to a close. We began with the simple question of what it means to have an equation involving derivatives. We learned techniques for first-order equations, then second-order equations. We explored systems, phase portraits, transforms, and series solutions. Finally, we stepped into partial differential equations, where functions depend on multiple variables.

Fourier series sit at the intersection of many of these ideas. They use integration (first-order calculus), they arise from eigenvalue problems (second-order equations), they connect to linear algebra through orthogonality, and they are essential for solving PDEs.

The deeper insight is this: sines and cosines are the natural basis for periodic phenomena, just as exponentials are the natural basis for growth and decay. When you decompose a function into frequencies, you are revealing its fundamental structure.

From here, the journey continues in many directions. Numerical methods for PDEs, more complex boundary conditions, nonlinear equations, applications in physics and engineering: the foundations we have built support all of them.

Mathematics has a way of revealing unexpected connections. The same Fourier series that Fourier invented to study heat conduction now underpin everything from music streaming to medical imaging to quantum mechanics. The simple building blocks of sines and cosines, combined with the elegant formulas for their coefficients, create a universal language for describing periodic phenomena.

That is the power of mathematics: simple ideas, carefully developed, reaching far beyond their origins.