Numerical Methods

Euler, Runge-Kutta, and computational approaches

When Exact Solutions Are Not Enough

Throughout this course, we have developed an arsenal of techniques for solving differential equations. Separation of variables, integrating factors, characteristic equations, Laplace transforms, power series. These methods are powerful, but they share a limitation: they only work for certain classes of equations.

Most differential equations that arise in practice do not have closed-form solutions. The equations describing weather, fluid flow, or the motion of three gravitating bodies cannot be solved with a formula. Yet these systems are governed by differential equations, and we need to understand their behavior.

This is where numerical methods enter. Instead of finding a formula for $y(t)$, we compute approximate values $y_1, y_2, y_3, \ldots$ at discrete time points $t_1, t_2, t_3, \ldots$. The approximation can be made as accurate as we need by taking smaller steps and using better algorithms.

Numerical methods are not a fallback for when analysis fails. They are how differential equations are actually solved in science and engineering. Every weather forecast, every structural simulation, every orbit calculation relies on numerical integration.

The Core Idea: Following Tangent Lines

Consider the initial value problem:

$$y' = f(t, y), \quad y(t_0) = y_0$$

We know the starting point $(t_0, y_0)$, and the differential equation tells us the slope $f(t_0, y_0)$ at that point. The simplest idea is to walk along the tangent line for a small step $h$:

$$y_1 = y_0 + h \cdot f(t_0, y_0)$$

This gives us an approximation to $y(t_1)$, where $t_1 = t_0 + h$. Now we have a new point $(t_1, y_1)$ and can compute the slope there, then take another step. Repeating this process generates a sequence of approximations.

This is Euler's method, the simplest numerical integrator. Its formula is:

$$y_{n+1} = y_n + h \cdot f(t_n, y_n)$$

Each step follows the tangent line at the current point for a distance $h$ in the $t$-direction. The method is intuitive: if the derivative tells us how the solution is changing, we approximate the change over a small interval as that rate times the interval width.
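As a minimal sketch, the update rule takes only a few lines of Python (the function name `euler` and the test equation are illustrative, not from the text):

```python
def euler(f, t0, y0, h, n_steps):
    """Approximate the solution of y' = f(t, y), y(t0) = y0 with Euler's method."""
    t, y = t0, y0
    ts, ys = [t], [y]
    for _ in range(n_steps):
        y = y + h * f(t, y)  # follow the tangent line for one step
        t = t + h
        ts.append(t)
        ys.append(y)
    return ts, ys

# Example: y' = y with y(0) = 1, whose exact solution is e^t.
ts, ys = euler(lambda t, y: y, 0.0, 1.0, 0.1, 10)
```

For this test equation each step multiplies $y$ by $(1 + h)$, so ten steps of size $0.1$ give $1.1^{10} \approx 2.594$, noticeably below the exact value $e \approx 2.718$.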

Euler's Method Step by Step

Euler's method follows the tangent line at each step. The blue curve is the approximation, and the green curve is the exact solution.


Watch how Euler's method constructs an approximation. At each step, the algorithm:

  1. Evaluates the slope $f(t_n, y_n)$ at the current point
  2. Follows that slope for one step of size $h$
  3. Arrives at the next approximation $(t_{n+1}, y_{n+1})$

The tangent line (shown dashed) is the best linear approximation to the solution at each point. Euler's method chains these linear approximations together.

The Price of Simplicity: Error Accumulation

Euler's method is easy to understand and implement, but it pays a price in accuracy. Each step introduces a small error because the tangent line diverges from the true curve. These errors compound as we take more steps.

Euler Approximation vs Exact Solution

The shaded region shows the gap between Euler's approximation and the exact solution. This gap grows over time, reaching a final error of about 6.3. With step size $h = 0.5$, the approximation drifts significantly by $t = 3$.

Two types of error affect numerical methods:

Local truncation error is the error introduced in a single step, assuming we start from the exact value. For Euler's method, this error is $O(h^2)$: proportional to the square of the step size. Halving the step size cuts the per-step error by a factor of 4.

Global error is the total accumulated error after many steps. To reach time $T$ with step size $h$, we take approximately $T/h$ steps. Each step contributes $O(h^2)$ error, but these compound. The global error for Euler's method is $O(h)$: proportional to the step size itself. Halving the step size only halves the total error.

This is why Euler's method is called a first-order method. Its global accuracy improves linearly with decreasing step size.
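This first-order scaling can be checked numerically. The sketch below (the helper name `euler_solve` and the test problem are illustrative) integrates $y' = y$ on $[0, 1]$ with two step sizes and compares the global errors; halving $h$ should roughly halve the error:

```python
import math

def euler_solve(f, t0, y0, h, t_end):
    """Integrate y' = f(t, y) with Euler's method from t0 to t_end."""
    n = round((t_end - t0) / h)  # number of steps of size h
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

f = lambda t, y: y  # exact solution is e^t, so y(1) = e
err_h = abs(euler_solve(f, 0.0, 1.0, 0.01, 1.0) - math.e)
err_half = abs(euler_solve(f, 0.0, 1.0, 0.005, 1.0) - math.e)
print(err_h / err_half)  # close to 2: halving h roughly halves the global error
```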

Improving Accuracy: The Heun Method

Euler's method uses the slope at the beginning of each step. But the slope changes as we move, and using only the initial slope ignores this variation.

The Heun method (also called the improved Euler or explicit trapezoid method) takes a smarter approach:

  1. Use Euler to predict where we will end up: $\tilde{y}_{n+1} = y_n + h \cdot f(t_n, y_n)$
  2. Evaluate the slope at the predicted endpoint: $f(t_{n+1}, \tilde{y}_{n+1})$
  3. Average the slopes at the start and predicted end
  4. Take the step using this averaged slope

The formula is:

$$
\begin{aligned}
k_1 &= f(t_n, y_n) \\
k_2 &= f(t_n + h, y_n + h k_1) \\
y_{n+1} &= y_n + \frac{h}{2}(k_1 + k_2)
\end{aligned}
$$

By using information about the slope at both ends of the interval, Heun's method achieves second-order accuracy: global error $O(h^2)$. Halving the step size cuts the error by a factor of 4.

The cost is that we now evaluate $f$ twice per step instead of once. But the accuracy improvement usually more than compensates.
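A single Heun step is a direct transcription of the formula (the name `heun_step` is illustrative):

```python
def heun_step(f, t, y, h):
    """One step of Heun's method (explicit trapezoid)."""
    k1 = f(t, y)               # slope at the start of the interval
    k2 = f(t + h, y + h * k1)  # slope at the Euler-predicted endpoint
    return y + (h / 2) * (k1 + k2)

# One step on y' = y from y(0) = 1 with h = 0.1:
# k1 = 1, k2 = 1.1, so y1 = 1 + 0.05 * 2.1 = 1.105 (exact: e^0.1 = 1.10517...).
y1 = heun_step(lambda t, y: y, 0.0, 1.0, 0.1)
```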

The Workhorse: Fourth-Order Runge-Kutta

The idea behind Heun's method generalizes. We can sample the slope at more points within each step and combine them cleverly to achieve higher accuracy. The fourth-order Runge-Kutta method (RK4) is the result of taking this to a practical sweet spot.

RK4 evaluates the slope at four points:

$$
\begin{aligned}
k_1 &= f(t_n, y_n) \\
k_2 &= f\left(t_n + \frac{h}{2},\; y_n + \frac{h}{2} k_1\right) \\
k_3 &= f\left(t_n + \frac{h}{2},\; y_n + \frac{h}{2} k_2\right) \\
k_4 &= f(t_n + h,\; y_n + h k_3) \\
y_{n+1} &= y_n + \frac{h}{6}(k_1 + 2k_2 + 2k_3 + k_4)
\end{aligned}
$$

The slopes at the midpoint ($k_2$ and $k_3$) are weighted more heavily in the average, reflecting their importance in a Simpson's-rule-like approximation.

RK4 achieves fourth-order accuracy: global error $O(h^4)$. Halving the step size reduces the error by a factor of 16. This dramatic improvement makes RK4 the default choice for many applications.
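The four-stage formula translates directly into code (the name `rk4_step` is illustrative):

```python
def rk4_step(f, t, y, h):
    """One step of the classical fourth-order Runge-Kutta method."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

# One step on y' = y from y(0) = 1 with h = 0.1; exact value is e^0.1 = 1.1051709...
y1 = rk4_step(lambda t, y: y, 0.0, 1.0, 0.1)
```

Even a single step of size $0.1$ lands within about $10^{-7}$ of the exact value, compared with an error near $5 \times 10^{-3}$ for Euler's method on the same step.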

Method Comparison: Euler vs Heun vs RK4

The dashed line is the exact solution. Notice how RK4 stays much closer with the same step size.

With the same step size, the three methods produce very different results. Euler drifts noticeably. Heun stays closer. RK4 is nearly indistinguishable from the exact solution.

The improvement is not magic. RK4 achieves its accuracy by doing more work per step: four function evaluations instead of one. But the payoff in accuracy is substantial. For smooth problems, RK4 with large steps often outperforms Euler with tiny steps, even accounting for the extra evaluations.
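This kind of comparison can be reproduced with a short script. The test problem $y' = y$ on $[0, 1]$ and all helper names are illustrative choices, not from the text:

```python
import math

def integrate(step, f, t0, y0, h, n):
    """Apply a one-step method n times."""
    t, y = t0, y0
    for _ in range(n):
        y = step(f, t, y, h)
        t += h
    return y

def euler(f, t, y, h):
    return y + h * f(t, y)

def heun(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    return y + (h / 2) * (k1 + k2)

def rk4(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

# y' = y on [0, 1] with h = 0.1; the exact answer is e.
f = lambda t, y: y
for name, step in [("Euler", euler), ("Heun", heun), ("RK4", rk4)]:
    err = abs(integrate(step, f, 0.0, 1.0, 0.1, 10) - math.e)
    print(f"{name:5s} error: {err:.2e}")
```

With the same ten steps, the errors drop by orders of magnitude from Euler to Heun to RK4, matching the first-, second-, and fourth-order behavior described above.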

Step Size and Computational Cost

Smaller steps mean better accuracy but more computation. There is a tradeoff between the precision we want and the time we can spend.

Step Size Explorer

At the same step size on a representative test problem, the three methods compare as follows:

  • Euler: error 2.64e+0, 30 function evaluations
  • Heun: error 9.30e-2, 60 function evaluations
  • RK4: error 4.62e-5, 120 function evaluations

RK4 achieves much better accuracy per step, but requires 4 function evaluations per step instead of 1.

The visualization shows how error scales with step size for each method. Notice:

  • Euler's error decreases linearly with $h$
  • Heun's error decreases quadratically
  • RK4's error decreases with $h^4$

When $h$ is small, RK4 can be thousands of times more accurate than Euler for the same number of function evaluations. This efficiency is why RK4 dominates practical computation.

The number of function evaluations measures the true computational cost. Each evaluation of $f(t, y)$ might involve expensive calculations. RK4 uses 4 evaluations per step, so if we take $n$ steps, we do $4n$ evaluations. Euler uses $n$ evaluations for the same number of steps. But RK4 can often achieve the same accuracy with far fewer total steps.

How Errors Grow

Understanding error accumulation is crucial for trusting numerical results. Errors do not just add up; they can amplify.

Error Accumulation Over Time

The left panel shows the solution; the right panel shows the error at each step as purple bars, reaching a final value of about 6.3. Notice how errors accumulate over time: higher-order methods keep errors smaller for longer.

The right panel shows how the absolute error grows over time. For exponential growth problems, errors tend to grow exponentially too. The Lipschitz constant of $f$ governs this growth, connecting back to our earlier study of existence and uniqueness.

For higher-order methods, errors start smaller and grow slower. This is particularly important for long-time simulations where we need to track a system for many periods.

Beyond Fixed Step Sizes: Adaptive Methods

Real solvers do not use a fixed step size. They adapt the step size based on local error estimates. Where the solution changes rapidly, they take small steps. Where it is smooth, they take large steps.

The idea is elegant: compute each step with two different methods (say, RK4 and RK5) and compare the results. If they agree well, the step is accurate and we can try a larger step next time. If they disagree, the step is inaccurate and we should retry with a smaller step.

This adaptive approach, embodied in methods like the Dormand-Prince integrator (often called ode45 in MATLAB), gives accuracy guarantees without wasting computation on easy regions of the solution.
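A simplified version of this idea uses step doubling: compare one RK4 step of size $h$ with two steps of size $h/2$, and use their disagreement as the error estimate. The sketch below is illustrative only (the names `adaptive_rk4`, `rk4_step`, and the control constants are assumptions, not a production scheme like Dormand-Prince):

```python
def rk4_step(f, t, y, h):
    """One step of classical fourth-order Runge-Kutta."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

def adaptive_rk4(f, t0, y0, t_end, h=0.1, tol=1e-8):
    """Step doubling: one step of size h versus two steps of size h/2."""
    t, y = t0, y0
    while t < t_end - 1e-12:
        h = min(h, t_end - t)  # don't overshoot the endpoint
        big = rk4_step(f, t, y, h)
        half = rk4_step(f, t + h / 2, rk4_step(f, t, y, h / 2), h / 2)
        err = abs(half - big)  # disagreement estimates the local error
        if err <= tol:
            t, y = t + h, half  # accept the more accurate result
            if err < tol / 10:
                h *= 2          # the step was easy; try a larger one
        else:
            h /= 2              # reject and retry with a smaller step
    return y

y_final = adaptive_rk4(lambda t, y: y, 0.0, 1.0, 1.0)  # close to e
```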

Other practical considerations include:

  • Stiffness: Some problems have multiple time scales. The fast dynamics force tiny steps even when we only care about slow dynamics. Implicit methods handle stiff problems more efficiently.
  • Symplectic integrators: For Hamiltonian systems (like planetary motion), special methods preserve geometric structure and give qualitatively correct long-time behavior.
  • Multistep methods: Instead of discarding information from previous steps, Adams-Bashforth methods reuse past values to achieve high accuracy with fewer function evaluations per step.

The Numerical Perspective

Numerical methods transform the continuous problem $y' = f(t, y)$ into a discrete iteration. This shift in perspective reveals new structure:

A numerical method defines a map $y_n \mapsto y_{n+1}$. The orbit of this map should shadow the true solution. Fixed points of this map correspond to equilibria of the ODE. Stable fixed points attract nearby orbits. The discrete dynamics mirror the continuous ones.

This connection runs deep. Stability analysis of numerical methods parallels stability analysis of differential equations. A method is stable if numerical errors do not amplify uncontrollably, just as a solution is stable if perturbations decay.
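A concrete illustration (an assumed example, not from the text): for $y' = ay$, Euler's method is the linear map $y_{n+1} = (1 + ha)\,y_n$, so the fixed point $y = 0$ attracts the numerical orbit only when $|1 + ha| < 1$, i.e. $h < 2/|a|$ for $a < 0$:

```python
def euler_orbit(a, h, y0=1.0, n=50):
    """Iterate the Euler map y -> (1 + h*a) * y for the ODE y' = a*y."""
    y = y0
    for _ in range(n):
        y = (1 + h * a) * y
    return y

# The true solution of y' = -10*y decays to 0; stability requires h < 2/10 = 0.2.
decays = abs(euler_orbit(-10, 0.15))    # |1 - 1.5| = 0.5 < 1: orbit decays
blows_up = abs(euler_orbit(-10, 0.25))  # |1 - 2.5| = 1.5 > 1: orbit diverges
```

With $h = 0.25$ the numerical orbit oscillates and grows without bound even though the exact solution decays: the method, not the equation, is unstable.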

The choice of method and step size is a balance between accuracy, stability, and efficiency. There is no universal best choice. The art of scientific computing lies in matching the method to the problem.

Key Takeaways

  • Numerical methods approximate solutions at discrete points when exact formulas are unavailable
  • Euler's method follows tangent lines: $y_{n+1} = y_n + h \cdot f(t_n, y_n)$, with first-order accuracy
  • Local error is the per-step error; global error is the accumulated error over the full integration
  • Heun's method averages slopes at both ends of each step, achieving second-order accuracy
  • RK4 samples four slopes per step and achieves fourth-order accuracy: halving $h$ reduces error by a factor of 16
  • Higher-order methods require more function evaluations per step but often win by needing far fewer steps
  • Adaptive methods adjust step size automatically to balance accuracy and efficiency
  • Numerical integration is how differential equations are actually solved in science and engineering