Existence and Uniqueness

When do solutions exist, and when are they unique?

Why This Matters

Throughout the previous chapters, we have been finding solutions to differential equations. We separated variables, used integrating factors, and applied substitutions. But we glossed over a fundamental question: does a solution even exist?

This is not a mere technicality. In physics, if y(t) represents the position of a particle, the existence and uniqueness of solutions determines whether the future is predictable from the present. If multiple solutions passed through the same initial point, the laws of physics would not determine what happens next. Determinism would fail.

Consider the initial value problem:

y' = f(t, y), \quad y(t_0) = y_0

We want to know two things:

  1. Existence: Is there at least one solution passing through (t_0, y_0)?
  2. Uniqueness: Is there at most one solution?

These questions have precise answers, and the conditions that guarantee them reveal deep connections between analysis and differential equations.

When Uniqueness Fails

Before stating the theorem that guarantees uniqueness, let us see what can go wrong. Consider the innocent-looking equation:

y' = y^{1/3}, \quad y(0) = 0

The obvious solution is y(t) = 0 for all t. The derivative of zero is zero, and 0^{1/3} = 0, so the equation is satisfied.

But there is another solution. Try y(t) = \left(\frac{2t}{3}\right)^{3/2} for t \geq 0:

y' = \frac{3}{2} \left(\frac{2t}{3}\right)^{1/2} \cdot \frac{2}{3} = \left(\frac{2t}{3}\right)^{1/2}

And y^{1/3} = \left(\left(\frac{2t}{3}\right)^{3/2}\right)^{1/3} = \left(\frac{2t}{3}\right)^{1/2}. So y' = y^{1/3}, and y(0) = 0.

Both solutions pass through the origin. In fact, infinitely many solutions do.

Uniqueness Failure at the Origin

All curves satisfy y' = y^{1/3} with y(0) = 0.

The solution can stay at zero and then branch off at any time, giving infinitely many solutions.

Use the slider to move the branching point. The remarkable thing is that the solution can stay at zero for any amount of time before branching off. There is no physical reason to prefer one branching time over another. Infinitely many solutions pass through the origin, all equally valid mathematically.

What went wrong? The function f(t, y) = y^{1/3} has a vertical tangent at y = 0. Its derivative with respect to y is \frac{1}{3}y^{-2/3}, which blows up as y \to 0. This unbounded behavior in the y-direction is what allows solutions to separate.
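Both solutions can be verified numerically. The sketch below (function names are mine, not from the text) uses a centered finite difference to confirm that the branching solution satisfies y' = y^{1/3} for t > 0, just as the zero solution does:

```python
# Check numerically that y(t) = (2t/3)^(3/2) solves y' = y^(1/3).

def f(y):
    return y ** (1.0 / 3.0)        # y >= 0 here, so no complex roots

def branch(t):
    return (2.0 * t / 3.0) ** 1.5 if t > 0 else 0.0

h = 1e-6
for t in [0.5, 1.0, 2.0]:
    # centered finite-difference approximation of y'(t)
    deriv = (branch(t + h) - branch(t - h)) / (2.0 * h)
    assert abs(deriv - f(branch(t))) < 1e-5

# The zero solution trivially works as well: 0' = 0 = 0^(1/3).
print("both y(t) = 0 and y(t) = (2t/3)^(3/2) satisfy y' = y^(1/3)")
```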

The Lipschitz Condition

The key to uniqueness is controlling how fast f(t, y) can change as y varies. We say f is Lipschitz continuous in y if there exists a constant L such that:

|f(t, y_1) - f(t, y_2)| \leq L |y_1 - y_2|

for all points in some region. This says the function cannot change faster than linearly in the y-direction. The constant L is called the Lipschitz constant.

Intuitively, if you pick any two values of y at the same t, the difference in f cannot exceed L times the difference in y. This puts a ceiling on how steeply the function can rise or fall.

Understanding the Lipschitz Condition

\frac{|f(y_2) - f(y_1)|}{|y_2 - y_1|} = 1.00

For f(y) = y, this ratio is always exactly 1. The function is Lipschitz with L = 1.

Drag the two points to compare the secant slopes. For f(y) = y, the ratio |f(y_2) - f(y_1)| / |y_2 - y_1| is always exactly 1, no matter where you place the points. Switch to f(y) = y^{1/3} and drag both points close to zero. The slope explodes, violating the Lipschitz condition at the origin.

A useful fact: if \frac{\partial f}{\partial y} exists and is bounded by L on a region, then f is Lipschitz there with that constant (by the mean value theorem). This gives a practical test.
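That test can be tried numerically. The sketch below (the helper name is mine) estimates a Lipschitz constant by sampling |\partial f / \partial y| on a grid, for both the well-behaved f(y) = y and the troublesome f(y) = y^{1/3}:

```python
# Estimate a Lipschitz constant by sampling |df/dy| on a uniform grid.

def lipschitz_estimate(dfdy, y_lo, y_hi, n=100_000):
    step = (y_hi - y_lo) / n
    return max(abs(dfdy(y_lo + k * step)) for k in range(n + 1))

# f(y) = y  =>  df/dy = 1: Lipschitz with L = 1 everywhere.
print(lipschitz_estimate(lambda y: 1.0, -5, 5))                 # 1.0

# f(y) = y^(1/3)  =>  df/dy = (1/3) y^(-2/3) is unbounded near y = 0:
# the estimate keeps growing as the grid edge approaches the origin.
print(lipschitz_estimate(lambda y: (1 / 3) * y ** (-2 / 3), 0.001, 1))
```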

The Picard–Lindelöf Theorem

Now we can state the fundamental result. This theorem goes by many names: Picard–Lindelöf, Cauchy–Lipschitz, or simply the Existence and Uniqueness Theorem.

Theorem (Picard–Lindelöf): Let f(t, y) be continuous on the rectangle R = \{(t, y) : |t - t_0| \leq a, \; |y - y_0| \leq b\} centered at the point (t_0, y_0). Suppose f is Lipschitz continuous in y on R. Then the initial value problem

y' = f(t, y), \quad y(t_0) = y_0

has a unique solution on some interval |t - t_0| < h, where h depends on a, b, and the bound on |f|.

The theorem gives us two things:

  1. A solution exists (at least locally)
  2. That solution is the only one through (t_0, y_0)

The interval of existence might be smaller than the width of the rectangle. If M = \max_{R} |f(t, y)|, then we can take h = \min(a, b/M). This choice guarantees the solution cannot escape the rectangle before time h, since its slope never exceeds M.

The Existence Region

For y' = y^2 with y(0) = 1:

M = \max_R |y^2| = 3.24, \quad h = \min(a, b/M) = 0.25

The purple segment shows the guaranteed existence interval. The actual solution blows up at t = 1, but the theorem only promises existence for |t| < 0.25.

Adjust the rectangle dimensions to see how the guaranteed interval changes. Making the rectangle taller increases M (the maximum of |f|), which shrinks the interval. The actual solution y = 1/(1-t) blows up at t = 1, but the theorem can only guarantee existence up to the smaller value h = \min(a, b/M).
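The demo's numbers can be reproduced directly from the formula h = \min(a, b/M). A short sketch, assuming the demo's rectangle uses a = 1 and b = 0.8:

```python
# Guaranteed-existence interval for y' = y^2, y(0) = 1, on the
# rectangle |t| <= a, |y - 1| <= b (a and b chosen to match the demo).
a, b = 1.0, 0.8
M = (1.0 + b) ** 2        # max of |y^2| on the rectangle, attained at y = 1 + b
h = min(a, b / M)
print(f"M = {M:.2f}, h = {h:.2f}")   # M = 3.24, h = 0.25
```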

Picard Iteration: A Constructive Proof

The proof of Picard–Lindelöf is not just existential; it provides a method to construct the solution. This method, called Picard iteration, starts with an initial guess and repeatedly improves it.

The idea is to convert the differential equation into an integral equation. If y' = f(t, y) with y(t_0) = y_0, then integrating both sides gives:

y(t) = y_0 + \int_{t_0}^{t} f(s, y(s)) \, ds

This is an equation where y appears on both sides. Picard's insight was to use it iteratively:

  1. Start with the constant function y_0(t) = y_0
  2. Compute y_1(t) = y_0 + \int_{t_0}^{t} f(s, y_0(s)) \, ds
  3. Compute y_2(t) = y_0 + \int_{t_0}^{t} f(s, y_1(s)) \, ds
  4. Continue: y_{n+1}(t) = y_0 + \int_{t_0}^{t} f(s, y_n(s)) \, ds

Under the Lipschitz condition, these iterates converge to the true solution.
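For y' = y, the integrand is a polynomial at every stage, so the iteration can be carried out exactly. A minimal pure-Python sketch (the coefficient-list representation and helper name are my own):

```python
# Picard iteration for y' = y, y(0) = 1, with each iterate stored as a
# polynomial coefficient list [c0, c1, ...] meaning c0 + c1*t + c2*t^2 + ...
from fractions import Fraction

def integrate(poly):
    """Antiderivative with zero constant term: c_k t^k -> c_k/(k+1) t^(k+1)."""
    return [Fraction(0)] + [c / (k + 1) for k, c in enumerate(poly)]

y = [Fraction(1)]            # y_0(t) = 1, the constant initial guess
for _ in range(3):
    y = integrate(y)         # the integral of y_n (here f(s, y) = y)
    y[0] += 1                # add the initial value y_0 = 1
print(y)                     # coefficients of 1 + t + t^2/2 + t^3/6
```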

Picard Iteration: Successive Approximations

Iteration 0: y_0 = 1

Error at t = 1.5: 3.4817

The blue iterate converges to the purple true solution

For the equation y' = y with y(0) = 1, the true solution is e^t. Step through the iterations to watch the approximations converge:

  • y_0 = 1 (constant guess)
  • y_1 = 1 + t (linear)
  • y_2 = 1 + t + t^2/2 (quadratic)
  • y_3 = 1 + t + t^2/2 + t^3/6 (cubic)

These are the partial sums of the Taylor series for e^t. Each iteration adds another term, and by the seventh iteration the error at t = 1.5 is negligible.

The beauty of Picard iteration is that it simultaneously proves existence (by constructing a sequence that converges) and uniqueness (by showing any solution must be a fixed point of the iteration).

Continuous Dependence on Initial Conditions

The Lipschitz condition gives us one more gift: continuous dependence on initial conditions. If two solutions start close together, they stay close together (at least for a while).

Specifically, if y(t) and \tilde{y}(t) solve the same equation with initial conditions y_0 and \tilde{y}_0, then:

|y(t) - \tilde{y}(t)| \leq |y_0 - \tilde{y}_0| \cdot e^{L|t - t_0|}

The exponential factor shows that differences can grow, but they grow at most exponentially. Small changes in initial conditions lead to small changes in the solution, at least over finite time intervals.
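This inequality can be checked empirically. A sketch (the RK4 helper is mine) for y' = \sin(y), which is Lipschitz with L = 1 because |\cos(y)| \leq 1:

```python
# Integrate y' = sin(y) from two nearby initial conditions and verify
# |y(t) - y_tilde(t)| <= |y0 - y0_tilde| * e^(L t) with L = 1, t0 = 0.
import math

def rk4(f, y0, t_end, n=10_000):
    """Classical fourth-order Runge-Kutta for the autonomous ODE y' = f(y)."""
    h, y = t_end / n, y0
    for _ in range(n):
        k1 = f(y)
        k2 = f(y + 0.5 * h * k1)
        k3 = f(y + 0.5 * h * k2)
        k4 = f(y + h * k3)
        y += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

delta0 = 1e-3                      # initial separation
for t_end in [1.0, 2.0, 5.0]:
    gap = abs(rk4(math.sin, 1.0 + delta0, t_end) - rk4(math.sin, 1.0, t_end))
    assert gap <= delta0 * math.exp(t_end)   # the continuous-dependence bound
    print(t_end, gap)
```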

This is essential for physical applications. Measurements always have some error. If tiny errors in initial conditions led to wildly different solutions immediately, prediction would be impossible. The Lipschitz condition ensures a kind of stability.

When the Conditions Fail

The Picard–Lindelöf theorem gives sufficient conditions, not necessary ones. Solutions might exist even when the hypotheses fail. But when they do fail, strange things can happen:

Non-uniqueness: As we saw with y' = y^{1/3}, when the Lipschitz condition fails, multiple solutions can pass through the same point. The system becomes indeterminate.

Finite-time blowup: The equation y' = y^2 with y(0) = 1 has solution y = 1/(1-t), which blows up at t = 1. The solution exists but only on (-\infty, 1). The theorem correctly tells us existence is only guaranteed locally.
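The finite-time blowup is easy to watch numerically; a small-step Euler sketch (step size and endpoint are my choices):

```python
# Euler steps for y' = y^2, y(0) = 1, marching toward the blowup time t = 1.
# The exact solution 1/(1 - t) reaches 100 at t = 0.99; the numerical
# solution grows explosively as well.
h, y, t = 1e-6, 1.0, 0.0
while t < 0.99:
    y += h * y * y
    t += h
print(t, y)    # y is close to the exact value 1/(1 - 0.99) = 100
```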

Non-existence: If f is discontinuous, solutions might not exist at all in the classical sense. For y' = -\text{sign}(y), any solution starting away from zero reaches y = 0 in finite time, and no differentiable solution can continue past that moment.

Looking Ahead

The existence and uniqueness theorem is the theoretical foundation for everything we do with differential equations. When we solve an initial value problem, we are implicitly assuming a unique solution exists. Now we know when that assumption is justified.

In numerical methods, we will approximate solutions using discrete steps. The continuous dependence on initial conditions guarantees that small numerical errors do not immediately destroy our approximations.

In systems of differential equations, the same theorem applies component-wise. The phase portraits we will study in later chapters are meaningful precisely because trajectories do not cross; they can only approach equilibrium points, where f = 0, asymptotically.

The interplay between analysis and differential equations runs deep. The tools of calculus tell us when solutions exist, and the structure of solutions tells us about the underlying mathematics.

Key Takeaways

  • Existence asks whether a solution exists; uniqueness asks whether it is the only one
  • The equation y' = y^{1/3} at the origin shows uniqueness can fail, with infinitely many solutions through one point
  • The Lipschitz condition bounds how fast f(t, y) changes in the y-direction
  • The Picard–Lindelöf theorem guarantees existence and uniqueness when f is continuous and Lipschitz in y
  • Picard iteration constructs the solution as a limit of successive approximations
  • Solutions depend continuously on initial conditions, with differences growing at most exponentially
  • Uniqueness means physical determinism: the present state completely determines the future