Existence and Uniqueness
When do solutions exist and when are they unique
Why This Matters
Throughout the previous chapters, we have been finding solutions to differential equations. We separated variables, used integrating factors, and applied substitutions. But we glossed over a fundamental question: does a solution even exist?
This is not a mere technicality. In physics, if $y(t)$ represents the position of a particle, the existence and uniqueness of solutions determines whether the future is predictable from the present. If multiple solutions pass through the same initial point, the laws of physics would not determine what happens next. Determinism would fail.
Consider the initial value problem:

$$\frac{dy}{dx} = f(x, y), \qquad y(x_0) = y_0$$
We want to know two things:
- Existence: Is there at least one solution passing through $(x_0, y_0)$?
- Uniqueness: Is there at most one solution?
These questions have precise answers, and the conditions that guarantee them reveal deep connections between analysis and differential equations.
When Uniqueness Fails
Before stating the theorem that guarantees uniqueness, let us see what can go wrong. Consider the innocent-looking equation:

$$\frac{dy}{dx} = \sqrt{y}, \qquad y(0) = 0$$

The obvious solution is $y(x) = 0$ for all $x$. The derivative of zero is zero, and $\sqrt{0} = 0$, so the equation is satisfied.

But there is another solution. Try $y(x) = \dfrac{x^2}{4}$ for $x \ge 0$:

$$y'(x) = \frac{x}{2} \qquad \text{and} \qquad \sqrt{y(x)} = \sqrt{\frac{x^2}{4}} = \frac{x}{2}$$

So $y' = \sqrt{y}$, and $y(0) = 0$.
Both solutions pass through the origin. In fact, infinitely many solutions do.
Uniqueness Failure at the Origin
All curves satisfy $\frac{dy}{dx} = \sqrt{y}$ with $y(0) = 0$.
The solution can stay at zero and then branch off at any time: $y(x) = 0$ for $x \le c$, and $y(x) = \frac{(x - c)^2}{4}$ for $x > c$. The branching point $c$ can be anywhere, giving infinitely many solutions.
Use the slider to move the branching point. The remarkable thing is that the solution can stay at zero for any amount of time before branching off. There is no physical reason to prefer one branching time over another. Infinitely many solutions pass through the origin, all equally valid mathematically.
What went wrong? The function $f(y) = \sqrt{y}$ has a vertical tangent at $y = 0$. Its derivative with respect to $y$ is $\frac{1}{2\sqrt{y}}$, which blows up as $y \to 0^+$. This unbounded behavior in the $y$-direction is what allows solutions to separate.
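Both candidate solutions can be checked numerically. A minimal sketch in plain Python, using central finite differences to verify that $y = 0$ and $y = x^2/4$ each satisfy $y' = \sqrt{y}$ at sample points:

```python
import math

def check(y, xs, h=1e-6):
    """Verify y'(x) = sqrt(y(x)) at each sample point via a
    central finite-difference approximation of the derivative."""
    for x in xs:
        deriv = (y(x + h) - y(x - h)) / (2 * h)
        assert abs(deriv - math.sqrt(y(x))) < 1e-4

check(lambda x: 0.0, [0.5, 1.0, 2.0])          # the zero solution
check(lambda x: x * x / 4.0, [0.5, 1.0, 2.0])  # the branching solution
print("both solutions satisfy the equation")
```

Both assertions pass, confirming that two genuinely different curves solve the same initial value problem.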
The Lipschitz Condition
The key to uniqueness is controlling how fast $f(x, y)$ can change as $y$ varies. We say $f$ is Lipschitz continuous in $y$ if there exists a constant $L$ such that:

$$|f(x, y_1) - f(x, y_2)| \le L\,|y_1 - y_2|$$

for all points in some region. This says the function cannot change faster than linearly in the $y$-direction. The constant $L$ is called the Lipschitz constant.

Intuitively, if you pick any two values of $y$ at the same $x$, the difference in $f$ cannot exceed $L$ times the difference in $y$. This puts a ceiling on how steeply the function can rise or fall.
Understanding the Lipschitz Condition
$\dfrac{|f(y_1) - f(y_2)|}{|y_1 - y_2|} = 1.00$

For $f(y) = y$, this ratio is always exactly 1. The function is Lipschitz with $L = 1$.

Drag the two points to compare the secant slopes. For $f(y) = y$, the ratio is always exactly 1, no matter where you place the points. Switch to $f(y) = \sqrt{y}$ and drag both points close to zero. The slope explodes, violating the Lipschitz condition at the origin.
A useful fact: if $\partial f/\partial y$ exists and is bounded by $L$ on a region, then $f$ is Lipschitz in $y$ with that constant. This gives a practical test.
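The secant-slope experiment is easy to reproduce in code. A minimal sketch (the sample points are arbitrary choices):

```python
import math

def secant_ratio(f, y1, y2):
    """|f(y1) - f(y2)| / |y1 - y2|: bounded by L when f is Lipschitz."""
    return abs(f(y1) - f(y2)) / abs(y1 - y2)

# f(y) = y: every ratio is exactly 1, so L = 1 works everywhere.
print(secant_ratio(lambda y: y, 0.3, 0.7))   # 1.0

# f(y) = sqrt(y): ratios stay modest away from zero but blow up
# near y = 0, so no single L works on a region touching the origin.
print(secant_ratio(math.sqrt, 1.0, 2.0))     # about 0.414
print(secant_ratio(math.sqrt, 1e-8, 2e-8))   # thousands, and growing
```

The last ratio grows without bound as the points approach zero, which is exactly the failure that permitted the branching solutions above.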
The Picard-Lindelöf Theorem
Now we can state the fundamental result. This theorem goes by many names: Picard-Lindelöf, Cauchy-Lipschitz, or simply the Existence and Uniqueness Theorem.
Theorem (Picard-Lindelöf): Let $f(x, y)$ be continuous on a rectangle $R = \{(x, y) : |x - x_0| \le a,\; |y - y_0| \le b\}$ containing the point $(x_0, y_0)$. Suppose $f$ is Lipschitz continuous in $y$ on $R$. Then the initial value problem

$$\frac{dy}{dx} = f(x, y), \qquad y(x_0) = y_0$$

has a unique solution on some interval $|x - x_0| \le h$, where $h$ depends on $a$, $b$, and the bound on $|f|$.
The theorem gives us two things:
- A solution exists (at least locally)
- That solution is the only one through $(x_0, y_0)$
The interval of existence might be smaller than the width of the rectangle. If $M = \max_R |f(x, y)|$, then we can take $h = \min(a,\, b/M)$. The solution is guaranteed to exist until it either leaves the rectangle or reaches the edge of the interval.
The Existence Region
For $\dfrac{dy}{dx} = y^2$ with $y(0) = 1$:
$M = (1 + b)^2$, $\quad h = \min(a,\, b/M)$
The purple segment shows the guaranteed existence interval. The actual solution blows up at $x = 1$, but the theorem only promises existence for $|x| \le h$.
Adjust the rectangle dimensions to see how the guaranteed interval changes. Making the rectangle taller increases $M$ (the maximum of $|f|$ on the rectangle), which shrinks the interval. The actual solution $y = \frac{1}{1 - x}$ blows up at $x = 1$, but the theorem can only guarantee existence up to the smaller value $h = \min(a,\, b/M)$.
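The trade-off is easy to compute. A short sketch for this example, assuming the rectangle $|x| \le a$, $|y - 1| \le b$ and the standard bound $h = \min(a, b/M)$ with $M = (1+b)^2$:

```python
# Guaranteed existence half-width for y' = y^2, y(0) = 1 on the
# rectangle |x| <= a, |y - 1| <= b.
def guaranteed_h(a, b):
    M = (1.0 + b) ** 2        # max of |y^2| on the rectangle
    return min(a, b / M)

# Sweeping b shows the bound b/(1+b)^2 peaks at b = 1, giving h = 1/4,
# well short of the actual blowup at x = 1.
best = max(guaranteed_h(2.0, k / 10.0) for k in range(1, 50))
print(best)  # 0.25
```

No choice of rectangle pushes the guaranteed interval past $h = 1/4$, even though the true solution survives until $x = 1$: the theorem is local by design.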
Picard Iteration: A Constructive Proof
The proof of Picard-Lindelöf is not just existential; it provides a method to construct the solution. This method, called Picard iteration, starts with an initial guess and repeatedly improves it.
The idea is to convert the differential equation into an integral equation. If $y' = f(x, y)$ with $y(x_0) = y_0$, then integrating both sides gives:

$$y(x) = y_0 + \int_{x_0}^{x} f(t, y(t))\, dt$$

This is an equation where $y$ appears on both sides. Picard's insight was to use it iteratively:
- Start with the constant function $\phi_0(x) = y_0$
- Compute $\phi_1(x) = y_0 + \int_{x_0}^{x} f(t, \phi_0(t))\, dt$
- Compute $\phi_2(x) = y_0 + \int_{x_0}^{x} f(t, \phi_1(t))\, dt$
- Continue: $\phi_{n+1}(x) = y_0 + \int_{x_0}^{x} f(t, \phi_n(t))\, dt$
Under the Lipschitz condition, these iterates converge to the true solution.
Picard Iteration: Successive Approximations
Iteration 0: $\phi_0(x) = 1$
Error at $x = 1.5$: $e^{1.5} - 1 \approx 3.4817$
The blue iterate converges to the purple true solution
For the equation $y' = y$ with $y(0) = 1$, the true solution is $y = e^x$. Step through the iterations to watch the approximations converge:
- $\phi_0(x) = 1$ (constant guess)
- $\phi_1(x) = 1 + x$ (linear)
- $\phi_2(x) = 1 + x + \frac{x^2}{2}$ (quadratic)
- $\phi_3(x) = 1 + x + \frac{x^2}{2} + \frac{x^3}{6}$ (cubic)
These are the partial sums of the Taylor series for $e^x$. Each iteration adds another term, and by the seventh iteration the error at $x = 1.5$ is negligible.
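The iteration can be run numerically without any symbolic machinery. A minimal sketch, approximating each integral with the trapezoid rule (the grid size and interval are arbitrary choices):

```python
import math

def picard_iterates(f, x0, y0, x_end, n_iter, n_grid=400):
    """Run n_iter Picard iterations for y' = f(x, y), y(x0) = y0,
    computing phi_{k+1}(x) = y0 + integral of f(t, phi_k(t)) from x0
    to x with the trapezoid rule on a uniform grid over [x0, x_end]."""
    xs = [x0 + i * (x_end - x0) / n_grid for i in range(n_grid + 1)]
    phi = [y0] * len(xs)                    # phi_0: the constant guess
    for _ in range(n_iter):
        g = [f(x, y) for x, y in zip(xs, phi)]
        nxt = [y0]
        for i in range(1, len(xs)):
            step = 0.5 * (xs[i] - xs[i - 1]) * (g[i - 1] + g[i])
            nxt.append(nxt[-1] + step)      # running integral from x0
        phi = nxt
    return xs, phi

# y' = y, y(0) = 1: the error at x = 1.5 starts at e^1.5 - 1 = 3.4817
# for phi_0 and shrinks rapidly with each iteration.
for k in (0, 3, 7):
    _, phi = picard_iterates(lambda x, y: y, 0.0, 1.0, 1.5, k)
    print(k, abs(phi[-1] - math.exp(1.5)))
```

The printed errors drop by orders of magnitude, mirroring what the interactive stepper shows.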
The beauty of Picard iteration is that it simultaneously proves existence (by constructing a sequence that converges) and uniqueness (by showing any solution must be a fixed point of the iteration).
Continuous Dependence on Initial Conditions
The Lipschitz condition gives us one more gift: continuous dependence on initial conditions. If two solutions start close together, they stay close together (at least for a while).
Specifically, if $y_1(x)$ and $y_2(x)$ solve the same equation with initial conditions $y_0$ and $\tilde{y}_0$, then:

$$|y_1(x) - y_2(x)| \le |y_0 - \tilde{y}_0|\, e^{L|x - x_0|}$$
The exponential factor shows that differences can grow, but they grow at most exponentially. Small changes in initial conditions lead to small changes in the solution, at least over finite time intervals.
This is essential for physical applications. Measurements always have some error. If tiny errors in initial conditions led to wildly different solutions immediately, prediction would be impossible. The Lipschitz condition ensures a kind of stability.
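Here is a sketch checking the bound for $y' = y$ (Lipschitz with $L = 1$), using a simple Euler integrator; the step count and perturbation size are arbitrary choices:

```python
import math

def euler(f, x0, y0, x_end, n=10000):
    """Integrate y' = f(x, y) with Euler's method (illustration only)."""
    h = (x_end - x0) / n
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

f = lambda x, y: y          # Lipschitz in y with L = 1
delta0 = 1e-6               # tiny gap between the two initial conditions
y_a = euler(f, 0.0, 1.0, 2.0)
y_b = euler(f, 0.0, 1.0 + delta0, 2.0)
gap = abs(y_b - y_a)
bound = delta0 * math.exp(1.0 * 2.0)   # |y0 - y0~| * e^(L*x)
print(gap <= bound)  # True: the solutions separate, but within the bound
```

The two solutions do drift apart exponentially, yet the separation never exceeds the Lipschitz bound, which is exactly the stability the text describes.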
When the Conditions Fail
The Picard-Lindelof theorem gives sufficient conditions, not necessary ones. Solutions might exist even when the hypotheses fail. But when they do fail, strange things can happen:
Non-uniqueness: As we saw with $y' = \sqrt{y}$, when the Lipschitz condition fails, multiple solutions can pass through the same point. The system becomes indeterminate.
Finite-time blowup: The equation $y' = y^2$ with $y(0) = 1$ has solution $y = \frac{1}{1 - x}$, which blows up at $x = 1$. The solution exists, but only on $(-\infty, 1)$. The theorem correctly tells us existence is only guaranteed locally.
Non-existence: If $f$ is discontinuous, solutions might not exist at all in the classical sense. The equation $y' = \operatorname{sgn}(x)$ has no differentiable solution through $(0, 0)$: any candidate must look like $|x| + C$, which has a corner at the origin.
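The blowup case is easy to check: differentiating $y = 1/(1-x)$ gives $1/(1-x)^2 = y^2$. A quick finite-difference confirmation at points below the blowup time:

```python
# Verify y(x) = 1/(1 - x) satisfies y' = y^2 at sample points
# short of the blowup at x = 1, via central differences.
def y(x):
    return 1.0 / (1.0 - x)

h = 1e-6
for x in (0.0, 0.5, 0.9):
    deriv = (y(x + h) - y(x - h)) / (2 * h)
    assert abs(deriv - y(x) ** 2) < 1e-3
print("solution verified on (-inf, 1)")
```

The check passes everywhere to the left of $x = 1$; at $x = 1$ itself the solution ceases to exist, no matter how well behaved the equation looks there.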
Looking Ahead
The existence and uniqueness theorem is the theoretical foundation for everything we do with differential equations. When we solve an initial value problem, we are implicitly assuming a unique solution exists. Now we know when that assumption is justified.
In numerical methods, we will approximate solutions using discrete steps. The continuous dependence on initial conditions guarantees that small numerical errors do not immediately destroy our approximations.
In systems of differential equations, the same theorem applies component-wise. The phase portraits we will study in later chapters are meaningful precisely because trajectories do not cross (except at equilibrium points, where $f(y) = 0$).
The interplay between analysis and differential equations runs deep. The tools of calculus tell us when solutions exist, and the structure of solutions tells us about the underlying mathematics.
Key Takeaways
- Existence asks whether a solution exists; uniqueness asks whether it is the only one
- The equation $y' = \sqrt{y}$ at the origin shows uniqueness can fail, with infinitely many solutions through one point
- The Lipschitz condition bounds how fast $f$ changes in the $y$-direction
- The Picard-Lindelöf theorem guarantees existence and uniqueness when $f$ is continuous and Lipschitz in $y$
- Picard iteration constructs the solution as a limit of successive approximations
- Solutions depend continuously on initial conditions, with differences growing at most exponentially
- Uniqueness means physical determinism: the present state completely determines the future