Inverse Laplace and Applications

Solving initial value problems with transforms

In the previous chapter, we learned how to transform differential equations into algebraic equations using the Laplace transform. The original problem becomes easier to solve in the s-domain. But the answer we obtain is a function Y(s), and we need to get back to the time domain to find y(t).

This is where the inverse Laplace transform enters. We write:

y(t) = \mathcal{L}^{-1}\{Y(s)\}

The challenge is that directly computing this inverse requires contour integration in the complex plane. Instead, we rely on a more practical approach: decompose Y(s) into simpler pieces whose inverse transforms we already know.

Partial Fraction Decomposition

The key technique for inverting Laplace transforms is partial fraction decomposition. When Y(s) is a rational function (a ratio of polynomials), we can break it into a sum of simpler fractions, each of which corresponds to a known inverse transform.

Consider a typical result from solving an ODE:

Y(s) = \frac{P(s)}{Q(s)}

where P(s) and Q(s) are polynomials. If the degree of P is less than the degree of Q, and Q factors into distinct linear factors, we can write:

\frac{P(s)}{(s - r_1)(s - r_2) \cdots (s - r_n)} = \frac{A_1}{s - r_1} + \frac{A_2}{s - r_2} + \cdots + \frac{A_n}{s - r_n}

Each simple fraction has a known inverse:

\mathcal{L}^{-1}\left\{\frac{A}{s - r}\right\} = A e^{rt}

Interactive: Partial Fraction Decomposition

Given:

F(s) = \frac{2s + 3}{s^2 + 3s + 2}

Step through the examples to see how we systematically decompose a rational function and then invert each piece. The final solution is the sum of exponentials, one for each pole of Y(s).

Finding the Coefficients

There are several methods for finding the coefficients in a partial fraction expansion.

Cover-up Method: For distinct linear factors, multiply both sides by one factor and then set s equal to the root. This "covers up" that factor and gives the coefficient directly.

For example, to find A in:

\frac{2s + 3}{(s + 1)(s + 2)} = \frac{A}{s + 1} + \frac{B}{s + 2}

Multiply both sides by (s + 1):

\frac{2s + 3}{s + 2} = A + \frac{B(s + 1)}{s + 2}

Set s = -1:

\frac{2(-1) + 3}{-1 + 2} = A \implies A = 1
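The cover-up evaluations can be scripted. The sketch below is illustrative (the `coverup` helper is not from any library); it computes the residue at each simple root r as the numerator at r divided by the product of (r - r') over the remaining roots:

```python
# Cover-up method for simple poles of num(s) / prod (s - r_i).
from fractions import Fraction

def coverup(num, roots):
    """Partial-fraction coefficients for distinct linear factors."""
    coeffs = {}
    for r in roots:
        denom = 1
        for other in roots:
            if other != r:
                denom *= r - other          # product of (r - r') over other roots
        coeffs[r] = Fraction(num(r), denom)
    return coeffs

# F(s) = (2s + 3) / ((s + 1)(s + 2)) from the example above:
print(coverup(lambda s: 2 * s + 3, [-1, -2]))  # A = 1 at s = -1, B = 1 at s = -2
```

With A = B = 1, the inverse transform is f(t) = e^{-t} + e^{-2t}.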

Repeated Roots: When Q(s) has a repeated factor like (s - r)^k, the decomposition includes terms:

\frac{A_1}{s - r} + \frac{A_2}{(s - r)^2} + \cdots + \frac{A_k}{(s - r)^k}

The inverse transforms are:

\mathcal{L}^{-1}\left\{\frac{1}{(s - r)^n}\right\} = \frac{t^{n-1}}{(n-1)!}\, e^{rt}

Complex Roots: Complex conjugate pairs like (s - \alpha)^2 + \beta^2 require completing the square and using the shift property. The inverse involves exponential decay (or growth) modulated by sines and cosines.

The Complete Laplace Method

Here is the full procedure for solving an initial value problem using Laplace transforms:

  1. Take the Laplace transform of both sides of the ODE
  2. Use the derivative properties to substitute initial conditions
  3. Solve algebraically for Y(s)
  4. Apply partial fractions to decompose Y(s)
  5. Invert each term using known transform pairs
  6. Combine to get y(t)

This method is particularly powerful for equations with discontinuous or impulsive forcing, where traditional methods struggle.

Interactive: Complete IVP Solution


Start with the IVP

y'' + 3y' + 2y = 0, \quad y(0) = 1, \; y'(0) = 0

Watch each step of the Laplace method unfold. Adjust the initial conditions to see how they appear in the algebraic manipulation and ultimately affect the solution.
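The demo's IVP can also be traced end to end in code. In this minimal sketch the algebra of steps 1–4 was done on paper and only the results are coded: the transform gives Y(s) = (s + 3)/((s + 1)(s + 2)), and the cover-up method supplies the coefficients:

```python
# y'' + 3y' + 2y = 0,  y(0) = 1,  y'(0) = 0.
# Steps 1-3 (on paper): Y(s) = (s + 3) / ((s + 1)(s + 2)).
# Step 4 (cover-up): evaluate (s+3)/(s+2) at s = -1 and (s+3)/(s+1) at s = -2.
import math

A = (-1 + 3) / (-1 + 2)   #  2.0
B = (-2 + 3) / (-2 + 1)   # -1.0

# Steps 5-6: invert A/(s+1) + B/(s+2) term by term.
def y(t):
    return A * math.exp(-t) + B * math.exp(-2 * t)

print(y(0.0))  # 1.0, matching y(0) = 1
```

So y(t) = 2e^{-t} - e^{-2t}, which also satisfies y'(0) = -2 + 2 = 0.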

Essential Transform Pairs

Here are the transform pairs you will use most often:

f(t)                    | F(s)
------------------------|----------------------------------
1                       | \frac{1}{s}
t                       | \frac{1}{s^2}
t^n                     | \frac{n!}{s^{n+1}}
e^{at}                  | \frac{1}{s - a}
t e^{at}                | \frac{1}{(s - a)^2}
\sin(\omega t)          | \frac{\omega}{s^2 + \omega^2}
\cos(\omega t)          | \frac{s}{s^2 + \omega^2}
e^{at} \sin(\omega t)   | \frac{\omega}{(s - a)^2 + \omega^2}
e^{at} \cos(\omega t)   | \frac{s - a}{(s - a)^2 + \omega^2}

The last two pairs use the first shifting theorem: if \mathcal{L}\{f(t)\} = F(s), then:

\mathcal{L}\{e^{at} f(t)\} = F(s - a)

Multiplying by an exponential in the time domain shifts the transform in the s-domain.
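The theorem is easy to check numerically. Taking f(t) = sin t, so F(s) = 1/(s² + 1) and a = -1, the transform of e^{-t} sin t at s = 2 should equal F(3) = 1/10. The quadrature helper below is a crude illustrative sketch, not a library routine:

```python
# Verify L{e^{-t} sin t}(2) = F(2 - (-1)) = F(3) = 1/10 by quadrature.
import math

def laplace(f, s, T=40.0, n=200_000):
    """Trapezoidal approximation of the integral of e^{-st} f(t) over [0, T]."""
    h = T / n
    total = 0.5 * (f(0.0) + math.exp(-s * T) * f(T))
    for k in range(1, n):
        t = k * h
        total += math.exp(-s * t) * f(t)
    return total * h

lhs = laplace(lambda t: math.exp(-t) * math.sin(t), s=2.0)
rhs = 1.0 / (3.0 ** 2 + 1.0)   # F(s - a) = F(3) = 1/10
print(abs(lhs - rhs) < 1e-6)   # True
```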

The Heaviside Step Function

Real systems do not always experience smooth forcing. A switch flips. A valve opens. A voltage suddenly turns on. To model these discontinuities, we introduce the Heaviside step function:

H(t - a) = \begin{cases} 0 & t < a \\ 1 & t \geq a \end{cases}

This function is zero until time a, then jumps to one and stays there. It represents an instantaneous switch.

The Laplace transform of the delayed step function is:

\mathcal{L}\{H(t - a)\} = \frac{e^{-as}}{s}

More generally, the second shifting theorem states: if \mathcal{L}\{f(t)\} = F(s), then:

\mathcal{L}\{H(t - a)\, f(t - a)\} = e^{-as} F(s)

This means a delay in the time domain produces an exponential factor in the s-domain. When inverting, an exponential factor e^{-as} multiplying a transform indicates that the corresponding part of the solution is delayed by a units of time.

Interactive: Step Function Response


The system y' + y = H(t - a) starts at rest. When the step input turns on at t = 2, the output rises exponentially toward the new steady state of 1.

Watch how a first-order system responds when a step input suddenly turns on. The output cannot change instantaneously; it rises smoothly toward its new equilibrium. The delay in the step function translates to a delay in the response.

The Dirac Delta Function

If the Heaviside function models a sudden switch, the Dirac delta function models an instantaneous impulse. Think of striking a bell: an enormous force applied for an infinitesimally short time.

The delta function \delta(t - a) is not a function in the ordinary sense. It is defined by its action on other functions:

\int_{-\infty}^{\infty} \delta(t - a)\, f(t)\, dt = f(a)

It "picks out" the value of f at t = a. We think of it as an infinitely tall, infinitely narrow spike at t = a with unit area.
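The sifting property can be seen as a limit: replace the delta with a unit-area rectangle of width ε centered at a, and the integral against f averages f over the window, tending to f(a) as ε shrinks. A small illustrative sketch:

```python
# Approximate delta(t - a) by a rectangle of height 1/eps on [a - eps/2, a + eps/2];
# integrating f against it averages f over the window, which tends to f(a).
import math

def sift(f, a, eps, n=10_000):
    h = eps / n
    total = sum(f(a - eps / 2 + (k + 0.5) * h) for k in range(n)) * h  # midpoint rule
    return total / eps   # unit-area rectangle: divide by its width

val = sift(math.cos, a=0.0, eps=1e-3)
print(abs(val - 1.0) < 1e-6)   # True: the average is very close to cos(0) = 1
```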

The Laplace transform is beautifully simple:

\mathcal{L}\{\delta(t - a)\} = e^{-as}

For an impulse at the origin:

\mathcal{L}\{\delta(t)\} = 1

This is remarkable: the Laplace transform of an impulse is just 1. This makes delta functions extremely useful for understanding system behavior.

The Transfer Function

Consider a linear system described by an ODE with zero initial conditions. If we apply a unit impulse \delta(t) as input, the output is called the impulse response, denoted h(t).

Taking the Laplace transform of both sides with zero initial conditions:

Y(s) = H(s) \cdot F(s)

where F(s) is the transform of the input and H(s) = \mathcal{L}\{h(t)\} is the transfer function. Since the Laplace transform of a unit impulse is 1:

H(s) = Y(s) \quad \text{when the input is } \delta(t)

The transfer function is simply the Laplace transform of the impulse response. It encodes everything about how the system responds to inputs:

  • The poles of H(s) determine stability and oscillation behavior
  • The zeros of H(s) affect the shape of the response
  • Engineers use transfer functions to design controllers, filters, and feedback systems
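The pole story can be made concrete with the standard second-order denominator s² + 2ζωₙs + ωₙ², whose form the demo's system also takes. The helper below is an illustrative sketch (not a library routine) that locates the poles from the damping ratio ζ and natural frequency ωₙ:

```python
# Poles of s^2 + 2*zeta*wn*s + wn^2: the real part -zeta*wn sets the decay rate,
# and a nonzero imaginary part means an oscillatory (underdamped) response.
import cmath

def poles(zeta, wn=1.0):
    disc = cmath.sqrt((zeta * wn) ** 2 - wn ** 2)
    return (-zeta * wn + disc, -zeta * wn - disc)

p_plus, p_minus = poles(zeta=0.5)   # denominator s^2 + s + 1, as in the demo
print(p_plus.real < 0)              # True: left half-plane, so stable
print(abs(p_plus.imag) > 0)         # True: complex pair, so underdamped
```

Raising ζ past 1 makes the discriminant nonnegative, the poles real, and the response a smooth decay.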

Interactive: Impulse Response and Transfer Function

Impulse response h(t) for the transfer function H(s) = \frac{1}{s^2 + s + 1} (an underdamped case).

The impulse response h(t) is the system's output when given a unit impulse input. The transfer function H(s) is simply the Laplace transform of h(t). Once you know h(t), you can find the response to any input via convolution.

Adjust the damping ratio to see how the impulse response changes from oscillatory to smooth decay. The transfer function remains the same algebraic form, but the pole locations change, fundamentally altering the time-domain behavior.

Convolution

Once we know the impulse response h(t), we can find the response to any input f(t) using convolution:

y(t) = (f * h)(t) = \int_0^t f(\tau)\, h(t - \tau)\, d\tau

The idea is elegant: any input can be thought of as a sum of weighted, delayed impulses. The response to each is a weighted, delayed copy of the impulse response. Integration sums them all up.

In the s-domain, convolution becomes multiplication:

\mathcal{L}\{f * h\} = F(s) \cdot H(s)

This is why transfer functions are so powerful. To find the output Y(s), simply multiply the input F(s) by the transfer function H(s).
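A discrete Riemann-sum version of the convolution integral shows the idea numerically: convolving a unit step with h(t) = e^{-t} (the impulse response of y' + y = f) should reproduce the known step response 1 - e^{-t}. The grid and sum below are a hand-rolled illustrative sketch:

```python
# Approximate y(2) = (f * h)(2) with f = unit step and h(t) = e^{-t}.
# Exact answer: y(t) = 1 - e^{-t}.
import math

dt = 1e-4
N = 20_001                       # grid covering t = 0 .. 2 inclusive
h = [math.exp(-k * dt) for k in range(N)]
f = [1.0] * N                    # unit step switched on at t = 0

i = N - 1                        # index of t = 2
y2 = dt * sum(f[j] * h[i - j] for j in range(i + 1))   # Riemann sum of f(tau) h(t - tau)
exact = 1.0 - math.exp(-2.0)
print(abs(y2 - exact) < 1e-3)    # True: close to 1 - e^{-2} ≈ 0.8647
```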

Interactive: Convolution

y(t) = \int_0^t f(\tau)\, h(t - \tau)\, d\tau

The shaded area under the product f(\tau)\, h(t - \tau) equals the output value at time t.

Watch the convolution integral in action. As time advances, the flipped impulse response slides across the input, and the accumulated product area traces out the output. This mechanical interpretation makes convolution intuitive.

Solving ODEs with Discontinuous Forcing

The real power of Laplace transforms appears when the forcing function is discontinuous. Consider:

y'' + 4y = H(t - \pi) - H(t - 2\pi)

with y(0) = 0 and y'(0) = 0.

This represents a force that turns on at t = \pi and off at t = 2\pi. Taking the Laplace transform:

s^2 Y(s) + 4Y(s) = \frac{e^{-\pi s} - e^{-2\pi s}}{s}

Y(s) = \frac{e^{-\pi s} - e^{-2\pi s}}{s(s^2 + 4)}

Apply partial fractions to \frac{1}{s(s^2 + 4)}:

\frac{1}{s(s^2 + 4)} = \frac{1/4}{s} - \frac{s/4}{s^2 + 4}

The inverse transform, accounting for the exponential delays:

y(t) = H(t - \pi)\left[\frac{1}{4} - \frac{1}{4}\cos(2(t - \pi))\right] - H(t - 2\pi)\left[\frac{1}{4} - \frac{1}{4}\cos(2(t - 2\pi))\right]

The solution is piecewise: zero until the force turns on at t = \pi, then oscillating while the force acts. Because the pulse lasts exactly one period of \cos(2t), the two terms cancel for t \geq 2\pi and the solution returns to rest.
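Evaluating the piecewise formula numerically confirms this; in particular, since the pulse lasts one full period of cos(2t), the two terms cancel exactly for t ≥ 2π (illustrative sketch):

```python
# Evaluate y(t) = H(t-pi)[1/4 - cos(2(t-pi))/4] - H(t-2pi)[1/4 - cos(2(t-2pi))/4].
import math

def y(t):
    out = 0.0
    if t >= math.pi:                 # first step term switches on at t = pi
        out += 0.25 - 0.25 * math.cos(2 * (t - math.pi))
    if t >= 2 * math.pi:             # second term switches on at t = 2*pi
        out -= 0.25 - 0.25 * math.cos(2 * (t - 2 * math.pi))
    return out

print(y(1.0))                        # 0.0: force not yet on
print(abs(y(7.0)) < 1e-12)           # True: terms cancel after t = 2*pi
print(round(y(1.5 * math.pi), 6))    # 0.5: peak while the force acts
```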

Systems of Differential Equations

The Laplace method extends naturally to systems. For a system:

\mathbf{x}' = A\mathbf{x} + \mathbf{f}(t), \quad \mathbf{x}(0) = \mathbf{x}_0

Taking the Laplace transform:

s\mathbf{X}(s) - \mathbf{x}_0 = A\mathbf{X}(s) + \mathbf{F}(s)

(sI - A)\mathbf{X}(s) = \mathbf{x}_0 + \mathbf{F}(s)

\mathbf{X}(s) = (sI - A)^{-1}\left[\mathbf{x}_0 + \mathbf{F}(s)\right]

The matrix (sI - A)^{-1} is called the resolvent of A. Inverting it and taking inverse Laplace transforms gives the solution.
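For a 2×2 system the resolvent can be applied directly via the adjugate formula, inverse = adjugate / determinant. The sketch below (the `resolvent_apply` helper is illustrative) uses the companion matrix of y'' + 3y' + 2y = 0 with x₀ = (1, 0), so the first component of X(s) is the Y(s) = (s + 3)/((s + 1)(s + 2)) found earlier:

```python
# Apply (sI - A)^{-1} to a vector for a 2x2 matrix A, using adjugate / determinant.
from fractions import Fraction

def resolvent_apply(A, s, v):
    a, b = s - A[0][0], -A[0][1]
    c, d = -A[1][0], s - A[1][1]
    det = a * d - b * c                 # det(sI - A): the characteristic polynomial
    return [Fraction(d * v[0] - b * v[1], det),
            Fraction(-c * v[0] + a * v[1], det)]

A = [[0, 1], [-2, -3]]                  # companion matrix of y'' + 3y' + 2y
x0 = [1, 0]                             # y(0) = 1, y'(0) = 0
X = resolvent_apply(A, 3, x0)           # evaluate X(s) at s = 3
print(X[0])                             # 3/10, i.e. (s + 3)/((s+1)(s+2)) at s = 3
```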

Why Engineers Love This Method

The Laplace transform is ubiquitous in engineering because:

  1. Initial conditions are built in: No need to solve the homogeneous equation separately and then match conditions.

  2. Discontinuous forcing is natural: Step functions, pulses, and impulses are handled seamlessly.

  3. Transfer functions enable modular design: Complex systems can be analyzed by combining simpler blocks.

  4. Stability analysis: Poles in the right half-plane mean instability; this is visible directly from H(s).

  5. Frequency response: Evaluating H(i\omega) gives the system's response to sinusoidal inputs at frequency \omega.

Control theory, signal processing, and circuit analysis all rely heavily on Laplace transform techniques.

Key Takeaways

  • The inverse Laplace transform converts Y(s) back to y(t), completing the solution process
  • Partial fraction decomposition breaks rational functions into simple terms with known inverses
  • The Heaviside step function H(t - a) models sudden switches; its transform is e^{-as}/s
  • The Dirac delta function \delta(t - a) models impulses; its transform is e^{-as}
  • The transfer function H(s) is the Laplace transform of the impulse response and completely characterizes a linear system
  • Convolution (f * h)(t) gives the response to any input once the impulse response is known; in the s-domain, convolution becomes multiplication
  • The shifting theorems handle exponential factors and time delays: e^{at} in time shifts F(s) to F(s - a), while a delay by a multiplies by e^{-as}
  • This method excels at discontinuous and impulsive forcing, where classical methods struggle