In my last blog post, I demonstrated how to derive the Leapfrog integrator using Taylor series for \(x(t + \Delta t)\), \(v(t + \Delta t)\), and \(a(t + \Delta t)\). I mentioned that since we truncated the series for \(x(t + \Delta t)\) and \(v(t + \Delta t)\) after the second-order terms, the Leapfrog integrator is a second-order method whose error is divided by eight every time we halve the timestep.

In this blog post, I’ll analyze my statements about the error in more depth. First, I’ll give a definition for the truncation error \(e_{t + \Delta t}\) for a single step. Assume that the differential equation has an analytical solution \(x(t + \Delta t)\) and the estimated solution generated by the Leapfrog integrator is given by \(\tilde{x}(t + \Delta t)\). Then, the truncation error is simply the difference between the real and approximate solutions:

\[e_{t+\Delta t} = |x(t + \Delta t) - \tilde{x}(t + \Delta t)|\]

We can verify the local truncation error for a single step by comparing the analytical solution at time \(t + \Delta t\) with the result of a single step of the Leapfrog integrator. First, assume that we can express the analytical solution as a Taylor series:

\[x(t + \Delta t) = x(t) + x'(t)\Delta t + \frac{1}{2} x''(t) \Delta t^2 + \frac{1}{6} x'''(t) \Delta t^3\]

We substitute the Taylor series and the Leapfrog equation for \(x(t + \Delta t)\) into \(e_{t+\Delta t}\), noting that \(x'(t) = v(t)\) and \(x''(t) = a(t)\). Errors tend to accumulate from step to step, so \(x(t)\), \(v(t)\), and \(a(t)\) must all come from the analytical solution to ensure we are measuring only a single step of the Leapfrog method.

\[e_{t+\Delta t} = \left|x(t) + x'(t)\Delta t + \frac{1}{2} x''(t) \Delta t^2 + \frac{1}{6} x'''(t) \Delta t^3 - \left[x(t) + v(t)\Delta t + \frac{1}{2} a(t) \Delta t^2\right]\right| \\ = \left|\frac{1}{6} x'''(t) \Delta t^3\right| \\ = \mathcal{O}(\Delta t^3)\]
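As a quick sanity check, we can take a single Leapfrog step from the exact state of a harmonic oscillator and watch the error scale as \(\mathcal{O}(\Delta t^3)\): halving the step should divide the error by roughly eight. This is only a sketch; the frequency `OMEGA`, the unit amplitude, and the kick-drift-kick form of the step are assumptions for illustration, not necessarily the setup from the previous post.

```python
import math

OMEGA = 2.0  # hypothetical oscillator frequency (rad/s)

def analytic(t, a0=1.0):
    """Analytical solution of x'' = -OMEGA^2 x with x(0)=a0, v(0)=0."""
    x = a0 * math.cos(OMEGA * t)
    v = -a0 * OMEGA * math.sin(OMEGA * t)
    return x, v

def leapfrog_step(x, v, dt):
    """One kick-drift-kick Leapfrog step for the harmonic oscillator."""
    a = -OMEGA**2 * x
    x_new = x + v * dt + 0.5 * a * dt**2
    a_new = -OMEGA**2 * x_new
    v_new = v + 0.5 * (a + a_new) * dt
    return x_new, v_new

def single_step_error(t, dt):
    """Local truncation error of one step started from the exact state."""
    x, v = analytic(t)
    x_num, _ = leapfrog_step(x, v, dt)
    x_true, _ = analytic(t + dt)
    return abs(x_true - x_num)

# A third-order local error shrinks by about 2^3 = 8 when dt is halved.
e1 = single_step_error(1.0, 0.1)
e2 = single_step_error(1.0, 0.05)
print(e1 / e2)  # close to 8
```

The ratio is not exactly 8 because the neglected \(\Delta t^4\) and higher terms still contribute a little at finite step sizes.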

Using a harmonic oscillator solved both analytically and with the Leapfrog method, we can verify the per-step error empirically. We evaluate the analytical solution at discretized time points over 100 seconds for a range of time steps. At each time point, we also compute the approximate solution with a single Leapfrog step starting from the previous analytical state. For each step size, we then average the local truncation errors over all time steps, producing the following table:

| Step size (s) | Steps  | Average position error | Average velocity error |
|--------------:|-------:|-----------------------:|-----------------------:|
| 0.1           | 1000   | 0.02623153722          | 0.08734682373          |
| 0.05          | 2000   | 0.003287155951         | 0.01060538765          |
| 0.025         | 4000   | 0.0004111489105        | 0.001300412672         |
| 0.01          | 10000  | 2.63E-05               | 8.28E-05               |
| 0.005         | 20000  | 3.29E-06               | 1.03E-05               |
| 0.0025        | 40000  | 4.11E-07               | 1.29E-06               |
| 0.001         | 100000 | 2.63E-08               | 8.27E-08               |
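The experiment above can be sketched as follows. The oscillator parameters here (unit amplitude, a hypothetical frequency `OMEGA`) are my own choices, so the printed numbers will not match the table exactly, but the roughly eightfold drop per halving of the step size should.

```python
import math

OMEGA = 2.0  # hypothetical oscillator frequency; x'' = -OMEGA**2 * x

def analytic(t):
    """Analytical solution with x(0)=1, v(0)=0."""
    return math.cos(OMEGA * t), -OMEGA * math.sin(OMEGA * t)

def leapfrog_step(x, v, dt):
    """One kick-drift-kick Leapfrog step."""
    a = -OMEGA**2 * x
    x_new = x + v * dt + 0.5 * a * dt**2
    v_new = v + 0.5 * (a + (-OMEGA**2 * x_new)) * dt
    return x_new, v_new

def average_local_errors(dt, duration=100.0):
    """Average |analytic - single Leapfrog step| over every step of the run."""
    steps = int(round(duration / dt))
    err_x = err_v = 0.0
    for i in range(steps):
        t = i * dt
        x, v = analytic(t)           # restart each step from the exact state
        xn, vn = leapfrog_step(x, v, dt)
        xt, vt = analytic(t + dt)
        err_x += abs(xt - xn)
        err_v += abs(vt - vn)
    return err_x / steps, err_v / steps

for dt in (0.1, 0.05, 0.025):
    ex, ev = average_local_errors(dt)
    print(f"{dt:>6} {ex:.3e} {ev:.3e}")
```

Restarting each step from the analytical state is the key detail: it keeps the measurement local, so global error accumulation never enters the picture.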

We can then estimate the order of the error with respect to the time step. Assume the average error is dominated by a power of the time step. We model that relationship with a power law and take the logarithm of both sides:

\[\bar{e} = c \Delta t^n \\ \log \bar{e} = \log \left( c \Delta t^n \right) \\ \log \bar{e} = \log \Delta t^n + \log c \\ \log \bar{e} = n \log \Delta t + \log c\]

where \(\bar{e}\) is the average error, \(\Delta t\) is the time step, \(n\) is the order, and \(c\) is a constant.

Using the log-log formulation, we can estimate \(n\) and \(c\) using linear regression on the table of time steps and average errors.

| Error type | Estimated order | \(r^2\)        |
|------------|-----------------|----------------|
| Position   | 2.99961268983   | 0.999999966311 |
| Velocity   | 3.01051197354   | 0.999993840038 |
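The fit itself can be reproduced with ordinary least squares on the log-log data. The position errors below are taken from the table earlier in the post; the fitting helper is a plain least-squares sketch, not the code from the repository.

```python
import math

# Step sizes and average position errors from the table above
step_sizes = [0.1, 0.05, 0.025, 0.01, 0.005, 0.0025, 0.001]
pos_errors = [2.623153722e-2, 3.287155951e-3, 4.111489105e-4,
              2.63e-5, 3.29e-6, 4.11e-7, 2.63e-8]

def fit_order(dts, errs):
    """Least-squares fit of log(e) = n*log(dt) + log(c); returns (n, r^2)."""
    xs = [math.log(d) for d in dts]
    ys = [math.log(e) for e in errs]
    k = len(xs)
    mx, my = sum(xs) / k, sum(ys) / k
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    n = sxy / sxx                        # slope = estimated order
    ss_res = sum((y - (my + n * (x - mx))) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return n, 1 - ss_res / ss_tot

order, r2 = fit_order(step_sizes, pos_errors)
print(order, r2)  # order close to 3, r^2 very close to 1
```

The slope of the fitted line is the estimated order \(n\), and the intercept recovers \(\log c\), though only the slope matters for verifying the method's order.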

We can see that the estimated orders of the local position and velocity truncation errors are close to 3, matching our expectation from the analytical analysis. Our implementation is thus likely correct.

(All code for this blog post is available on GitHub under the Apache License v2.)