$$\begin{aligned}
\delta_{j} &= \frac{1}{j!} \frac{\mathop{}\!\mathrm{d} }{\mathop{}\!\mathrm{d} c} \prod_{i=1}^j (c+i-2) \Big|_{c=1} \\
&= \frac{1}{j!} \sum_{i=1}^j \prod_{k=1 \wedge k\ne i}^j (c+k-2) \Big|_{c=1} \\
&= \frac{1}{j!} \sum_{i=1}^j \prod_{k=1 \wedge k\ne i}^j (k-1) \quad \text{note that the product vanishes when $i\ne 1$} \\
&= \frac{1}{j!} \prod_{k=2}^j (k-1) = \frac{1}{j!} (j-1)! = \frac{1}{j}.
\end{aligned}$$
Together, the constant step size BDF is
$$\sum_{j=1}^s \frac{1}{j} \nabla^j u_{n+1} = h u'_{n+1}.$$
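As a quick sanity check, the coefficients $1/j$ can be verified numerically: the constant step size BDF differentiates polynomials of degree up to $s$ exactly. A minimal sketch in Julia (the helper `nabla` is not from the text; it computes $\nabla^j$ of the newest value from a newest-to-oldest history vector):

```julia
# ∇ʲ of the newest value, given a history vector ordered newest → oldest:
# ∇ʲu = Σᵢ (-1)ⁱ binomial(j, i) u[i+1]
nabla(u, j) = sum((-1)^i * binomial(j, i) * u[i + 1] for i in 0:j)

# Left-hand side of the constant step size BDF: Σⱼ (1/j) ∇ʲuₙ₊₁,
# using exact rational arithmetic
bdf_lhs(u, s) = sum((1 // j) * nabla(u, j) for j in 1:s)

# For u(t) = t³ sampled on the unit grid t = 1, 0, -1, -2 (h = 1),
# the 3rd order BDF reproduces h·u'(1) = 3 exactly
bdf_lhs([1, 0, -1, -8], 3)
```

The rational arithmetic (`//`) makes the exactness check free of floating-point noise.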
Varying the step size by transplanting to an equidistant grid
A simple strategy to vary the step size is to evaluate the old polynomial $p$ at the equidistant grid $t_n - i\tilde{h}$ for $i = 0, 1, 2, \dots, s$ to obtain an interpolated "constant step size history" when changing to a new step size $\tilde{h}$. During the step changing process, we do not need to compute $u_{n+1}$; hence, we only need the polynomial $p_{s,n}(t_n + ch)$. This strategy is simple because evaluating and re-interpolating a polynomial interpolant are linear transformations of the history, so we can solve for matrices that perform these transformations instead of working with polynomials explicitly.
Note that the only step size dependent term is $\nabla_h^j u_n$. Let's define $r = \tilde{h}/h$.
With the interpolation condition, we force $p_{s,n}(t_n + c\tilde{h}) = p_{s,n}(t_n + crh) = \tilde{p}_{s,n}(t_n + c\tilde{h})$ for $c = 0, -1, \dots, -s$. Note that when $c = 0$, the interpolation condition simplifies to $\tilde{p}_{s,n}(t_n) = u_n = p_{s,n}(t_n)$, which always holds. Therefore, we only need to solve
Note that $c = -k$ for $k = 1, 2, \dots, s$. We end this section by remarking that $U^2 = I$.
julia> using LinearAlgebra
julia> U(s) = [prod(i->i-k, 0:j-1)//factorial(j) for j in 1:s, k in 1:s]
U (generic function with 1 method)
julia> U(5)^2 == I
true
Therefore, the divided differences with $\tilde{h}$ are simply

$$\tilde{D} = D(RU).$$
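To make this concrete, here is a sketch of the transplanting in Julia. The explicit entries of $R$ below are an assumption: they follow the same product formula as $U$, but evaluated at $c = -kr$, which is consistent with $U$ being the special case $r = 1$. The round trip $h \to \tilde{h} \to h$ must recover the original history exactly, since re-interpolating a polynomial of degree $\le s$ through $s+1$ of its own values reproduces it:

```julia
using LinearAlgebra

# U as in the snippet above, and R with the grid scaled by r = h̃/h;
# note that U(s) == R(s, 1)
U(s) = [prod(i -> i - k, 0:j-1) // factorial(j) for j in 1:s, k in 1:s]
R(s, r) = [prod(i -> i - k * r, 0:j-1) // factorial(j) for j in 1:s, k in 1:s]

# Transplant the difference history D (an m × s matrix) to step size r*h
transplant(D, r) = D * (R(size(D, 2), r) * U(size(D, 2)))
```

For example, for $u(t) = t^2$ with $h = 1$ the differences are $D = [-1 \; 2]$, and transplanting to $r = 2$ gives $[-4 \; 8]$, the differences of the same parabola on the doubled grid.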
Updating backward differences
A natural choice for the predictor is then

$$u_{n+1}^{(0)} = p_{s,n}(t_n + h) = u_n + \sum_{j=1}^s D_j.$$

From the definition of the backward difference, we have

$$\begin{aligned}
\nabla_h^{s+1} u_{n+1} &= \nabla_h^{s} u_{n+1} - \nabla_h^{s} u_n \\
&= \nabla_h^{s-1} u_{n+1} - \nabla_h^{s-1} u_n - \nabla_h^{s} u_n \\
&= \nabla_h^{s-2} u_{n+1} - \nabla_h^{s-2} u_n - \nabla_h^{s-1} u_n - \nabla_h^{s} u_n \\
&= u_{n+1} - u_n - \dots - \nabla_h^{s-2} u_n - \nabla_h^{s-1} u_n - \nabla_h^{s} u_n \\
&= u_{n+1} - u_{n+1}^{(0)}.
\end{aligned}$$
Therefore, the $(s+1)$-th order backward difference of the next step is simply the difference between the corrector $u_{n+1}$ and the predictor $u_{n+1}^{(0)}$. Also, note that from
$$\nabla_h^j u_{n+1} = \nabla_h^{j+1} u_{n+1} + \nabla_h^j u_n,$$
where $j \in \mathbb{N}$, we can compute all the lower order backward differences of the next step, and
$$\nabla_h^{s+2} u_{n+1} = \nabla_h^{s+1} u_{n+1} - \nabla_h^{s+1} u_n.$$
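These update relations translate directly into a single downward sweep per step. A sketch for a scalar state, assuming `D[j]` stores $\nabla_h^j u_n$ (the function name and storage layout are illustrative, not from the text):

```julia
# Given D[j] = ∇ʲuₙ for j = 1, …, s, the previous value uₙ, and the
# corrector uₙ₊₁, return Dnew[j] = ∇ʲuₙ₊₁ for j = 1, …, s+1.
function update_differences(D, u_next, u_n)
    s = length(D)
    u_pred = u_n + sum(D)            # predictor u⁽⁰⁾ = uₙ + Σⱼ Dⱼ
    Dnew = zeros(eltype(D), s + 1)
    Dnew[s + 1] = u_next - u_pred    # ∇ˢ⁺¹uₙ₊₁ = uₙ₊₁ − u⁽⁰⁾
    for j in s:-1:1
        Dnew[j] = Dnew[j + 1] + D[j] # ∇ʲuₙ₊₁ = ∇ʲ⁺¹uₙ₊₁ + ∇ʲuₙ
    end
    return Dnew
end
```

For instance, with history $u_{n-2} = 1$, $u_{n-1} = 2$, $u_n = 5$ we have $D = [3, 2]$, and for $u_{n+1} = 11$ the sweep returns $[6, 3, 1]$, matching the backward differences of the new history computed from scratch.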
We can use the above updating relations to simplify the left-hand side of the BDF:

$$\sum_{j=1}^s \frac{1}{j} \nabla_h^j u_{n+1} = \gamma_s \left( u_{n+1} - u_{n+1}^{(0)} \right) + \sum_{j=1}^s \frac{1}{j} \sum_{k=j}^s \nabla_h^k u_n,$$

where $\gamma_j = \sum_{k=1}^j \frac{1}{k}$. This naïve approach contains a lot of redundant computation, and the computational complexity is $O(s^2 m)$ for $u_n \in \mathbb{R}^m$. We can do a lot better if we reorder the summation:
$$\begin{aligned}
\sum_{j=1}^s \frac{1}{j} \sum_{k=j}^s \nabla_h^k u_n &= \sum_{j=1}^s \sum_{k=j}^s \frac{1}{j} \nabla_h^k u_n = \sum_{j=1}^s \sum_{k=1 \wedge k \ge j}^s \frac{1}{j} \nabla_h^k u_n \\
&= \sum_{k=1}^s \sum_{j=1 \wedge j \le k}^s \frac{1}{j} \nabla_h^k u_n \quad \text{note the interchange of summation} \\
&= \sum_{k=1}^s \sum_{j=1}^k \frac{1}{j} \nabla_h^k u_n = \sum_{k=1}^s \left( \sum_{j=1}^k \frac{1}{j} \right) \nabla_h^k u_n = \sum_{k=1}^s \gamma_k \nabla_h^k u_n.
\end{aligned}$$

This expression can be evaluated in $O(sm)$ time.
Putting everything together, the simplified BDF is
$$\gamma_s \left( u_{n+1} - u_{n+1}^{(0)} \right) + \sum_{k=1}^s \gamma_k \nabla_h^k u_n = h u'_{n+1},$$
and we can use a nonlinear equation solver to solve for $u_{n+1}$. We will omit the details of writing a stable and efficient nonlinear solver here, as they are independent of the BDF method itself.
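Since the rearrangement is an exact identity, the simplified left-hand side can be checked against the naive $\sum_j \frac{1}{j} \nabla_h^j u_{n+1}$ on arbitrary data. A sketch in Julia, using exact rational arithmetic so the two forms agree to the last bit (the helper names are illustrative; `nabla` computes $\nabla^j$ of the newest value from a newest-to-oldest history vector):

```julia
# ∇ʲ of the newest value, from a history vector ordered newest → oldest
nabla(u, j) = sum((-1)^i * binomial(j, i) * u[i + 1] for i in 0:j)

gamma(j) = sum(1 // k for k in 1:j)  # γⱼ = Σₖ₌₁ʲ 1/k

# Naive left-hand side: Σⱼ (1/j) ∇ʲuₙ₊₁, O(s²m) when done via updates
naive_lhs(u, s) = sum((1 // j) * nabla(u, j) for j in 1:s)

# Simplified left-hand side: γₛ(uₙ₊₁ − u⁽⁰⁾) + Σₖ γₖ ∇ᵏuₙ, O(sm)
function simplified_lhs(u, s)
    hist = u[2:end]                       # history uₙ, uₙ₋₁, … (newest → oldest)
    D = [nabla(hist, j) for j in 1:s]     # D[j] = ∇ʲuₙ
    u_pred = hist[1] + sum(D)             # predictor u⁽⁰⁾
    return gamma(s) * (u[1] - u_pred) + sum(gamma(k) * D[k] for k in 1:s)
end
```

Both functions take the full value history including $u_{n+1}$ in front; for any such history the two results coincide.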
Local truncation error
By the standard result on polynomial interpolation, we know that

$$u(t) - p_{s,n+1}(t) = \frac{u^{(s+1)}(\xi_t)}{(s+1)!} w(t),$$
where $w(t) = \prod_{j=-1}^{s-1} (t - t_{n-j})$. We assume that all the history is completely accurate and compute the error of $p'_{s,n+1}(t_{n+1})$:

$$u'(t_{n+1}) - p'_{s,n+1}(t_{n+1}) = \frac{u^{(s+1)}(\xi_{t_{n+1}})}{(s+1)!} w'(t_{n+1}).$$

Note that we don't have the term with $w(t_{n+1})$ because it goes to $0$ by the construction of $w$. Expanding $w'(t_{n+1})$:
$$\begin{aligned}
w'(t_{n+1}) &= \sum_{j=-1}^{s-1} \left[ (t_{n+1} - t_{n-j})' \prod_{k=-1 \wedge k \ne j}^{s-1} (t_{n+1} - t_{n-k}) \right] \\
&= \sum_{j=-1}^{s-1} \prod_{k=-1 \wedge k \ne j}^{s-1} (t_{n+1} - t_{n-k}) \quad \text{the product only doesn't vanish when $j = -1$} \\
&= \prod_{k=0}^{s-1} (t_{n+1} - t_{n-k}).
\end{aligned}$$
Hence, the error estimate for the s-th order BDF is
$$\frac{1}{s+1} \nabla_h^{s+1} u_{n+1},$$
and this quantity can easily be computed from the updated backward differences. With an appropriate controller for the step size and order, a quasi-constant step size BDF method can be implemented in its totality.
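In particular, since $\nabla_h^{s+1} u_{n+1} = u_{n+1} - u_{n+1}^{(0)}$, the estimate is just the scaled predictor-corrector difference. A one-line sketch (the function name is illustrative):

```julia
# Error estimate of the s-th order BDF:
# ∇ˢ⁺¹uₙ₊₁ / (s + 1) = (uₙ₊₁ − u⁽⁰⁾) / (s + 1)
error_estimate(u_next, u_pred, s) = (u_next - u_pred) / (s + 1)
```

For example, for $u(t) = t^3$ on the unit grid with $s = 2$ the predictor at $t = 1$ is $u^{(0)} = -5$ while the corrector is $u_{n+1} = 1$, so the estimate is $6/3 = 2$, the same value as $\nabla_h^3 u_{n+1} / 3$ computed directly.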