In today's post, we study ways to accurately interpolate non-periodic, smooth data on a uniform grid.
Intro
This series of posts looks into different strategies for interpolating non-periodic, smooth data on a uniform grid with high accuracy. For an introduction, see the first post of this series. In the following, we will look at subtraction methods. You may find the accompanying Python code on GitHub.
Subtraction Methods
A finite Fourier series associated with a non-periodic function \(f\) converges very slowly, like \(\mathcal{O}(N^{-1})\), where \(N\) is the number of Fourier collocation points. One can improve this accuracy by smoothing the non-periodic function at the domain boundaries. Subtracting a known function \(F\) so that \(g = f - F \in C^0\) (\(g\) is continuous at the domain boundary), the associated Fourier series converges like \(\mathcal{O}(N^{-2})\). Subtraction methods estimate the function's derivatives at the domain boundary, often employing finite-difference approximations. They then determine a suitable set of non-periodic basis functions that meet the estimated boundary conditions, typically by solving a linear system. These basis functions are subtracted from the original data, leaving a smoother function that is amenable to an accurate Fourier transform. Moreover, the subtracted basis functions can be manipulated analytically, e.g. differentiated exactly.
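To make the idea concrete, here is a minimal NumPy sketch of the lowest-order case (illustrative only, not the accompanying GitHub code): a linear ramp removes the boundary jump of \(f\), and the decay of the Fourier coefficients is compared with and without the subtraction.

```python
import numpy as np

# Lowest-order illustration: subtract a linear ramp so that the periodic
# extension of the remainder g = f - F is continuous (C^0), then compare
# the decay of the Fourier coefficients with and without the subtraction.
L, N = 1.0, 256
x = np.linspace(0.0, L, N, endpoint=False)    # uniform periodic grid
f = np.exp(x)                                  # smooth but non-periodic

jump = np.exp(L) - np.exp(0.0)                 # f(L) - f(0)
F = jump * x / L                               # linear ramp carrying the jump
g = f - F                                      # remainder with a continuous extension

k = np.arange(1, N // 2)
c_f = np.abs(np.fft.rfft(f))[1:N // 2]
c_g = np.abs(np.fft.rfft(g))[1:N // 2]
print(np.polyfit(np.log(k), np.log(c_f), 1)[0])   # slope roughly -1
print(np.polyfit(np.log(k), np.log(c_g), 1)[0])   # slope roughly -2
```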
Subtraction methods commonly subtract either polynomials or trigonometric functions. Their accuracy hinges on a precise estimation of the data's derivatives at the boundaries. Sköllermo used partial integration to show that if a function \(f \in C^{2p - 1}\) and \(f^{(2p)}\) is integrable, the Fourier coefficients decay as \(\mathcal{O}(N^{-(2p - 1)})\). However, the maximum order of convergence is fundamentally limited by a variant of Runge's phenomenon, as it is impossible to estimate arbitrarily high-order derivatives using polynomial stencils. In addition, high-order subtraction methods may be relatively unstable when used to build PDE solvers. Based on my experience with wave and fluid equations, I do not believe that PDE solvers using subtraction methods can compete with DFT solvers for periodic PDEs, because they become unstable at high subtraction orders.
In the following, I will implement two subtraction methods: trigonometric and polynomial subtraction.
Trigonometric Subtraction
The trigonometric method, described in Matthew Green's M.Sc. thesis Spectral Solution with a Subtraction Method to Improve Accuracy, estimates boundary derivatives using finite-difference stencils and subtracts an inhomogeneous linear combination of cosine functions, leaving a homogeneous remainder. This remainder can either be expanded using a sine transform or, less efficiently, antisymmetrically extended into a periodic function and then manipulated using a Fourier transform.
Schematically, the process looks as follows (a minimal code sketch follows the list):
Given a grid: \(0, \Delta x, \ldots, L - \Delta x, L\)
Estimate the even derivatives \(f^{(0)}(x_0), f^{(0)}(x_1), f^{(2)}(x_0), f^{(2)}(x_1), \ldots\) of \(f(x)\) at \(x_0 = 0\) and \(x_1 = L\)
Set them to \(0\) by subtracting suitable linear combinations of cosine functions evaluated on the discrete grid
Compute an accurate Fourier transform of the antisymmetric extension of the remainder (or, equivalently, a sine transform)
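Below is a minimal NumPy/SciPy sketch of the lowest-order variant of this scheme (Dirichlet conditions only, no derivative estimation); the helper name trig_subtract_dirichlet is purely illustrative and not taken from the thesis or the accompanying code.

```python
import numpy as np
from scipy.fft import dst, idst

def trig_subtract_dirichlet(f_vals, L):
    """Lowest-order trigonometric subtraction: remove the boundary values with
    a + b*cos(pi*x/L) so that the remainder vanishes at x = 0 and x = L, then
    expand the interior of the remainder in a type-I sine transform."""
    N = len(f_vals) - 1                       # grid 0, dx, ..., L has N + 1 points
    x = np.linspace(0.0, L, N + 1)
    a = 0.5 * (f_vals[0] + f_vals[-1])
    b = 0.5 * (f_vals[0] - f_vals[-1])
    F = a + b * np.cos(np.pi * x / L)         # subtracted cosine combination
    g = f_vals - F                            # remainder with g(0) = g(L) = 0
    g_hat = dst(g[1:-1], type=1)              # sine coefficients of the interior
    return g_hat, F

# Round-trip check: recover f from the sine coefficients plus the subtracted part.
L, N = 1.0, 64
x = np.linspace(0.0, L, N + 1)
f = np.exp(x)
g_hat, F = trig_subtract_dirichlet(f, L)
f_rec = np.concatenate(([f[0]], idst(g_hat, type=1) + F[1:-1], [f[-1]]))
print(np.max(np.abs(f_rec - f)))              # round-off level (exact inverse pair)
```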
Extension
It is useful to construct an antisymmetric extension because the \(0^{th}\)-order derivatives, i.e. the function values at the boundaries, are already known from the data and do not need to be estimated.
Ideally, a sine transform should be used instead of the antisymmetric extension. It has the same analytical properties and is roughly twice as fast as a DFT of the doubled, antisymmetrically extended array. The following figure demonstrates the latter process using an exponential function.
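As a small consistency check (an illustrative sketch, not from the accompanying code), the DFT of the antisymmetric extension and a type-I sine transform of the interior points indeed carry the same information, up to sign and ordering:

```python
import numpy as np
from scipy.fft import dst

# For a remainder g with g(0) = g(L) = 0, the DFT of its antisymmetric
# extension is purely imaginary and reproduces the type-I sine transform
# of the interior points (up to sign and index offset).
L, N = 1.0, 32
x = np.linspace(0.0, L, N + 1)
g = x * (L - x) * np.exp(x)                     # smooth function vanishing at 0 and L

g_ext = np.concatenate([g[:N], -g[N:0:-1]])     # antisymmetric extension, 2N points
fourier = np.fft.rfft(g_ext)
sine = dst(g[1:-1], type=1)

print(np.max(np.abs(fourier.real)))               # ~ round-off: spectrum is purely imaginary
print(np.max(np.abs(-fourier[1:N].imag - sine)))  # ~ round-off: both transforms agree
```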
Accuracy
Computing derivatives by summing the analytical derivatives of the cosine functions (with the appropriate coefficients) and the numerical derivatives obtained from the Fourier transform of the extended remainder, we see that while the first derivative can be computed very accurately, higher derivatives become less and less accurate.
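As a sketch of how such a derivative is assembled (again only the lowest-order variant, with an illustrative function name), the analytical derivative of the subtracted cosine is added to a spectral derivative of the antisymmetric extension of the remainder:

```python
import numpy as np

def first_derivative_lowest_order(f_vals, L):
    """Assemble f' as (analytical derivative of the subtracted cosine)
    + (spectral derivative of the antisymmetric extension of the remainder)."""
    N = len(f_vals) - 1
    x = np.linspace(0.0, L, N + 1)
    a = 0.5 * (f_vals[0] + f_vals[-1])
    b = 0.5 * (f_vals[0] - f_vals[-1])
    F = a + b * np.cos(np.pi * x / L)
    dF = -b * (np.pi / L) * np.sin(np.pi * x / L)      # analytical derivative of F
    g = f_vals - F
    g_ext = np.concatenate([g[:N], -g[N:0:-1]])        # antisymmetric extension, period 2L
    k = 2.0 * np.pi * np.fft.rfftfreq(2 * N, d=L / N)  # angular wavenumbers of the extension
    dg = np.fft.irfft(1j * k * np.fft.rfft(g_ext))     # spectral derivative of the remainder
    return dg[:N + 1] + dF

L = 1.0
for N in (64, 128, 256):
    x = np.linspace(0.0, L, N + 1)
    err = np.max(np.abs(first_derivative_lowest_order(np.exp(x), L) - np.exp(x)))
    print(N, err)   # error shrinks only slowly at this lowest subtraction order
```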
Decay of Fourier Coefficients
Finally, the decay of the Fourier coefficients can be beautifully visualised. The stability and complexity of the algorithm are determined by how many derivatives are estimated. All odd derivatives of the antisymmetric extension are continuous at the domain boundary for smooth input data. Therefore, ensuring continuity of the function itself at the domain boundary implies that the Fourier coefficients decay like \(\mathcal{O}(N^{-3})\), faster than without the antisymmetric extension. This can be achieved by subtracting a linear combination of two cosine functions such that the remainder satisfies Dirichlet boundary conditions. Every further pair of trigonometric functions subtracted increases the order of convergence by \(2\). Note that there are additional discretisation errors from the DFT and from the approximation of the boundary derivatives, discussed in the above references.
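The lowest-order claim is easy to verify numerically (a sketch under the same assumptions as above): the raw DFT coefficients of the non-periodic data decay roughly like \(k^{-1}\), while the sine coefficients of the Dirichlet remainder decay roughly like \(k^{-3}\).

```python
import numpy as np
from scipy.fft import dst

L, N = 1.0, 512
x = np.linspace(0.0, L, N + 1)
f = np.exp(x)

# remove the boundary values with a + b*cos(pi*x/L)
a, b = 0.5 * (f[0] + f[-1]), 0.5 * (f[0] - f[-1])
g = f - (a + b * np.cos(np.pi * x / L))

k = np.arange(1, N // 2)
c_raw = np.abs(np.fft.rfft(f[:-1]))[1:N // 2]       # DFT of the non-periodic data
c_sub = np.abs(dst(g[1:-1], type=1))[:N // 2 - 1]   # sine coefficients of the remainder
print(np.polyfit(np.log(k), np.log(c_raw), 1)[0])   # slope roughly -1
print(np.polyfit(np.log(k), np.log(c_sub), 1)[0])   # slope roughly -3
```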
Polynomial Subtraction
For the polynomial subtraction, I demonstrate a slightly different approach. Instead of constructing an antisymmetric extension, I subtract a polynomial that matches the boundary values and derivatives, so that the remainder becomes periodic. The remainder can then be expanded using a DFT. In principle, one might expect antisymmetric extensions to yield higher accuracy because they achieve more continuous derivatives at the same subtraction order. However, my numerical experiments indicate that both polynomial and trigonometric subtraction run into similar accuracy limits caused by Runge's phenomenon.
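A minimal sketch of this approach follows, with two simplifications relative to the post: the boundary derivatives are passed in directly instead of being estimated with finite differences, and the helper name is purely illustrative.

```python
import numpy as np
from math import factorial

def polynomial_subtraction(f_vals, derivs_0, derivs_L, L):
    """Subtract a degree-(2m+1) polynomial whose derivatives up to order m
    match those of f at x = 0 and x = L, so that the remainder g = f - p
    extends periodically with m continuous derivatives.
    derivs_0 / derivs_L hold f(0), f'(0), ..., f^(m)(0) and the same at x = L."""
    m = len(derivs_0) - 1
    deg = 2 * m + 1
    x = np.linspace(0.0, L, len(f_vals))
    # Hermite-type conditions: rows are d^k/dx^k of x^j evaluated at x = 0 and x = L.
    A = np.zeros((2 * (m + 1), deg + 1))
    rhs = np.zeros(2 * (m + 1))
    for k in range(m + 1):
        for j in range(k, deg + 1):
            c = factorial(j) / factorial(j - k)
            A[k, j] = c * 0.0 ** (j - k)          # only j = k survives at x = 0
            A[m + 1 + k, j] = c * L ** (j - k)
        rhs[k] = derivs_0[k]
        rhs[m + 1 + k] = derivs_L[k]
    coeffs = np.linalg.solve(A, rhs)
    p = sum(c * x ** j for j, c in enumerate(coeffs))
    return f_vals - p, coeffs

# Example with a 9th-order polynomial (m = 4), using the exact derivatives of exp.
L, N, m = 1.0, 64, 4
x = np.linspace(0.0, L, N + 1)
f = np.exp(x)
g, _ = polynomial_subtraction(f, [np.exp(0.0)] * (m + 1), [np.exp(L)] * (m + 1), L)
print(np.max(np.abs(g)))        # the remainder is tiny for this smooth example
g_hat = np.fft.rfft(g[:-1])     # remainder is (C^4-)periodic: a plain DFT applies
```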
Extension
The following figure demonstrates the subtraction of a 9th-order polynomial from an exponential function. It is evident that a high-order polynomial can approximate exponential functions well, and the remainder is close to zero.
Accuracy
A suitably chosen 9th-order subtraction polynomial ensures that the homogeneous remainder is four times continuously differentiable. Accordingly, we expect order-unity oscillations from the fifth derivative onwards, as confirmed by plotting the error of the reconstructed derivatives:
It is worth noting that, in my experience, all spectral methods acting on non-periodic data (including Chebyshev methods on Chebyshev grids) share one drawback: reconstruction errors at the domain boundaries close to the discontinuities are often orders of magnitude higher than in the domain center. This is also demonstrated in the following figure, where the logarithm of the reconstruction error is shown in blue, and the reconstructed function is in red.
Decay of Fourier Coefficients
Studying the decay of the Fourier coefficients reveals that every additional pair of polynomial basis functions subtracted increases the order of convergence by \(1\), not by \(2\) as in the case of the antisymmetric extension. Comparing the decay of the coefficients between the two methods shows that the coefficients of the polynomial subtraction method reach machine precision faster than those of the trigonometric subtraction method, despite the same order of convergence. This is one of the reasons why I do not think that either method should necessarily be preferred over the other.