\documentclass{article}
\usepackage{fullpage}
\title{DRAFT:
Integration of oscillatory integrals, a computer-algebra approach}
\author {Richard Fateman\\
Computer Science\\
University of California\\
Berkeley, CA, USA}
\begin{document}
\maketitle
\begin{abstract}
The numerical integration of oscillatory integrals is an important and
well-studied area of mathematical inquiry. See \cite{HOP} for a
series of recent conferences on the topic, including
applications. Examining some of the proffered methods from the
position of a computer algebra system (CAS) provides several
opportunities not available with purely numerical approaches. These
include the computing of symbolic derivatives as part of the
processing, the provision of a framework for error control based on
exact or arbitrarily-high-precision arithmetic, and a relatively
simple exposition upon which further elaborations may be built. By
executing code in a symbolic system it is also possible to compute
approximate answers as expressions in terms of symbolic parameters
such as frequency, displaying the nature of dependencies as well as
reusable formulas for different values of parameters.
\end{abstract}
\section{Introduction}
Assume that we have a computer algebra system at our disposal, and
we need numerical or semi-numerical approximations to certain integrals
that we can write in this form:
$$I~=~\int_{a}^b e^{i \omega f(x)}g(x)dx.$$
As the real function $f(x)$ varies for $x$ in $[a,b]$, the complex
exponential (alternatively the literature may use sines or cosines
since $\exp(i x)~=~\cos(x)+i \sin(x)$) produces oscillations in
the integrand. As is done commonly in this context, we multiply $f(x)$
by a real parameter $\omega$ to allow us to demonstrate how the
situation changes with frequency. That is, given a particular $f(x)$,
the integrand with $\omega=100$ oscillates at a higher frequency, and
is more oscillatory, than the one with $\omega=1$. In our subsequent
considerations, the situation is
that $f(x)$ and $g(x)$ are analytic and vary relatively modestly in
the interval $[a,b]$. If they are not, the integral may be broken
up into pieces where these assumptions hold.
Sometimes it is computationally convenient to deal
with the real and imaginary parts of the integral separately, but
notationally, the complex exponential is more often used in the literature.

This type of integral is problematic for typical quadrature programs
since almost any procedure which is based on sampling of the integrand
will be subject to the unreliable numerical consequences resulting
from computing values of the rapidly varying integrand. Other
approaches, targeted to oscillatory integrands, can be quite
successful on these. A readable survey of various approaches from the
theoretical and numerical analysis perspective has been composed by
Iserles \cite{iserles05}.

There are apparently significant applications of this type of integral
in investigations of optics \cite{HOP}. There are also many papers
and computer programs ranging back to work by Filon \cite{filon28},
addressing this application. One recent computer algebra package by
Andrew Moylan \cite{LevinIntegrate}, has been incorporated into
Mathematica version 8.0's {\tt NIntegrate} command. {\tt NIntegrate}
is an impressive collection of methods; unfortunately as apparently is
the case with ``automatic'' quadrature packages, particular problems
may not fit well with the default choices made by the system. In such
cases the user may be required to deal with the mundane aspects of
trying to track many options, estimating errors, adjusting precision,
etc. Especially in these cases, understanding some of the consequences
of the user-available choices may be helpful.

In subsequent sections we review a few of the approaches specifically for oscillatory integrals that have
occurred in the literature and view each from a simple tutorial
perspective to see how these ideas fit in a computer algebra
framework. Understanding simple programs may allow the reader to
adjust them to particular situations. Rather than dealing with the
full complexity of packages intended to account for all eventualities
for all inputs, we hope to provide a readable exposition.
\section{Filon's method, more or less}
The idea from Filon \cite{filon28} is to approximate $g(x)$ piecewise
as a polynomial, each section looking like $g_0+g_1x+ g_2x^2$.
Instead of integrating $g(x)$ using Simpson's Rule,
consider that you can integrate $x^k\exp(i \omega f(x))$ for
$k=0,1,2$. This gives you alternatives for the Simpson's Rule
weights, and the integral can be written as the (piecewise)
sum of the modified Simpson's Rule. If we were actually integrating
a quadratic polynomial times an exponential, we could write it out:
$${{e^{i\,\omega\,x}}\over{\omega^3}}\left(
\left(-{ g_2}\,\omega^2\,x^2-{ g_1}\,\omega^2\,x-{ g_0}\,
\omega^2+2\,{ g_2}\right)\,i+2\,{ g_2}\,\omega\,x+{ g_1}\,
\omega\right)$$
Of course you may not start with the coefficients in that polynomial
but only its value at several points. Three points determine a
quadratic, and thus the polynomial can be computed by interpolation.
As an example, the interpolation algebra works out simply if you have
values of $g$ at the points $-1,~0,~+1$ in which case
$$ g_0=g\left(0\right), \qquad g_1={{g\left(1\right)-g\left(-1\right)}\over{2}},
\qquad g_2={{g\left(1\right)-2\,g\left(0\right)+g\left(-1\right)}\over{2}}. $$
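To make the pieces concrete, here is a short Python sketch (our own illustration, not part of Filon's presentation; the helper-function names are ours) combining the interpolation coefficients with the closed-form antiderivative displayed above:

```python
import cmath

def quad_coeffs(gm1, g0, gp1):
    """Coefficients g0, g1, g2 of the quadratic through the
    sample values g(-1)=gm1, g(0)=g0, g(1)=gp1."""
    return g0, (gp1 - gm1) / 2.0, (gp1 - 2.0 * g0 + gm1) / 2.0

def antideriv(x, w, g0, g1, g2):
    """Closed-form antiderivative of (g0 + g1*x + g2*x^2)*exp(i*w*x),
    transcribed from the displayed formula."""
    return (cmath.exp(1j * w * x) / w**3) * (
        1j * (-g2 * w**2 * x**2 - g1 * w**2 * x - g0 * w**2 + 2 * g2)
        + 2 * g2 * w * x + g1 * w)

def filon_panel(g, w):
    """Integral of g(x)*exp(i*w*x) over [-1,1], with g replaced by its
    quadratic interpolant at -1, 0, 1 (one Filon 'panel')."""
    g0, g1, g2 = quad_coeffs(g(-1.0), g(0.0), g(1.0))
    return antideriv(1.0, w, g0, g1, g2) - antideriv(-1.0, w, g0, g1, g2)
```

For a quadratic $g$ the result is exact up to roundoff; for general $g$, one sums such panels over (rescaled) subintervals.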
There are advantages to alternative techniques in accuracy and speed
if (more) points are not equally spaced, but chosen instead to
facilitate Gaussian
quadrature.
In any case, details are then encoded in numerical routines. The
result is to entirely eliminate the problematic oscillatory term from
the quadrature. Programs for Filon integration for $f(x)=x$ can be
found in scientific subroutine libraries. One precaution of note
included in the ACM library implementation is that it evaluates
certain subexpressions by an alternative series calculation less
subject to cancellation error than the most obvious formula. Another
technique to avoid accumulation of this error, available in a CAS, is
to simply increase the floating-point precision temporarily. There
are numerous tweaks possible on this idea.

The QUADPACK library routine QAWO\footnote{Available
at {\tt http://www.netlib.org}, and included in the CAS Maxima.} and
related programs can also be used. These use a ``modified
Clenshaw-Curtis'' quadrature scheme. The adaptation is that the
function $g(x)$ is approximated by a single Chebyshev series
over the whole range.
%The weights used are those tabulated by the integrals of ($T_n(x) \exp
%(i \omega x)$). We display $n=0 \cdots 4$ here:
%
%$$ -{{i\,e^{i\,\omega\,x}}\over{\omega}} $$
% $$ -{{\left(i\,\omega\,
% x-1\right)\,e^{i\,\omega\,x}}\over{\omega^2}} $$
% $$ -{{\left(2\,i\,
% \omega^2\,x^2-4\,\omega\,x-i\,\omega^2-4\,i\right)\,e^{i\,\omega\,x}
% }\over{\omega^3}} $$
% $$ -{{\left(4\,i\,\omega^3\,x^3-12\,\omega^2\,x^2+
% \left(-3\,i\,\omega^3-24\,i\,\omega\right)\,x+3\,\omega^2+24\right)
% \,e^{i\,\omega\,x}}\over{\omega^4}} $$
%$$ -{{\left(8\,i\,\omega^4\,x^4-
% 32\,\omega^3\,x^3+\left(-8\,i\,\omega^4-96\,i\,\omega^2\right)\,x^2+
% \left(16\,\omega^3+192\,\omega\right)\,x+i\,\omega^4+16\,i\,\omega^2
% +192\,i\right)\,e^{i\,\omega\,x}}\over{\omega^5}} $$
We provide a version of Filon's procedure using the Maxima computer
algebra system to illustrate its brevity. It is based on a FORTRAN program
by Chase and Fosdick\cite{chase}\footnote {We do not display lengthier
``optimized'' code which computes arrays of sines/cosines by
differencing, and/or uses faster machine floats rather than
higher-precision software floats, deals with cosines rather than
sines, uses general endpoints, attempts to estimate error,
etc.}. This is the code for the imaginary part, involving $\sin(\omega x)$;
similar code for $\cos (\omega x)$ computes the real part; combined
they handle the integrand with $\exp(i \omega x)$.
%% note
%% for all n,
%% Integrate[ChebyshevT[n_, x] Sin[x]] :=
%% -Cos[x]*ChebyshevT[n, x] +
%% Integrate[n* ChebyshevU[n-1, x]* Cos[x], x]
%%
%% II[ChebyshevU[n_, x]*Cos[x]] :=
%% ChebyshevU[n, x]*Sin[x] -
%% Integrate[(((-1 - n) ChebyshevU[-1 + n, x] +
%% n *x* ChebyshevU[n, x])* Sin[x])/(-1 + x^2), x]
%%
%%, um, express ChebyU.. U[n]=2*x*U[n-1]-U[n-2]; U0=1, u1=2*x
%% we could always express stuff in terms of 1,x,x^2 and convert to
%% cheby
\begin{verbatim}
filons0(f,m,p):= /*integrate f(x)*sin(m x) from 0 to 1, 2*p+1 points.
  for notation, see CACM Algorithms: Chase & Fosdick Alg 353 */
block([h:bfloat(1/(2*p)),
k:bfloat(m), theta,s2p,s2pm1,alf,bet,gam],
theta:bfloat(k*h),
s2p:
block([sum:0],
for i:0 thru p do sum:sum+f(2*h*i)*sin(2*k*h*i),
sum-1/2*(f(1)*sin(k))),
s2pm1:
block([sum:0],
for i:1 thru p do sum:sum+f(h*(2*i-1))*sin (k*h*(2*i-1)), sum),
fpprec:2*fpprec, /*double the floating precision, precaution */
alf: 1/theta + sin(2*theta)/2/theta^2-2*sin(theta)^2/theta^3,
bet: 2*((1+cos(theta)^2)/theta^2 -sin(2*theta)/theta^3),
gam: 4*(sin(theta)/theta^3-cos(theta)/theta^2),
fpprec:fpprec/2,
  h*(alf*(f(0)-f(1)*cos(k)) +bet*s2p+ gam*s2pm1))
\end{verbatim}
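For comparison, here is a Python transcription of the same Filon sine rule (our sketch; double-precision floats stand in for Maxima's bigfloats, and no error estimation is attempted):

```python
import math

def filon_sin(f, m, p):
    """Filon quadrature for the integral of f(x)*sin(m*x) on [0,1],
    using 2*p+1 equally spaced sample points (m*h must be nonzero)."""
    h = 1.0 / (2 * p)
    theta = m * h
    # even-indexed sum, with the half-weight endpoint correction
    # (the f(0)*sin(0) endpoint term vanishes since the interval starts at 0)
    s_even = sum(f(2 * h * i) * math.sin(2 * m * h * i) for i in range(p + 1)) \
             - 0.5 * f(1.0) * math.sin(m)
    # odd-indexed sum
    s_odd = sum(f(h * (2 * i - 1)) * math.sin(m * h * (2 * i - 1))
                for i in range(1, p + 1))
    # the modified Simpson weights; prone to cancellation for small theta
    alpha = 1.0/theta + math.sin(2*theta)/(2*theta**2) - 2.0*math.sin(theta)**2/theta**3
    beta = 2.0 * ((1.0 + math.cos(theta)**2)/theta**2 - math.sin(2*theta)/theta**3)
    gamma = 4.0 * (math.sin(theta)/theta**3 - math.cos(theta)/theta**2)
    return h * (alpha * (f(0.0) - f(1.0) * math.cos(m)) + beta * s_even + gamma * s_odd)
```

The rule is exact (up to roundoff) for $f$ a polynomial of degree two or less, whatever the value of $m$.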
\section{What if the argument of sin/cos/exp is complicated?}
The case $f(x)=mx$, simply a constant times $x$, is solved by a direct
application of Filon's method or modified Clenshaw-Curtis.
If the function $f(x)$ is more
complicated than this, we can try to make a substitution (change of
variables) \cite{evans} to convert $f(x)$ to this simple form. Let
$y=f(x)$. Solve for $x=f^{-1}(y)$, compute $dx=(f^{-1})'(y)\,dy$, and integrate
$g(f^{-1}(y))\,(f^{-1})'(y)\, e^{i \omega y}\, dy$ between transformed bounds.
Finding a suitable inverse function can be tricky\footnote{Sufficient
conditions, mathematically speaking: $f$ is differentiable and
$f^{-1}$ is continuous. Solution methods in computer algebra systems
may not necessarily assure this second condition, or for that
matter, may not find an explicit inverse.}. Some computer algebra
systems provide a command for this change of variables (e.g. Maxima's {\tt changevar}).
Example:
$$\int_{a}^ {b} {e^{i\,\omega\,\sinh x}}\,\cos x\;dx$$
becomes suitable for Filon quadrature after it is changed to
$$\int_{\sinh a}^{\sinh b}
{{{ e^{i\,\omega\,y}\,\cos {\rm arcsinh}\;
y}\over{\sqrt{y^2+1}}}\;dy}.$$
%integrate(exp(%i*w*y)*cos(asinh (y))/sqrt(y^2+1),y,sinh(a),sinh(b))
%integrate(exp(%i*w*y)*cos(asinh(y))/sqrt(y^2+1),y,sinh(-1),sinh(1))
%quad_qawo(cos(asinh(y))/sqrt(y^2+1),y,sinh(-1),sinh(1),1000,cos);
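The substitution itself is easy to check numerically at a modest frequency, where plain quadrature still resolves the oscillation. A Python sketch (our own check, using Simpson's rule and $\omega=10$ rather than $1000$ so that the computation stays cheap):

```python
import math, cmath

def simpson(fun, a, b, n):
    """Composite Simpson's rule (n must be even) for a complex-valued integrand."""
    h = (b - a) / n
    s = fun(a) + fun(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * fun(a + i * h)
    return s * h / 3.0

w = 10.0  # modest frequency; plain Simpson resolves both forms
original = simpson(lambda x: cmath.exp(1j * w * math.sinh(x)) * math.cos(x),
                   -1.0, 1.0, 4000)
transformed = simpson(lambda y: cmath.exp(1j * w * y) * math.cos(math.asinh(y))
                      / math.sqrt(y * y + 1.0),
                      math.sinh(-1.0), math.sinh(1.0), 4000)
```

The two values agree to roughly ten digits; at $\omega=1000$ the transformed, linear-phase form is the one suitable for Filon-type methods or QAWO.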
It is interesting to see how this change can be used
to pre-process out the difficulty and just use an ordinary
(non-oscillatory) adaptive quadrature program. Given a simple test of
$a=-1,~b=1,~\omega=1000$, Mathematica 7's default numerical
integration program ran on the original problem for 0.8 seconds before
signalling failure. On the transformed result the same program
produced a result in 0.05 seconds. In each case a result of
approximately 1.69206e-4 was obtained, so the failure message was
actually produced erroneously. Quadpack's {\tt QAWO}, called from Maxima,
returns in about 0.001 seconds\footnote{QAWO returns in 0.0002 sec if the
function $\cos({\rm asinh} (y))/\sqrt{y^2+1}$ is compiled.}
with an answer 1.69206436906{\em 66968}e-4, where we know from more
accurate evaluation that the digits in italics are in error.
A Mathematica program oriented toward oscillatory integrands, {\tt
LevinIntegrate} by Andrew Moylan \cite{LevinIntegrate} obtained the same result in 0.05
seconds.
(More about this method later.)
Note that the inverse function could in fact be represented by
an approximation, and computed in a separate step, rather than
being substituted literally into the transformed integrand.
In passing we also note that there is sometimes a requirement to integrate
$\sin^n(x)\,g(x)$ for some integer $n$. Since the power of
$\sin(x)$ can be written as a sum of multiple angles (e.g. $\sin^3 mx
~=~ \frac{1}{4} \left(3 \sin mx - \sin 3mx\right)$), the task can be reduced to the previous
case.
\section{Integration by Parts}
In this section we follow the approach suggested by the work of Iserles
and Norsett on using derivatives \cite{iserles05}; computer algebra
systems provide an efficient route to setting up this kind of method.
Here's the idea. We can use integration by parts as follows:
$$\int e^{i \omega f(x)}g(x)dx~ = ~\int u dv~=~ uv-\int v~ du.$$
Let $u=g/(i \omega f')$, $dv=e^{i \omega f}\times i \omega f'\,dx$. Then $v=e^{i \omega f}$, and
$du=(f'g'-gf'')\,dx/(i \omega(f')^2)$.
There are several points to note. We consider the $uv$ part as
``solved'' and that it requires no further attention other than
ordinary evaluation. We can factor out $-1/(i \omega)$ from the
remaining integral, $K=\int v~ du$. Significantly, $K$ is itself
again of the same form as the original problem, so integration by
parts can be used again. Because the remaining term carries an overall
coefficient of $1/\omega ^2$, we can consider it as a
``correction'' to the $uv$ part, and this term will drop even faster
if the oscillation parameter $\omega$ is larger. (That is, the approximation
gets better for {\em more oscillatory} integrands.) Indeed, by
repeating this operation (recursively, in our program) we can generate
a series in $1/ \omega$, although this asymptotic series does not
actually converge for fixed $\omega$ as we add more terms.
This expansion in series requires the
computation of (perhaps many) derivatives of $g$, although ultimately
these will be evaluated only at the endpoints of the definite
integral. The slightly odd display of $du$ with the $i \omega$ in the
denominator makes for a slightly better appearance of the
formula/program below.
\begin{verbatim}
/*integrate by parts oscillatory integral, ibpoi */
ibpoi(f,g,x,w,L):= ibpoi1(f,g,x,w,L,0); /* set count to 0 */
ibpoi1(f,g,x,w,L,c):= if c>=L then 0 else
block([df:diff(f,x)],
exp(%i*w*f)*g/(%i*w*df)
- 1/(%i*w)*ibpoi1(f, (df*diff(g,x)-g*diff(df,x))/df^2,x,w,L,c+1))
/*our example */
define(v(x),ibpoi(sinh(x),cos(x),x,1000,1));
expand(v(1.0)-v(-1.0));
\end{verbatim}
In this case, just one term yields 1.{\em 702}e-4, which is correct to
3 decimal places. Carrying 3 terms provides 1.69206436{\em 7290}e-4,
good for 9 decimal places. Continuing for 6 terms, the result 1.69206436906{\em
6987}e-4, is good for about 13 places. Ultimately, increasing the
number of terms ceases to increase the accuracy in this asymptotic
series for fixed $\omega$. The final, otherwise omitted, error term
can be evaluated by some other means. Increasing $\omega$,
predictably, will make the series converge faster. Because {\tt
ibpoi} is a symbolic program, we can leave the parameter $\omega$
symbolic instead of using the number 1000, and compute the resulting
formula. For $v(x)$ with one step in the expansion:
$$-{{i\,e^{i\,\omega\,\sinh x}\,\cos x}\over{\omega\,\cosh x}}.$$
Clearly this is not expensive to evaluate.
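Indeed, the one-term claim is easy to check numerically. A Python sketch (ours) evaluating the displayed formula at the endpoints for $\omega=1000$; by the symmetry of the example, the imaginary parts cancel and the result is real:

```python
import cmath, math

def v(x, w=1000.0):
    """One-term integration-by-parts approximation for the running example:
    -i*exp(i*w*sinh(x))*cos(x)/(w*cosh(x)), transcribed from the display."""
    return -1j * cmath.exp(1j * w * math.sinh(x)) * math.cos(x) / (w * math.cosh(x))

approx = v(1.0) - v(-1.0)
```

The real part comes out near 1.702e-4, about $10^{-6}$ away from the accurate value 1.6920643690671596e-4 quoted elsewhere in this paper, consistent with a correction of relative size $O(1/\omega)$.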
For further analysis, see Iserles, (Lemma 2.1) \cite{iserles05}. While
additional intuition may be gleaned from examining these formulas, in
practice just running the four-line program may be quite useful. An
elaboration of the program in case many terms are deemed useful would
take a different approach to computing derivatives: in fact, one need
not compute $f'$, $f''$, etc. One only needs $f'(a)$, $f'(b)$
$f''(a)$, $f''(b)$ etc. These are likely more rapidly calculated by computing
two Taylor series for $f$, one centered at $a$ and one centered at
$b$, and similarly for $g$. Ordinary arithmetic on Taylor series (so
long as they are expanded at the same point) is also provided in most
computer algebra systems.
The Maxima program then looks like this:
\begin{verbatim}
(ibpoilims(f,g,x,w,L,a,b):= /* integrate from a to b */
block([keepfloat:true, ratprint:false],
at(ibpoilims1(taylor(f,x,b,L),taylor(g,x,b,L),x,w,L,0),x=b)
-at(ibpoilims1(taylor(f,x,a,L),taylor(g,x,a,L),x,w,L,0),x=a)),
ibpoilims1(f,g,x,w,L,c):= if c>=L then 0 else
block([df:diff(f,x,1)],
exp(%i*w*f)*g/(%i*w*df)
- 1/(%i*w)*ibpoilims1(f,(df*diff(g,x,1)-g*diff(df,x,1))/df^2,x,w,L,c+1)))
/*our example */
ibpoilims(sinh(x),cos(x),x,1000.0,3,-1.0,1.0); /* integrate from -1 to 1 */
\end{verbatim}
Another tack, similar in concept to the program above but
quite different in execution, would be to use ``automatic
differentiation''\footnote{{\tt http://www.autodiff.org}}.
While the example runs quite fast as given, note that if we used the
command {\tt ibpoilims(sinh(x),cos(x), x,1000,3,-1,1)} with integer
values rather than floats, the program would produce a large symbolic
expression including exact terms like $\sinh(1)$, which can later be
evaluated to any desired precision. Furthermore, if we change the
first line of {\tt ibpoilims1} to return a non-zero function of $x$,
say {\tt ErrorTerm(x)}, we can see how its value enters into the result.
Note that this idea as well as variable substitution (change of
variables) essentially approaches the definite integration
problem by modifying the related {\em
indefinite} integral. Change of variables removes the oscillation,
and integration by parts attenuates the contribution of the
oscillation.
\section{Elaborations and improvements: Levin}
When the integrand is recognized as a case of a smooth function
multiplied by a simple oscillation, Filon's method or modified
Clenshaw-Curtis can be used directly, and oscillation presents no
problem. If we intend for the computer system to use this procedure
automatically as part of a general integration routine
then we need to programmatically recognize that the integral is in fact
appropriately oscillatory, and the routine must separate the pieces.
A major alternative can be based on an idea due to Levin
\cite{levin-82}, which works as follows: Again we compute the {\em
indefinite} integral, a function of $x$
$$H(x)~=~ \int e^{ i \omega f(x)}g(x)dx.$$ Given $H$ one
simply computes $H(b)-H(a)$.
The problem is neatly solved if any one of the infinitely
many formulas for $H$ can be found. $H$, in this
case, is a formula for an indefinite
integral which is valid up to an arbitrary constant! All we
need is some formula which can be evaluated (perhaps only approximately)
at two points. In the course of the subtraction the arbitrary constant
cancels.
Finding some $H$ may be time-consuming though it is something that
computer algebra systems can sometimes do.
{If we can anticipate that it is too time-consuming,
perhaps we won't bother to look! In the case of solving many
problems with the same integrand but different endpoints or internal
parameters, it may be well worth looking.}
% (cite something re Risch \cite{}).
A more
likely case is that it is {\em not especially convenient or possible to find a
closed form}. It is still possible to find a method to approximate
$H$ as closely as desired.
%%%% not ... and in a form that gets better when $\omega$ increases.
The Levin technique \cite{levin-82} proceeds by finding an
approximation for $H$ by observing that we are looking for (or
approximating) a function $p(x)$ such that
$$ e^{f(x)} p(x)~\approx
~\int e^{ f(x)}g(x)dx,$$
where for brevity we have absorbed the factor $i\omega$ into $f$.
If we compute the derivative with respect to $x$ of both sides and divide
out by $e^{f(x)}$, we have a differential equation $$p'(x)+f'(x)p(x)=g(x).$$
All we need to do is find $p$, and then $H(x)=e^{f(x)} p(x)$, leading to
the approximate value of the desired integral, $H(b)-H(a)$.
Levin observes that, under the assumptions that $g$ and $f'$ are at
most slowly oscillatory, among the solutions for $p$
there is one that is not rapidly oscillating. Furthermore,
we can find it by a collocation
method. Using a computer algebra system instead of a
purely numerical approach to finding $p(x)$ can provide additional
information -- like variation of values (and error) with respect to
$\omega$.
%%%
In reviewing options for solving the associated differential equation,
one thought in using computer algebra was to try direct generation of
a Taylor series for $p$, using one of several symbolic-iterative
methods. In fact this is not a good idea because such a solution
requires choosing an expansion point (we could select the midpoint of
the interval) and producing a series whose coefficients are computed
by matching derivatives at that point. Since the integrand is highly
oscillatory, this would not provide a high quality result at any
distance from the point of expansion, and as $\omega$ increased, the
approximation would worsen. Basically, we would find the wrong
solution to the ODE.
Instead, as demonstrated by Levin, we can find an approximation to
a less oscillatory solution by collocation.
{This transforms the computation of finding $p$ into a set of linear
equations for finding a polynomial fit to the solution. Numerous
variations on this technique have emerged to try to overcome
problems with this numerical approach, since it appears that the
generated equations may be an unstable system.}
Let us return to a simple form where we have, for the moment, absorbed
$i \omega$ into $f$. We wish to compute or approximate $p$ in:
$$p'+f'p=g$$
We proceed by assuming some expansion of $p$ in the same (or other)
basis functions but with coefficients to be found by collocation.
Given the basis $\{x^j\}$, and therefore
assuming $p=\alpha_0 + \alpha_1 x + \cdots +\alpha_{k-1}x^{k-1}$, we have a
corresponding $p'= \alpha_1 + \cdots + (k-1) \alpha_{k-1}x^{k-2}$.
Choose some set of collocation points, which might be chosen for
simplicity of exposition (but not optimality) as equally spaced from
$a$ to $b$: $a=r_0, r_1, \ldots, r_{k-1}=b$. We set up $k$ linear equations
by which one can determine the $\{\alpha_j\}, j=0,\ldots,k-1$.
This requires computing $f'(r_j)$ and $g(r_j)$ at those points, as part
of the setup.
Once one has the values of $\{\alpha_j\}, j=0,\ldots,k-1$, and hence an
approximate functional form for $p$, we can use that to compute $p(a)$
and $p(b)$.
Here's a sample program in Maxima's language:
\begin{verbatim}
/* integrate (exp(i*w*f)*g,a,b) by Levin method, k points*/
LevinInt(f,g,a,b,omega,k):= block([varlist,p,pointlist,sol,ratprint:false],
varlist: makelist(alpha[i],i,0,k-1),
p: varlist . makelist(x^i,i,0,k-1),
pointlist : makelist(a+ (b-a)*i/(k-1),i,0,k-1),
define(eq(x), diff(p,x)+%i*omega*diff(f,x)*p-g),
sol:linsolve(eqs:map(eq,pointlist),varlist),
define(H(x), exp(%i*omega*f)*ev(p,sol)),
rectform(H(b)-H(a)))
\end{verbatim}
Using our standard example, {\tt
LevinInt(sinh(x),cos(x),-1.0,1.0,1000,8);} in which we have chosen
collocation at only 8 points, produces a result of 1.69{\em 178...}e-4.
Using 23 collocation points we get 1.6920643{\em 3981}e-4.
%0.000169206436906715960940227786164 + 0.*10^-43
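The same collocation scheme can be sketched in Python (our translation; a small Gaussian-elimination routine stands in for Maxima's {\tt linsolve}, and the function names are ours):

```python
import cmath

def solve_complex(A, b):
    """Gaussian elimination with partial pivoting, complex coefficients."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            t = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= t * M[c][j]
    x = [0j] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

def levin_int(f, fp, g, a, b, w, k):
    """Approximate the integral of g(x)*exp(i*w*f(x)) on [a,b] by Levin
    collocation: find p(x) = sum of alpha_m * x^m for m < k satisfying
    p'(x_j) + i*w*f'(x_j)*p(x_j) = g(x_j) at k equally spaced points,
    then return H(b) - H(a) with H(x) = exp(i*w*f(x))*p(x)."""
    pts = [a + (b - a) * j / (k - 1) for j in range(k)]
    A = [[(m * x ** (m - 1) if m > 0 else 0.0) + 1j * w * fp(x) * x ** m
          for m in range(k)] for x in pts]
    alpha = solve_complex(A, [complex(g(x)) for x in pts])
    def H(x):
        return cmath.exp(1j * w * f(x)) * sum(al * x ** m
                                              for m, al in enumerate(alpha))
    return H(b) - H(a)
```

Applied to the running example as {\tt levin\_int(math.sinh, math.cosh, math.cos, -1.0, 1.0, 1000.0, 8)}, it should behave much like the Maxima program above, though the monomial basis becomes increasingly ill-conditioned as $k$ grows.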
This program is only a single example, and building a more-or-less
automatic program would require attempting some measurement of likely
error, measuring effort, increasing precision, and for that matter,
checking that $f$ and $g$ are sufficiently smooth and that the formulation makes sense.
We have experimented with increasing the precision (above machine
double-floats), with somewhat inconclusive results. In particular an
interesting alternative to the numerical operation of the above
program is to allow for the complete answer to be generated
(symbolically including expressions like $\sinh(1/30)$) and then
proceed to simplify that [large] expression, and/or evaluate it using
ever-higher numerical precision to see if that affects the
result\footnote{Methods for the exact solution of a linear system with
symbolic coefficients are quite different from numerical methods,
and can result in huge solution expressions.}. Other variations
include using points and a set of orthogonal functions that provide
a better fit with fewer terms, for example Gauss-Lobatto points and
Chebyshev polynomials.
%%% i think, from mathematica, the answer may
Incidentally, we believe an accurate value is about 1.6920643690671596e-4.
\section{Extension to other functions}
Let us sketch the basic idea of the extension to other functions, as shown by Levin \cite{levin-96}, where the exponential can be replaced by an element of a vector of oscillators ${\bf W}$
which satisfies the required form ${\bf W}'={\bf A} \cdot {\bf W}$.
That paper solves for
(composed) functions for collocation\footnote{ On the computer algebra
front, Levin's 1996 paper expands on this and provides an example
computed by Mathematica, but other than setting up the collocation
matrix more or less conveniently, no symbolic features are used: the
example is primarily using Mathematica to access a library routine for
numerical solution of a set of linear equations. Moylan makes all this
work more automatically \cite{LevinIntegrate}.}. Extensions to this
idea include matching derivatives as well as values at grid
points\cite{iserles04}.
As a specific example, take the Bessel functions of the first kind ($J$), which satisfy the differential equation
$$\left(
\begin{array}{c}
J_0(x)' \\
J_1(x)'
\end{array}
\right)
=
{\bf A}
\left(
\begin{array}{c}
J_0(x) \\
J_1(x)
\end{array}
\right)
$$
where
$${\bf A} = {\left(
\begin{array}{cc}
0 & -1 \\
1 & -\frac{1}{x}
\end{array}
\right)}.
$$
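These recurrence relations are easy to confirm numerically. A Python sketch (our own check, not from Levin's paper), computing $J_n$ via the standard integral representation $J_n(x)=\frac{1}{\pi}\int_0^\pi \cos(n\tau - x\sin\tau)\,d\tau$ and differentiating by central differences:

```python
import math

def bessel_j(n, x, m=2000):
    """J_n(x) for integer n, via the integral representation
    (1/pi) * integral_0^pi cos(n*t - x*sin(t)) dt, by the trapezoid rule
    (which converges very rapidly for this smooth, evenly extendable integrand)."""
    h = math.pi / m
    s = 0.5 * (math.cos(0.0) + math.cos(n * math.pi - x * math.sin(math.pi)))
    for i in range(1, m):
        t = i * h
        s += math.cos(n * t - x * math.sin(t))
    return s * h / math.pi

# central differences confirm the entries of the 2x2 matrix A above:
#   J0'(x) = -J1(x)   and   J1'(x) = J0(x) - J1(x)/x
```

The same finite-difference check extends to the product entries of the larger matrix below.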
The product of any two oscillators satisfying the criterion, with an invertible
matrix ${\bf A}$, also satisfies the criterion. For example, to add products of $\sin$, $\cos$, $J_0$ and $J_1$ to the possible oscillators, consider the matrix
$$\left(
\begin{array}{cccccccc}
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
-1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & -\frac{1}{x} & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & -1 \\
0 & 0 & 0 & 0 & 0 & -\frac{1}{x} & 1 & -1 \\
0 & 0 & 0 & 0 & -1 & -1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 1 & 0 & -\frac{1}{x}
\end{array}
\right)$$
corresponding to the vector ${\bf W}^T =(\sin(x),\cos(x), J_0(x), J_1(x), \sin(x) J_0(x), \cos(x) J_1(x), \cos(x) J_0(x), \sin(x) J_1(x)) $.
We reduce the integration problem to finding the (now vector) solutions for $p$ of the
differential equation $${\bf p}'+ {\bf A} {\bf p} = {\bf g}.$$
We do this via collocation: by choosing a set of points, say $\{c_j\}$, and a proposed
expansion of ${\bf p}$ in some orthogonal set (e.g. powers of $x$, or Chebyshev polynomials of degree $r$ or less), evaluating and solving
$${\bf p}'(c_j)+{\bf A}(c_j)\, {\bf p}(c_j)={\bf g}(c_j).$$
Again, finding ${\bf p}$ allows us to integrate from $a$ to $b$ by computing
${\bf p}(b)\cdot{\bf W}(b)-{\bf p}(a)\cdot{\bf W}(a)$.
How best to arrange this computation demands attention to many details; for example, subdivision of the range from $a$ to $b$ may be advisable: fitting a sequence of low-degree curves may be superior to trying to make a single polynomial fit. Moylan's program \cite{LevinIntegrate} addresses these issues.
%***
\section{CAS quadrature and error estimation}
It is easily demonstrated that any deterministic mechanism for
quadrature that is based on sampling can be stymied by an adversarial
integrand-evaluation procedure to produce an arbitrarily incorrect
result, and an under-estimated error.
A CAS may have a different prospect in this regard, since it is plausible
that the integrand is presented as a mathematical formula which can be
analyzed for various characteristics including continuity,
differentiability, and presence of poles and singularities. An obvious upper and
lower bound can be obtained by even such a simple mechanism as the
rectangular rule: if $M=\max_{a \le x \le b}f(x)$ and
$m=\min_{a \le x \le b}f(x)$,
then $|b-a|\,m \le \int_a^b f(x)\, dx \le |b-a|\,M$. For some of
the techniques here, $|b-a|$ can be made small by subdivision, or
$f(x)$ can be made slowly-varying.
It is also possible, in many cases, to bound the truncation error, that
is, the error incurred in replacing a mathematical function by an
approximation: say, replacing a function by its Taylor series, or
Chebyshev series.
Finally, there will usually be a contribution to the error in a reported
result caused by the finite-precision arithmetic used for evaluation.
This can be alleviated in a CAS in several ways.
\begin{itemize}
\item The expressions being evaluated can be mechanically rearranged
to minimize cancellation.
\item The expression, or parts of it, can sometimes be evaluated
{\em exactly} using rational arithmetic.
\item The expression can be evaluated in arbitrary-precision
arithmetic. That is, the same expression can be used but with a
working precision of any specified number of bits, and thus some
computational deficiency at precision $n$ can be overwhelmed by
recomputation at precision $2n$ or higher.
\end{itemize}
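The third item can be illustrated on the cancellation-prone Filon coefficient $\alpha(\theta)$ discussed earlier. A Python sketch (ours; the standard-library {\tt decimal} module stands in for a CAS bigfloat facility, and the sine is computed by its Taylor series, adequate for small arguments):

```python
from decimal import Decimal, getcontext

def dec_sin(x, prec):
    """sin(x) by Taylor series in Decimal arithmetic (fine for small |x|)."""
    getcontext().prec = prec
    x = Decimal(x)
    term, total, n = x, x, 1
    while abs(term) > Decimal(10) ** (-prec - 5):
        term *= -x * x / ((2 * n) * (2 * n + 1))
        total += term
        n += 1
    return total

def filon_alpha(theta, prec):
    """Filon's alpha(theta) = 1/theta + sin(2 theta)/(2 theta^2)
    - 2 sin(theta)^2/theta^3, evaluated at a chosen working precision."""
    getcontext().prec = prec
    t = Decimal(theta)
    return 1 / t + dec_sin(2 * t, prec) / (2 * t ** 2) \
        - 2 * dec_sin(t, prec) ** 2 / t ** 3
```

At $\theta=10^{-3}$ the three terms are each near $10^3$ while $\alpha \approx 2\theta^3/45 \approx 4.4\times 10^{-11}$, so a 16-digit evaluation loses most of the answer; rerunning at 40 digits recovers it, which is exactly the precision-doubling tactic suggested above.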
Overall,
we have found that in studying particular formulations of quadrature,
we can more easily experiment in this framework:
given an integral, how best can we approximate it with one of these
techniques? Which improves the value, a higher order procedure (more
terms, smaller subdivisions, etc.) or more precise arithmetic? While
a standard numerical language or system can generally allow for more
terms, a CAS provides easy access to other arithmetics, in
particular arbitrary-precision experiments, or perhaps interval
arithmetic. These techniques can be honed on chosen examples
whose exact values are known.
\section{Conclusion}
We have added a few nuances to previous discussions of numerical
integration in a symbolic context (\cite{Fateman81, Geddes86}) which
were primarily concerned with dealing with singularities in the
integrand, stationary points in the oscillatory component,
or with infinite integrals. We have by no means completely
surveyed the literature on the numerical treatment of highly
oscillatory problems, but have simply reviewed the kinds of
contributions that rather simple computer algebra programs can make
toward advancing some of the methods. We hope these demonstrations are
rather simple to understand. We have not, in this brief paper,
attempted to provide a ``one-stop'' complete automated solution, nor
have we included necessary code in our examples to make estimates of
error. We have not covered a number of other approaches or variants
(in particular, variations on the method of stationary phase) that
seem either less accurate, more expensive, or more difficult to
program than the methods here.
Version 8.0 of Mathematica, released after the first draft of this
paper, has an expanded facility, enlarging its already ambitious
numerical integration suite of programs. It includes the Levin-like
methods contributed by Andrew Moylan \cite{LevinIntegrate}. The
resulting system is quite complex, although in our experimentation it
is capable of good results with many ``automatic'' settings. Even so,
a better understanding of how these ideas work may be fruitful in the
exploration of techniques for particular kinds of oscillatory
integrals, to increase efficiency and accuracy.
{\begin{thebibliography}{99}
\bibitem{chase}
S.M. Chase and L.D. Fosdick,
``An algorithm for Filon Quadrature,''
{\em CACM 12 no 8} August 1969 453--457.
\bibitem{evans} G.A. Evans, An alternative method for irregular oscillatory integrals over a finite range, {\em Internat. J. Comput. Math. 53} (1994) 185-193.
\bibitem{Fateman81} R.J.~Fateman, ``Computer Algebra and Numerical Integration,''
Proc. SYMSAC'81 (ISSAC), August, 1981, 228--232.
\bibitem{filon28}
L.N.G. Filon,
``On a quadrature formula for trigonometric integrals,''
{\em Proc. Roy. Soc. Edinburgh 49} (1928--29) 38.
\bibitem{Geddes86} K.O.~Geddes,
``Numerical Integration in a Symbolic Context''
Proc. SYMSAC-86, (ISSAC) July, 1986, 185--191.
\bibitem{iserles05}
Arieh Iserles and
Syvert P. Norsett,
``Efficient quadrature of highly oscillatory integrals using derivatives,''
{\em Proc. R. Soc. A} 461, no. 2057 (May 2005), 1383--1399.
\bibitem{iserles04}
A. Iserles and S. P. Norsett,
``On Quadrature Methods for Highly Oscillatory Integrals and Their Implementation,''
{\em BIT Numerical Mathematics} 44, no. 4, 755--772. DOI: 10.1007/s10543-004-5243-3.
\bibitem{levin-82} David Levin. ``Procedures for computing one- and two-dimensional integrals of functions with rapid irregular oscillations,''
{\em Math. Comput. 38} (1982), 531--538.
\bibitem{levin-96} David Levin,
``Fast integration of rapidly oscillatory functions,''
{\em J. Computational and Applied Math. 67 (1996)}, 95---101.
\bibitem{LevinIntegrate} David Levin,{\tt http://sites.google.com/site/andrewjmoylan/levinintegrate}. Also see arxiv 0710.3140v1.pdf.
\bibitem{HOP} Newton Institute Seminars on Highly Oscillatory Problems,
{\tt http://www.newton.ac.uk/webseminars/pg+ws/2007/hop/}
\end{thebibliography}
}
\end{document}
\begin{verbatim}
some examples
picardfode(y,y,x,1,10,0);  /* exponential */
picardfode(sqrt(1/(x^2+1))*y,y,x,1,10,0);
\end{verbatim}
OLD
We need to choose a starting point; the choice hardly matters so long
as the point is not a zero of $f'$, so by hypothesis we may take $(b+a)/2$.
As an example, consider the differential equation
$r'(x) = \sqrt{r(x)^2+x^2+4}, ~~~r(0)=0.$
Solving it as a Taylor series to 8th order by \verb|newtonfode(sqrt(x^2+y^2+4),y,x,0,8);|
yields
$$ y~=~2\,x+{{5\,x^3}\over{12}}+{{x^5}\over{192}}+{{149\,x^7}\over{32256}}$$
Substitution into
$${{d}\over{dx}}\,y-\sqrt{y^2+x^2+4}$$
yields $1683 x^8/114688$, making us wonder if it is right:
back-substitution shows the result is zero to order 7, which seems pretty close.
(Indeed, it is easily shown by computing more terms that the displayed value for $r$ is
actually correct to order 8).
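Since the series coefficients are exact rationals, the order-8 claim can be
checked without floating point: if $y'-\sqrt{y^2+x^2+4}=c\,x^8+\cdots$ then,
multiplying by $y'+\sqrt{y^2+x^2+4}\rightarrow 4$ at $x=0$, we get
$(y')^2-(y^2+x^2+4)=4c\,x^8+\cdots$. A minimal Python sketch with exact
rational arithmetic (the helper names here are ad hoc, not part of the
{\tt newtonfode} code):

```python
from fractions import Fraction as F

def pmul(p, q):
    # multiply two polynomials given as coefficient lists (x^0 first)
    r = [F(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

# y = 2x + (5/12)x^3 + (1/192)x^5 + (149/32256)x^7, as computed by newtonfode
y  = [F(0), F(2), F(0), F(5, 12), F(0), F(1, 192), F(0), F(149, 32256)]
dy = [F(i) * c for i, c in enumerate(y)][1:]          # y'

# P = (y')^2 - (y^2 + x^2 + 4); if y' - sqrt(y^2+x^2+4) = c*x^8 + ...,
# then P = c*x^8*(y' + sqrt(...)) = 4c*x^8 + higher order, since y'(0) = 2.
P = [F(0)] * 16
for i, c in enumerate(pmul(dy, dy)):
    P[i] += c
for i, c in enumerate(pmul(y, y)):
    P[i] -= c
P[2] -= 1                 # subtract x^2
P[0] -= 4                 # subtract 4

low_order = P[:8]         # should vanish: series correct through x^7
c8 = P[8] / 4             # leading coefficient c of the residual
```

The low-order coefficients vanish exactly and $P_8/4$ reproduces $1683/114688$.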
Given that we can compute a value for $r(x)$, we can compute $H(x)=e^{i \omega f(x)}\, r(x)$ and are done.
We can even give a formula for $H(x,\omega)$ showing how the error varies, as a function of $\omega$,
according to the last term in the series. The formula for $H$ may be used, subject
to sufficient accuracy, anywhere within the radius of convergence of the series. The
functional dependency on $\omega$ is explicit, and
(arguably?) we do not have to write out graphs or tables
to illustrate its effectiveness.
There are alternative formulations for solving an ODE semi-symbolically, at
least one of which, using Chebyshev series, seems particularly appropriate. In
this case a series solution on a specific range, say $[-1, 1]$, would not suffer from
increasing error away from the point of expansion (zero), but would instead present
nearly optimal approximations for a given degree within a pre-determined range or
at its edges (cite chebfun site).
\subsection{Example}
Consider
$$ I_1 ~=~ \int_{-1}^1 \frac{1}{1+x^2} \exp(i \omega (x^3+4 x )) dx.$$
For this example $f(x) = x^3+4 x$ and hence $f'(x) = 3 x^2+ 4$.
Also, $g(x) = \frac{1}{1+x^2}.$
The differential equation is $r'(x)= g(x)- i \omega f'(x)\, r(x)$ and can be solved by calling
{\tt newtonfode} as
\verb|newtonfode(1/(x^2+1)-(%i*w*(3*x^2+4))*r, r,x,1,15)| where we have chosen an
expansion to power 15 in $x$. Whether this is an appropriate order may be judged by
dropping one or more terms to see if the answer changes significantly.
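The same Levin-type computation can also be sketched purely numerically, by
collocating a degree-$k$ polynomial $p$ for $r$ at Chebyshev points and
returning $H(b)-H(a)$ with $H=e^{i\omega f}p$. The Python sketch below is a
stand-alone illustration (the function names, $\omega=50$, degree 12, and the
comparison tolerance are arbitrary choices, not the paper's code):

```python
import cmath, math

def solve(A, b):
    # Gaussian elimination with partial pivoting, complex entries
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            t = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= t * M[col][c]
    x = [0j] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def levin(f, fp, g, a, b, w, k):
    # collocate p'(x) + i*w*f'(x)*p(x) = g(x) at k+1 Chebyshev points,
    # p a degree-k polynomial; then I ~ H(b) - H(a) with H = exp(i*w*f)*p
    xs = [(a + b) / 2 + (b - a) / 2 * math.cos(math.pi * j / k)
          for j in range(k + 1)]
    A = [[(m * x ** (m - 1) if m > 0 else 0) + 1j * w * fp(x) * x ** m
          for m in range(k + 1)] for x in xs]
    c = solve(A, [complex(g(x)) for x in xs])
    H = lambda x: cmath.exp(1j * w * f(x)) * sum(cm * x ** m
                                                 for m, cm in enumerate(c))
    return H(b) - H(a)

def simpson(F, a, b, n):          # composite Simpson rule, n even
    h = (b - a) / n
    s = F(a) + F(b) + sum((4 if i % 2 else 2) * F(a + i * h)
                          for i in range(1, n))
    return s * h / 3

w = 50.0
I_levin  = levin(lambda x: x**3 + 4*x, lambda x: 3*x**2 + 4,
                 lambda x: 1/(1 + x**2), -1.0, 1.0, w, 12)
I_direct = simpson(lambda x: cmath.exp(1j*w*(x**3 + 4*x)) / (1 + x**2),
                   -1.0, 1.0, 40000)
```

With these (unoptimized) choices the collocation value agrees with brute-force
quadrature to well below the stated tolerance; since $f$ is odd and $g$ even,
the imaginary part of $I_1$ should also come out negligibly small.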
\begin{verbatim}
Notes:
MUST account for Newton seminar activity on this topic, see
http://www.comlab.ox.ac.uk/sheehan.olver/papers/storysofar.pdf
http://www.newton.ac.uk/programmes/HOP/seminars/070509001.pdf
and the detailed Mathematica program from Andrew Moylan's 2008 thesis
Bibliography etc.
\end{verbatim}
Iserles/Norsett asymptotic series.
sig[0](f,g,x):=f$
sig[k](f,g,x):= diff(sig[k-1](f,g,x)/diff(g,x),x)$  /* first attempt; redone below */
kill(sig,dg,h,IS,f,g);
sig[0](f,x):=f;
f:1+cosh(x)$
g:x^3+x^2+4*x+1$ /* g' must be non-zero at 0 and 1 */
dg:diff(g,x);
sig[k](f,x):=diff( sig[k-1](f,x)/dg,x);
h(a,x):=subst(a,x,exp(%i*w*g)/dg*sig[m-1](f,x))$
S[m]:=-(1/(-%i*w)^m*(h(1,x)-h(0,x)))$
IS[L]:=sum(S[m],m,1,L)$
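The first two terms of this expansion can be checked numerically. Below is a
stand-alone Python sketch for the same amplitude $f$ and phase $g$ as in the
Maxima session above, with $\sigma_1=(\sigma_0/g')'$ differentiated by hand
($\omega=200$ and the tolerance are arbitrary choices):

```python
import cmath, math

w   = 200.0
f   = lambda x: 1 + math.cosh(x)        # amplitude (sig[0])
fp  = lambda x: math.sinh(x)
g   = lambda x: x**3 + x**2 + 4*x + 1   # phase; g' nonzero on [0,1]
gp  = lambda x: 3*x**2 + 2*x + 4
gpp = lambda x: 6*x + 2

sig0 = f
sig1 = lambda x: (fp(x)*gp(x) - f(x)*gpp(x)) / gp(x)**2   # (sig0/g')'

def S(m, sig):
    # S[m] = -(1/(-i*w)^m)*(h(1)-h(0)), h = exp(i*w*g)/g' * sig[m-1]
    h = lambda x: cmath.exp(1j*w*g(x)) / gp(x) * sig(x)
    return -(h(1.0) - h(0.0)) / (-1j*w)**m

I_asym = S(1, sig0) + S(2, sig1)        # two-term asymptotic sum IS[2]

def simpson(F, a, b, n):                # composite Simpson rule, n even
    h = (b - a) / n
    s = F(a) + F(b) + sum((4 if i % 2 else 2)*F(a + i*h) for i in range(1, n))
    return s * h / 3

I_direct = simpson(lambda x: cmath.exp(1j*w*g(x))*f(x), 0.0, 1.0, 40000)
```

The omitted terms are $O(\omega^{-3})$, so at $\omega=200$ the two-term sum
already agrees with brute-force quadrature to many digits.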
picardfode(F,y,x,a0,n,pt):=
block([s:a0,deg:0],
while deg <= n do
s:a0+integrate( taylor(subst(y=s,F),x,pt,deg:deg+1),x),
return(taylor(s,x,pt,n)))$
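The same Picard iteration is easy to mimic outside the CAS for the first test
case above, $y'=y,\ y(0)=1$; a minimal Python sketch on truncated Taylor
coefficient lists, with exact rational coefficients (the function name is ad hoc):

```python
from fractions import Fraction

def picard_exp(n):
    # Picard iteration y_{k+1} = 1 + integrate(y_k) for y' = y, y(0) = 1,
    # on truncated Taylor coefficient lists [c0, c1, ...] (ci = coeff of x^i)
    s = [Fraction(1)]
    for _ in range(n):
        # termwise integration, constant of integration 0
        integ = [Fraction(0)] + [c / (i + 1) for i, c in enumerate(s)]
        s = ([Fraction(1)] + integ[1:])[:n + 1]   # add 1, truncate to degree n
    return s
```

After $n$ sweeps the coefficients are $1/i!$ through degree $n$, i.e. the
exponential series, matching the {\tt picardfode(y,y,x,1,10,0)} example.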
%..................
show w out front ..
r' + i*w*f'*r = G
assume r(x) = a1(x)*exp(-i*w*f(x)).
then r'(x) = (a1'(x) - a1(x)*i*w*f'(x))*exp(-i*w*f(x)).
r' + i*w*f'*r = G is then
(a1'(x) - a1(x)*i*w*f'(x))*exp(-i*w*f(x))
 + i*w*f'(x)*a1(x)*exp(-i*w*f(x))
 = a1'(x)*exp(-i*w*f(x)) = G,
so a1'(x) = G*exp(i*w*f(x)),
which tells us nothing since that is the problem we started with;
of course, not surprising.
what if we let s= exp(-i*w*f(x)), and express r(x) as
a0 +a1*s +a2*s^2 + ... which is a Fourier expansion.
r' is then a1*s' + 2*a2*s*s' + ... = s'*(a1+2*a2*s+3*a3*s^2+ ...)
s' is -i*w*f'*s .
r'+i*w*f'*r = G, with i*w*f' = -s'/s, is then
s'*(a1+2*a2*s+3*a3*s^2+ ...) - (s'/s)*(a0+a1*s+a2*s^2 + ...) = G
s'*(-a0/s + (a1-a1) + (2*a2-a2)*s + (3*a3-a3)*s^2 + ...) = G
-a0/s + a2*s + 2*a3*s^2 + ... = G/s'
if we can expand G/s' in powers of s, do we have anything useful?
it would look sort of like a series in sin(k*w*x), k=1,2,...
presumably the coefficients would show the spectrum of G.
Not helpful? Dunno.
...................
consider integrating factor in general..
y'+P(x)*y=Q(x)
M(x) is an integrating factor ..
M*y' + MPy =MQ
we want lhs to be
(My)' = MQ
to integrate as ..
My= int(MQ,x)
so
y= (1/M)*int(MQ,x)
what is M?
M' = PM or
M'/M =P
log(M)=int(P) or
M= exp(int(P))
put i*w in there..
M=exp(int(i*w*P,x))
look at int(MQ,x)...
integrate by parts.
du=Q/(i*w*P') dx
v=M=exp(int(i*w*P,x))
u =1/(i*w)* int(Q/P',x)
dv=i*w*P'*exp(int(... yeech.
uv-int(u dv) ...
uv is int(Q,x)*M- i*w* ... no. try the other way
du=M dx
u= (1/(i*w))... no, no good there..
go back to iserles
r'+i*w*f'*r=g, asymptotically large w
r= int(g-i*w*f'*r,x)= int(g,x) - int(i*w*f'*r,x).
assume r=exp(i*w*h).
then r'=i*w*h'*exp(i*w*h)=i*w*h'*r.
$$\int e^{i \omega f(x)}g(x)dx~ = ~\int u~ dv~=~ uv-\int v~ du.$$
Let $u=g/(i \omega f')$ and $dv=i \omega f' e^{i \omega f}\,dx$. Then $v=e^{i \omega f}$, and
$$du={{f'g'-gf''}\over{i \omega(f')^2}}\,dx,$$
so
$$uv = {{1}\over{i\omega}}\, e^{i\omega f}\,{{g}\over{f'}}$$
($f'$ had better not be zero), and
$$\int v~ du = {{1}\over{i\omega}}\int e^{i\omega f}\,{{f'g'-gf''}\over{(f')^2}}\,dx,$$
which looks smaller as $\omega$ increases.
..............
OI(f,g,a,b,omega,k):= block([varlist,p,pointlist,eqs,sol,ratprint:false, keepfloat:true],
 varlist: makelist(alpha[i],i,0,k), /* start at 0 or 1 */
 p: varlist . makelist(x^i,i,0,k),
 pointlist: makelist(float(a+(b-a)*i/k),i,0,k),
 define(eq(x), diff(p,x)+%i*omega*diff(f,x)*p-g),
 eqs: ev(map(eq,pointlist),numer),
 sol: linsolve(eqs,varlist), ldisplay(sol),
 define(H(x), exp(%i*omega*f)*ev(p,sol)),
 rectform(H(b)-H(a)))$
OI(cos(x),x,0,1,50,9);
transformations
consider integrate(exp(%i*w*f)*g,x,a,b) by repeated integration by parts:

ibpoilims(f,g,x,w,L,a,b):= /* integrate from a to b, L terms */
block([keepfloat:true, ratprint:false],
 at(ibpoilims1(taylor(f,x,b,L),taylor(g,x,b,L),x,w,L,0),x=b)
 -at(ibpoilims1(taylor(f,x,a,L),taylor(g,x,a,L),x,w,L,0),x=a))$

ibpoilims1(f,g,x,w,L,c):= if c>=L then 0 else
block([df:diff(f,x,1)],
 exp(%i*w*f)*g/(%i*w*df)
 - 1/(%i*w)*ibpoilims1(f,(df*diff(g,x,1)-g*diff(df,x,1))/df^2,x,w,L,c+1))$

/* our example */
ibpoilims(sinh(x),cos(x),x,1000.0,3.0,-1.0,1.0); /* integrate from -1 to 1 */
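The {\tt ibpoilims} recursion can be spot-checked without a CAS. Below is a
stand-alone Python sketch of two terms of the same expansion for the example
$f=\sinh x$, $g=\cos x$, with $g_1=(g'f'-gf'')/(f')^2$ written out by hand
($\omega=1000$ matches the Maxima call; the names and tolerance are ad hoc):

```python
import cmath, math

w = 1000.0
f,  fp,  fpp = math.sinh, math.cosh, math.sinh    # f, f', f''
g,  gp       = math.cos, (lambda x: -math.sin(x)) # g, g'

def T(gg):
    # boundary term [exp(i*w*f)*gg/(i*w*f')] evaluated between -1 and 1
    e = lambda x: cmath.exp(1j*w*f(x)) * gg(x) / (1j*w*fp(x))
    return e(1.0) - e(-1.0)

# one integration by parts replaces g by g1 = (g'f' - g f'')/f'^2
g1 = lambda x: (gp(x)*fp(x) - g(x)*fpp(x)) / fp(x)**2

I_asym = T(g) - T(g1)/(1j*w)     # two terms of the expansion

def simpson(F, a, b, n):         # composite Simpson rule, n even
    h = (b - a)/n
    s = F(a) + F(b) + sum((4 if i % 2 else 2)*F(a + i*h) for i in range(1, n))
    return s*h/3

I_direct = simpson(lambda x: cmath.exp(1j*w*math.sinh(x))*math.cos(x),
                   -1.0, 1.0, 100000)
```

At $\omega=1000$ the first omitted term is $O(\omega^{-3})$, so even two terms
of the series agree with brute-force quadrature to many digits.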
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
let's try Levin for Bessel functions.
integrate(bessel_j(1,200*x) *(x^2+1),x,0,1)
which is 1/200*(1-bessel_j(0,200)+bessel_j(2,200))
exactly, or about
0.005151659172396532004783694962224718046978102435
consider ibpoi,
consider a version of LevinInt
Let bessel_j(1,x) == J1(x).
diff(J1(x),x) is 1/2*(J0(x)-J2(x)), by the way.
H= integrate(J1(200*x)*(x^2+1),x,0,1)
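The closed form comes from $\frac{d}{dx}[-J_0(ax)]=aJ_1(ax)$ and
$\frac{d}{dx}[x^2J_2(ax)]=ax^2J_1(ax)$, so
$\int_0^1 J_1(ax)(x^2+1)\,dx = (1-J_0(a)+J_2(a))/a$ for any $a$. A
stand-alone Python sketch checks the identity at the (arbitrary) smaller
value $a=20$, where brute-force quadrature is cheap, computing $J_n$ from
the integral representation $J_n(z)=\frac{1}{\pi}\int_0^\pi\cos(n\theta-z\sin\theta)\,d\theta$;
the case $a=200$ quoted above is the same identity:

```python
import math

def simpson(F, a, b, n):      # composite Simpson rule, n even
    h = (b - a)/n
    s = F(a) + F(b) + sum((4 if i % 2 else 2)*F(a + i*h) for i in range(1, n))
    return s*h/3

def J(n, z, m=1000):
    # Bessel J_n(z) from the standard integral representation
    return simpson(lambda t: math.cos(n*t - z*math.sin(t)),
                   0.0, math.pi, m) / math.pi

a = 20.0                      # the text's example uses a = 200
lhs = simpson(lambda x: J(1, a*x)*(x*x + 1), 0.0, 1.0, 800)
rhs = (1 - J(0, a) + J(2, a))/a
```

The two sides agree to far better than the loose tolerance asserted here.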