%latex
\documentclass{article}
\usepackage{fullpage,amsmath}
\title{Fun with Filon Quadrature ---
a Computer Algebra Perspective}
\author {Richard Fateman\\
Computer Science\\
University of California\\
Berkeley, CA, USA}
%April 18, 2007
\begin{document}
\maketitle
\begin{abstract}
Filon quadrature, designed specifically for a common class of oscillatory integrals,
is easily implemented in a computer algebra system
(CAS) using exact or approximate arithmetic. Although quadrature is
almost always used in the context of approximation, we see that
valuable exact properties are exhibited by running an exact algorithm on
exact {\em symbolic} inputs. This experimental approach to the
mathematics allows us to confirm that the implementation of the algorithm has
the expected mathematical correctness properties, and is therefore more
likely to be, itself, a correct implementation.
\end{abstract}
\section{Introduction}
Filon\cite{filon28,chase} quadrature is supposed to be exact for
integrands of the form $f(x)\sin(m x)$ where $f(x)$ is a polynomial of
degree 2 or less. One parameter, namely the number of panels used for
the formula, should be irrelevant if the formula is exact. Note that conventional
numerical quadrature programs based on sampling typically run into substantial
difficulties with highly oscillatory integrands.
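To illustrate that difficulty concretely (a sketch added here, not part of the original code; Python is used for brevity), composite Simpson's rule with a mesh too coarse to resolve $\sin(100x)$ gives an unreliable answer, while a much finer mesh is required to converge:

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2*j - 1)*h) for j in range(1, n//2 + 1))
    s += 2 * sum(f(a + 2*j*h) for j in range(1, n//2))
    return h * s / 3

f = lambda x: (3*x*x + 4) * math.sin(100*x)

# Exact value from the antiderivative of (3x^2+4)*sin(100x) on [0,1].
a = 100.0
exact = (3*(2*math.sin(a)/a**2 - math.cos(a)/a + 2*math.cos(a)/a**3 - 2/a**3)
         + 4*(1 - math.cos(a))/a)

coarse = simpson(f, 0.0, 1.0, 50)      # about 3 samples per period: unreliable
fine = simpson(f, 0.0, 1.0, 20000)     # many samples per period: accurate
```

With only about three samples per oscillation the coarse result is far from the true value; the rule must resolve every oscillation before its usual convergence behavior applies.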
\section{An exact formula}
Here is an algorithm representing an exact Filon formula for the $\sin$ product integral
from 0 to 1, using the syntax of the free Maxima CAS. (To simplify the presentation
we ignore variations that involve $\cos$ and use arbitrary limits.)
Our coding follows the documentation for ACM Algorithm 353 \cite{chase}, which should
be consulted by a reader interested in fuller specific background.
Our code is vastly simpler and in some ways more general
than the ACM FORTRAN code. (The FORTRAN has various limitations because of its fixed precision,
and is further complicated by the inclusion of code related to bounding error.)
\begin{verbatim}
filonse2(f,m,p) :=
/* Filon quadrature, Sin, Exact, f(x)*sin(m*x), from 0 to 1, p panels,*/
block([h : 1/(2*p),k : m,theta,s2p,s2pm1,alf,bet,gam,f1],
theta : k*h,
s2p: sum(apply(f,[2*h*i])*sin(2*k*h*i),i,0,p) - (f1:apply(f,[1]))*sin(k)/2,
s2pm1: sum(apply(f,[h*(2*i-1)]) *sin(k*h*(2*i-1)),i,1,p),
alf : 1/theta+sin(2*theta)/2/theta^2 -2*sin(theta)^2/theta^3,
bet : 2*((1+cos(theta)^2)/theta^2 -sin(2*theta)/theta^3),
gam : 4*(sin(theta)/theta^3-cos(theta)/theta^2),
h*(alf*(apply(f,[0])-f1*cos(k))+bet*s2p+gam*s2pm1))
\end{verbatim}
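For readers without Maxima at hand, the same algorithm can be transliterated into ordinary floating point (a Python sketch added here, not part of the ACM code). Run on a quadratic integrand, it should agree with the exact value, independent of the panel count $p$, up to rounding:

```python
import math

def filon_sin(f, m, p):
    """Filon quadrature for the integral of f(x)*sin(m*x) over [0,1], p panels.
    A direct floating-point transliteration of the Maxima filonse2 above."""
    h = 1.0 / (2 * p)
    theta = m * h
    # Sum over even-index nodes, with the endpoint contribution halved.
    s2p = (sum(f(2*h*i) * math.sin(2*m*h*i) for i in range(p + 1))
           - f(1.0) * math.sin(m) / 2)
    # Sum over odd-index (midpoint) nodes.
    s2pm1 = sum(f(h*(2*i - 1)) * math.sin(m*h*(2*i - 1)) for i in range(1, p + 1))
    alf = 1/theta + math.sin(2*theta)/(2*theta**2) - 2*math.sin(theta)**2/theta**3
    bet = 2*((1 + math.cos(theta)**2)/theta**2 - math.sin(2*theta)/theta**3)
    gam = 4*(math.sin(theta)/theta**3 - math.cos(theta)/theta**2)
    return h * (alf*(f(0.0) - f(1.0)*math.cos(m)) + bet*s2p + gam*s2pm1)
```

For example, `filon_sin(lambda x: 3*x*x + 4, 100, 3)` and `filon_sin(lambda x: 3*x*x + 4, 100, 10)` agree to roughly machine precision, a floating-point echo of the panel-independence argument made symbolically below.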
Other versions of this algorithm can be found elsewhere~\cite{fateosc07}; they gain speed
by insisting relatively early in the calculation that all data be given as specific-precision
floating-point numbers. Here we proceed ``exactly'' with this simple version as far as possible.
Say we wish to integrate $(3x^2+4)\sin(100 x)$ from 0 to 1. An appropriate
use of the program would be {\tt filonse2(lambda([x], 3*x\^{}2+4), 100, 3)}, where
the final parameter 3 specifies how many panels to use for subdividing
the range of integration. (The ``{\tt lambda}'' notation here denotes a function
of one local variable {\tt x} without making up a name for the function.)
Since the Filon formula should be exact
for any quadratic function, {\tt filonse2(lambda([x], q*x\^{}2+r*x+s), 100, 3)}
should be exact.
Invoking this command produces a large expression which can, by using a few
additional commands\footnote{The commands {\tt ratsimp}, {\tt trigsimp}, and
{\tt trigreduce}.}, be reduced to
% -((5000*COS(100)-5000)*S+(5000*COS(100)-50*SIN(100))*R+(-100*SIN(100)+4999*COS(100)+1)*Q)/500000
\begin{equation}
-{
\frac{\left(5000\,\cos 100-5000\right)\,s+\left(5000\,\cos 100-50\,%
\sin 100\right)\,r+\left(-100\,\sin 100+4999\,\cos 100+1\right)\,q%
}{500000}}.
\end{equation}
One simple way to show that this formula {\em is independent of the number of
panels} is to run the same command with (say) 10 instead of 3 as
the last parameter; it results in the same formula after
simplification. This shows that in certain cases the Filon formula
deduces exact symbolic definite integrals and could be used as a
(possibly much faster) alternative to the usual CAS program. A
conventional symbolic integration program would most likely find
the antiderivative, and then, after checking for singularities, would
apply the ``fundamental theorem of calculus'' by
substituting the bounds.
Converting the above formula to a more compact (although now approximate) result, we learn that
a formula for the integral is
% 0.00138*S-0.00867*R-0.00872*Q
$$ 0.00138\,s-0.00867\,r-0.00872\,q $$
\section{What else can be done with this algorithm?}
Running the same experiment with an arbitrary cubic function shows
that the formula depends on the number of panels, and thus this Filon
formula is not exact for higher degree polynomials.
How does the argument of the $\sin$ enter into the formula?
Consider \verb|filonse2(lambda([x],r*x+s),m,2)|, a command which asks to integrate
$(rx+s)\sin(m x)$ from 0 to 1 using 2 panels. The command results in a large expression,
but one which can, after a few simplification commands, be reduced to this:
% -((M*COS(M)-M)*S+(M*COS(M)-SIN(M))*R)/M^2
$$ -{\frac{\left(m\,\cos m-m\right)\,s+\left(m\,\cos m-\sin m\right)\,r%
}{m^{2}}} $$
Naturally, this formula is exact for any values of $r$, $s$, and
non-zero $m$\footnote{Considering $m=0$ we could compute the limit of
this expression and get the right answer, $0$; alternatively, looking at
the origin of the problem, a constant $0$ integrand results in a $0$ answer.}.
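As an independent sanity check (added here; the sample values $m=37$, $r=2$, $s=5$ are arbitrary), the closed form above can be compared against a brute-force fine-mesh quadrature:

```python
import math

def brute(f, a, b, n=200000):
    """Very fine composite trapezoid rule; adequate for a sanity check."""
    h = (b - a) / n
    return h * (sum(f(a + i*h) for i in range(1, n)) + (f(a) + f(b)) / 2)

m, r, s = 37.0, 2.0, 5.0

# The closed form derived symbolically above.
closed = -((m*math.cos(m) - m)*s + (m*math.cos(m) - math.sin(m))*r) / m**2

# Brute-force numeric value of integral of (r*x+s)*sin(m*x) on [0,1].
numeric = brute(lambda x: (r*x + s)*math.sin(m*x), 0.0, 1.0)
```

The two values agree to well within the trapezoid rule's discretization error.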
Could we also allow a symbolic ``number of panels''? In the
algorithm as stated, this is not feasible: we use $p$ as a limit in a summation,
and while such a sum can, at least in principle, be simplified to a closed
form in terms of an arbitrary $p$, that seems unlikely here.
We have, however, done so with other algorithms (in particular
a symbolic Simpson's rule), but those summation formulas are more daunting.
We can, however, leave parts of the formula even more symbolic. For example,
with $p=1$ (which actually means there are two panels), we can feed
the algorithm an arbitrary function $f$ and an arbitrary frequency $m$
and determine the points at which $f$, $\sin$, and $\cos$ are
evaluated: $f$ at 0, 0.5, and 1; $\sin$ and
$\cos$ at $m/2$ and $m$. After some manipulation of the resulting
expression, we can condense some of the pieces, eliminate some
redundant subexpressions, and express the formula as a simple program
(though too wide to be comfortably typeset on one line):
% ((3*F(1)-4*F(0.5)+F(0))*M*SIN(M)+(-F(1)*M^2+4*F(1)-8*F(0.5)+4*F(0))*COS(M)+F(0)*M^2-4*F(1)+8*F(0.5)-4*F(0))/M^3
\begin{verbatim}
( (3*f(1)-4*f(0.5)+f(0))*m * sin(m)
+(-f(1)*m^2+4*f(1)-8*f(0.5)+4*f(0)) * cos(m)
+f(0)*m^2-4*f(1)+8*f(0.5)-4*f(0))
/m^3.
\end{verbatim}
%$$ {{\left(3\,f\left(1\right)-4\,f\left(0.5\right)+f\left(0\right)%
% \right)\,m\,\sin m+\left(-f\left(1\right)\,m^{2}+4\,f\left(1\right)-%
% 8\,f\left(0.5\right)+4\,f\left(0\right)\right)\,\cos m+f\left(0%
% \right)\,m^{2}-4\,f\left(1\right)+8\,f\left(0.5\right)-4\,f\left(0%
% \right)}\over{m^{3}}} $$ %too wide.
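This condensed three-point formula can be verified numerically (a check added here): for a quadratic integrand it must reproduce the integral obtained from the antiderivative. The helper `exact_quadratic` is not from the paper; it simply encodes $\int_0^1 (qx^2+rx+s)\sin(mx)\,dx$ in closed form.

```python
import math

def filon_3pt(f, m):
    """The condensed p=1 Filon sine rule above: integral of f(x)*sin(m*x) on [0,1]."""
    return ((3*f(1) - 4*f(0.5) + f(0))*m*math.sin(m)
            + (-f(1)*m**2 + 4*f(1) - 8*f(0.5) + 4*f(0))*math.cos(m)
            + f(0)*m**2 - 4*f(1) + 8*f(0.5) - 4*f(0)) / m**3

def exact_quadratic(q, r, s, m):
    """Integral of (q*x^2 + r*x + s)*sin(m*x) on [0,1], via the antiderivative."""
    return (q*(2*m*math.sin(m) + (2 - m*m)*math.cos(m) - 2)/m**3
            + r*(math.sin(m)/m**2 - math.cos(m)/m)
            + s*(1 - math.cos(m))/m)

q, r, s, m = 2.0, -3.0, 5.0, 10.0
approx = filon_3pt(lambda x: q*x*x + r*x + s, m)
```

The agreement is limited only by rounding, since both sides are exact for quadratics.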
\section{Comments and Conclusion}
Writing, debugging, and testing scientific software can benefit from
symbolic computation. Symbolic execution of Filon-style quadrature
programs provides a neat example of such testing and shows how even a
traditional ``numerical'' routine can sometimes be executed with
non-numeric parameters left in place -- sort of like raisins in a
muffin. Although there are more general approaches to oscillatory
integrals \cite{levin-96,LevinIntegrate,HOP,fateosc07}, this paper is
intended to be a simple fun example, yet with potentially serious use.
This paper was originally written in 2006, and was lightly revised in 2014.
\begin{thebibliography}{99}
\bibitem{fateosc07}
R.~Fateman,
``Numerical Integration for Oscillatory Integrands:
a Computer Algebra Perspective,'' draft.
\bibitem{filon28}
L.N.G. Filon,
``On a quadrature formula for trigonometric integrals,''
{\em Proc. Roy. Soc. Edinburgh 49} (1928--29), 38.
\bibitem{chase}
S.M. Chase and L.D. Fosdick,
``An algorithm for Filon Quadrature,''
{\em CACM 12}, no.~8 (August 1969), 453--457.
\bibitem{cew2000}
K.C. Chung, G.A. Evans, J.R. Webster,
``A method to generate generalized quadrature rules for oscillatory integrals,''
{\em Applied Numer. Math. 34} (2000), 85--93.
\bibitem{evans03}
G.A.~Evans, ``How Integration by Parts Leads to Generalised Quadrature Methods,''
{\em Intern. J. Computer Math.}, 2003, vol.~80(1), 75--81.
\bibitem{Fateman-osc} R.J.~Fateman, ``When is a function oscillatory?'' draft, June, 2007.
\bibitem{Fateman81} R.J.~Fateman, ``Computer Algebra and Numerical Integration,''
Proc. SYMSAC'81 (ISSAC), August, 1981, 228--232.
\bibitem{Geddes86} K.O.~Geddes,
``Numerical Integration in a Symbolic Context,''
Proc. SYMSAC-86 (ISSAC), July, 1986, 185--191.
\bibitem{levin-96} David Levin,
``Fast integration of rapidly oscillatory functions,''
{\em J. Computational and Applied Math. 67} (1996), 95--101.
\bibitem{LevinIntegrate} David Levin, {\tt http://sites.google.com/site/andrewjmoylan/levinintegrate}.
Also see {\tt arXiv:0710.3140v1}.
\bibitem{HOP} Newton Institute Seminars on Highly Oscillatory Problems, \\
{\tt http://www.newton.ac.uk/webseminars/pg+ws/2007/hop/}
\end{thebibliography}
\end{document}
{\section{Testing}
I am grateful for the following examples suggested by Daniel Lichtblau of Wolfram Research
\begin{verbatim}
NIntegrate[Cos[20000*x]/Sqrt[x], {x, 0, 1}]
NIntegrate[Sin[2000*x]/Sqrt[x], {x, 0, 2}]
NIntegrate[Sin[10*x^3]*Sqrt[x], {x, 1, 10}]
NIntegrate[Sin[20000*x^3]*Sqrt[x], {x, 1, 200}]
NIntegrate[Sin[20*x^2 + 200]/Sqrt[x], {x, 1, Infinity}]
NIntegrate[BesselJ[4, 20*x^3 + 4]/Sqrt[x], {x, 1, Infinity}]
NIntegrate[Sin[200*x^2 + 5]*(1/(x + 1)^2), {x, 0, Infinity}]
NIntegrate[Sin[2*x]*(1/x^2), {x, 1, Infinity}]
NIntegrate[Sin[20*x]*Cos[18*x]*(1/Sqrt[x + 1]), {x, 0, Infinity}]
NIntegrate[Piecewise[{{Sin[20*x]/x, x < 0}, {Cos[2*x^5 + 4]/Sqrt[x],
0 < x < 1}, {BesselJ[30, 20*x^3 + 4], x > 1}}],
{x, -Infinity, Infinity}]
NIntegrate[Piecewise[{{Cos[2000*x]/Sqrt[x], 0 < x < 2},
{Sin[20*x]/Sqrt[x^2], x < 0}, {BesselY[2, x^3]/x, x > 2}}],
{x, -Infinity, Infinity}]
NIntegrate[BesselJ[1, x]/(1 + x), {x, 0, Infinity}]
NIntegrate[Sin[10*x]*Cos[10*x]/x, {x,0,Infinity}]
NIntegrate[Sin[x^2]*Cos[x^2], {x,1,Infinity}]
NIntegrate[x^(3/5)*AiryBi[-x], {x,0,Infinity}]
NIntegrate[Cos[5*t^3-7*t], {t,0,Infinity}]
NIntegrate[Exp[I*(x^2+3*x)], {x, -Infinity, Infinity}]
NIntegrate[Sin[x^2]*Cos[x], {x,0,Infinity}]
NIntegrate[(4*E^(I*x))/x^5, {x,1,Infinity}]
NIntegrate[AiryBi[z], {z,-Infinity,0}]
NIntegrate[Exp[2+I*x^2],{x, -Infinity,Infinity}]
NIntegrate[(6*x*Cos[x] + 2*(-3 + x^2)*Sin[x])/x^3, {x, 0, Infinity}]
NIntegrate[Exp[(I*(t-1/t))/2]*t^(-3/2), {t, 0, Infinity}]
NIntegrate[x^(-3)*(Sin[x]/x-Cos[x])^2, {x,0,Infinity}]
NIntegrate[Exp[I/2*x^2], {x,-Infinity,Infinity}]
NIntegrate[AiryAi[t], {t,-Infinity,Infinity}]
NIntegrate[Sin[x]/x*Exp[3*I*x], {x,-Infinity,Infinity}]
NIntegrate[Exp[3*I*x]*Sech[x], {x,-Infinity,Infinity}]
NIntegrate[Exp[2*Pi*I/x]*Exp[(8+I)*Pi*I*x], {x,0,Infinity}]
NIntegrate[(t*Cos[t] - Sin[t])/(1 + t^2), {t, 0, Infinity}]
NIntegrate[(x*Sin[Pi*x])/(-1 + x^2), {x, 0, Infinity}]
NIntegrate[(Exp[I*Pi*x]*x)/(x^2+1), {x,-Infinity,Infinity}]
NIntegrate[ArcTan[2/x^2]*Cos[3*x], {x,0,Infinity}]
NIntegrate[Sin[x]/E^(5*x)/x, {x,0,Infinity}]
NIntegrate[1/(E^x - 1) - Cos[x]/x, {x, 0, Infinity}]
NIntegrate[1/Sqrt[x]*BesselJ[2,2*x]*BesselJ[1/3,3*x],{x,0,Infinity}]
Integrate[Cos[20*x^2 + 6]/(1 + x)^2, {x, 0, Infinity}]
Integrate[Erf[x^5]*Sin[x^2], {x, 0, Infinity}]
Integrate[Sin[z^2]*Erfc[z], {z, 0, Infinity}]
Integrate[Sin[x^2 + 1]/(x^2 + 1)^(3/2), {x, 0, Infinity}]
Integrate[Sin[x^3 + x], {x,0,Infinity}]
Integrate[(Sin[u]*SinIntegral[u]+Cos[u]*CosIntegral[u])/
Sqrt[u], {u,0,Infinity}]
Integrate[x^(3/5)*AiryBi[-x], {x, 0, Infinity}]
Integrate[Sin[10^4*x^2]*Exp[-x], {x, 1, 10^3}]
Integrate[Sin[10^4*x^2]*Exp[-x], {x, 1, Infinity}]
Integrate[Sin[200000*x^2 + 50]*Exp[x], {x, 1, 20}]
\end{verbatim}
Insert here: What does Maple do? What does Mathematica do?
(Apparently Maple does not do much for highly oscillatory
functions. Mathematica needs to be told to use an oscillatory
method??) }
Another idea: look at the relatively slowly varying function, $f$, in the
expression
$\exp(ix)\,\exp(f(x))$, or, for some different $f$, just $\exp(f(x))$.
Laplace's method (see Wikipedia for a nice synopsis) is used to
approximate integrals of the form integrate(exp(M*f(x)),x,a,b) where
f(x) is twice differentiable and ``M is large''. Of course M
``large'' is hardly a formal specification, but the idea is that f(x)
has a single global maximum at $x_0$ in the interior of $[a,b]$,
and we assume
that $f(x)$ is not close to $f(x_0)$ unless $x$ is near $x_0$.
Also $f''(x_0)<0$.
The thought is that hardly anything far away from $x_0$ matters.
Expand $f(x)$ at $x_0$. Since $x_0$ is a maximum, the first derivative is 0 and the second derivative is negative,
so $f(x)\approx f(x_0)+f''(x_0)(x-x_0)^2/2$,
or, to emphasize the sign of the second term,
$f(x)\approx f(x_0)-|f''(x_0)|(x-x_0)^2/2$.
Now put this approximation for $f$ in the original integral. (Note: a higher-order Taylor series is also plausible; integrate(exp(x^n),x) is,
using Mathematica 5.1,
-((x*Gamma[n^(-1), -x^n])/(n*(-x^n)^n^(-1))), or perhaps
-((x*ExpIntegralE[(-1 + n)/n, -x^n])/n).)
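The resulting Gaussian integral gives the familiar leading-order Laplace estimate, which can be tried numerically (a check added here; the test case $f(x)=\sin x$ on $[0,\pi]$ with $M=50$ is arbitrary):

```python
import math

def laplace_estimate(M, f_x0, fpp_x0):
    """Leading-order Laplace approximation to integrate(exp(M*f(x)),x,a,b):
    exp(M*f(x0)) * sqrt(2*pi/(M*|f''(x0)|))."""
    return math.exp(M * f_x0) * math.sqrt(2 * math.pi / (M * abs(fpp_x0)))

# Test case: integrate exp(M*sin(x)) on [0, pi]; f(x) = sin(x) has its
# maximum at x0 = pi/2, with f(x0) = 1 and f''(x0) = -1.
M = 50
est = laplace_estimate(M, 1.0, -1.0)

# Brute-force trapezoid value for comparison.
n = 100000
h = math.pi / n
num = h * (sum(math.exp(M * math.sin(i*h)) for i in range(1, n))
           + (math.exp(M * math.sin(0.0)) + math.exp(M * math.sin(math.pi))) / 2)
```

For $M=50$ the leading-order estimate already agrees with the brute-force value to well under a percent, consistent with the expected $O(1/M)$ relative error.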
Evans tests:
Exp[x]*Sin[50*Cos[x]], 0 to 1; easy for mma6, easy for maxima quanc8
f(x):=exp(x)*sin(50*cos(x));
Cos[100*Sqrt[1-x^2]]/Sqrt[1-x^2], 0 to 0.9; easy
Sin[x]^2*Cos[10*Tanh[x]], 0 to 1; easy
h(x):=exp(x)*bessel_j(100,100*cos(x)); no problem either, 0.5 to 1
Exp[x]*BesselJ[0,100*Cos[x]]
Some code from filon.lisp
;; p is number of panels.
;; (filonsinunit f m p) will integrate f*sin(m x)from 0 to 1
;; example (filonsinunit #'identity 500 1); should be "exact"
;; the integral of x*sin(500*x) from 0 to 1.
;; This result is the
;; same as (filonsinunit #'identity 500 100)
#+ignore
(defun filonsinunit(fun m p)
(let*((h (/ 1 (* 2 p)))
(k m)
(theta (* k h))
(s2p 0)(s2pm1 0)(alf 0)(bet 0)(gam 0) (sum 0))
(loop for i from 0 to p do (setf sum (+ sum (*(funcall fun (* 2 h i))
(sin (* 2 k h i))))))
(setf s2p (- sum (* 1/2 (funcall fun 1)(sin k))))
(setf sum 0)
(loop for i from 1 to p do (setf sum (+ sum (*(funcall fun (* h (1- (* 2 i))))
(sin (* theta (1- (* 2 i))))))))
(setf s2pm1 sum)
;; increase precision by factor of 2 here, if we can.
;; ignore this aspect for the moment.
;; alf: 1/theta + sin(2*theta)/2/theta^2-2*sin(theta)^2/theta^3,
(setf alf (+ (/ 1 theta)
(/ (* 1/2 (sin (* 2 theta))) (expt theta 2))
(/ (* -2 (expt (sin theta) 2))(expt theta 3))))
;;bet: 2*((1+cos(theta)^2)/theta^2 -sin(2*theta)/theta^3),
(setf bet (* 2 (- (/ (+ 1 (expt (cos theta) 2)) (expt theta 2))
(/ (sin (* 2 theta)) (expt theta 3)))))
;;gam: 4*(sin(theta)/theta^3-cos(theta)/theta^2)),
(setf gam (* 4 (- (/ (sin theta)(expt theta 3))
(/ (cos theta) (expt theta 2)))))
;; return this value
;;(format t "~% theta=~s a=~s b=~s g=~s s2p=~s s2pm1=~s" theta alf bet gam s2p s2pm1)
    ;; return h*(alf*(f(0)-f(1)*cos(k)) + bet*s2p + gam*s2pm1),
    ;; matching the Maxima code: alf multiplies (f(0) - f(1)*cos(k))
    (* h (+ (* alf (- (funcall fun 0)
                      (* (funcall fun 1) (cos k))))
            (* bet s2p)
            (* gam s2pm1)))))
;; what about computing sin/cos faster?
;; and other ways of running faster?
;;(sh:sin(h), ch:cos(h), kill(ss,cc),
;; ss[0]:0,
;; ss[n]:= ch*ss[n-1]+sh*cc[n-1],
;; cc[0]:1,
;; cc[n]:=ch*cc[n-1]-sh*ss[n-1])
(defun fsu(fun m p)
(declare (optimize (speed 3)(safety 0)) (fixnum m p))
;; p is number of panels. integrate f*sin(m x)from 0 to 1
(let*((h (/ 1 (* 2 p)))
(k m)
(theta (coerce (* k h) 'double-float))
(z 0)
(s2p 0)(s2pm1 0)(alf 0)(bet 0)(gam 0) (sum 0)
(sh (sin theta))
(ch (cos theta))
(arsin (make-array (1+(* 2 p))))
(arcos (make-array (1+ (* 2 p)))))
;;; arrays of sin and cos
(setf (aref arsin 0) 0)
(setf (aref arsin 1)sh)
(setf (aref arcos 0) 1)
(setf (aref arcos 1)ch)
(loop for n from 2 to (* 2 p) do
(setf (aref arsin n) (+ (* ch (aref arsin (1- n))) (* sh (aref arcos (1- n)))))
(setf (aref arcos n) (- (* ch (aref arcos (1- n))) (* sh (aref arsin (1- n))))))
(loop for i from 0 to (* 2 p) by 2 do (setf sum (+ sum (*(funcall fun (* h i))
(aref arsin i)))))
(setf s2p (- sum (* 1/2 (funcall fun 1)(sin k))))
(setf sum 0)
(loop for i from 1 to p do
(setf z (1- (* 2 i)))
(setf sum (+ sum (*(funcall fun (* h z))
(aref arsin z)))))
(setf s2pm1 sum)
;; increase precision by factor of 2 here, if we can.
(let*
((ct (cos theta))
(st (sin theta))
(s2t (sin (* 2 theta)))
(t2 (* theta theta))
(t3 (* theta t2)))
;; ignore this aspect for the moment.
;; alf: 1/theta + sin(2*theta)/2/theta^2-2*sin(theta)^2/theta^3,
(setf alf (+ (/ 1 theta)
(/ (* 1/2 s2t) t2)
(/ (* -2 (expt st 2))t3 )))
;;bet: 2*((1+cos(theta)^2)/theta^2 -sin(2*theta)/theta^3),
(setf bet (* 2 (- (/ (+ 1 (expt ct 2)) t2)
(/ s2t t3))))
;;gam: 4*(sin(theta)/theta^3-cos(theta)/theta^2)),
(setf gam (* 4 (- (/ st t3)
(/ ct t2)))))
;; return this value
;;(format t "~% theta=~s a=~s b=~s g=~s s2p=~s s2pm1=~s" theta alf bet gam s2p s2pm1)
    ;; return h*(alf*(f(0)-f(1)*cos(k)) + bet*s2p + gam*s2pm1),
    ;; matching the Maxima code: alf multiplies (f(0) - f(1)*cos(k))
    (* h (+ (* alf (- (funcall fun 0)
                      (* (funcall fun 1) (cos k))))
            (* bet s2p)
            (* gam s2pm1)))))
;;; note, if theta is less than about 1/6, Chase and Fosdick recommend
;;; using power series for alpha, beta, gamma. They also display the
;;; terms that they use for their desired precision. Note that theta
;;; is m/(2*p) where p is the number of panels, m is the coefficient of
;;; sin arg [sin(mx)]. Here we advocate doubling the precision of the
;;; computation. Using powerseries of fixed length is less attractive
;;; for us because the number of power series terms to carry depends
;;; on the required accuracy, which can vary as we vary the number of
;;; fraction bits in our representation, as well as the size of theta.
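The incremental sin/cos recurrence used in fsu (sketched in the Maxima-style comments earlier) can be written in Python; note that the new cosine must be formed from the {\em previous} sine value, not the just-updated one:

```python
import math

def sin_cos_table(theta, n):
    """Tables of sin(k*theta) and cos(k*theta) for k = 0..n, built with the
    angle-addition recurrence instead of n calls each to sin and cos."""
    sh, ch = math.sin(theta), math.cos(theta)
    ss, cc = [0.0], [1.0]
    for k in range(1, n + 1):
        # s_prev, c_prev are sin((k-1)*theta), cos((k-1)*theta);
        # saving them first avoids using the freshly updated sine.
        s_prev, c_prev = ss[-1], cc[-1]
        ss.append(ch*s_prev + sh*c_prev)
        cc.append(ch*c_prev - sh*s_prev)
    return ss, cc
```

For moderate table lengths the rounding drift of the recurrence stays far below single-evaluation trig accuracy, so the speedup costs essentially nothing here.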
.................
Older version of the Maxima code:
\begin{verbatim}
filonse(f,m,p) := /* Filon quadrature, Sin, Exact, f(x)*sin(m*x), from 0 to 1, p panels,*/
block([h : 1/(2*p),k : m,theta,s2p,s2pm1,alf,bet,gam,f1],
theta : k*h,
s2p : block([sum : 0],
for i from 0 thru p do
sum : sum+apply(f,[2*h*i])*sin(2*k*h*i),
f1 : apply(f,[1]),
sum-f1*sin(k)/2),
s2pm1 : block([sum : 0],
for i thru p do
sum : sum
+apply(f,[h*(2*i-1)]) *sin(k*h*(2*i-1)),
sum),
alf : 1/theta+sin(2*theta)/2/theta^2 -2*sin(theta)^2/theta^3,
bet : 2*((1+cos(theta)^2)/theta^2 -sin(2*theta)/theta^3),
gam : 4*(sin(theta)/theta^3-cos(theta)/theta^2),
h*(alf*(apply(f,[0])-f1*cos(k))+bet*s2p+gam*s2pm1))
\end{verbatim}