Function: sumnumap
Section: sums
C-Name: sumnumap0
Prototype: V=GEDGp
Help: sumnumap(n=a,f,{tab}): numerical summation of f(n) from
 n = a to +infinity using Abel-Plana formula. Assume that f is holomorphic
 in the right half-plane Re(z) > a; a must be an integer, and tab, if given,
 is the output of sumnumapinit.
Wrapper: (,G)
Description:
  (gen,gen,?gen):gen:prec sumnumap(${2 cookie}, ${2 wrapper}, $1, $3, $prec)
Doc: Numerical summation of $f(n)$ at high accuracy using Abel-Plana,
 the variable $n$ taking values from $a$ to $+\infty$, where $f$ is
 holomorphic in the right half-plane $\Re(z) > a$; \kbd{a} must be an integer
 and \kbd{tab}, if given, is the output of \kbd{sumnumapinit}. The latter
 precomputes abscissas and weights, speeding up the computation; it also allows
 one to specify the behavior at infinity via \kbd{sumnumapinit([+oo, asymp])}.
 \bprog
 ? \p500
 ? z3 = zeta(3);
 ? sumpos(n = 1, n^-3) - z3
 time = 2,332 ms.
 %2 = 2.438468843 E-501
 ? sumnumap(n = 1, n^-3) - z3 \\ here slower than sumpos
 time = 2,565 ms.
 %3 = 0.E-500
 @eprog

 \misctitle{Complexity}
 The function $f$ will be evaluated at $O(D \log D)$ real arguments
 and $O(D)$ complex arguments,
 where $D \approx \kbd{realprecision} \cdot \log(10)$. The routine is geared
 towards slowly decreasing functions: if $f$ decreases exponentially fast,
 then one of \kbd{suminf} or \kbd{sumpos} should be preferred.
 The default algorithm \kbd{sumnum} is usually a little \emph{slower}
 than \kbd{sumnumap} but its initialization function \kbd{sumnuminit}
 becomes much faster as \kbd{realprecision} increases.
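
 To see these evaluation counts in practice, one may wrap the summand in a small
 helper that increments a counter each time it is called; the sketch below is
 purely illustrative (the helper \kbd{f} and the counter \kbd{cnt} are not part
 of the library interface).
 \bprog
   default(realprecision, 100); \\ D ~ 100*log(10) ~ 230
   cnt = 0;
   f(n) = {cnt++; n^(-3)};      \\ summand instrumented with a counter
   sumnumap(k = 1, f(k));
   print(cnt);                  \\ of the order of D log D evaluations
 @eprog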

 If $f$ satisfies the stronger hypotheses required for Monien summation,
 i.e. if $f(1/z)$ is holomorphic in a complex neighbourhood of $[0,1]$,
 then \tet{sumnummonien} will be faster since it only requires $O(D/\log D)$
 evaluations:
 \bprog
 ? sumnummonien(n = 1, 1/n^3) - z3
 time = 1,128 ms.
 %3 = 0.E-500
 @eprog\noindent The \kbd{tab} argument precomputes technical data
 not depending on the expression being summed and valid for a given accuracy,
 speeding up later calls immensely:
 \bprog
 ? tab = sumnumapinit();
 time = 2,567 ms.
 ? sumnumap(n = 1, 1/n^3, tab) - z3 \\ now much faster than sumpos
 time = 39 ms.
 %5 = 0.E-500

 ? tabmon = sumnummonieninit(); \\ Monien summation allows precomputations too
 time = 1,125 ms.
 ? sumnummonien(n = 1, 1/n^3, tabmon) - z3
 time = 2 ms.
 %7 = 0.E-500
 @eprog\noindent The speedup due to precomputations becomes less impressive
 when the function $f$ is expensive to evaluate, though:
 \bprog
 ? sumnumap(n = 1, lngamma(1+1/n)/n, tab);
 time = 10,762 ms.

 ? sumnummonien(n = 1, lngamma(1+1/n)/n, tabmon); \\ fewer evaluations
 time = 205 ms.
 @eprog

 \misctitle{Behaviour at infinity}
 By default, \kbd{sumnumap} assumes that \var{expr} decreases slowly at
 infinity, but at least like $O(n^{-2})$. If the function decreases
 like $n^{\alpha}$ for some $-2 < \alpha < -1$, then it must be indicated via
 \bprog
   tab = sumnumapinit([+oo, alpha]); /* alpha < 0 slow decrease */
 @eprog\noindent otherwise loss of accuracy is expected.
 If the function decreases quickly, like $\exp(-\alpha n)$ for some
 $\alpha > 0$, then it must be indicated via
 \bprog
   tab = sumnumapinit([+oo, alpha]); /* alpha > 0 exponential decrease */
 @eprog\noindent otherwise exponent overflow will occur.
 \bprog
 ? sumnumap(n=1,2^-n)
  ***   at top-level: sumnumap(n=1,2^-n)
  ***                             ^----
  *** _^_: overflow in expo().
 ? tab = sumnumapinit([+oo,log(2)]); sumnumap(n=1,2^-n, tab)
 %1 = 1.000[...]
 @eprog

 As a shortcut, one can also input
 \bprog
   sumnumap(n = [a, asymp], f)
 @eprog\noindent instead of
 \bprog
   tab = sumnumapinit(asymp);
   sumnumap(n = a, f, tab)
 @eprog

 \misctitle{Further examples}
 \bprog
 ? \p200
 ? sumnumap(n = 1, n^(-2)) - zeta(2) \\ accurate, fast
 time = 169 ms.
 %1 = -4.752728915737899559 E-212
 ? sumpos(n = 1, n^(-2)) - zeta(2)  \\ even faster
 time = 79 ms.
 %2 = 0.E-211
 ? sumpos(n=1,n^(-4/3)) - zeta(4/3)   \\ now much slower
 time = 10,518 ms.
 %3 = -9.980730723049589073 E-210
 ? sumnumap(n=1,n^(-4/3)) - zeta(4/3)  \\ fast but inaccurate
 time = 309 ms.
 %4 = -2.57[...]E-78
 ? sumnumap(n=[1,-4/3],n^(-4/3)) - zeta(4/3) \\ decrease rate: now accurate
 time = 329 ms.
 %6 = -5.418110963941205497 E-210

 ? tab = sumnumapinit([+oo,-4/3]);
 time = 160 ms.
 ? sumnumap(n=1, n^(-4/3), tab) - zeta(4/3) \\ faster with precomputations
 time = 175 ms.
 %5 = -5.418110963941205497 E-210
 ? sumnumap(n=1,-log(n)*n^(-4/3), tab) - zeta'(4/3)
 time = 258 ms.
 %7 = 9.125239518216767153 E-210
 @eprog

 Note that in the case of slow decrease ($\alpha < 0$), the exact
 decrease rate must be indicated, while in the case of exponential decrease,
 a rough value will do. In fact, for exponentially decreasing functions,
 \kbd{sumnumap} is given for completeness and comparison purposes only: one
 of \kbd{suminf} or \kbd{sumpos} should always be preferred.
 \bprog
 ? sumnumap(n=[1, 1], 2^-n) \\ pretend we decrease as exp(-n)
 time = 240 ms.
 %8 = 1.000[...] \\ perfect
 ? sumpos(n=1, 2^-n)
 %9 = 1.000[...] \\ perfect and instantaneous
 @eprog
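
 Conversely, the sensitivity to the stated rate in the slow-decrease case can be
 checked directly; the sketch below reuses the $n^{-4/3}$ summand from the
 examples above, comparing the exact rate with a deliberately inexact one (for
 which some loss of accuracy is expected, as explained above).
 \bprog
   sumnumap(n = [1, -4/3], n^(-4/3)) - zeta(4/3) \\ exact rate: accurate
   sumnumap(n = [1, -3/2], n^(-4/3)) - zeta(4/3) \\ inexact rate: accuracy loss
 @eprog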

 \synt{sumnumap}{(void *E, GEN (*eval)(void*,GEN), GEN a, GEN tab, long prec)}
 where an omitted \var{tab} is coded as \kbd{NULL}.