
# Numerical methods challenge: Day 16

During October (2017) I will write a program per day for some well-known numerical methods in both Python and Julia. It is intended to be an exercise, so don't expect the code to be good enough for real use. Also, I should mention that I have almost no experience with Julia, so it probably won't be idiomatic Julia but more Python-like Julia.

Today we have the Clenshaw-Curtis quadrature. This method is based on the expansion of the integrand in Chebyshev polynomials.
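As a quick sketch of the standard derivation (using nodes \(\theta_j = j\pi/N\)): the discrete Chebyshev coefficients are

\begin{equation*} a_k = \frac{2}{N}\left(\frac{f(1) + (-1)^k f(-1)}{2} + \sum_{j=1}^{N-1} f\left(\cos\frac{j\pi}{N}\right) \cos\frac{k j \pi}{N}\right) \end{equation*}

and, since \(\int_{-1}^{1} T_k(x)\, \mathrm{d}x = 2/(1 - k^2)\) for even \(k\) and zero for odd \(k\), the integral is approximated by

\begin{equation*} \int_{-1}^{1} f(x)\, \mathrm{d}x \approx a_0 + \sum_{k=2,\, k\ \mathrm{even}}^{N-2} \frac{2 a_k}{1 - k^2} + \frac{a_N}{1 - N^2} \end{equation*}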

We will test the quadrature with the integral

\begin{equation*} \int_0^3 e^{-x^2} \mathrm{d}x \approx 0.8862073482595214 \end{equation*}
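The reference value has a closed form, since \(\int_0^3 e^{-x^2}\,\mathrm{d}x = \tfrac{\sqrt{\pi}}{2}\,\mathrm{erf}(3)\). A quick check with the standard library:

```python
from math import erf, pi, sqrt

# Closed form of the test integral: sqrt(pi)/2 * erf(3)
ref = 0.5*sqrt(pi)*erf(3)
print(ref)  # 0.8862073482595214
```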

Following are the codes.

### Python

```python
from __future__ import division, print_function
from numpy import linspace, cos, pi, exp, sum


def clenshaw_curtis(fun, N=10, a=-1, b=1):
    """Clenshaw-Curtis quadrature of fun over [a, b] with N + 1 nodes."""
    nodes = linspace(-1, 1, N + 1)*pi
    jac = 0.5*(b - a)
    # Map the integrand from [a, b] to the reference interval [-1, 1]
    tfun = lambda x: fun(a + jac*(x + 1))
    inte = 0
    for k in range(0, N + 1, 2):
        # Chebyshev coefficient a_k via a discrete cosine transform
        coef = 1/N*(tfun(1) + tfun(-1)*(-1)**k +
                    2*sum(tfun(cos(nodes[1:-1]))*cos(k*nodes[1:-1])))
        if k == 0:
            inte += coef
        elif k == N:
            inte += coef/(1 - N**2)
        else:
            inte += 2*coef/(1 - k**2)
    return inte*jac


N = 100
fun = lambda x: exp(-x**2)
print(clenshaw_curtis(fun, N=N, a=0, b=3))
```


with result

```
0.885906202792
```
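As a sanity check (a sketch reusing the Python implementation above), the result can be compared against the closed-form reference \(\tfrac{\sqrt{\pi}}{2}\,\mathrm{erf}(3)\); at \(N = 100\) the two agree to roughly four digits:

```python
from math import erf, pi as PI, sqrt
from numpy import linspace, cos, pi, exp, sum

def clenshaw_curtis(fun, N=10, a=-1, b=1):
    # Same routine as in the post
    nodes = linspace(-1, 1, N + 1)*pi
    jac = 0.5*(b - a)
    tfun = lambda x: fun(a + jac*(x + 1))
    inte = 0
    for k in range(0, N + 1, 2):
        coef = 1/N*(tfun(1) + tfun(-1)*(-1)**k +
                    2*sum(tfun(cos(nodes[1:-1]))*cos(k*nodes[1:-1])))
        if k == 0:
            inte += coef
        elif k == N:
            inte += coef/(1 - N**2)
        else:
            inte += 2*coef/(1 - k**2)
    return inte*jac

ref = 0.5*sqrt(PI)*erf(3)  # exact value of the test integral
approx = clenshaw_curtis(lambda x: exp(-x**2), N=100, a=0, b=3)
print(abs(approx - ref))   # about 3.0e-4
```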


### Julia

```julia
function clenshaw_curtis(fun; N=50, a=-1, b=1)
    nodes = linspace(-1, 1, N + 1)*pi
    jac = 0.5*(b - a)
    # Map the integrand from [a, b] to the reference interval [-1, 1]
    tfun(x) = fun(a + jac*(x + 1))
    inte = 0
    for k = 0:2:N
        # Chebyshev coefficient a_k via a discrete cosine transform
        coef = 1/N*(tfun(1) + tfun(-1)*(-1)^k +
                    2*sum(tfun(cos.(nodes[2:end-1])).*cos.(k*nodes[2:end-1])))
        if k == 0
            inte += coef
        elseif k == N
            inte += coef/(1 - N^2)
        else
            inte += 2*coef/(1 - k^2)
        end
    end
    return inte*jac
end

N = 100
fun(x) = exp.(-x.^2)
print(clenshaw_curtis(fun, N=N, a=0, b=3))
```


with result

```
0.8859062027920102
```


### Comparison Python/Julia

Regarding the number of lines, we have 24 in Python and 23 in Julia. The comparison in execution time is done with the %timeit magic command in IPython and with @benchmark from the BenchmarkTools package in Julia.

For Python:

```
%timeit -n 10000 clenshaw_curtis(fun, N=N, a=0, b=3)
```


with result

```
10000 loops, best of 3: 2.4 ms per loop
```


For Julia:

```julia
@benchmark clenshaw_curtis(fun, N=N, a=0, b=3)
```


with result

```
BenchmarkTools.Trial:
  memory estimate:  359.56 KiB
  allocs estimate:  565
  --------------
  minimum time:     381.676 μs (0.00% GC)
  median time:      388.497 μs (0.00% GC)
  mean time:        413.471 μs (1.77% GC)
  maximum time:     1.298 ms (49.07% GC)
  --------------
  samples:          10000
  evals/sample:     1
```


In this case, the Python code is roughly 6 times slower than the Julia one (about 2.4 ms versus 0.4 ms).