Numerical methods challenge: Day 22

During October (2017) I will write a program per day for some well-known numerical methods in both Python and Julia. It is intended to be an exercise, so don't expect the code to be good enough for real use. Also, I should mention that I have almost no experience with Julia, so it probably won't be idiomatic Julia but more Python-like Julia.

Finite Difference: Eigenproblems

Today we have a finite difference method to compute the vibration modes of a beam. The equation of interest is

\begin{equation*} \nabla^4 u = k^2 u \end{equation*}

with

\begin{equation*} u(0) = u(L) = 0 \end{equation*}
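
On a uniform grid, the fourth derivative is approximated with the standard five-point central-difference stencil, which is what the pentadiagonal matrices in the codes below assemble:

\begin{equation*} u''''(x_i) \approx \frac{u_{i-2} - 4 u_{i-1} + 6 u_i - 4 u_{i+1} + u_{i+2}}{\Delta x^4} \end{equation*}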

Following are the codes.

Python

from __future__ import division, print_function
import numpy as np
from scipy.linalg import eigh as eigsh
import matplotlib.pyplot as plt


def beam_FDM_eigs(L, N):
    x = np.linspace(0, L, N)
    dx = x[1] - x[0]
    stiff = np.diag(6*np.ones(N - 2)) -\
            np.diag(4*np.ones(N - 3), -1) - np.diag(4*np.ones(N - 3), 1) +\
            np.diag(1*np.ones(N - 4), 2) + np.diag(1*np.ones(N - 4), -2)
    vals, vecs = eigsh(stiff/dx**4)
    return vals, vecs, x


N = 1001
nvals = 20
nvecs = 4
vals, vecs, x = beam_FDM_eigs(1.0, N)

#%% Plotting
num = np.linspace(1, nvals, nvals)
plt.rcParams["mathtext.fontset"] = "cm"
plt.figure(figsize=(8, 3))
plt.subplot(1, 2, 1)
plt.plot(num, np.sqrt(vals[0:nvals]), "o")
plt.xlabel(r"$N$")
plt.ylabel(r"$\omega\sqrt{\frac{\lambda}{EI}}$")
plt.subplot(1, 2, 2)
for k in range(nvecs):
    vec = np.zeros(N)
    vec[1:-1] = vecs[:, k]
    plt.plot(x, vec, label=r'$n=%i$'%(k+1))

plt.xlabel(r"$x$")
plt.ylabel(r"$w$")
plt.legend(ncol=2, framealpha=0.8, loc=1)
plt.tight_layout()
plt.show()

Julia

using PyPlot


function beam_FDM_eigs(L, N)
    x = linspace(0, L, N)
    dx = x[2] - x[1]
    stiff = diagm(6*ones(N - 2)) -
            diagm(4*ones(N - 3), -1) - diagm(4*ones(N - 3), 1) +
            diagm(1*ones(N - 4), 2) + diagm(1*ones(N - 4), -2)
    vals, vecs = eig(stiff/dx^4)
    return vals, vecs, x
end


N = 1001
nvals = 20
nvecs = 4
vals, vecs, x = beam_FDM_eigs(1.0, N)

#%% Plotting
num = 1:nvals
rc("mathtext", fontset="cm")  # set the math font, mirroring the Python rcParams line
figure(figsize=(8, 3))
subplot(1, 2, 1)
plot(num, sqrt.(vals[1:nvals]), "o")
xlabel(L"$N$")
ylabel(L"$\omega\sqrt{\frac{\lambda}{EI}}$")
subplot(1, 2, 2)
for k in 1:nvecs
    vec = zeros(N)
    vec[2:end-1] = vecs[:, k]
    plot(x, vec, label="n=$(k)")
end

xlabel(L"$x$")
ylabel(L"$w$")
legend(ncol=2, framealpha=0.8, loc=1)
tight_layout()
show()

Both have (almost) the same result, as follows

Vibration modes of a clamped beam.

Comparison Python/Julia

Regarding the number of lines, we have: 40 in Python and 39 in Julia. The comparison in execution time is done with the %timeit magic command in IPython and @benchmark in Julia.

For Python:

%timeit beam_FDM_eigs(1.0, N)

with result

10 loops, best of 3: 196 ms per loop

For Julia:

@benchmark beam_FDM_eigs(1.0, N)

with result

BenchmarkTools.Trial:
  memory estimate:  99.42 MiB
  allocs estimate:  55
  --------------
  minimum time:     665.152 ms (17.14% GC)
  median time:      775.441 ms (21.76% GC)
  mean time:        853.401 ms (16.86% GC)
  maximum time:     1.236 s (15.68% GC)
  --------------
  samples:          6
  evals/sample:     1

In this case, we can say that the Python code is roughly 4 times faster than Julia.

Numerical methods challenge: Day 21

During October (2017) I will write a program per day for some well-known numerical methods in both Python and Julia. It is intended to be an exercise, so don't expect the code to be good enough for real use. Also, I should mention that I have almost no experience with Julia, so it probably won't be idiomatic Julia but more Python-like Julia.

Jacobi iteration

Today we have a finite difference method combined with the Jacobi method. We are solving the Laplace equation

\begin{equation*} \nabla^2 u = 0 \end{equation*}

with

\begin{align*} u(0, y) = 1 - y,\quad u(x, 0) = 1 - x,\\ u(1, y) = u(x, 1) = 0 \end{align*}
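
Discretizing the Laplacian with the five-point stencil and updating every interior node from the values of the previous iteration gives the Jacobi update used in heat_FDM below:

\begin{equation*} u^{(n+1)}_{i,j} = \frac{1}{4}\left(u^{(n)}_{i-1,j} + u^{(n)}_{i+1,j} + u^{(n)}_{i,j-1} + u^{(n)}_{i,j+1}\right) \end{equation*}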

Following are the codes.

Python

from __future__ import division, print_function
import numpy as np
import matplotlib.pyplot as plt


def jacobi(u, update, tol=1e-6, niter=500):
    for n in range(niter):
        u_new = update(u)
        if np.linalg.norm(u_new - u) < tol:
            break
        else:
            u = u_new.copy()
    return u, n


def heat_FDM(u):
    u_new = u.copy()
    u_new[1:-1, 1:-1] = 0.25*(u_new[0:-2, 1:-1] + u_new[2:, 1:-1] +\
         u_new[1:-1, 0:-2] + u_new[1:-1, 2:])
    return u_new


nx = 50
ny = 50
x_vec = np.linspace(-0.5, 0.5, nx)
y_vec = np.linspace(-0.5, 0.5, ny)
u0 = np.zeros((nx, ny))
u0[:, 0] = 1 - x_vec
u0[0, :] = 1 - y_vec
nvec = [100, 1000, 10000, 100000]
for num, niter in enumerate(nvec):
    u, n = jacobi(u0, heat_FDM, tol=1e-12, niter=niter)
    plt.subplot(2, 2, num + 1)
    plt.contourf(u, cmap='hot')
    plt.xlabel(r"$x$")
    plt.ylabel(r"$y$")
    plt.title('{} iterations'.format(n + 1))
    plt.axis('image')
plt.tight_layout()
plt.show()

Julia

using PyPlot


function jacobi(u, update; tol=1e-6, niter=500)
    num = niter
    for n = 1:niter
        u_new = update(u)
        if norm(u_new - u) < tol
            num = n
            break
        else
            u = copy(u_new)
        end
    end
    return u, num
end


function heat_FDM(u)
    u_new = copy(u)
    u_new[2:end-1, 2:end-1] = 0.25*(u_new[1:end-2, 2:end-1] +
        u_new[3:end, 2:end-1] + u_new[2:end-1, 1:end-2] + u_new[2:end-1, 3:end])
    return u_new
end


nx = 50
ny = 50
x_vec = linspace(-0.5, 0.5, nx)
y_vec = linspace(-0.5, 0.5, ny)
u0 = zeros(nx, ny)
u0[:, 1] = 1 - x_vec
u0[1, :] = 1 - y_vec
nvec = [100, 1000, 10000, 100000]
for (num, niter) = enumerate(nvec)
    u, n = jacobi(u0, heat_FDM, tol=1e-12, niter=niter)
    subplot(2, 2, num)
    contourf(u, cmap="hot")
    xlabel(L"$x$")
    ylabel(L"$y$")
    title("$(n) iterations")
    axis("image")
end
tight_layout()
show()

Solution of the differential equation that satisfies the boundary conditions.

Comparison Python/Julia

Regarding the number of lines, we have: 40 in Python and 45 in Julia. The comparison in execution time is done with the %timeit magic command in IPython and @benchmark in Julia.

For Python:

%timeit jacobi(u0, heat_FDM, tol=1e-12, niter=1000)

with result

10 loops, best of 3: 29.6 ms per loop

For Julia:

@benchmark jacobi(u0, heat_FDM, tol=1e-12, niter=1000)

with result

BenchmarkTools.Trial:
  memory estimate:  247.89 MiB
  allocs estimate:  43002
  --------------
  minimum time:     196.943 ms (5.66% GC)
  median time:      203.230 ms (5.74% GC)
  mean time:        203.060 ms (6.01% GC)
  maximum time:     208.017 ms (5.51% GC)
  --------------
  samples:          25
  evals/sample:     1

In this case, we can say that the Python code is roughly 7 times faster than Julia.

Numerical methods challenge: Day 20

During October (2017) I will write a program per day for some well-known numerical methods in both Python and Julia. It is intended to be an exercise, so don't expect the code to be good enough for real use. Also, I should mention that I have almost no experience with Julia, so it probably won't be idiomatic Julia but more Python-like Julia.

Shooting method

Today we have the shooting method. This is a method for solving boundary value problems by turning them into a sequence of initial value problems.

Loosely speaking, for a second order equation we have

\begin{equation*} y''(x) = f(x, y(x), y'(x)),\quad y(x_0) = y_0,\quad y(x_1) = y_1\, , \end{equation*}

and we solve the sequence of problems

\begin{equation*} y''(x) = f(x, y(x), y'(x)),\quad y(x_0) = y_0,\quad y'(x_0) = a \end{equation*}

until the function \(F(a) = y(x_1; a) - y_1\) has a root.

We will test the method with the boundary value problem

\begin{equation*} y''(x) = \frac{3}{2} y^2,\quad y(0) = 4,\quad y(1) = 1\, . \end{equation*}
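
This particular problem has the closed-form solution \(y(x) = 4/(1 + x)^2\), so the exact initial slope is \(y'(0) = -8\). This gives us a check for the codes below.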

Following are the codes.

Python

from __future__ import division, print_function
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import newton
from scipy.integrate import odeint


def shooting(dydx, x, y0, yf, shoot=None):
    if shoot is None:
        shoot = np.random.uniform(-20, 20)
    F = lambda s, y0, yf, x: odeint(dydx, [y0, s], x)[-1, 0] - yf
    shoot = newton(F, shoot, args=(y0, yf, x))
    y = odeint(dydx, [y0, shoot], x)
    return y[:, 0], shoot


func = lambda y, t: [y[1], 1.5*y[0]**2]
x = np.linspace(0, 1, 1000)
y, shoot = shooting(func, x, 4, 1, shoot=-5)
plt.plot(x, y)
plt.xlabel(r"$x$")
plt.ylabel(r"$y$")
plt.show()

Julia

using PyPlot, DifferentialEquations, Roots


function shooting(dydx, x, y0, yf; shoot=nothing)
    if shoot == nothing
        shoot = rand(-20:0.1:20)
    end
    function F(s)
        prob = ODEProblem(dydx, [y0, s], (x[1], x[end]))
        sol = solve(prob)
        return sol(x[end])[1] - yf
    end
    shoot = fzero(F, shoot)
    prob = ODEProblem(dydx, [y0, shoot], (x[1], x[end]))
    sol = solve(prob, saveat=x)
    return sol(x)[1, :], shoot
end


func(x, y) = [y[2], 1.5*y[1]^2]
x = linspace(0, 1, 1000)
y, shoot = shooting(func, x, 4.0, 1.0, shoot=-5.0)
plot(x, y)

In both cases the result is the following plot, and the computed slope is -7.9999999657800833, which matches the exact value of -8.

Solution of the differential equation that satisfies the boundary conditions.

We should mention that the convergence of the method relies on the selection of the initial guess. For example, if we choose -50 as the initial parameter in the previous problem, we obtain a completely different solution.

y, shoot = shooting(func, x, 4.0, 1.0, shoot=-50.0)

Solution of the differential equation that satisfies the boundary conditions.

And the obtained slope is -35.858547970130971.

Comparison Python/Julia

Regarding the number of lines, we have: 20 in Python and 23 in Julia. The comparison in execution time is done with the %timeit magic command in IPython and @benchmark in Julia.

For Python:

%timeit shooting(func, x, 4, 1, shoot=-5)

with result

100 loops, best of 3: 1.9 ms per loop

For Julia:

@benchmark shooting(func, x, 4.0, 1.0, shoot=-5.0)

with result

BenchmarkTools.Trial:
  memory estimate:  4.18 MiB
  allocs estimate:  78552
  --------------
  minimum time:     10.065 ms (0.00% GC)
  median time:      10.593 ms (0.00% GC)
  mean time:        11.769 ms (9.28% GC)
  maximum time:     22.252 ms (48.58% GC)
  --------------
  samples:          425
  evals/sample:     1

In this case, we can say that the Python code is roughly 5 times faster than Julia, although the codes are more different than in the other challenges. For example, I am not using newton in Julia.

Numerical methods challenge: Day 19

During October (2017) I will write a program per day for some well-known numerical methods in both Python and Julia. It is intended to be an exercise, so don't expect the code to be good enough for real use. Also, I should mention that I have almost no experience with Julia, so it probably won't be idiomatic Julia but more Python-like Julia.

Verlet integration

Today we have the Verlet integration technique. This method is widely used to integrate equations of motion of several systems, such as orbital mechanics or molecular dynamics. One of the main reasons for choosing this method is that it is a symplectic integrator.

The stepping is done with the formula

\begin{equation*} \mathbf{x}_{n+1} = 2\mathbf{x}_n - \mathbf{x}_{n-1} + A(\mathbf{x}_n)\,\Delta t^2\, , \end{equation*}

where \(A(\mathbf{x}_n)\) is the acceleration at that time step.
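
Since the update needs two previous positions, the first step is started with a Taylor expansion, which is what the codes below do before entering the loop:

\begin{equation*} \mathbf{x}_1 = \mathbf{x}_0 + \mathbf{v}_0\,\Delta t + \tfrac{1}{2} A(\mathbf{x}_0)\,\Delta t^2 \end{equation*}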

Following are the codes.

Python

from __future__ import division, print_function
import numpy as np
import matplotlib.pyplot as plt


def verlet(force, x0, v0, m, t, args=()):
    ndof = len(x0)
    ntimes = len(t)
    x = np.zeros((ndof, ntimes))
    dt = t[1] - t[0]
    x[:, 0] = x0
    F = force(x0, v0, m, t, *args)
    x[::2, 1] = x0[::2] + v0[::2]*dt + 0.5*F[::2]*dt**2/m
    x[1::2, 1] = x0[1::2] + v0[1::2]*dt + 0.5*F[1::2]*dt**2/m
    for cont in range(2, ntimes):
        dt = t[cont] - t[cont - 1]
        v = (x[:, cont - 1] - x[:, cont - 2])/dt
        acel = force(x[:, cont - 1], v, m, t, *args)
        acel[::2] = acel[::2]/m
        acel[1::2] = acel[1::2]/m
        x[:, cont] = 2*x[:, cont - 1] - x[:, cont - 2] + acel*dt**2
    return x


spring_force = lambda x, v, m, t, k: -k*x
x0 = np.array([1.0, 0.0])
v0 = np.array([0.0, 1.0])
m = np.array([1.0])
t = np.linspace(0, 10.0, 1000)
k = 1.0
x = verlet(spring_force, x0, v0, m, t, args=(k,))

#%% Plot
plt.figure(figsize=(6, 3))
plt.subplot(121)
plt.plot(t, x[0, :])
plt.plot(t, x[1, :])
plt.subplot(122)
plt.plot(x[0, :], x[1, :])
plt.show()

Julia

using PyPlot


function verlet(force, x0, v0, m, t; args=())
    ndof = length(x0)
    ntimes = length(t)
    x = zeros(ndof, ntimes)
    dt = t[2] - t[1]
    x[:, 1] = x0
    F = force(x0, v0, m, t, args...)
    x[1:2:end, 2] = x0[1:2:end] + v0[1:2:end]*dt + 0.5*F[1:2:end]*dt^2 ./ m
    x[2:2:end, 2] = x0[2:2:end] + v0[2:2:end]*dt + 0.5*F[2:2:end]*dt^2 ./ m
    for cont = 3:ntimes
        dt = t[cont] - t[cont - 1]
        v = (x[:, cont - 1] - x[:, cont - 2])/dt
        acel = force(x[:, cont - 1], v, m, t, args...)
        acel[1:2:end] = acel[1:2:end]./m
        acel[2:2:end] = acel[2:2:end]./m
        x[:, cont] = 2*x[:, cont - 1] - x[:, cont - 2] + acel*dt^2
    end
    return x
end


spring_force(x, v, m, t, k) = -k*x
x0 = [1.0, 0.0]
v0 = [0.0, 1.0]
m = [1.0]
t = linspace(0, 10.0, 1000)
k = 1.0
x = verlet(spring_force, x0, v0, m, t, args=(k,))

#%% Plot
figure(figsize=(6, 3))
subplot(121)
plot(t, x[1, :])
plot(t, x[2, :])
subplot(122)
plot(x[1, :], x[2, :])
show()

In both cases the result is the following plot

Verlet integration for a mass with two orthogonal springs.

Comparison Python/Julia

Regarding the number of lines, we have: 40 in Python and 40 in Julia. The comparison in execution time is done with the %timeit magic command in IPython and @benchmark in Julia.

For Python:

%timeit -n 100 verlet(spring_force, x0, v0, m, t, args=(k,))

with result

100 loops, best of 3: 26.5 ms per loop

For Julia:

@benchmark verlet(spring_force, x0, v0, m, t, args=(k,))

with result

BenchmarkTools.Trial:
  memory estimate:  4.36 MiB
  allocs estimate:  101839
  --------------
  minimum time:     73.159 ms (0.00% GC)
  median time:      74.883 ms (0.00% GC)
  mean time:        75.464 ms (1.02% GC)
  maximum time:     80.017 ms (4.87% GC)
  --------------
  samples:          67
  evals/sample:     1

In this case, we can say that the Python code is roughly 3 times faster than Julia.

Numerical methods challenge: Day 18

During October (2017) I will write a program per day for some well-known numerical methods in both Python and Julia. It is intended to be an exercise, so don't expect the code to be good enough for real use. Also, I should mention that I have almost no experience with Julia, so it probably won't be idiomatic Julia but more Python-like Julia.

The Runge-Kutta method

Today we have the Runge-Kutta method. This is the most popular Runge-Kutta method out there, and it uses a weighted average of four (smaller) increments.

The increments are done with the formula

\begin{equation*} y_{n+1} = y_n + \tfrac{h}{6}\left(k_1 + 2k_2 + 2k_3 + k_4 \right) \end{equation*}

where

\begin{align*} k_1 &= f(t_n, y_n), \\ k_2 &= f\left(t_n + \frac{h}{2}, y_n + \frac{h}{2}k_1\right), \\ k_3 &= f\left(t_n + \frac{h}{2}, y_n + \frac{h}{2}k_2\right), \\ k_4 &= f\left(t_n + h, y_n + h k_3\right). \end{align*}

Following are the codes.

Python

from __future__ import division, print_function
import numpy as np
import matplotlib.pyplot as plt


def RK4(dydt, y0, t, args=()):
    ndof = len(y0)
    ntimes = len(t)
    y = np.zeros((ndof, ntimes))
    y[:, 0] = y0
    for cont in range(1, ntimes):
        h = t[cont] - t[cont - 1]
        k1 = dydt(y[:, cont - 1], t[cont - 1], *args)
        k2 = dydt(y[:, cont - 1] + 0.5*h*k1, t[cont - 1] + 0.5*h, *args)
        k3 = dydt(y[:, cont - 1] + 0.5*h*k2, t[cont - 1] + 0.5*h, *args)
        k4 = dydt(y[:, cont - 1] + h*k3, t[cont - 1] + h, *args)
        y[:, cont] = y[:, cont - 1] + h/6*(k1 + 2*k2 + 2*k3 + k4)
    return y


def pend(y, t, b, c):
    theta, omega = y
    dydt = [omega, -b*omega - c*np.sin(theta)]
    return np.array(dydt)


b = 0.25
c = 5.0
y0 = [np.pi - 0.1, 0.0]
t = np.linspace(0, 10, 101)
y = RK4(pend, y0, t, args=(b, c))
plt.plot(t, y[0, :])
plt.plot(t, y[1, :])
plt.xlabel(r"$t$")
plt.legend([r"$\theta(t)$", r"$\omega(t)$"])
plt.show()

Julia

using PyPlot


function RK4(dydt, y0, t; args=())
    ndof = length(y0)
    ntimes = length(t)
    y = zeros(ndof, ntimes)
    y[:, 1] = y0
    for cont = 2:ntimes
        h = t[cont] - t[cont - 1]
        k1 = dydt(y[:, cont - 1], t[cont - 1], args...)
        k2 = dydt(y[:, cont - 1] + 0.5*h*k1, t[cont - 1] + 0.5*h, args...)
        k3 = dydt(y[:, cont - 1] + 0.5*h*k2, t[cont - 1] + 0.5*h, args...)
        k4 = dydt(y[:, cont - 1] + h*k3, t[cont - 1] + h, args...)
        y[:, cont] = y[:, cont - 1] + h/6*(k1 + 2*k2 + 2*k3 + k4)
    end
    return y
end


function pend(y, t, b, c)
    theta, omega = y
    dydt = [omega, -b*omega - c*sin(theta)]
    return dydt
end


b = 0.25
c = 5.0
y0 = [pi - 0.1, 0.0]
t = linspace(0, 10, 101)
y = RK4(pend, y0, t, args=(b, c))
plot(t, y[1, :])
plot(t, y[2, :])
xlabel(L"$t$")
legend([L"$\theta(t)$", L"$\omega(t)$"])
show()

In both cases the result is the following plot

Solution for a pendulum using Runge-Kutta method.

Comparison Euler/Runge-Kutta

If we compare the Euler and Runge-Kutta methods for the previous example using 101 timesteps, 10 times fewer than before, we obtain the results below; a minimal sketch of this comparison follows. The upper plot is the one obtained with the Euler method. We can see that the results are not the same, and we can (loosely) say that the Runge-Kutta method needs fewer steps than the Euler method.
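
A minimal sketch of that comparison, assuming the euler function from Day 17 and the RK4 function above are both defined in the session:

t = np.linspace(0, 10, 101)
y_euler = euler(pend, y0, t, args=(b, c))
y_rk4 = RK4(pend, y0, t, args=(b, c))
plt.figure(figsize=(6, 4))
plt.subplot(211)
plt.plot(t, y_euler[0, :])  # theta(t) with the Euler method
plt.subplot(212)
plt.plot(t, y_rk4[0, :])  # theta(t) with the Runge-Kutta method
plt.show()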

Comparison: Euler method.
Comparison: Runge-Kutta method.

Comparison Python/Julia

Regarding the number of lines, we have: 36 in Python and 37 in Julia. The comparison in execution time is done with the %timeit magic command in IPython and @benchmark in Julia.

For Python:

%timeit RK4(pend, y0, t, args=(b, c))

with result

100 loops, best of 3: 7.62 ms per loop

For Julia:

@benchmark RK4(pend, y0, t, args=(b, c))

with result

BenchmarkTools.Trial:
  memory estimate:  255.09 KiB
  allocs estimate:  5205
  --------------
  minimum time:     152.881 μs (0.00% GC)
  median time:      159.939 μs (0.00% GC)
  mean time:        202.514 μs (16.55% GC)
  maximum time:     3.785 ms (91.79% GC)
  --------------
  samples:          10000
  evals/sample:     1

In this case, we can say that the Python code is roughly 50 times slower than Julia.

Numerical methods challenge: Day 17

During October (2017) I will write a program per day for some well-known numerical methods in both Python and Julia. It is intended to be an exercise, so don't expect the code to be good enough for real use. Also, I should mention that I have almost no experience with Julia, so it probably won't be idiomatic Julia but more Python-like Julia.

Euler method

Today we have the Euler method, which is the simplest of the Runge-Kutta methods. It is named after Leonhard Euler, who used it in the 18th century.

The method consists of updating the function using the slope value, with the formula

\begin{equation*} y_{n + 1} = y_n + hf(t_n, y_n) \end{equation*}
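
As a quick sanity check (not part of the codes below), take \(y' = -y\) with \(y(0) = 1\) and \(h = 0.1\). One step gives \(y_1 = y_0 + h f(t_0, y_0) = 1 + 0.1 \times (-1) = 0.9\), close to the exact value \(e^{-0.1} \approx 0.905\).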

Following are the codes.

Python

from __future__ import division, print_function
import numpy as np
import matplotlib.pyplot as plt


def euler(dydt, y0, t, args=()):
    ndof = len(y0)
    ntimes = len(t)
    y = np.zeros((ndof, ntimes))
    y[:, 0] = y0
    for cont in range(1, ntimes):
        h = t[cont] - t[cont - 1]
        y[:, cont] = y[:, cont - 1] + h*dydt(y[:, cont - 1], t[cont - 1], *args)
    return y


def pend(y, t, b, c):
    theta, omega = y
    dydt = [omega, -b*omega - c*np.sin(theta)]
    return np.array(dydt)


b = 0.25
c = 5.0
y0 = [np.pi - 0.1, 0.0]
t = np.linspace(0, 10, 10001)
y = euler(pend, y0, t, args=(b, c))
plt.plot(t, y[0, :])
plt.plot(t, y[1, :])
plt.xlabel(r"$t$")
plt.legend([r"$\theta(t)$", r"$\omega(t)$"])
plt.show()

Julia

using PyPlot


function euler(dydt, y0, t; args=())
    ndof = length(y0)
    ntimes = length(t)
    y = zeros(ndof, ntimes)
    y[:, 1] = y0
    for cont = 2:ntimes
        h = t[cont] - t[cont - 1]
        y[:, cont] = y[:, cont - 1] + h*dydt(y[:, cont - 1], t[cont - 1], args...)
    end
    return y
end


function pend(y, t, b, c)
    theta, omega = y
    dydt = [omega, -b*omega - c*sin(theta)]
    return dydt
end


b = 0.25
c = 5.0
y0 = [pi - 0.1, 0.0]
t = linspace(0, 10, 1001)
y = euler(pend, y0, t, args=(b, c))
plot(t, y[1, :])
plot(t, y[2, :])
xlabel(L"$t$")
legend([L"$\theta(t)$", L"$\omega(t)$"])
show()

In both cases the result is the following plot

Solution for a pendulum using Euler method.

Comparison Python/Julia

Regarding the number of lines, we have: 32 in Python and 33 in Julia. The comparison in execution time is done with the %timeit magic command in IPython and @benchmark in Julia.

For Python:

%timeit euler(pend, y0, t, args=(b, c))

with result

100 loops, best of 3: 18.5 ms per loop

For Julia:

@benchmark euler(pend, y0, t, args=(b, c))

with result

BenchmarkTools.Trial:
  memory estimate:  648.33 KiB
  allocs estimate:  15473
  --------------
  minimum time:     366.236 μs (0.00% GC)
  median time:      399.615 μs (0.00% GC)
  mean time:        486.364 μs (16.96% GC)
  maximum time:     4.613 ms (80.26% GC)
  --------------
  samples:          10000
  evals/sample:     1

In this case, we can say that the Python code is roughly 40 times slower than Julia.

Numerical methods challenge: Day 16

During October (2017) I will write a program per day for some well-known numerical methods in both Python and Julia. It is intended to be an exercise, so don't expect the code to be good enough for real use. Also, I should mention that I have almost no experience with Julia, so it probably won't be idiomatic Julia but more Python-like Julia.

Clenshaw-Curtis quadrature

Today we have the Clenshaw-Curtis quadrature. This method is based on the expansion of the integrand in Chebyshev polynomials.
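
In a nutshell: the integrand is sampled at the Chebyshev points \(x_j = \cos(\pi j/N)\), the even Chebyshev coefficients \(a_k\) are computed with the cosine sums that appear in the codes below, and each polynomial is integrated exactly using \(\int_{-1}^{1} T_k(x)\,\mathrm{d}x = 2/(1 - k^2)\) for even \(k\), so that

\begin{equation*} \int_{-1}^{1} f(x)\,\mathrm{d}x \approx \sum_{k\ \mathrm{even}} \frac{2 a_k}{1 - k^2}\, , \end{equation*}

where the \(k = 0\) and \(k = N\) terms are halved; that is why those two cases are treated separately in the codes.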

We will test the quadrature with the integral

\begin{equation*} \int_0^3 e^{-x^2} \mathrm{d}x \approx 0.8862073482595214 \end{equation*}

Following are the codes.

Python

from __future__ import division, print_function
from numpy import linspace, cos, pi, exp, sum


def clenshaw_curtis(fun, N=10, a=-1, b=1):
    nodes = linspace(-1, 1, N + 1)*pi
    jac = 0.5*(b - a)
    tfun = lambda x: fun(a + jac*(x + 1))
    inte = 0
    for k in range(0, N + 1, 2):
        coef = 1/N*(tfun(1) + tfun(-1)*(-1)**k +\
            2*sum(tfun(cos(nodes[1:-1]))*cos(k*nodes[1:-1])))
        if k == 0:
            inte += coef
        elif k == N:
            inte += coef/(1 - N**2)
        else:
            inte += 2*coef/(1 - k**2)
    return inte*jac

N = 100
fun = lambda x: exp(-x**2)
print(clenshaw_curtis(fun, N=N, a=0, b=3))

with result

0.885906202792

Julia

function clenshaw_curtis(fun; N=50, a=-1, b=1)
    nodes = linspace(-1, 1, N + 1)*pi
    jac = 0.5*(b - a)
    tfun(x) = fun(a + jac*(x + 1))
    inte = 0
    for k = 0:2:N
        coef = 1/N*(tfun(1) + tfun(-1)*(-1)^k +
            2*sum(tfun(cos.(nodes[2:end-1])).*cos.(k*nodes[2:end-1])))
        if k == 0
            inte += coef
        elseif k == N
            inte += coef/(1 - N^2)
        else
            inte += 2*coef/(1 - k^2)
        end
    end
    return inte*jac
end

N = 100
fun(x) = exp.(-x.^2)
print(clenshaw_curtis(fun, N=N, a=0, b=3))

with result

0.8859062027920102

Comparison Python/Julia

Regarding the number of lines, we have: 24 in Python and 23 in Julia. The comparison in execution time is done with the %timeit magic command in IPython and @benchmark in Julia.

For Python:

%timeit -n 10000 clenshaw_curtis(fun, N=N, a=0, b=3)

with result

10000 loops, best of 3: 2.4 ms per loop

For Julia:

@benchmark clenshaw_curtis(fun, N=N, a=0, b=3)

with result

BenchmarkTools.Trial:
  memory estimate:  359.56 KiB
  allocs estimate:  565
  --------------
  minimum time:     381.676 μs (0.00% GC)
  median time:      388.497 μs (0.00% GC)
  mean time:        413.471 μs (1.77% GC)
  maximum time:     1.298 ms (49.07% GC)
  --------------
  samples:          10000
  evals/sample:     1

In this case, we can say that the Python code is roughly 6 times slower than Julia.

Numerical methods challenge: Day 15

During October (2017) I will write a program per day for some well-known numerical methods in both Python and Julia. It is intended to be an exercise, so don't expect the code to be good enough for real use. Also, I should mention that I have almost no experience with Julia, so it probably won't be idiomatic Julia but more Python-like Julia.

Simpson's rule

Today we have Simpson's rule. This is another of the Newton-Cotes formulas, which are used for numerical integration. The Newton-Cotes formulas are related to Lagrange interpolation with equidistant points.
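
For an even number of intervals \(n\) with uniform spacing \(h\), the composite rule reads

\begin{equation*} \int_{x_0}^{x_n} f(x)\,\mathrm{d}x \approx \frac{h}{3}\left[f(x_0) + 4 f(x_1) + 2 f(x_2) + 4 f(x_3) + \cdots + 4 f(x_{n-1}) + f(x_n)\right] \end{equation*}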

Following are the codes.

Python

from __future__ import division, print_function
import numpy as np


def simps(y, x=None, dx=1.0):
    n = len(y)
    if x is None:
        inte = np.sum(y[0:n-2:2] + 4*y[1:n-1:2] + y[2:n:2])*dx
    else:
        dx = x[1::2] - x[:-1:2]
        inte = np.dot(y[0:n-2:2] + 4*y[1:n-1:2] + y[2:n:2], dx)
    return inte/3


n = 21
x = np.linspace(0, 10, n)
y = np.sin(x)
print(simps(y, x=x))
print(1 - np.cos(10))

with result

1.8397296125
1.83907152908

Julia

function simps(y; x=nothing, dx=1.0)
    n = length(y)
    if x == nothing
        inte = sum(y[1:2:n-2] + 4*y[2:2:n-1] + y[3:2:n])*dx
    else
        dx = x[2:2:end] - x[1:2:end-1]
        inte = (y[1:2:n-2] + 4*y[2:2:n-1] + y[3:2:n])' * dx
    end
    return inte/3
end


n = 21
x = linspace(0, 10, n)
y = sin.(x)
println(simps(y, x=x))
println(1 - cos(10))

with result

1.8397296124951257
1.8390715290764525

Comparison Python/Julia

Regarding the number of lines, we have: 19 in Python and 17 in Julia. The comparison in execution time is done with the %timeit magic command in IPython and @benchmark in Julia.

For Python:

%timeit simps(y, x=x)

with result

100000 loops, best of 3: 13.8 µs per loop

For Julia:

@benchmark simps(y, x=x)

with result

BenchmarkTools.Trial:
  memory estimate:  1.23 KiB
  allocs estimate:  14
  --------------
  minimum time:     1.117 μs (0.00% GC)
  median time:      1.200 μs (0.00% GC)
  mean time:        1.404 μs (7.04% GC)
  maximum time:     222.286 μs (96.45% GC)
  --------------
  samples:          10000
  evals/sample:     10

In this case, we can say that the Python code is 10 times slower than Julia.

Numerical methods challenge: Day 14

During October (2017) I will write a program per day for some well-known numerical methods in both Python and Julia. It is intended to be an exercise, so don't expect the code to be good enough for real use. Also, I should mention that I have almost no experience with Julia, so it probably won't be idiomatic Julia but more Python-like Julia.

Trapezoidal rule

Today we have the trapezoidal rule. This is the simplest of the Newton-Cotes formulas for numerical integration. The Newton-Cotes formulas are related to Lagrange interpolation with equidistant points.
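
For possibly non-uniform spacing, the composite rule used in the codes below is

\begin{equation*} \int_{x_0}^{x_n} f(x)\,\mathrm{d}x \approx \sum_{i=0}^{n-1} \frac{x_{i+1} - x_i}{2}\left[f(x_i) + f(x_{i+1})\right] \end{equation*}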

Following are the codes.

Python

from __future__ import division, print_function
import numpy as np


def trapz(y, x=None, dx=1.0):
    if x is None:
        inte = 0.5*dx*(2*np.sum(y[1:-1]) + y[0] + y[-1])
    else:
        dx = x[1:] - x[:-1]
        inte = 0.5*np.dot(y[:-1] + y[1:], dx)
    return inte


dx = 0.001
x = np.arange(0, 10, dx)
y = np.sin(x)
print(trapz(y, dx=dx))
print(1 - np.cos(10))

with result

1.83961497726
1.83907152908

Julia

function trapz(y; x=nothing, dx=1.0)
    if x == nothing
        inte = 0.5*dx*(2*sum(y[2:end-1]) + y[1] + y[end])
    else
        dx = x[2:end] - x[1:end-1]
        inte = 0.5* (y[1:end-1] + y[2:end])' * dx
    end
    return inte
end


dx = 0.001
x = 0:dx:10
y = sin.(x)
println(trapz(y, dx=dx))
println(1 - cos(10))

with results

1.8390713758204895
1.8390715290764525

Note that the two results differ because np.arange(0, 10, dx) excludes the endpoint 10, while the Julia range 0:dx:10 includes it; the Python code is actually integrating up to 9.999.

Comparison Python/Julia

Regarding the number of lines, we have: 18 in Python and 16 in Julia. The comparison in execution time is done with the %timeit magic command in IPython and @benchmark in Julia.

For Python:

%timeit trapz(y, dx=dx)

with result

100000 loops, best of 3: 16.9 µs per loop

For Julia:

@benchmark trapz(y, dx=dx)

with result

BenchmarkTools.Trial:
  memory estimate:  78.31 KiB
  allocs estimate:  4
  --------------
  minimum time:     13.080 μs (0.00% GC)
  median time:      16.333 μs (0.00% GC)
  mean time:        20.099 μs (12.66% GC)
  maximum time:     963.732 μs (90.60% GC)
  --------------
  samples:          10000
  evals/sample:     1

In this case, we can say that the Python code is roughly as fast as Julia.

Numerical methods challenge: Day 13

During October (2017) I will write a program per day for some well-known numerical methods in both Python and Julia. It is intended to be an exercise, so don't expect the code to be good enough for real use. Also, I should mention that I have almost no experience with Julia, so it probably won't be idiomatic Julia but more Python-like Julia.

Cubic splines

Today we have cubic spline interpolation. Cubic splines are commonly used because they provide an approximation of a function with continuity up to the second derivative. For more details you can check this algorithm. Our main difference is that we form the matrix and then solve the system, although it is more common to solve the system directly, since it is tridiagonal; see the banded-solver sketch after the Python code below.

Following are the codes.

Python

from __future__ import division, print_function
import numpy as np
import matplotlib.pyplot as plt


def spline_coeff(x, y):
    h = x[1:] - x[:-1]
    d_up = h.copy()
    d_up[0] = 0
    d_down = h.copy()
    d_down[-1] = 0
    d_cent = np.ones_like(x)
    d_cent[1:-1] = 2*(h[:-1] + h[1:])
    mat = np.diag(d_cent) + np.diag(d_up, 1) + np.diag(d_down, -1)
    alpha = np.zeros_like(x)
    alpha[1:-1] = 3/h[1:]*(y[2:] - y[1:-1]) - 3/h[:-1]*(y[1:-1] - y[:-2])
    c = np.linalg.solve(mat, alpha)
    b = np.zeros_like(x)
    d = np.zeros_like(x)
    b[:-1] = (y[1:] - y[:-1])/h - h/3*(c[1:] + 2*c[:-1])
    d[:-1] = (c[1:] - c[:-1])/(3*h)
    return y, b, c, d


n = 20
x = np.linspace(0, 2*np.pi, n)
y = np.sin(x)
a, b, c, d = spline_coeff(x, y)
for cont in range(n - 1):
    x_loc = np.linspace(x[cont], x[cont + 1], 100)
    y_loc = a[cont] + b[cont]*(x_loc - x[cont]) +\
            c[cont]*(x_loc - x[cont])**2 +\
            d[cont]*(x_loc - x[cont])**3
    plt.plot(x_loc, y_loc, "red")
plt.plot(x, y, "o")
plt.show()
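
Since the matrix is tridiagonal, the dense solve can be replaced by a banded one. A minimal sketch, assuming we reuse the d_up, d_cent, d_down, and alpha arrays built inside spline_coeff:

from scipy.linalg import solve_banded

# pack the three diagonals into the banded storage expected by solve_banded
ab = np.zeros((3, len(d_cent)))
ab[0, 1:] = d_up       # superdiagonal
ab[1, :] = d_cent      # main diagonal
ab[2, :-1] = d_down    # subdiagonal
c = solve_banded((1, 1), ab, alpha)  # one band below, one above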

Julia

using PyPlot


function spline_coeff(x, y)
    h = x[2:end] - x[1:end-1]
    d_up = copy(h)
    d_up[1] = 0.0
    d_down = copy(h)
    d_down[end] = 0.0
    d_cent = ones(x)
    d_cent[2:end-1] = 2*(h[1:end-1] + h[2:end])
    mat = diagm(d_cent) + diagm(d_up, 1) + diagm(d_down, -1)
    alpha = zeros(x)
    alpha[2:end-1] = 3 ./ h[2:end] .* (y[3:end] - y[2:end-1]) - 3 ./ h[1:end-1] .* (y[2:end-1] - y[1:end-2])
    c = mat \ alpha
    b = zeros(x)
    d = zeros(x)
    b[1:end-1] = (y[2:end] - y[1:end-1])./h - h/3 .* (c[2:end] + 2*c[1:end-1])
    d[1:end-1] = (c[2:end] - c[1:end-1])./(3*h)
    return y, b, c, d
end


n = 21
x = collect(linspace(0, 2*pi, n))
y = sin.(x)
a, b, c, d = spline_coeff(x, y)
for cont = 1:n - 1
    x_loc = linspace(x[cont], x[cont + 1], 100)
    y_loc = a[cont] + b[cont]*(x_loc - x[cont]) +
            c[cont]*(x_loc - x[cont]).^2 +
            d[cont]*(x_loc - x[cont]).^3
    plot(x_loc, y_loc, "red")
end
plot(x, y, "o")
show()

In both cases the result is the plot below.

Spline interpolation.

Comparison Python/Julia

Regarding the number of lines, we have: 36 in Python and 37 in Julia. The comparison in execution time is done with the %timeit magic command in IPython and @benchmark in Julia.

For Python:

%timeit a, b, c, d = spline_coeff(x, y)

with result

1000 loops, best of 3: 216 µs per loop

For Julia:

@benchmark a, b, c, d = spline_coeff(x, y)

with result

BenchmarkTools.Trial:
  memory estimate:  31.59 KiB
  allocs estimate:  52
  --------------
  minimum time:     18.024 μs (0.00% GC)
  median time:      26.401 μs (0.00% GC)
  mean time:        44.035 μs (3.94% GC)
  maximum time:     9.833 ms (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     1

In this case, we can say that the Python code is roughly 10 times slower than Julia.