Reinsurance Deals
In this assignment, we study the Uncertain Mortality Model for the pricing of reinsurance deals. We only consider one type of default: the risk of death.
The considered reinsurance product is the following: if the subscriber is still alive at maturity $T$, they receive $u^\textrm{mat}(X_T)$; if they die at some time $\tau^D < T$, the contract pays $u^D(\tau^D, X_{\tau^D})$, where $X$ denotes the underlying's stock price.
The insurer sells a large number of these contracts to subscribers. We assume that the subscribers' times of death $\tau^D$ are independent and identically distributed, and independent of the underlying's stock price.
We assume that the underlying's risk-neutral price dynamics follow the Black-Scholes model with zero interest rate, repo, and dividend yield.
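Explicitly, under the risk-neutral measure $\mathbb{Q}$ the underlying is then a driftless geometric Brownian motion,
$$ dX_t = \sigma X_t \, dW_t, \qquad X_t = X_0 \exp\left(\sigma W_t - \tfrac{1}{2} \sigma^2 t\right), $$
where $W$ is a $\mathbb{Q}$-Brownian motion and $\sigma$ is the volatility.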
The Insurers' Approach and Risk-Neutral Pricing
This contract carries two types of risk: the time of death of the subscribers and the changes in the price of the underlying.
In this case, the issuer can apply the insurer's approach to the risk of death, i.e. rely on the law of large numbers: the more subscribers buy the contract, the smaller the residual risk of death per contract.
Choosing a risk-neutral measure under which the death times $\tau^D$ have the same distribution as under the historical probability measure amounts to applying the arbitrage-pricing approach to the financial risk and the insurer's rule to the risk of death. The price of the contract is then
$$ u(t, x) = E^\mathbb{Q}_{t, x} \left[u^\textrm{mat}(X_T) \mathbb{1}_{\tau^D \geq T} + u^D(\tau^D, X_{\tau^D}) \mathbb{1}_{\tau^D < T} \right]. $$
Deterministic Death Rate
If the death intensity is a deterministic function $\lambda_t^D$ (i.e. $\tau^D$ has an exponential distribution with time-dependent intensity $\lambda_t^D$), then we have seen that $u$ is the solution to the linear PDE
$$ \left\{\begin{array}{l} \partial_t u + \frac{1}{2} \sigma^2 x^2 \partial_x^2 u + \lambda_t^D \cdot (u^D - u) = 0,\\ u(T, \cdot) \equiv u^\textrm{mat}. \end{array}\right. $$
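As a sanity check, the price in the deterministic-rate case can also be estimated directly from the expectation above by Monte Carlo. The following is a minimal, self-contained sketch; the constant-intensity assumption, the function name mc_price_constant_rate, and the default sample sizes are illustrative choices, not part of the assignment.
import numpy as np

def mc_price_constant_rate(u_mat, u_d, lam, S0, sigma, T,
                           nb_step=100, nb_mc=100000, seed=0):
    """Monte Carlo estimate of E[u_mat(X_T) 1_{tau >= T} + u_d(tau, X_tau) 1_{tau < T}]
    for a constant death intensity lam (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    dt = T / nb_step
    times = np.linspace(0.0, T, nb_step + 1)
    # Driftless geometric Brownian motion (zero rates), one column per path.
    dw = rng.standard_normal((nb_step, nb_mc)) * np.sqrt(dt)
    w = np.vstack([np.zeros(nb_mc), np.cumsum(dw, axis=0)])
    paths = S0 * np.exp(sigma * w - 0.5 * sigma ** 2 * times[:, None])
    # Death times, exponential with rate lam, independent of the stock.
    tau = rng.exponential(1.0 / lam, nb_mc)
    # First grid index at or after the death time (capped at maturity).
    idx = np.minimum(np.searchsorted(times, tau), nb_step)
    x_at_death = paths[idx, np.arange(nb_mc)]
    payoff = np.where(tau >= T, u_mat(paths[-1]), u_d(tau, x_at_death))
    return payoff.mean()
With a constant rate $\underline{\lambda}_t = \overline{\lambda}_t = \lambda$, this estimate should agree (up to Monte Carlo and discretization error) with the schemes implemented below.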
Uncertain Mortality Model
We now assume that the death rate is uncertain: it is an adapted process that remains in the deterministic interval $\left[\underline{\lambda}^D_t, \overline{\lambda}^D_t\right]$. The most conservative way to price the contract is to assume that the (financially) worst death rate process $\lambda_t^D$ is chosen, i.e. the one that maximizes the value of the contract. The resulting HJB equation is
$$ \left\{\begin{array}{l} \partial_t u + \frac{1}{2} \sigma^2 x^2 \partial^2_x u + \lambda^D(t, u^D - u) \cdot (u^D - u) = 0,\\ u(T, \cdot) \equiv u^\textrm{mat}, \end{array}\right. $$
where
$$ \lambda^D (t, y) = \left\{\begin{array}{l} \overline{\lambda}^D_t \quad \textrm{if} \ y \geq 0, \\ \underline{\lambda}^D_t \quad \textrm{otherwise.} \end{array}\right. $$
Link with $1$-BSDE and Numerical Schemes
From the Pardoux-Peng theorem, we know that the value $u(0, x)$ can be represented as $Y_0^x$, where $(Y^x, Z^x)$ solves the $1$-BSDE
$$ dY_t = -f(t, X_t, Y_t, Z_t) dt + Z_t dW_t, $$
with the terminal condition $Y_T = u^\textrm{mat} (X_T)$, where $X_0 = x$ and
$$ f(t, x, y, z) = \lambda^D(t, u^D(t, x) - y) \cdot (u^D(t, x) - y). $$
%matplotlib inline
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
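Before moving to the numerics, note that the driver $f$ above is straightforward to vectorize. Below is a minimal sketch, where u_d (a function of $(t, x)$), lambda_min and lambda_max are illustrative names for the death-payoff function and the constant intensity bounds.
# Sketch of the BSDE driver f(t, x, y) = lambda^D(t, u^D(t, x) - y) * (u^D(t, x) - y),
# with the worst-case (bang-bang) choice of the death intensity.
def driver_f(t, x, y, u_d, lambda_min, lambda_max):
    gap = u_d(t, x) - y
    lam = np.where(gap >= 0.0, lambda_max, lambda_min)
    return lam * gap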
Regression routines
# Non-parametric regression function
def reg_non_param(x, bdwidth, x_sample, y_sample):
    """Values of the non-parametric regression of Y wrt X using a Gaussian kernel.

    Parameters
    ----------
    x: numpy array, one dimensional
        Values at which the regression is evaluated
    bdwidth: positive float, value of the bandwidth parameter
    x_sample: numpy array, one dimensional, non-empty
        x values of the sample
    y_sample: numpy array, one dimensional
        y values of the sample, must have the same length as x_sample.
    """
    def kern(u, x):
        """Gaussian kernel function"""
        return np.exp(-(u[:, np.newaxis] - x) ** 2 / (2 * bdwidth ** 2))
    return np.sum(kern(x_sample, x) * y_sample[:, np.newaxis], axis=0) \
        / np.sum(kern(x_sample, x), axis=0)
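A quick visual check of the kernel regression on synthetic data; the sine example, the sample size, and the bandwidth value are arbitrary illustrative choices.
# Regress a noisy sine against x (synthetic data, illustrative parameters).
x_sample = np.random.uniform(0.0, 1.0, 2000)
y_sample = np.sin(2 * np.pi * x_sample) + 0.1 * np.random.randn(2000)
x_grid = np.linspace(0.0, 1.0, 50)
y_hat = reg_non_param(x_grid, 0.05, x_sample, y_sample)

plt.figure()
plt.plot(x_sample, y_sample, '.', alpha=0.1, label='sample')
plt.plot(x_grid, y_hat, 'r', label='kernel regression')
plt.plot(x_grid, np.sin(2 * np.pi * x_grid), 'k--', label='true function')
plt.legend()
plt.show()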
def basis(knots, x):
    """Values of order-1 B-spline basis functions.

    For an increasingly sorted collection of knots and a collection of
    query points x, returns a 2-dimensional array of values, of dimension
    len(x) x len(knots).

    Parameters
    ----------
    knots: numpy array, one dimensional, increasingly sorted
        Knots of the B-spline function
    x: numpy array, one dimensional
        Query points where to evaluate the basis functions.
    """
    nb_knots = len(knots)
    diag = np.identity(nb_knots)
    res = np.empty((len(x), nb_knots))
    for i in range(nb_knots):
        res[:, i] = np.interp(x, knots, diag[i])
    return res
def reg_param_coeffs(knots, x_sample, y_sample):
    """Computes the coefficients of the P-L regression of y_sample wrt. x_sample.

    For an increasingly sorted collection of knots and two one-dimensional
    samples x_sample and y_sample, computes the PL-regression.

    Parameters
    ----------
    knots: numpy array, one dimensional, increasingly sorted
        Knots of the B-spline function
    x_sample: numpy array, one dimensional
        X sample
    y_sample: numpy array, one dimensional
        Y sample

    Notes
    -----
    The length of the X and Y samples must be the same.
    The length of the knots array should be at least one.
    The knots must be increasingly sorted.
    """
    bis = basis(knots, x_sample)
    var = bis.T.dot(bis)
    covar = y_sample.dot(bis)
    return np.linalg.lstsq(var, covar.T)[0]
def eval_piecewise_linear(x, knots, coeffs):
    """Evaluates the piecewise linear function at the specified x for the
    specified knots and coeffs.

    This is simply a wrapper around np.interp.
    """
    return np.interp(x, knots, coeffs)
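The three routines combine as follows; again a small synthetic example, with an arbitrary knot grid.
# Piecewise-linear regression of a noisy sine (synthetic data, illustrative knots).
knots = np.linspace(0.0, 1.0, 11)
x_sample = np.random.uniform(0.0, 1.0, 2000)
y_sample = np.sin(2 * np.pi * x_sample) + 0.1 * np.random.randn(2000)
coeffs = reg_param_coeffs(knots, x_sample, y_sample)
x_grid = np.linspace(0.0, 1.0, 50)

plt.figure()
plt.plot(x_sample, y_sample, '.', alpha=0.1, label='sample')
plt.plot(x_grid, eval_piecewise_linear(x_grid, knots, coeffs), 'g', label='piecewise-linear regression')
plt.legend()
plt.show()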
Simulation routines for Black-Scholes paths
## Utility function for simulating Black-Scholes paths
def bs_path(nb_step, nb_mc, S0, sigma, T):
    """Simulate geometric Brownian motion with zero drift (zero rates).

    Parameters
    ----------
    nb_step: integer, greater than 1
        number of time steps
    nb_mc: positive integer
        sample size
    S0: positive float
        starting value
    sigma: float
        volatility
    T: positive float
        maturity
    """
    brownian = np.empty((nb_step + 1, nb_mc))
    brownian[0] = 0.0
    brownian[1:] = np.cumsum(np.random.randn(nb_step, nb_mc), axis=0) * np.sqrt(T / nb_step)
    return S0 * np.exp(sigma * brownian + (- 0.5 * sigma ** 2) * np.linspace(0.0, T, nb_step + 1)[:, np.newaxis])
plt.figure()
nb_step=1000; nb_mc=100; S0=100; sigma=0.3; T=1.0
ss = bs_path(nb_step, nb_mc, S0, sigma, T)
plt.plot(ss, 'b', alpha=0.2)
plt.show()
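Since the drift is zero, $E[S_T] = S_0$; a quick sanity check on the simulated paths, using the variables from the cell above.
# Zero drift: the terminal sample mean should be close to S0 (up to Monte Carlo error).
print('Mean of simulated S_T: {:.2f} +/- {:.2f} (S0 = {})'.format(
    ss[-1].mean(), ss[-1].std() / np.sqrt(nb_mc), S0))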
The explicit scheme sets $Y_{t_{i - 1}} = E^\mathbb{Q}_{i - 1}[Y_{t_i}] + f\left(t_{i - 1}, X_{t_{i - 1}}, E^\mathbb{Q}_{i - 1}[Y_{t_i}]\right) \Delta t_i$ (the driver does not depend on $z$). Using the actual formula for $\overline{\lambda}$ and $\underline{\lambda}$, this comes to
$$ \begin{aligned} Y_{t_{i - 1}} ={} & \left( E^\mathbb{Q}_{i - 1} [ Y_{t_i} ] \left( 1 - \overline{\lambda}^D \Delta t_i \right) + u^D(t_{i - 1}, X_{t_{i - 1}}) \overline{\lambda}^D \Delta t_i \right) \mathbb{1}_{u^D(t_{i - 1}, X_{t_{i - 1}}) \geq E_{i-1}[Y_{t_i}]}\\ & + \left( E^\mathbb{Q}_{i - 1} [ Y_{t_i} ] \left( 1 - \underline{\lambda}^D \Delta t_i \right) + u^D(t_{i - 1}, X_{t_{i - 1}}) \underline{\lambda}^D \Delta t_i \right) \mathbb{1}_{u^D(t_{i - 1}, X_{t_{i - 1}}) < E_{i-1}[Y_{t_i}]}. \end{aligned} $$
The implicit scheme, $Y_{t_{i - 1}} = E^\mathbb{Q}_{i - 1}[Y_{t_i}] + f\left(t_{i - 1}, X_{t_{i - 1}}, Y_{t_{i - 1}}\right) \Delta t_i$, involves $Y_{t_{i - 1}}$ on both sides of the equation, which generally requires numerical root-finding; but in this specific case, using the explicit formula for $\overline{\lambda}$ and $\underline{\lambda}$, it comes to
$$ \begin{aligned} Y_{t_{i - 1}} ={} & \frac{1}{1 + \overline{\lambda}^D \Delta t_i} \left( E^\mathbb{Q}_{i - 1} [ Y_{t_i} ] + u^D(t_{i - 1}, X_{t_{i - 1}}) \overline{\lambda}^D \Delta t_i \right) \mathbb{1}_{u^D(t_{i - 1}, X_{t_{i - 1}}) \geq E_{i-1}[Y_{t_i}]}\\ & + \frac{1}{1 + \underline{\lambda}^D \Delta t_i} \left( E^\mathbb{Q}_{i - 1} [ Y_{t_i} ] + u^D(t_{i - 1}, X_{t_{i - 1}}) \underline{\lambda}^D \Delta t_i \right) \mathbb{1}_{u^D(t_{i - 1}, X_{t_{i - 1}}) < E_{i-1}[Y_{t_i}]}. \end{aligned} $$
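For comparison with the implicit update implemented below, the explicit update above is a one-liner. A minimal sketch assuming constant intensity bounds; the argument names (ycond for $E^\mathbb{Q}_{i-1}[Y_{t_i}]$, death for $u^D(t_{i-1}, X_{t_{i-1}})$) are illustrative.
# One backward step of the *explicit* scheme with the bang-bang intensity choice.
def explicit_step(ycond, death, lambda_min, lambda_max, dt):
    lam = np.where(death >= ycond, lambda_max, lambda_min)
    return ycond * (1.0 - lam * dt) + death * lam * dt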
# Model
S0 = 100.0
sigma = 0.3
T = 10.0
# Payoff: Put
Kmat = 90.0
KD = 100.0
def umat(x):
    return np.maximum(Kmat - x, 0)
def uD(x):
    return np.maximum(KD - x, 0)
# Sample size and time discretization
nb_mc = 4000
nb_step = 10
lambdamin, lambdamax = 0.005, 0.01
# Native implicit BSDE discretization for uncertain mortality model
path = bs_path(nb_step, nb_mc, S0, sigma, T)
y = umat(path[-1])
deltat = T / nb_step
# Backward induction
for index in range(nb_step):
    i = nb_step - index - 1
    if i != 0:
        ycond = reg_non_param(path[i], 0.2 * np.std(path[i]), path[i], y)
    else:
        ycond = np.mean(y)
    death = uD(path[i])
    y = np.where(death > ycond,
                 (ycond + death * lambdamax * deltat) / (1.0 + lambdamax * deltat),
                 (ycond + death * lambdamin * deltat) / (1.0 + lambdamin * deltat))
print('Price from backward induction (non-parametric): ', np.mean(y))
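For comparison, the same backward induction can be run with the piecewise-linear regression routines defined earlier instead of the kernel regression. A sketch, where the 20-point knot grid spanning the simulated values is an arbitrary choice.
# Implicit BSDE discretization using the piecewise-linear regression
# (knot grid chosen ad hoc for illustration).
path = bs_path(nb_step, nb_mc, S0, sigma, T)
y = umat(path[-1])
deltat = T / nb_step
for index in range(nb_step):
    i = nb_step - index - 1
    if i != 0:
        knots = np.linspace(path[i].min(), path[i].max(), 20)
        coeffs = reg_param_coeffs(knots, path[i], y)
        ycond = eval_piecewise_linear(path[i], knots, coeffs)
    else:
        ycond = np.mean(y)
    death = uD(path[i])
    y = np.where(death > ycond,
                 (ycond + death * lambdamax * deltat) / (1.0 + lambdamax * deltat),
                 (ycond + death * lambdamin * deltat) / (1.0 + lambdamin * deltat))
print('Price from backward induction (piecewise-linear, sketch):', np.mean(y))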
Deterministic Mortality Rate Model
Uncertain Mortality Rate Model
Study the sensitivity of the price to $\overline{\lambda}$ (lambdamax) and $\underline{\lambda}$ (lambdamin). (How does the price move when the interval widens?)
1. The native implicit BSDE discretization for the uncertain mortality model:
path = bs_path(nb_step, nb_mc, S0, sigma, T)
y = umat(path[-1])
deltat = T / nb_step
# Backward induction
for index in range(nb_step):
    i = nb_step - index - 1
    if i != 0:
        ycond = reg_non_param(path[i], 0.2 * np.std(path[i]), path[i], y)
    else:
        ycond = np.mean(y)
    death = uD(path[i])
    y = np.where(death > ycond,
                 (ycond + death * lambdamax * deltat) / (1.0 + lambdamax * deltat),
                 (ycond + death * lambdamin * deltat) / (1.0 + lambdamin * deltat))
print('Price from backward induction (non-parametric): ', np.mean(y))
(lambdamin, lambdamax)    price
(0.005, 0.04)             $31.60
Including a fee in the product