DERIVATIVES: ASSIGNMENT 3, PART B
HARJOAT S. BHAMRA
Part B needs to be written up and submitted online. Part B is partly a computational exercise, so submit your code as well. There are Bonus Parts which you do not need to do, but doing them can boost your overall mark by compensating for errors and omissions elsewhere.
(1) Consider an investor who does not consume or work to earn any labour income. All she does is invest her financial wealth in $N+1$ assets. The first asset is risk-free with return $r_t\,dt$ over the interval $[t, t+dt)$. The remaining $N$ assets are risky and risky asset $n \in \{1,\dots,N\}$ has return $dR_{n,t}$, where
\[ dR_{n,t} = \mu_{n,t}\,dt + \sigma_{n,t}\,dZ_{n,t}. \]
$Z_n$ is a standard Brownian motion under the physical probability measure $P$, such that $E_t[dZ_{i,t}\,dZ_{j,t}] = \rho_{ij,t}\,dt$ for $i \neq j$. The investor invests the fraction $\varphi_{n,t}$ of her date-$t$ wealth in risky asset $n$.
(a) Explain why the return on the investor's portfolio over the interval $[t, t+dt)$ is given by
\[ dR_{p,t} = \Big(1 - \sum_{n=1}^{N}\varphi_{n,t}\Big) r_t\,dt + \sum_{n=1}^{N}\varphi_{n,t}\,dR_{n,t}. \]
(b) Hence, show that
\[ dR_{p,t} = r_t\,dt + \varphi_t^\top(\mu_t - r_t\mathbf{1})\,dt + \varphi_t^\top\Sigma_t\,dZ_t, \]
where
\[ \mu_t = (\mu_{1,t},\dots,\mu_{N,t})^\top, \quad \mathbf{1} = (1,\dots,1)^\top, \quad \Sigma_t = \mathrm{diag}(\sigma_{1,t},\dots,\sigma_{N,t}), \quad \varphi_t = (\varphi_{1,t},\dots,\varphi_{N,t})^\top, \quad dZ_t = (dZ_{1,t},\dots,dZ_{N,t})^\top. \]
(c) The investor has the following objective function
\[ E_t[dR_{p,t}] - \tfrac{1}{2}\gamma\,\mathrm{Var}_t[dR_{p,t}], \]
where $\gamma > 0$. Explain the ideas behind the design of this objective function.
(d) Show that the optimal portfolio is obtained by solving the following linear-quadratic optimization problem:
\[ \max_{\varphi_t}\ \varphi_t^\top(\mu_t - r_t\mathbf{1}) - \tfrac{1}{2}\gamma\,\varphi_t^\top V_t\,\varphi_t, \]
where $V_t$ is an $N$ by $N$ matrix such that
\[ [V_t]_{ij} = \begin{cases} \rho_{ij,t}\sigma_{i,t}\sigma_{j,t}, & i \neq j \\ \sigma_{i,t}^2, & i = j \end{cases} \]
and $\rho_{ij,t} = \rho_{ji,t}$.
(e) Show that the optimal portfolio vector is
\[ \varphi_t = \frac{1}{\gamma}V_t^{-1}(\mu_t - r_t\mathbf{1}). \]
(f) Use a computer algebra package such as Mathematica to calculate $V_t^{-1}$ when $N = 3$ for special cases of your choice. The obvious one to start with is $\rho_{12,t} = \rho_{13,t} = \rho_{23,t} = 0$.
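The question suggests Mathematica; as one possible alternative, here is a minimal sympy sketch of the same computation (the symbol names are illustrative, not prescribed by the assignment):

```python
import sympy as sp

s1, s2, s3 = sp.symbols("sigma_1 sigma_2 sigma_3", positive=True)
r12, r13, r23 = sp.symbols("rho_12 rho_13 rho_23")
V = sp.Matrix([[s1**2,     r12*s1*s2, r13*s1*s3],
               [r12*s1*s2, s2**2,     r23*s2*s3],
               [r13*s1*s3, r23*s2*s3, s3**2    ]])

# special case rho_12 = rho_13 = rho_23 = 0: V is diagonal
V0 = V.subs({r12: 0, r13: 0, r23: 0})
print(V0.inv())   # diag(1/sigma_1**2, 1/sigma_2**2, 1/sigma_3**2)

# the general symbolic inverse is bulkier but available:
# print(sp.simplify(V.inv()))
```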
(2) Suppose the risk-free rate $r$ is constant and the cum-dividend return on the SP500 over the interval $[t, t+dt)$ is given by
\[ dR_t = (r + \lambda v_t)\,dt + \sqrt{v_t}\,dZ_t, \]
where
\[ dv_t = \kappa(\theta - v_t)\,dt + \varepsilon\sqrt{v_t}\,dZ_{v,t}, \]
and $Z$ and $Z_v$ are standard Brownian motions under $P$ such that $E_t[dZ_t\,dZ_{v,t}] = -\rho\,dt$, $\rho > 0$.
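To make these dynamics concrete, here is a minimal Euler-Maruyama simulation sketch of the pair $(R, v)$; every parameter value below is a hypothetical placeholder, not a calibrated number.

```python
import numpy as np

# hypothetical placeholder parameters
r, lam = 0.02, 4.0
kappa, theta, eps, rho = 5.0, 0.04, 0.5, 0.7
T, steps = 1.0, 252
dt = T / steps
rng = np.random.default_rng(0)

R = np.zeros(steps + 1)                 # cumulative return
v = np.empty(steps + 1); v[0] = theta   # start variance at its long-run mean
for t in range(steps):
    z1, z2 = rng.standard_normal(2)
    dZ = np.sqrt(dt) * z1
    # E[dZ dZ_v] = -rho dt: build dZ_v from dZ and an independent shock
    dZv = np.sqrt(dt) * (-rho * z1 + np.sqrt(1.0 - rho**2) * z2)
    sv = np.sqrt(max(v[t], 0.0))
    R[t + 1] = R[t] + (r + lam * v[t]) * dt + sv * dZ
    v[t + 1] = max(v[t] + kappa * (theta - v[t]) * dt + eps * sv * dZv, 0.0)
```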
A household's optimal consumption-portfolio choice problem is given by the following optimal stochastic control problem
\[ J_t = \sup_{(C_s)_{s\ge t},\,(\varphi_s)_{s\ge t}} E_t\left[\int_t^\infty e^{-\delta(s-t)}u(C_s)\,ds\right] \]
s.t.
\[ dW_t = W_t\,dR_{p,t} - C_t\,dt, \]
where $\varphi_t$ is the fraction of date-$t$ wealth invested in the SP500,
\[ dR_{p,t} = (r + \varphi_t\lambda v_t)\,dt + \varphi_t\sqrt{v_t}\,dZ_t, \]
and
\[ u(x) = \frac{x^{1-\gamma}}{1-\gamma}. \]
The supremized objective function is known as the value function, which we denote via $J$. The date-$t$ value function depends on date-$t$ wealth and the date-$t$ instantaneous variance $v_t$ via
\[ J_t = H(v_t)^\gamma u(W_t), \]
where $H(v)$ satisfies the following ordinary differential equation (ode)
\[ 0 = 1 - \left[\frac{1}{\gamma}\delta + \left(1 - \frac{1}{\gamma}\right)\left(r + \frac{1}{2\gamma}v\left(\lambda^2 - \gamma^2\varepsilon^2(1-\rho^2)\frac{H'(v)^2}{H(v)^2}\right)\right)\right]H(v) + \kappa'(\theta' - v)H'(v) + \frac{1}{2}\varepsilon^2 v H''(v), \tag{1} \]
where
\[ \kappa' = \kappa - \frac{(\gamma-1)\lambda\rho\varepsilon}{\gamma}, \qquad \theta' = \frac{\kappa}{\kappa'}\theta. \]
The optimal portfolio and consumption expenditure choices are given by
\[ \varphi_t = \frac{1}{\gamma}\lambda - \rho\varepsilon\frac{H'(v_t)}{H(v_t)}, \qquad C_t = \frac{1}{H(v_t)}W_t. \]
(a) For the special case $\gamma = 1$, solve the ordinary differential equation. Ensure your solution remains finite as $v \to \infty$.
(b) Hence, find the household's optimal portfolio and consumption expenditure choices. Explain the economics underlying your solution.
(c) Use your solution to provide wealth and expenditure management advice for a client with current wealth of 1M GBP.
By defining
\[ k(v) = \frac{1}{\gamma}\delta + \left(1 - \frac{1}{\gamma}\right)\left(r + \frac{1}{2\gamma}v\left(\lambda^2 - \gamma^2\varepsilon^2(1-\rho^2)\frac{H'(v)^2}{H(v)^2}\right)\right), \]
\[ \mu'_v(v) = \kappa'(\theta' - v), \qquad \sigma_v^2(v) = \varepsilon^2 v, \]
we can write the ode (1) as
\[ 0 = 1 - k(v)H(v) + \mu'_v(v)H'(v) + \frac{1}{2}\sigma_v^2(v)H''(v). \tag{2} \]
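For the numerical work below it is convenient to have the model functions in code. A minimal sketch; every parameter value is a hypothetical placeholder, to be replaced by your own choices or calibration:

```python
# hypothetical placeholder parameters (replace with your calibration)
gamma, delta, r = 2.0, 0.03, 0.02
lam, kappa, theta, eps, rho = 4.0, 5.0, 0.04, 0.5, 0.7

kappa_p = kappa - (gamma - 1.0) * lam * rho * eps / gamma   # kappa'
theta_p = kappa * theta / kappa_p                           # theta'

def mu_v(v):       # mu'_v(v) = kappa'(theta' - v)
    return kappa_p * (theta_p - v)

def sigma_v2(v):   # sigma_v^2(v) = eps^2 * v
    return eps**2 * v
```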
To solve (2) we shall assume that $v$ can take $N+1$ equally spaced values $v_1,\dots,v_{N+1}$, which the coder needs to choose. Define $\Delta v = v_2 - v_1$. We have thereby discretized the state space of the stochastic process $v$.
Define the $N+1$ by $N+1$ matrix $S$ via
\[ S = \begin{pmatrix}
-p_{1,2} & p_{1,2} & 0 & \cdots & \cdots & 0 \\
p_{2,1} & -(p_{2,1}+p_{2,3}) & p_{2,3} & 0 & \cdots & 0 \\
0 & p_{3,2} & -(p_{3,2}+p_{3,4}) & p_{3,4} & 0 & \cdots \\
\vdots & & \ddots & \ddots & \ddots & \vdots \\
0 & \cdots & 0 & p_{N,N-1} & -(p_{N,N-1}+p_{N,N+1}) & p_{N,N+1} \\
0 & \cdots & \cdots & \cdots & p_{N+1,N} & -p_{N+1,N}
\end{pmatrix}, \]
where
\[ p_{n,n+1} = \frac{\mu'_v(v_n)}{\Delta v}I_{\{\mu'_v(v_n)>0\}} + \frac{1}{2}\frac{\sigma_v^2(v_n)}{(\Delta v)^2}, \qquad p_{n,n-1} = -\frac{\mu'_v(v_n)}{\Delta v}I_{\{\mu'_v(v_n)<0\}} + \frac{1}{2}\frac{\sigma_v^2(v_n)}{(\Delta v)^2}. \]
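A minimal numpy sketch of this construction; the grid and the drift/diffusion callables in the toy usage are illustrative (for instance `mu_v` and `sigma_v2` from the earlier sketch):

```python
import numpy as np

def build_S(v, mu_v, sigma_v2):
    """Tridiagonal generator S on the equally spaced grid v, with the
    upwind p_{n,n+1}, p_{n,n-1} defined above."""
    dv = v[1] - v[0]
    m, s2 = mu_v(v), sigma_v2(v)
    up = np.maximum(m, 0.0) / dv + 0.5 * s2 / dv**2    # p_{n,n+1}
    dn = np.maximum(-m, 0.0) / dv + 0.5 * s2 / dv**2   # p_{n,n-1}
    S = np.diag(up[:-1], 1) + np.diag(dn[1:], -1)
    return S - np.diag(S.sum(axis=1))                  # rows sum to zero

# toy usage with hypothetical kappa' = 5, theta' = 0.04, eps = 0.5
v_grid = np.linspace(1e-4, 1.0, 101)
S = build_S(v_grid, lambda v: 5.0 * (0.04 - v), lambda v: 0.25 * v)
assert ((S - np.diag(np.diag(S))) >= 0).all()   # valid generator: off-diagonals nonnegative
```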
Define the $N+1$ by $N+1$ matrix $K$ via
\[ K = \mathrm{diag}(k(v_1),\dots,k(v_{N+1})), \]
where
\[ k(v_n) = \frac{1}{\gamma}\delta + \left(1-\frac{1}{\gamma}\right)\left(r + \frac{1}{2\gamma}v_n\left(\lambda^2 - \gamma^2\varepsilon^2(1-\rho^2)\left(\frac{D H(v_n)}{H(v_n)}\right)^2\right)\right), \]
with the upwind difference
\[ D H(v_n) = \frac{H(v_{n+1})-H(v_n)}{\Delta v}I_{\{\mu'_v(v_n)>0\}} + \frac{H(v_n)-H(v_{n-1})}{\Delta v}I_{\{\mu'_v(v_n)<0\}}. \]
To pin down $k(v_1)$ and $k(v_{N+1})$ you could either assume (1) $H(v_{N+2}) = H(v_0) = 0$ or (2) $k(v_1) = k(v_2)$ and $k(v_{N+1}) = k(v_N)$.
Define the $N+1$ by $1$ vector of ones $\mathbf{1} = (1,\dots,1)^\top$. We can find $H = (H(v_1),\dots,H(v_{N+1}))^\top$ via the following recursive procedure
\[ H^{k+1} = (I - S\Delta t)^{-1}\left[\mathbf{1}\Delta t + (I - K\Delta t)H^k\right]. \tag{3} \]
The above equation starts with $H^k$ and then provides an updated version of $H$, labelled $H^{k+1}$. We start with an initial guess, labelled $H^1$, and then use the above equation to generate $H^2$, $H^3$, and so on until $\|H^{k+1} - H^k\| = \max_{n\in\{1,\dots,N+1\}}|H^{k+1}(v_n) - H^k(v_n)| < \mathrm{tol}$ for some tolerance $\mathrm{tol} > 0$ which the coder needs to choose.
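A minimal, self-contained sketch of the recursion (3), repeating the placeholder parameters from the earlier sketches so it runs standalone; the grid, $\Delta t$ and the tolerance are the coder's choice, and every numerical value here is an assumption:

```python
import numpy as np

# hypothetical placeholder parameters (every number is an assumption)
gamma, delta, r = 2.0, 0.03, 0.02
lam, kappa, theta, eps, rho = 4.0, 5.0, 0.04, 0.5, 0.7
kappa_p = kappa - (gamma - 1.0) * lam * rho * eps / gamma   # kappa'
theta_p = kappa * theta / kappa_p                           # theta'

# grid for v and the generator S defined above
N = 200
v = np.linspace(1e-4, 1.0, N + 1)
dv = v[1] - v[0]
mu = kappa_p * (theta_p - v)                          # mu'_v(v_n)
s2 = eps**2 * v                                       # sigma_v^2(v_n)
up = np.maximum(mu, 0.0) / dv + 0.5 * s2 / dv**2      # p_{n,n+1}
dn = np.maximum(-mu, 0.0) / dv + 0.5 * s2 / dv**2     # p_{n,n-1}
S = np.diag(up[:-1], 1) + np.diag(dn[1:], -1)
S -= np.diag(S.sum(axis=1))                           # rows sum to zero

def k_of(H):
    """k(v_n) with the upwind ratio DH(v_n)/H(v_n); boundary values
    pinned down by option (2): k(v_1) = k(v_2), k(v_{N+1}) = k(v_N)."""
    fwd = np.zeros_like(H); bwd = np.zeros_like(H)
    fwd[:-1] = (H[1:] - H[:-1]) / dv
    bwd[1:] = (H[1:] - H[:-1]) / dv
    ratio = np.where(mu > 0, fwd, bwd) / H
    k = delta / gamma + (1.0 - 1.0 / gamma) * (
        r + v / (2.0 * gamma)
        * (lam**2 - gamma**2 * eps**2 * (1.0 - rho**2) * ratio**2))
    k[0], k[-1] = k[1], k[-2]
    return k

dt, tol = 0.5, 1e-8            # play with dt (and N) until the scheme converges
ones = np.ones(N + 1)
A = np.linalg.inv(np.eye(N + 1) - S * dt)   # (I - S dt)^{-1}
H = ones / delta                            # initial guess H^1
for it in range(100_000):
    H_new = A @ (ones * dt + (1.0 - k_of(H) * dt) * H)
    if np.max(np.abs(H_new - H)) < tol:
        break
    H = H_new
print("stopped after", it + 1, "iterations; H(v_1) =", H[0])
```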
(d) Use the above iterative procedure to write code that finds the vector $H$.
(e) Check your code converges for the special case $\rho = \pm 1$. You will need to play with $\Delta t$ and $N$. Bonus Part: If you are familiar with the Method of Variation of Parameters, you can derive an integral expression for the solution of (2) for the special case $\rho = \pm 1$. Use this expression to check your code.
(f) Make sure your code converges for the general case $\rho^2 \neq 1$. You will need to play around some more with $\Delta t$ and $N$.
(g) Assume the stochastic discount factor process $\Lambda$ is given by
\[ \frac{d\Lambda_t}{\Lambda_t} = -r\,dt - \lambda\sqrt{v_t}\,dZ_t, \]
where $\lambda > 0$ is a constant. Derive an expression for $dv_t$ under the risk-neutral measure $Q$.
(h) Bonus Part: Find a way to calibrate your model. You can use two sets of data: the SP500 and dividends (you can find monthly data on Robert Shiller's website) and vanilla options on the SP500. You can price the options using Heston's model.
(i) Find the myopic and hedging demand components of the optimal portfolio and plot them as functions of $\sqrt{v_t}$ for the parameter values you have chosen (look online!) or calibrated; a plotting sketch follows this list.
(j) Use your results from the previous subquestion to provide wealth and expenditure management advice for a client with current wealth of 1M GBP.
(k) Bonus Part: Use the reinforcement learning algorithm described in Section 3 of 'Reinforcement Learning for Continuous Stochastic Control Problems' by Munos and Bourgine (on the Hub with the assignment) to solve for the optimal policies without assuming a particular stochastic process for v. Compare the optimal policies with those you found in the previous subquestion.
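For part (i), one natural reading of the formula for $\varphi_t$ splits it into a myopic component $\lambda/\gamma$ and a hedging component $-\rho\varepsilon H'(v_t)/H(v_t)$. A minimal plotting sketch, assuming the arrays `v`, `H` and the parameters `gamma`, `lam`, `rho`, `eps` from the solver sketch above are in scope:

```python
import numpy as np
import matplotlib.pyplot as plt

dH = np.gradient(H, v)                      # numerical H'(v) on the grid
myopic = np.full_like(v, lam / gamma)       # (1/gamma) * lambda
hedging = -rho * eps * dH / H               # -rho * eps * H'(v)/H(v)

plt.plot(np.sqrt(v), myopic, label="myopic demand")
plt.plot(np.sqrt(v), hedging, label="hedging demand")
plt.plot(np.sqrt(v), myopic + hedging, "--", label=r"total $\varphi_t$")
plt.xlabel(r"$\sqrt{v_t}$")
plt.ylabel("fraction of wealth in the SP500")
plt.legend()
plt.show()
```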
Additional Notes
From the ode and the Feynman-Kac Theorem, it follows that
\[ H(v_t) = E_t^{P'}\left[\int_t^\infty e^{-\int_t^u k_s\,ds}\,du\right], \tag{4} \]
where $P'$ is some probability measure we need to determine. Observe that
\[ dv_t = \mu'_v(v_t)\,dt + \sigma_v(v_t)\,dZ^{P'}_{v,t}, \]
where $Z^{P'}_v$ is a standard Brownian motion under $P'$. We also know that $dv_t = \mu_v(v_t)\,dt + \sigma_v(v_t)\,dZ_{v,t}$. From Girsanov's Theorem
\[ \mu'_v(v_t)\,dt = \mu_v(v_t)\,dt + E_t\left[\frac{dM'_t}{M'_t}\,dv_t\right], \]
where $M'$ is an exponential martingale under $P$, which we shall now identify. From the above equation, we obtain
\[ \kappa'(\theta' - v_t)\,dt = \kappa(\theta - v_t)\,dt + E_t\left[\frac{dM'_t}{M'_t}\,dv_t\right]. \]
Hence,
\[ E_t\left[\frac{dM'_t}{M'_t}\,dv_t\right] = -(\kappa' - \kappa)v_t\,dt, \]
because $\kappa'\theta' = \kappa\theta$. Therefore,
\[ E_t\left[\frac{dM'_t}{M'_t}\,dv_t\right] = \frac{(\gamma-1)\lambda\rho\varepsilon}{\gamma}v_t\,dt. \]
Therefore, one possibility for $M'$ is
\[ \frac{dM'_t}{M'_t} = -\frac{(\gamma-1)\lambda}{\gamma}\sqrt{v_t}\,dZ_t. \]
We can use $M'$ to define $P'$ via
\[ E_t^{P'}[I_A] = E_t^{P}\left[\frac{M'_T}{M'_t}I_A\right], \]
where $A$ is an event realized at date $T$.
We shall derive the recursive scheme for finding $H$ via (4). Note that (4) implies
\[ H(v_t) = dt + e^{-k_t dt}E_t^{P'}[H(v_{t+dt})] = E_t^{P'}[dt + (1 - k_t dt)H(v_{t+dt})]. \tag{5} \]
We now discretize the state space for the stochastic process $v$. This means using a continuous-time Markov chain under $P'$ to approximate $v$ under $P'$. We assume the Markov chain can take values in the set $\{v_1,\dots,v_{N+1}\}$. The Markov chain which approximates $v$ has a generator matrix $S$, where $p_{n,n+1}dt$ is the probability under $P'$ of the Markov chain taking the value $v_{n+1}$ at the end of the interval $[t, t+dt)$, conditional on the current value being $v_n$. Hence, we obtain the discretized state-space version of (5):
\[ H_i = \sum_{j=1}^{N+1}[e^{S dt}]_{ij}\left[dt + (1 - k_i dt)H_j\right], \]
where $H_i = H(v_i)$, $k_i = k(v_i)$ and $[e^{S dt}]_{ij}$ is the probability under $P'$ of transitioning from state $i$ (where the chain takes value $v_i$) to state $j$ (where the chain takes value $v_j$) over the interval $[t, t+dt)$. In vector-matrix form, we have
\[ H = e^{S dt}\left[\mathbf{1}dt + (I - K dt)H\right]. \]
Observe that $H \mapsto e^{S dt}[\mathbf{1}dt + (I - K dt)H]$ is an operator, but is not a linear operator, because $K$ depends on $H$. The above equation tells us that $H$ is a fixed point of this nonlinear operator. Now observe that $e^{S dt} = (e^{-S dt})^{-1}$. Hence, when we discretize time, we obtain in vector-matrix form
\[ H = (e^{-S\Delta t})^{-1}\left[\mathbf{1}\Delta t + (I - K\Delta t)H\right]. \]
Using the expansion $e^{-S\Delta t} = I + \sum_{n=1}^{\infty}(-1)^n\frac{S^n(\Delta t)^n}{n!}$, we obtain
\[ H = (I - S\Delta t)^{-1}\left[\mathbf{1}\Delta t + (I - K\Delta t)H\right] + o(\Delta t). \]
We can start with a guess for $H$ and thereby obtain the recursive scheme (3).
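As a quick numerical sanity check of the first-order approximation $(e^{-S\Delta t})^{-1} \approx (I - S\Delta t)^{-1}$, one can compare the two matrices on a toy generator; the matrix below is purely illustrative.

```python
import numpy as np
from scipy.linalg import expm

# toy 3-state generator: nonnegative off-diagonals, rows sum to zero
S = np.array([[-1.0,  1.0,  0.0],
              [ 0.5, -1.5,  1.0],
              [ 0.0,  2.0, -2.0]])

for dt in (0.1, 0.01, 0.001):
    approx = np.linalg.inv(np.eye(3) - S * dt)
    exact = expm(S * dt)                      # (e^{-S dt})^{-1} = e^{S dt}
    print(dt, np.abs(approx - exact).max())   # error shrinks like O(dt^2)
```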