8/12/2019 Var Bootstrap
1/61
A Comparison Of Value-At-Risk Methods For
Portfolios Consisting Of Interest Rate Swaps And
FRAs
Robyn Engelbrecht
Supervisor: Dr. Graeme West
December 9, 2003
Acknowledgements

I would like to thank Graeme West and Hardy Hulley for all their help, and especially Dr. West for the many consultations, as well as for providing me with his bootstrapping code. Thanks also to Shaun Barnarde of SCMB for providing the historical data.
Contents
1 Introduction
    1.1 Introduction to Value at Risk
    1.2 Problem Description

2 Assumptions and Background
    2.1 Choice of Return
    2.2 The Holding Period
    2.3 Choice of Risk Factors when Dealing with the Yield Curve
    2.4 Modelling Risk Factors
    2.5 The Historical Data

3 The Portfolios
    3.1 Decomposing Instruments into their Building Blocks
    3.2 Mapping Cashflows onto the set of Standard Maturities

4 The Delta-Normal Method
    4.1 Background
    4.2 Description of Method

5 Historical Value at Risk
    5.1 Classical Historical Simulation
        5.1.1 Background
        5.1.2 Description of Method
    5.2 Historical Simulation with Volatility Updating (Hull-White Historical Simulation)
        5.2.1 Background
        5.2.2 Description of Method

6 Monte Carlo Simulation
    6.1 Background
    6.2 Monte Carlo Simulation using Cholesky decomposition
    6.3 Monte Carlo Simulation using Principal Components Analysis

7 Results

8 Conclusions

A Appendix
    A.1 The EWMA Model
    A.2 Cash Flow Mapping
    A.3 The Variance of the Return on a Portfolio

B Appendix
    B.1 VBA Code: Bootstrapping Module
    B.2 VBA Code: VaR Module
    B.3 VBA Code: Portfolio Valuation Module
Chapter 1
Introduction
1.1 Introduction to Value at Risk
Value at Risk, or VaR, is a widely used measure of financial risk, which provides a way of quantifying and managing the risk of a portfolio. It has, in the past few years, become a key component of the management of market risk for many financial institutions¹. It is used as an internal risk management tool, and has also been chosen by the Basel Committee on Banking Supervision as the international standard for external regulatory purposes².

Definition 1.1 The VaR of a portfolio is a function of two parameters, a time period and a confidence level. It equals the loss on a portfolio that will not be exceeded by the end of the time period with the specified confidence level [2].

Estimating the VaR of a portfolio thus involves determining a probability distribution for the change in the value of the portfolio over the time period (known as the holding period). Consider a portfolio of financial instruments $i = 1, \ldots, n$, whose values at time $t$ depend on the $k$ market variables (risk factors)

$$x_t := \left(x_t^{(1)}, \ldots, x_t^{(k)}\right)$$

¹ Market risk is the risk associated with uncertainty about future earnings relating to changes in asset prices and market rates.
² The capital that a bank is required to hold against its market risk is based on VaR with a 10-day holding period at a 99% confidence level. Specifically, the regulatory capital requirement for market risk is defined as $\max(\mathrm{VaR}_{t-1},\ k \cdot \mathrm{Avg}\{\mathrm{VaR}_{t-i} \mid i = 1, \ldots, 60\})$. Here $k$ is the multiplication factor, which is set to between 3 and 4 depending on previous backtest results, and $\mathrm{VaR}_t$ refers to a VaR estimate for day $t$ based on a 10-day holding period [1].
These market variables could be exchange rates, stock prices, interest rates, etc. Denote the monetary positions in these instruments by

$$W := (W_1, \ldots, W_n)$$

(A fundamental assumption of VaR is that the positions in each instrument remain static over the holding period, so the subscript $t$ above has been omitted.) Let the values of the instruments at time $t$ be given by

$$P_t := \left(P_t^{(1)}(x_t), \ldots, P_t^{(n)}(x_t)\right) \tag{1.1.1}$$

where $P_t^{(i)}(x_t)$, $i \in \{1, \ldots, n\}$, are pricing formulae, such as the Black-Scholes pricing formula. The value of the portfolio at time $t$ is given by

$$V_t(P_t, W) = \sum_{i=1}^{n} W_i P_t^{(i)}(x_t) \tag{1.1.2}$$

Let $\Delta t$ be the length of the holding period, and $\alpha \in (0, 1)$ be the level of significance³. VaR can be defined in terms of either the distribution of the portfolio value $V_{t+\Delta t}$, or in terms of the distribution of the arithmetic return of the portfolio, $R_{t+\Delta t} = (V_{t+\Delta t} - V_t)/V_t$. Consider the $(100\alpha)$-th percentile of the distribution of values of the portfolio at $t + \Delta t$. This is given by the value $v^*$ in the expression⁴

$$P[V_{t+\Delta t} \le v^*] = \alpha$$

Let $R^*$ be the arithmetic return corresponding to $v^*$; in other words, $v^* = V_t(1 + R^*)$. The relative VaR of the portfolio is defined by

$$\mathrm{VaR(relative)} := E_t[V_{t+\Delta t}] - v^* = -V_t\left(R^* - E_t[R_{t+\Delta t}]\right) \tag{1.1.3}$$

(1.1.3) gives the monetary loss of the portfolio over the holding period relative to the mean of the distribution. Sometimes, the value of the portfolio at time $t$, $V_t$, is used instead of $E_t[V_{t+\Delta t}]$. This is known as the absolute VaR:

$$\mathrm{VaR(absolute)} := V_t - v^* = -V_t R^* \tag{1.1.4}$$

(1.1.4) gives the monetary loss relative to zero, or without reference to the expected value. The equations above follow from the fact that $E_t[V_{t+\Delta t}] - v^* = V_t(1 + E_t[R_{t+\Delta t}]) - V_t(1 + R^*)$ and $V_t - v^* = V_t - V_t(1 + R^*)$. When the time horizon is small, the expected return can be quite small, and so the results of (1.1.3) and (1.1.4) can be similar, but (1.1.3) is a better estimate in general,

³ Typically $\alpha = 0.05$ or $\alpha = 0.01$, which correspond to confidence levels of 95% and 99% respectively.
⁴ This percentile always corresponds to a negative return, but VaR is given as a positive amount, hence the minus signs in (1.1.3) and (1.1.4).
since it accounts for the time value of money, and pull-to-par effects which may become significant towards the end of the life of a portfolio [3].

Thus the problem of estimating VaR is reduced to the problem of estimating the distribution of the portfolio value, or that of the portfolio return, at the end of the holding period. Typically this is done via the estimation of the distribution of the underlying risk factors. However, there is no single industry standard used to do this. The general techniques commonly used include analytic techniques (the Delta-Normal method and the Delta-Gamma method), Historical simulation, and Monte Carlo simulation. As described in [4], the techniques differ along two lines:

Local/Full valuation This refers to whether the distribution is estimated using a Taylor series approximation (local valuation), or whether the method generates a number of scenarios and estimates the distribution by revaluing the portfolio under these scenarios (full valuation).

Parametric/Nonparametric Parametric techniques involve the selection of a distribution for the returns of the market variables, and the estimation of the statistical parameters of these returns. Nonparametric techniques do not rely on these estimations, since it is assumed that the sample distribution is an adequate proxy for the population distribution.

For large portfolios, the decision of which method to choose presents a trade-off of speed against accuracy, with the fast analytic methods relying on rough approximations, and the most realistic approach, Monte Carlo simulation, often considered too slow to be practical.
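Definitions (1.1.3) and (1.1.4) can be illustrated numerically. The sketch below is in Python purely for illustration (the thesis implements everything in VBA), and the function name, seed and distributional figures are invented for the example:

```python
import numpy as np

def var_from_values(v_future, v_now, alpha=0.05):
    """Relative and absolute VaR from a sample of end-of-period portfolio values.

    v_future : simulated or observed values of V_{t+dt}
    v_now    : current portfolio value V_t
    alpha    : significance level (0.05 corresponds to 95% confidence)
    """
    v_star = np.percentile(v_future, 100 * alpha)   # the (100*alpha)-th percentile v*
    var_relative = np.mean(v_future) - v_star       # loss relative to the mean, (1.1.3)
    var_absolute = v_now - v_star                   # loss relative to zero, (1.1.4)
    return var_relative, var_absolute

# Example: 10000 hypothetical one-day portfolio values around V_t = 100
rng = np.random.default_rng(0)
values = 100.0 * (1.0 + rng.normal(0.0005, 0.01, 10_000))
rel, absolute = var_from_values(values, 100.0, alpha=0.05)
```

Note that the two measures differ by exactly the expected gain over the holding period, $E_t[V_{t+\Delta t}] - V_t$, which is why they nearly coincide for a one-day horizon.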
1.2 Problem Description
The aim of this project is to implement various VaR methods, and to compare these methods on portfolios consisting of the linear interest rate derivatives, forward rate agreements (FRAs) and interest rate swaps. The performance of a VaR method depends very much on the type of portfolio being considered, and the aim of this project is therefore to determine which of the methods is best for this type of portfolio. These derivatives represent an important share of the interest rate component of a bank's trading portfolios, since the more complicated interest rate derivatives don't trade that much in this country. The methods to be considered are:
1. The Delta-Normal method (classic RiskMetrics approach)
2. Historical Simulation
3. Historical Simulation with Volatility Updating

4. Monte Carlo Simulation using Cholesky decomposition

5. Monte Carlo Simulation using Principal Components Analysis
Although the Delta-Normal method and Monte Carlo simulation are parametric, and Historical simulation is nonparametric, direct comparison is possible since the distributional parameters will be estimated using the same historical data as will be used to generate scenarios for the Historical simulation methods. The methods will be tested on various hypothetical portfolios, by estimating VaR for these portfolios over the historical period, and comparing the estimates to the actual losses that occurred. VaR will be estimated for these portfolios at both the 95% and 99% confidence levels. Implementation of the methods is to be done in VBA. This was decided on mostly for convenience, due to the large amount of data that needs to be read from and written to spreadsheet.

Historical swap curve interest rate data for a period of 2 years will be used, from 2 July 2001 to 30 June 2003. For the historical simulation techniques, a 1-day moving window of 250 days of data will be used when estimating VaR. This is in accordance with the Basel Committee requirements for the length of the historical observation period. VaR can therefore be determined from day 251 onwards, that is, from 2 July 2002 to 30 June 2003. For the Delta-Normal method and the Monte Carlo simulation methods, VaR can be determined for the entire historical period, since all that is required is a set of volatility and correlation estimates, which can be obtained from day one. All volatility and correlation estimates are performed using the Exponentially Weighted Moving Average technique, with a decay factor of $\lambda = 0.94$, as described in Appendix A.1.
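The EWMA volatility recursion referred to here (and detailed in Appendix A.1) can be sketched as follows. This is a generic RiskMetrics-style Python illustration, not the thesis's VBA code; seeding the recursion with the first squared return is an assumption of this sketch, not something specified here:

```python
import numpy as np

def ewma_variance(returns, lam=0.94):
    """EWMA variance estimates: sigma^2_t = lam * sigma^2_{t-1} + (1 - lam) * r_t^2.

    The recursion is seeded with the first squared return (one common choice).
    """
    var = np.empty(len(returns))
    var[0] = returns[0] ** 2
    for t in range(1, len(returns)):
        var[t] = lam * var[t - 1] + (1.0 - lam) * returns[t] ** 2
    return var

rng = np.random.default_rng(1)
r = rng.normal(0.0, 0.01, 500)         # simulated daily geometric returns
sigma = np.sqrt(ewma_variance(r))      # daily volatility estimates
```

With $\lambda = 0.94$ the effective memory of the estimator is short (roughly the last few weeks of returns dominate), which is what allows the volatility estimate to track clustering.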
Chapter 2
Assumptions and Background
2.1 Choice of Return
The arithmetic return on an instrument at time $t + \Delta t$ is given by

$$R_a(P_{t+\Delta t}^{(i)}) = \left(P_{t+\Delta t}^{(i)} - P_t^{(i)}\right)/P_t^{(i)}$$

The equivalent geometric return is given by

$$R_g(P_{t+\Delta t}^{(i)}) = \ln\left(P_{t+\Delta t}^{(i)}/P_t^{(i)}\right)$$

We define the returns in the risk factors, $R_a(x_{t+\Delta t}^{(i)})$ and $R_g(x_{t+\Delta t}^{(i)})$, in exactly the same way. Both formulations have their own advantages and disadvantages. The arithmetic return of an instrument is needed when aggregating assets across portfolios, since the arithmetic return of a portfolio is a weighted linear sum of the arithmetic returns of the individual assets (the corresponding formula for the return of a portfolio in terms of the geometric returns of the assets is not linear). However, the geometric return aggregates better across time (the $n$-day geometric return is equal to a linear sum of the 1-day returns, whereas the corresponding formula for the $n$-day arithmetic return is not so simple). See [4] for more detail on this.
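The two aggregation properties can be checked numerically. A small Python illustration (the prices and weights below are made up for the example):

```python
import math

# Two instruments over one day: prices move 100 -> 103 and 50 -> 49.
p0 = [100.0, 50.0]
p1 = [103.0, 49.0]
w = [0.6, 0.4]                                    # portfolio weights, summing to 1

ra = [(b - a) / a for a, b in zip(p0, p1)]        # arithmetic returns: 0.03, -0.02
rg = [math.log(b / a) for a, b in zip(p0, p1)]    # geometric (log) returns

# Across assets: the portfolio arithmetic return is the weighted linear sum.
port_ra = sum(wi * r for wi, r in zip(w, ra))     # 0.6*0.03 + 0.4*(-0.02) = 0.01

# Across time: the 2-day geometric return is the sum of the two 1-day returns.
r_day1 = math.log(103.0 / 100.0)
r_day2 = math.log(101.0 / 103.0)
r_2day = math.log(101.0 / 100.0)                  # equals r_day1 + r_day2 exactly
```

No such exact identities hold for geometric returns across assets, or for arithmetic returns across time, which is the trade-off described above.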
Also, the geometric return is more meaningful economically, since, for example, if the geometric returns of an instrument, or of a market variable, are normally distributed, then the corresponding instrument prices, or the market variables themselves, are lognormally distributed, and so will never be negative. However, if arithmetic returns are normally distributed, then,
looking at the left tail of the distribution of these returns, $R_a(P_{t+\Delta t}^{(i)}) < -1$ occurs with positive probability, which implies that $P_{t+\Delta t}^{(i)} < 0$.
2.3 Choice of Risk Factors when Dealing with the Yield Curve

The distribution used to estimate the VaR of a portfolio is determined from the distribution of the risk factors to which the portfolio is exposed. The first step to measuring the risk of a portfolio is to determine what these risk factors are. When dealing with the yield curve, the question which arises, due to the nontraded nature of interest rates, is whether the underlying risk factors are in fact the rates themselves, or the corresponding zero coupon bond prices. In this project, they were taken to be the zero coupon bond prices². However, both methods are used in practice.
2.4 Modelling Risk Factors
The easiest and most common model for risk factors is the geometric Brownian motion model; in other words, the risk factors follow the process

$$dx_t^{(i)} = \mu_i x_t^{(i)}\,dt + \sigma_i x_t^{(i)}\,dW_t^{(i)}$$

for $1 \le i \le k$, where $\mu_i$ and $\sigma_i$ represent the instantaneous drift and volatility, respectively, of the process for the $i$th risk factor, and $(W_t^{(i)})_{t \ge 0}$ are Brownian motions, which will typically be correlated. To determine an expression for $x_t^{(j)}$, let $g(x_j) = \ln x_j$. Then

$$\frac{\partial g}{\partial x_i} = \delta_{ij}\,\frac{1}{x_i} \quad \text{and} \quad \frac{\partial^2 g}{\partial x_i\,\partial x_j} = -\delta_{ij}\,\frac{1}{x_i^2}$$

where $\delta_{ij}$ is the indicator function. By the multidimensional Ito formula we get

$$dg(x_t^{(j)}) = \sum_{i=1}^{k} \frac{\partial g}{\partial x_i}\,dx_t^{(i)} + \frac{1}{2}\sum_{i=1}^{k}\sum_{j=1}^{k} \frac{\partial^2 g}{\partial x_i\,\partial x_j}\,dx_t^{(i)}\,dx_t^{(j)} \tag{2.4.2}$$

$$= \sum_{i=1}^{k} \frac{\partial g}{\partial x_i}\,dx_t^{(i)} + \frac{1}{2}\sum_{i=1}^{k} \frac{\partial^2 g}{\partial x_i^2}\,(dx_t^{(i)})^2$$

$$= \frac{1}{x_t^{(j)}}\,dx_t^{(j)} - \frac{1}{2}\,\frac{1}{(x_t^{(j)})^2}\,(dx_t^{(j)})^2$$

$$= \left(\mu_j - \frac{1}{2}\sigma_j^2\right)dt + \sigma_j\,dW_t^{(j)}$$

Thus, discretizing this over $\Delta t$,

$$x_{t+\Delta t}^{(j)} = x_t^{(j)} \exp\left(\left(\mu_j - \frac{1}{2}\sigma_j^2\right)\Delta t + \sigma_j\,\Delta W_t^{(j)}\right)$$

$$= x_t^{(j)} \exp\left(\left(\mu_j - \frac{1}{2}\sigma_j^2\right)\Delta t + \sigma_j\sqrt{\Delta t}\,Z_j\right) \approx x_t^{(j)} \exp\left(\sigma_j\sqrt{\Delta t}\,Z_j\right) \tag{2.4.3}$$

where $Z \sim N_k(0, \Sigma)$ and $\Sigma$ is the variance-covariance matrix which will be defined in (4.2.2). The approximation in (2.4.3) follows since the holding period $\Delta t$ is only one day.

² This is the method used in [4].

The major problem with the normality assumption is that, as described in [5], the distribution of daily returns of any risk factor would in reality typically show significant amounts of positive kurtosis. This leads to fatter tails, and extreme outcomes occurring much more frequently than would be predicted by the normal distribution assumption, which would lead to an underestimation of VaR (since VaR is concerned with the tails of the distribution). But the model is generally considered to be an adequate description of the process followed by risk factors such as equities and exchange rates.
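For a single factor, the discretization (2.4.3) amounts to multiplying today's value by a lognormal shock. A Python sketch for illustration (the starting value, volatility and scenario count are invented; the drift term is dropped, as in the text, since $\Delta t$ is one day):

```python
import numpy as np

def one_day_step(x_t, sigma_annual, rng, n_scenarios=10_000):
    """One-day risk-factor scenarios under (2.4.3): x_{t+dt} = x_t * exp(sigma*sqrt(dt)*Z).

    The drift term (mu - sigma^2/2)*dt is dropped, since dt is one day.
    """
    dt = 1.0 / 250.0                       # one business day, 250 days per year
    z = rng.standard_normal(n_scenarios)
    return x_t * np.exp(sigma_annual * np.sqrt(dt) * z)

rng = np.random.default_rng(2)
scenarios = one_day_step(0.92, 0.15, rng)  # e.g. a zero coupon bond price, 15% vol
```

Because the shock is exponentiated, every scenario value stays strictly positive, which is the lognormality property argued for in Section 2.1.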
However, when dealing with the yield curve, the assumption of normality of returns clearly does not give an adequate description of reality, since interest rates are known to be mean reverting, and zero coupon bonds are subject to pull-to-par effects towards the end of their life. Although a simple interest rate model, such as a short rate model, may price an instrument more accurately (since it would capture the mean reverting property of interest rates), short rate models assume that all rates along the curve are perfectly correlated with the short rate, and thus will all move in the same direction. Since this is of course not the case in reality, it doesn't give much of an indication of the risk of a portfolio to moves in the yield curve. A multi-factor model increases complexity dramatically, and still does not give us what we want. To illustrate this, suppose that rates have been on the increase for a while. An interest rate model would model a decrease in rates, due to the mean reverting effect which would come into play. However, what a risk measurement technique is in fact interested in is a continued upward trend. So although the normality assumption would lead to unrealistic term structures over longer time horizons, since we are dealing with a holding period of a day, the assumption seems to be appropriate for risk measurement since, to quote [3], "for risk management purposes, what matters is to capture the richness in movements of the term structure, not necessarily to price today's instruments to the last decimal point."
2.5 The Historical Data
The historical data provided consists of the following array of swap curve market rates:
- overnight, 1 month, and 3 month rates (simple yields),
- 3v6, 6v9, 9v12, 12v15, 15v18, 18v21 and 21v24 FRA rates (simple yields), and
- 3, 4, 5, ..., 10, 12, 15, 20, 23, 25, 30 year swap rates (NACQ),
for the period from 2 July 2001 to 30 June 2003. Each array of rates was bootstrapped to determine a daily NACC swap curve. This was done using the Monotone Piecewise Convergence (MPC) interpolation method³. Given that a large portfolio could depend on any number of the rates on the curve, and that measuring the risk of a portfolio typically involves the estimation of a variance-covariance matrix of the underlying risk factors, it can become infeasible, if not impossible, to estimate the risk of a portfolio to each of these rates individually. A standard set of maturities needs to be defined in order to implement a risk measurement technique with a fixed set of data requirements. The rates at these standard maturities are then assumed to describe the interest rate risk of the entire curve, and all calculations (of volatilities, correlations, etc.) are done based on this standard set of maturities. The standard maturities selected consist of the following 24 maturities, in days (these correspond as closely as possible to the actual/365 daycount used in South Africa):
1, 30, 91, 182, 273, 365, 456, 547, 638, 730, 1095, 1460, 1825, 2190, 2555,2920, 3285, 3650, 4380, 5475, 6205, 7300, 9125 and 10950 days
Figure 2.1 shows daily zero coupon yield data over the 2 year period for the standard set of maturities. From this graph it is evident that interest rates were quite volatile during this period. In particular, note the spike in almost all the rates which occurs in December 2001. (This event significantly affects the results seen.) Also, note that in June 2002, the entire yield curve inverts, resulting in the term structure changing from being approximately increasing to approximately decreasing.
³ For an explanation of this interpolation method, see [6]. The bootstrapping code was provided by Graeme West.
Figure 2.1: Daily zero coupon yields (NACC)
Chapter 3
The Portfolios
3.1 Decomposing Instruments into their Building Blocks
Since it is impossible to measure the risk of each of the enormous number of risk factors which a portfolio could depend on, simplifications are required. The first step to measuring the risk of a portfolio is to decompose the instruments making up the portfolio into simpler components. Both FRAs and swaps can be valued by decomposing them into a series of zero-coupon instruments¹. The next step is to map these positions onto positions for which we have prices of the components. Finally, we estimate the VaR of the mapped portfolio.
A forward rate agreement (FRA) is an agreement that a certain interest rate will apply to a certain notional principal during a specific future period of time [7]. If the contract, with a notional principal of $L$, is entered into at time $t$, and applies for the period $T_1$ to $T_2$ ($T_2 - T_1$ is always 3 months), then the party on the long side of the contract agrees to pay a fixed rate $r$ (the FRA rate) and receive a floating rate $r_f$ (3 month JIBAR, a simple yield rate) over $T_1$ to $T_2$. This is equivalent to the following cashflows:

time $T_1$: $-L$
time $T_2$: $+L(1 + r(T_2 - T_1))$
So at any time $s$, where $t \le s \le T_1$, a FRA can be valued by discounting these cashflows. In other words,

$$V = L\,B(s, T_2)(1 + r(T_2 - T_1)) - L\,B(s, T_1)$$

where $B(s, T)$ is the discount factor at $s$ for the period $s$ to $T$. The value to the short side is just the negative of this amount.

¹ Although FRAs and swaps can both be represented in terms of implied forward rates, decomposing them into zero coupon bonds makes them linear in the underlying risk factors.
A swap is an agreement between two counterparties to exchange a series of future cashflows. The party on the long side of the swap agrees to make a series of fixed payments and to receive a series of floating payments, typically every 3 months, based on the notional principal $L$. The floating payments are based on the floating rate (typically 3 month JIBAR) observed at the beginning of that 3 month period. Payments always occur in arrears, i.e. they follow the natural time lag which many interest rate derivatives follow, allowing us to value a swap in terms of bonds. The value of the swap to the long side can be determined as the exchange of a fixed rate bond for a floating rate bond.
$$V = B_{\text{float}} - B_{\text{fix}}$$

The value to the short side is just the negative of this. Let $L$ be the notional principal and $R$ the effective swap rate per period, let there be $n$ payments remaining in the life of the swap, and let the time till the $i$th payment be $t_i$, $1 \le i \le n$. Then the value of the fixed leg at time $t$ is

$$B_{\text{fix}} = RL\sum_{i=1}^{n} B(t, t_i)$$

If $t$ is a cashflow date, then the value of the floating leg is just

$$B_{\text{float}} = L\left(1 - B(t, t_n)\right)$$

If $t$ is not a cashflow date, then the value of the fixed leg is unchanged, but the value of the floating leg is determined using the prespecified JIBAR rate for the next payment. Suppose $t_{i-1} < t < t_i$, and let the JIBAR rate for $(t_{i-1}, t_i]$ be $r_f$ (a simple yield rate). Then the value of the floating leg is

$$B_{\text{float}} = L\left(1 + r_f\,\frac{t_i - t_{i-1}}{365}\right)B(t, t_i) - L\,B(t, t_n)$$

The formula above is derived in [8]. This method of analysing risk by means of discounted cashflows enables us to get an idea of the sensitivity of the market value of an instrument to changes in the term structure of interest rates.
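The cases above can be collected into a single function. The following is an illustrative Python sketch (not the thesis's VBA module; the notional, rate and discount factors in the example are hypothetical):

```python
def swap_value_long(L, R, disc, rf=None, accrual=None):
    """Value of a swap to the long (pay fixed, receive floating) side, V = B_float - B_fix.

    L       : notional principal
    R       : effective (per period) fixed swap rate
    disc    : discount factors B(t, t_i) for the remaining payment dates, in order
    rf      : JIBAR rate fixed for the current period (None if t is a cashflow date)
    accrual : accrual fraction (t_i - t_{i-1})/365 for the current period
    """
    b_fix = R * L * sum(disc)
    if rf is None:                                    # t is a cashflow date
        b_float = L * (1.0 - disc[-1])
    else:                                             # t lies between reset dates
        b_float = L * (1.0 + rf * accrual) * disc[0] - L * disc[-1]
    return b_float - b_fix

# On a reset date: quarterly R = 2.5% per period, four payments remaining
v = swap_value_long(1_000_000, 0.025, [0.99, 0.965, 0.94, 0.915])
```

As with the FRA, the value is a linear combination of zero coupon bond prices, so the swap decomposes into the same building blocks.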
3.2 Mapping Cashflows onto the set of Standard Maturities
In general, we have a cashflow occurring at time $t_2$, and two standard maturities $t_1$ and $t_3$ which bracket $t_2$. We need a way of determining the value of the discount factor at $t_2$, which will necessarily involve some type of interpolation. There are various procedures for mapping the cashflows of interest rate positions. The technique of cash flow mapping used here, again consistent with [4, Ch 6], splits each cashflow into two cashflows occurring at its closest standard vertices, and has the property that it assigns weights to these two vertices in such a way that:

- Market value is preserved, i.e. the total market value of the two standardized cash flows is equal to the market value of the original cash flow.
- Variance is preserved, i.e. the variance of the portfolio of the two standardized cash flows is equal to the variance of the original cash flow.
- Sign is preserved, i.e. the standardized cash flows have the same sign as the original cash flow.
The method is described in Appendix A.2. This mapping technique is applied to all cashflows in a portfolio, except of course in the case where the cashflow corresponds exactly to one of the standard vertices, where mapping becomes redundant. Using the mapped cashflows, we are able to determine VaR for the portfolio, since we have the required volatilities and correlations of the bond prices for these maturities.

In the (nonparametric) Historical Simulation techniques, rather than mapping the position, a cashflow occurring at a non-standard maturity $t_2$ is valued by a simple linear interpolation between the zero coupon yields to maturity which occur at the standard maturities $t_1$ and $t_3$, and, in the case of Hull-White Historical simulation, also between the volatilities which occur at the standard maturities.
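The linear interpolation used in the Historical Simulation techniques can be sketched as follows (a Python illustration; the maturities and NACC yields in the example are made up):

```python
import math

def interp_discount(t2, t1, t3, r1, r3):
    """Discount factor at a non-standard maturity t2 (in years), by linear
    interpolation of the NACC zero coupon yields at the bracketing standard
    maturities t1 and t3, with t1 < t2 < t3.
    """
    r2 = r1 + (r3 - r1) * (t2 - t1) / (t3 - t1)   # interpolated zero yield at t2
    return math.exp(-r2 * t2)                     # B(t, t2) = exp(-r2 * t2)

B = interp_discount(t2=0.7, t1=0.5, t3=0.75, r1=0.095, r3=0.10)
```

This contrasts with the variance-preserving mapping of Appendix A.2, which splits the cashflow across the two vertices rather than interpolating the curve between them.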
Chapter 4
The Delta-Normal Method
4.1 Background
The Delta-Normal method (classic RiskMetrics approach) is a parametric, analytic technique where the distributional assumption made is that the daily geometric returns of the market variables are multivariate normally distributed with mean return zero. The major advantage of this is that we can use the normal probability density function to estimate VaR analytically, using just a local valuation.

The normality assumption for the returns of the underlying risk factors was discussed in Section 2.4. The more problematic assumption in general is the linearity assumption, since a first order approximation won't be able to adequately describe the risk of portfolios containing optionality. However, since this project is only concerned with linear instruments, the assumption is not problematic here.

The advantages of this method include its speed (since it is a local valuation technique) and simplicity, and the fact that the distribution of returns need not be assumed to be stationary through time, since volatility updating is incorporated into the parameter estimation.
4.2 Description of Method
VaR is determined as the absolute VaR based on the distribution of portfolio returns, defined in (1.1.4). This in effect means that Delta-Normal VaR is a direct application of Markowitz portfolio theory, which measures the risk
of a portfolio as the standard deviation of returns, based on the variances and covariances of the returns of the underlying instruments. The only real difference here is that in Markowitz portfolio theory, the underlying variables are taken to be the instruments making up a portfolio, whereas in Delta-Normal VaR, because the distributional assumptions are based on the returns of the risk factors and not on the instruments, the underlying variables are taken to be the risk factors themselves. As a consequence of the linearity assumption, we can in fact consider a portfolio of holdings in $n$ instruments as a portfolio of holdings, $W = (W_1, \ldots, W_k)$, in the $k$ underlying risk factors, where $k$ is not necessarily equal to $n$. Scaling these to determine the relative holdings

$$\omega_i = \frac{W_i P_t^{(i)}}{V_t}, \quad i = 1, \ldots, k$$

so that the latter sum to unity, we get the vector $\omega = (\omega_1, \ldots, \omega_k)$.

Let $R_t^{(i)}$ denote the arithmetic return of the $i$th risk factor at time $t$. The (arithmetic) return of the portfolio at time $t$ is

$$R_t = \sum_{i=1}^{k} \omega_i R_t^{(i)}$$

The expected return of the portfolio at time $t$ is

$$E[R_t] = E\left[\sum_{i=1}^{k} \omega_i R_t^{(i)}\right] = \sum_{i=1}^{k} \omega_i E[R_t^{(i)}] = 0$$
using the approximation in (2.1.1) and the assumption that the geometric returns have mean zero. The variance of the portfolio return at time $t$ is given by

$$\mathrm{Var}[R_t] = \omega^{\mathsf T}\Sigma\,\omega \tag{4.2.1}$$

where $\Sigma$ is the variance-covariance matrix given by

$$\Sigma = \begin{pmatrix} \sigma_1^2 & \sigma_1\sigma_2\rho_{1,2} & \cdots & \sigma_1\sigma_k\rho_{1,k} \\ \sigma_2\sigma_1\rho_{1,2} & \sigma_2^2 & \cdots & \sigma_2\sigma_k\rho_{2,k} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_k\sigma_1\rho_{1,k} & \sigma_k\sigma_2\rho_{2,k} & \cdots & \sigma_k^2 \end{pmatrix} \tag{4.2.2}$$

$\sigma_i$ is the standard deviation (volatility) of the $i$th risk factor, and $\rho_{i,j}$ is the correlation coefficient of $R_t^{(i)}$ and $R_t^{(j)}$, defined by

$$\rho_{i,j} := \frac{\mathrm{Cov}[R_t^{(i)}, R_t^{(j)}]}{\sigma_i\sigma_j}$$

See Appendix A.3 for the derivation of this. Now, under the assumption of normality of the distribution of $R_{t+\Delta t}$, given the value of the standard deviation of $R_{t+\Delta t}$, $\mathrm{SDev}[R_{t+\Delta t}] = \sqrt{\mathrm{Var}[R_{t+\Delta t}]}$, the VaR can be determined analytically via the $(100\alpha)$-th percentile of $R_{t+\Delta t}$. It should first be noted that volatility is always an annual measure, so to measure daily VaR we need to include a scaling factor of $\sqrt{1/250}$ (where 250 is the approximate number
of business days in a year). VaR is determined as

$$\mathrm{VaR(absolute)} = |V_t|\, z_\alpha\, \mathrm{SDev}[R_{t+\Delta t}]\, \sqrt{\tfrac{1}{250}} = |V_t|\, z_\alpha\, \sqrt{\frac{\omega^{\mathsf T}\Sigma\,\omega}{250}} = z_\alpha\, \sqrt{\frac{W^{\mathsf T}\Sigma\,W}{250}} \tag{4.2.3}$$

where $z_\alpha$ is the inverse of the cumulative normal distribution function. The variance-covariance matrix is estimated using the Exponentially Weighted Moving Average scheme.
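Formula (4.2.3) is a one-line computation once $\Sigma$ is estimated. A Python illustration (not the thesis's VBA; the positions, volatilities and correlation below are invented, and the elements of `w` are taken as the monetary positions in the risk factors):

```python
import numpy as np
from statistics import NormalDist

def delta_normal_var(w, vols, corr, alpha=0.05):
    """Delta-Normal (absolute) VaR as in (4.2.3): |z_alpha| * sqrt(W' Sigma W / 250).

    w    : monetary positions mapped onto the k risk factors
    vols : annualised volatilities sigma_i of the risk-factor returns
    corr : k x k correlation matrix of the risk-factor returns
    """
    w = np.asarray(w, dtype=float)
    vols = np.asarray(vols, dtype=float)
    sigma = np.outer(vols, vols) * np.asarray(corr)   # variance-covariance matrix (4.2.2)
    z = abs(NormalDist().inv_cdf(alpha))              # about 1.645 for alpha = 0.05
    return z * np.sqrt(w @ sigma @ w / 250.0)

# Two risk factors: positions of 1m and -0.5m, 10% and 15% vols, correlation 0.8
var95 = delta_normal_var([1_000_000, -500_000], [0.10, 0.15], [[1.0, 0.8], [0.8, 1.0]])
```

Note how the negative position hedges the first: the cross term in $W^{\mathsf T}\Sigma W$ is negative, reducing the VaR below that of either position alone.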
Chapter 5
Historical Value at Risk
5.1 Classical Historical Simulation
5.1.1 Background
Here the distribution of the returns of the risk factors is determined by drawing samples from the time series of historical returns. This is a nonparametric technique, since no distributional assumptions are made, other than assuming stationarity of the distribution of the returns of the market variables (in particular their volatility) through time. If this assumption did not hold, the returns would not be i.i.d., and the drawings would originate from different underlying distributions. The assumption does not hold in reality, since, as described in [4], volatility changes through time, and periods of high volatility and low volatility tend to cluster together. Historical simulation is a full revaluation technique. A possible problem with the implementation of this method could be a lack of sufficient historical data. Also, comparing this method to Monte Carlo simulation: whereas Monte Carlo will typically simulate about 10000 sample paths for each risk factor, Historical simulation considers only one sample path for each risk factor (the one that actually happened).

The advantage of Historical simulation is that since it is nonparametric, all aspects of the actual distribution are captured, including kurtosis and fat tails. Also, the full revaluation means that all nonlinearities are accounted for, although it does make this technique more computationally intensive than the Delta-Normal method.
5.1.2 Description of Method
The approach is straightforward. Let the window be of size $N \ge 250$, to be in accordance with the regulatory requirements for the minimum length of the observation window. Scenarios are generated by determining a time series of historical day-to-day returns of the market variables over the last $N$ business days, and then assuming that the return of each market variable from $t$ to $t+1$ is the same as the return was over each day in the time series.

In other words, as described in [9], if $x_t^{(j)} \in x_t$ is the value of a particular market variable at time $t$, then a set of hypothetical values $x_{t+1}^{(j,i)}$ for all $i \in \{t-N, \ldots, t-1\}$ is determined by the relationship

$$\ln\frac{x_{t+1}^{(j,i)}}{x_t^{(j)}} = \ln\frac{x_{i+1}^{(j)}}{x_i^{(j)}} \tag{5.1.1}$$

$$\Rightarrow\quad x_{t+1}^{(j,i)} = x_t^{(j)}\cdot\frac{x_{i+1}^{(j)}}{x_i^{(j)}}$$

This is done for all $j \in \{1, \ldots, k\}$, to determine the matrix $x_{t+1}^{(j,i)}$, $1 \le j \le k$, $1 \le i \le (N-1)$, and a full revaluation of each instrument in the portfolio is done for each column using (1.1.1), to determine $P_{t+1}^{i}$, $i \in \{t-N, \ldots, t-1\}$. From these, we determine $V_{t+1}^{i}$, $i \in \{t-N, \ldots, t-1\}$, and sort the results to generate a histogram describing the pmf for the future portfolio value. VaR is then determined using definition (1.1.3), by determining the mean and the appropriate percentile of this distribution.
Since interest rates are not taken to be the risk factors, and zero coupon bond prices are, to relate these to the above, let $B(t, \tau)$ denote the price of a $\tau$-period zero coupon bond at time $t$, which corresponds to the continuously compounded annual yield to maturity $r_t(\tau)$, so $B(t, \tau) = \exp(-\tau\, r_t(\tau))$. Then (5.1.1) becomes

$$\ln\frac{B^{(i)}(t+1, \tau)}{B(t, \tau)} = \ln\frac{B(i+1, \tau)}{B(i, \tau)}$$

$$\Rightarrow\quad -\tau\left(r_{t+1}^{(i)}(\tau) - r_t(\tau)\right) = -\tau\left(r_{i+1}(\tau) - r_i(\tau)\right)$$

$$\Rightarrow\quad r_{t+1}^{(i)}(\tau) = r_t(\tau) + r_{i+1}(\tau) - r_i(\tau)$$
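The whole procedure, for the simplest possible portfolio of a single zero coupon bond, can be sketched in a few lines of Python. This is an illustration only (the thesis's implementation is in VBA); the maturity, rate level and simulated rate history are invented:

```python
import numpy as np

def historical_var(rates_history, alpha=0.05):
    """Classical Historical Simulation VaR for one zero coupon bond (sketch).

    rates_history : daily NACC zero rates r_i(tau), most recent last, covering
                    the observation window plus today.
    Scenario rates follow r_{t+1} = r_t + (r_{i+1} - r_i), as derived above.
    """
    tau = 1.0                                  # bond maturity in years (illustrative)
    r_t = rates_history[-1]
    dr = np.diff(rates_history)                # historical one-day rate moves
    scen_rates = r_t + dr                      # one scenario per historical day
    v_scen = np.exp(-scen_rates * tau)         # full revaluation B(t+1, tau) per scenario
    v_star = np.percentile(v_scen, 100 * alpha)
    return v_scen.mean() - v_star              # relative VaR, definition (1.1.3)

rng = np.random.default_rng(3)
rates = 0.10 + np.cumsum(rng.normal(0.0, 0.001, 251))   # 251 days of simulated rates
var_hs = historical_var(rates)
```

For a real portfolio, the revaluation step would price every mapped cashflow under each scenario curve rather than a single bond.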
5.2 Historical Simulation with Volatility Updating (Hull-White Historical Simulation)

5.2.1 Background

This method is proposed in [2]. It is an extension of traditional Historical Simulation, which does away with the drawback of assuming constant volatility. [2] mentions that the distribution of a market variable, when scaled by an estimate of its volatility, is often found to be approximately stationary. If the volatility of the return of a market variable at the current time $t$ is high, then due to the tendency of volatility to cluster, one would expect a high return at time $t+1$. If historical volatility was in reality low relative to the value at $t$, however, the historical returns would underestimate the returns one would expect to occur from $t$ to $t+1$. The reverse is true if the volatility is low at time $t$ and the historical volatility is high relative to this. This approach incorporates the exponentially weighted moving average volatility updating scheme, used in the Delta-Normal method, into classical Historical Simulation.
5.2.2 Description of Method
Rather than assuming that the return of each market variable from $t$ to $t+1$ is the same as it was over each day in the time series, it is now the returns scaled by their volatility which we assume will reoccur. This is described in [9]. If $x_t^{(j)} \in x_t$ is the value of a particular market variable at time $t$, and $\sigma_{t,j}$ is the volatility at time $t$ of this market variable, then a set of hypothetical values $x_{t+1}^{(j,i)}$ for all $i \in \{t-N, \ldots, t-1\}$ is determined by the relationship

$$\frac{1}{\sigma_{t,j}}\ln\frac{x_{t+1}^{(j,i)}}{x_t^{(j)}} = \frac{1}{\sigma_{i,j}}\ln\frac{x_{i+1}^{(j)}}{x_i^{(j)}} \tag{5.2.2}$$

$$\Rightarrow\quad x_{t+1}^{(j,i)} = x_t^{(j)}\left(\frac{x_{i+1}^{(j)}}{x_i^{(j)}}\right)^{\sigma_{t,j}/\sigma_{i,j}}$$

and we proceed as in Section 5.1.2. Relating this to interest rates, using the same notation as above, (5.2.2) becomes

$$\frac{1}{\sigma_{t,j}}\ln\frac{B^{(i)}(t+1, \tau)}{B(t, \tau)} = \frac{1}{\sigma_{i,j}}\ln\frac{B(i+1, \tau)}{B(i, \tau)}$$
    ⇒ (1/σ_{t,j}) [r^(i)_{t+1}(τ) − r_t(τ)] = (1/σ_{i,j}) [r_{i+1}(τ) − r_i(τ)]

    ⇒ r^(i)_{t+1}(τ) = r_t(τ) + (σ_{t,j}/σ_{i,j}) [r_{i+1}(τ) − r_i(τ)]
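The volatility-scaled scenario construction can be sketched as follows, with hypothetical rates and EWMA volatilities:

```python
# Hull-White scaling: each historical rate move is rescaled by the ratio of
# current to historical volatility before being applied to today's rate.
r_today = 0.105            # today's yield r_t (hypothetical)
sigma_today = 0.20         # current EWMA volatility sigma_{t,j} (hypothetical)

# (r_i, r_{i+1}, sigma_{i,j}) for two historical days (hypothetical)
hist = [(0.100, 0.102, 0.10),
        (0.102, 0.099, 0.25)]

# r^(i)_{t+1} = r_t + (sigma_{t,j} / sigma_{i,j}) * (r_{i+1} - r_i)
scenarios = [r_today + (sigma_today / s_i) * (r_next - r_prev)
             for (r_prev, r_next, s_i) in hist]
```

The first move (made during a low-volatility day) is amplified by a factor of 2, while the second (made during a high-volatility day) is damped, exactly the behaviour the text describes.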
Chapter 6
Monte Carlo Simulation
6.1 Background
Monte Carlo simulation techniques are by far the most flexible and powerful, since they are able to take into account all non-linearities of the portfolio value with respect to its underlying risk factors, and to incorporate all desirable distributional properties, such as fat tails and time varying volatilities.
Also, Monte Carlo simulations can be extended to apply over longer holding periods, making it possible to use these techniques for measuring credit risk. However, these techniques are also by far the most expensive computationally. Typically the number of simulations of each random variable needed, in order to get a sample which reasonably approximates the actual distribution, is around 10000.
Like Historical simulation, this is a full revaluation approach. Here, however, rather than drawing samples from the distribution of historical returns, a stochastic process is selected for each of the risk factors, the parameters of the returns process are estimated (again using exponentially weighted moving averages here), and scenarios are determined by simulating price paths for each of these risk factors over the holding period. The portfolio is revalued under each of these scenarios to determine a pmf of portfolio values, given by a histogram. VaR is determined as in historical simulation, using definition (1.1.3).
Apart from the computational time needed, another potential drawback of this method is that, since specific stochastic processes need to be selected for each market variable, the method is very exposed to model risk. Also, sampling variation, due to only being able to perform a limited number of simulations, can be a problem.
6.2 Monte Carlo Simulation using Cholesky decomposition
Each bond price corresponding to the set of standard maturities is simulated according to its variance-covariance matrix Σ, using (2.4.3). Returns are generated by performing a Cholesky decomposition of the matrix to determine the lower triangular matrix L such that Σ = LLᵀ. (A derivation of the Cholesky decomposition is contained in [10].) Then, we can simulate normal random numbers according to this distribution by simulating the independent normal random vector Z ∼ N_k(0, I), and calculating the matrix product X = LZ, since we have that E[X] = L E[Z] = 0, and

    Var[X] = E[(LZ)(LZ)ᵀ]
           = E[L Z Zᵀ Lᵀ]
           = L E[Z Zᵀ] Lᵀ
           = L Lᵀ
           = Σ
An issue to note here is that Cholesky decomposition can only be performed on matrices which are positive semi-definite¹. Theoretically, this will be the case, since the variance of a portfolio, given by (4.2.1), where w is a (nonzero) vector of portfolio weights, can never be negative. However, in practice, when dealing with large covariance matrices with highly correlated components, this theoretical property can break down. Thus covariance matrices should always be checked for positive semi-definiteness before trying to apply Cholesky decomposition (or principal components analysis, which is discussed below). A solution to the problem is described in [11].
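A minimal sketch of the decomposition and simulation step, with a hand-rolled Cholesky factorisation and a hypothetical 2x2 covariance matrix. The `math.sqrt` on the diagonal raises ValueError precisely when the matrix fails to be positive definite, which is one way the issue above surfaces in practice.

```python
import math
import random

def cholesky(a):
    """Lower-triangular L with L L^T = a; math.sqrt raises ValueError
    on the diagonal step if a is not positive definite."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(a[i][i] - s)
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]
    return L

# Hypothetical 2x2 covariance matrix of two bond price returns
sigma = [[0.04, 0.018],
         [0.018, 0.09]]
L = cholesky(sigma)

# One correlated scenario: X = L Z, with Z independent standard normals
z = [random.gauss(0.0, 1.0) for _ in range(2)]
x = [sum(L[i][k] * z[k] for k in range(2)) for i in range(2)]
```

Repeating the last two lines gives draws whose sample covariance converges to Σ, which is the property derived above.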
6.3 Monte Carlo Simulation using Principal Components Analysis
Principal Component Analysis (PCA) provides a way of decreasing the dimensionality of a set of random variables which are highly correlated. It is in fact often possible to describe a very large proportion of the movements
¹A symmetric matrix A is positive semi-definite if and only if bᵀAb ≥ 0 for all nonzero b.
in these random variables using a relatively small number of principal components, which can very effectively decrease the computational time needed for simulation. As could be seen in Figure 2.1, the movements of rates along the yield curve are highly interdependent, thus PCA is often applied to the yield curve. The random variables are now taken to be the returns in zero coupon yields at all 24 standard maturities, as opposed to the bond price returns used in the previous methods considered.
Principal components are hypothetical random variables that are constructed to be uncorrelated with one another. Suppose we have a vector of returns in zero coupon yields (dropping the subscript t for now) x = (x_1, . . . , x_k)ᵀ. The first step is to determine the normalized k × 1 vector of returns y = (y_1, . . . , y_k)ᵀ, where y_i = x_i/σ_i.² y is transformed into a vector of principal components d = (d_1, . . . , d_k), where each principal component is a simple linear combination of the original random vector. At the same time, it is determined how much of the total variation of the original vector is described by each of the principal components.

Suppose the k × k correlation matrix of y is given by Σ_y = [ρ_{ij}], 1 ≤ i, j ≤ k. It is determined from the covariance matrix (4.2.2) of x, found by exponentially weighted moving averages. To define the principal components of y, as described in [12], define the k × 1 vector λ := (λ_1, . . . , λ_k)ᵀ to be the vector of eigenvalues of Σ_y, ordered in decreasing order of magnitude. Also define the k × k matrix Γ := [γ^(1), . . . , γ^(k)], where the columns γ^(i), 1 ≤ i ≤ k, are the orthonormal³ eigenvectors of Σ_y corresponding to λ_i. The principal components are defined by

    d := Γᵀy        (6.3.1)

Since Γ is orthogonal, premultiplying by Γ gives

    ΓΓᵀy = y = Γd        (6.3.2)

    ⇒ y_i = d_1 γ^(1)_i + d_2 γ^(2)_i + . . . + d_k γ^(k)_i        (6.3.3)

In other words, y can be determined as a linear combination of the components. Now, since the new random variables d_i are ordered by the amount of variation they explain⁴, considering the ith entry of y, we get

    y_i = d_1 γ^(1)_i + d_2 γ^(2)_i + . . . + d_m γ^(m)_i + ε_i
        ≈ d_1 γ^(1)_i + d_2 γ^(2)_i + . . . + d_m γ^(m)_i        (6.3.4)

²The normalized random variables are defined as y_i = (x_i − μ_i)/σ_i, where μ_i and σ_i are the mean and standard deviation of x_i, but as before, we assume a mean return of 0. The analysis is done on these normalized variables, and once the components have been determined, we simply transform back to the original form of the data.
³A k × k matrix U is orthonormal iff UᵀU = I = UUᵀ.
⁴The total variation described by the ith eigenvector is measured as λ_i/(Σ_{j=1}^{n} λ_j).
where m < k and the error term ε_i is introduced since we are only using the first m components. These m components are then considered to be the key risk factors, with the rest of the variation in y being considered as noise.

As shown in [13], the principal components d_i are also orthogonal, in other words they are uncorrelated, so their covariance matrix is just the diagonal matrix of their variances. Also, their variances are equal to their corresponding eigenvalues λ_i. This means they can easily be simulated independently, without the need for Cholesky decomposition, which cuts down the computational requirements even further. Now, using the approximation (6.3.4), we can rewrite (6.3.2) as

    y ≈ Γ̃d̃        (6.3.5)

where Γ̃ is the k × m matrix of the first m eigenvectors, and d̃ = (d_1, . . . , d_m)ᵀ is an m × 1 vector. Simulating d̃ enables us to obtain an approximation for y. The more components that are taken, the better the approximation. Here the assumption was made that the yields are lognormally distributed, so by simulating the components according to the normal distribution, we obtain an approximation for the vector of normalized returns y as a linear combination of these (this is also normally distributed). This is transformed back into the form of the returns x, from which we can determine a scenario vector for the yields themselves.
When considering a vector x of yield curve returns, typically m is taken to be 3 (the first 3 components are assumed to describe most of the movement in the yield curve). The eigenvectors of Σ_y describe the modes of fluctuation of y [12]. Specifically, the first eigenvector describes a parallel shift of all rates on the curve, the second eigenvector describes a twist or steepening of the yield curve, and the third describes a butterfly move, i.e. short rates and long rates going in one direction, and intermediate rates going in the opposite direction. The remaining eigenvectors describe other modes of fluctuation. To illustrate this, Figure 6.1 shows the correlation matrix for 30 June 2003, and the decomposition of this matrix into its eigenvectors. Note how the signs of the first eigenvector are all the same: this corresponds to a parallel shift in the yield curve. Likewise, the second component corresponds to half the rates going in one direction and the other half going in the other direction (a twist in the yield curve), and the third corresponds to a butterfly move. Also, the total variation described by the first 3 eigenvectors is seen here to be approximately 90%.
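The component construction and simulation described above can be sketched with numpy, using a small hypothetical correlation matrix in place of the 24 x 24 matrix of yield returns:

```python
import numpy as np

# Hypothetical 3x3 correlation matrix of normalized yield returns
corr = np.array([[1.0, 0.9, 0.7],
                 [0.9, 1.0, 0.9],
                 [0.7, 0.9, 1.0]])

# Eigendecomposition; eigh returns eigenvalues in ascending order, so reverse
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
lam = eigvals[order]          # lambda_1 >= lambda_2 >= ... as in the text
gamma = eigvecs[:, order]     # columns are the orthonormal eigenvectors

m = 2                         # number of components retained
explained = lam[:m].sum() / lam.sum()   # fraction of total variation

# The components are uncorrelated with variances lambda_i, so they are
# simulated independently: d_i ~ N(0, lambda_i), then y ~ Gamma~ d~ (6.3.5)
rng = np.random.default_rng(0)
d = rng.standard_normal(m) * np.sqrt(lam[:m])
y = gamma[:, :m] @ d
```

Even in this toy case the first two components explain well over 95% of the total variation, mirroring the roughly 90% that the first three components explain for the actual yield curve data.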
Figure 6.1: Correlation matrix of yield returns, and the corresponding eigenvalues and eigenvectors (ordered in decreasing magnitude of eigenvalue). It is easily seen that the first three eigenvectors correspond to a parallel shift, a twist, and a butterfly move of the yield curve.
Chapter 7
Results
The way in which a VaR model is assessed statistically is by performing a backtest, which determines whether the number of times VaR is exceeded is consistent with what is expected for the model. The Basel Committee regulatory requirement for the backtesting window for Historical simulation methods is 250 trading days. This is over and above the 250 trading day observation window needed to estimate VaR itself, which means that backtesting can only begin 500 days into the data. There is therefore insufficient historical
data to perform a backtest, and so a qualitative assessment is done instead. The methods will be tested on various hypothetical portfolios, by estimating VaR for these portfolios over the historical period, and comparing the estimates to the actual losses that occurred. Test portfolios consist of individual instruments (a swap or a FRA). A variety of maturities were considered, so that some portfolios were only exposed to short term interest rate risk, and others to more medium term interest rate risk. Both long and short positions were considered.
Note that we are interested in the ability of a method to estimate VaR for a specific portfolio. So, for example, if we are estimating VaR of a portfolio consisting of a 3v6 FRA at time t, we are estimating the value of that portfolio one business day into its life, at time t + 1. When we are at time t + 1, we are interested in estimating the VaR of a new 3v6 FRA, one business day into its life. In all graphs, the histogram shows the daily profit-and-loss (P&L), and the line graphs show the negative daily VaR of the various methods (negative in order to correspond with the losses of the P&L). From here on, methods are abbreviated as follows.
DN Delta-Normal method
HS Classical Historical simulation
HW Hull-White Historical simulation
MC Monte Carlo simulation using Cholesky decomposition
My code for Monte Carlo simulation using PCA unfortunately has a bug in it which, due to time constraints, I was unable to fix, so the results for this method have not been graphed. Looking at the trends of the results, though, the method seems to correspond fairly well to MC, but the results are on a much smaller scale.
Figure 7.1 and Figure 7.2 give results for a long position in a 3v6 FRA with a notional principal of R10m, using all methods, at the 95% and 99% confidence levels respectively. Notice that the period of high volatility towards the end of 2001, which we saw in Figure 2.1, is very prevalent here, and in all the results. In both graphs, apart from very slight sampling errors in MC (which was done using 8000 simulations), DN and MC track one another exactly. This is because the FRA is completely linear in the underlying zero coupon bond prices; both methods assumed that the daily bond price returns were multivariate normally distributed; and both methods are using the same volatility and correlation estimates. In this case, therefore, the use of MC is not justified at all, since the computational time required was enormous in comparison to that of DN. This was the case for all portfolios considered, since swaps are completely linear in the underlying bond prices as well.

Now, to consider the performance of the methods, notice how quickly MC and DN react to increases in volatility. This is due to the exponentially weighted moving average technique, which places a very high weighting on recent events. Often these methods can overreact to a sudden increase in volatility because of this. Considering Figure 7.1 and Figure 7.2, DN and MC do seem to be overreacting to large losses, relative to the other methods. DN and MC seem to be performing similarly at both the 95% and 99% confidence levels (although of course 99% VaR is necessarily higher than 95% VaR).
Consider the performance of HS. At the 95% confidence level, although VaR is not underestimated or overestimated very badly, the method just doesn't react to changes in P&L. At the 99% confidence level, the method is reacting slightly more, but is overestimating VaR badly. The reason it is reacting more is because the 99% confidence level takes us further into the tails of the distribution. Essentially, what HS does is to take an unweighted average of the 1st or the 5th percentile of the estimated value distribution, based on a 250 day window. Considering Figure 7.2, the volatile period towards the end of 2001 will therefore contribute significantly, up until the day it drops out of the observation window, at which point the VaR drops dramatically.

The drop in VaR in December 2002 appears to correspond to the
drop in VaR of the other methods. However, the real reason for the drop is that this is exactly the point where the very volatile period from a year ago begins to drop out of the window. By the time we get to February 2003, HS is at its lowest, whilst the other methods are at their highest. At this point, however, it begins to increase again, since there are now enough days in the new window which had relatively high returns. HS then remains high, even though the last four months were very quiet.
Finally, considering HW at the 95% confidence level, we see that this method is a major improvement on HS. In this particular graph, it performs better than all the other methods. It is reacting well to changes in volatility, yet not overreacting, as is the case with DN and MC.

At the 99% confidence level, however, HW isn't performing well at all, and is overreacting to changes in volatility even more than DN and HS. There is no obvious explanation for this, and it perhaps merits further investigation.
Figure 7.3 and Figure 7.4 show results for a short position in a 3 year interest rate swap, with a notional principal of R1m and quarterly payments, using the 4 methods, at the 95% and 99% confidence levels respectively. Again we see that December 2001 is prevalent. Considering the 95% confidence level, we see that DN and MC are performing very well over most of the period, except for a few months in early 2002 where they are overestimating VaR. The performance of HS in this case is almost identical to DN and MC, and, in this graph, even HS seems to be performing reasonably, although it is overestimating VaR until the volatile period drops out of the window. The 99% confidence level shows up bigger differences in the methods, with DN and MC definitely performing better than HS and HW. HS is again completely overestimating VaR, for the reasons explained above. The deeper into the tails we go, the greater the effect of December 2001 will be on HS, since the 1st percentile corresponds to a worse loss than the 5th percentile, which means VaR is overestimated even more.

Figure 7.5 shows results for a long position in a 5 year interest rate swap, with a notional principal of R1m and quarterly payments, at the 95% confidence level. (MC is not included, since it is equivalent to DN again.) This longer dated swap is exposed to bond prices at three monthly intervals out to five years. We see a similar pattern to before, with HW reacting best to changes in the P&L at the 95% confidence level, but the performance of all methods seems to be satisfactory.

All in all, for the portfolios which were considered, it seems as though the methods are commonly overestimating VaR. Although a VaR method is assessed by the number of times VaR was exceeded, too few exceedances are not a good thing either, since an overestimation of VaR would lead to a capital requirement for market risk which is higher than it should be (the bank should, in this case, rather be spending some of its capital on risk taking).
Figure 7.1: Long 3v6 FRA at 95% confidence level
Figure 7.2: Long 3v6 FRA at 99% confidence level
Figure 7.3: Short 3 year swap at 95% confidence level
Figure 7.4: Short 3 year swap at 99% confidence level
Figure 7.5: Long 5 year swap at 95% confidence level
Chapter 8
Conclusions
The objective of this project was to implement the various VaR methods, and then to compare the performance of the methods on linear interest rate derivatives. Once the methods had been implemented, it proved to be difficult to draw conclusive results about which method is best, using only two years' worth of historical data (which contains one incredibly volatile period). However, a lot of the typical characteristics of these methods were evident in the results obtained. One conclusion that can be drawn is that HS performs the worst. This is despite the fact that the vast majority of banks worldwide use this approach. HW seems to be, in general, an improvement on HS. Also, it is probably considered more feasible than MC (based on Cholesky decomposition) when dealing with large portfolios, since the computational time required for the latter method is incredibly high. Monte Carlo simulation using PCA seems like a promising alternative to Cholesky decomposition, and it would have been very interesting to be able to compare the performance of these two methods, since the PCA code runs significantly faster. To expand on the work done in this project, further research into the performance of these VaR methods for nonlinear interest rate derivatives could be done. However, a longer period of historical data should definitely be considered in order to be able to draw more conclusive results.
Bibliography
[1] Basel Committee on Banking Supervision. Amendment to the capital accord to incorporate market risks. www.bis.org/publ/bcbs24.htm, January 1996.

[2] John Hull and Alan White. Incorporating volatility updating into the historical simulation method for V@R. Journal of Risk, Fall 1998.

[3] Philippe Jorion. Value at Risk: the new benchmark for controlling market risk. McGraw-Hill, second edition, 2001.

[4] J.P.Morgan and Reuters. RiskMetrics - Technical Document. J.P.Morgan and Reuters, New York, fourth edition, December 18, 1996. www.riskmetrics.com.

[5] John Hull and Alan White. V@R when daily changes are not normal. Journal of Derivatives, Spring 1998.

[6] James M. Hyman. Accurate monotonicity preserving cubic interpolation. SIAM Journal of Statistics and Computation, 4(4):645-654, 1983.

[7] John Hull. Options, Futures, and Other Derivatives. Prentice Hall, fifth edition, 2002.

[8] Robert Jarrow and Stuart Turnbull. Derivative Securities. Second edition, 2000.

[9] Graeme West. Risk Measurement. Financial Modelling Agency, [email protected].

[10] Gene Golub and Charles Van Loan. Matrix Computations. Third edition, 1996.

[11] Nicholas J. Higham. Computing the nearest correlation matrix: a problem from finance. IMA Journal of Numerical Analysis, 22:329-343, 2002.
[12] Glyn A. Holton. Value at Risk: Theory and Practice. Academic Press, 2003.

[13] Carol Alexander. Orthogonal methods for generating large positive semi-definite covariance matrices. Discussion Papers in Finance, 2000-06, 2000.
Appendix A
Appendix
A.1 The EWMA Model
The exponentially weighted moving average (EWMA) model is the model used by [4] to determine historical volatility and correlation estimates. A moving average of historical observations is used, where the latest observations carry the highest weight in the estimates. This is taken from [9]. If we have historical data for market variables x_0, x_1, . . . , x_n, first determine the geometric returns of these variables

    u_i(x) = ln(x_i / x_{i−1}),    1 ≤ i ≤ n

For time 0, define

    σ_0(x)² = 10 Σ_{i=1}^{25} u_i(x)²

    Cov_0(x, y) = 10 Σ_{i=1}^{25} u_i(x) u_i(y)

For 1 ≤ i ≤ n, the volatilities and covariances are updated recursively according to the decay factor λ, which, in this case, is defined to be λ = 0.94. The updating equations are

    σ_i(x)² = λ σ_{i−1}(x)² + (1 − λ) u_i(x)² · 250

    Cov_i(x, y) = λ Cov_{i−1}(x, y) + (1 − λ) u_i(x) u_i(y) · 250

These equations give an annualized measure of the volatility and covariance. To determine correlations, for 0 ≤ i ≤ n, set

    ρ_i(x, y) = Cov_i(x, y) / (σ_i(x) σ_i(y))
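The recursion above can be sketched as follows, on a toy series of daily log returns. For brevity the initial estimate here equally weights all available returns (the text seeds from the first 25 returns); both the seed and the update annualize by 250 trading days.

```python
import math

lam = 0.94                                  # RiskMetrics decay factor
returns = [0.001, -0.002, 0.0015, 0.003]    # hypothetical daily log returns u_i

# Initial estimate: equally weighted over the available returns, annualized
var0 = 250.0 * sum(u * u for u in returns) / len(returns)

# Recursion: sigma_i^2 = lam * sigma_{i-1}^2 + (1 - lam) * u_i^2 * 250
var = var0
for u in returns:
    var = lam * var + (1.0 - lam) * u * u * 250.0
sigma = math.sqrt(var)                      # annualized volatility estimate
```

The covariance recursion is identical in shape, with u_i(x) u_i(y) in place of u_i².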
A.2 Cash Flow Mapping
This is adapted from the description in [4]. Suppose a cashflow occurs at the non-standard maturity t_2, where t_2 is bracketed by the standard maturities t_1 and t_3. The cashflow occurring at time t_2 is mapped onto two cashflows occurring at t_1 and t_3 as follows. Firstly, since bond prices don't interpolate without further assumptions, the interpolation is done on yields. Let y_1, y_2, and y_3 be the continuously compounded yields corresponding to maturities t_1, t_2, and t_3 respectively, so B(t_1) = exp(−y_1 t_1), B(t_2) = exp(−y_2 t_2), B(t_3) = exp(−y_3 t_3). Let σ_1, σ_2 and σ_3 be the volatilities of these bond prices. The procedure is to firstly linearly interpolate between y_1 and y_3 to determine y_2, and to linearly interpolate between σ_1 and σ_3 to determine σ_2. We want a portfolio consisting of the two assets B(t_1) and B(t_3), with relative weights α and (1 − α) invested in each, such that the volatility of this portfolio is equivalent to σ_2. The variance of the portfolio is given by (4.2.1), and so we have

    σ_2² = [α  (1 − α)] [ σ_1²           ρ_13 σ_1 σ_3 ] [ α       ]
                        [ ρ_13 σ_1 σ_3   σ_3²         ] [ (1 − α) ]

         = α² σ_1² + 2α(1 − α) ρ_13 σ_1 σ_3 + (1 − α)² σ_3²

where ρ_13 is the correlation coefficient of B(t_1) and B(t_3). Rearranging the above equation gives a quadratic equation in α:

    α² σ_1² + 2α(1 − α) ρ_13 σ_1 σ_3 + (1 − α)² σ_3² − σ_2² = 0

    ⇒ (σ_1² − 2ρ_13 σ_1 σ_3 + σ_3²) α² + (2ρ_13 σ_1 σ_3 − 2σ_3²) α + (σ_3² − σ_2²) = 0

    ⇒ a α² + b α + c = 0

where

    a = σ_1² − 2ρ_13 σ_1 σ_3 + σ_3²
    b = 2ρ_13 σ_1 σ_3 − 2σ_3²
    c = σ_3² − σ_2²

which is then solved for α. α is taken to be the smaller of the two roots of this equation, in order to satisfy the third condition of the map, i.e. that the standardized cash flows have the same sign as the original cash flow.
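The solution for α can be checked numerically with hypothetical volatilities. With ρ_13 = 1 and σ_2 exactly midway between σ_1 and σ_3, the smaller root comes out to 0.5, as one would expect from linear interpolation:

```python
import math

# Hypothetical inputs: bond price vols at the bracketing standard maturities,
# the interpolated vol at the cash flow's actual maturity, and the correlation
sigma1, sigma3 = 0.10, 0.30
sigma2 = 0.20
rho13 = 1.0

# Coefficients of a*alpha^2 + b*alpha + c = 0, as derived above
a = sigma1 ** 2 - 2.0 * rho13 * sigma1 * sigma3 + sigma3 ** 2
b = 2.0 * rho13 * sigma1 * sigma3 - 2.0 * sigma3 ** 2
c = sigma3 ** 2 - sigma2 ** 2

# Take the smaller root, so the mapped cash flows keep the original sign
disc = math.sqrt(b * b - 4.0 * a * c)
alpha = min((-b + disc) / (2.0 * a), (-b - disc) / (2.0 * a))
```

The cash flow is then split as α at t_1 and (1 − α) at t_3.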
A.3 The Variance of the Return on a Portfolio
The variance of the return of a portfolio of assets, based on Markowitz portfolio theory, is given by the following, which is derived in [9].

    Var[R_t] = E[(R_t − E[R_t])²]

             = E[( Σ_{i=1}^{k} w_i (R_{t,i} − E[R_{t,i}]) )²]

             = E[ Σ_{i=1}^{k} Σ_{j=1}^{k} w_i w_j (R_{t,i} − E[R_{t,i}])(R_{t,j} − E[R_{t,j}]) ]

             = Σ_{i=1}^{k} Σ_{j=1}^{k} w_i w_j E[(R_{t,i} − E[R_{t,i}])(R_{t,j} − E[R_{t,j}])]

             = Σ_{i=1}^{k} Σ_{j=1}^{k} w_i w_j Cov[R_{t,i}, R_{t,j}]

             = Σ_{i=1}^{k} Σ_{j=1}^{k} w_i w_j ρ_{i,j} σ_i σ_j        (A.3.1)

where σ_i is the standard deviation (volatility) of the ith risk factor and ρ_{i,j} is the correlation coefficient of R_{t,i} and R_{t,j}, defined by ρ_{i,j} := Cov[R_{t,i}, R_{t,j}] / (σ_i σ_j). (A.3.1) can be written in matrix form as

    Var[R_t] = wᵀΣw

where Σ is the variance-covariance matrix given by

    Σ = [ σ_1²              σ_1 σ_2 ρ_{1,2}   . . .   σ_1 σ_k ρ_{1,k} ]
        [ σ_2 σ_1 ρ_{1,2}   σ_2²              . . .   σ_2 σ_k ρ_{2,k} ]
        [ ...               ...               . . .   ...             ]
        [ σ_k σ_1 ρ_{1,k}   σ_k σ_2 ρ_{2,k}   . . .   σ_k²            ]
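The double-sum form (A.3.1) and the matrix form agree, which is easy to confirm numerically with hypothetical weights, volatilities and correlations:

```python
# Hypothetical 3-asset portfolio
w = [0.5, 0.3, 0.2]
sig = [0.10, 0.20, 0.15]
rho = [[1.0, 0.8, 0.5],
       [0.8, 1.0, 0.6],
       [0.5, 0.6, 1.0]]
k = len(w)

# Double-sum form (A.3.1): sum_i sum_j w_i w_j rho_ij sigma_i sigma_j
var_sum = sum(w[i] * w[j] * rho[i][j] * sig[i] * sig[j]
              for i in range(k) for j in range(k))

# Matrix form: build Sigma[i][j] = sigma_i sigma_j rho_ij, then w^T Sigma w
Sigma = [[sig[i] * sig[j] * rho[i][j] for j in range(k)] for i in range(k)]
var_mat = sum(w[i] * Sigma[i][j] * w[j] for i in range(k) for j in range(k))
```

Both routes give the same portfolio variance, which is the quantity the Delta-Normal method scales to obtain VaR.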
Appendix B
Appendix
B.1 VBA Code: Bootstrapping Module
Option Explicit
Public Sub generateCurves()
Dim numRates As Integer ' number of rates included in the array of market yields
Dim i As Integer, j As Integer, col As Integer
Dim writerow, writecol As Integer
Dim d As Date
Dim curveCount As Integer, readrow As Integer
Dim SCB() As Object
Dim curve() As Double
' declare an array of standard days (out to 30 years) for which NACC rates will be written
' to the spreadsheet (these correspond approximately to the 365 day count used in SA)
Dim stdVerts As Variant
stdVerts = Array(1, 30, 91, 182, 273, 365, 456, 547, 638, 730, 1095, 1460, 1825, _
2190, 2555, 2920, 3285, 3650, 4380, 5475, 6205, 7300, 9125, 10950)
Const stdVertCount As Integer = 24
Application.ScreenUpdating = False
numRates = 24
' determine the number of curves in the historical data
With Sheets("Hist Data")
readrow = 2
curveCount = 0
Do Until .Cells(readrow + 1, 1) = ""
readrow = readrow + 1
curveCount = curveCount + 1
Loop
ReDim SCB(1 To curveCount)
For i = 1 To curveCount
Set SCB(i) = CreateObject("YieldCurve.SwapCurveBootstrap")
Next i
' read data into the SCB objects
For i = 1 To curveCount
Application.StatusBar = "Processing data for curve " & i
col = 1
SCB(i).Effective_Date = .Cells(2 + i, 1)
SCB(i).IntMethod = "MPC"
For j = 1 To numRates
' avoid leaving gaps in the arrays in the case where some rates are missing
If .Cells(2 + i, 1 + j) <> "" Then
SCB(i).Values(col) = .Cells(2 + i, 1 + j)
SCB(i).Rate_Type(col) = .Cells(2, 1 + j)
SCB(i).Rate_Term_Description(col) = .Cells(1, 1 + j)
col = col + 1
End If
Next j
SCB(i).Top_Index = col - 1
Next i
End With ' Sheets("Hist Data")
' generate the NACC spot and forward curves and output
With Sheets("NACC Swap Curves")
.Range(.Cells(1, 1), .Cells(11000, 256)).ClearContents
For writecol = 1 To stdVertCount
.Cells(1, writecol + 1) = stdVerts(writecol - 1)
Next writecol
writerow = 2
For i = 1 To curveCount
Application.StatusBar = "Generating curve " & i
ReDim curve(1 To (SCB(i).Curve_Termination_Date - SCB(i).Effective_Date + 1))
For d = SCB(i).Effective_Date + 1 To SCB(i).Curve_Termination_Date
curve(d - SCB(i).Effective_Date) = SCB(i).Output_Rate(d)
Next d
.Cells(writerow, 1) = SCB(i).Effective_Date
.Cells(writerow, 1).NumberFormat = "d-mmm-yy"
For writecol = 1 To stdVertCount
.Cells(writerow, writecol + 1) = curve(stdVerts(writecol - 1))
Next writecol
writerow = writerow + 1
Next i
End With ' Sheets("NACC Swap Curves")
For i = 1 To curveCount
Set SCB(i) = Nothing
Next i
Application.StatusBar = "Done!"
Beep
End Sub
B.2 VBA Code: VaR Module
Option Explicit
Private P() As Double
Private vol() As Double, covol() As Double
Private volR() As Double, covolR() As Double
Private dates() As Date
Private n As Integer ' # rows
Private stdVerts() As Variant
Const m As Integer = 24 ' # cols
Const window = 250
Public Sub main()
Const lambda As Double = 0.94, confidenceLevel As Double = 0.95 ' 0.95
Dim i As Integer, j As Integer, k As Integer, readrow As Integer, readcol As Integer
Dim data1 As Range, data2 As Range, data3 As Range
' data1 is a row of dates, data2 is the matrix of market variables through time.
stdVerts = Array(1, 30, 91, 182, 273, 365, 456, 547, 638, 730, 1095, 1460, 1825, _
2190, 2555, 2920, 3285, 3650, 4380, 5475, 6205, 7300, 9125, 10950)
With Sheets("NACC swap curves")
readrow = 2
n = 0
Do Until .Cells(readrow + 1, 1) = ""
readrow = readrow + 1
n = n + 1
Loop
n = n + 1
Set data1 = .Range(.Cells(2, 1), .Cells(n, 1))
Set data2 = .Range(.Cells(2, 2), .Cells(n, m))
ReDim dates(0 To n)
For i = 0 To n - 1
dates(i) = data1.Cells(i + 1, 1)
Next i
dates(n) = dates(n - 1) + 1
End With
Application.StatusBar = "Estimating Parameters..."
Call emwa(data2, lambda)
With Sheets("Disc factors")
Set data3 = .Range(.Cells(3, 2), .Cells(n + 1, m))
End With
Call profitAndLoss(data2)
Application.StatusBar = "Calculating Historical VaR..."
Call histVar(data2, confidenceLevel)
Call deltaNormal(data2, confidenceLevel)
Call monteCarlo(data2, 0.99)
Call doPCA(data2, confidenceLevel)
Application.StatusBar = "Done!"
End Sub
Private Sub emwa(data As Range, lambda As Double)
Dim i As Integer, j As Integer, k As Integer
Dim sum() As Double, sum2() As Double
ReDim P(1 To n - 1, 1 To m)
ReDim vol(0 To n - 1, 1 To m)
ReDim covol(0 To n - 1, 1 To m, 1 To m)
For i = 1 To n - 1
For j = 1 To m
P(i, j) = (data.Cells(i, j) - _
data.Cells(i + 1, j)) * stdVerts(j - 1) / 365 ' bond price log returns
Next j
Next i
' get initial vols and covols
ReDim sum(1 To m)
ReDim sum2(1 To m, 1 To m)
For k = 1 To 25
For j = 1 To m
sum(j) = sum(j) + P(k, j) ^ 2
For i = 1 To m
sum2(j, i) = sum2(j, i) + P(k, j) * P(k, i)
Next i
Next j
Next k
For j = 1 To m
volR(0, j) = (10 * sum(j)) ^ 0.5
For k = 1 To m
covol(0, j, k) = sum2(j, k) * 10
Next k
Next j
' rolling calculator for vols and covols
For i = 1 To n - 1
For j = 1 To m
volR(i, j) = (lambda * volR(i - 1, j) ^ 2 + (1 - lambda) * (P(i, j) ^ 2) * 250) ^ 0.5
For k = 1 To m
covol(i, j, k) = lambda * covol(i - 1, j, k) + _
(1 - lambda) * P(i, j) * P(i, k) * 250
Next k
Next j
Next i
Call displayVols("vols", vol, dates, n, m)
Call displayCovars("rate covar", covolR, n, m)
End Sub
Public Sub histVar(data As Range, confidenceLevel As Double)
Dim i As Integer, j As Integer, today As Integer, vposn As Integer, k As Integer
Dim hs_v(1 To window) As Double ' portfolio values for each scenario
Dim hw_v(1 To window) As Double
Dim hsPercentile As Double, hwPercentile As Double
Dim hsRateScenario(0 To m - 1) As Double ' Historical simulation
Dim hwRateScenario(0 To m - 1) As Double ' Hull-White Historical
Dim hsVar() As Double, hwVar() As Double
Dim hsSum As Double, hwSum As Double, percentile As Double
ReDim hsVar(1 To n - window)
ReDim hwVar(1 To n - window)
' don't need to value the new instrument each day, only 1 day later under each scenario
For today = window + 1 To n
Application.StatusBar = "Busy with historical VaR for " & dates(today - 1)
vposn = 1
hsSum = 0
hwSum = 0
For i = today - window To today - 1
F o r j = 1 T o m
hsRateScenario(j - 1) = data.Cells(today, j) + _
data.Cells(i + 1, j) - data.Cells(i, j)
hwRateScenario(j - 1) = data.Cells(today, j) + _
vol(today - 1, j) / vol(i - 1, j) * (data.Cells(i + 1, j) - data.Cells(i, j))
Next j
' contract entered into today; we are interested in how much we could lose on it tomorrow:
'hs_v(vposn) = valueFRA(hsRateScenario, stdVerts, today, dates(today - 1), dates(today))
'hs_v(vposn) = valueSwap1(hsRateScenario, stdVerts, today, dates(today - 1), dates(today))
hs_v(vposn) = valueSwap2(hsRateScenario, stdVerts, today, dates(today - 1), dates(today))
'hw_v(vposn) = valueFRA(hwRateScenario, stdVerts, today, dates(today - 1), dates(today))
'hw_v(vposn) = valueSwap1(hwRateScenario, stdVerts, today, dates(today - 1), dates(today))
hw_v(vposn) = valueSwap2(hwRateScenario, stdVerts, today, dates(today - 1), dates(today))
hsSum = hsSum + hs_v(vposn)
hwSum = hwSum + hw_v(vposn)
vposn = vposn + 1
Next i
quickSort hs_v
quickSort hw_v
percentile = window * (1 - confidenceLevel)
If Round(percentile) = percentile Then
hsVar(today - window) = hsSum / (n - window) - hs_v(percentile)
hwVar(today - window) = hwSum / (n - window) - hw_v(percentile)
Else ' interpolate
hsVar(today - window) = hsSum / (n - window) - 0.5 * (hs_v(Round(percentile)) _
+ hs_v(Round(percentile) - 1))
hwVar(today - window) = hwSum / (n - window) - 0.5 * (hw_v(Round(percentile)) _
+ hw_v(Round(percentile) - 1))
End If
Next today
With Sheets("long swap")
For i = 1 To n - window
.Cells(window + 3 + i, 2) = hsVar(i)
.Cells(window + 3 + i, 4) = hwVar(i)
Next i
End With
End Sub
Public Sub doPCA(data As Range, confidenceLevel As Double)
Dim i As Integer, readrow As Integer, corr() As Double
Dim today As Integer, pcaCorr() As Double, pcaCov() As Double, k As Integer
Dim v() As Double ' the eigenvects corresp to the 3 largest evals
Dim sqrtLambda() As Double, lambdaTruncated(1 To 3, 1 To 3) As Double
Dim rates() As Double, j As Integer, sum As Double, percentile As Double
Dim rateScenario() As Double, vols() As Double
Dim u(1 To 3) As Double, z(1 To 3, 1 To 1) As Double
Dim vals() As Double, mcVar() As Double, result As Variant
ReDim pcaCorr(1 To m, 1 To m), pcaCov(1 To m, 1 To m)
ReDim sqrtLambda(1 To m, 1 To m)
ReDim v(1 To m, 1 To 3), rates(0 To m - 1)
ReDim rateScenario(0 To m - 1), vals(1 To 8000)
ReDim mcVar(1 To n), vols(0 To m - 1)
Dim A As Double
Dim summ As Double
Dim l As Integer
Randomize
readrow = 2
For today = 1 To n
sum = 0
Application.StatusBar = "Busy with PCA for " & dates(today - 1)
With Sheets("rate covar")
Call covarToCorr(corr, .Range(.Cells(readrow, 1), .Cells(readrow + m, m)), m)
For j = 1 To m
rates(j - 1) = data.Cells(today, j)
vols(j - 1) = volR(today - 1, j)
Next j
A = 0
For j = 1 To 24
A = A + vols(j - 1) ^ 2
Next j
A = Sqr(A / 24 / 250)
Call findComponents(sqrtLambda, lambdaTruncated, pcaCorr, v, m, corr)
' need to convert the corr matrix into a cov matrix
For i = 1 To m
For j = 1 To m
pcaCov(i, j) = pcaCorr(i, j) * vols(i - 1) * vols(j - 1)
Next j
Next i
' now we can simulate the components (which are independent)
' take square roots of the 3 retained eigenvalues once, before the simulation loop
For i = 1 To 3
lambdaTruncated(i, i) = lambdaTruncated(i, i) ^ 0.5
Next i
For k = 1 To 8000
For i = 1 To 3
u(i) = Rnd()
If u(i) > 0.999999 Then u(i) = 0.999999
If u(i) < 0.000001 Then u(i) = 0.000001
z(i, 1) = Application.WorksheetFunction.NormSInv(u(i)) ' standard normal random numbers
Next i
result = Application.WorksheetFunction.MMult(v, lambdaTruncated)
result = Application.WorksheetFunction.MMult(result, z)
summ = 0
For j = 1 To m
rateScenario(j - 1) = rates(j - 1) + A * result(j, 1)
Next j
'vals(k) = valueFRA(rateScenario, stdVerts, today, dates(today - 1), dates(today))
'vals(k) = valueSwap1(rateScenario, stdVerts, today, dates(today - 1), dates(today))
vals(k) = valueSwap2(rateScenario, stdVerts, today, dates(today - 1), dates(today))
sum = sum + vals(k)
Next k
readrow = readrow + m
End With
quickSort vals
percentile = 8000 * (1 - confidenceLevel)
If Round(percentile) = percentile Then
mcVar(today) = sum / 8000 - vals(percentile)
Else ' interpolate
mcVar(today) = sum / 8000 - 0.5 * (vals(Round(percentile)) _
+ vals(Round(percentile) - 1))
End If
Next today
With Sheets("long swap")
For i = 1 To n
.Cells(3 + i, 15) = mcVar(i)
Next i
End With
End Sub
Public Sub findComponents(ByRef sqrtLambda() As Double, ByRef lambdaTruncated() _
As Double, ByRef pcaCorr() As Double, ByRef v() As Double, size As Integer, _
corr() As Double)
Dim evals As Variant, evecs As Variant
Dim i As Integer, j As Integer
Dim result As Variant, inv As Variant
Dim eigval() As Double, count As Integer
ReDim eigval(1 To size)
result = Application.Run("NTpca", corr) ' get eigenvalues and eigenvectors
For i = 1 To size
eigval(i) = result(1, i)
For j = 1 To size
If i = j Then
sqrtLambda(i, i) = result(1, i) ^ 0.5
End If
Next j
Next i
For count = 1 To 3
For i = 2 To size + 1
v(i - 1, count) = result(i, count)
Next i
Next count
Call getCovPC(lambdaTruncated, eigval)
With Application.WorksheetFunction
result = .MMult(v, lambdaTruncated)
inv = .Transpose(v)
result = .MMult(result, inv)
End With
' return the pca correlation matrix
For i = 1 To size
For j = 1 To size
pcaCorr(i, j) = result(i, j)
Next j
Next i
End Sub
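findComponents keeps the three eigenvectors with the largest eigenvalues and rebuilds a reduced-rank correlation matrix from them. A minimal Python/NumPy sketch of that truncation (the function name and interface are illustrative, not part of the thesis code):

```python
import numpy as np

def top_components(corr, k=3):
    # eigendecomposition of a symmetric matrix; eigh returns ascending eigenvalues
    evals, evecs = np.linalg.eigh(np.asarray(corr, dtype=float))
    idx = np.argsort(evals)[::-1][:k]
    lam, v = evals[idx], evecs[:, idx]
    # rank-k reconstruction V diag(lambda) V^T, as findComponents does for k = 3
    approx = v @ np.diag(lam) @ v.T
    return lam, v, approx
```

With k equal to the full dimension the reconstruction is exact; with k = 3 it captures the level, slope and curvature factors that dominate yield-curve movements.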
Private Sub getCovPC(newCov() As Double, eigval() As Double)
' create a new covariance matrix for the principal components
Dim i As Integer, j As Integer
For i = 1 To 3
For j = 1 To 3
If i = j Then
newCov(i, j) = eigval(i)
Else: newCov(i, j) = 0
End If
Next j
Next i
End Sub
Public Sub monteCarlo(data As Range, confidenceLevel As Double)
Dim z(1 To 24, 1 To 8000) As Double, covar(1 To m, 1 To m) As Double
Dim currRates(0 To m - 1) As Double, v(0 To m - 1) As Double
Dim i As Integer, j As Integer, k As Integer
Dim rateScenario(0 To m - 1) As Double, vals(1 To 8000) As Double
Dim sum As Double, vposn As Integer, mcVar() As Double, today As Integer
Dim seed1 As Long, seed2 As Long, percentile As Double
Dim u1 As Double, u2 As Double, z1 As Double, z2 As Double
ReDim mcVar(1 To n)
Randomize
For today = 1 To n
sum = 0
Application.StatusBar = "Busy with Monte Carlo VaR for " & dates(today - 1)
' get the rates and bond price vols for today
For j = 1 To m
currRates(j - 1) = data.Cells(today, j)
v(j - 1) = vol(today - 1, j) / Sqr(250) ' daily vols
Next j
' determine the covariance matrix for today
For i = 1 To m
For j = 1 To m
covar(i, j) = covol(today - 1, i, j) / 250
Next j
Next i
' generate 8000 rate scenarios for today and revalue the portfolio under each
Call generate(z, 24, 8000)
Call getCorrNums(z, covar, v) ' converts N(0,I) numbers to N(0,covar) numbers
For k = 1 To 8000
For j = 1 To m
rateScenario(j - 1) = currRates(j - 1) - 365 / stdVerts(j - 1) * v(j - 1) * z(j, k)
Next j
'vals(k) = valueFRA(rateScenario, stdVerts, today, dates(today - 1), dates(today))
'vals(k) = valueSwap1(rateScenario, stdVerts, today, dates(today - 1), dates(today))
vals(k) = valueSwap2(rateScenario, stdVerts, today, dates(today - 1), dates(today))
sum = sum + vals(k)
Next k
quickSort vals
percentile = 8000 * (1 - confidenceLevel)
If Round(percentile) = percentile Then
mcVar(today) = sum / 8000 - vals(percentile)
Else ' interpolate
mcVar(today) = sum / 8000 - 0.5 * (vals(Round(percentile)) _
+ vals(Round(percentile) - 1))
End If
Sheets("long swap").Cells(3 + today, 9) = mcVar(today)
Next today
End Sub
Public Sub generate(ByRef nrmlNums() As Double, rows As Integer, cols As Integer)
' nrmlNums has dimensions 24x8000
Dim i As Integer, j As Integer, u As Double
For i = 1 To rows
For j = 1 To cols
u = Rnd()
If u > 0.999999 Then
u = 0.999999
ElseIf u < 0.000001 Then
u = 0.000001
End If
nrmlNums(i, j) = Application.WorksheetFunction.NormSInv(u)
Next j
Next i
End Sub
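generate fills the matrix by inverse-transform sampling: uniform draws are clamped away from 0 and 1 so that the inverse normal CDF stays finite, then mapped through NormSInv. An illustrative Python equivalent using the standard library (names are not from the thesis code):

```python
import random
from statistics import NormalDist

def generate(rows, cols, eps=1e-6):
    # inverse-transform sampling: push clamped uniforms through the inverse normal CDF
    nd = NormalDist()
    return [[nd.inv_cdf(min(max(random.random(), eps), 1.0 - eps))
             for _ in range(cols)]
            for _ in range(rows)]
```

The clamp bounds the output at roughly ±4.75 standard deviations, which matches the 0.000001/0.999999 cutoffs used in the VBA.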
Public Sub getCorrNums(ByRef nrmlNums() As Double, cov() As Double, vol() As Double)
Dim i As Integer, j As Integer
' converts N(0,I) numbers to N(0,cov) numbers using Cholesky decomposition
Dim chol As Variant
Dim X() As Double
Dim corr(1 To 24, 1 To 24) As Double
ReDim X(1 To 24, 1 To 8000)
For i = 1 To 24
For j = 1 To 24
corr(i, j) = cov(i, j) / (vol(i - 1) * vol(j - 1))
Next j
Next i
chol = cholesky(corr)
' get corresponding covar matrix
For i = 1 To 24
For j = 1 To 24
cov(i, j) = corr(i, j) * vol(i - 1) * vol(j - 1)
Next j
Next i
Call matProd(X, chol, nrmlNums)
For i = 1 To 24
For j = 1 To 8000
nrmlNums(i, j) = X(i, j) ' these are distributed N(0,cov)
Next j
Next i
For i = 1 To 24
For j = 1 To 24
With Sheets("test")
.Cells(i, j) = corr(i, j)
.Cells(i, j + 26) = chol(i, j)
End With
Next j
Next i
End Sub
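getCorrNums depends on a lower-triangular factor L with corr = L L^T; multiplying L into a matrix of independent N(0,1) draws (what matProd does above) produces draws with the desired correlation. A compact Python sketch of the same decomposition (illustrative name and list-of-lists interface, not the thesis code):

```python
import math

def chol_lower(a):
    # lower-triangular L with a = L L^T; raises if a is not positive definite
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        s = a[j][j] - sum(L[j][k] ** 2 for k in range(j))
        if s <= 0.0:
            raise ValueError("matrix is not positive definite")
        L[j][j] = math.sqrt(s)
        for i in range(j + 1, n):
            L[i][j] = (a[i][j] - sum(L[i][k] * L[j][k] for k in range(j))) / L[j][j]
    return L
```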
Function cholesky(Mat)
' performs the Cholesky decomposition A = L * L^T
Dim A, l() As Double, S As Double
Dim rows As Integer, cols As Integer, i As Integer, j As Integer, k As Integer
A = Mat
rows = UBound(A, 1)
cols = UBound(A, 2)
' begin Cholesky decomposition
ReDim l(1 To rows, 1 To rows)
For j = 1 To rows
S = 0
For k = 1 To j - 1
S = S + l(j, k) ^ 2
Next k
l(j, j) = A(j, j) - S
If l(j, j) <= 0 Then Exit Function ' matrix not positive definite
l(j, j) = Sqr(l(j, j))
For i = j + 1 To rows
S = 0
For k = 1 To j - 1
S = S + l(i, k) * l(j, k)
Next k
l(i, j) = (A(i, j) - S) / l(j, j)
Next i
Next j
cholesky = l
End Function
' (the tail of cholesky above and this sub header were lost at a page break in the
' source; both are reconstructed, and the name deltaNormal is assumed)
Public Sub deltaNormal(data As Range, confidenceLevel As Double)
Dim rates(0 To m - 1) As Double, v(0 To m - 1) As Double
Dim i As Integer, j As Integer, today As Integer, vposn As Integer
Dim covar(1 To m, 1 To m) As Double, variance
Dim dnVar() As Double, stddev As Double
ReDim dnVar(1 To n)
vposn = 1
For today = 1 To n ' window + 1 To n - 1
Application.StatusBar = "Busy with Delta-Normal VaR for " & dates(today - 1)
For j = 1 To m
rates(j - 1) = data.Cells(today, j)
v(j - 1) = vol(today - 1, j) / Sqr(250)
Next j
' determine the covariance matrix for today
For i = 1 To m
For j = 1 To m
covar(i, j) = covol(today - 1, i, j) / 250
Next j
Next i
'stddev = valueFRA(rates, stdVerts, today, dates(today - 1), dates(today), v, covar)
'stddev = valueSwap1(rates, stdVerts, today, dates(today - 1), dates(today), v, covar)
stddev = valueSwap2(rates, stdVerts, today, dates(today - 1), dates(today), v, covar)
dnVar(vposn) = 2.326 * stddev ' 2.326 = one-sided 99% normal quantile
vposn = vposn + 1
Next today
With Sheets("long swap")
For i = 1 To n ' 1 To n - window
.Cells(3 + i, 6) = dnVar(i)
Next i
End With
End Sub
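The delta-normal calculation reduces to the closed form VaR = z * sqrt(w' Sigma w), where w is the vector of mapped cashflow sensitivities, Sigma the daily covariance matrix, and z = 2.326 the one-sided 99% normal quantile. An illustrative Python sketch of that formula (names are not from the thesis code):

```python
import math

def delta_normal_var(w, cov, z=2.326):
    # VaR = z * sqrt(w' * cov * w); z = 2.326 is the one-sided 99% normal quantile
    n = len(w)
    variance = sum(w[i] * cov[i][j] * w[j] for i in range(n) for j in range(n))
    return z * math.sqrt(variance)
```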
Private Sub profitAndLoss(data As Range)
Dim rates(0 To m - 1) As Double, vols(0 To m - 1) As Double
Dim j As Integer, k As Integer, today As Integer, vposn As Integer
Dim val1 As Double, val2 As Double
Dim valNewInstr() As Double, valOldInstr() As Double
' valOldInstr values the instrument initiated 1 day previously
ReDim valNewInstr(1 To n)
ReDim valOldInstr(1 To n)
vposn = 1
Application.StatusBar = "Determining P&L..."
For today = 1 To n ' window + 1 To n - 1
For j = 1 To m
rates(j - 1) = data.Cells(today, j)
Next j
If Not (today = n) Then
'valNewInstr(vposn) = valueFRA(rates, stdVerts, today, dates(today - 1), dates(today - 1))
'valNewInstr(vposn) = valueSwap1(rates, stdVerts, today, dates(today - 1), dates(today - 1))
valNewInstr(vposn) = valueSwap2(rates, stdVerts, today, dates(today - 1), dates(today - 1))
End If
If Not (today = 1) Then ' Not (today = window + 1)
'valOldInstr(vposn) = valueFRA(rates, stdVerts, today - 1, dates(today - 2), dates(today - 1))
'valOldInstr(vposn) = valueSwap1(rates, stdVerts, today - 1, dates(today - 2), dates(today - 1))
valOldInstr(vposn) = valueSwap2(rates, stdVerts, today - 1, dates(today - 2), dates(today - 1))
End If
vposn = vposn + 1
Next today
With Sheets("long swap")
For j = 1 To n - 1 ' 1 To n - window - 1
.Cells(4 + j, 8) = valOldInstr(j + 1) - valNewInstr(j)
Next j
End With
End Sub
Public Sub quickSort(ByRef v() As Double, _
Optional ByVal left As Long = -2, Optional ByVal right As Long = -2)
' quicksort is good for arrays consisting of several hundred elements
Dim i As Long, j As Long, mid As Long
Dim testVal As Double
If left = -2 Then left = LBound(v)
If right = -2 Then right = UBound(v)
If left < right Then
mid = (left + right) \ 2
testVal = v(mid)
i = left
j = right
Do
Do While v(i) < testVal
i = i + 1
Loop
Do While v(j) > testVal
j = j - 1
Loop
If i <= j Then
Call swap(v, i, j)
i = i + 1
j = j - 1
End If
Loop Until i > j
' sort smaller segment first
If j - left < right - i Then
quickSort v, left, j
quickSort v, i, right
Else
quickSort v, i, right
quickSort v, left, j
End If
End If
End Sub
Private Sub swap(ByRef v() As Double, item1 As Long, item2 As Long)
Dim temp As Double
temp = v(item2)
v(item2) = v(item1)
v(item1) = temp
End Sub
Public Sub displayVols(sheetname As String, ByRef vols() As Double, ByRef dates() As Date, _
rows As Integer, cols As Integer)
Dim i As Integer, j As Integer
With Sheets(sheetname)
For i = 1 To rows
.Cells(i + 1, 1) = dates(i - 1)
.Cells(i + 1, 1).NumberFormat = "d-mmm-yy"
For j = 1 To cols
.Cells(i + 1, j + 1) = vols(i - 1, j)
Next j
Next i
End With
End Sub
Public Sub displayCovars(sheetname As String, ByRef covol() As Double, _
rows As Integer, m As Integer)
' writes the covariances to spreadsheet
Dim i As Integer, j As Integer, k As Integer, writerow As Integer, writerow2 As Integer
writerow = 1
writerow2 = 1
For i = 1 To rows
Sheets("dates").Cells(writerow2, 1) = dates(i - 1)
For j = 1 To m
Sheets(volsheet).Cells(writerow2, j) = vols(i - 1, j)
For k = 1 To m
' convert from annual to daily measure of covariance
Sheets(sheetname).Cells(writerow + j, k) = covol(i - 1, j, k) / 250
Next k
Next j
writerow = writerow + m
writerow2 = writerow2 + 1
Next i
End Sub
B.3 VBA Code: Portfolio Valuation Module
Option Explicit
Private Function valueSwap(ByRef rateScenario() As Double, ByRef stdVerts() As Variant, _
startDate As Date, valDate As Date, deltat As Integer, n As Integer, NP As Double, _
effSwapRate As Double, jibar As Double, longOrShort As String) As Double
Dim T As Integer, daysTillNextFlow As Integer, i As Integer
Dim rate As Double, B() As Double, vFix As Double, vFloat As Double, sum As Double
Dim nextFlow As Date, Bfloat As Double, R As Double
nextFlow = DateSerial(Year(startDate), Month(startDate) + deltat, day(startDate))
ReDim B(1 To n)
For i = 1 To n
daysTillNextFlow = nextFlow - valDate
R = interpRates(rateScenario, stdVerts, daysTillNextFlow)
B(i) = Exp(-R * daysTillNextFlow / 365)
nextFlow = DateSerial(Year(nextFlow), Month(nextFlow) + deltat, day(nextFlow))
Next i
If valDate = startDate Then
vFloat = NP * (1 - B(n))
Else
nextFlow = DateSerial(Year(startDate), Month(startDate) + deltat, day(startDate))
daysTillNextFlow = nextFlow - valDate
R = interpRates(rateScenario, stdVerts, daysTillNextFlow)
Bfloat = Exp(-R * daysTillNextFlow / 365)
vFloat = NP * (1 + jibar * daysTillNextFlow / 365) * Bfloat - NP * B(n)
End If
sum = 0
For i = 1 To n
sum = sum + B(i)
Next i
vFix = effSwapRate * NP * sum
valueSwap = vFloat - vFix
If longOrShort = "short" Then
valueSwap = -valueSwap
End If
End Function
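valueSwap prices the swap as floating leg minus fixed leg: at initiation the floating leg is worth NP(1 - B_n) and the fixed leg is the effective rate times NP times the sum of the discount factors B_i. A simplified Python sketch of the at-initiation case (continuous compounding, actual/365; names are illustrative, not the thesis code):

```python
import math

def value_swap_at_init(days, zero_rates, notional, eff_rate, long_float=True):
    # discount factors from continuously compounded zero rates (actual/365)
    B = [math.exp(-r * d / 365.0) for d, r in zip(days, zero_rates)]
    v_float = notional * (1.0 - B[-1])        # floating leg at initiation
    v_fix = eff_rate * notional * sum(B)      # fixed leg: r_eff * NP * sum of B_i
    value = v_float - v_fix
    return value if long_float else -value
```

Setting the effective rate to (1 - B_n) / sum(B_i) gives the par rate, at which the swap values to zero.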
Private Function interpRates(ByRef rateScenario() As Double, ByRef stdVerts() As Variant, _
daysTillNextFlow As Integer) As Double
' given a set of rates at the set of stdVerts, determines the rate at the specified # days
' by interpolating between the 2 closest nodes in stdVerts
Dim R As Double, i As Integer, curr As Integer ' current position in stdVerts
i = 0
curr = stdVerts(i)
While daysTillNextFlow > curr ' find the closest standard vertex on or after daysTillNextFlow
i = i + 1
curr = stdVerts(i)
Wend
If daysTillNextFlow = curr Then
R = rateScenario(i) ' no interpolation needed
Else
R = interp((stdVerts(i - 1)), (stdVerts(i)), rateScenario(i - 1), _
rateScenario(i), (daysTillNextFlow))
End If
interpRates = R
End Function
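interpRates walks the standard vertices until it finds the bracketing pair and then interpolates linearly between the two rates. The same lookup can be sketched in Python with a binary search (illustrative names, not the thesis code):

```python
from bisect import bisect_left

def interp_rate(rate_scenario, std_verts, days):
    # find the first standard vertex on or after the cashflow date
    i = bisect_left(std_verts, days)
    if std_verts[i] == days:
        return rate_scenario[i]           # node hit exactly, no interpolation
    c, d = std_verts[i - 1], std_verts[i]
    return ((days - c) * rate_scenario[i] + (d - days) * rate_scenario[i - 1]) / (d - c)
```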
Private Function splitOntoStdVerts(daysTillNextFlow As Integer, _
ByRef stdVerts() As Variant, ByRef splitVals() As Double, _
val As Double, Optional ByRef vols, Optional ByRef cov) As Integer
' given the number of days till a cashflow and its value, and vols for the std vertices,
' split the cashflow onto 2 nodes and determine the 2 values at each node (splitVals array);
' return the index in stdVerts of the first of the 2 nodes onto which it was split
Dim v1 As Double, v2 As Double, v3 As Double
Dim i As Integer, curr As Integer ' current position in stdVerts
Dim W As Double, sigma As Double
Dim posn As Integer
i = 0
curr = stdVerts(i)
While daysTillNextFlow > curr ' find the closest standard vertex on or after daysTillNextFlow
i = i + 1
curr = stdVerts(i)
Wend
If daysTillNextFlow = curr Then
v2 = vols(i) ' no interpolation needed
W = 1
splitVals(0) = val
splitVals(1) = 0
posn = i - 1
Else
v1 = vols(i - 1)
v3 = vols(i)
v2 = interp((stdVerts(i - 1)), (stdVerts(i)), v1, v3, (daysTillNextFlow))
W = quadratic(v1, v2, v3, (cov(i, i + 1)))
splitVals(0) = val * W
splitVals(1) = val * (1 - W)
posn = i - 1
End If
splitOntoStdVerts = posn ' return the index of the first of the two nodes
End Function
Public Function interp(C As Double, d As Double, f_c As Double, f_d As Double, _
X As Double) As Double
interp = (X - C) / (d - C) * f_d + (d - X) / (d - C) * f_c
End Function
Public Function quadratic(sig_a As Double, sig_b As Double, sig_c As Double, _
sig_ac As Double) As Double
Dim alpha As Double, beta As Double, gamma As Double
Dim root1 As Double, root2 As Double, k As Double
alpha = sig_a ^ 2 + sig_c ^ 2 - 2 * sig_ac
beta = 2 * sig_ac - 2 * sig_c ^ 2
gamma = sig_c ^ 2 - sig_b ^ 2
k = Sqr(beta ^ 2 - 4 * alpha * gamma)
root1 = (-beta + k) / (2 * alpha)
root2 = (-beta - k) / (2 * alpha)
If (root1 < root2) Then
quadratic = root1
Else
quadratic = root2
End If
End Function
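quadratic solves the variance-matching condition sig_b^2 = W^2 sig_a^2 + (1 - W)^2 sig_c^2 + 2 W (1 - W) sig_ac for the weight W used to split a cashflow across the two adjacent standard vertices, and (as written) returns the smaller of the two roots. An illustrative Python sketch of the same solve (names are not the thesis code):

```python
import math

def split_weight(sig_a, sig_b, sig_c, sig_ac):
    # variance matching: sig_b^2 = W^2 sig_a^2 + (1-W)^2 sig_c^2 + 2 W (1-W) sig_ac
    alpha = sig_a ** 2 + sig_c ** 2 - 2.0 * sig_ac
    beta = 2.0 * sig_ac - 2.0 * sig_c ** 2
    gamma = sig_c ** 2 - sig_b ** 2
    disc = math.sqrt(beta ** 2 - 4.0 * alpha * gamma)
    root1 = (-beta + disc) / (2.0 * alpha)
    root2 = (-beta - disc) / (2.0 * alpha)
    return min(root1, root2)   # the VBA takes the smaller root
```

Either root satisfies the quadratic exactly; the RiskMetrics cashflow-mapping convention is usually stated as taking the root lying in [0, 1], which need not be the smaller one.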
Public Function valueSwap1(ByRef rateScenario() As Double, ByRef stdVerts() As Variant, _
startDatePosn As Integer, startDate As Date, valDate As Date, _
Optional ByRef vols, Optional ByRef cov) As Double
' short position in a 3 yr vanilla interest rate swap, quarterly payments
' startDatePosn determines the rate to be used, and valDate is the valuation date (this
' will either correspond exactly to startDatePosn, or to 1 day (the holding period) later)
' if vols are provided (ie for delta-normal), returns the std dev, else returns the price
Const deltat As Integer = 3
Const n As Integer = 12
Const NP = 10000000 ' notional principal
Dim swapRate As Double, effSwapRate As Double, jibar As Double
swapRate = Sheets("Hist Data").Cells(startDatePosn + 2, 12) ' NACQ swap rate
effSwapRate = swapRate / 4 ' effective rate per quarter
jibar = Sheets("Hist Data").Cells(startDatePosn + 2, 4) ' 3 month JIBAR for the first period
' can use this because we are only ever valuing 1 day into the contract in this case
If Not (IsMissing(vols)) Then
valueSwap1 = valueThreeYrSwapDN(rateScenario, stdVerts, startDate, valDate, deltat, n, NP, _
effSwapRate, jibar, "short", vols, cov) ' delta normal - returns the std dev
Else
valueSwa