Analyzing VaR Models: Back-Testing Accuracy with Financial Data

Abstract

Value at risk (VaR) represents the largest loss an investment portfolio is expected to incur over a given reporting period, at a given confidence level. This manuscript aims to assess the effectiveness and accuracy of VaR models, while evaluating which back-testing test is most reliable for judging VaR model accuracy. Secondary, audited data were used along with various back-testing methods for examining exception frequency, and the results indicate that the VaR models in this research are accurate at almost all confidence levels, with only a slight presence of possible risks and problems.

Five tests, namely the Proportion of Failures (POF) test, the Time Until First Failure (TUFF) test, the Basel Traffic Light approach, Christoffersen's Independence test and the Mixed Kupiec test, are performed to evaluate whether the respective method for VaR calculation is consistent. Limitations in the back-testing process are linked to the fact that VaR models are generally accurate under normal market conditions only.

Introduction

In today's ever-changing business environment, management is often in a position to make decisions on the go, without a deep evaluation of the risks and potentially negative effects of such decisions.

In the world of financial securities, the most ubiquitous type of risk is market risk: the threat to a portfolio of financial instruments arising from movements and/or volatility in market prices, including stock prices, foreign exchange rates, option prices, swaps, commodity prices and interest rates. At the top of the agenda of the modern management of any business has to be active risk management, which has become an integral part of operations in which management identifies, benchmarks and ultimately controls the exposure to risk (Graham & Pal, 2014).

Modern financial institutions come face to face with two major challenges: management of risk (risk mitigation) and maximization of profits.

Financial institutions aim to increase revenue by taking on risks and actively coping with them. Hence, in order to keep the profitability of the organization at the desired level, the company's management needs to manage risks constantly and consistently (Emmer et al., 2015). Markowitz initially introduced the gauging of risk through mean-variance analysis (Markowitz, 1991). Later on, other risk measures arose, namely conditional value at risk and value at risk; the latter has become an elementary measure of risk in banking regulation and a prime measure of banks' internal risk (Pflug, 2000). A great advantage of VaR is that it is very user friendly compared with other risk measurement techniques, which has earned it the prime risk measurement spot in practice. What additionally cemented VaR's standing is that it has been accepted by the Basel Accord as the preeminent measure of risk.

Literature Review

Value at risk, or VaR, is a measure of the riskiness of loss in investing. It assesses how much a given set of investments can lose in value, with a given probability, under normal market conditions over a specified period of time. The measure is often used by investment companies and financial markets to indicate the amount of funds that can cover potential losses, so VaR serves as a useful estimate of the potential loss of value of a given portfolio. As noted above, the importance of VaR lies in the fact that it is fairly easy to compute and interpret: a single number expressing the level to which the institution's portfolio is exposed to loss. The underlying methodology of VaR is a fusion of contemporary portfolio theory, statistics and financial analysis, all of which assess risk factors (Zhang and Nadarajah, 2017).

VaR became widely implemented in banks, starting with Chase Bank back in 1998, when it began using VaR as a tool for controlling and assessing the daily risks and vulnerability of the bank's portfolio. The bank's CEO at the time, Sir Dennis Weatherstone, charged the bank's financial analysts with generating a daily report that would pivot around a single number showing the implied potential loss of the portfolio (Campbell, 2005). Yamai stated that if, in a given portfolio, the proportion or contribution of open positions during a certain period is unchanged, value at risk gives a high-quality insight into the potential loss of such a portfolio (Yamai, 2002).

This value of the estimated loss that Yamai described is gauged at a specific level of assurance; hence, when considering the expected loss, we can treat it only as a "potential" loss. This is important to note, since this potential loss is not a measure that indicates a maximum boundary of attainable loss. Another important feature of VaR is that it is not a tool for illustrating potential losses when extreme market conditions occur. This can be portrayed in the following example: VaR represents the maximum amount a portfolio is expected to lose over a given time horizon at a pre-defined confidence level. If the 95% one-month VaR is $100,000, there is 95% confidence that over the next month the portfolio will not lose more than $100,000; it says nothing, however, about what may occur in the other 5% of cases.
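To make the 95% interpretation concrete, the following minimal sketch (ours, not the paper's) recovers a 95% VaR as the 5th percentile of simulated one-month profit and loss; the normal distribution and its parameters are purely illustrative assumptions.

import numpy as np

# Illustrative only: recover a 95% one-month VaR as the 5th percentile of
# simulated monthly P&L. The normal distribution and its parameters are
# assumptions for the example, not data from the study.
rng = np.random.default_rng(0)
pnl = rng.normal(loc=0.0, scale=60_000.0, size=10_000)  # simulated monthly P&L in dollars

var_95 = -np.percentile(pnl, 5)  # the loss exceeded in only 5% of months
print(f"95% one-month VaR is roughly ${var_95:,.0f}")  # near 1.645 * 60,000, i.e. about $98,700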

Jorion stated that VaR is the new benchmark for managing financial risk, as it takes into account how changes in financial instruments' prices affect each other, and expressed VaR as:

VaR = a × σ × W (1)

where the parameters of the above formula are a, the quantile corresponding to the chosen confidence level; σ, the standard deviation (volatility); and W, the starting portfolio value (Jorion, 2001).
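As a concrete illustration of formula (1), the sketch below assumes, as is common under normality, that a is taken as the standard normal quantile of the chosen confidence level; the function name and figures are illustrative, not from the paper.

from scipy.stats import norm

# A minimal sketch of formula (1), reading "a" as the standard normal
# quantile of the confidence level (the usual choice under normality).
def parametric_var(confidence: float, sigma: float, w: float) -> float:
    """VaR = a * sigma * W for a one-period horizon."""
    a = norm.ppf(confidence)  # e.g. about 1.645 at the 95% level
    return a * sigma * w

# Illustrative figures: a $1,000,000 portfolio with 1.2% daily volatility
print(parametric_var(0.95, 0.012, 1_000_000))  # about 19,700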

Christoffersen noted that since VaR accounts for changes in securities' prices and the way they affect each other, it can reduce measured risk with the help of diversification techniques (Christoffersen, 1998). Over time we can observe periods of pronounced and persistent volatility. Because of this, it is possible to differentiate between two market regimes: normal market conditions and extreme, or simply abnormal, market conditions. In abnormal market conditions VaR becomes an inadequate indicator, and if one wants a broader view and better awareness of market risk, VaR has to be combined with other indicators, such as stress tests (Bams et al., 2017).

Before performing computations and drawing any conclusions from them, it is crucial to understand all fundamental terms related to the procedure of VaR model validation. To verify whether the outcomes of a VaR computation are coherent and reliable, each model must be validated and corroborated through a process popularly called backtesting, carried out with the support of statistical methods. Brown underlined the importance of backtesting by stating that any VaR is only as good as its backtest: "When someone shows me a VaR number, I don't ask how it is computed, I ask to see the backtest" (Brown, 2008). Backtesting is an approach in which the forecast VaR number is compared to the actual losses/gains of a given investment portfolio.
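As a rough sketch of that comparison (the names and sign convention below are our assumptions), an exception can be flagged on each day the realized loss exceeds the VaR forecast:

# Basic backtest comparison: flag an "exception" (hit) on each day where the
# realized loss exceeds the VaR forecast. VaR is taken as a positive amount,
# so a return below -VaR is a breach.
def exception_series(returns: list[float], var_forecasts: list[float]) -> list[int]:
    return [1 if r < -v else 0 for r, v in zip(returns, var_forecasts)]

hits = exception_series([0.01, -0.03, 0.002], [0.02, 0.02, 0.02])
print(hits)  # [0, 1, 0] -> one exception, on day two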

If the VaR estimate is not accurate, the model itself should be probed and inspected, possibly for wrong assumptions, unfit parameters, or flaws in the model itself. Various methods are recommended for backtesting. An important one is the unconditional coverage test, whose defining feature is that it does not consider when an exception occurred. A similarly important aspect is to verify that observations exceeding VaR are independent, that is, uniformly dispersed in time. The main feature of a well-founded model is that it avoids clustering of deviations, reacting quickly to increases or decreases in the volatility of a financial instrument or portfolio and their respective correlations (Paseka et al., 2018).

We have to note that there can be very serious inconsistencies in VaR appraisals for turbulent markets (Nelson, 1991). By its basic definition, VaR computes the expected loss given that market conditions are normal, meaning that a well-constructed VaR model should produce a predictable number of deviations and exceptions that are evenly distributed in time, i.e., independent of each other.

Methodology of Research

Because this research considers more than one test, the hypothesis for each test is presented individually. However, the null hypothesis that summarizes the five back-testing tests of the VaR model is simply stated as:

H0: The value at risk model is accurate

The TUFF (time until first failure) test measures the time until the first exception occurs. The null hypothesis for the TUFF test is expressed as:

H0: p = p̂ = 1/v (3)

The variables in formula (3) are p, the expected failure rate; p̂, the observed failure rate; and v, the time until the first exception. Translated into the TUFF test, the null hypothesis is that the probability of an exception, estimated as the inverse of the time until the first failure, equals the complement of the VaR confidence level. The conclusion mirrors that of the POF test: if the likelihood ratio is higher than the critical value of χ², the null hypothesis is rejected and the model is considered inaccurate. Otherwise, the null hypothesis is accepted.
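A minimal sketch of the TUFF likelihood ratio, with names following formula (3), is given below; it is an illustration under the stated assumptions rather than the paper's own implementation.

import math
from scipy.stats import chi2

# TUFF likelihood-ratio sketch: p is the expected failure rate (e.g. 0.05 at
# the 95% level) and v the number of days until the first exception.
def tuff_lr(p: float, v: int) -> float:
    h0_likelihood = p * (1 - p) ** (v - 1)             # likelihood under H0
    mle_likelihood = (1 / v) * (1 - 1 / v) ** (v - 1)  # likelihood at the observed rate 1/v
    return -2 * math.log(h0_likelihood / mle_likelihood)

lr = tuff_lr(p=0.05, v=40)        # first exception on day 40
print(lr > chi2.ppf(0.95, df=1))  # True would mean: reject H0, model inaccurate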

The Basel Traffic Light approach, the second test, examines model accuracy and correctness by measuring the number of exceptions. Under this approach, the null hypothesis states that the number of exceptions lies between 0 and 32 at the 90% confidence level, between 0 and 17 at 95%, and between 0 and 4 at 99%. If the number of exceptions does not fall into the respective range, the model is concluded to be inaccurate; otherwise, the null hypothesis is accepted.
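The zone check itself is straightforward to mechanize; the sketch below simply encodes the acceptance ranges quoted above (the function name is hypothetical).

# Acceptance ranges quoted above for the Basel Traffic Light approach:
# 0-32 exceptions at 90%, 0-17 at 95%, 0-4 at 99%.
ACCEPTED_EXCEPTIONS = {0.90: 32, 0.95: 17, 0.99: 4}

def traffic_light_ok(exceptions: int, confidence: float) -> bool:
    """True if the exception count keeps the model in the accepted zone."""
    return 0 <= exceptions <= ACCEPTED_EXCEPTIONS[confidence]

print(traffic_light_ok(3, 0.99))  # True  -> model accepted
print(traffic_light_ok(6, 0.99))  # False -> model judged inaccurate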

Christoffersen's Independence test, the third test, examines whether the probability of today's exception depends on the outcome of the day before. The null hypothesis is expressed as:

H0: π0 = π1 (4)

Where π denotes the probability of an exception (π0 conditional on no exception the day before, π1 conditional on an exception). The null hypothesis states that an exception today does not depend on whether an exception occurred the day before. The conclusion is the same as for the POF and TUFF hypotheses: if the likelihood ratio is higher than the critical value of χ², the null hypothesis of an equal distribution of exceptions over time is rejected and the model is considered inaccurate. Otherwise, the null hypothesis is accepted.
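A sketch of the independence statistic follows, assuming hits is a 0/1 exception series such as the one produced earlier and that it contains both exception and calm days; the implementation details are illustrative, not the paper's.

import math
from scipy.stats import chi2

def _term(count: int, prob: float) -> float:
    # contribution count * ln(prob), with the convention 0 * ln(0) = 0
    return count * math.log(prob) if count else 0.0

def christoffersen_lr(hits: list[int]) -> float:
    n = {(i, j): 0 for i in (0, 1) for j in (0, 1)}
    for prev, curr in zip(hits, hits[1:]):          # count day-to-day transitions
        n[(prev, curr)] += 1
    pi0 = n[(0, 1)] / (n[(0, 0)] + n[(0, 1)])       # P(exception | calm yesterday)
    pi1 = n[(1, 1)] / (n[(1, 0)] + n[(1, 1)])       # P(exception | exception yesterday)
    pi = (n[(0, 1)] + n[(1, 1)]) / sum(n.values())  # unconditional exception rate

    def loglik(p0: float, p1: float) -> float:
        return (_term(n[(0, 0)], 1 - p0) + _term(n[(0, 1)], p0)
                + _term(n[(1, 0)], 1 - p1) + _term(n[(1, 1)], p1))

    return -2 * (loglik(pi, pi) - loglik(pi0, pi1))

hits = [0, 0, 1, 0, 0, 1, 1, 0, 0, 0]
print(christoffersen_lr(hits) > chi2.ppf(0.95, df=1))  # False -> no dependence detected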

The Mixed Kupiec test, the fourth test, proposed by Haas, examines the time between exceptions, testing whether exceptions are independent during the testing period. The null hypothesis is expressed as:

H0: x0 = x1 (5)

Where x is the number of exceptions. The null hypothesis for the Mixed Kupiec test states that exceptions are independent of each other over time. The conclusion is the same as for the other tests: if the likelihood ratio is higher than the critical value of χ², the null hypothesis of exception independence over time is rejected and the model is considered inaccurate. Otherwise, the null hypothesis is accepted.
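Following Haas's idea, the hedged sketch below applies a TUFF-type likelihood ratio to every gap between consecutive exceptions and sums the statistics; a full Mixed Kupiec test also folds in the POF component, and the exact statistic used in the paper may differ.

import math
from scipy.stats import chi2

def tuff_lr(p: float, v: int) -> float:  # as in the TUFF sketch above
    return -2 * math.log((p * (1 - p) ** (v - 1)) / ((1 / v) * (1 - 1 / v) ** (v - 1)))

def mixed_kupiec_lr(gaps: list[int], p: float) -> float:
    """Sum a TUFF-type LR over the gaps (in days) between consecutive exceptions."""
    return sum(tuff_lr(p, v) for v in gaps)

gaps = [40, 12, 95]                       # illustrative inter-exception times
lr = mixed_kupiec_lr(gaps, p=0.05)
print(lr > chi2.ppf(0.95, df=len(gaps)))  # False here -> independence not rejected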

One reason this research was performed is the ongoing discussion among practitioners about whether the VaR model is dependable or, as practitioners say, tried-and-true. The final purpose of the research is therefore to evaluate, through several tests, whether the respective method for VaR calculation is consistent.

For its empirical part, this research adopts a quantitative approach, since it involves data in numerical form that are used for statistical calculations in order to draw conclusions about model accuracy. The empirical part of the research consists of two sections: calculation of VaR amounts and back-testing of those amounts using different types of tests.

Researchers used secondary data sourced from audited financial reports and valid stock price information systems (the Securities and Exchange Commission and www.wsj.com) for five blue-chip companies: Procter & Gamble, McDonald's, Microsoft, Caterpillar and Apple, as the basis for all calculations, simulations and analyses. Part of the calculations and graphs were done in Excel and the rest in SPSS, using the appropriate formulas or functions. The first step, used for both VaR calculation and back-testing, is to calculate daily returns for each company (without dividends). Daily returns were calculated as the current day's closing price less the previous day's closing price, divided by the previous day's closing price. This calculation underpins all three VaR calculation methods and all further calculations and conclusions.
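The return computation itself is a one-liner; a minimal sketch, assuming closes is a chronologically ordered list of daily closing prices:

# Daily simple returns: (today's close - yesterday's close) / yesterday's close,
# dividends excluded, assuming `closes` is ordered oldest to newest.
def daily_returns(closes: list[float]) -> list[float]:
    return [(today - prev) / prev for prev, today in zip(closes, closes[1:])]

print(daily_returns([100.0, 101.5, 100.0]))  # [0.015, -0.0147...]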

Results and Discussion

The empirical analysis revealed that VaR models maintain a high degree of accuracy across varied confidence levels, with minor discrepancies suggesting potential underestimation of risk in volatile markets. The back-testing methods provided a nuanced understanding of each model's reliability, underscoring the necessity of considering market extremities for comprehensive risk assessment. The study advocates for the integration of VaR with other risk indicators to enhance market risk awareness.

Conclusion

VaR remains a cornerstone in risk management, offering valuable insights into potential portfolio losses. However, its effectiveness is contingent upon accurate model back-testing and adaptation to market dynamics. This research underscores the critical role of back-testing in validating VaR models, advocating for a multifaceted approach to risk assessment in the financial sector.
