Thursday, July 9, 2009

Stochastic Volatility Models

Every stochastic volatility model assumes, yes, stochastic volatility. All the stochastic volatility models I have looked into, however, assume constant volatility of volatility. Empirical research (mostly unpublished) shows the volatil… Read more at Collector's Blog »


Comments (2)

  1. Daniel Howard

    The models are phenomenological and require input parameters, so they can never be predictive for all scenarios. Sometimes (as in Hull's book) people use a model such as Black's model (rather than Black-Scholes), and forgive me if I got the name wrong, to price products that depend on interest rates (which, according to Hull, exhibit mean reversion and so do not behave like a Brownian motion), and somehow this model "works out" or "works better" than Black-Scholes: although not intended for the purpose, it produces the right outputs. The point to remember is that all of these models are phenomenological and depend on estimates of inputs such as historical volatility and volatility of volatility, so they are of both academic and engineering interest (how well do they work in practice?). As an analogy, consider turbulence modelling, which makes assumptions to close the equations: it works for some geometries but needs heavy adjustment, or fails outright, for others. So I suppose the thing to do is to construct and study these models, evaluate them across different scenarios, and issue recommendations on how and when to use them. More than one model, including quite differently motivated models, may give the same outputs. In recent years, ways of speaking in averages (fuzzy logic) have proved as effective as complex control theories in practical engineering.

    By Daniel Howard, Director at Howard Science Limited


  2. Jonathan Kinlay, PhD (jkinlay@investment-analytics.com)

    I am going to be presenting a paper on Volatility Modeling and Trading at the upcoming Quant USA conference in New York next week, in which I discuss a very effective stochastic volatility-of-volatility model, the ARFIMA-GARCH model. It models volatility as a long-memory process disturbed by shocks from the volatility-of-volatility process, which evolves in GARCH form (see the sketch following these comments).
    The paper evaluates the performance of the model in trading S&P options.

    More on the conference here:
    http://web.incisive-events.com/rma/2009/07/quant-congress-usa/index.html

    More details on my Quantitative Investment and Trading blog to come:
    http://quantinvestment.blogspot.com/
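
As a rough illustration of the two components described in comment 2 above (long memory in volatility plus GARCH-type volatility of volatility), here is a minimal sketch, not the paper's implementation. The series log_vol, the memory parameter d = 0.4, and the use of the arch package are all assumptions for illustration.

```python
import numpy as np
import pandas as pd
from arch import arch_model  # assumption: the third-party 'arch' package is installed

def frac_diff(x: pd.Series, d: float, n_weights: int = 250) -> pd.Series:
    """Apply the fractional difference operator (1 - L)^d via a truncated
    binomial expansion: the 'FI' part of ARFIMA, which captures long memory."""
    w = np.zeros(n_weights)
    w[0] = 1.0
    for k in range(1, n_weights):
        w[k] = w[k - 1] * (k - 1 - d) / k  # recursive binomial coefficients
    v = x.to_numpy()
    out = np.full(len(v), np.nan)
    for t in range(n_weights - 1, len(v)):
        out[t] = w @ v[t - n_weights + 1 : t + 1][::-1]  # newest observation first
    return pd.Series(out, index=x.index)

# log_vol: a daily log realized-volatility series (user-supplied pandas Series)
# filtered = frac_diff(log_vol, d=0.4).dropna()   # d = 0.4 is illustrative only
# vov = arch_model(filtered * 100, vol="Garch", p=1, q=1).fit(disp="off")
# print(vov.summary())   # the fitted GARCH dynamics stand in for the vol of vol
```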

Sunday, July 5, 2009

Using Volatility to Predict Market Direction

Decomposing Asset Returns

We can decompose the returns process Rt as follows:

Rt = sign(Rt) × |Rt|

While the left-hand side of the equation is essentially unforecastable, both of the right-hand-side components of returns display persistent dynamics and hence are forecastable. Both the sign and the magnitude of returns are conditional-mean dependent and hence forecastable, but their product is conditional-mean independent and hence unforecastable. This is an example of a nonlinear “common feature” in the sense of Engle and Kozicki (1993).

Although asset returns are essentially unforecastable, the same is not true for asset return signs (i.e. the direction-of-change). As long as expected returns are nonzero, one should expect sign dependence, given the overwhelming evidence of volatility dependence. Even in assets where expected returns are zero, sign dependence may be induced by skewness in the asset returns process. Hence market timing ability is a very real possibility, depending on the relationship between the mean of the asset returns process and its higher moments. The highly nonlinear nature of the relationship means that conditional sign dependence is unlikely to be found by traditional measures such as sign autocorrelations, runs tests or traditional market timing tests.

Sign dependence is likely to be strongest at intermediate horizons of 1-3 months, and unlikely to be important at very low or high frequencies. Empirical tests demonstrate that sign dependence is very much present in actual US equity returns, with probabilities of positive returns rising to 65% or higher at various points over the last 20 years. A simple logit regression model captures the essentials of the relationship very successfully.
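
As a concrete illustration of the kind of logit model mentioned above, here is a minimal sketch, not the model estimated in the article: the sign of next-period returns is regressed on lagged trailing volatility. The series monthly_ret, the 12-month volatility window and the use of statsmodels are assumptions for illustration.

```python
import pandas as pd
import statsmodels.api as sm

def fit_sign_logit(monthly_ret: pd.Series):
    """Logit of the direction-of-change on lagged trailing volatility."""
    up = (monthly_ret > 0).astype(int)               # 1 if the month's return was positive
    vol = monthly_ret.rolling(12).std()              # crude conditional-volatility proxy
    X = sm.add_constant(vol.shift(1).rename("vol"))  # lag the regressor: no look-ahead
    data = pd.concat([up.rename("up"), X], axis=1).dropna()
    return sm.Logit(data["up"], data[["const", "vol"]]).fit(disp=0)

# result = fit_sign_logit(monthly_ret)   # monthly_ret: pandas Series of monthly returns
# print(result.summary())                # a negative 'vol' coefficient is consistent
#                                        # with higher volatility lowering Pr[up]
```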

Now consider the implications of dependence and hence forecastability in the sign of asset returns, or, equivalently, the direction-of-change. It may be possible to develop profitable trading strategies if one can successfully time the market, regardless of whether or not one is able to forecast the returns themselves.

There is substantial evidence that sign forecasting can often be done successfully. Relevant research on this topic includes Breen, Glosten and Jagannathan (1989), Leitch and Tanner (1991), Wagner, Shellans and Paul (1992), Pesaran and Timmermann (1995), Kuan and Liu (1995), Larsen and Wozniak (1995), Womack (1996), Gencay (1998), Leung, Daouk and Chen (1999), Elliott and Ito (1999), White (2000), Pesaran and Timmermann (2000), and Cheung, Chinn and Pascual (2003).

There is also a huge body of empirical research pointing to the conditional dependence and forecastability of asset volatility. Bollerslev, Chou and Kroner (1992) review evidence in the GARCH framework, Ghysels, Harvey and Renault (1996) survey results from stochastic volatility modeling, and Andersen, Bollerslev and Diebold (2003) survey results from realized volatility modeling.

Sign Dynamics Driven By Volatility Dynamics

Let the returns process Rt be Normally distributed with mean μ and conditional volatility σt.

The probability of a positive return, Pr[Rt+1 > 0], is given by the Normal CDF:

Pr[Rt+1 > 0] = 1 - Φ(-μ/σt+1) = Φ(μ/σt+1)

For a given mean return μ, the probability of a positive return is a function of the conditional volatility σt. As the conditional volatility increases, the probability of a positive return falls, as illustrated in Figure 1 below with μ = 10% and σt = 5% and 15%.

In the former case (σt = 5%), the probability of a positive return is greater because more of the probability mass lies to the right of the origin. Despite the same constant expected return of 10%, the process has a greater chance of generating a positive return in the first case than in the second. Thus volatility dynamics drive sign dynamics.

Figure 1: Probability of a positive return with μ = 10%, for σt = 5% and σt = 15%
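
The two cases in Figure 1 can be checked directly from the formula above; scipy is assumed here purely for the Normal CDF.

```python
from scipy.stats import norm

mu = 0.10                     # mean return, 10%
for sigma in (0.05, 0.15):    # the two conditional volatilities in Figure 1
    p = norm.cdf(mu / sigma)  # Pr[R > 0] = Phi(mu / sigma)
    print(f"sigma = {sigma:.0%}: Pr[positive return] = {p:.1%}")
# prints roughly 97.7% at 5% volatility versus 74.8% at 15%
```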

Email me at jkinlay@investment-analytics.com for a copy of the complete article.


EuroPlace Finance Forum Paris

I have just returned from the EuroPlace Finance Forum in Paris, accompanied by my colleague and business partner Ron Henley.

The event, which has been running for several years, was very well organized and well attended, with interesting and highly topical presentations and discussions of current financial and economic events, focusing primarily on the current financial crisis and plans for economic recovery. Attendees included finance ministers from EU and non-EU countries, heads of investment and commercial banks, chief economists and heads of research from buy- and sell-side institutions, and chief investment officers from several well-known asset management firms.


Monday, May 18, 2009

Quant Congress USA NY July 14-16, 2009

I will be speaking at this year's conference in New York, which features Bruno Dupire, Emmanuel Derman and others.
The subject of my presentation will be Forecasting and Trading Volatility.
Details here: http://web.incisive-events.com/rma/2009/07/quant-congress-usa/index.html

Volatility Forecasting and Trading Strategies Summit May 20, 2009 New York

I will be making a presentation at the Volatility Forecasting and Trading Strategies Summit in New York on May 20, 2009. The subject of the presentation will be Using Volatility Forecasts to Predict Market Direction. Details here: http://www.iglobalforum.com/conference_live.php?r=1

Thursday, May 7, 2009

Volatility Metrics

For a very long time analysts were content to accept the standard deviation of returns as the norm for estimating volatility, even though theoretical research and empirical evidence dating from as long ago as 1980 suggested that superior estimators existed.
Part of the reason was that the claimed efficiency improvements of the Parkinson, Garman-Klass and other estimators failed to translate into practice when applied to real data. Or, at least, no one could quite be sure whether such estimators really were superior when applied to empirical data, since volatility, the second moment of the returns distribution, is inherently unobservable. You can say for sure what the return on a particular stock in a particular month was, simply by taking the log of the ratio of the stock price at the end of the month to its price at the beginning. But the same cannot be said of volatility: the standard deviation of daily returns during the month, often naively assumed to represent the asset volatility, is in fact only an estimate of it.
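
For reference, the estimators named above are straightforward to compute from daily OHLC bars. The following sketch assumes a pandas DataFrame bars with open, high, low and close columns; the formulas are the standard published ones.

```python
import numpy as np
import pandas as pd

def close_to_close(bars: pd.DataFrame) -> float:
    """Classical estimator: standard deviation of close-to-close log returns."""
    return float(np.log(bars["close"]).diff().dropna().std())

def parkinson(bars: pd.DataFrame) -> float:
    """Parkinson (1980): based on the daily high-low range."""
    hl = np.log(bars["high"] / bars["low"])
    return float(np.sqrt((hl**2).mean() / (4.0 * np.log(2.0))))

def garman_klass(bars: pd.DataFrame) -> float:
    """Garman-Klass (1980): combines the range with the open-to-close return."""
    hl = np.log(bars["high"] / bars["low"])
    co = np.log(bars["close"] / bars["open"])
    return float(np.sqrt((0.5 * hl**2 - (2.0 * np.log(2.0) - 1.0) * co**2).mean()))

# All three return per-bar volatility; multiply by sqrt(252) to annualize daily bars.
```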

All that began to change around 2000 with the advent of high frequency data and the concept of Realized Volatility developed by Andersen and others (see Andersen, T.G., T. Bollerslev, F.X. Diebold and P. Labys (2000), “The Distribution of Exchange Rate Volatility,” Revised version of NBER Working Paper No. 6961). The researchers showed that, in principle, one could arrive at an estimate of volatility arbitrarily close to its true value by summing the squares of asset returns at sufficiently high frequency. From this point onwards, Realized Volatility became the "gold standard" of volatility estimation, leaving other estimators in the dust.
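
The realized volatility computation itself amounts to one line per day: sum the squared intraday log returns. This sketch assumes a pandas Series px of 5-minute prices with a DatetimeIndex.

```python
import numpy as np
import pandas as pd

def realized_vol(px: pd.Series) -> pd.Series:
    """Annualized daily realized volatility from intraday prices."""
    r = np.log(px).diff().dropna()            # intraday log returns
    rv = (r**2).groupby(r.index.date).sum()   # realized variance, day by day
    return np.sqrt(rv * 252)                  # annualize over 252 trading days
```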

Except that, in practice, there are often reasons why Realized Volatility may not be the way to go: for example, high frequency data may not be available for the series, or only for a portion of it; and bid-ask bounce can have a substantial impact on the robustness of Realized Volatility estimates. So even where high frequency data is available, it may still make sense to compute alternative volatility estimators. Indeed, now that a "gold standard" estimator of true volatility exists, it is possible to get one's arms around the question of the relative performance of other estimators. That was my intent in my research paper on Estimating Historical Volatility, in which I compare the performance characteristics of the Parkinson, Garman-Klass and other estimators relative to the realized volatility estimator. The comparison was made on a number of synthetic GBM processes in which the simulated series incorporated non-zero drift, jumps, and stochastic volatility. A further evaluation was made using an actual data series, comprising 5 minute returns on the S&P 500 in the period from Jan 1988 to Dec 2003.
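
The flavor of the simulation study can be reproduced in a few lines: generate a GBM with known volatility at 5-minute resolution and check how closely realized volatility recovers it. The parameters below are illustrative only, not those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
sigma, mu = 0.20, 0.05            # true annualized volatility and drift
n_days, bars_per_day = 252, 78    # one year of 5-minute bars (6.5-hour sessions)
dt = 1.0 / (n_days * bars_per_day)

# GBM log returns: (mu - sigma^2 / 2) * dt + sigma * sqrt(dt) * Z
z = rng.standard_normal(n_days * bars_per_day)
r = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z

# Daily realized variance, annualized and square-rooted
rv = np.sqrt((r.reshape(n_days, bars_per_day) ** 2).sum(axis=1) * n_days)
print(f"true sigma = {sigma:.3f}, mean daily RV estimate = {rv.mean():.3f}")
```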

The findings were generally supportive of the claimed efficiency improvements for all of the estimators, which were superior to the classical standard deviation of returns on every criterion in almost every case. However, the superiority of all of the estimators, including the Realized Volatility estimator, declined for processes with non-zero drift, jumps and stochastic volatility. There was even evidence of significant bias in some of the estimates produced for some of the series, notably by the standard deviation of returns estimator.

Finally, analysis of the results from the study of the empirical data series suggested that there were additional effects in the empirical data, not seen in the simulated processes, that caused estimator efficiency to fall well below theoretical levels. One conjecture is that long memory effects, a hallmark of most empirical volatility processes, played a significant role in that finding.

The bottom line is that, overall, the log-range volatility estimator performs robustly and with greater efficiency than the standard deviation of returns estimator, regardless of the precise characteristics of the underlying process.

Send me an email at jkinlay@investment-analytics.com if you would like to receive a copy of the paper.