A brief history of performance ratios

Early performance ratios grew out of the mean-variance framework and were adapted for downside risk, but they have given way to newer measures better suited to hedge fund data.

Introduction: a background in economics

Investment performance measures have grown out of financial economics, a subject which has tended to emphasise formal rigour and analytical tractability over empirical realism. By assuming that mean and variance were sufficient to describe rational investor decision making, economists were able to develop the edifice of Modern Portfolio Theory (MPT) and the Capital Asset Pricing Model (CAPM).

These impressive theoretical frameworks dominate the world of applied corporate finance and investment decision making. This is despite the evidence of Mandelbrot and others that most financial data cannot be adequately described as Gaussian normal. Nor does CAPM work particularly well empirically. But MPT and CAPM are still widely used because they offer convenient and relatively easy ‘recipes’ to follow.

The Sharpe ratio and its descendants
Out of the mean-variance framework, one performance measure came to dominate investment appraisal: the Sharpe ratio, which divides the excess return of a portfolio (usually relative to the risk-free rate) by its standard deviation, used as a proxy for risk:

$$\text{Sharpe ratio} = \frac{R_p - R_f}{\sigma_p}$$

Rp is the return on the portfolio; Rf is the risk-free rate and σp is the standard deviation of the returns of the portfolio. This ratio shows the ‘price’ of excess return in terms of risk (volatility). Sharpe revisited the ratio in 1994 and reinterpreted it as the return of a portfolio relative to some other portfolio (equivalent to being long one and short the other). This revised version is known as the information ratio.
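For concreteness, here is a minimal Python sketch of the calculation (the function name is illustrative, and the risk-free rate is assumed to be expressed per period, matching the return series):

```python
import numpy as np

def sharpe_ratio(returns, risk_free_rate=0.0):
    """Sharpe ratio: mean excess return over the risk-free rate,
    divided by the standard deviation of portfolio returns.
    A sketch; the risk-free rate is assumed constant per period."""
    r = np.asarray(returns)
    excess = r - risk_free_rate
    return excess.mean() / r.std(ddof=1)
```

In practice the ratio is usually annualised (by multiplying by the square root of the number of periods per year) and, as noted below, should be accompanied by some measure of statistical significance.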

Sharpe’s work has been extended and built on, for example by Treynor. The Treynor ratio replaces σp, the volatility of the portfolio, with beta, a measure of systematic risk. The intuition here comes from the CAPM approach, namely that investors should only expect risk compensation for exposure to non-diversifiable, or systematic, risk.

Another set of developments was begun by Modigliani and Modigliani (the Nobel laureate Franco and his granddaughter). Known as M2, their measure compares portfolios by leveraging or deleveraging them to the point where they have the same volatility (normally chosen to be the market volatility). This allows the portfolios to be compared simply by looking at the resulting returns. The fund with the highest M2 will have the highest return for a given amount of risk. There is a further extension of this, developed by Muralidhar and called M3, which additionally looks at differences in the correlations of the various portfolios being compared (see Muralidhar’s contribution in this issue).

The Sharpe ratio is still the bread-and-butter tool of investment management. It is easy to compute, relatively easy to understand, and has a fine theoretical pedigree behind it. The higher the Sharpe ratio for a portfolio, the better. (Unfortunately it is often quoted without any measure of statistical significance, even though this may render the numbers useless.)

Downside Risk Measures
Many of the early hedge fund investors were high net worth individuals for whom capital preservation was paramount. So low volatility was less important than low downside volatility. This has given rise to two sorts of measures based on downside risk: the Sortino ratio and Maximum Drawdown (MDD).

The Sortino ratio is closely related to the Sharpe ratio. It compares the return of a portfolio with a chosen Minimum Acceptable Return or MAR (which could be the risk-free rate but need not be), and divides it by the downside semi-standard deviation, which measures only the volatility of returns below the MAR:

$$\text{Sortino ratio} = \frac{R_p - \text{MAR}}{\sigma_{DD}}$$

where σDD is the downside deviation of returns. Sortino’s ratio appealed to practitioners in the hedge fund world but was initially criticised for not being rooted in sound market theory, although that criticism turned out to be unfounded1.
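A comparable Python sketch, using the common convention that the downside semi-deviation averages squared shortfalls below the MAR across all observations (an assumption, since conventions vary):

```python
import numpy as np

def sortino_ratio(returns, mar=0.0):
    """Sortino ratio: mean return in excess of the Minimum Acceptable
    Return (MAR), divided by the downside semi-standard deviation,
    which penalises only returns below the MAR."""
    r = np.asarray(returns)
    shortfall = np.minimum(r - mar, 0.0)   # zero for returns above the MAR
    downside_dev = np.sqrt(np.mean(shortfall ** 2))
    return (r.mean() - mar) / downside_dev
```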

Maximum Drawdown (MDD) measures are often quoted in standard tables of hedge fund performance and describe the worst peak-to-trough fall in asset values experienced by the fund to date. Although widely quoted by practitioners, MDD was not widely analysed by academics until fairly recently. Some limitations are relatively clear: MDD can only be compared for funds with the same time scale and similar reporting frequency. One perverse result is that a very new fund almost certainly has a smaller MDD than a long-established one, but it hardly follows that one should only invest in new funds.
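A minimal sketch of how MDD can be computed from a return series (assuming simple, not log, returns):

```python
import numpy as np

def max_drawdown(returns):
    """Worst peak-to-trough fall in cumulative value, as a negative
    fraction (e.g. -0.25 means a 25% drawdown)."""
    wealth = np.cumprod(1.0 + np.asarray(returns))  # value of 1 unit invested
    running_peak = np.maximum.accumulate(wealth)
    drawdowns = wealth / running_peak - 1.0         # <= 0 at every point
    return drawdowns.min()
```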

Moreover, MDD is a very inefficient statistic for describing a fund’s performance and carries a high potential level of error, i.e. an investor should not reject or accept a fund on MDD alone. A statistic such as the worst five trading days might be superior because it uses more information about the historic returns.

MDD can be derived analytically as a formula for some return distributions, but researchers have tended to prefer a Monte Carlo numerical approach. In other words, the estimated parameters of the return distribution (typically only the mean and variance) are used to generate a very large number of possible outcomes, of which the actual outcome is only one. The researcher (or investor) can then pick a confidence level, say 95% or 99%, and see what the MDD would be. This should be a better guide to the underlying downside risk than the actual MDD.
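A sketch of that Monte Carlo procedure, under the strong assumptions just described (i.i.d. normal returns with the estimated mean and volatility; all names and defaults are illustrative):

```python
import numpy as np

def simulated_mdd(mu, sigma, n_periods, n_paths=10_000, confidence=95, seed=0):
    """Distribution of maximum drawdowns across simulated return paths.
    Returns the MDD that (confidence)% of simulated paths do not exceed."""
    rng = np.random.default_rng(seed)
    paths = rng.normal(mu, sigma, size=(n_paths, n_periods))
    wealth = np.cumprod(1.0 + paths, axis=1)
    peaks = np.maximum.accumulate(wealth, axis=1)
    mdd_per_path = (wealth / peaks - 1.0).min(axis=1)   # one MDD per path
    # MDDs are negative, so e.g. the 5th percentile is the 95% worst case.
    return np.percentile(mdd_per_path, 100 - confidence)
```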

Of course this approach only works if the underlying returns can be accurately captured by those parameters and they can be reasonably assumed to be stable. These are both strong assumptions and MDD is not a very reliable predictor (see article by Acar and Middleton in this issue).

MDD has also been incorporated in another variation on the Sharpe ratio, the Calmar ratio:

$$\text{Calmar ratio} = \frac{\text{Compound annual return}}{\lvert \text{MDD} \rvert}$$

The compound return can be over any period but the norm is three years. An extension is the Sterling ratio, of which the most common form is:
$$\text{Sterling ratio} = \frac{\text{Compound annual return}}{\lvert \text{Average annual MDD} - 10\% \rvert}$$
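A sketch of the Calmar calculation (monthly data and a full-sample window are assumed here; the Sterling variant simply adjusts the denominator as shown above):

```python
import numpy as np

def calmar_ratio(returns, periods_per_year=12):
    """Calmar ratio: compound annual return divided by the absolute
    maximum drawdown over the same window (conventionally three years)."""
    r = np.asarray(returns)
    wealth = np.cumprod(1.0 + r)
    mdd = (wealth / np.maximum.accumulate(wealth) - 1.0).min()
    years = len(r) / periods_per_year
    cagr = wealth[-1] ** (1.0 / years) - 1.0
    return cagr / abs(mdd)
```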

Limitations of Mean Variance
The mean variance framework was adopted because it was tractable mathematically and seemed to fit with the empirical reality of financial data. In an age when computing power was expensive, it was highly desirable that empirical work could be largely confined to estimating just two parameters.

But there were always difficulties with this approach. The fundamental economic theory of decision making under uncertainty, pioneered by Von Neumann and Morgenstern (VNM), was only consistent with the mean-variance portfolio approach under rather limiting conditions, namely:

i) The investor’s utility function is quadratic (which implies they don’t care about higher moments, but also that investors have increasing absolute risk aversion, which seems implausible); or
ii) The investment returns are normal; or
iii) The risks are ‘small’ in the sense that a second order Taylor approximation to the utility function is satisfactory.

It is commonly taken for granted that conventional financial data are normally distributed, or near enough. Mandelbrot’s recent book2 emphasises that this is a false assumption. But for much economic work the assumption of ‘small risks’ has justified the use of mean-variance analysis. Paul Samuelson, one of the greatest of economists, argued that in this case ‘the mean-variance result is a very good approximation’3.

But for hedge funds, the mean-variance framework is insufficient. Not only are the data markedly non-normal, but the argument from ‘small risks’ is untenable too. Investors care about average volatility, but they can also be presumed to care about larger and more abrupt moves in their portfolio.

Dealing with higher moments
The biggest problem with the Sharpe ratio and its spin-offs remains their failure to adequately capture the higher moments of the distribution. If we reject the assumptions that investors don’t care about higher moments and that investment data can be characterised as Gaussian normal, then we need a new approach. Sharma (2005) shows that the Sharpe ratio can be extended in a useful way by replacing the denominator with the value at risk (VaR) at, say, the 99% level. VaR, the worst return expected at a given confidence level, is typically based on a mean-variance normal distribution but can be modified to incorporate skewness and kurtosis using the Cornish-Fisher expansion to yield:
$$\text{Modified Sharpe ratio} = \frac{R_p - R_f}{\text{MVaR}_{99\%}}$$
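A sketch of the Cornish-Fisher adjustment and the resulting modified Sharpe ratio (the 99% level is the default here; SciPy's standard moment estimators are assumed to be acceptable):

```python
import numpy as np
from scipy import stats

def modified_var(returns, alpha=0.01):
    """Cornish-Fisher (modified) VaR: the normal quantile is adjusted
    for sample skewness and excess kurtosis. Returned as a positive
    loss figure at the (1 - alpha) confidence level."""
    r = np.asarray(returns)
    mu, sigma = r.mean(), r.std(ddof=1)
    s = stats.skew(r)                     # third moment
    k = stats.kurtosis(r)                 # excess kurtosis (fourth moment)
    z = stats.norm.ppf(alpha)             # e.g. -2.326 at the 99% level
    z_cf = (z
            + (z**2 - 1) * s / 6
            + (z**3 - 3*z) * k / 24
            - (2*z**3 - 5*z) * s**2 / 36)
    return -(mu + z_cf * sigma)

def modified_sharpe(returns, risk_free_rate=0.0, alpha=0.01):
    """Sharpe-style ratio with modified VaR in the denominator."""
    r = np.asarray(returns)
    return (r.mean() - risk_free_rate) / modified_var(r, alpha)
```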

Although this is an improvement, it still only takes the third and fourth moments into account. Sharma therefore proposes a measure called Alternative Investment Risk Adjusted Performance (AIRAP), which draws on the economic theory of expected utility.

Expected utility theory, the outcome of VNM’s work, is a rigorous (although empirically questionable) model. In geometric terms, the shape of the utility function captures the investor’s attitude to risk. Figure 1 (taken from Milind Sharma’s article in this issue) shows the classic representation of expected utility as a concave function, i.e. the investor is risk averse: he or she prefers a certain amount z to a risky combination of outcomes with the same mathematical expected value E(z). Thus U(E(z)) > E(U(z)), where z is a linear combination of two possible outcomes z1 and z2.

Figure 1: Expected utility as a concave function of outcomes, illustrating risk aversion.

Different utility functional forms allow economists to express different theories about risk. Investor risk aversion is largely taken for granted, but this is a very weak and general assumption, captured by the concavity of the utility function. Absolute risk aversion is a measure of this concavity (it is the negative of the ratio of the second derivative to the first). Common sense suggests that absolute risk aversion declines with wealth, i.e. a millionaire would pay more for a fair lottery ticket than a beggar.

Relative risk aversion takes the investor’s wealth into account. So a millionaire might have the same risk aversion, or even higher, than a beggar, when the downside risk is expressed as a percentage of his or her total wealth, rather than an absolute amount.

Economic theory allows for increasing, constant or decreasing absolute or relative risk aversion: what works best is an empirical matter, although not an easy one to settle decisively. But most often economists have tended to use a Constant Relative Risk Aversion (CRRA) model, which offers a reasonable compromise between empirical plausibility and mathematical convenience.

CRRA is the jumping-off point for Sharma’s Alternative Investment Risk Adjusted Performance measure (AIRAP):
$$\text{AIRAP} = \Big[\sum_i p_i \,(1 + TR_i)^{1-c}\Big]^{\frac{1}{1-c}} - 1$$

where TRi is the total return of the fund in the ith period and pi is the probability of that return; c is the coefficient of relative risk aversion, which captures the shape of the utility function and can take a number of values, so long as the same value is used when comparing different fund returns (see Sharma article in this issue).
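A sketch of the calculation, reading AIRAP as the certainty-equivalent return under CRRA utility as described above (equal ex post probabilities for the observed periods, c ≠ 1, and the particular value c = 4 are all illustrative assumptions, not prescriptions from the article):

```python
import numpy as np

def airap(total_returns, c=4.0):
    """Certainty-equivalent return under constant relative risk
    aversion: the riskless return the investor would accept in
    exchange for the fund's risky return stream."""
    tr = np.asarray(total_returns)
    p = np.full(len(tr), 1.0 / len(tr))   # equal ex post probabilities
    expected_utility = np.sum(p * (1.0 + tr) ** (1.0 - c))
    return expected_utility ** (1.0 / (1.0 - c)) - 1.0
```

Because the same c must be used across funds, AIRAP is best thought of as a ranking device for a given level of risk aversion rather than a single universal score.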
 

Nonparametric approaches: Omega
The appeal of the Gaussian normal distribution is that it can be described by just two parameters, the mean and standard deviation, which simplifies computation enormously. Other parametric distributions, such as the Weibull or Gumbel, have the same usefulness. But financial return data don’t necessarily fit any of these distributions. Choosing parameters may amount to excessive simplification. The parameters may not even be well defined4.

An alternative approach is therefore the nonparametric one. Most statistics textbooks tend to have a brief section on nonparametric methods, though often limited to such relatively easy cases as Spearman’s rank correlation coefficient. Nonparametric tests are sometimes called distribution-free to emphasise that they don’t depend on any particular underlying distribution function behind the data. This is particularly apt when we don’t actually know the form of the underlying distribution function.

The main nonparametric approach to investment returns is the Omega function, created by Shadwick and Keating. This is a function which is mathematically equivalent to the distribution of returns and which allows an ordering of investments without specifying any utility function5.


The starting point is the cumulative distribution function, which plots the ordered returns for an investment in a cumulative way. Figure 2 shows a density function for some daily hedge fund returns. This looks a bit like a normal distribution but with negative skew and high kurtosis, i.e. a lot of extreme results, particularly on the downside. Perhaps unsurprisingly, this is data from a merger arbitrage fund.

Figure 2: Density of daily returns from a merger arbitrage hedge fund, showing negative skew and high kurtosis.

The same data can be shown in cumulative form. Figure 3 plots the returns ranked from worst daily return to best, with the cumulative distribution summing to 1 (the values can therefore be thought of as ex post probabilities). Note that all of the data is retained here; we haven’t summarised it (and lost information) through the use of parameters.
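In code, that construction is just a sort and a running count; a minimal sketch:

```python
import numpy as np

def empirical_cdf(returns):
    """Returns ranked from worst to best, each paired with its ex post
    cumulative probability (summing to 1). Nothing is estimated, so no
    information is lost."""
    r = np.sort(np.asarray(returns))                # worst to best
    cum_prob = np.arange(1, len(r) + 1) / len(r)    # 1/n, 2/n, ..., 1
    return r, cum_prob
```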

Figure 3: Cumulative distribution of the same daily returns, ranked from worst to best.

The Omega function uses the information in the cumulative distribution to compare, for each chosen threshold return, the distribution-weighted returns above that threshold with the distribution-weighted returns below it. That ratio is then plotted against the threshold returns. The Omega ratio is:
$$\Omega(r) = \frac{\int_r^b \left[1 - F(x)\right] dx}{\int_a^r F(x)\, dx}$$

where (a, b) is the interval of returns and F(r) is the cumulative distribution of returns. The Omega function involves no estimation and therefore loses no data. A higher Omega is preferred to a lower one at any specified return threshold. At the mean of the distribution the Omega function equals one.
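Empirically, the two integrals reduce to the average gain above the threshold and the average shortfall below it, so a nonparametric sketch is very short (the equivalence of the integral and expectation forms is a standard result assumed here):

```python
import numpy as np

def omega(returns, threshold=0.0):
    """Empirical Omega at a chosen threshold: probability-weighted
    gains above the threshold divided by probability-weighted losses
    below it. No distribution is fitted to the data."""
    r = np.asarray(returns)
    gains = np.maximum(r - threshold, 0.0).mean()    # integral above threshold
    losses = np.maximum(threshold - r, 0.0).mean()   # integral below threshold
    return gains / losses
```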

Omega was not designed uniquely for hedge fund use, but its ability to adjust automatically for higher moments makes it particularly well suited to hedge fund data (see article in this issue by Keating).

Factor analysis

Drawing on the history of the Capital Asset Pricing Model (CAPM), researchers have tried to explain hedge fund and fund of funds performance in terms of a number of ‘factors’, which might be equity market indices, interest rates, currency baskets and so on.

One motive for doing this is to show that hedge funds’ performance can be entirely explained by a combination of beta exposures that could easily be replicated statistically. If this were the case, there would be no real alpha being generated and the investor would be paying excessive fees.

Another motive is to understand where a fund’s risk exposure comes from, in order to better understand what the manager is doing and how the fund might be matched with others for optimal risk/return performance.

Famously, Fama and French showed empirically that the textbook CAPM explained stock returns rather poorly and that incorporating market capitalisation and book-to-price value did rather better6. Fung and Hsieh (2002) and Agarwal and Naik (2004), among others, have extended the Fama-French work into the world of alternative assets. Fund returns are regressed on a range of factors capturing equity, fixed income, currency, commodity and interest rate returns. Momentum and trend-following effects can also be captured, the latter by using the concept of a ‘lookback straddle’. This is an option strategy that pays the maximum difference between the highest and lowest prices of an asset over the maturity period. It therefore pays out what a trend follower would achieve with perfect foresight.

By modelling fund returns as a linear combination of these various explanatory factors, the residual or unexplained return of the fund can be interpreted as alternative alpha, a version of Jensen’s alpha7. The fact that various authors, including Bacmann and Jeanneret in this issue, find positive alpha even after an exhaustive list of explanatory factors should provide encouragement for investors.
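A sketch of the basic procedure: an ordinary least squares regression of fund returns on factor returns, with the intercept read as alternative alpha (the factor set and data are left to the user; this illustrates the general approach rather than any author's exact methodology):

```python
import numpy as np

def alternative_alpha(fund_returns, factor_returns):
    """Regress fund returns on a T x K matrix of factor returns
    (equity, fixed income, currency, ...). The intercept is the
    'alternative alpha'; the slopes are the factor betas."""
    y = np.asarray(fund_returns)
    X = np.column_stack([np.ones(len(y)), np.asarray(factor_returns)])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    alpha, betas = coeffs[0], coeffs[1:]
    return alpha, betas
```

As with the Sharpe ratio, the estimated alpha should be judged against its statistical significance before any conclusion about manager skill is drawn.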

Footnotes:
1. C. Pedersen & S. Satchell ‘On the foundation of performance measures under asymmetric returns’ (2002).
2. B. Mandelbrot & R. Hudson (2005) The Misbehaviour of Markets, London: Profile Books.
3. P. Samuelson (1970) ‘The fundamental approximation theorem of portfolio analysis in terms of means, variances and higher moments’, Review of Economic Studies Vol. 37, October.
4. For example the gamble described in the famous St. Petersburg paradox, which Bernoulli used to demonstrate risk aversion, does not have a defined mean or variance.
5. A utility function would need to be specified if the difference between two portfolios were to be quantified.
6. Fama and French have recently argued that ‘despite its seductive simplicity, the CAPM’s empirical problems probably invalidate its use in application’, Journal of Economic Perspectives (2004).
7. In the conventional CAPM model, Jensen’s alpha is the return generated by an asset or portfolio in excess of that predicted by its beta and the market risk premium, i.e. excess return that indicates superior risk-adjusted investment performance. M. Jensen, ‘The performance of mutual funds in the period 1945-64’, Journal of Finance (1968).

Author: Simon Taylor is a Senior Research Associate of the Judge Institute of Management Studies, Cambridge University. Prior to this he spent fourteen years in the investment banking world, as an equities analyst at BZW and Citigroup and then as Deputy Head of European Equities Research at JPMorgan. He was previously a Fellow in economics at St. Catharine’s College, Cambridge and an Overseas Development Institute Fellow in the Central Bank of Lesotho.

