In my mind a strategy attempting to capture momentum should be aware of the absolute returns of the counters it’s exposing capital to. Simple relative performance, as in the Consistent Momentum strategy, is clearly not going to work when the entire market is losing value, as was the case in 2008 – buying stocks that are losing less than their peers will not produce positive returns. Therefore, one obvious filter to consider is something that tracks the current market regime. We’re interested in a simple binary overlay that can distinguish between rising and falling markets in the medium term. During falling markets we’ll cease trading and move to cash, and vice versa. The S&P 500 is a good proxy of market performance, but we could use others, such as the Russell 1000. The general idea is to apply a simple moving average to the index – a 200-period moving average, for instance – and then use that to classify rising markets (the index is above its average, keep trading) and falling markets (the index is below its average, cease trading). Our filter should mitigate some of the nasty drawdown during the 2008 financial crisis and help boost our overall performance stats.
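The overlay can be sketched in a few lines. This is a minimal illustration, not the code behind the backtest (which runs in my C++ engine); the function names and the list-of-closes input are assumptions for the sketch.

```python
# A minimal sketch of the binary regime overlay described above.
# Input: a list of index closes, most recent last; window is illustrative.

def sma(prices, window):
    """Simple moving average of the last `window` prices (None until enough data)."""
    if len(prices) < window:
        return None
    return sum(prices[-window:]) / window

def regime(prices, window=200):
    """Return 'risk_on' when the index closes above its moving average,
    'risk_off' (move to cash) when below, or when history is insufficient."""
    avg = sma(prices, window)
    if avg is None or prices[-1] <= avg:
        return "risk_off"
    return "risk_on"
```

With a 200-period window the filter simply compares the latest index close to the average of the last 200 closes, which is exactly the rising/falling classification described above.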

First off, the logic associated with our regime filter is sound. Mathematically, a series can never move above its average without rising, and the converse holds. It’s a good idea to ensure that any overlays we apply are intuitive. First box checked. Second, we hope to see meaningful improvements. For every additional rule, a degree of freedom is lost, which means robustness somewhat deteriorates. Here again we check the second box – the improvements in performance are significant. By moving the portfolio to cash during major market inflection points, we were able to halve our drawdown from -66% to -33% while maintaining the same return profile. Said differently, our regime filter more than doubled our risk-adjusted metrics: the MAR ratio jumped from 0.28 to 0.66, not too shabby. Here’s a comparison of both the equity and drawdown curves – note the improvement during the 2008 financial crisis.

***A 200-period moving average was applied to the S&P 500 for the regime filter.*

In the final bit of testing I wanted to examine whether the parameters used in the model are optimal. Recall that we employed a six-month rate of change in price to rank for momentum and a six-month time stop. Further, we selected the top 100 counters based on consistent momentum for possible portfolio inclusion. These were the default parameters used in the initial strategy, but any number of parameter combinations could have been employed and possibly performed better. However, the more important purpose of this test is to confirm model robustness – we hope to see positive performance over a broad combination of parameters to ensure our model is not fitted to a single combination. This matters because it implies that our model will be better able to adapt to changes in market conditions; in other words, it displays robustness. Here the news is good yet again: performance remains positive across a broad spread of parameters. Some combinations performed significantly better than our default set, but it’s pleasing to observe that regardless of the parameters used, the model’s positive performance persisted. Here’s a quick look at the top ten performing parameter combinations – the best being a 3-month momentum parameter, a 3-month time stop and a counter selection of 25.

***Parameter testing as follows: momentum parameter (3,6,12); monthly time stop parameter (3,6,12); counter selection top (25, 50, 75, 100)*
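The sweep above amounts to a simple grid search. The sketch below is illustrative Python, not the C++ engine used for the actual tests; `run_backtest` is a hypothetical stand-in for a full backtest that returns a performance score for a parameter combination.

```python
# Sketch of the parameter sweep over the grid listed above.
from itertools import product

MOMENTUM_MONTHS = (3, 6, 12)
TIME_STOP_MONTHS = (3, 6, 12)
TOP_COUNTERS = (25, 50, 75, 100)

def sweep(run_backtest):
    """Run the backtest for every parameter combination and rank by score,
    best-performing combinations first."""
    results = {}
    for momentum, stop, top_n in product(MOMENTUM_MONTHS, TIME_STOP_MONTHS, TOP_COUNTERS):
        results[(momentum, stop, top_n)] = run_backtest(momentum, stop, top_n)
    return sorted(results.items(), key=lambda kv: kv[1], reverse=True)
```

The grid produces 3 × 3 × 4 = 36 combinations; what we hope to see, as discussed above, is a positive score across the whole table rather than a single standout.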

I’ll end this post by sharing a powerful method to mitigate the effects of luck in trading and to ensure the greatest correlation between live performance expectations and backtested data. The secret is accepting that no single parameter combination will remain optimal through time; therefore, allocating 100% of equity to a single, seemingly optimal, parameter combination is senseless – over any given timeframe, the best performing parameter combination is attributable to blind luck. Rather, we should allocate a portion of capital to a good spread of parameter combinations. That way we have a piece of everything, so we don’t care which combination turns out to be optimal – we’ll participate to some extent in its success. In theory, this approach should yield the optimal performance through time since we capture the average performance of the algorithm instead of portions that may or may not perform well due to luck.
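As a sketch, the allocation scheme amounts to equal-weighting the parameter combinations and earning their average return. The parameter tuples below are purely illustrative.

```python
# Minimal sketch of spreading capital across parameter combinations:
# equal-weight each combination, so the portfolio earns the plain average
# of the combinations' returns in each period.

def ensemble_return(period_returns_by_combo):
    """Equal-weighted return across all parameter combinations for one period.

    `period_returns_by_combo` maps a parameter tuple to that combination's
    period return.
    """
    returns = list(period_returns_by_combo.values())
    return sum(returns) / len(returns)
```

For example, if three combinations returned +10%, +2% and -3% in a period, the ensemble earns their average of +3% – no bet on which combination happens to be optimal.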

In this post we took a simple momentum strategy and reworked it into a powerful and robust portfolio that is more likely to stand the test of time. There are, however, many more things one should consider to further convert this into a fully formed quant process. Other overlays may include specific return interval targets before considering a counter for inclusion, correlation between the counters in the portfolio, sector allocation restrictions and position sizing algorithms that account for the volatility of each stock. One could possibly control risk further with the use of stop losses, or with daily relative performance ranking. No cash returns were included during periods of cash allocation, which would further improve returns. Finally, one may seek ways to hedge the portfolio from time to time to smooth performance further.

Hope you enjoyed reading.

Happy Trading,

PJ Sutherland

*Source: Chris Muller; Style Engine*

*For reference, the strategy is based on the paper by Chen, Chou, Hsieh: Persistency of the Momentum Effect: The Role of Consistent Winners and Losers; published on www.ssrn.com.*

Momentum is one of the most well-known and broadly accepted phenomena in the market. The general idea is that stocks that have recently performed well are likely to continue to do so. In other words, their recent returns carry momentum. There are a great many methodologies that attempt to exploit this anomaly. Some employ cross-sectional momentum, that is, stock returns are compared with their peers to measure relative outperformance, while others employ time-series momentum, which is the study of stock returns relative to the stock’s own recent past. Today I’m going to examine cross-sectional momentum by comparing monthly returns of individual stocks with their peers. The basic idea is we’ll buy stocks that have been outperforming their peers and hold them for a predetermined timeframe. I’m going to focus exclusively on the long side in this post, but the same can be done for shorts, and indeed their paper includes shorts. But before we get to the nuts and bolts, let’s look at why momentum exists.

As with most market anomalies, there is usually a behavioural bias that causes the pricing inefficiency. Specifically relating to momentum, investors act irrationally by under-reacting to new information, which results in slow price adjustments that create the momentum effect. As discussed in their paper, this anomaly is most prevalent in stocks with higher idiosyncratic volatility and a lower percentage of institutional participation. This makes sense: the perceived risk associated with these stocks is probably over-compensated for, resulting in slow price adjustments as market participants digest positive news flow. The gradual movement in price produces progressively more attractive views toward the stock and subsequent buying, delivering momentum.

This is where things get interesting. Their paper employs a relatively complex strategy by most retail standards. I had to extend my C++ library slightly to be able to replicate their strategy. For the most part the rules I employ follow theirs, but I made some minor changes. Here are the rules that I employed:

Database: All listed and delisted equities on the NYSE, NASDAQ and AMEX.

Test Period: 2003/01/01 to 2018/07/31

Rules:

- At the end of the month, rank all the equities by their 12 month average traded dollar amount – Average( (volume * close), 12 months) – in descending order. Choose the top 1000 most liquid equities.
- Rank the liquid list by their 6-month return – RateOfChange(Close, 6 months) – in descending order and select the top 100.
- Now rank the same liquid list (the 1000 equities) by their 6 month return from the prior month – Ref(RateOfChange(Close, 6 months) , t-1) – in descending order and select the top 100.
- Take the intersection of 2 and 3. In other words, filter for stocks that belong to both lists generated in point 2 and point 3. These are stocks that are showing consistent momentum.
- Buy all these stocks and allocate 100% capital equally across all the equities.
- Hold the portfolio for 6 months and repeat the process.
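The ranking and intersection steps above can be sketched as follows. This is illustrative Python rather than my C++ implementation, and it assumes the universe has already been filtered to the top 1000 by liquidity, with each ticker mapped to a list of monthly closes (most recent last).

```python
# Sketch of the "consistent momentum" selection: intersect the top-N list
# ranked by the current 6-month return with the top-N list ranked by the
# 6-month return as of one month ago.

def roc(closes, months, lag=0):
    """Rate of change over `months`, optionally lagged by `lag` months."""
    end = len(closes) - 1 - lag
    start = end - months
    return closes[end] / closes[start] - 1.0

def consistent_momentum(universe, months=6, top_n=100):
    """Stocks in the top-N ranking by both current and one-month-lagged ROC."""
    current = sorted(universe, key=lambda t: roc(universe[t], months), reverse=True)[:top_n]
    lagged = sorted(universe, key=lambda t: roc(universe[t], months, lag=1), reverse=True)[:top_n]
    return sorted(set(current) & set(lagged))
```

Note how a stock that only spiked in the latest month can top the current ranking yet drop out of the lagged one – the intersection is what enforces consistency.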

I haven’t spent a lot of time working with long-term strategies that operate on monthly intervals, partly because engineering my C++ engine to accurately test and mark positions to market – along with a plethora of other complex requirements I hadn’t thought of – was a job I had not found time to complete; and partly because I have not had much luck uncovering truly robust and consistent edges in longer time frames. However, as this paper clearly shows, there is reason to spend time researching this space. So, let’s explore the performance:

Although performance is impressive, the risk one had to endure, as measured by drawdown, was less so. That said, all the ugly risk was tied to the financial crisis in 2008. No surprises there, and I doubt anyone would continue to buy stocks outperforming peers but still generating steep negative returns. Which means that some form of regime filter (the S&P 500 is in bull mode) or absolute return filter (the stock’s 6-month return is positive by some margin) will probably go a long way toward mitigating this risk and improving the overall performance profile. All in all this is a pretty neat strategy, and given the broad academic findings tied to momentum, coupled with the intuitive nature of why it happens, it’s likely to be pretty robust.

In my next post I’ll examine adding regime and absolute return filters and further optimise the strategy by adjusting return and holding periods among other things. I’m certain that we’ll be able to improve upon these findings further. I hope you enjoyed reading and look forward to my next post with you.

Happy Trading,

PJ Sutherland

The first indicator is the percentage of equities trading above their 200-day moving average. We know that a necessary condition for a price series to be in a bear trend (moving lower) is the current price to be trading below its 200-day simple average of prices. This is a mathematical fact. Therefore, if we determine the percentage of equities in bear trends (trading below their 200-day moving average) we have a very good read on the market. We can further expand this to include the global environment by assessing the percentage of major indices in bear trends, thereby gaining a macro view of the world economy. We use both these measures in our platforms to control risk, and recently some ominous changes have taken place.
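As a sketch, the breadth reading reduces to counting how many series closed below their moving average. This is illustrative Python; the function name and the dict-of-closes input are assumptions, not the code behind our platforms.

```python
# Sketch of the breadth indicator: the percentage of price series whose last
# close is below their moving average, i.e. the percentage in bear trends.

def pct_in_bear_trend(series_by_name, window=200):
    """Percentage of series trading below their `window`-day moving average.
    Falls back to the full available history when a series is shorter."""
    bears = 0
    for closes in series_by_name.values():
        avg = sum(closes[-window:]) / min(window, len(closes))
        if closes[-1] < avg:
            bears += 1
    return 100.0 * bears / len(series_by_name)
```

Applied to the 37 global indices or the top 60 liquid JSE equities, the same calculation yields the readings discussed below.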

First, of the 37 global indices we track, 55% are now in bear trends. That comes off readings of more than 90% in bull trends throughout last year and the beginning of this year. That’s an important change. Second, 70% of liquid JSE equities (the top 60 ranked by liquidity) are now in bear trends. That’s off a recent reading of more than 60% in bull trends. Another important change.

The second indicator tracks the state of volatility in the market with an algorithm I developed called the “Synthetic Volatility Index”. Both the global and local environment are showing signs of volatility expansions, albeit marginal currently. But this can change swiftly, and I expect we’ll see such a change soon. A boost of volatility to the upside will throw the JSE firmly into bear territory.

In addition to the above, the JSE Top 40 Index dipped back below its 200-day moving average a couple of days ago, posting its third lower high. The fact that the index has not managed to move above the 200-day moving average may point to additional weakness.

It’s important to keep in mind that we do not attempt to predict the market, that’s a futile endeavour. We instead react to what the market is telling us; and currently our indicators are suggesting that it’s time to scale back risk. Our platforms are doing precisely that, moving progressively more into cash and focusing on high quality opportunities.

**Learn how to focus on high quality opportunities in QT : Free Trial.**

***Past performance is not indicative of future performance.*

Safe Trading,

PJ Sutherland


The secret to the rule is that there is no secret. It’s really very simple. Avoid stocks on the long side trading below their 200-day moving average, period. It’s a mathematical fact that a stock needs to first deteriorate below its average 200-day price before running into significant trouble. Generally, stocks don’t implode from new highs overnight; rather, there tends to be a gradual deterioration that starts to accelerate. For example, business conditions may begin to decline, which affects earnings and profitability, which in turn triggers selling by insiders and astute analysts. As the difficulties mount, financial reporting trickery may develop to boost perceived performance. Eventually, the truth washes out and markets respond accordingly with massive selling. The 200-day moving average is highly effective (I quantified it in the post linked above) because it tends to trigger in the early part of the move, when insiders and astute analysts start to sell. Have a look at the chart below for Steinhoff. We removed the stock from our tradeable universe in September last year. As a result, not one of our clients was exposed to the stock.

Here’s another example in ABIL. The reasons for failure are different, but the price behaviour is always the same, and price never lies. The landscape is littered with such examples and the evidence is clear: this simple technique really works. In fact, had you employed it, you would have outsmarted practically the entire professional asset management space in South Africa, which is now reeling from the losses. So next time you see a stock dip below its 200-day moving average, remember, the downside risks have risen considerably.

In this post, I’ll explore the performance profile of mean reversion, examine tail risk and share some methods that can be used to mitigate tail risk. Interestingly, the common approaches to controlling risk, such as the use of stop losses, actually make matters worse. I’ll share alternatives that prove more effective.

Before we discuss methods to mitigate the tail risk inherent in mean reversion, let’s first take a look at the performance profile of a simple mean reversion strategy and discuss what is meant by tail risk. The return distribution below is from a long-only mean reversion strategy that enters stocks on short-term weakness and exits on short-term strength. It does not use stop losses. We’ll examine the use of stops next; for now, I want to focus on the trade return distribution, which is typical of mean reversion.

Examining the return distribution below, it’s clear that short-term mean reversion strategies enjoy high winning rates – the green bars represent positive returns and make up the majority of the distribution – but small average returns – the most frequent return is captured by the tallest green bar, which represents returns between +2% and +3%. This combination of high winning rates and small returns is what feeds the compounding machine and leads to relatively low volatility and consistent performance. This is a very desirable attribute, and it’s why I have researched and traded this approach for the last decade.

There is however a dark side to mean reversion (isn’t there always; trading is about compromise), which is associated with the distribution’s strong negative skew in the left tail. This is represented by the greater number of extreme negative returns relative to the extreme positive returns. For instance, the best performing trade generated +14%, while the worst performing trade resulted in a -23% loss. Moreover, there are 21 positive returns above +10%, but 78 negative returns below -10%! This is the nasty negative skew in the left tail of mean reversion. For every return greater than +10%, there are roughly four negative returns that exceed -10%. These extreme left tail losses can result in significant portfolio damage if not controlled for properly. So how do we manage the risk associated with the left tail? What about stop losses?
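The tail comparison above is easy to compute from a list of trade returns. A minimal sketch, using the ±10% threshold from the discussion:

```python
# Sketch of the tail asymmetry measurement: count trade returns beyond
# +threshold and beyond -threshold (the post reports 21 versus 78 at 10%).

def tail_counts(trade_returns, threshold=0.10):
    """Return (right_tail, left_tail): counts of returns beyond +/- threshold."""
    right = sum(1 for r in trade_returns if r > threshold)
    left = sum(1 for r in trade_returns if r < -threshold)
    return right, left
```

A left count several times the right count is the negative-skew signature described above.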

An obvious starting point to control risk is the use of a stop loss. This seems intuitive since we’re looking to contain extreme losses, but as is often the case in trading, what seems logical does not always work. This is one of those instances. In fact, stop losses make matters far worse, often halving returns and doubling drawdown. Below are the return distributions when implementing a 5% and 10% stop loss respectively.

The distributions clearly show the problem when stops are applied to mean reversion – they lock in the loss of multiple trades that would have otherwise resulted in positive returns or smaller losses if closed with the original exit strategy, which waited for the trade to start its reversion. I’ve run this analysis to include a stop loss as far as 50% away from the entry point, and incredibly performance still deteriorates relative to no stops, albeit marginally. Basically, stop losses are not an effective way to control left tail losses because they tend to be triggered by extreme intraday moves driven by emotion that have a high propensity to reverse. All a stop loss does is guarantee the loss without the ability to participate in the likely recovery. That said, a stop loss may be useful when used in the context of a catastrophic loss. This can be achieved by setting the stop far enough away from price so as not to erode performance, but close enough to prevent catastrophic losses. In our case, 50% would work well.
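The mechanism can be illustrated with a toy model: if a trade’s worst open-trade excursion ever breached the stop level, the stop locks in that loss regardless of where the trade would eventually have closed. This is a hypothetical simplification (it ignores gaps through the stop and intraday fill mechanics), not the analysis engine used for the distributions above.

```python
# Toy illustration of why stops hurt mean reversion: trades that dip past
# the stop but would have reverted to a gain are converted into locked-in
# losses. Each trade is (final_return, worst_excursion), where
# worst_excursion is the most negative open-trade return.

def apply_stop(trades, stop=0.05):
    """Return the stream of realised returns after applying the stop."""
    return [-stop if worst <= -stop else final for final, worst in trades]
```

In the example below, a trade that would have closed +3% is converted into a locked-in -5% loss because it dipped -8% along the way – precisely the effect visible in the stopped distributions.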

If stop losses don’t work, how do we control for tail risk in mean reversion? Let’s examine a couple of techniques that have proven to be effective.

Set your trade size at a level that would not result in material losses if the worst hypothetical trade return were exceeded by a factor of two or three. The strategy discussed above experienced a worst loss of -23%, so by this measure we should allow for losses in our trading of around -50%. With this figure in hand, we can now set a position size that ensures we remain within our loss tolerance band. For instance, if we intend to restrict our worst losses to no more than -10% of equity, then we would allow ourselves a position size of 20% of equity (on a R100K account, that would amount to a R20K position, which would result in a R10K loss, or -10%, if the trade fell -50%). However, keep in mind that multiple extreme losses could occur together, which we need to make provision for.
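The sizing arithmetic above can be captured in a one-line helper. A sketch, with the names assumed:

```python
# Position sizing from loss tolerance: size a position so that a trade
# falling by `worst_trade_loss` costs no more than `max_equity_loss`.
# Both arguments are positive fractions, e.g. 0.10 for 10%.

def max_position_fraction(max_equity_loss, worst_trade_loss):
    """Maximum fraction of equity to allocate to one position."""
    return max_equity_loss / worst_trade_loss
```

On a R100K account with a -10% equity-loss tolerance and a -50% worst-case trade, the fraction is 0.10 / 0.50 = 20%, i.e. a R20K position – matching the worked example above.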

Set limits on the amount of exposure the strategy is allowed to assume in a single sector. Market-moving news tends to affect sectors in their entirety, so allowing a strategy to expose itself 100% to a given sector will amplify the effects of the left tail when the sector experiences material, game-changing events.

Markets tend to see very high levels of correlation in the short-term during significant broad market moves, especially when driven by fear to the downside. Therefore, allowing a strategy to gain 100% of its exposure in a single day increases mean reversion’s left tail risk as the strategy is sucked into multiple correlated positions on the same day. It’s far more effective to restrict a strategy’s allocation on any one day to a percentage of equity.

This is something our professional platform, QuantLab, does exceptionally well. Instead of allocating capital to a single entry or exit point in each trade, look to divide the capital into portions and allocate them to different entry and exit points. We will never capture the perfect bottom and top consistently through time, so why allocate capital in such a manner? By spreading capital across multiple entry and exit points, we capture the average trade through time. This has some incredibly powerful performance attributes, not least of which is that it helps reduce the impact of tail risk.
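A minimal sketch of the idea: split a trade’s capital equally across several entry levels instead of committing it all at one price. The levels and the equal weighting are illustrative assumptions, not QuantLab’s actual allocation logic.

```python
# Sketch of scaling into a trade: divide the trade's capital equally across
# several entry price levels, so the realised entry is an average rather
# than a single point.

def scaled_entries(capital, levels):
    """Return (level, allocation) pairs splitting `capital` across `levels`."""
    slice_size = capital / len(levels)
    return [(level, slice_size) for level in levels]
```

The same splitting applies to exits, so the trade captures an average entry and an average exit rather than a bet on one perfect price.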

Diversify in every possible way to reduce equity exposure to any one idea. Include many mean reversion strategies in many different global markets. Include different non-correlated strategies, for instance trend following strategies. The idea here is to have as many small positions as possible spread across as many ideas as possible. When you reach this level of diversification, position sizes are so small relative to total equity that even if a trade moves 100% against your portfolio, the losses are so small as to be almost insignificant.

Employing some or all of these techniques will help to reduce the effects of the nasty left tail inherent in mean reversion. Taken to the extreme, when implementing all the methods discussed, the risk of the left tail is essentially eliminated, or at the very least, significantly reduced. What is however clear is the ineffectiveness of stop losses in short-term mean reversion strategies.

Happy Trading,

PJ

In the Podcast I discuss mean reversion in detail as well as some of the powerful ideas that we use within our platforms. If you’re looking to gain a better understanding of our approach, or simply to broaden your trading knowledge, then this Podcast is well worth listening to. Hope you enjoy, and if you have any questions I’d be happy to answer them.

http://bettersystemtrader.com/062-mean-reversion-strategies-pj-sutherland/

Over the years I’ve tested and analysed the performance of countless trading strategies. Through the process I’ve learned that the performance profile of any strategy falls within one of the following:

- Moderate to high activity, high win rates, low average gains and consequently low risk/reward ratios and fat left tails.
- Low activity, low win rates, high average returns and consequently high risk/reward ratios and fat right tails.

The first profile is typical of mean reversion strategies, while the second is typical of trend following strategies. It doesn’t matter whether you’re employing fundamental, technical, economic or any other form of data to drive the decision making; the performance profile will resemble one of the above. This essentially has to do with the way trades are closed – if the exit strategy capitalises on long pronounced trends, then you’re going to see a performance profile that resembles that of trend following. On the other hand, if a strategy seeks to lock in small and frequent gains, the performance profile will more closely resemble that of mean reversion.

The stark differences in performance statistics across each of these approaches leads to a unique set of risks, which in turn provide some insight into the suitability of each approach with a given set of markets. Next we’ll explore these risks and then look for markets that are more conducive to reducing these risks, providing each approach with the best set of market conditions for success.

Mean reversion strategies do not let profits run since the target exit point is the mean. Essentially, they cut profits short, which results in many small gains but infrequent and large losses – make small gains every month and then lose a fortune in a single month. Therefore, the single most significant risk to mean reversion lies in the left tail, or the probability that the market will trend severely against us (price shocks).

Trend following strategies let profits run, but since trends are rare, they experience many small losses and few large gains. Although losses are small, their frequency can result in large overall losses to a portfolio. Therefore, the primary risk to trend following is the cumulative effect of many consecutive losses, or said differently, the market’s inability to trend.

We can then conclude that mean reversion is better suited to markets that are less susceptible to powerful trends, while trend following is better suited to markets that tend to display powerful trends. As a result, we tend to find that either mean reversion or trend following work at any given moment, but not at the same time, that is, they’re mutually exclusive.

I’m now specifically examining the equity markets. Let’s see if we can uncover segments of the market that are better suited to each approach.

Which market segment is more prone to trend? What about large cap stocks? Well, for one thing these stocks are broadly followed, have already disrupted their respective markets and are well established. Therefore, the ability of large cap stocks to continually deliver products or services with massive market impact deteriorates, reducing the probability of significant future price trends.

What about small cap and mid cap stocks? These companies are still in the process of establishing themselves, are not as broadly followed and may provide technologies or services with the potential to significantly disrupt markets resulting in massive growth and powerful price trends.

The above premises are intuitive and make economic sense. Moreover, they bear out in the data. I’ve quantified this extensively and found it to hold universally, no matter which global exchange we consider. With this knowledge we can now assign the most suitable approach to each market segment, thereby boosting our chances of success.

Trend following strategies are far more effective in the mid cap and small cap market segment (long only – shorting the equity market to capture trends is exceedingly difficult due to the strong upward bias that equities display). These market segments provide the best hope of capturing extended price trends that can easily offset the many small losses that result from high losing rates and are consequently perfectly suited to trend following.

On the other hand, mean reversion strategies work much better on large cap stocks. These stocks have reduced price shock risk and their strong following means professionals actively support stocks during sell-offs (institutions love buying dips) and often engage in profit taking during short-term bursts to the upside, which results in precisely the behaviour we’re after for successful mean reversion.

Trends take time to mature, which is why trend following approaches are better suited to longer time frames or longer holds. In fact, using weekly or monthly data yields better results than daily data. Because mean reversion strategies actively seek to avoid long powerful trends, they tend to work better on shorter time frames. Therefore, daily data is more appropriate; and unless you have access to fundamental data that you can use as an overlay to gauge the health of a stock, mean reversion does not work well on weekly or monthly data because price is given too much room to mature into a powerful trend against us.

The unique performance characteristics of mean reversion and trend following make them ideal complements within a single portfolio. Mean reversion works well to bring some consistency to a portfolio, while trend following keeps the door open for the rare but significant right tail trends that can lead to fantastic outsized returns. Blending the two approaches in a single portfolio yields very desirable trade return distributions that enjoy both higher win rates and right skew. As a result, it’s my view that a blended approach is as close to holy grail as we can get. And the exciting news is that you can expect to see a multitude of powerful trend following strategies added to QuantLab within the next twelve months. Including trend following in our diverse offering will greatly improve our diversification abilities and further empower clients to build truly powerful and robust portfolios that enjoy exceptional trade return distributions.

Happy Trading,

PJ

A well-known and often quoted measure of risk is the Sharpe ratio. Developed in 1966 by Stanford finance professor William F. Sharpe, it measures the desirability of an investment by dividing the average period return in excess of the risk-free rate by the standard deviation of the return generating process. In simple terms, it provides us with the number of additional units of return above the risk-free rate achieved for each additional unit of risk (as measured by volatility). This characteristic makes the Sharpe ratio an easy and commonly used statistic to measure the skill of a manager, and it can be interpreted as follows: SR > 1 = lots of skill, SR 0.5–1 = skilled, SR 0–0.5 = low skill, SR = 0 = no skill, and conversely for negative numbers. Although the Sharpe ratio can be an effective means of analysing investment performance, it has several shortcomings that one needs to be aware of and which I’ll discuss below. But before I do, here is the formula for calculating the Sharpe ratio:

**(Mean Portfolio Return – Risk-Free Rate) / Standard Deviation of Portfolio Return**
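Computed from a list of period returns, the formula looks as follows. A sketch for illustration: annualisation is omitted, and the population standard deviation is used.

```python
# Sketch of the Sharpe ratio formula above, from a list of period returns.

def sharpe_ratio(returns, risk_free=0.0):
    """(mean return - risk-free rate) / standard deviation of returns."""
    n = len(returns)
    mean = sum(returns) / n
    variance = sum((r - mean) ** 2 for r in returns) / n  # population variance
    return (mean - risk_free) / variance ** 0.5
```

Note that the denominator treats every deviation from the mean identically, which is exactly the flaw discussed next: a large positive outlier raises the standard deviation just as much as a large negative one.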

The most obvious and glaring flaw is the fact that the Sharpe ratio does not differentiate between upside (good) and downside (bad) volatility. Thus, a performance stream that experiences more positive outliers (a good thing for investors) will simultaneously experience elevated levels of volatility which will decrease the Sharpe ratio. This means that one can improve the Sharpe ratio for strategies that exhibit a positive skew in their return distribution (many small losses with large infrequent gains), for instance trend following strategies, by simply removing some of the positive returns, which is nonsensical because investors generally welcome large positive returns.

On the flipside, strategies with a negative skew in their return distribution (many small gains with large infrequent losses), for instance option selling strategies, are much riskier than the Sharpe ratio would have us believe. They often exhibit very high Sharpe ratios while they are “working” because they tend to produce consistent small returns that are punctuated by rare but painful negative returns.

The reason for the shortcomings discussed above can be attributed to the fact that the Sharpe ratio assumes a normal distribution of returns. Although strategy and market returns can resemble a normal distribution, they generally are not normal; if they were, some of the market moves we’ve experienced within the last decade would be once-in-a-blue-moon events, yet they occur far more frequently. This is the result of the phenomenon referred to as “fat tails”, or the market’s higher probability of realising more extreme returns than one would expect from a normal distribution. This, in and of itself, is reason enough to be dubious of blindly evaluating a manager or strategy’s performance based on a Sharpe ratio without an understanding of exactly how the returns are made.

One also needs to place the reason for the Sharpe ratio’s initial development into perspective. It was conceived as a measure for comparing mutual funds, not as a comprehensive risk/reward measure. Mutual funds are a very specific type of investment vehicle that represent an unleveraged investment in a portfolio of stocks. Thus, a comparison of mutual funds in the 60’s, when the Sharpe ratio was developed, was one between investments in the same markets and with the same basic investment style. Moreover, mutual funds at the time held long-term positions in a portfolio of stocks. They did not have a significant timing or trading component and differed from each other only in their portfolio selection and diversification strategies. The Sharpe ratio therefore was an effective measure to compare mutual funds when it was first developed. It is however not a sufficient measure for comparing alternative investments such as many hedge funds because they differ from unleveraged portfolios in material ways. For one thing, many hedge funds employ short-term trading strategies and leverage to enhance returns, which means when things go wrong money can be lost at a far greater rate. Moreover, they often do not provide the same level of internal diversification nor have lengthy track records.

Investors who do not understand the difference between long-term buy-and-hold investing and trading often incorrectly measure risk as smoothness in returns with the Sharpe ratio. Smoothness does not equal risk. In fact, there is often an inverse relationship between smoothness and risk – very risky investments can offer smooth returns for a limited period. One need only consider the implosion of Long-Term Capital Management, which delivered very smooth and consistent returns (an excellent Sharpe ratio) before being caught out by the Russian bond default that triggered a financial crisis.

The strategies that we employ in QuantLab would be categorised as alternative in nature and do not mimic typical mutual funds. Therefore, the Sharpe ratio is not the most suitable measure to assess our performance. So, let’s examine a couple of alternatives to the Sharpe ratio.

The Sortino ratio is similar to the Sharpe ratio but differs in that it uses the downside deviation of the investment rather than the full standard deviation – i.e., it considers only those returns falling below a specified target, for instance a benchmark. Formula:

**(Mean Portfolio Return – Risk-Free Rate) / Standard Deviation of Negative Portfolio Returns**

The Sortino ratio in effect removes the Sharpe ratio’s penalty on positive returns and focuses instead on the risk that concerns investors the most, which is volatility associated with negative returns. It is interesting to note that even Nobel laureate Harry Markowitz, when he developed Modern Portfolio Theory (MPT) in 1959, recognized that because only downside deviation is relevant to investors, using it to measure risk would be more appropriate than using standard deviation.

We can see the effects of removing the penalty on positive outliers with the Sortino ratio by examining our live performance in QuantLab, which to date exhibits a strong positive skew – we’ve enjoyed several large positive outliers – so the Sharpe ratio unfairly penalises our performance. In fact, if we remove the effect of positive volatility (good for investors), QuantLab’s risk-adjusted performance improves from 1.11 (Sharpe) to 1.85 (Sortino). However, since the return stream of QuantLab is asymmetric – that is, it displays skew and is not symmetric around the mean – the standard deviation is not an adequate risk measure (as discussed above). Although the Sortino ratio improves on the Sharpe ratio for performance profiles that exhibit positive skew, it still suffers from the flawed assumption that returns are normally distributed, which is required when using the standard deviation to measure risk.
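
To make the comparison concrete, here is a minimal sketch of both ratios following the simplified formula above, where downside deviation is taken as the standard deviation of the negative returns only (the function name and the toy return stream are illustrative, not QuantLab’s actual data):

```python
import numpy as np

def sharpe_sortino(returns, risk_free=0.0, periods=252):
    """Annualised Sharpe and Sortino ratios from a series of periodic returns.
    Downside deviation here is the standard deviation of negative excess
    returns only, matching the simplified formula in the text."""
    excess = np.asarray(returns) - risk_free / periods
    ann_mean = excess.mean() * periods
    # Sharpe penalises all volatility, including the upside
    sharpe = ann_mean / (excess.std(ddof=1) * np.sqrt(periods))
    # Sortino penalises only the downside
    downside = excess[excess < 0]
    sortino = ann_mean / (downside.std(ddof=1) * np.sqrt(periods))
    return sharpe, sortino

# Toy positively skewed return stream: normal noise plus a few large wins
rng = np.random.default_rng(1)
rets = rng.normal(0.0005, 0.01, 1000)
rets[rng.choice(1000, 20, replace=False)] += 0.05

sharpe, sortino = sharpe_sortino(rets)
print(f"Sharpe {sharpe:.2f}  Sortino {sortino:.2f}")
```

Because the large positive outliers inflate the total standard deviation but not the downside deviation, the Sortino ratio comes out higher than the Sharpe ratio for this kind of return profile, mirroring the improvement described above.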

There is however an alternative risk/reward measure free of the shortcomings discussed above which I personally prefer to use when evaluating performance. I’ll explore this measure next.

In an absolute sense, the most critical risk measure from an investor’s perspective is maximum drawdown, because it measures the worst losing run during a strategy’s performance. A pragmatic approach to measuring risk/reward, then, is to determine how well we’re compensated for assuming the risk associated with drawdown. This is precisely what the MAR ratio achieves. It was developed by Managed Accounts Reports (LLC), which aptly reports on the performance of hedge funds. The ratio is simply the compounded return divided by the maximum drawdown. Provided we have a large enough sample, the MAR ratio is a quick, easy-to-use and direct measure of risk/reward; it tells you how well you’re being compensated for having to risk your capital through the worst losses. The formula follows:

**CAGR / Max DD**

I find this ratio immensely useful. It’s simple; it does not rely on flawed assumptions about market return distributions (unlike the standard deviation used in both the Sharpe and Sortino ratios); and it measures what’s important to investors: the number of units of return delivered for every unit of direct risk (maximum drawdown) assumed. When we use this metric to measure our live performance to date, we find that QuantLab has delivered three units of return for every unit of risk; that is, our live MAR ratio is currently 3.
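
As a rough sketch, the MAR ratio can be computed directly from an equity curve; the function name and the hypothetical equity values below are my own illustrations, not figures from QuantLab:

```python
import numpy as np

def mar_ratio(equity, periods_per_year=252):
    """MAR ratio = CAGR / maximum drawdown, computed from an equity curve."""
    equity = np.asarray(equity, dtype=float)
    years = (len(equity) - 1) / periods_per_year
    cagr = (equity[-1] / equity[0]) ** (1 / years) - 1
    # Running peak gives the high-water mark; drawdown is the loss from it
    running_peak = np.maximum.accumulate(equity)
    max_dd = np.max(1 - equity / running_peak)
    return cagr / max_dd

# Hypothetical year-end equity values with one ~20% drawdown in the middle
curve = [100, 110, 121, 133, 106, 117, 129, 142, 156, 172, 189]
mar = mar_ratio(curve, periods_per_year=1)
print(f"MAR ratio: {mar:.2f}")
```

For this toy curve the CAGR is about 6.6% and the worst peak-to-trough loss about 20%, giving a MAR ratio of roughly 0.32.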

The MAR ratio is a transparent and direct measure of risk and reward that is impossible to manipulate (the Sharpe and Sortino ratios can be manipulated higher in several devious ways) and is thus my preferred measure of risk-adjusted performance when evaluating strategies.

We all have unique return expectations and tolerances for pain. For this reason, no single measure appeals to everyone. In my personal trading, I analyse the MAR ratio, maximum drawdown and overall return, and I like to keep an eye on the smoothness with which returns are generated by examining the Coefficient of Variation, Sharpe and Sortino ratios. Keep in mind that, regardless of the statistic you use, these are estimates at best. Therefore, one can never be too conservative when analysing past performance. Given a long enough timeline, every strategy will exceed its maximum drawdown. This is a harsh reality that we as traders need to accept and prepare for, so it’s a good idea to be suspicious of any statistic and to build buffers into our expectations to handle the new extremes that will likely be posted in the future.

As always, I welcome your thoughts and suggestions.

Happy Trading,

PJ

A blog series contrasting the key distinctions between trend following and countertrend strategies across building, testing and trading. In this post we examine the effects of data integrity and simulated trade sample size on backtested performance.

One of the major obstacles for traders looking to research trend following models is data. Since trend following models aim to “cut losses short and let winners run”, profitable trades can last for many months or even years. This inherent characteristic has two important implications. First, it results in much longer trade durations and consequently fewer simulated trades from a backtest. Second, due to the strong positive skew in trade returns, a small number of highly rewarding trades contribute the majority of the overall return. Combined, these characteristics mean that trend following strategies are very sensitive to potential data biases – they cannot tolerate data that has not been fully and properly adjusted for corporate actions and survivorship bias. “Garbage in, garbage out” aptly describes the effect of poor quality data on the backtesting process for trend following. And you’re out of luck if you think you can simulate the effects of perceived data biases – the concentration of overall return, the relatively low number of simulated trades and the material impact of survivorship bias make it near impossible to estimate the effects of known data shortcomings when employing poor quality data for trend following backtesting.

Unfortunately, few retail offerings provide the rigour needed to ensure properly adjusted price datasets. It’s however possible to acquire data that has been professionally prepared for commercial entities in the asset management space, but these are costly and generally out of reach to the private investor.

Successful countertrend strategies, on the other hand, are more short-term in nature, with trades lasting days as opposed to months. The shorter holds result in a much higher number of simulated trades from a backtest. Another important distinction is that countertrend strategies have relatively low risk/reward ratios but high win rates, so their performance does not depend on a few highly rewarding outcomes, but rather on many small gains. These attributes – a large number of historical trades with short durations and low trade return concentration – make countertrend strategies less sensitive to data integrity issues. One additional upside of the low trade return concentration (many trades contribute to the overall strategy return, as opposed to a few trades as with trend following) is the ability to simulate some of the likely effects of known data integrity issues on performance. For instance, we could remove the top 10% of most profitable trades from our simulated database to allow for survivorship bias and corporate actions, and then rerun the test to determine the effect on overall performance. Essentially, we can emulate a test done on high quality data by massaging the performance numbers downward to allow for perceived data integrity issues.
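
The “remove the top 10% of trades” stress test described above can be sketched as follows; the function name and the simulated trade log are illustrative assumptions, not an actual backtest database:

```python
import numpy as np

def stressed_return(trade_returns, drop_top_pct=0.10):
    """Total compounded return after discarding the most profitable trades,
    as a crude allowance for survivorship bias and unadjusted corporate actions."""
    trades = np.sort(np.asarray(trade_returns))       # ascending order
    keep = trades[: int(len(trades) * (1 - drop_top_pct))]
    return np.prod(1 + keep) - 1

# Hypothetical countertrend trade log: many small wins, some small losses
rng = np.random.default_rng(7)
trades = rng.normal(0.004, 0.01, 500)

full = np.prod(1 + trades) - 1
stressed = stressed_return(trades, drop_top_pct=0.10)
print(f"full {full:.1%}  stressed {stressed:.1%}")
```

Because countertrend returns are spread across many small trades, the stressed figure remains a meaningful (if deliberately pessimistic) estimate; applying the same haircut to a trend following trade log would gut the handful of outliers that carry the whole result.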

Many retail offerings provide cheap end-of-day equity price data that are “good enough” to test countertrend strategies. For most retail traders, countertrend strategies are better suited to the data solutions currently available. If you do not have the budget, nor understand the intricacies involved in testing long-term strategies, then short-term strategies, such as a countertrend approach, are likely a better place to start.

As discussed above, countertrend strategies generate a much larger number of simulated trades during a backtest relative to trend following strategies. This is one of the most desirable aspects of a short-term approach, because sample size is the single most significant contributor to our confidence in estimating the future – the more simulated trades we have, the higher our confidence in future performance. Smaller samples are more susceptible to the effects of good or bad luck during a backtest, which can over- or underestimate the underlying edge that a strategy exploits. Consequently, the expected performance in any given year for a trend following strategy is far less certain than for a countertrend strategy – our confidence bands are set wider as a direct result of the smaller number of historical trades.

After data integrity, trade sample size from a backtest is the most effective metric to gauge the robustness of a strategy, and oddly enough the least spoken about in trading circles. Sample size is so powerful that it doesn’t matter whether or not we understand why a given strategy works – as the trade sample increases, the probability that the strategy works due to chance alone decreases, and ultimately approaches zero. This fact alone is reason enough for most private investors to abandon research on long-term approaches and instead focus on short-term approaches.
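
One simple way to see the power of sample size is through the t-statistic for the hypothesis that the average trade return is zero: the standard error of the mean shrinks with the square root of the number of trades, so the same per-trade edge becomes far more statistically convincing with more trades. The numbers below are hypothetical, chosen only to illustrate the scaling:

```python
import math

def t_stat(mean_trade, std_trade, n_trades):
    """t-statistic for 'average trade return is zero'.
    The standard error shrinks with sqrt(n), so a larger trade sample
    turns the same edge into a far stronger statistical signal."""
    return mean_trade / (std_trade / math.sqrt(n_trades))

edge, vol = 0.002, 0.02                    # identical per-trade edge and volatility
print(t_stat(edge, vol, 100))              # few trades (trend following): 1.0
print(t_stat(edge, vol, 2500))             # many trades (countertrend): 5.0
```

With 100 trades the edge is indistinguishable from luck; with 2,500 trades the same edge sits five standard errors from zero, which is why the probability of a strategy working by chance alone falls towards zero as the trade sample grows.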

Countertrend strategies, or short-term strategies in general, are much more forgiving when it comes to price data integrity issues. And regardless of data quality, countertrend strategies provide higher levels of confidence in future performance thanks to the greater number of simulated trades relative to their trend following counterparts. For these reasons, most private investors will be better served by focusing their energies on developing short-term trading strategies rather than long-term strategies.

In my next post I’ll explore and discuss the most appropriate markets for each approach. As always, I welcome your thoughts and suggestions.

Happy Trading,

PJ