I have written a lot in the past about the cointegration of ETF pairs, and how this condition can lead to profitable pairs trading. However, as every investment advisor could have told you, past cointegration is no guarantee of future cointegration. Often, cointegration for a pair breaks down for an extended period, sometimes for half a year or longer. Naturally, trading the pair during this period is a losing proposition, but abandoning it completely is also unsatisfactory, since cointegration often mysteriously returns after a while.
A case in point is the ETF pair GLD-GDX. When I first tested it in 2006, it was an excellent candidate for pairs trading, and I not only traded it in my personal portfolio, but we traded it in our fund too. Unfortunately, it went haywire in 2008. We promptly abandoned it, only to see the strategy recover sharply in 2009.
So the big question is: how do we know whether the loss of cointegration is temporary, and how do we know when to resume trading a pair?
To answer the first question, it is often necessary to go beyond the technicals and delve into the fundamentals of the pair. Take GLD-GDX as an example. When I taught my pairs trading workshop in South Africa, several portfolio managers in attendance told me that there are 2 reasons why the gold spot price diverged from gold miners' stock prices. Firstly, due to the sharp increase in oil prices during the first half of 2008, it cost the gold miners a lot more in energy to extract the gold from the ground, hence the gold miners' income lagged the rise in gold prices. Secondly, many gold miners hedge their exposure to fluctuating gold prices with derivatives. Hence when the gold price rises beyond a certain limit, the gold miners cease to benefit from the rise. Recently, the Economist magazine published an article that essentially confirms this view. But further confirmation can be gained by introducing the oil (futures) price into the cointegration equation. If you do that, and if you trade this triplet of GLD-GDX-USO, you will find that it is profitable throughout the entire period from 2006-2010. If you find trading a triplet too complicated, you can at least backtest a trading filter such that you cease to trade GLD-GDX whenever USO goes beyond (above, and maybe below too) a certain band. If you have done all these backtests, you will have a plan in place to tell you when to resume trading this pair. But even if you haven't done this backtest, and you find that you need to stop trading a pair because of accumulating losses, you should at least continue paper trading it to see when it is turning around!
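For concreteness, here is a minimal Python sketch of the kind of USO filter described above. The band definition (a rolling mean plus or minus 2 standard deviations of USO) is my illustrative assumption, not a backtested choice:

```python
import numpy as np

def uso_trading_filter(uso, lookback=250, n_std=2.0):
    """Return a boolean array: True on days when GLD-GDX trading is
    allowed, False whenever USO strays outside a rolling band of
    n_std standard deviations around its rolling mean."""
    p = np.asarray(uso, dtype=float)
    allowed = np.ones(len(p), dtype=bool)
    for t in range(lookback, len(p)):
        window = p[t - lookback:t]
        allowed[t] = abs(p[t] - window.mean()) <= n_std * window.std()
    return allowed
```

In a backtest you would simply zero out the GLD-GDX positions on days where the filter returns False.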
(By the way, if you think trading ETF pairs offers too low a return due to the low leverage allowed, consider the single stock futures on ETFs trading on the OneChicago exchange. Certainly the futures on GDX are available there, while you might just trade the futures GC and CL directly on the CME. There is, of course, the usual caveat that applies to futures pairs trading: the switch from contango to backwardation and vice versa can ruin many a pairs-trading strategy, even if the spot prices remain cointegrated. But that's a story for another time.)
141 comments:
I've done a quick analysis of a GLD-GDX-USO triplet using PCA. Just can't seem to find a portfolio that is more stable than GLD-GDX for the whole 2007-now period.
Could you give an example of a triplet ratio that holds through the 2008 market?
sjev,
Try 0.5350*GLD-0.7387*GDX+0.0293*USO.
GLD, GDX, USO refer to their prices, not returns. This triplet should be stationary in the period I mentioned. By the way, I am not sure the PCA is the proper way to analyze cointegration.
Ernie
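For readers who want to experiment, forming the triplet spread from the quoted hedge ratios is just a fixed linear combination of the three price series. A hedged Python sketch (synthetic inputs stand in for the real adjusted closing prices):

```python
import numpy as np

# Hedge ratios quoted above; apply them to aligned daily price series.
WEIGHTS = np.array([0.5350, -0.7387, 0.0293])  # GLD, GDX, USO

def triplet_spread(gld, gdx, uso):
    """Market value of the portfolio 0.5350*GLD - 0.7387*GDX + 0.0293*USO."""
    return np.column_stack([gld, gdx, uso]) @ WEIGHTS
```

The resulting series is what one would feed to a stationarity test or a mean-reversion strategy.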
Ernie, would the pair return to its previous equilibrium? In other words, would the trade breakeven after a number of months or years?
Thanks
Anon,
The pair would not necessarily return to its previous equilibrium, but the triplet would, as evidenced by its cointegration property throughout the period.
However, the profitability of just trading the pair will still return if you use moving averages etc. instead of static parameters.
Ernie
Ernie, I suppose you ran OLS regression on levels, with 3 variables.
Or did you use Johansen test?
Jozef,
I used Johansen tests to get the hedge coefficients (via the eigenvectors). But you can also use regression of one variable (level) against the other 2.
Ernie
hello ernie,
i work in the industry and have your book and some others about pairs trading, but i never seem to find any info about cointegrated triplets.
could you please share some of your knowledge about where i can find papers, books, or anything about this topic? (since you don't have any seminars scheduled for us here in Brasil)
thanks
eduardo
Hi Eduardo,
The concept of cointegration has always been applied to more than 2 time series in econometrics. The Johansen test, for example, is designed for this situation. You can read the documentation at spatial-econometrics.com to see how it is used for multiple time series.
I have not seen this explored in the trading literature though.
Ernie
Below a link to a multivariate pairs trading strategy:
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=952782
@sjev – do you use PCA in the traditional statistical approach (arbitrage pricing theory): decompose the covariance matrix (correlation vs. cointegration) of security returns and identify the factors? If yes, then the assumption is that the factors are constant, which doesn't hold true for the mean-reverting spread.
@Ernie – is it possible that the cointegration break is due to the fact that gold's ability as a hedging instrument has changed? At least for the past 12 months the price of gold and the S&P 500 moved in tandem.
Carol Alexander is a very good source for cointegration.
Dan,
I don't think the cointegration break was due to a change in gold's hedging ability, because the break only occurred in 2008. I think the 2 reasons I mentioned are more likely.
Ernie
Hi Ernie,
Can you kindly advise me on the tax payable by hedge funds on their trading gains? Thanks.
Ben
Hi Ben,
First, please let me say that I am not a CPA and am not qualified to give tax advice.
Just to share my own experience though, most hedge funds are organized as limited partnerships, and thus all taxable profits pass through to the limited partners. So the limited partners have to pay tax, but not the fund itself.
Ernie
Hi Ernie, some suggest to use Hurst Exponent to trade mean reversion, what is your opinion.
http://10outof10.blogspot.com/2008/03/introduction-to-hurst-exponent.html
Thx
Hi Anon,
I believe Hurst exponent can in fact be a reasonable measure of whether a time series is mean-reverting.
Ernie
Hi Ernie
Would you mind sharing your experience or articles on how to apply the Hurst exponent to trade mean reversion in a proper way? I applied it to FX trading in the past but was not so successful. I don't feel it is as simple as just looking at whether Hurst < 0.5.
Thx
Hi Kenny,
I believe the usage of the Hurst exponent is the same as using the ADF test to check for stationarity, or calculating the half-life of mean reversion using the Ornstein-Uhlenbeck formula. Even after any of these criteria indicates that the time series is mean-reverting, you still have to find a suitable trading strategy to take advantage of the mean reversion. For example, if you trade it using Bollinger bands, then you still need to decide what lookback to use, and whether you should enter at 1, 2 or 3 standard deviations.
Ernie
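To make the Bollinger-band idea concrete, here is a minimal Python sketch of such a strategy on a precomputed spread. The 20-day lookback, the 2-sigma entry, and the exit at the mean are exactly the free parameters Ernie says you must still choose; none are recommendations:

```python
import numpy as np

def bollinger_positions(spread, lookback=20, entry_z=2.0):
    """Hold -1 (short the spread) after z > entry_z, +1 (long) after
    z < -entry_z, and exit when the z-score crosses back through 0."""
    s = np.asarray(spread, dtype=float)
    pos = np.zeros(len(s))
    for t in range(lookback, len(s)):
        w = s[t - lookback:t]
        z = (s[t] - w.mean()) / w.std()
        prev = pos[t - 1]
        if z > entry_z:
            pos[t] = -1
        elif z < -entry_z:
            pos[t] = 1
        elif (prev == -1 and z <= 0) or (prev == 1 and z >= 0):
            pos[t] = 0
        else:
            pos[t] = prev
    return pos
```

The daily P&L of the strategy is then pos shifted by one bar times the change in the spread.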
Ernie: you mentioned using moving average instead of static parameters. could you say more on how to dynamically update the parameters? is there any good reference on that?
ww,
By "moving averages" I just mean using the usual Bollinger bands (which are composed of moving averages and standard deviations) method to determine entries and exits into a pair, instead of entering and exiting at a fixed spread value.
Ernie
Hi Ernie,
How would you optimise for a portfolio of pairs? If I use the mean-variance approach of Markowitz, would I have to constrain the weights of the portfolio to be >= 0?
Thanks!
Hi Anon,
You can treat each pair as an asset with its own returns and stddev, which you can long only.
(By Long, we mean long that strategy, not that we are long one side of the pair and short the other side.)
Ernie
Hello Ernie,
I tried 0.5350*GLD-0.7387*GDX+0.0293*USO and it seems that this triplet is trend stationary (i.e. with a deterministic trend) and thus cannot be used for a cointegration strategy.
I ran a Johansen test for the 2006-01-01 to 2011-01-01 period and I found that a stationary triplet is: (1; -4.21; 0.16).
Do you agree with me?
Hi Ernie,
So what would you do if you were trading a pair and it was no longer cointegrated (say by the ADF test)?
1.Immediately get out of the pair
2.Get out normally (take profit or stop loss) and do not re-enter trade
3.Continue trading pair normally for a set amount of time and stop
4.Something else?
Thanks
Hi edba,
I used data from 20060523-20100521 to compute the hedge ratios for GLD-GDX-USO. If you use a different data period, you could certainly get different ratios.
Ernie
Anon,
If you are trading a "static" mean-reversion strategy, then yes, loss of cointegration compels you to liquidate positions immediately.
"Static" means you don't continuously update the mean and stddev of the spread.
However, short-term mean-reversion does not require cointegration/stationarity, so you can continue trading as long as the time series mean-reverts.
Ernie
Hi Ernie,
let me ask my question again:
The triplet that you suggest (0.5350*GLD-0.7387*GDX+0.0293*USO) has a strong trend (ie is not stationary). How do you use it for cointegration?
edba,
My hedge ratios are given by the Johansen tests, which find the 3 instruments to cointegrate. If you find them to have strong trend in another data period, you will certainly need to use Johansen to compute a new set of hedge ratios. In the period I tested, a mean-reversion strategy is quite profitable with this triplet.
Ernie
Hi, Ernie:
What do you think of the possibility of pairs trading a commodity ETF with its underlying futures contracts (contract rolling issues aside)? Right now your spread suggestions are all ETF/ETF combination.
Thanks.
Fuzhi
Hi Fuzhi,
If the commodity ETF holds futures also, then there is no issue, and it should cointegrate very well with the underlying futures.
One possible issue is that the ETF may hold contracts of different months. In that case, you have to hedge with contracts from those months as well.
Ernie
Thanks a lot Ernie.
Will you by any chance offer a workshop in the New York area this year?
Fuzhi
Fuzhi,
There may be a plan to offer this in New York in January 2012. Please check back in a few months.
Ernie
Hi Ernie,
I have been trying to trade pairs using cointegration. I generally use 2 yrs of rolling data to test cointegration and a shorter period to find the static hedge ratio. Once I find cointegration and take the trade, I keep calculating the spread every day by rolling the shorter period to check for mean reversion. Though the spread mean-reverts nicely, the trades are not profitable. Could you please advise what's wrong with my approach?
Hi basant,
It sounds like your lookback period for your moving averages and/or standard deviations may be too short.
Ernie
Hi Ernie,
The lookback period is at least sixty days. Do you advise a longer lookback period?
I think the problem is that I calculate a new beta and new spread as I roll the lookback period, and take the sigma of the new spread. Since the original portfolio was formed using a different beta, and the spread is with respect to a different beta, the trades are not profitable with respect to the rolling mean. In this case I have two choices: 1) Rebalance the portfolio with respect to the new beta, which leads to higher transaction costs. 2) Keep the beta constant and take the mean and sigma of the rolling spread (which is the same as a Bollinger band).
Could you please let me know your views?
Hi basant,
I like method 2. But in any case, your lookback should be set by halflife calculations, as recommended in my book. A long lookback will prevent the problem you are experiencing.
Ernie
Thanks Ernie for your comments. I looked in your book for the lookback period and didn't find any reference to the relation between lookback period and half-life. Could you please throw some more light on this?
Hi Basant,
Yes, I didn't mention this relationship in my book (I discussed this in my workshops.) But you can simply try setting your lookback to (or greater than) the halflife of mean reversion. This usually works out pretty well.
Ernie
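For reference, a common way to estimate that half-life is to fit a discrete Ornstein-Uhlenbeck process: regress the daily change of the spread on its lagged (demeaned) level and set halflife = -log(2)/lambda. A Python sketch (the book's exact formula may differ in details):

```python
import numpy as np

def half_life(spread):
    """Half-life of mean reversion from an OU fit: dy = lambda * y_lag + noise."""
    y = np.asarray(spread, dtype=float)
    dy = np.diff(y)
    y_lag = y[:-1] - y[:-1].mean()
    lam = np.dot(y_lag, dy) / np.dot(y_lag, y_lag)  # OLS slope, no intercept
    return -np.log(2) / lam  # in bars; negative means no mean reversion

# Example: an AR(1) series with phi = 0.9 has theoretical
# half-life -log(2)/log(0.9), about 6.6 bars.
```

Setting the Bollinger lookback to this number (or a bit more) is the rule of thumb Ernie describes.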
Thanks a lot Ernie for your valuable comments.
Hi Ernie,
1) Does the optimal regression period of a pair depend on the half-life? I mean, is it wise to take the beta of the period where the half-life is shortest?
2) Should the prices of stocks be detrended before testing for cointegration?
Hi Anon,
Yes, generally you should set the period of regression to the halflife, unless the halflife is too short to give a meaningful fit.
You should not detrend stocks beforehand, otherwise the cointegration test has no meaning.
Ernie
Hi Ernie,
Could you elaborate on the following?
The half-life is calculated after running the regression. Therefore how does one set the period of regression prior to calculating the half-life?
How short is a short half-life? Sometimes I get a half-life of 8 days in a regression period of 2 yrs. Would this be considered a short half-life?
Hi Anon,
That is a good question and a good example of the situation one faces in numerical methods quite often.
One typically initiates the iterations by guessing an approximate lookback, and uses this for the regression and half-life computations. Then you set the new lookback equal to this half-life, and repeat the process. If this process converges, i.e. if the resulting half-life ceases to change much with each iteration, you have found the correct lookback to use.
I think a linear regression fit should have at least 10 data points.
Ernie
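The iteration just described can be written as a small fixed-point loop. The 10-point floor and the iteration cap below are my assumptions; convergence is not guaranteed, so the function returns None when the loop fails to settle:

```python
import numpy as np

def half_life(spread):
    # OU fit for the half-life, as in the halflife discussion above
    y = np.asarray(spread, dtype=float)
    dy = np.diff(y)
    y_lag = y[:-1] - y[:-1].mean()
    lam = np.dot(y_lag, dy) / np.dot(y_lag, y_lag)
    return -np.log(2) / lam

def converged_lookback(series, guess=100, max_iter=20):
    """Iterate lookback -> half-life of the trailing `lookback` bars
    until the lookback stops changing."""
    series = np.asarray(series, dtype=float)
    lookback = max(int(guess), 10)
    for _ in range(max_iter):
        hl = half_life(series[-lookback:])
        if not np.isfinite(hl) or hl <= 0:
            return None            # window no longer looks mean-reverting
        new_lb = max(int(round(hl)), 10)  # keep >= 10 points for the fit
        if new_lb == lookback:
            return lookback
        lookback = new_lb
    return None                    # did not converge
```

When it fails to converge, falling back to an in-sample optimized lookback, as Ernie suggests further down, is the practical alternative.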
Thanks Ernie for the clarification.
Does that mean the beta used in trading is different from the cointegration beta?
Anon,
Yes, typically the beta used in trading uses a shorter lookback than that used in cointegration test.
Ernie
Hi Ernie,
I am trying to replicate your suggestions but probably making some mistakes and not getting the expected results.
1) For example, I started with a lookback of 100 days and got a half-life of 15 days. Next, with a lookback of 15 days, I get a half-life of 5 days, and with a lookback of 5 days the HL comes to 2 days, and so on. The convergence doesn't happen.
2) When I calculate beta with a lookback of 10 days for the above data, the sigma of the spread is quite a bit smaller than the 100-day sigma. Out of sample, the spread crosses even 6 times the 10-day lookback sigma, whereas it remains within 3 times the 100-day sigma. Which sigma should I use?
Hi Anon,
Did you really use 5 days for linear regression? That's too short. In any case, I think your HL calculations may not be correct. If you can't find the correct lookback this way, just try to find the optimal lookback in-sample, and test it out-of-sample. The same goes for your sigma computations.
Ernie
Hi Ernie,
Thanks much for the valuable information shared on your blog.
It seems that a lookback period of 2 years for the cointegration test is quite usual. However, do you believe that a shorter period might be useful to detect when cointegration breaks down? (In other words, could a cointegration test on a 6-month period detect that a pair is no longer stationary when the same test on 2 years would not?) Thanks.
-Henri
Hi Henri,
It is actually quite hard to detect the breakdown of cointegration except in hindsight, maybe a year afterwards. This is because any drawdown in a pairs strategy can be interpreted as a breakdown. But only when the drawdown lasts for, say, a year can we say that the cointegration is really gone.
The best we can do is to examine past periods of such temporary breakdowns and identify the fundamental reason/variable for it, and then add an additional cointegrating instrument that hopefully will take into account the extra variable.
Ernie
Apologies if this is slightly off topic - it's less about when cointegration breaks down and more about the reliability/stability of a particular cointegration test.
Using EXACTLY the same data set, I'm seeing significantly different results between sequential tests on that same data when using the Spatial Econometrics cadf.m MATLAB function. Anyone else experienced this? Thx Andy
Hi Andy,
By sequential tests, do you mean running cadf on, say, the 2008 data of a price series over and over again and getting different results each time?
Ernie
Hi Ernie,
Let's assume I have a pair that cointegrates with a 95-99% probability over a 2-year, 3-year and 4-year lookback data period. But before I enter the trade I test the cointegration on a 1-year lookback data period, and the probability coming out of my ADF test is high (e.g. 30%). Should I assume the pair no longer cointegrates and not trade it, OR ignore the 1-year lookback test and assume they cointegrate?
Thanks
Ca
I am a student and working on my pairs trading project.
1. I found that a constant beta gives a very slow mean reversion process. The spread reverts to its mean sometimes in 6-9 months. I think that is not desirable. In this case, how do I make changes such that it reverts quickly?
2. If I change beta every day, portfolio rebalancing comes into the picture. Every day the spread changes with the new beta. If I get an exit signal with a new beta which is, say, less (greater) than the beta at the time of entering the position, I would be left with too many (too few) short securities to square off my positions. How do I overcome this problem?
Thanks
Ca,
If you find that the pair is not cointegrating in the last 1 year, try to find out the fundamental economic reason/condition why they might diverge. But in any case, I would not trade it until I believe that this condition is over.
Ernie
Jeet,
Indeed I recommend updating your beta daily with a short lookback period. However, it doesn't necessarily mean you have to adjust your existing positions given the changed beta. You can just use it to generate an exit signal so that you are either in the current position, or exit both sides completely.
Ernie
Hi Mr. Chan,
Thanks for replying to my previous post. I regressed S1 over S2 with 1000 points and found this pair is cointegrated using the ADF test.
Now that I know the pair is cointegrated, I tried to trade with this coefficient and found that sometimes the trade length is very large, like 4-5 months. That is not desirable.
So I decided to find the coefficient dynamically. For the cointegrated pair, I regressed the first 20 values, S1[1:20] vs S2[1:20], and found one coefficient; the next day S1[2:21] ~ S2[2:21] gave another coefficient, which I used to calculate the spread. This reduces trade length dramatically, but the problem is that the spread is calculated with a different coefficient every day while actual trading does not involve rebalancing. The strategy makes losses.
Jeet,
Did your strategy lose in backtest, or in live trading, based on the method you described?
Ernie
Hi Ernie,
In some time periods it made losses, and in some time periods it gave a profit of around 3-4% per annum.
Need your suggestions.
Jeet,
I suggest you decrease the lookback period in order to reduce the holding period. This usually increases the Sharpe ratio as well.
Ernie
You mean to say I should use, let us say, a 10-day lookback period to calculate the coefficients dynamically: S1[1:10]~S2[1:10], next
S1[2:11]~S2[2:11], then
S1[3:12]~S2[3:12], and so on.
Jeet,
That's correct.
Ernie
Ernie, I revisited this post and read the latest conversation between you and @Jeet. I wonder, is it meaningful to update the hedge coefficient daily, and using such a short period (10 days in Jeet's case)?
I think the hedge coefficient needs to be updated when fundamentals change but the pairs are still cointegrated.
gcn,
For pairs that are truly cointegrated, where you don't mind holding for a long enough period for the spread to mean-revert, you can certainly use a static hedge ratio, updated only when fundamental changes occur. But for those traders who desire short holding periods, and particularly when the pair does not really cointegrate but nevertheless mean-reverts on a short time scale, a short lookback enables us to take advantage of this profit opportunity and exit quickly.
Ernie
But what if the half-life is short, for example 10 days or shorter? It is less meaningful to run a regression on such a short series.
gcn,
You can certainly regress on 10 data points. We are used to large uncertainties in finance anyway; just because one has small error bars doesn't mean the prediction is better.
Ernie
Hi Ernie,
In testing for cointegration in the pair, how do you decide (a) whether to use an Augmented Dickey-Fuller test, or whether a simple Dickey-Fuller test is enough; and (b) in an ADF test, what lag to use (especially if different lag lengths result in different conclusions)?
Thanks
Hi Danie,
I always use ADF test, because of certain defects in the DF test. And I always use lag=1 in an ADF test, because it is usually the simplest model that can reject the null hypothesis.
Ernie
Thanks for the response Ernie.
Conceptually, if the ADF test with 1-lag suggests the series is stationary, but with higher lag orders the null hypothesis cannot be rejected, how confident would you be in trading the pair?
Many thanks.
Danie,
I actually have never seen such a situation, and am not even sure that it is theoretically possible, since higher lags include the possibility of lag=1. Have you found such an empirical counter-example?
Ernie
Thanks Ernie,
It's possible I'm making a mistake in the model specification (I'm working in Excel to keep it simple), but when testing two South African stocks over a 120-day period, the test statistic increases with the lag period (i.e., lag=1 gives -3.98, l=2 gives -3.10, l=3 gives -2.95). The series being tested is the difference in log prices (i.e. Y = ln(price A) - 0.87*ln(price B)).
Any thoughts are welcome!
Thanks, Danie
Danie,
As long as you have found one lag where the null hypothesis can be rejected, the series is stationary.
Ernie
Thanks Ernie, I appreciate the help
Ernie,
I did the calculation on OIH-RKH-RTH from your book's example 6.3, but with the past one year of data. The F I got is
[-8.3926516, -0.06206326, 38.20104829]. Does it mean the triplet no longer cointegrates, and that RKH looks insignificant in the portfolio?
By the way, I am using the pandas Python module, which is very handy for replicating the examples in your book. Maybe you would like to recommend it to other readers?
Regards
archlight,
Example 6.3 is only about optimal allocation of capital based on Kelly formula. It has nothing to do with cointegration of the 3 ETF's. Your numbers merely mean we should short OIH and long RTH.
Thanks for mentioning Python. Yes, I have heard good things about it.
Ernie
Hi Ernie,
I have enjoyed reading through the above comments. Very interesting and informative. I have a question regarding triplets. Once you get an entry signal from your model, how do you get into the trade? One option is to cross the spread on all 3 legs (which is very expensive)... another is to use an autospreader to dynamically adjust limit orders to try to leg into the trade as much as possible before crossing the spread on the leftovers. The danger of the latter approach is that you don't get into the trade as the spread reverts... so there is an opportunity cost to being passive.
What is your opinion on this?
Regards, Rob.
Hi Anon,
We can place limit orders for the least liquid component. Once it is executed, then use market orders for the remaining two.
Ernie
Hi Ernie,
Yes this seems to be a standard approach.
I carry out the following regression on daily prices over a 2 year period:
asset1_t = drift + beta*asset2_t
The residuals, denoted a_t, are given by:
a_t = asset1_t - drift - beta*asset2_t
I find that the residuals are stationary. Thus, using a ratio of 1*asset1 and beta*asset2 I trade the spread.
From reading the comments above, it seems that a common approach to trading the spread is not to use this ratio but instead to estimate a linear regression over a shorter timeframe on a rolling basis... so the longer window is only used to identify a pair that is cointegrated... Can you please clarify the logic in this? To me it seems that the cointegrating relationship holds for the original ratio (1, beta)... but not necessarily for the rolling estimate?
Regards,
Rob.
Hi Rob,
Using a shorter time period to find a rolling estimate of the hedge ratio enables us to exit unprofitable positions naturally. Also, hedge ratio may drift over time in the out-of-sample period. You can backtest this scheme to see if this works better than a static hedge ratio out-of-sample. (Obviously, a static hedge ratio will work best in-sample.)
Ernie
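A Python sketch of the rolling estimate under discussion: the long window certifies cointegration once, while a short trailing OLS window (60 days here, purely illustrative) supplies the traded hedge ratio:

```python
import numpy as np

def rolling_hedge_ratio(y, x, lookback=60):
    """Trailing-window OLS slope of y on x (with intercept),
    recomputed each bar; NaN until `lookback` bars are available."""
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    beta = np.full(len(y), np.nan)
    for t in range(lookback, len(y)):
        xs = x[t - lookback:t]
        ys = y[t - lookback:t]
        xc = xs - xs.mean()  # demeaning absorbs the intercept
        beta[t] = np.dot(xc, ys) / np.dot(xc, xc)
    return beta
```

Each day's spread is then y minus that day's beta times x, with only data up to that day used, so the scheme is backtestable without look-ahead.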
Hi Ernie,
Ok, I can see logic in that. I will run some backtests to investigate it further.
From your experience of trading 2-leg and 3-leg mean-reverting spreads, what is the highest frequency at which you can trade the spread before the transaction costs become too large relative to the expected profit per trade? For example, trading on a 1-second basis is too fast.
Regards,
Rob.
Hi Rob,
You can trade a pair at any frequency depending on the market and your technology. A holding period of seconds is possible.
Ernie
This is probably the best place to ask about cointegration. I need clarification on the following.
1. I have 2 series closing data coming in live every minute.
2. I check coint by stacking every new minute of data, and cointegration is detected/starts at some time = T1: (T1-20).
3. Then I find the spread at that particular time, and enter a trade if the spread suggests so.
4. Then new data comes in; I again stack the new data onto the previous data and keep checking for cointegration, and also calculate the spread for data (T1-20):NOW.
5. This time NOW changes every minute, and every minute I calculate a new spread from the original start point where the cointegration started (i.e. T1-20) till NOW.
6. Is it the correct approach to start from the beginning, or should I only use NOW-20:NOW for calculating the regression coefficients and then the spread?
Thanks
Hi HASNAT,
It is not statistically significant to determine cointegration based on 20 data points. You need at least 100 data points. Also, cointegration is a long-term property of a time series; it is not very meaningful to determine cointegration using intraday data.
But in general, you can use NOW-lookback:NOW to test for coint.
Ernie
Mr Chan, as you have pointed out, when running an ADF test to determine cointegration it is a good idea to test both ways (switch the independent and dependent variables with each other). I have noticed that if the Y variable is smaller than the X variable, it is more likely to be cointegrated per the ADF test, but not as likely when switched. This happens a lot. Does it make sense to calculate a hedge ratio with data from before the start of the test date and multiply X by that hedge ratio before performing the ADF test? X and Y would then seem to be normalized. Is this valid? Thanks
Hi Anon,
The best way to avoid the order-dependence that you pointed out is to use the Johansen test for cointegration. The eigenvector thus obtained gives the best hedge ratios you can find.
For details, please see my new book.
Ernie
Continuing from my post right above: my fear of using CADF is that when testing both y=bx and x=by, and one way shows strong but the other weak cointegration, I will be throwing out some good pairs that fail only because of the price difference of the 2 stocks. You suggest I use the Johansen test to avoid this, but I have read that you yourself use Engle-Granger. Do you not worry about the effect of order-dependence on your pairs? Thanks!!
Anon,
In cases where cointegration is strong, which is where I usually operate, using cadf is quite OK.
However, if you have any concern about the strength of cointegration, I recommend you switch to Johansen.
Ernie
Mr. Chan, What if the cointegration tests (johansen and adf) show strong cointegration, but a chart of the spread has a pretty clear up or down trend (not at all like the GLD, GDX pictures in your book)? Could the test be giving some kind of false positive?
Thanks again sir
Hi anon,
If you have set the input parameter p=2, the Johansen test does allow for a non-zero slope of the spread as a function of time. The mean reversion is then with respect to the trend line.
Ernie
hello,
I am still trying to understand the Johansen method fully. Can you tell me if the correct number of lags to use is 1 (k=1)? I am just testing 2 variables at a time, if that makes a difference.
thanks,
confused
Anon,
Yes, I often find k=1 is the minimum. If this can't reject the null hypothesis, try larger numbers, but I think they usually don't help much.
Ernie
Hi Ernie -- are there any papers or guidance on doing cointegration with tick data, instead of time-sliced bar data? I am thinking about creating 100-1000 tick bars for each asset, but it seems that in order to assess cointegration, I need a time-consistent frame across all assets, so I need to close all the "bars" at the same instant. Is there a more fluid but still consistent way of assessing the discrepancy from the cointegrated mean?
I hope you understand my question, I believe it's quite basic but I don't want to re-invent the wheel.
Hi experquisite,
I think the only way to use tick data for coint test is to create volume bars, as you suggested.
Ernie
Ernie Chan, I have three basic questions. 1. In coint-based trading, every new data point changes the regression coefficients. How can we stabilize the regression coefficients? 2. It seems that coint-based strategies only work on daily or weekly data and should not be used on minute, hourly, or other intraday data, but then the trades would be very slow (not rapid). Is there any rapid version of a coint-based trading strategy? 3. Is there a simple, basic paper or example which gives a practical coint-based trading strategy? Mostly the papers calculate the regression coefficient in advance with all the data (including the data to be tested), which I think is not a correct approach for practical trading.
Hasnat,
1) You should run regression everyday, and update your coefficients and possibly positions everyday based on the latest coefficients.
2) If you want to analyze coint day trading strategies (i.e. always liquidate all positions at market close), you can concatenate all the intraday prices of different days together, but adjusting them to eliminate the overnight gaps (similar to backadjustment of futures prices to avoid rollover gaps.)
3) Indeed, when computing regression coefficient in a backtest, one should only use data up to the moment you need to enter a trade. I don't know of any such papers, as these strategies are straightforward to construct and backtest yourself.
Ernie
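Point 2, the overnight-gap adjustment, can be sketched as follows: shift each day's intraday series so it begins where the previous day ended, analogous to back-adjusting futures across rollovers:

```python
import numpy as np

def stitch_days(day_closes):
    """Concatenate intraday price series from several days, shifting each
    day so its first price equals the previous day's last price, thereby
    eliminating the overnight gaps."""
    out = list(np.asarray(day_closes[0], dtype=float))
    for day in day_closes[1:]:
        day = np.asarray(day, dtype=float)
        out.extend(day + (out[-1] - day[0]))  # shift away the overnight gap
    return np.array(out)

# e.g. stitch_days([[100, 101], [103, 104]]) -> [100., 101., 101., 102.]
```

The stitched series can then be fed to the usual cointegration tests as if it were one continuous intraday history.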
Hi Mr. Chan,
I tried to replicate your Johansen test and get the GLD, GDX, USO combination with parameters identical to yours, but I didn't succeed. I used daily close prices for the period 20060523-20100521.
So I am wondering what prices were you using? Did you do any preprocessing on your database?
Li,
The only adjustments are for splits and dividends, which correspond to the Adj Close column in Yahoo! Finance.
Ernie
Dr Ernie,
When I compute the APR and Sharpe ratio for USDCAD using the stationarity-test file, I get APR=63% and Sharpe ratio=0.1. How can the APR be so high while the equity line is so poor?
fprintf(1, 'APR=%f Sharpe=%f\n', sum(pnl).^(252/length(pnl)), sqrt(252)*mean(pnl)/std(pnl));
Thank you
Leo
Hi Leo,
APR can be very high just by luck. Sharpe ratio is a much better measure of consistency.
Ernie
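For comparison, one common convention (not necessarily the book's exact code) compounds the daily returns for the APR and scales the daily Sharpe by sqrt(252); note the prod(1+r) rather than a simple sum of returns:

```python
import numpy as np

def apr_sharpe(daily_ret):
    """Annualized compounded return and Sharpe ratio (zero risk-free
    rate assumed) from a series of daily strategy returns."""
    r = np.asarray(daily_ret, dtype=float)
    apr = np.prod(1 + r) ** (252 / len(r)) - 1
    sharpe = np.sqrt(252) * r.mean() / r.std()
    return apr, sharpe
```

A large gap between the two numbers, as Leo observed, is usually a sign that the return stream is inconsistent, which is exactly what the Sharpe ratio penalizes.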
Dr Ernest,
May I know the time zone of your USDCAD, CADAUD 1 minute data? ie NY time or Singapore time?
Thank you
Leo
Leo,
The data uses ET.
Ernie
hi Ernie,
How often do you run a cointegration test (to determine whether a cointegration relationship still holds) on a pair that you've been trading?
Best wishes,
D.
Hi D,
We would run it daily, but there is no harm in running it monthly.
Ernie
Dear Dr Earnie,
You used CADUSD vs AUDUSD in Example 5.1. Does hedging help to improve the Sharpe ratio? Since both series are normalized to USD, we could simply combine them into a single price series.
Thank you
Leo
Hi Leo,
Trading USDCAD vs AUDUSD is slightly different from trading AUDCAD.
You are free to backtest which version generates better Sharpe.
Ernie
Dear Dr Earnie,
May I know the reason why is it different? Looks the same to me.
Thank you for your explanation
Leo
Hi Leo,
They are certainly different, because trading USDCAD vs AUDUSD results in P&L denominated in both CAD and USD, while trading AUDCAD results in P&L denominated only in CAD.
Furthermore, trading USDCAD vs AUDUSD allows you to choose an optimal hedge ratio, whereas for AUDCAD the hedge ratio is fixed by the broker.
Ernie
Dear Dr Earnie,
When I use regression on a pair of price series to get the hedge ratio between them, should I ignore the y-intercept? E.g. asset A has prices around 105 and asset B has prices around 100, but the hedge ratio is 2, i.e. A = 2*B - 100.
Hi Leo,
If you think that the two assets should go to zero together due to fundamental reasons, then you can set the intercept to zero. (E.g. GLD and GDX should go to zero at the same time.)
Ernie
Hi Ernie,
If you run it daily, and if you're using the Johansen eigenvector values as your weights, would you adjust those weights daily as well?
Thank you!
Kindest regards,
D.
Hi D,
Sure, at least for new positions. You might not want to change the weights of existing positions to avoid transaction costs.
Ernie
Dear Dr Earnie,
You mentioned "You might not want to change the weights of existing positions to avoid transaction costs". We also discussed before that sometimes we choose the second eigenvector because the first one is very large. But sometimes both eigenvectors are close, and only in hindsight does one slowly drift away to very large values. How do you backtest with this kind of changing hedge ratio?
Second question: what is the length of the period you use to calculate the hedge ratio, and the length of the period this hedge ratio is applied to?
Thank you for your teachings
Leo
Hi Leo,
There is no problem with changing eigenvectors if you keep the eigenvector fixed during the lifetime of a position.
Generally speaking, one should set the period over which the hedge ratio is calculated to be at least as long as the half-life of mean reversion, and probably a few times that. But this is a free parameter that you can optimize in-sample.
Ernie
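The half-life mentioned above can be estimated by regressing the daily change of the spread on its lagged level (an Ornstein-Uhlenbeck / AR(1) fit). A minimal Python sketch, using a synthetic spread for illustration rather than real market data:

```python
import math

# Estimate the half-life of mean reversion: regress the daily change of
# the spread on its lagged level, then convert the slope to a half-life.
def half_life(spread):
    y_lag = spread[:-1]
    dy = [b - a for a, b in zip(spread[:-1], spread[1:])]
    mx = sum(y_lag) / len(y_lag)
    my = sum(dy) / len(dy)
    beta = (sum((x - mx) * (d - my) for x, d in zip(y_lag, dy))
            / sum((x - mx) ** 2 for x in y_lag))
    return -math.log(2) / beta   # beta is negative for a mean-reverting series

# A spread decaying geometrically toward 0 with AR(1) coefficient 0.5:
# the fitted beta is exactly -0.5, so the half-life is 2*ln(2), about 1.4
# days under this continuous-time approximation.
spread = [100.0 * 0.5 ** t for t in range(20)]
hl = half_life(spread)
```

A lookback of a few multiples of `hl` then gives the regression period Ernie describes.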
Dear Dr Earnie,
So the proper backtest method using Bollinger bands would be like the following?
1) Choose the first eigenvector.
2) A Bollinger band triggers a trade.
3) The eigenvector does not change until the Bollinger band triggers a position close.
4) The system resumes updating with a new eigenvector daily until another trade is triggered.
My second question: since there are two eigenvectors to choose from, say I choose the first one, and it slowly becomes bigger and bigger while the second one does not deviate much. The system cannot know whether the chosen eigenvector is wrong or whether the drift is genuinely in the data; only in hindsight do I see the correct hedge ratio. When I backtest through time, my system always selects the wrong hedge ratio. =(
Thank you for your teachings always
Leo
Hi Leo,
Yes, your description is accurate.
Once you have entered a trade, since you have fixed the eigenvector, it doesn't matter that the updated Johansen test changes that eigenvector.
Ernie
Dear Dr Earnie,
May I have your advice on my second question? Do I really need to build a sophisticated decision model to choose the hedge ratio, given the problem below? I need the hedge ratio to form the spread for the backtest.
My second question: since there are two eigenvectors to choose from, say I choose the first one, and it slowly becomes bigger and bigger while the second one does not deviate much. The system cannot know whether the chosen eigenvector is wrong or whether the drift is genuinely in the data; only in hindsight do I see the correct hedge ratio. When I backtest through time, my system always selects the wrong hedge ratio. =(
Have a good long weekend
Thanks for your teachings always
Leo
Hi Leo,
As I said, in theory it is irrelevant whether your first eigenvector is changing. In practice, you fix its value once you have entered a position.
Ernie
Dear Dr Chan,
I mean during the period when we have not yet entered a position. During that time, the system still has to choose an eigenvector to compute the spread time series, and one eigenvector may be getting larger and larger while the other is not. How do you determine the eigenvector during this period automatically?
Thank you
Leo
Hi Leo,
For the purpose of computing the spread, we can use the most recent eigenvector(1), since this is the one we will actually trade into if the spread is big enough. Then fixing this to be our hedge ratio, we can compute the spread and Bollinger bands in the lookback period.
Ernie
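The procedure Ernie describes (fix the most recent eigenvector as the hedge ratio, form the spread, then compute Bollinger bands over the lookback period) can be sketched as follows in Python. The prices and the eigenvector are made-up numbers, not Johansen output:

```python
import math

# Fix the hedge ratio (eigenvector), form the spread over the lookback
# period, and compute the latest Bollinger z-score of the spread.
def spread_zscore(prices, evec, lookback):
    # prices: rows of [GLD, GDX, USO] closes; evec: fixed hedge ratios
    spread = [sum(p * w for p, w in zip(row, evec)) for row in prices]
    window = spread[-lookback:]
    mu = sum(window) / len(window)
    sd = math.sqrt(sum((s - mu) ** 2 for s in window) / (len(window) - 1))
    return (spread[-1] - mu) / sd   # trade when |z| exceeds the entry band

prices = [[10.0, 9.0, 5.0],
          [10.0, 8.0, 5.0],
          [10.0, 10.0, 5.0]]
z = spread_zscore(prices, [1.0, -1.0, 0.0], lookback=3)
```

If `|z|` exceeds the entry threshold (e.g. 2), the position is entered with this eigenvector, which then stays fixed until the exit, as discussed above.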
hello Ernie,
Great blog. This is a late comment on an old post, but I think it's important to note that there is another reason why gold miners and spot gold may diverge. Many think of gold miners as a proxy for a pure gold investment, and while the reasons you gave are definitely good, there is one that many tend to miss, which is less market-oriented and more geopolitical. Gold is a commodity mined in many hot spots around the world, so there is always a strong risk of expropriation by a government or local interest group. That is why, in my opinion, gold miners' stocks should be looked at carefully, especially in times of rising prices.
cheers
Agreed!
Ernie
Hi Ernie,
Great books. I'm trying to reproduce your 0.5350*GLD-0.7387*GDX+0.0293*USO using the data file from the book, but am not able to:
load('inputData_ETF', 'tday', 'syms', 'cl');
idxA=find(strcmp('GLD', syms));
idxC=find(strcmp('GDX', syms));
idxD=find(strcmp('USO', syms));
dataIdx=find(tday>=20060523 & tday<=20100521);
x=cl(dataIdx, idxA); % use only the in-sample date range
y=cl(dataIdx, idxC);
z=cl(dataIdx, idxD);
y2=[x, y, z];
results=johansen(y2, 0, 1);
-results.evec(:,1)
this yields:
0.0284559
-0.1803813
0.0096945
I tried all possible end dates of the file, and the closest I got was 20071030:
0.490061
-0.739426
0.043671
I realize that you said your result used Yahoo adjusted data, but I tried that too with some R code and had the same issue. Any ideas?
Please let me know which book and which example or program you are referring to.
Thanks,
Ernie
For (GDX, GLD, USO) from 20060523-20120409, using adjusted daily closes, I found evec(:, 1) to be
-0.1770
0.0331
0.0025
Hopefully that's what you got?
Ernie
Yes, that's exactly what I got (0.0331094, -0.1770360, 0.0025494). But if you run it ending at 20100521 you don't get 0.5350*GLD-0.7387*GDX+0.0293*USO; you get 0.0284559, -0.1803813, 0.0096945. So where are the 0.5350, -0.7387, 0.0293 you mentioned coming from?
Since we agree on this set of numbers, it means that our programs are the same.
You can ignore the eigenvector values I quoted previously; I am not going to find out exactly what went wrong, whether it was the data or the program.
Ernie
Thanks Ernie - I was going crazy trying to figure out what was going on
Hi, I have a problem with eigenvectors. My data contains the close prices of 100 stocks over 5 years. I used the johansen.m file for the cointegration test, and the test statistics were fine at 99%, but then I realized that the eigenvector values for some pairs have the same sign. I don't understand what the problem is or what it means.
Thank you Ernie your books are really great.
Hi Cansin,
Thanks for your kind words.
Unfortunately, most implementations of the Johansen test (such as the one at spatial-econometrics.com) cannot handle more than 12 variables. So 100 stocks won't work.
Ernie
Hi Ernie, I have a problem with the intercept term in log-log regression.
I need your help. T.T
1. Why can we interpret the intercept term as a premium?
I saw this assertion on p. 107 of Vidyamurthy's book Pairs Trading. It says:
- The intercept term is the equilibrium value.
- Consider a portfolio long one share of A and short a number of shares of B equal to the regression coefficient.
- According to the equilibrium relationship, such a portfolio yields an average cash flow equal to the intercept term, which is given back when the position is reversed.
- Thus the intercept term represents the premium paid for holding stock A over an equivalent position in stock B.
I can't understand why the above portfolio yields an average cash flow equal to the intercept term.
2. Significant change in slope depending on whether an intercept term is included.
X Y
0 1200 2400
1 1400 2800
2 1100 2000
3 1500 3300
4 1600 2800
5 1800 3000
============================================
Raw regression, intercept term: O
Intercept 710.0
X 1.4
dtype: float64
============================================
Raw regression, intercept term: X
X 1.882306
dtype: float64
============================================
Log log regression, intercept term: O
X 0.589441
dtype: float64
============================================
Log log regression, intercept term: X
Intercept -0.933403
X 0.109676
dtype: float64
The table above shows my own regression results.
There is a significant difference in slope between the zero-intercept and nonzero-intercept fits.
I think there are some problems or missing assumptions in my regression process.
How can I handle that?
Thank you for reading
Best regards.
Hi Sean,
1) Instead of "cash flow", I would characterize the premium as the difference in long-term growth rates of the 2 portfolios.
2) In general, one should include the y-intercept term; otherwise, the regression fit will be biased. However, if there is a fundamental reason to believe the y-intercept should be zero (e.g. GDX vs GLD), then it can be fixed at 0 to avoid noise and overfitting.
Ernie
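Sean's raw-regression numbers can be reproduced with a few lines of Python (pure standard library; this is only a sketch of ordinary least squares with and without an intercept, not the code Sean actually ran):

```python
# Ordinary least squares with an intercept, and regression through the
# origin, applied to the X/Y data from the table above.
def ols(x, y, intercept=True):
    n = len(x)
    if intercept:
        mx, my = sum(x) / n, sum(y) / n
        b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
        return my - b * mx, b                    # (intercept, slope)
    # regression through the origin
    b = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
    return 0.0, b

x = [1200, 1400, 1100, 1500, 1600, 1800]
y = [2400, 2800, 2000, 3300, 2800, 3000]
a1, b1 = ols(x, y)                     # intercept 710.0, slope 1.4
a0, b0 = ols(x, y, intercept=False)    # slope about 1.8823
```

Forcing the fit through the origin drags the slope from 1.4 up to about 1.88; that shift is the bias Ernie refers to, not an error in the data.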
Thank you for your kindness and clear explanation.
I had been puzzling over this problem all day, but I couldn't find a proper description of the meaning of the log-log regression's intercept and of setting it to 0.
Sincerely, thank you again for your kindness.
Hi Ernie, hope all is well. I am enjoying your latest book (Machine Trading) and have been working through the examples. I wondered if you had any hints as to why http://epchan.com/book3/Chap3%20Time%20Series/SSM_beta_EWA_EWC.m does not work as expected with MATLAB R2017b (9.3.0.713579); I just get the following error. Any pointers to the root cause would be most welcome!
`>> SSM_beta_EWA_EWC
Error using optimoptions (line 124)
No appropriate method, property, or field 'keys' for class 'meta.class'.
Error in statespace/estimate (line 646)
options =
optimoptions('fminunc','Algorithm','quasi-newton','Display','notify-detailed');
Error in ssm/estimate (line 254)
[EstMdl,estParams,EstParamCov,logL,Output] =
estimate@statespace(Mdl,Y,params0,varargin{:});
Error in SSM_beta_EWA_EWC (line 32)
model=estimate(model, y(trainset), param0);
>> `
Hi Douggie,
Thanks for your interest in my book.
I believe the error is because you did not have the Optimization toolbox installed. Please try installing a trial version to see if it works!
Ernie
Hi Ernie,
I am trying to implement pairs trading by searching for cointegrated pairs. I often find cointegrated pairs that share the same stock, for example (A,B) and (A,C). How does one construct a portfolio in this case? Say my trading signal suggests buying 1 share of A in the first pair and shorting 1 share of A in the second pair (and of course shorting 'hedge ratio' shares of B and buying 'hedge ratio' shares of C). This would mean my portfolio has 1-1=0 shares of A. Is this implementation reasonable? I was wondering whether you have any guidelines for constructing a portfolio in this case, or whether you can point to any book or article about it. I enjoyed your books but couldn't find an answer to this specific question. I might have given you a new topic to cover in your next book :)!
Thanks
Hi Al,
I would recommend you set a maximum net dollar exposure to a stock you add to a portfolio. If the net exposure happens to be zero due to different pairs offsetting each other, that is great.
Ernie
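One way to monitor the net exposure Ernie mentions: sum the signed dollar exposure per stock across all open pair trades and compare against a cap. A Python sketch with illustrative tickers, sizes, and cap (none of these numbers are a recommendation):

```python
from collections import defaultdict

# Aggregate per-stock net dollar exposure across overlapping pair trades.
def net_exposures(pair_positions):
    """pair_positions: one {ticker: signed dollar exposure} dict per open pair."""
    net = defaultdict(float)
    for pos in pair_positions:
        for ticker, dollars in pos.items():
            net[ticker] += dollars
    return dict(net)

positions = [{"A": +10000.0, "B": -9000.0},   # long A vs short B
             {"A": -10000.0, "C": +8500.0}]   # short A vs long C
exposure = net_exposures(positions)           # A nets out to 0 here

MAX_NET_DOLLARS = 15000.0
within_limit = all(abs(v) <= MAX_NET_DOLLARS for v in exposure.values())
```

As Ernie notes, a net exposure of zero from offsetting pairs is the happy case; the check matters when the same stock appears on the same side of several pairs.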
Hi Ernie,
I am reading your second book. It is very interesting. I have a few questions about the Johansen test.
1. I find that the more assets I add to the cointegrating portfolio, the more robust the mean-reversion statistic becomes. Does this mean I should add as many assets to the cointegrating portfolio as possible? Most Johansen test software packages can handle up to 12 variables. Will the portfolio be more robust to regime shifts with more equities in it?
2. The eigenvector from the Johansen test gives the hedge ratios for the different assets in the cointegrating portfolio. Should the hedge ratios be updated, as the Kalman filter does?
3. For a daily update of the Johansen eigenvector, how many data points are needed? Is there a guideline, or just a few multiples of the half-life?
Thank you!
Nicholas
Hi Nicholas,
1) Sure, diversification usually improves things!
2) Yes.
3) Yes, 10x the half-life is a good rule of thumb.
Ernie
Hey Ernie,
Really enjoyed reading here so far!
Referring to pairs trading using the CADF test over 1, 2, or 3 years: what do you think is the optimal number of pairs for a portfolio, and what is the maximum number of pairs that should be traded at any point, from both an efficiency and a diversification point of view?
Many academic papers refer to 20 pairs in general, but not to how many pairs should be traded at any one time...
Thanks,
Matan.
Hi Matan,
My answer may surprise you: you can make a good living if you can find just 1 pair. Of course, the more the merrier!
Ernie
Hey Ernie,
I agree!
The issue I have is that if one of the 1-year, 2-year, or 3-year cointegration relationships breaks, I tend to liquidate my position, as the pairs whose statistical connection has broken are generally the ones I see losses from if I don't liquidate.
This condition inevitably creates a steady turnover of pairs in my portfolio.
Since cointegration breaking is a common attribute of my losing trades, I would like to minimize it as much as I can. (I understand that cointegration breaking is just the end result of a change in some fundamental factors, so it is always possible for it to happen.)
Assuming normal market movement, I believe that reducing the number of pairs would prevent frequent cointegration breaking while in a trade (as happened when I reduced my number of pairs from 30 to 20).
On the other hand, most of the time a given pair remains below the required threshold for a trade (as I use 2 standard deviations, I take this as normal), and I would like to put my money to its best use, i.e. to be in a trade most of the time, which requires me to increase the number of pairs.
So I was looking for a formula, a number, or an assumption?
Looking forward to your answer,
Matan.
Hi Matan,
The optimal number of pairs to trade can be treated as a parameter to be optimized. It should probably be optimized in the traditional way, by finding the best Sharpe ratio in a training set.
Ernie
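Treating the pair count as a parameter, as Ernie suggests, might look like this in Python: rank your pairs, then pick the count with the best training-set Sharpe. The return series below are synthetic and only illustrate the mechanics:

```python
import math

# Pick the number of pairs that maximizes Sharpe on a training set.
def sharpe(returns, periods_per_year=252):
    n = len(returns)
    mu = sum(returns) / n
    sd = math.sqrt(sum((r - mu) ** 2 for r in returns) / (n - 1))
    return math.sqrt(periods_per_year) * mu / sd

def best_pair_count(per_pair_returns, max_pairs):
    # per_pair_returns[i] is the daily return series of pair i (pre-ranked)
    best_n, best_s = 1, float("-inf")
    for n in range(1, max_pairs + 1):
        # equal-weight the top n pairs' daily returns
        combined = [sum(day) / n for day in zip(*per_pair_returns[:n])]
        s = sharpe(combined)
        if s > best_s:
            best_n, best_s = n, s
    return best_n, best_s

pair1 = [0.02, -0.01, 0.02, -0.01]     # volatile pair
pair2 = [-0.009, 0.019, -0.01, 0.02]   # nearly offsetting pair
best_n, best_s = best_pair_count([pair1, pair2], 2)   # diversifying wins here
```

The usual caveat applies: a count tuned on the training set should be validated out-of-sample before it drives real capital allocation.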