Sunday, January 31, 2010

A method for optimizing parameters

Most trading systems have a number of parameters embedded, such as the lookback period and the entry and exit thresholds. Readers of my blog (e.g., here and here) and my book will know my opinion on parameter optimization: I am no big fan of it. This is because I believe financial time series are too non-stationary to allow one to say that what was optimal in a backtest will necessarily be optimal in the future. Most traders I know would rather trade a strategy that is insensitive to small changes in parameters, or alternatively, a "parameterless" strategy that is effectively an average of models with different parameters.

That being said, if you can only trade one model with one specific set of parameters, it is rational to ask how to pick the best (optimal) set. Many trading models have a good number of parameters, and it is quite onerous to find the optimal values of all of them simultaneously. Recently, Ron Schoenberg published an article in Futures Magazine that details a way to accomplish this with just a tiny amount of computing power.

The key technique Ron uses is a cubic polynomial fit of the P&L surface as a function of the parameters. As an example, he applies it to the VIX RSI strategy in Larry Connors' book "Short Term Trading Strategies That Work". This strategy has 5 parameters to optimize, yet Ron needs to compute the P&L for only 62 different sets of parameters, and the whole procedure takes just 58 seconds.
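For intuition, the cubic-fit idea can be sketched in a few lines. The sketch below is a deliberately simplified one-parameter version (Schoenberg's fit is multivariate, and all the P&L numbers here are made up): sample the backtest P&L at a few parameter values, fit a cubic polynomial, and read off the interior maximum from the derivative.

```python
import numpy as np

# Hypothetical backtest P&L sampled at a handful of parameter values.
periods = np.array([2.0, 3.0, 4.0, 5.0, 6.0])
pnl = np.array([-2.0, 9.0, 20.0, 25.0, 18.0])

# Approximate the P&L surface with a cubic polynomial of the parameter.
coeffs = np.polyfit(periods, pnl, 3)

# Candidate optima are the real roots of the derivative that lie inside
# the sampled range (a full treatment would also check the boundaries).
roots = np.roots(np.polyder(coeffs))
candidates = [r.real for r in roots
              if abs(r.imag) < 1e-9 and periods.min() <= r.real <= periods.max()]
best = max(candidates, key=lambda p: np.polyval(coeffs, p))
print(round(best, 2))  # 5.0
```

The payoff of the polynomial approximation is that you only need enough P&L evaluations to pin down the cubic's coefficients, rather than a full grid over every parameter combination.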

Although Ron has confirmed that most of the parameters Connors picked are close to optimal, he did find a few surprises: namely, that an RSI period of 3 or 4 is significantly more profitable than the period of 2 that Connors used, at least in the backtest period.

Now, for a true test of this optimization, it would be helpful if Ron performed it while withholding some out-of-sample data, and then checked whether these parameters remain optimal in the withheld data set. Since he didn't do that, we will have to wait another year to find out for ourselves!


Jez Liberty said...

Talking about out-of-sample data, do you have a strong opinion on walk-forward testing? I know Robert Pardo is a big proponent of it as a tool to adapt to changing (i.e., non-stationary) markets.
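For readers unfamiliar with the mechanics, walk-forward testing means repeatedly optimizing on a rolling in-sample window and then evaluating on the window that follows. A minimal sketch of just the splitting logic (the window lengths here are arbitrary):

```python
# Yield (train_indices, test_indices) pairs for walk-forward testing:
# optimize parameters on the train window, evaluate on the test window,
# then roll both windows forward by one test period.
def walk_forward_splits(n_obs, train_len, test_len):
    start = 0
    while start + train_len + test_len <= n_obs:
        train = list(range(start, start + train_len))
        test = list(range(start + train_len, start + train_len + test_len))
        yield train, test
        start += test_len

splits = list(walk_forward_splits(n_obs=10, train_len=4, test_len=2))
print(splits[0])  # ([0, 1, 2, 3], [4, 5])
```

Each out-of-sample window is only ever traded with parameters chosen on data that precedes it, which is what makes the concatenated test results an honest simulation.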

On the other hand, you have Dunn, which trades the same system with the same parameters...

I have also recently come across Ralph Vince's Leverage Space Portfolio model, which sounds like a promising alternative to the optimization process (using a surface of performance and drawdown), but it only looks at the money management side of things (i.e., position size instead of parameter choice). The great thing is that it should allow multiple-system optimisation. Have you got any thoughts on this one?

Thanks for the useful links anyway.

Ron Schoenberg said...

The failure to generalize out-of-sample is usually a problem with models that tend to be over-fitted, like neural nets. The models I have been looking at, variations on models proposed by Connors and Alvarez in their books, are very simple, and I don't expect them to have any problems out-of-sample.

I did forward-test the model. It was optimized on data from 1993 to 2003, and then I ran it on data from 2003 to 2009. The results were similar. This was reported in the Futures Magazine article. Wouldn't this qualify as an out-of-sample test of the model?

Ernie Chan said...

My apologies: I did overlook the paragraph that mentioned forward testing. Your tables did show that the optimal parameters calculated in the backtest produced good results in the out-of-sample test.

However, did you optimize the whole set of parameters on the out-of-sample data, and see if what you picked in the in-sample period remains optimal in the forward test?

For example, we see that Factor5=3 is optimal in the backtest, but Factor5=4 is optimal out-of-sample instead. And who is to say that, given a different out-of-sample period in 2010, Factor5=2 won't be optimal? Hence, if I were trading this strategy, shouldn't I trade 3 models with Factor5=2, 3, and 4 simultaneously, just to average over all the possible future optimal parameters?
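The averaging idea is simple to express: run all three variants side by side and take the equal-weighted mean of their returns. A toy sketch (the daily returns below are made up for illustration):

```python
# Hypothetical daily returns of three variants of the same strategy,
# each run with a different value of the Factor5 parameter.
returns = {
    2: [0.010, -0.020, 0.015],
    3: [0.012, -0.010, 0.020],
    4: [0.008, -0.015, 0.010],
}

# Equal-weighted average return of the three models on each day.
avg = [sum(day) / len(day) for day in zip(*returns.values())]
print([round(r, 4) for r in avg])  # [0.01, -0.015, 0.015]
```

The averaged stream is less sensitive to which single parameter value happens to be optimal in any given future period, which is exactly the point of the question above.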


Ernie Chan said...

Thanks for mentioning the Leverage Space Portfolio. I haven't come across it before and will study it before commenting.

Al Corwin said...


You make a good point that the optimal point in one time period is not the optimal point in another. That's clearly one of the weaknesses of any experiment with historical financial data. However, if the goal is to get the best numbers that the past has to offer in the shortest amount of time, you can't beat a statistically designed experiment. If we are talking about a quantified trading approach, then we are talking about getting and using the best numbers, that is, the numbers shown to be most useful in the past.

But as I am sure you are aware, a designed experiment is about more than the optimal results. Every experiment contains a hundred questions and answers, and every experiment gives rise to questions that can only be answered with further experimentation. You could lose yourself for a year exploring all of the aspects of a single 200-trial experiment.

Anonymous said...

Hi Ernie,

When analysing spreads (calculating the cointegration coefficient, applying the test for mean reversion, and estimating the spread half-life), do you use log prices or dollar prices?

Sorry for posting a comment about a different topic.

Thank you!

Ernie Chan said...

Hi Anon,
I use dollar prices, but I don't think it matters much.
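For what it's worth, the half-life calculation mentioned in the question is commonly done by regressing the daily change of the spread on its lagged level (an AR(1)/Ornstein-Uhlenbeck fit) and setting half-life = -ln(2)/lambda. A sketch on a simulated spread (the series and its mean-reversion speed are made up; a real spread would come from actual prices):

```python
import numpy as np

# Simulate a mean-reverting spread with known reversion speed per day.
rng = np.random.default_rng(0)
true_lambda = -0.05
spread = np.zeros(2000)
for t in range(1, 2000):
    spread[t] = spread[t - 1] + true_lambda * spread[t - 1] + rng.normal(0, 0.1)

# OLS slope of the daily change on the lagged level (no intercept,
# for simplicity): delta_t = lambda * spread_{t-1} + noise.
lagged = spread[:-1]
delta = np.diff(spread)
lam = np.dot(lagged, delta) / np.dot(lagged, lagged)

half_life = -np.log(2) / lam
print(round(half_life, 1))  # true value is -ln(2)/0.05, about 13.9 days
```

Since the regression is linear in the spread levels, running it on log prices versus dollar prices only rescales the series, which is consistent with the answer above that the choice doesn't matter much.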

Chris Sutherland said...

Just wondering what backtesting system you use? And do you use Matlab for fitting the cubic polynomial?

And great book!

Ernie Chan said...

I use Matlab for my backtesting.
As for the cubic fit, you can ask the author of the original article I quoted. (You can of course do that in Matlab too, but I did not perform this research myself.)

Vina said...

Dear Ernie,
Thanks for the book; it is really good. I am trying to implement some of the programs and do some backtesting. I don't have Matlab, as I found it to be very expensive for my startup trading. Is there another one you would recommend? I am trying to program in R, but it is very slow right now. Do you have all your examples in Excel, by any chance?

Once again, thank you for the book and I have enjoyed reading it and trying to implement some of it.


Ernie Chan said...

Thank you for your compliments.
You may be able to find some Excel substitutes (e.g., a reader mentioned on this blog that there is an Excel ADF test for stationarity somewhere). However, many of the strategies are difficult to implement in Excel. That's why many quants use Matlab for backtesting instead. Certainly R is a good alternative too. (Guest blogger Paul Teetor, whom you can search for on this blog, has posted some sample R programs.)
Best of luck,

Unknown said...

Hi Ernie.
In Algorithmic Trading, when you describe the Bollinger band strategy, the widths of the bands (the entry and exit Z-scores) are to be optimized on a training set.
How did you go about optimizing parameters before the article you mention?

Ernie Chan said...

Hi Danny,
You can maximize the Sharpe ratio on the training set by varying these parameters. With so few parameters, an exhaustive grid search should work. Otherwise, you can use Matlab's Global Optimization Toolbox to find a better optimization algorithm.
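As a concrete illustration, an exhaustive grid search of this kind is only a few lines. The objective below is a made-up placeholder with a known peak; a real one would compute the Sharpe ratio of the Bollinger band strategy on the training set for each pair of Z-scores:

```python
import numpy as np
from itertools import product

def sharpe_of(entry_z, exit_z):
    # Placeholder for a backtest objective: a smooth surface whose
    # maximum sits at entry_z = 2.0, exit_z = 0.5.
    return -((entry_z - 2.0) ** 2 + (exit_z - 0.5) ** 2)

# Candidate entry/exit Z-scores to try exhaustively.
entry_grid = np.arange(0.5, 3.01, 0.25)
exit_grid = np.arange(0.0, 1.51, 0.25)

# Evaluate every combination and keep the best.
best = max(product(entry_grid, exit_grid), key=lambda p: sharpe_of(*p))
print(float(best[0]), float(best[1]))  # 2.0 0.5
```

With two parameters and a coarse grid this is only about 77 backtests, which is why an exhaustive search is perfectly adequate before reaching for a fancier optimizer.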