A reader pointed out an interesting paper that suggests using option volatility smirk as a factor to rank stocks. Volatility smirk is the difference between the implied volatilities of the OTM put option and the ATM call option. (Of course, there are numerous OTM and ATM put and call options. You can refer to the original paper for a precise definition.) The idea is that informed traders (i.e. those traders who have a superior ability in predicting the next earnings numbers for the stock) will predominantly buy OTM puts when they think the future earnings reports will be bad, thus driving up the price of those puts and their corresponding implied volatilities relative to the more liquid ATM calls. If we use this volatility smirk as a factor to rank stocks, we can form a long portfolio consisting of stocks in the bottom quintile, and a short portfolio with stocks in the top quintile. If we update this long-short portfolio weekly with the latest volatility smirk numbers, it is reported that we will enjoy an annualized excess return of 9.2%.
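As a rough sketch of this ranking scheme (the ticker names and smirk values below are made up; see the paper for the precise choice of OTM and ATM options):

```python
# Hypothetical illustration of smirk-based quintile ranking.
# All names and numbers are illustrative, not from the paper.

def smirk(otm_put_iv, atm_call_iv):
    """Volatility smirk: OTM put IV minus ATM call IV."""
    return otm_put_iv - atm_call_iv

def long_short_quintiles(smirks):
    """Rank stocks by smirk; long the bottom quintile (least bad
    news priced into puts), short the top quintile."""
    ranked = sorted(smirks, key=smirks.get)   # ascending by smirk
    n = len(ranked) // 5
    longs = ranked[:n]      # bottom quintile: smallest smirk
    shorts = ranked[-n:]    # top quintile: largest smirk
    return longs, shorts

smirks = {f"STK{i}": s for i, s in enumerate(
    [0.01, 0.08, 0.03, 0.05, 0.02, 0.07, 0.00, 0.04, 0.06, 0.09])}
longs, shorts = long_short_quintiles(smirks)
```

In the paper's scheme, this ranking would be recomputed weekly and the two quintile portfolios rebalanced accordingly.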
As a standalone factor, this 9.2% return may not seem terribly exciting, especially since transaction costs have not been accounted for. However, the beauty of factor models is that you can combine an arbitrary number of factors, and though each factor may be weak, the combined model could be highly predictive. A search of the keyword "factor" on my blog will reveal that I have talked about many different factors applicable to different asset classes in the past. For stocks in particular, there is a short-term factor as simple as the previous 1-day return that worked wonders. Joel Greenblatt's famous "Little Book that Beats the Market" used 2 factors to rank stocks (return-on-capital and earnings yield) and generated an APR of 30.8%.
The question, however, is how we should combine all these different factors. Some factor model aficionados will no doubt propose a linear regression fit, with future return as the dependent variable and all these factors as independent variables. However, my experience with this method has been unrelentingly poor: I have witnessed millions of dollars lost by various banks and funds using this method. In fact, I think the only sensible way to combine them is to simply add them together with equal weights. That is, if you have 10 factors, simply form 10 long-short portfolios each based on one factor, and combine these portfolios with equal capital. As Daniel Kahneman said, "Formulas that assign equal weights to all the predictors are often superior, because they are not affected by accidents of sampling".
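A minimal sketch of this equal-weight combination (the factor names, scores, and capital below are all illustrative):

```python
# One long-short portfolio per factor, equal capital to each,
# positions summed. Factor scores here are made-up examples.

def factor_portfolio(scores, capital):
    """Dollar-neutral portfolio from one factor: long the top half
    of the ranking, short the bottom half, equal weight per stock."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    half = len(ranked) // 2
    w = capital / half
    positions = {s: w for s in ranked[:half]}
    positions.update({s: -w for s in ranked[half:]})
    return positions

def combine_equal_weight(factor_scores, total_capital):
    """Allocate equal capital to each factor's portfolio and sum."""
    per_factor = total_capital / len(factor_scores)
    combined = {}
    for scores in factor_scores.values():
        for stock, p in factor_portfolio(scores, per_factor).items():
            combined[stock] = combined.get(stock, 0.0) + p
    return combined

factor_scores = {
    "smirk":    {"A": 1, "B": 2, "C": 3, "D": 4},
    "momentum": {"A": 4, "B": 1, "C": 3, "D": 2},
}
combined = combine_equal_weight(factor_scores, 100.0)
```

Note how the combined book stays dollar-neutral, and a stock favored by both factors (C here) ends up with the largest position.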
Excellent Article. It's refreshing to hear that simpler models can and often do outperform more complex models.
When it comes to different 'factors', as you call them, I have had good experience (though it is technically a bit difficult) with online Q-learning. It is relatively easy to implement, and in the case of simple binary yes-no rules it degrades to simply adjusting the weights on those rules over time. It HAS some alpha in it. There might be an easier way to do this in that case (I think I read your article on using perceptron learning of rules, though I think it was the usual training/test set approach). It was definitely better than simple equal weighting in my case, although I do get the point you're trying to make. A drawback of the online learning approach is that one needs a lot of data in the time domain to judge its statistical significance.
Thanks for sharing this, as usual!
Thanks for sharing your experience with learning algos on factor models.
How did you determine that Q-learning performed better than equal weights in your case? Did you trade both side-by-side for a long period of time?
You've mentioned before that you can accept a latency of a few seconds in your trading strategies. Is that a technical limitation of matlab or does matlab have the capability to do so in 1 second or less?
The latency is not due to Matlab, but to Interactive Brokers.
By the way, I can no longer tolerate a few seconds of latency.
So, Matlab itself is capable of trading at a 1-second resolution or less?
Can the Parallel Computing Toolbox help with that?
How about at the millisecond or even sub-millisecond level?
Ah, so have you switched to the quickfix/j solution for connecting to IB's fix?
*I mean sub-second not sub-millisecond, of course.
Have you gauged that with matlab?
Maybe, 0.3-0.5 seconds per trade?
Yes, Matlab can trade at a latency of less than 1 sec. I don't know the lower limit of its latency because I have only used it with IB, and IB has a minimum latency of 250ms, which Matlab can at least match.
The parallel computing toolbox won't help you eliminate this latency, since many of the steps cannot be parallelized. Of course, if you are trading multiple stocks with the same model simultaneously, then it can indeed help.
I have not tried using it to send FIX orders, because that involves calling jar files from the Quickfix/J package, and it gets complicated syntactically.
Thank you for reviewing the articles and taking time out of your busy schedule to make a post on this! I am not sure whether you found any value in this idea, but I hope it may benefit you in the future!
Is it possible to trade the same underlying using two different trading strategies in IB, with one short while the other is long? The positions should not cancel each other, but they may share the same margin, so they can hedge each other.
Thank you very much.
Sure, as long as your own program keeps track of which strategy is long and which is short, IB is indifferent.
Would you please recommend websites or other sources on Forex analysis and trading?
Have you checked out the book by Richard Lyons on my Recommended Books list on the right sidebar?
Also, you can browse through papers and preprints at various business schools' websites.
I stumbled across one of your previous posts where you spoke about the differences between information theory and AI, and how you believe that information theory techniques are helpful for someone who works in the HFT space. In your book you also constantly referenced how simple models often work out to be the best models. I have been doing work with intraday data; however, I want to switch my focus towards trading strategies that are based on EOD data. Please excuse me if I am being naive, but I was wondering what mathematical tools you would recommend for working with EOD data while, of course, keeping models as simple as possible.
I look forward to hearing back from you!
The only mathematical tools I recommend are linear ones: linear regression or Kalman filter.
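As a minimal illustration of the linear route (the data below are made up; this is a one-variable ordinary least squares fit, the building block of multi-factor regressions):

```python
# Toy OLS fit of next-period return on a single factor value.
# Stdlib only; the x/y data are purely illustrative.

def ols_slope_intercept(x, y):
    """Ordinary least squares for y = alpha + beta * x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))
    alpha = my - beta * mx
    return alpha, beta

x = [0.0, 1.0, 2.0, 3.0]           # factor values
y = [0.1, 0.3, 0.5, 0.7]           # exactly y = 0.1 + 0.2 * x
alpha, beta = ols_slope_intercept(x, y)
```

A Kalman filter generalizes this by letting alpha and beta drift over time instead of being fit once over the whole sample.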
Thank you for recommendation.
Richard Lyons' book is very interesting.
However, where can we find "order flow" data for the Forex market? IB does not provide trade volume for FX. Or should we instead apply this to stock markets, where there is a central exchange?
Thank you very much.
Only the banks or exchanges have access to real-time FX order flow.
We can however compute order flow for the futures and stock markets.
I have a strategy that trades several foreign markets. This involves FX risk. What are your thoughts on hedging FX risk? How do you go about doing that?
I am not exactly sure what you mean by hedging FX risk.
Since you are trading FX, presumably you are deliberately taking FX risk in order to earn a return? This is in contrast to someone trading stocks on foreign exchanges where they want to take equity risk but want to hedge the unwanted FX risk.
I mean unwanted FX risk when you trade foreign stocks and futures. Some people argue that the FX risk itself provides diversification, while others favor a full hedge at all times. Unwanted FX risk can be non-negligible at times.
In that case, if you buy £100K of stocks on the LSE, for example, you should simultaneously sell £100K notional of GBPUSD. That would eliminate the FX risk.
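As a toy illustration of that hedge (numbers made up; the stock's GBP price is held fixed to isolate the currency move):

```python
# Hedging the currency exposure of a £100K LSE stock position by
# shorting the same GBP notional of GBPUSD. Illustrative rates only.

gbp_stock = 100_000        # GBP value of the stock position
fx_hedge_gbp = -100_000    # short GBPUSD, same GBP notional

rate0, rate1 = 1.30, 1.20  # GBPUSD drops; the pound weakens

# USD P&L from the FX move alone (stock price in GBP unchanged):
stock_pnl = gbp_stock * (rate1 - rate0)     # loss on the stock leg
hedge_pnl = fx_hedge_gbp * (rate1 - rate0)  # equal gain on the hedge
assert stock_pnl + hedge_pnl == 0           # FX risk eliminated
```

In practice the hedge notional must be rebalanced as the GBP value of the stock position changes.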
After reading some papers, I find that in the order flow model, order flow is allowed to affect price "contemporaneously". That is acceptable in academic research, but it is not a very reasonable assumption for practical trading.
I am not sure which paper you are reading. But I believe that Richard Lyons wrote that order flow leads price changes.
The lower latency limit of matlab is 60ms.
That is the time it takes to execute a strategy that utilizes quickfix for execution.
This information comes from the mathworks automated trading tutorial.
Given that python is about as fast as matlab, I'd expect the execution times to be similar.
I suppose Cython could be used to improve latency; however, using Cython to wrap C/C++ code, and using Cython effectively in general, requires knowledge of C/C++.
Thanks, anon, that's very useful info. Regarding Quickfix, I suppose if we write programs in java or c# and utilize quickfix/j or /n, we can reduce latency below 60ms?
That latency is a limitation of matlab.
From this thread one can gather that the open-source version of Marketcetera has serious limitations as well.
"I did a performance test to see how fast the ORS can send orders to my broker: I wrote a simple java strategy that sends 500 orders as fast it can (no market data requests, no logging) and using wireshark (network packet analyzing tool) I then measured the time it took for the ORS to send the NewOrderSingle messages to the network. The results I got were far from great – it took about 30 seconds to send the 500 orders."
"For comparison, I also wrote a very simple quickfix application that sends 500 orders, and that program took ~0.13 seconds."
So, for matlab it takes 60ms.
Marketcetera Open-source edition is 30 seconds for 500 orders. So, I suppose that is also 60ms per order?
The base quickfix/j product can apparently execute at about 260 microseconds per order, then?
So, quickfix, by itself, is definitely capable of HFT.
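The per-order arithmetic behind those figures, just dividing the quoted totals by the 500 orders:

```python
# Per-order latency implied by the benchmarks quoted above
marketcetera_per_order = 30.0 / 500   # 0.06 s    -> 60 ms per order
quickfix_per_order = 0.13 / 500       # 0.00026 s -> ~260 microseconds per order
```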
The other major open-source trading platforms (algotrader and tradelink) have only broken support for FIX; the exception is CyanSpring ATS, which fully supports FIX and looks promising:
"The latency will depend on your strategy and validation logic. A straight DMA order latency is sub millisecond."
Apparently, quickfix/j has since improved in its latency times:
According to this, by H1 2012, it was down to 180 microseconds.
They've made a few updates to the quickfix/j program, since then, so it's probably a little lower now.
Very interesting, thanks!
It is good to know that the difference between a compiled program like Java and an interpreted program like Matlab or Python is about 60ms.
my online Q-learning approach (I adjusted a SARSA Q-learning Matlab implementation I found somewhere) showed me that dynamically adjusting the weights of different rules definitely had an edge over simple equal weighting. (As I have said, Q-learning might not be necessary for this; it uses a distance method for samples it does not 'memorize', which is more appropriate for fuzzier stuff.) I played with it quite a bit. Although I did not do any serious, wide Monte Carlo-like test, I tried random weights from some reasonable intervals, as well as weights that I manually picked from the values my algorithm had set at several time points in the test set. And online learning did significantly better (even after some playing with its few parameters).
I suspect it was able to catch some useful auto-correlation (I mean trends) in the time evolution of the individual weights. But I also found that learning weights for factors that most likely did not have any useful information in them (or were made random by my manual input) deteriorated the overall performance. Too many rules (even 10+) had a deteriorating effect, or were useful only with some serious pre-learning and much more data. There are also other issues, like constraining the nominal sizes of individual weights, etc.
And yes, I did run tests over very long periods of time. This depends on the data resolution, of course: working on e.g. daily data would require going rather (too) far into the past.
The online learning of weights adds a level of complexity, but might be useful in some cases. Generally it might be enough to rebalance parameters/weights/whatever in your strategies from time to time, and I believe everyone who e.g. trades a portfolio of strategies already does something like that. I really like avoiding the usual train/test set approach, but if equal weighting works for you, then I would not look into this.
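For readers curious about the weight-adjustment idea itself, here is a generic sketch, NOT the commenter's SARSA code: a multiplicative-weights update that nudges each rule's weight by its recent performance (learning rate and returns are illustrative).

```python
import math

# Generic online rule-weight adjustment via multiplicative weights.
# This is a sketch of the general idea, not a Q-learning algorithm;
# eta and the per-rule returns are made-up numbers.

def update_weights(weights, rule_returns, eta=0.1):
    """Scale each rule's weight by exp(eta * its latest return),
    then renormalize so the weights sum to one."""
    raw = [w * math.exp(eta * r) for w, r in zip(weights, rule_returns)]
    total = sum(raw)
    return [w / total for w in raw]

w = [1 / 3, 1 / 3, 1 / 3]
for _ in range(50):          # rule 0 keeps winning, rule 2 keeps losing
    w = update_weights(w, [0.5, 0.0, -0.5])
# capital gradually concentrates on the consistently winning rule
```

This also shows the commenter's caveat in miniature: a rule whose returns are pure noise will still have its weight pushed around by sampling accidents, which is exactly the danger Kahneman's equal-weighting quote warns about.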
This doesn't really have to do with options in particular, but the mentioning of volatility smirk brought to mind a question....
When applying a distribution to a set of data for a particular instrument, would you use data for the entire life (history) of the instrument, or just that from say 2008 to present (post modern crisis)?
I would use the post-crisis data set, as I have seen a lot of evidence that there was a regime shift around the time of the crisis.
What broker have you moved onto now instead of Interactive Brokers? I see Marketcetera and FIX being mentioned, but I thought FIX was only a messaging protocol? Is it also capable of trade execution?
For stocks, we had used Lime Brokerage, and had very good experience.
For FX, as I reported elsewhere on this blog, we found it impossible to access ECNs due to our status as a commodity pool but not an eligible contract participant. (Under Dodd-Frank, all limited partners must be ECP in order for the pool to be ECP.)
Without real-time graphical updating from within matlab, you can cut latency down to 10ms for execution, rather than 60ms.
Thanks for the tip.
By real time graphical update, do you mean the display of output in the command window? Suppose we already do not output anything, how do we disable graphical update?
While browsing through the sea of information and views, I feel inclined to present a problem I am now besieged with, and I hope for a helpful comment here. I apologize that it is not directly related to the current contents of the blog.
I use Amibroker, with an AFL script on a 1-minute time frame, for an automated program running through Nest Plus. I use LIMIT price, not MARKET. The price in the chart most often differs from the price in the API, leading to disastrous consequences. In my code, there is a line "Price=(C,8.2, True)".
What could be the reason of the price distortion? Please help!
(I am based in India).