Tuesday, June 12, 2007

A factor model that I can believe in

Some of you may remember that I preached about the uselessness of factor models in predicting short-term returns, and the unreliability of many exotic factors even for the long term. In particular, factor models are especially inaccurate in valuing growth stocks (i.e. stocks with low book-to-market ratios), as evidenced by such models' poor performance during the internet bubble. This is not surprising, because most commonly used factors rely on historical sales or earnings measures to judge companies, while many growth stocks have a very short history and little or no earnings to report. However, as pointed out recently by Barry Rehfeld in the New York Times, Professor Mohanram of Columbia University has devised a simple factor model that relies on 8 very convincing factors to score growth stocks. These factors are:

  1. Normalized return on assets.
  2. Normalized return on assets based on cash flow.
  3. Cash flow minus net income. (i.e. negative of accrual.)
  4. Normalized earnings variability.
  5. Normalized sales growth variability.
  6. Normalized R&D expenses.
  7. Normalized capital spending.
  8. Normalized advertising expenses.
By "normalized", I mean we need to standardize the numbers with respect to the industry median. To Prof. Mohanram's credit, he claims only that these factors will generate returns after 1 or 2 years, not the short-term returns that many traders expect factor models to deliver. The excess annual return based on buying the group of stocks with the highest score and shorting the group with the lowest score is a good 21.4%. Not only does the combined score generate good returns, but each individual factor also delivers good correlation with future returns, proving that the performance is not due to some questionable alchemy of mixing the factors. For example, it makes good intuitive sense that extra spending on R&D and advertising will boost future earnings for growth stocks.

Interestingly, Prof. Mohanram pointed out that most of the out-performance of the high-score stocks occurs around earnings announcements. Hence investors who don't like holding a long-short portfolio for a full year can simply trade around the earnings announcements.

One caveat of this research is that it was based on 1979-99 data (at least in the preprint version that I read). As many traders have found out, strategies that worked spectacularly in the 90's don't necessarily work in the last few years. At the very least, the returns are usually greatly diminished. In the future, I hope to perform my own research to see whether this strategy still holds up with the latest data.

14 comments:

  1. Once again great post!

    I was wondering what your approach to investing is. Is your goal to develop a fully automated trading system, or to develop a "hybrid" system in which both fundamental and quantitative analysis complement each other? I was intrigued by the fact that the D.E. Shaw Group seems to use a hybrid approach as opposed to Rentec.

    Do most quants working for hedge funds work in risk management or in statistical arbitrage? With the emergence of so many quant funds, would it not be more logical to use quantitative analysis more for risk management than for arbitrage? Unless a fund has a large team of quants like at Rentec, would it not be futile to focus too much on arbitrage?

    Thanks

    ReplyDelete
  2. Dear Anonymous,

    Fundamental analysis is often part of a fully automated trading system: fundamental factors such as advertising expenses and so forth are simply numerical variables that are input to the program.

    When people say D.E. Shaw uses fundamental analysis in a hybrid system, they mean more than entering some fundamental factors into a model: they probably mean in-depth analysis of companies that may take into account the quality of the management and other such intangible factors which cannot be quantified easily.

    Quants who work for hedge funds work in both stat arb and risk management. However, only large hedge funds can afford quants who work on risk management alone -- most small and medium-sized hedge funds employ quants to develop strategies, not just to manage risk.

    To develop a successful arbitrage strategy requires only one good brain: it doesn't require a team.

    Ernie

    ReplyDelete
    This sounds to me like simple data mining of last quarter's publicly available information. Shouldn't this already be reflected in the stock price? It seems to me the market, being a discounting mechanism that reflects the aggregate wisdom of all participants, is the ultimate factor model. What's the rationale for beating it at its own game (discounting information)?

    Rentec does use fundamental inputs, at least in their equity program.

    This is an interesting blog.

    ReplyDelete
  4. Dear Anonymous,
    When you say that all publicly available information should already be reflected in the stock price, you are stating the "efficient market hypothesis". Most financial research on equities is based on the notion that this hypothesis is wrong at least some of the time. The reason the market cannot discount all information correctly is that there are many irrational players (small investors? traders with bad pricing models?), and furthermore, every pricing model is just an approximation to the ultimate "true" price, if there is such a thing. Therefore, there are always pricing models that are better, and thus more profitable, than others.
    Ernie

    ReplyDelete
  5. Hi,

    What are your thoughts on Niederhoffer? After "blowing up" a couple of years back, it seems that he is now achieving amazing returns of about 50% for the last couple of years. His website, www.dailyspeculations.com, states that "The firm employs proprietary programs that predict short-term moves based on the interactions between multivariate time series." Could you expand on that? Are time series only useful for detecting short-term fluctuations in high-frequency systems and not long-term trends?

    Also, I still find the idea of Universal Portfolios very interesting, and some studies I've looked through seemed to have attained interesting results. I know that the big difficulties are computational as well as related to transaction costs. However, what if one were to create a "Universal Portfolio" of 10 or so futures selected on the basis of fundamental analysis and let the portfolio be rebalanced with this concept? Also, I don't understand how Universal Portfolios satisfy the Kelly criterion. With Universal Portfolios, aren't you selling gains to put them into losers? Doesn't Kelly say to increase the leverage of winners and decrease that of losers?

    Thanks,

    Quantonymous

    ReplyDelete
  6. Dear Quantonymous,

    I don't know much about Niederhoffer, except that the risks he takes on are too much for my taste. (I didn't learn much from a finance seminar he gave at NYU a couple of years ago, except that he enjoyed theatrics.)

    I do agree that time series analysis is more useful for short-term trading, simply because on a short time scale, fundamental factors are not going to mean very much -- except when they change dramatically within that short time (such as at earnings announcements).

    Certainly, creating a portfolio of futures or stocks and rebalancing it using the Universal Portfolio algorithm is a good strategy.

    On a short time scale (e.g. daily), a universal portfolio is just a constant-rebalanced portfolio. So if you include cash as a security, it will maintain a constant proportion of cash vs. stocks, i.e. constant leverage. This is the same as the Kelly criterion. But how do you determine the exact value of the leverage factor? In the Universal Portfolio, the factor is determined by evaluating the long-run profits of each different allocation, whereas in the Kelly criterion it is determined by the return vs. risk of the strategy. In both cases, better returns or profits yield higher leverage. They are therefore consistent, and although I don't have a mathematical proof, I bet that the leverage factor will be exactly the same based on either method. (The Kelly criterion is silent on how you allocate capital among different stocks.)
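
    To make this concrete, here is a toy sketch (my own illustration, with made-up return parameters): for a single risky asset with i.i.d. returns, compare the Kelly leverage with the constant cash/stock weight that maximizes realized long-run log growth. Both should land near the mean return divided by the variance.

    ```python
    import numpy as np

    # Toy comparison with made-up i.i.d. daily returns (hypothetical parameters).
    rng = np.random.default_rng(1)
    mu, sigma = 0.0004, 0.01                   # assumed daily mean and volatility
    returns = rng.normal(mu, sigma, 250 * 40)  # ~40 years of simulated daily returns

    # Kelly leverage for a single risky asset: f* = mean / variance of returns.
    kelly_f = returns.mean() / returns.var()

    # Constant-rebalanced cash/stock portfolio: grid-search the stock weight f
    # that maximizes realized long-run log growth (the Universal Portfolio's
    # criterion applied to a single fixed allocation).
    weights = np.linspace(0.0, 10.0, 1001)
    log_growth = [np.mean(np.log1p(f * returns)) for f in weights]
    crp_f = weights[np.argmax(log_growth)]

    print(f"Kelly leverage: {kelly_f:.2f}, best constant-rebalanced weight: {crp_f:.2f}")
    ```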

    Thanks for your excellent questions!

    Ernie

    ReplyDelete
  7. Thanks for the quick reply.

    With respect to factor models, it would appear to me that commodities would make for more accurate models, given that prices are largely determined by supply and demand.

    I remember you mentioning that the hedge funds you worked with that had the highest Sharpe ratios used high-frequency trading systems. How complicated are such systems to develop, and are they mainly based on multivariate time series? Do they tend to be long only and low leverage?

    I find it intriguing that high frequency funds like Rentec, SAC and D.E. Shaw trade stocks as well as derivatives. Why would a high frequency system trade stocks given the higher transaction costs?

    I also find it interesting that the new Institutional Equity fund from Rentec is a low frequency trading fund with definite elements of fundamental analysis, since the P/E ratio of the long equities is significantly lower than that of the shorts. Does this imply that high frequency systems are too difficult to implement for large funds?

    Thanks,

    Quantonymous

    ReplyDelete
  8. Dear Quantonymous,

    I remember reading somewhere that a very successful hedge fund also thought that factor models would work well with commodities futures. But it turned out to be a disaster. The fund became successful later by switching to technical trend-following models.

    High frequency models are not hard to develop if you have the tick-by-tick historical data to backtest on. They can take long and short positions at different times, but they are not usually market-neutral. It is true that the trade size is limited for high frequency trading due to liquidity constraints. Therefore, if you, like Rentec, have >$10B to deploy, high frequency trading in stocks won't do you very much good.

    Even though stocks have higher transaction costs, they are suitable for high frequency trading because of their diversity (at least 3,000 symbols vs. maybe 30 symbols for futures) -- they can generate more signals and therefore greater statistical significance.

    Ernie

    ReplyDelete
    In the case of factor models for commodities, were these unsuccessful on a short-term basis as well as a long-term basis?

    For high frequency trading, how easy is it really to develop a successful system? How large a team, what kind of computational power and how much time would it take to develop a system on average?

    What percentage of hedge funds, in your opinion, use such systems? With increased use of these techniques, won't they eventually become useless, or is the market so large that, in very large markets, opportunities will remain? Isn't it unreasonable to believe that a small new hedge fund could possibly achieve great returns when the larger funds have much greater computational power?

    Do you use high frequency trading for your own account? I see that you focus a lot on convergence arbitrage on this blog. I find it interesting, though, that both Shaw and Simons quickly abandoned this technique early in their careers. Why is this?

    Thanks again,

    Quantonymous

    ReplyDelete
  10. Dear Quantonymous,

    I don't know exactly what the holding period for that factor model is, but it is likely to be measured in weeks, if not months.

    I don't believe you need "a team" to develop a successful HF trading system, nor do you need any unusually powerful computer beyond a good PC. You need good quality HF data, a good strategy, and a good programmer to build an efficient automated execution system (Commercial break: such as those programmers who work for my consulting company!) It won't take more than a few months to research and build a system -- whether it is going to be profitable or not, however, is another question.

    I don't think anyone knows what percentage of hedge funds use HF systems, but I would bet that a majority of funds with assets greater than $1B have such systems in place. With regard to competition leading to diminishing returns, that's true for any strategy. However, as I explained before, there is no advantage to being a big hedge fund in this situation.

    My own HF strategy is still in the R&D phase. I don't know whether Shaw and Simons "quickly abandoned" convergence strategies or not -- I am not privy to their funds' secrets. As for why someone would consider abandoning them in the past few years, I am going to post a new article on this topic.

    Ernie

    ReplyDelete
  11. Hi again,

    Here are two very interesting articles:

    Interview of Jim Simons with Seed Magazine and Institutional Investor:

    http://www.seedmagazine.com/news/2006/09/seed_interview_james_simons_2.php

    http://mikeonghai.blogspot.com/2005/12/rare-interview-of-james-simons-of.html

    Interview of D.E. Shaw with Wired:

    http://www.wired.com/wired/archive/5.01/ffshaw.html?topic=&topic_set=

    They say of Shaw on page 4:

    "During Shaw's education at Morgan Stanley, he decided pairs trading was not going to be the way he would make his fortune."

    In the case of Simons in the Institutional Investor interview:

    "We look for things that can be replicated thousands of times. A trouble with convergence trading is that you don't have a time scale. You say that eventually things will come together. Well, when is eventually?"

    Sorry for all the questions but this is a fascinating subject and unfortunately, there is no Quant department at my university.

    Sincerely,

    Quantonymous

    ReplyDelete
  12. Dear Quantonymous,

    Thank you for the links to the interviews.

    The question that Simons asks, "when is eventually?", is a good one. Certainly statistical arbitrage is not like convertible arbitrage, where there is a definite time in the future when the two sides must converge. This is also the reason that Edward Thorp doesn't like statistical arbitrage. However, there is a good, albeit statistical, answer to the question of the convergence time scale. I have written elsewhere on this blog that if you apply the Ornstein-Uhlenbeck model to a mean-reverting spread, you can estimate the convergence time.
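
    As a minimal sketch (my own illustration, assuming you already have a daily spread series), you can regress the daily changes of the spread on its lagged level; the fitted mean-reversion speed then gives a half-life estimate:

    ```python
    import numpy as np

    def ou_half_life(spread):
        """Estimate the mean-reversion half-life of a spread by fitting an
        Ornstein-Uhlenbeck model: regress d(spread) on the lagged spread level."""
        spread = np.asarray(spread, dtype=float)
        lagged = spread[:-1]
        delta = np.diff(spread)
        # OLS of delta on lagged level (with intercept); slope = -theta
        X = np.column_stack([np.ones_like(lagged), lagged])
        slope = np.linalg.lstsq(X, delta, rcond=None)[0][1]
        theta = -slope
        return np.log(2) / theta if theta > 0 else np.inf

    # Check on a simulated mean-reverting spread with known speed 0.05/day
    rng = np.random.default_rng(0)
    s = np.zeros(1000)
    for t in range(1, 1000):
        s[t] = s[t - 1] + 0.05 * (0.0 - s[t - 1]) + 0.1 * rng.standard_normal()
    print(ou_half_life(s))   # should be roughly log(2)/0.05 ~ 14 days
    ```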

    Ernie

    ReplyDelete
  13. Hi Ernie,
    While dated, I think this is a great thread. I would like to replicate such factor models, but then comes the key question: how do I get the required dataset of fundamental data for backtesting and optimization (ideally free or affordable for a non-professional enthusiast)?

    ReplyDelete
  14. Olivier,
    You can scrape such info from, e.g., finance.yahoo.com. An automated program can probably speed this up quite a bit.
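    For example, here is a minimal sketch using the third-party yfinance package (an assumption on my part -- any other source of fundamental data would do, and field availability can change over time):

    ```python
    import yfinance as yf

    # Pull annual fundamental statements for a few tickers. yfinance is a
    # third-party wrapper around Yahoo Finance; data coverage may vary.
    for symbol in ["MSFT", "JNJ", "KO"]:
        tkr = yf.Ticker(symbol)
        income = tkr.financials        # annual income statement
        cashflow = tkr.cashflow        # annual cash flow statement
        print(symbol, income.shape, cashflow.shape)
    ```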
    Ernie

    ReplyDelete