Thursday, November 18, 2010

Columbia Workshop on Financial Engineering

Our readers in New York may be interested in this finance workshop at Columbia University tomorrow.
I am particularly interested in the talks by Kent Daniel on "Characterizing Momentum" and by Doug Borden (of Knight Equity Markets) on "Stochastic Control Theory in High-Frequency Trading". Doug's talk can be downloaded here.


Sunday, October 10, 2010

Data mining and artificial intelligence update

Long-time readers of this blog know that I haven't found data mining or artificial intelligence techniques to be very useful for my own trading, for they typically overfit to non-recurring past patterns. (Not surprisingly, they are much more useful for driverless cars.) Nevertheless, one must keep an open mind and continue to keep tabs on new developments in this field.

To this end, here is a new paper written by an engineering student at UC Berkeley which uses a "support vector machine" together with 10 simple technical indicators to predict the SPX index, purportedly with 60% accuracy. If one includes an additional indicator which measures the number of news articles on a stock in the previous day, then the accuracy supposedly goes up to 70%.

I have not yet had the chance to reproduce and verify this result, but I invite you to try it out and share your findings here. If you do so, you may find this new data mining product called 11Ants Analytics useful. It is an Excel-based package that includes 11 machine learning algorithms, among them the aforementioned support vector machines. It also includes decision trees, which are sometimes quite useful for automatically generating a small set of trading rules from an input set of technical indicators. (Whether those rules remain profitable in the future is another question!) If you have tried this product, I would also appreciate your comments here.

(If you are a die-hard MATLAB fan, support vector machines are available in their Bioinformatics Toolbox, and classification and decision trees in their Statistics Toolbox.)
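
For concreteness, here is a minimal MATLAB sketch of the general approach (it is not a reproduction of the paper): it assumes you have a column vector spx of daily closing prices and the svmtrain/svmclassify functions from the Bioinformatics Toolbox on your path, and it uses just two toy indicators in place of the paper's ten.

% Minimal sketch: predict next-day SPX direction with a support vector machine.
% Assumes spx is a column vector of daily closes, and that svmtrain/svmclassify
% (Bioinformatics Toolbox) are available.
ret  = diff(log(spx));                        % daily log returns
ma10 = filter(ones(10,1)/10, 1, spx);         % 10-day moving average

% Two toy indicators, standing in for the paper's ten technical indicators:
X = [spx(11:end-1)./ma10(11:end-1) - 1, ...   % price relative to its 10-day MA
     ret(10:end-1)];                          % previous day's return
y = sign(ret(11:end));                        % next-day direction (+1/-1)
y(y == 0) = 1;

ntrain = floor(0.8*length(y));                % train on the first 80% of the sample
svm    = svmtrain(X(1:ntrain,:), y(1:ntrain));
yhat   = svmclassify(svm, X(ntrain+1:end,:));
accuracy = mean(yhat == y(ntrain+1:end))      % out-of-sample hit rate

The same scaffolding works for the decision-tree variant, with the classification tree functions in the Statistics Toolbox in place of the SVM calls.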

Saturday, October 02, 2010

The main virtue of buying options

I realize that I omitted the most obvious virtue of trading options instead of stocks in my last post: the much more attractive reward-to-risk ratio of options.

Suppose your stock strategy generated a buy signal. You can either buy the stock now, or buy an ATM call. If you buy the stock, you of course benefit from 100% of the upside potential of the stock price movement, but you are similarly exposed to 100% of the downside risk: indeed, you can lose the entire market value of the stock. If you buy the call, you will benefit from more than 50% of the upside potential of the stock price, assuming that your holding period is short enough that the time value will not dissipate much. As the stock price rises, so does your delta (it increases from 0.5 towards 1). But what about the downside risk? All you can lose is the option premium, usually much less than 50% of the market value of the stock.

In other words, while one may be tempted to hedge a large stock position with stock index futures, there is no need to hedge an equivalent call option position. This should simplify your strategy implementation and reduce risk management costs (i.e. the probable loss on your short futures position).
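
To put some illustrative numbers on this comparison (the inputs below are made up, not market quotes), here is a plain-MATLAB Black-Scholes calculation for a $100 stock with 30% volatility, a one-month ATM call, and zero rates:

% Illustrative reward-risk comparison: stock vs. an ATM call (made-up inputs).
S = 100; K = 100; sigma = 0.3; T = 1/12; r = 0;
N  = @(x) 0.5*(1 + erf(x/sqrt(2)));           % standard normal CDF
d1 = (log(S/K) + (r + sigma^2/2)*T) / (sigma*sqrt(T));
d2 = d1 - sigma*sqrt(T);

callPremium = S*N(d1) - K*exp(-r*T)*N(d2)     % about $3.5
callDelta   = N(d1)                           % about 0.52

stockMaxLoss  = S                             % the stock can lose its full $100 value
callMaxLoss   = callPremium                   % the call can lose only its premium
upsideCapture = callDelta                     % > 50% of the stock's upside, rising toward 1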

Given that I am a short-term trader anyway, I can't figure out why I have been trading stocks instead of options all these years! (Aside from the caveats detailed in the previous post.)

Saturday, September 25, 2010

Implementing stock strategies using options

There are many stock trading strategies that are quite attractive in terms of Sharpe ratio, but not very attractive in terms of returns. (Pairs trading comes to mind, but in general any market-neutral strategy suffers from this problem.) Certainly, one cannot feed a family with annualized returns in the single or low double digits, unless one already has millions of dollars of capital. One way to solve this dilemma is of course to join a proprietary trading group, where we would have access to perhaps 30x leverage. Another way is to implement a stock trading strategy using options instead, though there is a sizable number of issues to consider. (I recently brushed up on my options know-how by reading the popular "Options as a Strategic Investment".)
  1. Using options will allow you to increase your leverage beyond the Reg T 2x leverage (or even the 4x day-trading leverage) only if you buy options, not sell them. For example, to implement a pairs trading strategy on 2 different stocks, you would have to buy call options on the long side and buy put options on the short side (not sell call options). Otherwise, the margin requirement for selling calls is as onerous as that for shorting the underlying stock itself.
  2. The effective leverage is computed by multiplying the delta of the option by the underlying stock price and dividing by the option premium. If you buy an out-of-the-money (OTM) option, the delta will be small (smaller than 0.5), but the option premium is small also. Vice versa for an in-the-money (ITM) option. So you would have to find the strike price at which the effective leverage is maximized (see the sketch at the end of this post). I personally choose to buy an at-the-money (ATM) call or a slightly ITM call without actually computing the optimal strike, but perhaps you have reached a different conclusion?
  3. Naturally, the shorter the time to expiration, the cheaper the option and the higher the effective leverage. Additionally, for ITM options, their deltas increase as we get closer to expiration, which also contributes to higher effective leverage. However, the time to expiration must of course be longer than the expected holding period of your position; otherwise you would incur the transaction costs of rolling over to the further-month options.
  4. The discussion of finding the right strike price based on its delta is moot if your brokerage's API does not provide you with deltas for your automated trading system. In theory, Interactive Brokers' API provides deltas for whole option chains, and quant2ib's MATLAB API will pass these on to your MATLAB execution program too. However, I have not been successful in retrieving deltas using quant2ib's API. If you have encountered a similar problem, and perhaps have found the reason or cure for it, please let me know. For now, I am reduced to assuming that all my near-ATM calls for different stocks have the same delta, and I increase this common value from 0.5 to close to 1 as time passes.
  5. Options don't have MOO, LOO, MOC or LOC order types. If one uses market orders to buy at the open or close, one would incur significant transaction costs due to the much wider bid-ask spread compared to stocks. I try to use limit orders on options orders as much as possible.
If you have used options to implement stock trading strategies, and have experiences with these or other issues, please do share them here.
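
Here is the sketch promised in point 2: a scan of the effective leverage (delta times stock price divided by premium) across a few strikes, using made-up Black-Scholes inputs ($100 stock, 30% volatility, one month to expiration, zero rates). It only shows the calculation; the bid-ask spreads and other frictions from point 5 are ignored.

% Effective leverage = delta * S / premium, for an ITM, ATM and OTM call.
% Made-up Black-Scholes inputs; transaction costs are ignored.
S = 100; sigma = 0.3; T = 1/12; r = 0;
N  = @(x) 0.5*(1 + erf(x/sqrt(2)));
K  = [90; 100; 110];                              % ITM, ATM and OTM strikes
d1 = (log(S./K) + (r + sigma^2/2)*T) / (sigma*sqrt(T));
d2 = d1 - sigma*sqrt(T);
premium = S.*N(d1) - K.*exp(-r*T).*N(d2);
delta   = N(d1);
effLeverage = delta .* S ./ premium;
disp('  strike    premium    delta    effLeverage');
disp([K premium delta effLeverage]);

In this frictionless sketch the OTM strike shows the highest ratio, but that ignores the much wider relative spreads you typically pay on cheap OTM options, which is one reason I stick with ATM or slightly ITM calls.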

====

Reminder: my next pairs trading workshop will take place in New York on October 26-27th.

Sunday, August 22, 2010

Phantom quotes

Have you ever gotten the feeling that your market orders are often filled at prices worse than the NBBO displayed on your trading screen? Apparently, this may be the result of deliberate manipulation of the market by high frequency traders. These HF traders submit thousands of quotes per second to the NYSE ("quote stuffing") and then cancel them within 50 ms. This slows down the exchange's data queue so much that by the time a quote is transmitted to you, it is already stale, even if your trading server is colocated at the exchange. (Checking the time stamp of the quote is of no help: the time stamp is based on the time the quote enters the queue, not when it exits the queue.)

If you can no longer believe in the quotes, is there any integrity left in the market? Much as I think that high frequency traders may be useful liquidity providers, I can't see how this specific practice could be good for anyone over the long term.

(Hat tip: Jim Liew of Alpha Quant Club.)

Saturday, August 14, 2010

What are we to do with Sharpe ratio?

I have written several times before about how useless the Sharpe ratio is for certain types of strategies: see here and here. Not only is a high Sharpe ratio quite useless in telling you what damage extreme events can do to your equity, a low Sharpe ratio is also quite useless in telling you what spectacular gains your strategy might enjoy in the event of a catastrophe. I came across another brilliant example of the latter category in the best-selling book "The Big Short", where the author tells the story of the fund manager Mike Burry.

Mike Burry started buying credit default swaps in 2005, essentially insurance policies on mortgage-backed securities, betting that there would be widespread defaults on mortgages. Of course, we now know how this story turned out: Mike Burry made $750 million in 2007 alone. But there was nothing but pain for the fund manager and his investors in 2005-2006, since they had to pay an annual premium of 8% of the portfolio. Investors who measured the performance of this strategy using the Sharpe ratio, without knowing the details of the strategy itself, would have been quite justified in thinking that it was an utter disaster prior to 2007. And indeed, many of them lost no time in trying to pull out their investments.

So what are we to do with the Sharpe ratio, with its inherent reliance on Gaussian distributions? Clearly, it is useful for measuring high frequency strategies, which you can count on to generate consistent returns every day but which have limited catastrophic risk. But it is less useful for measuring statistical arbitrage strategies that hold positions over multiple days, since there may well be substantial hidden catastrophic risks in these strategies that would not be revealed by their track record and standard deviation of returns alone. As for strategies that are designed to benefit from catastrophes, such as Mike Burry's CDS purchases or Nassim Taleb's options purchases, it is completely useless. If I were to allocate my assets over different hedge funds, I would be sure to include some funds in the first category to generate cash flows for my daily needs, as well as funds in the last category to benefit from infrequent black-swan events. As for the funds in the middle category, I am increasingly losing my enthusiasm.
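
To make the first of those problems concrete, here is a toy MATLAB simulation (all numbers invented) of a strategy that earns a steady few basis points a day but suffers a rare one-day loss of 20%. Measured on a track record in which the catastrophe has not yet occurred, its Sharpe ratio looks superb; once the catastrophe enters the sample, the same strategy looks mediocre.

% Toy illustration of how catastrophic risk hides from the Sharpe ratio (invented numbers).
rng(0);
nDays = 2520;                                  % ten years of daily returns
dailyRet = 0.0005 + 0.001*randn(nDays,1);      % steady gains: ~12.5% a year, tiny volatility
crash = false(nDays,1);
crash(1260:1260:end) = true;                   % one catastrophic day every five years
dailyRet(crash) = -0.20;                       % the rare -20% day

sharpeWithCrash = sqrt(252)*mean(dailyRet)/std(dailyRet)                    % roughly 1
luckyTrackRecord = dailyRet(~crash);           % a track record without the catastrophe
sharpeLucky = sqrt(252)*mean(luckyTrackRecord)/std(luckyTrackRecord)        % roughly 8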

Friday, July 30, 2010

Pair trading technologies update

Pair trading was invented two decades ago, but automating its implementation has only recently become fashionable with independent traders. But once the spotlight is on, innovations come fast and furious. Here are a number of recent developments that I find interesting:


1. I mentioned previously the software called quant2ib. It is an API which allows us to get market data and send orders from a Matlab program to Interactive Brokers (IB). I have used it extensively for our trading, and it is as reliable as IB's native API. Their latest version now includes functions for constructing a "combo" security. This combo can be a pair of stocks, ETFs, futures, etc. (with the notable exception of currencies), and the API allows you to get market data as well as submit orders on the combo. This is a huge improvement, because you can now automatically trade a pair of securities as one unit by submitting limit orders on the combo. (Previously, you would have had to submit a market order on at least one side of the pair, and this would have required your program to continuously monitor the market prices and send orders when appropriate. Or else you had to give up using the API and manually enter a "generic combo" limit order in IB's TWS.)

2. Alphacet Discovery also has the ability to send limit orders on pairs, due to its partnership with Knight Trading. Besides, based on a demo that I have recently seen, they also now have great pairs portfolio and execution reporting functionality. (Full disclosure: I used to consult for them.)

3. IB itself has released a "Scale Trader" algorithm that can be applied to combos (see 1. above. Hat tip: Mohamed.) I can't explain this better than their press release: "... ScaleTrader algorithm allows clients to create conditions under which a long position in one stock is built while simultaneously creating an offsetting short position in the other. The ScaleTrader is named because investors can 'scale-in' to market weakness by setting orders to buy as the market moves lower. Similarly, sell orders can be 'scaled' into when a market is rising. The ScaleTrader algorithm can be programmed to buy the spread and subsequently take profit by selling the spread if the difference reaches predetermined levels set by the user." In other words, it allows us to automatically implement the "parameterless trading" or the "averaging-in" strategy that I blogged about previously without any programming on our part!

Speaking of pair trading, I will be teaching my first New York workshop in October.  (My editor inevitably picks touristy locations for these workshops. My London workshop takes place across the street from the Tower of London, my New York workshop is across from the new World Trade Center, and my Hong Kong workshop is in the "Golden Mile" shopping district of Tsim Sha Tsui.)

Saturday, May 29, 2010

The Quants

Once in a while, a book about trading written for the general public contains some useful nuggets even for professionals.  Fortune's Formula was one. It introduced me to the world of Kelly's formula, Universal Portfolios, and the maximization of compounded growth rate. The Quants, by WSJ reporter Scott Patterson, is another. (Hat tip to my partner Steve for telling me about it.)

What is the most important take-away from The Quants? No, it is not that you should learn to become a master poker or chess player before hoping to make it big, though you might think so given Patterson's exhaustive coverage of the poker games played by the top quants. Among my own professional acquaintances, trader-poker-players are still a minority.

The most important take-away is what ex-employees said about Renaissance Technologies: "there is no secret formula for the fund's success, no magic code discovered decades ago by geniuses .... Rather, Medallion [Fund]'s team of ninety or so Ph.D.'s are constantly working to improve the fund's systems, ..."

In other words, though you may not have 90 Ph.D.'s at your disposal, you can still work on continuously improving and refining your strategies, improving the engineering of your trading environment, and increasing the diversity of your strategies. And though you may still not achieve 60-70% annualized returns every year, you will nevertheless enjoy stable returns year after year.

By the way, it is good to see my ex-colleagues Lalit Bahl, Vincent and Stephen Della Pietra mentioned in the book, all of whom left IBM to join Renaissance many years ago, and who are extraordinarily nice and friendly guys, quite in contrast to the norm on Wall Street.

Saturday, May 22, 2010

A HFT primer

As a follow-up to my previous discussions of high frequency trading, I have invited guest blogger Jennifer Groton to share with us a quick survey of various common HFT strategies used by equities and FX traders.

==

High frequency trading strategies are under fire.  The recent trading spike in our national exchanges was duly noted as a short-circuit waiting to happen and drew immediate industry criticism of auto-trading robots. Before a witch-hunt ensues, perhaps a review of the common HFT strategies in stocks and Forex is in order. 

High-frequency firms employ a wide variety of low-margin trading strategies that are implemented by professional market intermediaries who have invested heavily in technology. These firms claim that they make markets more efficient by enhancing liquidity and transparent price discovery to the benefit of investors. The Forex market's unique combination of high liquidity and low volatility makes it an ideal environment for deploying HFT strategies, although many of the ideas and much of the technology come from the equity markets. The basic strategies fall into three categories: market-making, trending or predictive, and classic arbitrage.

Market-making strategies tend to focus on a single stock or currency pair.  Many firms in this area have been described as engaging in "rebate-capture trading", a reference to the credits that firms get for providing liquidity on most market centers.

The second group consists of mean-reversion and trending strategies. These utilize technical indicators for stocks or forex indicators for currencies, and seek to generate more return from individual trades.

The last group may involve a cross-section of trades across multiple markets. The classic arbitrage strategy is a form of the “carry trade” that uses the prices of a domestic bond, a bond denominated in a foreign currency, the spot price of the currency, and the price of a forward contract on the currency. If the market prices are sufficiently different from those implied by the model to cover transaction costs, then four transactions can be made to guarantee a risk-free profit.
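
As a stylized example of that last category (with made-up rates and prices, and ignoring transaction costs): covered interest parity implies a fair forward rate of F = S*(1+r_domestic)/(1+r_foreign), and a quoted forward that deviates far enough from F can be locked in for a risk-free profit.

% Stylized covered interest arbitrage (made-up numbers, no transaction costs).
S  = 1.3000;     % spot rate: units of domestic currency per unit of foreign currency
rd = 0.02;       % one-year domestic interest rate
rf = 0.05;       % one-year foreign interest rate
Fq = 1.2800;     % quoted one-year forward rate

Ffair = S*(1 + rd)/(1 + rf);                 % forward implied by interest rate parity
% Since Fq > Ffair here: borrow 1 unit of domestic currency, convert at S,
% invest at rf, and sell the foreign proceeds forward at Fq.
profit = (1/S)*(1 + rf)*Fq - (1 + rd);       % risk-free profit per unit borrowed
fprintf('Fair forward %.4f, quoted %.4f, locked-in profit %.4f\n', Ffair, Fq, profit);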

High frequency trading is credited with generating over 70% of the volume on our equity markets. Similar statistics are not available for forex markets, but speculation disguised as commercially necessary trading has been reported to account for over two-thirds of the volume. Advocates point to liquidity and pricing transparency as its benefits, but regulators and other market participants who dispute this positive assessment presently discount those benefits. Transaction taxes and time limits on orders have been proposed to mitigate the perceived risks created by HFT firms, but the wheels of Washington move slowly, even in a crisis. For the time being, there is no indication that their participation will be discontinued.

Saturday, May 08, 2010

Are flash orders to be blamed for Dow's 1,000 points drop?

Before the smoke has cleared, fingers are already pointing at flash orders. See these two NYT pieces here and here. Our reader Madan has convinced me previously that flash orders can indeed be used to front-run other traders, but until more evidence comes in, I have yet to be convinced that they are the main culprit. Couldn't old-fashioned automated momentum programs have accomplished the same thing after an initial erroneous transaction price and/or quote was reported? Perhaps you know of discussions elsewhere in the blogosphere that shed more light on the issue?

Sunday, May 02, 2010

An additional ETF pair

Many of you know that there are a number of dependable commodity-related ETF pairs that have remained cointegrated ever since I mentioned them in 2006: IGE-EWC, IGE-EEM, IGE-EWA, EWA-EWC, etc. (Their latest zScores are available here to my book's readers and to Premium Content subscribers.) A recent visit to a client in South Africa prompted me to add a new one: EWA-EZA.

It is worth noting that for those country ETF pairs that cointegrate, their underlying currency cross-rates are often stationary as well. Now, there are several advantages to trading currency cross-rates instead of ETF pairs. When trading a stationary cross-rate, you can use limit orders to enter and exit, whereas trading pairs of ETFs involves market orders on at least one side. Also, ETFs can sometimes be hard to borrow, and their margin requirements are much more onerous than those of currencies. However, the one major disadvantage of trading cross-rates is that they are not always available at your brokerage. For example, based on the cointegration of EWA and EZA you would think that trading AUDZAR would be quite profitable. And you would be right, theoretically, except that AUDZAR is not available for trading at Interactive Brokers. If you know of a good Forex brokerage that has many emerging-market cross-rates for trading, especially those of Latin American countries, please let the rest of us know!
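
If you would like to check a candidate pair yourself, here is a bare-bones MATLAB sketch of the usual calculation. It assumes you already have two date-aligned price series, ewa and eza, and it uses a simple in-sample OLS hedge ratio; the entry threshold mentioned in the comment is only illustrative.

% Bare-bones pair check: OLS hedge ratio and z-score of the spread.
% Assumes ewa and eza are date-aligned column vectors of closing prices.
b      = [ones(size(eza)) eza] \ ewa;         % regress EWA on a constant and EZA
spread = ewa - b(2)*eza - b(1);
zScore = (spread - mean(spread)) / std(spread);
fprintf('Latest z-score: %.2f\n', zScore(end));
% An illustrative rule: enter when |z-score| > 2, exit when it reverts toward 0.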

Saturday, April 17, 2010

How do you limit drawdown using Kelly formula?

As many of you know, I am a fan of the Kelly formula because it allows us to maximize the long-term growth of equity while minimizing the probability of ruin. However, what the Kelly formula won't prevent is a deep drawdown, though we are assured that the drawdown won't be as much as 100%! This is unsatisfactory to many traders and especially to fund managers, since a deep drawdown is psychologically painful and may cause you to panic and shut down a strategy prematurely.

There is an easy way, though, to use the Kelly formula to limit your drawdown to much less than 100%. Suppose the optimal Kelly leverage of your strategy is determined to be K, and suppose you only allow a maximum drawdown (measured from the high watermark, as usual) of D%. Then you can simply set aside D% of your initial total account equity for trading, and apply a leverage of K to this sub-account to determine your portfolio market value. The remaining (1-D%) of the account will sit in cash. You can then be assured that you won't lose all of the equity in this sub-account, or equivalently, that you won't suffer a drawdown of more than D% in your total account. If your trading strategy is profitable and the total account equity reaches a new high watermark, then you can reset your sub-account equity so that it is again D% of the total equity, moving some cash back into the "cash" account. Otherwise, you continue to keep the equity in the cash account separate from the equity of the trading sub-account.

Notice that because of this separation of accounts, this scheme is not equivalent to simply using a leverage of L=K*D% on your total account equity. Indeed, some of you may be too nervous to use the full K as leverage, and prefer to use a leverage L smaller than K. (In fact, the common wisdom is that, due to estimation errors, it is never advisable to set L to more than K/2, i.e. half-Kelly.) The problem with using an L that is too small is that, besides not achieving maximum growth, the portfolio market value will be unresponsive to gains or losses and will remain relatively constant. The scheme I suggested above cures this problem as well, because you can apply a higher leverage L_sub to the sub-account (e.g. L_sub = L/D%) as long as L_sub < K, so that the portfolio market value is much more sensitive to your P&L while still ensuring that the drawdown will not exceed D%.
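
Here is a minimal MATLAB sketch of the bookkeeping for this scheme, with invented daily strategy returns and invented values of K and D; it demonstrates the mechanics only, not the performance of any real strategy.

% Sketch of the drawdown-limiting sub-account scheme (all numbers invented).
rng(1);
K = 4; D = 0.2;                           % assumed Kelly leverage and maximum drawdown
ret = 0.0004 + 0.01*randn(2520,1);        % invented daily unlevered strategy returns

total = 1; hwm = 1;
sub = D*total; cash = (1 - D)*total;      % trading sub-account and cash account
equity = zeros(size(ret));
for t = 1:length(ret)
    sub   = max(sub*(1 + K*ret(t)), 0);   % leverage K is applied to the sub-account only
    total = cash + sub;
    if total > hwm                        % new high watermark: re-split the equity
        hwm  = total;
        sub  = D*total;
        cash = (1 - D)*total;
    end
    equity(t) = total;
end
maxDrawdown = max(1 - equity./cummax(equity))   % never exceeds D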

Has anyone tried this scheme in their actual trading? If so, I would be interested in hearing your experience and see if practice is as good as theory.

Saturday, February 27, 2010

Conference on the sociology of quantitative finance

A new conference called Psi-Q will be held in London this June, featuring luminaries in the academic quantitative finance world, as well as risk and fund managers from various banks and hedge funds. Example topics:
  • How did shared beliefs, practices, ways of calculating, and technical systems impact the evaluation of asset-backed securities and CDOs before and during the credit crisis?
  • Was that Lucky or Good? Creating a framework for skill attribution in finance, business management and other risky endeavors.
  • The “backing out” phenomenon observed in options markets: how traders use models to imply independent variables consistent with market-observed pricing, and how, when enough traders are wrong about the expected results, the backed-out positions can send the wrong message.
Sounds like an interesting bird's eye view of quantitative finance.

Thursday, February 18, 2010

Pairs Trading Workshop in Hong Kong

For my readers in Asia, I will be conducting a pairs trading workshop in Hong Kong on March 10-11. This workshop is organized by the Technical Analyst magazine and is similar to the one I gave in London last year.
However, I have added a few useful insights based on audience feedback. As always, no prior knowledge of Matlab or advanced statistics is assumed. The numerous in-class exercises should be sufficient to bring your Matlab programming skills up to speed.

Sunday, January 31, 2010

A method for optimizing parameters

Most trading systems have a number of parameters embedded, such as the lookback period, the entry and exit thresholds, and so on. Readers of my blog (e.g., here and here) and my book will know my opinion on parameter optimization: I am no big fan of it. This is because I believe financial time series are too non-stationary to allow us to say that what was optimal in the backtest is necessarily optimal in the future. Most traders I know would rather trade a strategy that is insensitive to small changes in its parameters, or alternatively, a "parameterless" strategy that is effectively an average of models with different parameters.

That being said, if you can only trade one model with one specific set of parameters, it is rational to ask how one can pick the best (optimal) set of parameters. Many trading models have a good number of parameters, and it is quite onerous to find the optimal values of all of them simultaneously. Recently, Ron Schoenberg published an article in Futures Magazine that details a way to accomplish this with just a tiny amount of computing power.

The key technique that Ron uses is a cubic polynomial fit of the P&L surface as a function of the parameters. Ron uses the VIX RSI strategy in Larry Connors' book "Short Term Trading Strategies That Work" as an example. This strategy has 5 parameters to be optimized, but Ron only needs to compute the P&L for 62 different sets of parameters, and the whole procedure takes only 58 seconds.
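
I have not seen Ron's code, but the idea is easy to illustrate in one dimension with plain MATLAB: backtest the strategy at a handful of parameter values, fit a cubic polynomial through the resulting P&L points, and read the optimum off the fitted polynomial instead of brute-forcing a fine grid. The pnl function below is a made-up smooth curve standing in for a real backtest.

% One-parameter illustration of the cubic-fit idea. The pnl function is a
% made-up smooth P&L curve standing in for your own backtest.
pnl       = @(lookback) 10 - (lookback - 25).^2/100;
lookbacks = [5 10 20 40 80];                  % only a handful of sampled parameter values
pnls      = pnl(lookbacks);

[c, S, mu] = polyfit(lookbacks, pnls, 3);     % cubic fit of the sampled P&L (centered and scaled)
xs = linspace(min(lookbacks), max(lookbacks), 1000);
[~, imax] = max(polyval(c, xs, S, mu));
bestLookback = xs(imax)                       % close to the true optimum of 25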

Although Ron confirmed that most of the parameters that Connors picked are close to optimal, he did find a few surprises: namely, that an RSI period of 3 or 4 is significantly more profitable than the 2 that Connors used, at least in the backtest period.

Now, for a true test of this optimization, it would have been helpful if Ron had performed it while withholding some out-of-sample data, to see whether these parameters remain optimal in the withheld data set. Since he didn't do that, we will have to wait another year to find out for ourselves!

Tuesday, January 19, 2010

Excel ADF test

Some readers have asked whether there is an Excel version of the ADF test for cointegration (mentioned in the articles here and here). You can download one such package here (Hat tip: Bruce H.).

And as always, you can download the Matlab version from spatial-econometrics.com.
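
If you are curious what these packages compute under the hood, here is a stripped-down Dickey-Fuller regression in plain MATLAB (constant, no trend, no augmentation lags). The resulting t-statistic still has to be compared with the tabulated critical values (roughly -2.9 at the 5% level for this specification), which the packages above supply for you.

% Stripped-down Dickey-Fuller test: regress the change in y on a constant and the
% lagged level, and examine the t-statistic of the lagged-level coefficient.
% y is assumed to be a column vector, e.g. the spread between two cointegrating ETFs.
dy    = diff(y);
ylag  = y(1:end-1);
X     = [ones(size(ylag)) ylag];
b     = X \ dy;
resid = dy - X*b;
s2    = sum(resid.^2)/(length(dy) - 2);
covb  = s2*inv(X'*X);
tstat = b(2)/sqrt(covb(2,2))       % compare with roughly -2.9 at the 5% level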

Saturday, January 09, 2010

Does Averaging-In Work?

Ron Schoenberg and Al Corwin recently did some interesting research on the trading technique of "averaging-in". For example: let's say you have $4 to invest, and a future's price has recently dropped to $2, though you expect it to eventually revert to $3. Should you

A) buy 1 contract at $2, and wait for the price to possibly drop to $1 and then buy 2 more contracts (i.e. averaging-in); or
B) buy 2 contracts at $2 each;  or
C) wait to possibly buy 4 contracts at $1 each?

Let's assume that the probability of the price dropping to $1 once it has reached $2 is p. It is easy to see that the average profits of the 3 options are the following:
A) p*(1*$1 + 2*$2) + (1-p)*(1*$1) = 1 + 4p;
B) 2; and
C) p*(4*$2) = 8p.

Profit A is lower than profit B when p < 1/4, and profit A is lower than profit C when p > 1/4. Hence, whatever p is, either option B or option C is at least as profitable as averaging-in, and thus averaging-in can never be the best choice.
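
The algebra is easy to check numerically; here is a quick MATLAB verification over a grid of p:

% Expected profits of the three choices as a function of p.
p = (0:0.01:1)';
profitA = 1 + 4*p;                 % A: averaging-in
profitB = 2*ones(size(p));         % B: 2 contracts at $2
profitC = 8*p;                     % C: wait and possibly buy 4 contracts at $1
all(profitA <= max(profitB, profitC))    % returns 1: A is never strictly the best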

From a backtest point of view, the Schoenberg-Corwin argument is impeccable, since we know what p was for the historical period. You might argue, however, that financial markets are not quite stationary, and in my example, if the historical value of p was less than 1/4, it is quite possible that its future value will be more than 1/4. This is why I never put too much effort into optimizing parameters in general, and why I can sympathize with traders who insist on averaging-in even in the face of this solid piece of research!