Thursday, April 01, 2021

Conditional Parameter Optimization: Adapting Parameters to Changing Market Regimes via Machine Learning

Every trader knows that there are market regimes that are favorable to their strategies, and other regimes that are not. Some regimes are obvious, like bull vs bear markets, calm vs choppy markets, etc. These regimes affect many strategies and portfolios (unless they are market-neutral or volatility-neutral portfolios) and are readily observable and identifiable (but perhaps not predictable). Other regimes are more subtle, and may only affect your specific strategy. Regimes may change every day, and they may not be observable. It is often not as simple as saying the market has two regimes, and we are currently in regime 2 instead of 1. For example, with respect to the profitability of your specific strategy, the market may have 5 different regimes. But it is not easy to specify exactly what those 5 regimes are, and which of the 5 we are in today, not to mention predicting which regime we will be in tomorrow. We won’t even know that there are exactly 5!

Regime changes sometimes necessitate a complete change of trading strategy (e.g., switching from a momentum to a mean-reverting strategy). Other times, traders just need to change the parameters of their existing trading strategy to adapt to a different regime. My colleagues and I at PredictNow.ai have come up with a novel way of adapting the parameters of a trading strategy, a technique we call “Conditional Parameter Optimization” (CPO). This patent-pending invention allows traders to adopt new parameters as frequently as they like, perhaps every trading day or even for every single trade.

CPO uses machine learning to place orders optimally based on changing market conditions (regimes) in any market. Traders in these markets typically already possess a basic trading strategy that decides the timing, pricing, type, and/or size of such orders. This trading strategy will usually have a small number of adjustable trading parameters. Conventionally, these parameters are optimized on a fixed historical data set (the “train set”). Alternatively, they may be periodically reoptimized using an expanding or rolling train set. (The latter is often called “Walk Forward Optimization”.) With a fixed train set, the trading parameters clearly cannot adapt to changing regimes. With an expanding train set, the parameters still cannot respond to rapidly changing market conditions, because the newly added data is but a small fraction of the existing train set. Even with a rolling train set, there is no evidence that parameters optimized over the most recent historical period give better out-of-sample performance, and a rolling train set that is too small will yield unstable and unreliable predictions for lack of statistical significance. All of these conventional procedures can be called unconditional parameter optimization, as the trading parameters do not intelligently respond to rapidly changing market conditions. Ideally, we would like trading parameters that are much more sensitive to the market conditions and yet are trained on a large enough amount of data.
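For contrast, here is a minimal sketch (not taken from our paper) of the conventional rolling, i.e., walk-forward, re-optimization described above. The helper backtest(params, prices) is hypothetical; assume it returns the strategy's in-sample Sharpe ratio for a given parameter set.

```python
# A minimal sketch of conventional (unconditional) walk-forward optimization.
# `backtest(params, prices)` is a hypothetical helper returning the strategy's
# Sharpe ratio over the given price series; it is not part of any real API.
import itertools

import pandas as pd

def walk_forward_optimize(prices: pd.Series, param_grid: dict,
                          train_days: int = 252, test_days: int = 21):
    """Every test_days, re-pick the parameters that worked best on the trailing train window."""
    combos = [dict(zip(param_grid, vals))
              for vals in itertools.product(*param_grid.values())]
    schedule = []
    for start in range(0, len(prices) - train_days - test_days + 1, test_days):
        train = prices.iloc[start:start + train_days]
        # "Unconditional": choose whatever was best in-sample, ignoring the current regime.
        best = max(combos, key=lambda p: backtest(p, train))
        schedule.append((prices.index[start + train_days], best))
    return schedule  # list of (date the parameters take effect, chosen parameters)
```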

To address this adaptability problem, we apply a supervised machine learning algorithm (specifically, random forest with boosting) to learn from a large predictor (“feature”) set that captures various aspects of the prevailing market conditions, together with specific values of the trading parameters, to predict the outcome of the trading strategy. (An example outcome is the strategy’s future one-day return.) Once such a machine-learning model is trained to predict the outcome, we can apply it to live trading by feeding in the features that represent the latest market conditions, as well as various combinations of the trading parameters. The set of parameters that results in the best predicted outcome (e.g., the highest future one-day return) is selected as optimal and adopted for the trading strategy in the next period. The trader can make such predictions and adjust the trading strategy as frequently as needed to respond to rapidly changing market conditions.
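The following is a minimal sketch of this workflow, not PredictNow.ai's actual API. It uses scikit-learn's GradientBoostingRegressor as a stand-in for the boosted random forest, and it assumes a hypothetical DataFrame market_features of daily regime indicators plus a hypothetical helper strategy_return(params, day) that returns the strategy's realized next-day return under the given parameters.

```python
# A minimal sketch of Conditional Parameter Optimization (not PredictNow.ai's API).
# Assumptions (hypothetical): `market_features` is a DataFrame of daily regime
# indicators indexed by date, and `strategy_return(params, day)` returns the
# strategy's realized next-day return when run with those parameters.
import itertools

import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

param_grid = {"lookback": [10, 20, 30], "entry_zscore": [1.0, 1.5, 2.0]}
combos = [dict(zip(param_grid, v)) for v in itertools.product(*param_grid.values())]

# Training set: one row per (day, parameter combination), labeled with the
# strategy's next-day return under those parameters on that day.
rows, labels = [], []
for day in market_features.index[:-1]:
    for p in combos:
        rows.append({**market_features.loc[day].to_dict(), **p})
        labels.append(strategy_return(p, day))
model = GradientBoostingRegressor().fit(pd.DataFrame(rows), pd.Series(labels))

# Live use: pair today's features with every candidate parameter set and adopt
# the combination with the highest predicted next-day return.
today = market_features.iloc[-1].to_dict()
candidates = pd.DataFrame([{**today, **p} for p in combos])
best_params = combos[int(model.predict(candidates).argmax())]
```

The same prediction step can be repeated with the newest feature vector as often as the trader wishes to re-tune the strategy.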

In the example you can download here, I illustrate how we apply CPO using PredictNow.ai’s financial machine learning API to adapt the parameters of a Bollinger Band-based mean reversion strategy on GLD (the gold ETF) and obtain superior results which I highlight here:

                 Unconditional Optimization   Conditional Optimization
Annual Return    17.29%                       19.77%
Sharpe Ratio     1.947                        2.325
Calmar Ratio     0.984                        1.454
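For reference, here is a sketch of the kind of Bollinger Band mean-reversion rule whose parameters (lookback window, entry and exit z-score thresholds) CPO would re-select each day. It is my own illustration, not necessarily the exact strategy in the downloadable example; gld_close is a placeholder for GLD's closing price series.

```python
# A sketch of a Bollinger Band mean-reversion rule with adjustable parameters
# (lookback, entry_z, exit_z) of the kind CPO would tune daily. Illustrative only.
import numpy as np
import pandas as pd

def bollinger_positions(prices: pd.Series, lookback: int = 20,
                        entry_z: float = 1.0, exit_z: float = 0.5) -> pd.Series:
    """Long below the lower band, short above the upper band, flat near the mean."""
    ma = prices.rolling(lookback).mean()
    sd = prices.rolling(lookback).std()
    z = (prices - ma) / sd
    pos = pd.Series(np.nan, index=prices.index)
    pos[z < -entry_z] = 1.0      # enter long on a dip below the lower band
    pos[z > entry_z] = -1.0      # enter short on a spike above the upper band
    pos[z.abs() < exit_z] = 0.0  # exit once price reverts toward the moving average
    return pos.ffill().fillna(0.0)

# Daily strategy returns for one candidate parameter set (positions are lagged
# one day to avoid look-ahead bias); `gld_close` would be GLD's closing prices:
# returns = bollinger_positions(gld_close, 20, 1.0, 0.5).shift(1) * gld_close.pct_change()
```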

 

The CPO technique is useful in industry verticals other than finance as well; after all, optimization under time-varying and stochastic conditions is a very general problem. For example, wait times in a hospital emergency room may be minimized by optimizing various parameters, such as staffing level, equipment and supplies readiness, discharge rate, etc. Current state-of-the-art methods generally find the optimal parameters by looking at what worked best on average in the past, and there is no mathematical function that exactly determines wait time from these parameters. The CPO technique employs other variables, such as time of day, day of week, season, weather, and whether there have been recent mass events, to predict the wait time under various parameter combinations, and thereby finds the optimal combination under the current conditions to achieve the shortest wait time.

We can provide you with the scripts to run CPO on your own strategy using PredictNow.ai’s API. Please email info@predictnow.ai for a free trial.

6 comments:

  1. Hi Ernest,

    First of all, thank you for all the contributions. I'm a big fan of your books and blog.

    On the subject of this post, I don't really understand how this technique of optimizing a strategy using a machine learning model can address the difficulty of deciding the size of the training set. To me, as in the case of a walk-forward optimization, this technique also faces the trade-off between choosing a large set of tuples that generalizes the data well OR choosing a smaller (recent) set that adapts faster to changes in market regimes.

    best,
    Charles

  2. Hi Charles,
    Thanks for your kind words.

    The training set size is considered a hyperparameter in machine learning; it is not a strategy trading parameter to be optimized. Typically, the best way to decide the optimal training size is to plot the predictive performance (e.g., accuracy or F1 score) on a validation set as a function of the train set size, and pick the train set size that maximizes that performance (see the sketch after this comment).

    In CPO, we can adapt to a different regime every day (or every period) no matter how big the train set is. This is precisely its advantage over walk forward optimization.

    Did you get a chance to read the example in our paper? It should clarify things further.

    Ernie

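The train-size selection described in the comment above can be sketched with scikit-learn's learning_curve; the feature matrix X and labels y below are placeholders for your own data, and the classifier and scoring metric are arbitrary choices.

```python
# Sketch: pick the train set size by examining validation performance versus
# training size. X, y are placeholder features/labels for your own data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import learning_curve

sizes, train_scores, val_scores = learning_curve(
    RandomForestClassifier(), X, y,
    train_sizes=np.linspace(0.1, 1.0, 10), cv=5, scoring="f1")

best_size = sizes[val_scores.mean(axis=1).argmax()]  # size with the best validation F1
```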
  3. Hi Ernest,
    thanks a lot for the blog post and the paper. I'm looking into something similar, using daily data and metalabelling of simple trading models.

    A few questions on this from my side:
    - from the code snippet in the paper, am I right to assume you are using SHAP for feature selection? Do you do this for all possible combinations of trading parameter sets?
    - Are you using the set of trading parameters that optimizes the train set or the test set? Are you using any kind of cross-validation for your training?
    - Is there any way you can share a simplified code of what you are doing, to help go through step by step?

    Thanks a lot!
    Steven

  4. Hi Steven,
    1) We use cMDA for feature selection. It is done during cross-validation in the training phase. The training phase uses all features, including the trading parameters.
    2) The trading parameters used for the test set are those predicted using the latest feature vector, as explained in our paper.
    3) The high-level description is in the paper, and so is the detailed code. Please email info@predictnow.ai so we can answer further questions!
    Ernie

  5. Hi Ernie,
    thanks for the blog post, I really like the concept.
    Two questions from me:
    1. You mention in the paper that you are using features on minute bars but daily labels (i.e., the aggregated returns of the strategy for that day). How would you match those correctly for the model input?
    Do you somehow aggregate your features, or does your label just stay constant for all the minutes in one day?
    2. If you would do this live, would you retrain your model every day or less (weekly/monthly)?
    Thanks for your help!

  6. Hi,
    1) There are multiple ways to aggregate minute-bar features so that they become daily features. For example, sum/mean/max/min/AND/OR or any other statistical summary may be appropriate for different features (see the sketch after this comment).
    2) There is no need to retrain the model daily. I think once you have new data that exceeds 10% of the old train set, you should retrain.
    Ernie

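For readers curious about point 1 in the last comment, here is a small sketch of minute-to-daily feature aggregation using pandas resampling. The DataFrame minute_features and its column names are purely illustrative.

```python
# Sketch of aggregating minute-bar features into daily features.
# `minute_features` is a hypothetical DataFrame indexed by minute timestamps;
# each column gets whichever statistical summary suits it.
import pandas as pd

daily_features = minute_features.resample("1D").agg({
    "volume": "sum",       # total daily volume
    "spread": "mean",      # average intraday bid-ask spread
    "range_pct": "max",    # worst intraday range
    "halted": "max",       # acts as OR: 1 if trading was halted at any minute
}).dropna(how="all")       # drop non-trading days
```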