###### Traders often use Monte Carlo simulations to estimate worst-case drawdowns, but did you know they can be used for out-of-sample testing too?

###### This post demonstrates the use of StrategyQuant’s Monte Carlo simulator to randomize historical prices and strategy parameters, helping you select robust strategies for live trading.

Robustness is a big deal in trading strategy development.

A robust strategy is insensitive to variations in price behaviour, meaning it will continue to perform well when market conditions change.

To quantify a strategy’s robustness, traders often include out-of-sample tests during development. This usually involves backtesting the strategy on a different market or timeframe.

But what if you want to run an out-of-sample test on the same market and timeframe?

StrategyQuant’s Monte Carlo simulator can help you with this.

# What is Monte Carlo Simulation?

Monte Carlo simulation uses repeated random sampling to produce different backtest outcomes.

It is most commonly used to randomize your backtest trade sequence, essentially producing a new equity curve with a different risk and return profile. This is a good approach to estimate your worst-case drawdowns.

The example below shows 25 equity curve simulations, each with a different trade sequence.
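To make the idea concrete, here is a minimal sketch of trade-sequence randomization (my own illustration, not StrategyQuant's internal code): reshuffling the same list of trade P/Ls yields equity curves with the same final equity but different drawdown paths, so the worst shuffle gives a rough worst-case drawdown estimate. The trade P/Ls below are synthetic.

```python
import random

def equity_curve(trade_pnls, start_equity=10_000):
    """Cumulative equity after each trade."""
    curve = [start_equity]
    for pnl in trade_pnls:
        curve.append(curve[-1] + pnl)
    return curve

def max_drawdown(curve):
    """Largest peak-to-trough decline along the equity curve."""
    peak, worst = curve[0], 0.0
    for equity in curve:
        peak = max(peak, equity)
        worst = max(worst, peak - equity)
    return worst

random.seed(42)
trades = [random.gauss(10, 50) for _ in range(200)]  # hypothetical trade P/Ls

drawdowns = []
for _ in range(25):  # 25 simulations, matching the chart above
    shuffled = random.sample(trades, len(trades))  # random reordering
    drawdowns.append(max_drawdown(equity_curve(shuffled)))

print(f"Worst-case drawdown across simulations: {max(drawdowns):.0f}")
```

Note that every shuffle ends at the same final equity; only the path, and therefore the drawdown, changes.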

Apart from randomizing your trade sequence, StrategyQuant’s Monte Carlo simulator can also randomize your:

- Historical prices
- Strategy Parameters

Both of these methods effectively ‘shake’ your strategy to evaluate how much curve fitting occurred during development, much like shaking a ladder before climbing it.

If your strategy’s performance suffers drastically due to changing prices or parameters, it has likely fallen victim to overfitting. Such strategies lack robustness and will likely not trade as well as their backtests indicate.

To demonstrate the use of these Monte Carlo tests, I’ll use the GBPJPY trend following strategy generated in the StrategyQuant development section.

This strategy trades on the hourly timeframe, but enters the market on a breakout of the previous week’s opening price. Trade management is minimal, with only a bar-based time stop and an ATR-based stop loss.
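The entry rule can be sketched roughly as follows. This is my reconstruction from the description above, not the downloadable strategy's exact code, and the prices used are hypothetical:

```python
def breakout_signal(close, weekly_open, prev_close):
    """+1 on an upside break of the previous week's opening price,
    -1 on a downside break, 0 otherwise."""
    if prev_close <= weekly_open < close:
        return 1   # hourly close crossed above the weekly open: go long
    if prev_close >= weekly_open > close:
        return -1  # hourly close crossed below the weekly open: go short
    return 0

# Example: price breaks above a weekly open of 185.25
print(breakout_signal(close=185.40, weekly_open=185.25, prev_close=185.10))  # 1
```

The actual strategy also applies the bar-based time stop and ATR-based stop loss mentioned above, which are omitted here for brevity.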

You can download this strategy here.

# Monte Carlo With Randomized Prices

This test creates variations in the historical prices used for backtesting.

## Setting Up

Let’s load the strategy and configure the Monte Carlo simulator.

Select the **Monte Carlo Retest Methods** option in StrategyQuant’s **Retester** module.

Under the **Settings** tab, let’s configure the following:

- **Number of simulations**: **1000** is the maximum allowed. Unlike randomizing your trade sequence, a fresh backtest needs to be performed each time a new equity curve is created.
- **Use Full Sample**: I’ll toggle this **ON**. By default, the simulations will only be performed on the in-sample portion of your data (configured under the Data tab). If you included an out-of-sample portion (shown below), this portion will not be used for backtesting. Selecting Full Sample means all the available data will be used.
- **Backtest Precision**: I recommend **Selected Timeframe Only**, which is equivalent to the ‘Open prices only’ model in MT4. The alternative is ‘1-minute data tick simulation’, which is equivalent to the default ‘Every tick’ model in MT4. This second model is much slower because it simulates tick movement by interpolating between the 1-minute OHLC prices.
- Select the **Randomize History Data** option from the list of available simulations.
- **Probability Up/Max Up Change**: Probability Up is the % likelihood that each bar in the price history will have one of its prices (open, high, low, close) increased. Max Up Change is the maximum allowable change, as a % of the 14-period ATR. I’ll use **30%** for both fields. For example, if the ATR is 10 pips, there is a 30% likelihood that the price will increase by up to 3 pips. I recommend using similar values for downside price changes, since it is difficult to predict market bias.
- **Keep Connected**: I’ll toggle this **ON**. This preserves any price gaps in the original data series.
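Conceptually, the price randomization works something like the sketch below. This is my approximation of the idea, not StrategyQuant's actual algorithm; a production version would also re-impose OHLC consistency (e.g. high ≥ max(open, close)) after perturbing each price.

```python
import random

PROB_CHANGE = 0.30  # "Probability Up" / "Probability Down"
MAX_CHANGE = 0.30   # "Max Up/Down Change", as a fraction of the 14-period ATR

def randomize_bar(bar, atr):
    """Return a perturbed copy of an OHLC bar (a dict of open/high/low/close)."""
    new_bar = {}
    for key, price in bar.items():
        shift = 0.0
        if random.random() < PROB_CHANGE:  # possible upward nudge
            shift += random.uniform(0, MAX_CHANGE * atr)
        if random.random() < PROB_CHANGE:  # possible downward nudge
            shift -= random.uniform(0, MAX_CHANGE * atr)
        new_bar[key] = price + shift
    return new_bar

random.seed(1)
bar = {"open": 185.20, "high": 185.35, "low": 185.05, "close": 185.30}
atr = 0.10  # ATR of 10 pips, so each nudge is at most 3 pips
print(randomize_bar(bar, atr))
```

Running this over every bar in the history produces one alternative price series; repeating it 1000 times produces the 1000 backtests described above.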

Before proceeding, now’s a good time to introduce the concept of confidence levels.

Monte Carlo simulation is probability-based, meaning your results will differ slightly each time you run the simulation. It’s like flipping a fair coin 1000 times – you’re not going to get 500 heads every time. In theory, your backtest metric will converge towards a certain ‘true’ value after running an infinite number of simulations.

This is obviously an impractical approach. A good compromise would be to run a sample of simulations (1000 in this case), and use confidence levels to quantify the uncertainty arising from this smaller sample.

A confidence level refers to the probability that the sampled results contain a parameter’s true value. For this post, I’ll use the backtest’s return/maximum drawdown ratio (Ret/DD) as the parameter of interest.

Here’s how we apply confidence levels to the Ret/DD. The following table shows a typical result from a Monte Carlo simulation:

With a 95% confidence level, there is only a 5% probability that:

- The net profit will be lower than $581
- The drawdown will be higher than $157
- The Ret/DD will be lower than 3.91

Higher confidence levels result in greater deterioration of your Monte Carlo metrics, but these metrics are more likely to encompass your future performance.
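These confidence-level values boil down to tail percentiles of the simulated outcomes: the 5th percentile for net profit and Ret/DD, and the 95th percentile for drawdown. A rough sketch, using synthetic numbers rather than the actual table values:

```python
import random

def percentile(values, pct):
    """Linearly interpolated percentile of a list (pct in 0..100)."""
    ordered = sorted(values)
    rank = (pct / 100) * (len(ordered) - 1)
    lo, hi = int(rank), min(int(rank) + 1, len(ordered) - 1)
    return ordered[lo] + (rank - lo) * (ordered[hi] - ordered[lo])

random.seed(7)
# Hypothetical results from 1000 simulations:
profits = [random.gauss(800, 150) for _ in range(1000)]
drawdowns = [abs(random.gauss(120, 30)) for _ in range(1000)]

profit_95 = percentile(profits, 5)       # only 5% of runs are worse than this
drawdown_95 = percentile(drawdowns, 95)  # only 5% of runs exceed this
ret_dd_95 = percentile([p / d for p, d in zip(profits, drawdowns)], 5)

print(f"95% confidence: profit >= {profit_95:.0f}, "
      f"drawdown <= {drawdown_95:.0f}, Ret/DD >= {ret_dd_95:.2f}")
```

Pushing the confidence level towards 99% or 100% simply reads further into the tails, which is why the metrics deteriorate as the level rises.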

With an understanding of confidence levels, it’s time to input the test filters. These will be useful if you’re using the Monte Carlo simulations to shortlist strategies. Head over to the **Filtering** tab.

I’ll only use one condition, which states that the Ret/DD at the 95% confidence level should be at least 50% of the original backtest’s Ret/DD. The left side value is set up as follows:

- **Metric**: Select your metric of interest. I’ll use **Return/Maximum Drawdown** here. If you trade a small account and are concerned about capitalization, maximum drawdown is an alternative.
- **From Backtest**: Select the relevant robustness test. **Monte Carlo Retest Methods** is what we’re using.
- **At Confidence Level**: **95%** is a commonly used level. You can use this to adjust the difficulty of your test. Even if you use a 100% confidence level, it doesn’t mean that future performance cannot be worse. Monte Carlo simulations are estimations, after all.

As for the right side value,

- **From Backtest**: Similar to above, but this time we will be using the original backtest as a benchmark, so select **Main Data**.
- **Sample**: Like the Use Full Sample option described above, we want to use all the available data, so select **Full**.
- **Direction**: Select **Both**, since the strategy trades symmetrically on the long and short sides.
- **Result In**: This depends on what units your metric uses. It is irrelevant here because Ret/DD doesn’t have units, although **Money** is selected by default.
- **Apply Percentage Ratio**: Since we want the Monte Carlo Ret/DD to be at least 50% of the original, set the percentage to **50%**.
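In code, the whole filter reduces to a single comparison. A minimal sketch, with hypothetical placeholder values for both Ret/DD figures:

```python
def passes_filter(mc_ret_dd, original_ret_dd, ratio=0.50):
    """True if the Monte Carlo Ret/DD retains at least `ratio` of the original."""
    return mc_ret_dd >= ratio * original_ret_dd

original_ret_dd = 8.25  # hypothetical original backtest Ret/DD
mc_ret_dd_95 = 5.03     # hypothetical Ret/DD at the 95% confidence level

print(passes_filter(mc_ret_dd_95, original_ret_dd))  # True
```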

That’s it! Let’s run the simulation.

## Monte Carlo Simulation Results (Randomized Prices)

The characteristic ‘straw broom’ Monte Carlo chart is shown below.

There are 1000 additional equity curves, each created by backtesting on different historical data. As a consequence, the curves contain different numbers of trades and final equity values.

The original equity curve is mostly in the top third of the ‘broom.’ This means the majority of the simulations resulted in worse outcomes, which is not great. Let’s pull up the confidence levels.

As we progress through the confidence levels, the Ret/DD tends to deteriorate significantly because we expect a simultaneous decrease in profit and increase in drawdown. The 5.03 Ret/DD at the 95% confidence level is 61% of the original value, so the test passes.

# Monte Carlo With Randomized Parameters

This test randomizes your strategy parameters. Robust strategies remain profitable over a large range of parameters.

This strategy only contains three parameters:

- Number of bars the entry buy/sell stops will be valid for
- Number of bars used for the time stop
- Multiple for the ATR-based stop loss

The fewer parameters you use, the more robust your strategies tend to be. Complex strategies fail in complex ways!

## Setting Up

The steps here are identical to those above, with the exception of the following:

- Select **Randomize Strategy Parameters** from the list of available simulations.
- **Probability**: This is the % likelihood that each parameter will be changed. I’ll input **30%**.
- **Max Change**: This is the maximum % change that each parameter will be subjected to. I’ll input **30%**. For example, a 100-bar time stop can have its value changed to anything in the 70-130 range.
- **Symmetric Parameters**: I’ll toggle this **ON**. This means the same parameters will be used for both the long and short sides.
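The parameter randomization can be sketched as below. Again, this is my interpretation of the idea rather than StrategyQuant's code, and the parameter values shown are hypothetical stand-ins for the strategy's three parameters:

```python
import random

PROBABILITY = 0.30  # chance that each parameter is changed
MAX_CHANGE = 0.30   # maximum relative change, e.g. 100 bars -> 70..130 bars

def randomize_params(params):
    """Return a perturbed copy of a {name: value} parameter dict."""
    new_params = {}
    for name, value in params.items():
        if random.random() < PROBABILITY:
            factor = random.uniform(1 - MAX_CHANGE, 1 + MAX_CHANGE)
            perturbed = value * factor
            # keep bar counts integral; leave float parameters as floats
            new_params[name] = (round(perturbed) if isinstance(value, int)
                                else round(perturbed, 2))
        else:
            new_params[name] = value
    return new_params

random.seed(3)
params = {"entry_valid_bars": 24, "time_stop_bars": 100, "atr_stop_multiple": 3}
print(randomize_params(params))
```

Each of the 1000 simulations draws a fresh parameter set like this and runs a full backtest with it.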

## Monte Carlo Simulation Results (Randomized Parameters)

The ‘straw brooms’ look quite similar to those obtained from the randomized prices, with one key difference – there is a much larger variation in the number of trades per simulation. This is due to changes in the length of the time stop and size of the stop loss.

The 95% confidence level Ret/DD is slightly lower than that in the randomized prices simulation, but still passes our 50% filter.

So far the strategy seems robust; it has performed reasonably well in the face of changing prices and parameters. Being a perverse tester, I can’t help but think: what if we randomize *both* prices and parameters simultaneously?

I did exactly that, and the chart looks more scattered, with even larger variation in the number of trades.

The 95% confidence level Ret/DD is down to 42% of the original, so if you want a more stringent robustness test, this is the way to go.

# Wrapping Up

Using Monte Carlo simulations to randomize your prices and/or strategy parameters is another way to do out-of-sample testing.

If your strategy’s performance does not deteriorate significantly as you progress through the confidence levels, it is likely you have a robust strategy at your fingertips.

Supplementing these simulations with other robustness tests, such as walk-forward optimization, is a great way to defeat overfitting.

If you want to check out StrategyQuant’s Monte Carlo capabilities, why not take advantage of its 14-day FREE trial?

Don’t forget to use coupon TACT to get USD 200 off StrategyQuant Pro!
