
How to get a regular model to match a MetaModel

I'm just getting started with this, but simple tests are failing.

  1. I create two simple models (simple RSI tests that open and close long positions at Market Open).
  2. I create a MetaModel using those two simple models.
  3. I create a regular model with both simple models inside.

All are set to Fixed Position Size, with a position size small enough to avoid NSF (non-sufficient funds) positions.

Problem: I'd expect the results from #2 and #3 to be nearly identical, but they are not. The positions taken at the beginning are very similar, but not identical (one run holds one or more positions the other does not), and the results diverge from there, subtly but significantly.

It happened first with some real strategies that I wanted to combine (instead of using MetaModels), and then with some dead simple test strategies (built from blocks). Can anyone verify this problem? I need to know how to properly construct compound models that behave like MetaModels, because I'm finding it difficult to build, edit, and test MetaModels, and I'd also like to add some ideas that simply can't be done with the current MetaModels.

If necessary, I can post the code I used for testing.


Responses

Do you have the MetaModel rebalance frequency set to None?

Yes, my "Rebalance" frequency is set to None for all the MetaModels I've run.

OK I will need to spend some time investigating. For one thing, be sure you're allocating enough capital so that all trades can be taken.

Hmm, I feel silly now, because my simple test does NOT fail now; I get exactly the same answer when there's enough capital allocated. But the results were SO different when there were NSF positions that I "felt sure" there was a problem. The real problem was relying on gut feel instead of proper testing. Back to my backtesting, thanks Glitch.

Haha, no problem. As my internet name of some 20+ years indicates, I know glitches in software are possible, but I'm happy to hear that in this case it's explainable :)

I discovered the cause of my frustrations: reliance on boilerplate code without thinking through the consequences. I was running two simple models, so they were easy to combine into one combo model that I thought should behave the same as a MetaModel of the two. Both models, however, used this common construct in the Execute() method:

			if (!HasOpenPosition(bars, PositionType.Long))
			{
				// logic here to take a long position
			}
			else
			{
				// use the selling logic here
			}

Trouble is, there are now two of these code constructs (one for each component model), and our Execute function runs through bars symbol by symbol. So if Model 1 is long, it may have its position sold by either Model 1 (expected behavior) or by Model 2 (unexpected, and often premature).
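
To make the cross-talk concrete, here's a sketch of what the combined Execute() ends up looking like. This is illustrative only: Model1Entry, Model1Exit, Model2Entry, and Model2Exit are hypothetical placeholders for each component model's signal logic, and PlaceTrade(bars, TransactionType, OrderType) stands in for whatever entry/exit calls your models actually make.

			// Combined Execute(): both models' boilerplate pasted in back to back.
			// Model1Entry/Model1Exit/Model2Entry/Model2Exit are hypothetical placeholders.
			public override void Execute(BarHistory bars, int idx)
			{
				// Model 1's block
				if (!HasOpenPosition(bars, PositionType.Long))
				{
					if (Model1Entry(bars, idx))
						PlaceTrade(bars, TransactionType.Buy, OrderType.Market);
				}
				else
				{
					// BUG: also fires when the open position was taken by Model 2
					if (Model1Exit(bars, idx))
						PlaceTrade(bars, TransactionType.Sell, OrderType.Market);
				}

				// Model 2's block: same shape, same cross-talk in reverse
				if (!HasOpenPosition(bars, PositionType.Long))
				{
					if (Model2Entry(bars, idx))
						PlaceTrade(bars, TransactionType.Buy, OrderType.Market);
				}
				else
				{
					// BUG: also fires when the open position was taken by Model 1
					if (Model2Exit(bars, idx))
						PlaceTrade(bars, TransactionType.Sell, OrderType.Market);
				}
			}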

The solution is simple enough: set a transaction tag in the Buy transaction that differs between the models, then test for that tag instead of for any open long position:

			// if (!HasOpenPosition(bars, PositionType.Long))  <-- the old test
			Position foundPosition = FindOpenPosition(2);
			if (foundPosition == null) // no open position tagged for this model
			{
				// logic here to take a long position, setting Tag = 2 for this model
			}
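
For completeness, here's how a full per-model block might look with the tag in place. Again a sketch, not gospel: I'm assuming a PlaceTrade overload that accepts a position tag and signal name, and a ClosePosition call for the exit; check the API reference for the exact signatures. Model2Entry/Model2Exit are hypothetical placeholders as before.

			// Model 2's block, isolated by its own position tag (2).
			// ASSUMED overloads: PlaceTrade(..., positionTag, signalName) and
			// ClosePosition(position, orderType, price, signalName) -- verify in the docs.
			Position foundPosition = FindOpenPosition(2);
			if (foundPosition == null) // no open position tagged for this model
			{
				if (Model2Entry(bars, idx)) // hypothetical entry condition
					PlaceTrade(bars, TransactionType.Buy, OrderType.Market, 0, 2, "Model 2 buy");
			}
			else
			{
				if (Model2Exit(bars, idx)) // hypothetical exit condition
					ClosePosition(foundPosition, OrderType.Market, 0, "Model 2 sell");
			}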

Now my combined model is very close to the MetaModel. Still not exactly the same results, but very, very close, so I'm moving forward. Copy, paste, fail... does anybody know the Latin for that to put on my personal coat of arms? Google says "Exemplum, crustulum, deficient." About right for a Bitfool.
