Monthly Archives: November 2014

In search of Edge #2

Some day, every great trader reaches the point where the forums just don’t provide much new or interesting information (ahem). Today, I am not at that point. =p

The “Tsunami Method” thread popped up in my feed (I really do love that aspect of ForexFactory) so I took a browse through it. Its premise isn’t particularly new or profound, but if taken to be true it does spawn some new ideas. The gist of the thread is: take a lot of small positions, and hold on to the winners for as long as you can. Aim to have a few winners that are SO large you can double or triple your account in one trade. Obvious pros, with the cons being that:

1. Must be comfortable taking A LOT of losses
2. Must be comfortable holding on to positions for long periods of time with varying floating P&L
3. Arguably requires a strategy that has a very tight SL

I like it because it’s a lot “safer” than martingale in that you don’t have a ton of negative floating P&L. You’re still taking a bunch of losing trades, though. I’ve always found these types of strategies more “fun” for whatever reason. Your entry doesn’t have to be that accurate, but it does need to be clean, and I’d argue that to employ this correctly you DO need a tight SL: somewhere around 15 pips max, preferably 10 or 5 (like the OP) if you can manage it. The reasoning is this: if you have a 30 pip SL, then even if you catch a 600 pip move you’re only making a (600/30) 20x return. Sounds great, but to counteract how many times you’re likely to lose, you should be risking no more than 0.5% (roughly). The 20x return becomes a 10% account gain, and you’ve (likely) waited 2 to 3 months for it. Realistically, if you’re adding on a bunch of positions, you’ll get maybe 20 or 30% over the course of holding multiple winners. Better, but not HG status. With a small SL, you give yourself more room for error and require fewer pips to make more gain. Smaller sniping strategies have a lot more strength in their ability to compound quickly, so for a longer time frame strategy to work, the gains really need to be big and worth it.
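To make the arithmetic above concrete, here’s a tiny sketch of the same calculation (hypothetical helper name; the numbers are just the post’s own example: R-multiple times risk-per-trade gives the account gain):

```python
def trade_gain_pct(move_pips: float, sl_pips: float, risk_pct: float) -> float:
    """Account gain (%) from one winner: R-multiple (move / SL) times % risked."""
    r_multiple = move_pips / sl_pips
    return r_multiple * risk_pct

# 600-pip move, 30-pip SL, risking 0.5% per trade:
print(trade_gain_pct(600, 30, 0.5))  # 10.0 (% gain)

# Same move with a tighter 10-pip SL:
print(trade_gain_pct(600, 10, 0.5))  # 30.0 (% gain)
```

Same move, same risk; shrinking the SL from 30 to 10 pips triples the gain, which is the whole argument for the tight stop.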

Hence, to trade this ‘style’ correctly, we require a balance of:
1. Small SL (entering extremes)
2. Accurate entry (semi accurate)
3. Ability to hold a trade for a lot of pips (accurately mark and trade HTF swings)

I mean, it kinda looks like we just want the whole farm here, but realistically only a few parts are needed. The accuracy only needs to be decent, and the SL is somewhat fixed because we need it to make good R:R, so the ability to actually know when a big move will occur is really the biggest part. The good thing is that we don’t necessarily need to know which direction it will occur in to trade it. What does this mean? Magnitude.

Again, it has taken some random thread on the internet for me to understand and conceptualize a Signal Bender theory in a way that makes sense to me. First the Similarity thread, now the Tsunami thread. However, unlike the Similarity thread, the Tsunami thread provides no clues on how to find the answer, only how to manage it.


Trend is your friend?

Is this scientific proof to follow the trend and its strength? (pt 2)

Continuation complete

I’m cautious about making more additions to this. I think at this point I want to be very critical and precise about improving the core components. I’m kind of viewing this as “gen 1” of one part of a system of x number of components, if we’re speaking about the SB metabrain style of trading.

The picture is pretty self-explanatory, given one is aware of what h values are. What’s interesting is the difference in strength between the low and high extreme h values.
For context:


Currently (after nearly 18 months LOL):

The journey continues.

MTF aggregating attempt #2

Initial batch done!




I think correct aggregation will be hard or impossible to do without actually using real price points, but I will see what I can come up with. At the moment, it seems that the initial edge I found a few months ago will be universal for all time frames and h values.


I might end up ditching lower time frames due to not having enough data, but I might keep the 15m to see if it’s close to the HTF findings regardless.

Single frame tests (recurrency project)


Things get easier the second time around 🙂

Days’ worth of work condensed into minutes. Time allowing, I’ll have the rest of it done before the weekend; I’ve coded it to do the simple analysis for me, so I’m interested to see what it turns up. So far it looks OK; neither bad nor good.



Worked on some additional parts. For the most part, very consistent. I think this is a good thing. If the edge I had originally applies to all time frames, it may give some “legitimacy” to the theory being a sound one.




MTF Min h

Boy, I thought it would take much longer. I suppose it would have, but I called it quits after 4 trials. They’re more or less the same (except for the 15m) when the filter is applied.


Looks like I’ll be using 15 as my baseline regardless of TF, which is kind of nice. I’m sure now, however, that completing the multiple wave work will take much, much more time to get right, if such a thing exists.

Draft #1

Got enough time to throw this together. I don’t know how it’ll turn out, but I think this is the best I can do for now. There’s the semi-martingale strategy that could be added to this, which will require a different dimension (also huge potential), but I want to look at the base first to see what the numbers look like.

I’m not very good at coming up with creative names, so I’ll just be calling this MTF recurrence.

1. Find every transient price for h>1 [then max(left h, right h)]
2. For all such prices, find min h value to meet “Fx-Jay” recurrency.
3. Freq Dist. all these values to find min h value to achieve min h value.
4. Complete the above 3 steps for TFs 15m, 30m, 1hr, 2hr, 3hr, 4hr, 6hr, 8hr, 12hr, D1, W1.
5. Create wave sequences for all frames. Leave out the issue of double tops/bottoms for now.
6. Create statistics for 3 wave and 5 wave direction probabilities.
7. Create dashboard to monitor direction bias across time frames.
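Steps 1 and 3 above can be sketched roughly as follows. This is only my reading of an “h-transient” (a price with at least h lower bars on each side, keeping max(left h, right h)); the “Fx-Jay” recurrency check in step 2 isn’t defined in the post, so it’s left out, and all function names here are hypothetical:

```python
from collections import Counter

def side_h(prices, i, direction):
    """Count consecutive bars strictly below prices[i] on one side.
    direction=-1 walks left, direction=+1 walks right."""
    h, j = 0, i + direction
    while 0 <= j < len(prices) and prices[j] < prices[i]:
        h += 1
        j += direction
    return h

def transient_highs(prices, min_h=2):
    """Step 1 (highs only): transient prices with h > 1,
    recorded as (index, max(left h, right h))."""
    out = []
    for i in range(1, len(prices) - 1):
        lh, rh = side_h(prices, i, -1), side_h(prices, i, +1)
        if min(lh, rh) >= 1 and max(lh, rh) >= min_h:
            out.append((i, max(lh, rh)))
    return out

def h_freq_dist(transients):
    """Step 3: frequency distribution of the recorded h values."""
    return Counter(h for _, h in transients)

# Toy series: the bar at index 3 (price 4) has 3 lower bars on each side.
print(transient_highs([1, 3, 2, 4, 1, 2, 1]))  # [(3, 3)]
```

A mirror-image `transient_lows` would handle swing lows, and step 4 is just running this per time frame; the wave-sequence and probability steps (5–6) build on top of these outputs.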

I’ve completed all these steps before over the past few months, but only for one time frame. My biggest fear atm is that the results won’t give a nicely skewed distribution of wave probabilities. The ones I came up with for H1 (shown in earlier posts) were quite good. I ran a quick test for a lower time frame and they converged a bit more towards 50%. If I’m lucky, that will be because the h value used was incorrect. In other words, there is a very realistic possibility that the correct h value to achieve 97% recurrency and the correct h value to create maximum skew in wave probabilities are two different values. Luckily for me, I already have one baseline set of wave probabilities, so if its twin is way off, I will know I’m wrong. The problem then will be how to go about fixing it. (note: big problem)