Single Bar MAE/MFE and density distribution (pt1)

Two thoughts that have always stuck with me:

  1. If you long or short randomly, there is almost always some time x or price p at which exiting will give you a profit.
  2. Probabilities concerning how price can move in the future, without context (the past), are probably not useful, since TA relies on trading the future based on the past.

To me, these are both true and important to be aware of, but the first is at odds with the second in the sense that the raw probabilities have to be consolidated and melded with the model itself. The smaller your MFE/MAE ratio becomes, or the smaller the MFE becomes as a raw pip number, the more "wrong" you are and the more likely you are to be in the wrong "direction". Yet one can't deny that no matter how bearish a movement is, there is always some room to go long. Understanding and using this seems to be how a lot of scalpers cut their losses. To me (although I have no concrete proof, as I would need to know the complete strategy to know this), it seems like cutting losses is not about seeing a position go south and dropping it, but rather about finding a way, in a movement that is going down, to grab a few pips of profit from the slight up movement it provides. I see now how range bars can attempt to solve this issue of how price can move in either direction at any given point. However, a problem I've always had with any model or minimum-movement statistic is that in practical trading terms, it's very hard to put to use. Similar to transient zones from years ago, it's easy to get a very high probability with a small profit, but the small percentage of the time that it fails very easily leads to a margin call.

I wanted to explore a bit more about how price moves from point A to point B, and what price at any (random) point looks like.

Raw MFE: Using H1 bars with a 48-hour lookahead. The difference between the max price of the next 48 bars and the current bar's close creates the "up" category, and the current bar's close minus the min price of the next 48 bars creates the "down" category. In picture form:
mfe_ex
This helps answer the question: what's the minimum movement in either direction at any one point? Something to be aware of is that sometimes price can gap down and stay down for the next 48 bars – in other words, the max upward movement can still be negative.
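For reference, here is a minimal pandas sketch of that calculation, assuming an H1 frame with high/low/close columns (the column names, and using highs/lows rather than closes for the extremes, are my assumptions):

```python
import pandas as pd

def forward_moves(df: pd.DataFrame, lookahead: int = 48) -> pd.DataFrame:
    """Max move available in each direction over the next `lookahead` bars, per bar."""
    # rolling(...).max() at bar i covers bars i-47..i; shifting by -lookahead moves
    # that window to bars i+1..i+48, i.e. strictly the future of bar i.
    fut_max = df["high"].rolling(lookahead).max().shift(-lookahead)
    fut_min = df["low"].rolling(lookahead).min().shift(-lookahead)
    out = pd.DataFrame(index=df.index)
    out["up"] = fut_max - df["close"]     # negative if price gaps down and stays down
    out["down"] = df["close"] - fut_min   # negative if price gaps up and stays up
    return out

# usage: moves = forward_moves(h1); print(moves.quantile([0.02, 0.05, 0.25, 0.50]))
```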
48h maemfe.PNG
Some notes:
The 2% quantile is about 1/48. I kept 5% in even though 2/48 is about 4% and 3/48 is 6.25%. For what it's worth, the 4% quantile is about 4 pips and the 6% quantile is about 6 pips. This means that roughly 1 bar in every 2×24-hour cycle has just a 2 pip movement available in the non-favorable direction. Of course, bars aren't random. It's possible to have some consolidation and a short blip before a reversal, in which case you could have 3, 4, or more bars that only have that 2 pip "grace" area. So it's worth also taking a look at blocks of 48 hours together.
freq_extreme_bars
These are collections of 48 bars where each bar itself looks ahead 48 hours, so the maximum period of time covered is 96 bars (bar 1 looks 48 bars into the future, bar 2 does as well, and since they are next to each other, bar 2 only adds 1 extra bar of "new" information). So although a single bar will have only that 2 pip buffer with frequency 1/48, 58% of real-data 48-bar blocks contain a minimum buffer of more than 2 pips. That's why context is so important.
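A small follow-up sketch for the block view, building on forward_moves above. Treating the blocks as non-overlapping, and taking 2 pips as 0.0002 (a 4-decimal pair), are my assumptions about how the 58% figure comes about:

```python
import numpy as np
import pandas as pd

def block_buffer_fraction(moves: pd.DataFrame, block: int = 48, pips: float = 0.0002) -> float:
    """Fraction of consecutive `block`-bar groups whose worst bar still offers > `pips` either way."""
    buffer = moves[["up", "down"]].min(axis=1).dropna()   # per-bar minimum available move
    block_id = np.arange(len(buffer)) // block            # consecutive, non-overlapping blocks
    return float((buffer.groupby(block_id).min() > pips).mean())

# usage: block_buffer_fraction(forward_moves(h1))
```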


Bar exploration

Simple Ranges

Daily/hourly:
daily_percentile.PNG       ratio.PNG
50-150 pips covers about 80% of the distribution, with 50% of the range narrowing down to between 65-110 pips. Due to market volatility I prefer to use medians here, so about 50% of the time the range of the daily bar will be right around 85 pips. On another twist from a very old post, I'm not surprised at all that the range of H1 bars is nowhere close to the range of D1 bars / 24. Most of a day's range is made up of just a few bars, and using ratios and percentiles narrows that down to, for the most part, somewhere between 5 to 8 bars.
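A rough sketch of one way to reproduce those numbers, assuming d1 and h1 OHLC frames indexed by timestamp (the names and layout are placeholders):

```python
import pandas as pd

def daily_vs_hourly(d1: pd.DataFrame, h1: pd.DataFrame) -> pd.Series:
    """Daily range percentiles, plus each day's range divided by that day's median H1 range."""
    d1_range = d1["high"] - d1["low"]
    print(d1_range.quantile([0.10, 0.25, 0.50, 0.75, 0.90]))   # daily range percentiles
    h1_range = h1["high"] - h1["low"]
    day_hi = h1["high"].resample("1D").max()
    day_lo = h1["low"].resample("1D").min()
    h1_med = h1_range.resample("1D").median()
    return ((day_hi - day_lo) / h1_med).dropna()               # roughly "how many median H1 bars per day"

# usage: ratios = daily_vs_hourly(d1, h1); print(ratios.quantile([0.25, 0.50, 0.75]))
```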

A simple question like this led me to ask a lot of questions, some of which I've looked at before, some of which I was too lazy to compute (or didn't see the value in), and others that I couldn't find a good way to work on. I'm going for really long posts now, so let's see how many I can cover 😉

How many bars actually contribute to a day's range? How many bars are basically inside bars within the context of the day's range?

example:
bar_counting.PNG
The result is surprisingly bell-curved:

bars_contributing_to_d_range.PNG
number of bars dist.PNG
So if on average it takes 5-8 bars to cover the same range as the daily range, and it takes on average 9-12 bars to contribute to that same daily range, then what's the relationship between the bars and their contribution? What does that look like? What's the correct way (if there is one) to measure this? I thought of two ways to do it. The first would be to check each bar's additional contribution as a percentage of the day's total range. Using the example picture above, bar 1 would be maybe 20%, bars 2/3/4 would be fairly small, bar 5 would also be maybe 20%, and so on. The second would be to check each bar's contribution as a percentage of the day's range at the time the bar extends the range. The difference here is that bar 1 is always 100%, and bars 2/3/4, while still small, are much bigger in this version than they would be in the first. Since this is only really useful in a predictive sense, I went with the latter.
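Here's a rough sketch of the second method (each range-extending bar measured against the running range at the moment it extends it). The column names and the per-day grouping are assumptions; the first method would simply divide by the day's final range instead:

```python
import pandas as pd

def extension_percentages(day: pd.DataFrame) -> pd.Series:
    """Each range-extending bar's contribution as a % of the day's range *at that point*."""
    hi = day["high"].cummax()
    lo = day["low"].cummin()
    cur_range = hi - lo
    ext = cur_range - cur_range.shift(1).fillna(0.0)   # how much this bar grew the running range
    pct = ext / cur_range                              # bar 1 is always 1.0 (100%) by construction
    return pct[ext > 0]                                # keep only bars that actually extended the range

# usage: per_day = [extension_percentages(day) for _, day in h1.groupby(h1.index.date)]
```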

Here are some results covering the 6-12 bar range:

v2_counts
The top is the % increase in range, and the left is the count of bars that meet that criterion. The interpretation is that days containing 0 bars that increase the daily range by 0-5% occur 15% of the time. It's a bit abstract for sure. It's hard to take too much away from it, but one thing to note is the top right corner. Only 2% of the time does a day contain 0 bars that extend the range by 25% or more. In other words, roughly 98% of the time there is at least 1 bar that extends the current range by 25% or more.
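The table itself can be assembled from those per-day percentages by bucketing and counting. A sketch, with bucket edges of my own choosing:

```python
import pandas as pd

BUCKETS = [0, 0.05, 0.10, 0.15, 0.20, 0.25, 1.01]   # % increase bins (edges assumed)

def counts_per_bucket(pct: pd.Series) -> pd.Series:
    """Number of range-extending bars in each % bucket for one day."""
    return pd.cut(pct, bins=BUCKETS, right=False).value_counts().sort_index()

# usage, with per_day from the previous sketch:
# table = pd.DataFrame([counts_per_bucket(p) for p in per_day])   # rows = days, columns = buckets
# print((table[table.columns[-1]] == 0).mean())                   # share of days with zero 25%+ bars
```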
It turns out, perhaps not so unexpectedly, that this tends to occur very frequently in the first bar that breaks range:
Bar per extension

Some other numbers regarding this:
Probability that none of the first 3 expansion bars is greater than 25%: 10.5%
Probability that one of the bars following is 25%+, given that none of the first 3 expansion bars is greater than 25%: 86%
It’s kind of interesting and provides some good spots for single bar explosions, but very difficult to spot these visually without a program constantly updating numbers.

Overall, more questions than answers but I did add more knowledge to my base of what bars look like in the context of a day.

 

 

Base wave findings in fractal model v2 [FMv2]

The theme of this post is confirming hunches. Most of this isn’t really new or groundbreaking, but rather providing statistical information on things that already make sense.

Finished v2 of the fractal model, which I think will eventually end up at a v3 or v4, but I haven't quite figured out how I want to approach the other iterations. There isn't too much of a difference between v1 and v2, but it cleans up what I call "false extensions".

Here, the normal swing progression is SFFTE; however, the trend leg isn't what I consider a true trend leg because it doesn't break the lowest low before it. The model removes the 2 swings before it, re-classifies the leg as a Flat, and changes the leg after it to a Trend.
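To make the rule concrete, here is roughly the idea in sketch form. This is only an illustration with a simplified leg structure (the real model tracks more than a label and two prices), and it only handles the downward case from the example above:

```python
from dataclasses import dataclass

@dataclass
class Leg:
    label: str   # 'S' swing, 'F' flat, 'T' trend, 'E' expansion
    high: float
    low: float

def clean_false_extensions(legs: list[Leg]) -> list[Leg]:
    """Demote a down 'T' leg that never breaks the lowest low before it (illustration only)."""
    out = list(legs)
    i = 2
    while i < len(out):
        leg = out[i]
        if leg.label == "T" and leg.low > min(l.low for l in out[:i]):
            del out[i - 2:i]            # remove the 2 swings before the false extension
            i -= 2
            out[i].label = "F"          # re-classify the leg as a Flat
            if i + 1 < len(out):
                out[i + 1].label = "T"  # the leg after it becomes the Trend
        i += 1
    return out
```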
The baseline stats are relatively unchanged (v2 on the bottom):
h1_median_moves_v2
median_moves_v2

Notice that the Trend and Expansion legs have significantly higher means than medians compared to their Swing and Flat counterparts, due to expected trend anomalies. Swings and Flats are confined to the 100% range space that defines them, while Trend and Expansion legs are not.

Swing Leg Findings:

When it comes to swing legs, the question I'm trying to answer is: "where is the safest place to put on a trade in preparation for a continuation of the trend?"
swing_retrace_ex

These are all the same plot, just enhanced. There's a small gap area shown in the triangle that suggests that when the previous wave is small, the retracement tends to be quite large in proportion, or at least there's only a small probability that it lands in the "normal" 30-60% range. Smaller waves naturally have a harder time successfully printing a retracement while keeping the range small.
Past previous swing lengths of around 60 pips, the distribution looks a bit more random and even.

This next picture is the swing retracement compared to the swing leg itself. The correlation is more obvious, which is great, but there isn't much predictive use for this, if any. Nevertheless, for completeness, I think it's nice to have.

swing x swing length

Trend Leg Findings:

Full View on top, the condensed version below.

This kind of shows the limitations on how big the ratios can be. Legs that are smaller (less than 150 pips or so) have the potential to be followed by legs up to 5x larger, although even in the complete data set so far it's not very common. On the other hand, very large legs (200+ pips) have a "cap" of less than 3x.

The large bulk of swing ratios being under 1 gives a slightly different way to track the bounds of where a leg can end. Noting from above that the median/average T leg extension is around 1.5/2.6 respectively, I initially thought I could just subtract 1 and be left with a ratio of 0.5-1.6, but that's not quite correct.
tail_end_t3_vs_t1
zoomed_t3_vs_t1

 

Overall I'm still kind of mixed on whether this is any "better". There are simply tradeoffs. The net T->T direction is ~57% in this model compared to 63% previously. Although it's pretty close to 60% either way, if we're going to be technical about it, the sample size is statistically significant and the trend "conversion" rate of the previous model is better than the new model's (not great). However, the new model is much cleaner and allows me to be more confident that we've established leg 'x' ending/starting much, much earlier, which is a huge win.

Wave patterns from T leg to next T leg:

net_direction

H4:

h4_stats

H1 and H4 Fixed patterns when first leg starts as a T Leg:

H1_fixed_patterns
h4_fixed_patterns

Next steps:

Looking at trend waves in 2 different ways is new to me, and it got me thinking a lot about what "normal" waves look like.

waves.PNG

It’s hard to say whether or not one trend wave is more “correct” than another or if some should be excluded. Some of them certainly look more textbook than others. Since normal sounds so normal, it’s probably a rabbit hole of information. Till next time.

 

Fractal model base findings

Finished up my new model and got the chance to take a look at some of the baseline stats that come with it. The model uses the base fractal indicator, where every fractal is either a higher high, higher low, lower low, or lower high, and then I connect the HHs to LLs with some filtering/condensing to create waves. I picked it because it's a pretty simple one, and I like that it contains micro points for me to use should I find a need for them.
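For reference, a minimal sketch of the classification step, assuming the standard n-bar fractal definition (a bar whose high is the highest of its 2n+1-bar neighborhood is a fractal high, and likewise for lows). The window size, column names, and the default label for the first point of each kind are placeholders:

```python
import pandas as pd

def fractal_points(df: pd.DataFrame, n: int = 2) -> pd.DataFrame:
    """Label each fractal as HH/LH (highs) or LL/HL (lows) relative to the previous one of its kind."""
    win = 2 * n + 1
    is_high = df["high"] == df["high"].rolling(win, center=True).max()
    is_low = df["low"] == df["low"].rolling(win, center=True).min()

    rows, last = [], {"high": None, "low": None}
    for ts in df.index:
        if bool(is_high.loc[ts]):
            price, prev = df.at[ts, "high"], last["high"]
            rows.append((ts, "HH" if prev is None or price > prev else "LH", price))
            last["high"] = price
        if bool(is_low.loc[ts]):
            price, prev = df.at[ts, "low"], last["low"]
            rows.append((ts, "LL" if prev is None or price < prev else "HL", price))
            last["low"] = price
    return pd.DataFrame(rows, columns=["time", "label", "price"])
```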

HH_LL_model

Compared to the previous model, this model's trend->swing ratio is a bit lower.

Below are the baseline stats, 3-wave pattern probabilities, and 5-wave pattern probabilities:

fractal_model_baseline
3_wave_pattern
5_wave_pattern

I think these numbers need more trading context than they show on their own, but it's interesting to see how they compare to the previous ones.

This next one is retracement over time, with trend and expansion waves being greater than 1 since their leg is larger than the previous one. It looks like trend/expansion waves are pretty similar, with no big distinction between them, while flats are a bit shorter than swings.

retracement by swing
On a zoomed-in scale though, it's not super clear where any particular levels are denser than others.
micro_flat_swing

All of this is kind of expected – there's no obvious edge here, but trying to find some way to take advantage of the natural tendencies of flat->trend legs or expand->swing legs seems like a good way to go. I have some things that I've mentioned in previous posts, which I haven't really done much with in the past, that I'm working on now and that will hopefully be more interesting, but it will take a while to correctly map and dissect. It's a topic worth its own post when I get there 🙂

Some MFE/MAE findings

Mostly writing this one just to have a record of what I worked on, since I generally don't keep all the plots I make when doing analysis.

One of the things I've really been working on and thinking about is giving as much of my analysis as possible an extra dimension – that is, going from a simple frequency distribution to a scatter plot.

I created a simple strategy of sorts (more of an extrapolation of a model with some filtered parameters) and in general just wanted to learn more about it. Putting the strategy into a trading simulator, pulling out all the data I want to examine, and plotting it is a good way to do this.
First:

winslosses
This is a simple win/loss plot in green/red, with risk/reward ratio on the y axis and hours until trade completion on the x axis. You can see that by this point there has already been some filtering of the system (no RR greater than 10, no trade duration greater than 30). The points outside those bounds can either be removed completely or analyzed separately.
Second:

mfe1
The next iteration takes the data and looks at MAE/MFE, with winners in green and losers in red. More filtering and improvements can be made here: almost no winners have an MAE lower than -.01 (100 pips of drawdown) and no losers have an MFE greater than +.005 (50 pips "left on the table"). Below is a blown-up image of the negative region:
mfe2
Thus it's an easy adjustment to say that if a trade reaches -50 pips of drawdown, it's probably better to just cut it as a loser and save the additional risk.
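A bare-bones matplotlib sketch of that kind of plot and the -50 pip adjustment, assuming a trades frame with mae, mfe, and win columns (the column names and the 0.0050 figure for 50 pips are my assumptions):

```python
import matplotlib.pyplot as plt
import pandas as pd

def mae_mfe_scatter(trades: pd.DataFrame, path: str = "mae_mfe.png") -> None:
    """Winners in green, losers in red; MAE on the x axis, MFE on the y axis."""
    fig, ax = plt.subplots()
    for won, color in [(True, "green"), (False, "red")]:
        sub = trades[trades["win"] == won]
        ax.scatter(sub["mae"], sub["mfe"], c=color, s=10, alpha=0.6)
    ax.axvline(-0.0050, linestyle="--", color="gray")   # the "-50 pips, cut it" line
    ax.set_xlabel("MAE (price units)")
    ax.set_ylabel("MFE (price units)")
    fig.savefig(path, dpi=150)
    plt.close(fig)

# the same rule applied in a re-run of the simulation, roughly:
# trades["adjusted_win"] = trades["win"] & (trades["mae"] > -0.0050)
```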

Lastly:
mfe3
Trimming off the +50 and -50 regions gives a better look at what the rest of the trades look like. Then, if possible, take a look at the new scatter plot, make observations, and go through the process again: making trade adjustments, looking through individual trades for patterns, or moving on to the next project.

First Look at MTF analysis on transient waves

Model: Fractal n=24, Transient waves

Here's my first look at MTF. I took the model I had for the 1-hour, simply multiplied it by 4, and applied it to get the equivalent 4-hour chart. I was curious about the ideas of traditional wave theory: that smaller waves have 5, or 7, or some number of legs that equal 1 leg of the higher frame. I don't know much traditional wave theory, and I don't expect my model to be that perfect instantly, but I was just curious to explore: what kinds of 1-hour legs are contained in 4-hour legs?

MTF1
MTFWave
A simple "T" is pictured on the right – a single trend pattern (3 legs). A 1, in this case, means that the leg completed in 1 move – the 1-hour leg and the 4-hour leg are identical. A single leg, a simple ABC wave, a 2x ABC, and a 3x ABC leg make up an overwhelming 80% of the wave formations on the 4-hour frame.

This is somewhat interesting because, while present, it shows that TEE is actually a very rare way to complete a leg.
MTFex

It means that in the scenario above, as price is rising on the 4th leg, the soft resistance is a marker for how price can react. One way to look at it is to say a short at this level is potentially a good opportunity – at this point, it is not a "1" move. It is either a T move, and the move is over, or it is something more (perhaps TST or TSTST). It is fairly close to 50-50 here, so the risk:reward here is crazy (now, what is the probability that a move makes it this far and continues down? That's another test). However, if price were to break into the "unlikely" area, we are basically in reversal mode. It would be a good chance to load longs by the blue box, because breaking the lows there would be very unlikely.

It's not perfect, because sometimes a full TST wave will have already been created on the 1h frame before the 4h frame is "confirmed". It's easier to trade when the 1h and 4h frames are confirmed in the same leg, and even then it's not always a sure thing. The need to occasionally "backfill" full waves is this model's biggest downside.

completed:
-Learning the Oanda API to create an automated trading bot. Learning MQL seems a bit much for a programming novice like me, so if I can do it in python, why not try?
-Learning how to draw lines with the matplotlib library to better create PNGs with wave lines drawn in (a quick sketch of what I mean is below, after the to-do list)
-Additional work on the main framework to make testing new ideas/models easier/faster
-Trying MultiTimeFrame analysis
-Automate data collection into an SQL database (woo!)
-Begin test/live deploy of first automated trading bot

to do:
-Work on new model
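The matplotlib line-drawing item in the completed list refers to something like this: a bare-bones sketch that draws candles as plain high-low and open-close vertical lines, overlays a wave polyline, and saves the result as a PNG. The data layout and the swing-point format are assumed:

```python
import matplotlib.pyplot as plt
import pandas as pd

def save_wave_chart(df: pd.DataFrame, wave_points: list[tuple[int, float]], path: str = "wave.png") -> None:
    """df: OHLC frame; wave_points: (bar_index, price) swing points to connect with a line."""
    fig, ax = plt.subplots(figsize=(12, 5))
    x = range(len(df))
    # minimalist candles: thin high-low wick plus a thicker open-close body
    ax.vlines(x, df["low"], df["high"], colors="black", linewidth=0.5)
    ax.vlines(x, df["open"], df["close"], colors="black", linewidth=2.5)
    # wave line connecting the swing points
    wx, wy = zip(*wave_points)
    ax.plot(wx, wy, color="blue", linewidth=1.5, marker="o")
    fig.savefig(path, dpi=150)
    plt.close(fig)
```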

It’s Already July

Wow! A small self-blog and an update. I had to scrap my initial plan for the year because, after some research, I realized I didn't quite have the tools (yet) to play with it. Maybe in another couple of years 😉

It's been a slow transition, but I've finally made it to the point where my self-taught skills in python are about on par with my excel skills. For quick things I still default to excel, but programming is so powerful. Not picking it up sooner should absolutely rank #1 on the list of things I regret about my trading career. Rel suggested it to me one of the first times we spoke, and I didn't follow up on it. I was too eager to get as much done ASAP and couldn't be bothered with the steep learning curve of doing things the "right" way. I did learn a lot about excel, but most of those skills are useless now. It's a shame really, but I did eventually learn. Programming, if it's not obvious by now, allows iteration SO much faster. It allows a model to be ported to a higher time frame, or tweaked and run again almost instantly. It lets you run "what-if" scenarios and trade simulations much more easily, and it helps you visualize so much better as well. I can create a way to model waves, iterate through data, create OHLC charts, and save them as PNGs to view individually and think of new ideas. Again, wow. I'm still learning a lot, and things that are simple to explain take me days/weeks to complete, but I'm getting there and it's worth it. I'm much busier these days, so I don't have nearly the number of hours to spend on trading-related things as I used to, but the dream will never die. Back to work.

Things I’m working on:
-Learning the Oanda API to create an automated trading bot. Learning MQL seems a bit much for a programming novice like me, so if I can do it in python, why not try?
-Learning how to draw lines with the matplotlib library to better create PNGs with wave lines drawn in
-Additional work on the main framework to make testing new ideas/models easier/faster
-Trying MultiTimeFrame analysis