Haven’t had too much time to get work done; my computer had a boot failure and needed a complete factory restore. In the midst of buying a car, finding time to get as much work done as I would like is proving difficult. Here are some things I’m kicking around atm.
First, how I got to these ideas. I’ve been reading through “The Ultimate Truth” on FF, and the gist is really just trying to eke out a small edge and combining it with a bunch of other edges present in the same data sequence. It’s basically stretching out the idea of linear prediction (if there are 4 reds in a row, what’s the chance of a green on the 5th?) to include different types of count (if there has been a red every other bar for 4 bars in a row, what’s the chance of a green on the next?), and combining all of these might show an edge. I like the idea, but I’m not sold on its value in its basic state. Bar color can only show so much, and remembering the words of Rel, a correct model needs to show price as accurately as possible. Because of this, and on the recommendation of Pelt, I think the next logical direction is to find some way to incorporate what I’ve done with my waves already.
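To make the idea concrete, here’s a minimal sketch of two of those “counts” on a bar-color sequence: a straight run, and the every-other-bar alternation. Everything here (the helper names, the toy color string) is my own illustration, not anything from the thread.

```python
# Toy pattern checks on a sequence of bar colors ('R'/'G').
# Both helpers and the sample data are invented for illustration.

def run_of(colors, n):
    """True if the last n bars are all the same color."""
    return len(colors) >= n and len(set(colors[-n:])) == 1

def alternating(colors, n):
    """True if the last n bars strictly alternate color."""
    tail = colors[-n:]
    return len(tail) == n and all(a != b for a, b in zip(tail, tail[1:]))

bars = list("GRGRGRRRR")
print(run_of(bars, 4))       # → True  (four reds in a row)
print(alternating(bars, 4))  # → False
```

Combining edges would then just mean tallying the next bar’s color conditional on each of these triggers firing, over as much history as possible.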
So here’s actually what’s up. Predictability works best when the data size is greatest, because that’s where an edge shines brightest. If a coin has 2 sides, you’ll know rather quickly whether it favors one side over the other, and it will be quite hard to dispute. Rolling a 6- or 20-sided die will take many more rolls to discover whether a particular side is favored. Therefore, rather than working with my 8 wave types, what I would really like to do is find some way to morph this into 4, or even better, 2 wave types. Wave types A-B-C-D account for about 90% of days, which I think is quite good. If I could find a way to better characterize days that are NOT of that type and not trade them, or even trade and lose until it is confirmed that a non-A/B/C/D day is occurring, I think I would be fine. The issue with downsizing and crunching the data is, of course, that it removes data that may or may not be important, while keeping it in presents a sort of skew since those days are so rare. What I am considering is a slightly different way to create those waves, and possibly a filter, like some sort of time requirement, that would get rid of the E/F/G/H waves and turn them into some extended C or D wave.
Another request I got, this time about simple rejection bars.
Here’s the scenario: On 1 hour data, if price touches a big number but doesn’t close past it, what are the odds someone could squeeze some pips out of it by trading away from the zone?
More specifically, if price makes a bullish move and the high is above a big number (using EU, 1.3500) but the close is under this price (say 1.3490), is it wise to short? What’s the color of the next bar?
Snippet of filtered relevant “trades”:
O/H/L/C on the left, with the color of the bar next to it (so this test is looking at the color of the next row). The trade decision is in the short/buy column, and each of these is split into the next two columns to show the result of executing a trade after the trigger is created.
Doesn’t appear to be an edge, which is what I expected. Nice statistic to know, perhaps useful in some other form.
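For reference, a minimal version of that rejection filter could look like the following. The OHLC rows are invented toy data; 1.3500 stands in for the big number, and “color of the next bar” is just close vs. open.

```python
# Rejection-bar filter sketch: the high pokes above a round number but the
# close stays under it; record the color of the following bar.
# All OHLC rows below are fabricated illustration data.

def rejection_shorts(bars, level):
    """Return the color of the bar following each rejection of `level`."""
    results = []
    for prev, nxt in zip(bars, bars[1:]):
        o, h, l, c = prev
        if h > level and c < level:            # wick above, close below
            no, nh, nl, nc = nxt
            results.append("green" if nc > no else "red")
    return results

bars = [
    (1.3480, 1.3510, 1.3470, 1.3490),  # rejection: high 1.3510, close 1.3490
    (1.3490, 1.3495, 1.3450, 1.3460),  # next bar closes red -> the short "wins"
    (1.3460, 1.3470, 1.3440, 1.3465),
]
print(rejection_shorts(bars, 1.3500))  # → ['red']
```

Counting greens vs. reds in that result list over real history is the whole test.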
Someone wanted to take an idea I did quite some time ago, about candle prediction, further. Rather than using 1hr data, he wanted to use daily data. The question is simple: is there an edge in trading off of consecutive bars? In other words, if there have been 7 bars of the same color in a row, what’s the chance that the next bar will be of the opposite color? Is it significantly higher than 50%? I thought not, but I ran the Excel formulas anyway, and here are the results:
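The same tally can be sketched outside of Excel: for each run length n, how often does the run end on the very next bar? This is my own sketch on a made-up color sequence, not the spreadsheet itself.

```python
# For each run length n, estimate P(next bar is the opposite color).
# The color string is toy data for illustration only.
from collections import defaultdict

def flip_rate_by_run(colors):
    counts = defaultdict(lambda: [0, 0])   # run length -> [runs seen, flips]
    run = 1
    for prev, cur in zip(colors, colors[1:]):
        counts[run][0] += 1
        if cur != prev:
            counts[run][1] += 1
            run = 1
        else:
            run += 1
    return {n: flips / seen for n, (seen, flips) in sorted(counts.items())}

rates = flip_rate_by_run(list("GGGRRGGGGR"))
print(rates)
```

If consecutive bars carried no information, every run length should hover near 50% once the sample is big enough.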
Who’s the really bright one who can figure out what this one is without anything on it??
First up is a very simple statistic, with very simple implications. On the left side is the time, and the first column is the average pip range (max − min) of each hour. I added in the standard deviation for kicks, since it’s interesting to see what kind of variety is in the set. As we’d expect, the hours generally known to have less volatility also have a lower standard deviation. The conclusion is simple: things start heating up 2 hours before London open and cool down right before London close. Comforting.
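The table boils down to a group-by on the hour of day. A minimal sketch, with the (hour, high, low) rows entirely made up:

```python
# Average and standard deviation of each hour's (high - low) range, in pips.
# The (hour, high, low) rows are fabricated sample data.
from collections import defaultdict
from statistics import mean, pstdev

def hourly_range_stats(rows):
    """rows: (hour, high, low) tuples -> {hour: (avg_pips, std_pips)}."""
    by_hour = defaultdict(list)
    for hour, high, low in rows:
        by_hour[hour].append((high - low) * 10_000)   # EURUSD pip size
    return {h: (mean(v), pstdev(v)) for h, v in sorted(by_hour.items())}

rows = [
    (6, 1.3510, 1.3490), (6, 1.3520, 1.3480),   # quiet pre-London hours
    (7, 1.3550, 1.3490), (7, 1.3560, 1.3480),   # range picking up
]
stats = hourly_range_stats(rows)
print(stats[7])  # larger average range as London approaches
```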
I don’t think I’m working within the right frame, so this will probably be something I come back to at a later time. I’ll probably end up redoing the whole statistic with some better filters. I’m starting to learn how important it is to start with the right ideas and take small steps. It really seems like there are two parts to the research: having the right ideas, and starting with the right info. In this case I think I have the right idea, but not the right info. I’ve “skipped” a couple of steps that are really important.
I looked at the data one more time before getting rid of it. In this rendition, I deleted all the first elements of the waves. A “B” wave is made of two waves: the initial wave, and the “real” breaker wave. I deleted all the “initial waves”. Something that pops up is the bell curve of data that appears in the terminating wave. Since the initial wave is gone, the terminating wave is the last wave as it appears in the set, which I’ve boxed for the B/C/D/E waves. The clustering of the earlier waves suggests something like the following:
The last part of the wave takes the most time, generally around 6 to 8 hours to complete. If the current wave is not the final wave, it will complete faster. Generally. This can be a dangerous point to trade off of without more statistics, because it is entirely possible, and perhaps just as likely, that the waves are the same length (in time). For example, a “C” wave that takes 2 hours for the initial wave and then 6 hours each for the second and final waves. This element of “playing against the clock” has some pros and cons, the biggest point (not really a pro or a con) being that more digging needs to be done on another time frame. The pro is that, combined with knowing when time has run out for the day (i.e. times when price is highly unlikely to make new highs), it may be possible to filter out which waves are still possible. Is this something Rel looked into? Likely. Did he find anything in it? We’ll see.
Edit: This statistic needs to be redone. *construction in progress*
There are a couple of ways to organize the data visually, and there are a lot of small calculations that could change the results you observe. The idea behind this first one is fairly basic. I want to see how long it takes each wave to be created.
At the very top, the data lined horizontally simply shows time elapsed in hours. Each wave is measured relative to where its previous wave ended. Interestingly, it seems that A waves (those that move only in one direction) tend to hit their maxes/mins very late into the day, generally taking at least 12 hours to complete. And since these only have 1 starting point (the original), that means if price has moved in only 1 direction since the open, it would be a risky short, even as volatility dries up and it is expected that winners will take profit. As another pointer for reading the data: for the B wave (which has 2 extremes), the time it takes to complete the first extreme is under “First Wave”, and the time it takes to complete the second extreme is under “Second Wave”.
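The measurement itself is just successive differences of the hour each extreme was hit. A tiny sketch, using the hypothetical “C” wave from earlier (extremes at invented timestamps of 2, 8 and 14 hours into the day):

```python
# Each leg's duration, measured relative to where the previous leg ended.
# Timestamps are hours since the daily open; the example values are invented.

def wave_durations(extreme_hours):
    """Hours each leg took, from the previous leg's end (day open = hour 0)."""
    prev = 0
    durations = []
    for t in extreme_hours:
        durations.append(t - prev)
        prev = t
    return durations

print(wave_durations([2, 8, 14]))  # → [2, 6, 6]
```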
It’s a bit hard to say, but it looks like there is some support for the idea that each portion of the wave takes longer to complete than the one before it, rather than the same amount of time. This then suggests that if extremes are made very quickly, and very early in the day, there is a higher probability that the extreme is not the last. However, it will take much more studying to turn that theory into a statistical edge.
There are a few other observations to make, such as the fact that in any wave other than the “A” wave, the final wave ALWAYS takes at minimum 3 hours to complete. The first wave is ALWAYS completed in under 10 hours for any wave bigger than a “C” wave. Hm…
Are certain wave patterns more likely to appear before or after other waves?
Starting with the original data:
I checked the data for the wave that appeared before and after each individual wave type. So I looked for the A wave, and checked which wave came before that A and what wave came after the A. Rinse and repeat for all 8 waves.
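The rinse-and-repeat part is just two tallies per wave type. A sketch of the bookkeeping, with a made-up sequence of day types standing in for the real data:

```python
# Count which wave type precedes and follows each wave in the day sequence.
# The sample sequence of day types is invented for illustration.
from collections import Counter, defaultdict

def neighbor_counts(sequence):
    """Return ({wave: Counter of predecessors}, {wave: Counter of successors})."""
    before = defaultdict(Counter)
    after = defaultdict(Counter)
    for i, wave in enumerate(sequence):
        if i > 0:
            before[wave][sequence[i - 1]] += 1
        if i < len(sequence) - 1:
            after[wave][sequence[i + 1]] += 1
    return before, after

days = list("ABACABDA")
before, after = neighbor_counts(days)
print(after["A"])   # which waves tend to follow an A day
```

Comparing each Counter against the base rate of that wave type is what would reveal (or rule out) an ordering edge.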