I don’t think I’m working within the right frame, so this will probably be something I come back to at a later time. I’ll probably end up redoing the whole statistic with some better filters. I’m starting to learn how important it is to start with the right ideas and take small steps. There really seem to be two parts to the research: having the right ideas, and starting with the right info. In this case I think I have the right idea, but not the right info. I’ve “skipped” a couple of steps that are really important.
I looked at the data one more time before getting rid of it. In this rendition, I deleted all the first elements of the waves. A “B” wave is made of two waves: the initial wave, and the “real” breaker wave. I deleted all the “initial waves.” Something that pops out is the bell curve of data that appears in the terminating wave. Since the initial wave is gone, the terminating wave is the last wave as it appears in the set, and it’s the one I’ve boxed for the BCDE waves. The clustering of the earlier waves suggests something like the following:
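The trimming step above can be sketched in a few lines. This is only an illustration of the operation, not my actual dataset: the wave labels and sub-wave durations below are made up, and the record layout (label plus a list of durations in hours, initial sub-wave first) is an assumption.

```python
from statistics import mean

# Hypothetical records: (label, sub-wave durations in hours, initial first).
waves = [
    ("B", [2.0, 7.5]),
    ("C", [1.5, 3.0, 6.5]),
    ("D", [2.5, 4.0, 8.0]),
]

# Delete every "initial wave" (the first element of each record).
trimmed = [(label, durs[1:]) for label, durs in waves]

# The terminating wave is now simply the last element of each record.
terminating = [durs[-1] for _, durs in trimmed]
print(terminating)          # → [7.5, 6.5, 8.0]
print(mean(terminating))    # average terminating-wave duration, in hours
```

With real data, it’s this `terminating` list whose histogram shows the bell curve.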
The last part of the wave takes the most time, generally around 6 to 8 hours to complete. If the current wave is not the final wave, it will generally complete faster. This can be a dangerous point to trade off of without more statistics, because it is entirely possible, and perhaps just as likely, that the waves are the same length in time. For example, a “C” wave that takes 2 hours for the initial wave and then 6 hours for both the second and final waves. This element of “playing against the clock” has some pros and cons, the biggest point (not really a pro or a con) being that more digging needs to be done in another time frame. The pro is that, combined with knowing when time has run out for the day (aka times when price is highly unlikely to make new highs), it may be possible to filter out which waves are still possible. Is this something Rel looked into? Likely. Did he find anything in it? We’ll see.
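The “playing against the clock” filter could look something like this. Everything here is an assumption for illustration: the 16:00 cutoff, the 7-hour typical terminating-wave length, and the guess that non-final legs run at roughly half that length are all placeholder numbers, not statistics I’ve actually measured.

```python
# Hypothetical clock filter: rule out wave counts whose remaining legs
# could not plausibly finish before the hour when new highs become unlikely.
END_OF_RUN_HOUR = 16.0      # assumed cutoff hour for new highs
TYPICAL_TERMINATING = 7.0   # assumed typical terminating-wave length, hours

def wave_still_possible(current_hour: float, legs_remaining: int) -> bool:
    """A wave count stays on the table only if its remaining legs fit in the day."""
    # Assume non-final legs complete faster: half the terminating length each.
    needed = TYPICAL_TERMINATING + (legs_remaining - 1) * TYPICAL_TERMINATING / 2
    return current_hour + needed <= END_OF_RUN_HOUR

print(wave_still_possible(8.0, 1))   # → True  (one 7h leg fits before 16:00)
print(wave_still_possible(12.0, 2))  # → False (10.5h needed, only 4h left)
```

The point isn’t these particular numbers, just that once the terminating-wave distribution is pinned down, a filter of this shape falls out almost for free.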