Monthly Archives: November 2015

Rabbit Hole Series #2.1

Note: I’ve been through a lot of renditions of this already, tweaking numbers and adjusting values; the optimal numbers for frequency and success are still being worked out.

4 Possibilities:
[Image: script output]
Either find a chance to jump in with price before the move is over, or anticipate the end of the current move.

Starting with the in-trend moves, i.e. the oversold up moves and the overbought down moves. Within each wave, I’m looking at the number of times one of the above occurs within the set:

[Image: occurrence counts]
I was hoping for much more of a cluster rather than the range that we see (something like 90% of the data contained within 0-5, but that’s clearly not the case). However, it’s just that sometimes you get something that looks like this:

[Image]
This is showing 8 occurrences when I think it should really be 2.. so filtering out the clumps..

[Image]

Much better! My developing general intuition about using the number of occurrences before the “max” is that if you can’t cover at least 90% of the data within 3 or 4 points, it’s not very useful.
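For reference, here’s roughly what I mean by collapsing clumps before counting. This is just an illustrative Python sketch, not the actual script I’m running; the names and the toy wave are made up:

```python
def count_occurrences(signals, collapse_clumps=True):
    """Count signal occurrences within one wave.

    `signals` has one boolean per bar, True when the bar prints the
    signal (e.g. overbought during a down wave).  With collapse_clumps
    set, a run of consecutive True bars counts as a single occurrence
    instead of one per bar.
    """
    count = 0
    prev = False
    for s in signals:
        if s and not (collapse_clumps and prev):
            count += 1
        prev = s
    return count

# The example above: 8 raw signal bars that are really just 2 clusters.
wave = [False, True, True, True, False, False, True, True, True, True, True, False]
print(count_occurrences(wave, collapse_clumps=False))  # 8
print(count_occurrences(wave))                         # 2
```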

In these two scenarios (overfill/underfill), since movements are pinned against a wave, the only time it fails is if it gives a signal at the very end. In other words, the worst-case scenario is if the price wave is down and this TCD prints an overbought at the very end of the wave, inducing more sells at the worst price possible. For this TCD to have value, the number of these failures needs to be REALLY low. Trading TCDs is essentially statistically backed confidence, so I’d better have a good reason for being confident!:

[Image]

A = the probability that the incorrect signal appears at the very end of the wave
B = the probability that the incorrect signal is the first time a signal is given
Therefore, the probability that the first signal given is the incorrect signal = 0.040 × 0.348 = 0.014, or about 1.4%. Not bad..
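Or, as a quick sanity check in code (reading the 0.040 as A and the 0.348 as B from the figures above):

```python
p_end_of_wave = 0.040   # A: the bad signal lands at the very end of the wave
p_first_signal = 0.348  # B: the bad signal is the first signal given
print(round(p_end_of_wave * p_first_signal, 3))  # 0.014 -> about 1.4%
```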

Currently, the extra bars in there act as a sort of filter, protecting (in live trading) against starting a new wave too soon, and also in hopes of potentially catching the new trend right as it’s developing. Each wave currently has an extra 2 dummy bars attached at the end of it. I think I likely need to add a couple more to help determine when the move is over, find a use for the other TCD, or create a new TCD (most likely). Example of one of the failures:

[Image: example of a failure]

What’s left is the biggest piece of the puzzle: how accurate are over/under fills with respect to price? Sure, visually they consistently look similar, but how similar are they really? Tricky to answer..


Rabbit Hole Series #2 extended

[Image: in-trend example]
Some pictures from the H1

Can’t quite get the logic correct to get all my numbers to update in real time, but I do have a good way to create these graphs historically to check multiple time frames, various h values, and the like. This expands the possibilities I can measure and cross-check, and perhaps follow SB’s advice on using an MTF approach. I’m not sure if Excel is better suited for checking these things compared to MT4, but one certainly has to be clever in how to organize and pull the data to check for hard proof that these concepts actually work.

I currently don’t know which levels (70/30, 75/25, 80/20, etc.) are better. To better capture these opportunities, though, I think I want to expand the average length of each wave (more potential).

[Image: extended example]

Giving clues on in-trend pullbacks and end-of-trend exhaustion.
I’m noticing that the signal is usually slightly early; a long signal usually has one more dip lower, and a short signal usually has one more push up.
Enough cherry picking, time to try to find a scientific way to analyze all of this.

 

Rabbit Hole Series #2

Trying to build a basic sort of momentum indicator.
I think I may have to revisit the premise of how I’m building it (when I can “see” it) but I’m in the baby stages atm.
The more tailored the tool is, the more wiggle room I have to make a tool that is one-dimensional. That is, normally if the tool needs to determine where price is going, then it needs to distinguish both long and short, as well as strength. But if I already know the direction, then I only need it to determine strength.
Specifically, when comparing the accuracy of an indicator that gives long signals, I don’t need to check a DR (down retrace) against DT, UR, and UT, only against DT.

The blue line is a measure of C-PL, and the red line is a measure of PH-C. Then I modded the values using a sort of “strength” multiplier based on the TAC TCD (H-PL vs PH-L). A question that I was wondering is if price makes a move in which the bar prints a downward pinbar like so:
[Image: downward pin bar]
Should price be considered bullish or bearish? How does context matter in this case? If price is already moving down, do I consider this a “slowing bear”, and thus bullish? If price was moving up prior, do I now consider this new strong selling action, and thus bearish? If the context matters (which logically it should?..), that would mean that the strength multiplier needs to also be multiplied by.. the strength of the strength multiplier?..

Imagine it this way.. if bull movement is A, and bear movement is B, then the strength of A would be A/(A+B), and the strength of B would be B/(A+B) (that’s one possibility at least). But if the context of the current strength matters, then the strength of A would now be something like [A/(A+B)] * Y, where Y is equal to the slope or average of past strength.. or some other indicator would need to supply the strength multiplier Y, which then gets used to measure the actual strength of price. Oy.
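To keep myself honest about what that actually computes, here’s a tiny sketch. The names are purely illustrative; A and B would really come from the C-PL / PH-C measures above, and Y here is just an average of recent strengths (a slope would be another option):

```python
def raw_strength(bull, bear):
    """Bull share of the total movement: A / (A + B)."""
    total = bull + bear
    return bull / total if total else 0.5

def contextual_strength(bull, bear, history, lookback=5):
    """Raw strength scaled by a context multiplier Y, here just the
    average of recent raw strengths."""
    strength = raw_strength(bull, bear)
    recent = history[-lookback:] or [0.5]
    y = sum(recent) / len(recent)
    return strength * y

# Example: the current bar is bullish (A=7, B=3) but the recent context was weak.
past = [0.35, 0.40, 0.30, 0.45, 0.38]
print(raw_strength(7, 3))                          # 0.7
print(round(contextual_strength(7, 3, past), 3))   # 0.263
```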

[Images: DR2 and DT2]

..which one is which? I couldn’t really tell the difference. Accounting for natural bias, I don’t think that if these were mixed together I could separate them back out. Normally I would expect (or hope to produce) that the blue line is measuring bull strength and the red line is measuring bear strength; thus whichever is on top shows the current dominant vector, with the other line fighting to overtake the current direction. It just doesn’t work, though. But there was something interesting that I missed the first time, which shows here:

[Image: oscillator]

What I noticed (after the 4th rendition) is that the red line does a pretty good job of following price. Usually not something to be excited about, but I did manage to also get price shrunk into a 0-100 oscillator with a bell curve.

[Image: bell curve]

So, disregarding the first couple of points of each wave (which are distorted due to having too few data points), I may be able to detect price extremes using it. The issue now is what to do with the blue line.. It’s not perfectly inverse, so perhaps there’s something to look at in the spots where they differ.
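For context, the simplest way to pin price into a 0-100 range within a wave is a running min-max rescale like the sketch below. It’s not necessarily how the oscillator above is built, but it shows exactly why the first few points of each wave come out distorted:

```python
def rescale_wave(closes):
    """Pin each close to 0-100 using the running high/low seen so far in the wave.

    A generic running min-max rescale, shown only to illustrate why the
    first few points of a wave are distorted: with only a couple of bars,
    the high-low range in the denominator is still tiny.
    """
    out = []
    hi = lo = closes[0]
    for c in closes:
        hi, lo = max(hi, c), min(lo, c)
        rng = hi - lo
        out.append(50.0 if rng == 0 else 100.0 * (c - lo) / rng)
    return out

print(rescale_wave([1.0712, 1.0715, 1.0709, 1.0698, 1.0701, 1.0690]))
```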

Here’s one example of printing an extreme (over filling?) to give a short signal when price actually has plenty of room to move down.

[Image: oscillator example]

The Rabbit Hole Series.. #1

I’m going to see if I have enough “creativity” in me to start a new category on my blog that’s basically going to be an offload of a bunch of random output I get from various tests in the Signal Bender realm. Normally these things would be filed under statistics or something, but until I get something “real”, I’m mostly posting to help myself keep track of what I’ve tried and what I’m working on; I understand most people will not be able to follow. I still plan on posting some stuff that I think more people will be able to digest; let’s see how deep I can make it this time..

Right now I’m working on RT/Omega relationships, both pinned and unpinned to swings. Pinning is definitely like “forcing”, but it kind of creates a fixed number of relationships, which can be useful if I know what I’m hunting for.

I’m trying to spend more time with each result and studying it, rather than jamming out the numbers and moving on. This one is okay but has its obvious flaws. I almost feel like I need to create a new metric to analyze these more efficiently and quickly..

[Image: gap 1]

[Image: gap 2]

Left side SR zone

Haven’t quite worked on confirmation statistics; working on a couple of extra parts for now before bringing it together. Considering that catching retracements is easier than catching extensions, I think it’s important to work on the safest entry first.

To be specific, this is where the specific price, the low in this case, lands within a single bar. Because of this I don’t think 5% is very different from 10%, maybe like 3-6 pips or something, but I think looking at the 25/50/75 marks is worth something just for eyeballing.
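The measurement itself is just where a price sits inside a bar’s high-low range, as a percentage. A minimal sketch, with made-up bar values for illustration:

```python
def position_in_bar(price, bar_high, bar_low):
    """Where `price` sits inside the bar's range: 0% = the low, 100% = the high."""
    rng = bar_high - bar_low
    if rng == 0:
        return 50.0
    return 100.0 * (price - bar_low) / rng

# e.g. a low of 1.0702 inside a bar spanning 1.0700-1.0720 sits about 10% up the bar.
print(round(position_in_bar(1.0702, 1.0720, 1.0700), 1))  # 10.0
```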

[Image: percent in bar]

The extreme of the swing bar lands on one side of the 50-50 line, the lower half, about 75% (or close to 80%) of the time. This is nice as it creates high-probability zones, but it’s still problematic and currently not as useful because there are a LOT of them.

[Image]

I’m noticing that the extreme bar, if you trace it to the left, seems to be very close to or inside the wick of another bar.. Considering the 1/4 1/2 1/4 rule, I wonder why I feel like I’m seeing it a lot. I’m also noticing that ends tend to appear in areas where there is “air”, or a portion of right-side transience. (There’s also a TCD reversal at that swing point.)

[Image: air and wick space]

Back to work..

Edit: about a 50% probability that the extreme is occurring inside a wick..

[Image]
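For the record, the wick test here is just asking whether the swing extreme lands between an earlier bar’s body and its high or low. A rough sketch of the idea, not the exact rule I used:

```python
def inside_wick(price, o, h, l, c):
    """True if `price` lands in one of the bar's wicks (between the body and
    the high, or between the body and the low) rather than inside the body."""
    body_top, body_bot = max(o, c), min(o, c)
    in_upper_wick = body_top < price <= h
    in_lower_wick = l <= price < body_bot
    return in_upper_wick or in_lower_wick

# A swing low of 1.0695 sits in the lower wick of a bar with
# O=1.0705, H=1.0712, L=1.0693, C=1.0701.
print(inside_wick(1.0695, 1.0705, 1.0712, 1.0693, 1.0701))  # True
```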

Bars until swing termination point within the active zone

Trying to do a better job of explaining my posts.. makes it way easier to go back to later and understand what I did..

Following the debriefing, I decided to basically hop back into the “try anything I can think of” phase of development (time permitting T_T), with the understanding that the “active zone”, or soft reaction point between 40% and 85%, is the optimal point of entry for swing (retracement) trading.

The first of my ideas is a simple bar count of how many bars are in the zone before price either a) peaks out or b) blows through. The count is activated once price hits at least 40% retrace, and swings that don’t reach 40% (some UR/DR) are just not counted. The main interest points here are a) if there is a difference between retracements and trends, and b) particularly low counts and high counts.
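In rough sketch form, this is the kind of count I mean. It’s retrospective (you know the whole swing after the fact), the per-bar retrace percentages are simplified placeholders, and taking “termination” as the deepest-retrace bar is just one reasonable convention that covers both the peak-out and blow-through cases:

```python
def bars_to_termination(retrace_pcts, zone_low=40.0):
    """Bars from the first touch of the active zone (>= zone_low retrace)
    until the bar where the swing puts in its deepest retrace.

    `retrace_pcts` is the retrace percentage each bar of the swing reaches.
    Swings that never get to the zone (some UR/DR) return None and are
    simply not counted.
    """
    zone_hits = [i for i, p in enumerate(retrace_pcts) if p >= zone_low]
    if not zone_hits:
        return None
    start = zone_hits[0]
    end = max(range(len(retrace_pcts)), key=lambda i: retrace_pcts[i])
    return end - start + 1

# A retracement swing that enters the zone on its 3rd bar and tops out 2 bars later.
print(bars_to_termination([12, 27, 44, 58, 63, 61, 40, 22]))  # 3
```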

Bars until swing termination point within the active zone

This is still with the 30m EJ data, and the 80% capture rate is about 6 bars/3 hours on retracement and rejection swings, and 5 on the trending swings. This is kind of interesting because of the following:

If you take DR swings, for example, there were 213 swings that fell between 40.01% and 99.99% (these would then be all the swings used in this analysis). Of that subset, 74.6% of the swings reverse before the 85% mark. On the other hand, if you take DT swings, 100% of them go above 100% (by definition) and therefore 100% of them go past the 85% mark. Yet the time to reach the termination point, be it 50%, 60%, or 85%, differs by only 1 bar (30m) at most, and is generally not really noticeable.

The other thing to note that is slightly more obvious (and intuitive) is that the longer price spends in the zone, the more likely it is to be a retracement swing rather than a trend type swing.

Edit: I should note that this statistic also has what I would assume to be a pretty high-probability scalp embedded in it. Since about 80% of swings have multiple bars in the zone, it means that after the first bar prints, it’s highly likely that at least one more bar will print to complete the high/low of the swing.

“Debriefing” on MMLC v12

Been hanging around this thread:

http://www.forexfactory.com/showthread.php?t=565169

After a lot of thought, it kind of feels like the “path to profits” has been made both easier and harder. On one hand, the market has been simplified to:

  1. Find good areas to trade in (done)
  2. Find strong signals within these areas (not done)
  3. Find success and failure points (not done)
  4. Win

On the other hand, the methods used to find the next steps seem to make frequency distributions of price retracements/extensions less effective or not effective at all. These are the “other methods” or additional parts.

Having the trade spots known creates a sort of filter on which price ranges are worth looking at, and excludes certain types of analysis, making it easier in the hypothesis phase but harder in the design phase. Time to be “creative”.

*puts crazy hat on*