Monthly Archives: February 2014

Dear Journal #8

More of a word vomit this time around.

Stuck. Not finding anything is one thing. Being unable to determine what to try next is another, and it’s more difficult to overcome. I tried to go back to the roots for some inspiration: back to the kresik thread, and digging online for some of Signal Bender’s other threads. That was made even harder by the fact that he’s used so many different usernames. It would probably have been impossible if Relativity’s thread hadn’t mentioned him a couple of times and noted that he used multiple IDs. (If you’re reading this, Rel, I didn’t realize until just the other day that you were LLT on T2W as well!)

7thSignalTrader seems to have quite the thread on T2W, but either I’m not reading it clearly and fully enough, or T2W has introduced complications by deleting/altering posts, or a lot of the material refers to other threads that I can’t find. I know the last one is true; the difficulty is in knowing how much.

%Fill, perhaps better known as %TCD Fill (are they even the same thing?), is something that’s popped up a few times that I can’t understand or find sufficient information about.

Nevertheless, I believe there’s a lot of potential insight in there. The core SB values are there, but actually picking them out into tangible ideas is beyond me. What he did reiterate is the following idea:


“Problems cannot be solved by the same level of thinking that created them” (attributed to Einstein)

There were a couple of bright people who seemed to be well on their way to success, all briefly mentioning the importance of the TCDs. I’ve come to think of TCDs as basically the same thing as metadata, but I’m not sure that’s correct. Anyhow, there was one small thread in particular that mentioned 7th’s ideas and TCDs being used in the analysis, and it seemed to be hitting projections quite well. I haven’t the slightest clue how they managed that. How do you create more TCDs other than just linking O/H/L/C points to other O/H/L/C points? If you take averages of data created from highs and lows, how can you possibly create a range that is outside of it? How can you create metadata (or I suppose meta-info) that shares a secret about the market yet is still grounded in market logic? That’s the part that stumps me. I can imagine applying all sorts of math to metadata and creating new data. But what’s the point, and why would it make sense to work, if it’s not grounded in something that actually makes sense to begin with?
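To make the question concrete for myself: a purely hypothetical sketch of what “linking O/H/L/C points to other O/H/L/C points” could literally mean. I have no idea whether this resembles a real TCD; the function and data here are my own made-up illustration, in Python rather than anything SB used.

```python
# Hypothetical sketch only: one literal reading of "linking O/H/L/C
# points to other O/H/L/C points". Whether this resembles an actual
# TCD is an open question; function and data names are made up.

def link_higher_highs(bars):
    """bars: list of (O, H, L, C) tuples, oldest first.
    For each bar, find the most recent earlier bar with a higher high
    and record (bar index, bars back, price gap), or None if absent."""
    links = []
    for i, (_, hi, _, _) in enumerate(bars):
        link = None
        for j in range(i - 1, -1, -1):
            if bars[j][1] > hi:
                link = (i, i - j, round(bars[j][1] - hi, 5))
                break
        links.append(link)
    return links

bars = [(1.30, 1.35, 1.29, 1.33),
        (1.33, 1.34, 1.31, 1.32),
        (1.32, 1.36, 1.30, 1.35)]
print(link_higher_highs(bars))  # [None, (1, 1, 0.01), None]
```

The averaging point at least is plain arithmetic: any average of highs and lows is bounded by them, so anything outside that range has to come from relations *between* bars, like the gaps above.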



P.S. On a side note, I ran across a little storytelling 7th did about mentoring one trader per year who showed potential, turning them into trading elites. They would then go on to mentor at least one person during their own careers. Unfortunately there doesn’t seem to be much public info about those elite traders posting and searching the way 7th did, and 7th himself seems to have disappeared from the forums sometime in 2006 (I think?) after being hated on too much. I want to be mentored ): Lol. Cool read though.


Dear Journal #7


I’ve decided to give programming another shot; something a bit more basic than the world of C’s. I simply need a tool that can handle some sort of databasing the way Excel does, but with a bit more “smarts”. And yet I’m always attracted to VBA, since I can do so much with it. It’s simple, fast, and capable of handling big calculations.
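For the record, a rough sense of what “Excel with more smarts” might look like. This uses Python’s built-in sqlite3 rather than VBA, purely as an illustration; the table and column names are made up.

```python
# Illustrative sketch: an "Excel with more smarts" stand-in using
# Python's built-in sqlite3. Table/column names are hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE bars (day TEXT, high REAL, low REAL)")
con.executemany("INSERT INTO bars VALUES (?, ?, ?)",
                [("Mon", 1.35, 1.29), ("Tue", 1.34, 1.31), ("Wed", 1.36, 1.30)])

# The kind of question Excel needs helper columns for: average daily range.
(avg_range,) = con.execute("SELECT AVG(high - low) FROM bars").fetchone()
print(round(avg_range, 4))  # 0.05
```

The appeal is that a query replaces a sheet full of formulas; whether that beats VBA for my purposes is exactly the open question.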

I’ve briefly stepped away from making new statistics; we all need breaks sometimes. This blog is an online diary, and I should use it like one more often and actually read the things I write. Things are always changing; new wisdom on old ideas births more knowledge. Some routes are extinguished, new ones appear.

The week. That seems to be what things boil down to. I’m not bright enough to have figured that out by myself, but nudges in the right direction from those who know more are always helpful. My most recent statistic (which in my opinion has a lot of potential) agrees. The ABC waves in the 24x 1hr framework were able to pin down expansion structures capturing 90% of the movement into just 3 wave types with 2 subsets (up and down). Likewise, it appears that just 2 days capture at least 1 weekly extreme 88% of the time (with about an 11% overlap). These need to be pinned down further. If the week is King, I think I should use a forest-to-trees approach: look for more supporting evidence in this area, then begin to narrow it down. Let things grow. Vary. Average.
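A minimal sketch of how a “2 days capture a weekly extreme” count can be checked. The data below is a synthetic random walk, not my actual price data, and the chosen day pair is just an example.

```python
# Sketch: how often does a fixed pair of days contain a weekly extreme?
# Data is a synthetic Gaussian random walk standing in for real prices;
# the day pair and resulting percentage are illustrative only.
import random

random.seed(1)

def make_week():
    """Five daily closes from a Gaussian random walk."""
    p, days = 100.0, []
    for _ in range(5):
        p += random.gauss(0, 1)
        days.append(p)
    return days

def extreme_days(week):
    """Indices (0=Mon .. 4=Fri) of the weekly high and low close."""
    return {week.index(max(week)), week.index(min(week))}

weeks = [make_week() for _ in range(1000)]
pair = {0, 4}  # e.g. Monday + Friday
hits = sum(1 for w in weeks if extreme_days(w) & pair)
print(f"{hits / len(weeks):.0%} of weeks have an extreme on Mon or Fri")
```

The real version of this would loop the count over all ten day pairs and keep the best, which is where the 88% figure would come from.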

I likely won’t be touching a statistic for the next 2 months or so, but as always, I’ll be thinking.


Some additional thoughts: I should be a little better about updating things when things are ‘slow’ for me. As a role model of mine once said: “Just show up”. Do something every day, or as frequently as time permits.


Goal: Remap the waves in the weekly structure. Start with the MM.


-In the original framework (daily structure), is there any proof of retracements? Previous studies seem a bit sloppy to me; it’d be good to re-do these. Treat each wave differently. ABCD only. The A wave will need special criteria.

-In the original framework (weekly structure), how big is each wave? Are A/B/C waves the same as D/E/F/G waves, except that the latter have wimpy left-end tails? (Do the D/E/F/G waves have smaller A waves than the A waves from A/B/C waves?)

-Separate the MM of the week into 2 types, up/down. Keep in mind that even if there turns out to be no difference between the two, separating them leaves less room for error when calculating reversal and retracement moves.

-Do a quick check on range movement. See if I can find an average. Avg range of 1 week? Avg range of 1 week x4? Avg range of 5 days x5? Is there a difference between these? Previous statistics seem to show that a 1-day range averages 100 pips, yet a 1-week range averages only 200. Hmm.

-Map out waves using 5x D1 and 120x 1Hr. It’s likely only 2 types will appear for 5x D1 (A and B waves). I already know that mapping 120x 1Hr bars using only HH/LL in the weekly creates too many wave types to be useful at the moment. Try new criteria for mapping with this method in light of the new information concerning retracement numbers. Is it really about that 20-30% and 80%? Boxes? Probably boxes. Map in 5% bins.

-Find my own retracement number proofs. This may be done within the 24x 1hr framework I’ve been using already.

-Find consistency in internal coding. Should I map expansions in terms of the original wave size or of the current boss? What are the benefits/drawbacks of each? I think the latter is the way to go, but the tool needs to “reset”. It seems best to map the current boss as an expansion percentage of the original wave, but make sure the original wave isn’t too wimpy; otherwise percentages could become too variable.
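On the range question above (daily ~100 pips but weekly only ~200): maybe that’s less strange than it looks. If price behaved anything like a random walk, ranges wouldn’t add linearly; the expected range grows roughly with the square root of time, and sqrt(5) ≈ 2.24, which puts a 5-day range at roughly 2x a 1-day range. A quick simulation sketch, using a synthetic random walk rather than real price data:

```python
# Sketch: range scaling for a synthetic Gaussian random walk. If daily
# range is ~100 pips, sqrt-of-time scaling predicts a 5-day range near
# sqrt(5) * 100 ~ 224 pips -- in the ballpark of the observed ~200.
import random

random.seed(0)

def mean_range(steps, trials=2000):
    """Average (max - min) over simulated random walks of `steps` steps."""
    total = 0.0
    for _ in range(trials):
        p, hi, lo = 0.0, 0.0, 0.0
        for _ in range(steps):
            p += random.gauss(0, 1)
            hi, lo = max(hi, p), min(lo, p)
        total += hi - lo
    return total / trials

daily = mean_range(24)       # one day of hourly steps
weekly = mean_range(24 * 5)  # five days of hourly steps
print(f"weekly/daily ratio: {weekly / daily:.2f}")  # near sqrt(5) ~ 2.24
```

If real data shows a ratio meaningfully different from ~2.2, that itself would be a statistic worth keeping.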