Showing posts with label Oscillators. Show all posts

Wednesday, 3 September 2025

Expressing an Indicator in Neural Net Form

Recently I started investigating relative rotation graphs with a view to perhaps implementing a version for use on forex currency pairs. The underlying idea of a relative rotation graph is to plot an asset's relative strength against a benchmark, together with the momentum of that relative strength, in a form similar to a polar coordinate plot which rotates around a zero point representing zero relative strength and zero relative strength momentum. After some thought it occurred to me that, rather than using relative strength against a benchmark, I could use the underlying relative strengths of the individual currencies, as calculated by my currency strength indicator, against each other. Furthermore, these underlying log changes in the individual currencies can be normalised using the ideas of Brownian motion and then averaged together over different look back periods to create a unique indicator.
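As a sketch of the sort of normalisation I have in mind (the look back values and the sqrt scaling below are illustrative assumptions, not necessarily my final scheme), the log changes can be summed over each look back and scaled by the square root of that look back, since a random walk's excursion grows with the square root of time:

% minimal sketch of a Brownian motion style normalisation; log_strength is a
% column vector of one currency's log strength changes, and the look back
% values and sqrt( n ) scaling are illustrative assumptions
lookbacks = [ 5 10 20 ] ;
normed = zeros( numel( log_strength ), numel( lookbacks ) ) ;
for jj = 1 : numel( lookbacks )
  n = lookbacks( jj ) ;
  rolling_sum = filter( ones( n, 1 ), 1, log_strength ) ; % n bar rolling sum
  normed( :, jj ) = rolling_sum ./ sqrt( n ) ;            % random walk scaling
endfor
indicator_raw = mean( normed, 2 ) ; % average across the look back periods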

This indicator was coded up in the "traditional" way that will be familiar to anybody who has ever tried coding a trading indicator in any language. However, I thought that if the calculation of this indicator could be expressed in the form of a feedforward neural network, then the optimisation opportunities available through backpropagation and regularisation could be used to tweak the indicator in more useful ways than simply varying a look back length and an amount of averaging. After some work I was able to achieve this in just these two lines of Octave code:

indicator_middle_layer = tanh( full_feature_matrix * input_weights ) ;
indicator_nn_output = tanh( indicator_middle_layer * output_weights ) ; 

Of course, prior to calling these two lines of code there is some feature engineering to create the full_feature_matrix input, and the input_weights and output_weights matrices taken together are mathematically equivalent to the original indicator calculations. Finally, because this is a neural net expression of the indicator, the non-linear tanh activation function is applied to the hidden middle layer and the output layer of the net.
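As an illustration of what "mathematically equivalent" means here (the sizes and values below are placeholders, not my actual weights), a fixed, linear indicator calculation can be packed directly into the two matrices, with one hidden node per look back period:

% minimal sketch of packing a fixed indicator calculation into the two weight
% matrices; all sizes and values are illustrative placeholders
n_features = columns( full_feature_matrix ) ;
n_lookbacks = 3 ;                              % one hidden node per look back
input_weights = zeros( n_features, n_lookbacks ) ;
% each column would be filled with that look back's normalisation and
% averaging weights, e.g. input_weights( :, jj ) = lookback_weights( :, jj ) ;
output_weights = ones( n_lookbacks, 1 ) ./ n_lookbacks ; % average the nodes

Because tanh is monotonic and close to linear near zero, the activated outputs preserve the sign and ordering of the original linear calculations.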

The following plot shows the original indicator in black and the neural net version of it in blue, calculated over the data shown in the accompanying plot of 10 minute bars of the EURUSD forex pair. The red line in the indicator plot is the 1 bar momentum of the neural net version.

To judge the quality of this indicator I used the entropy measure (measured over 14,000+ 10 minute bars of EURUSD), the results of which are shown next.

entropy_indicator_original = 0.9485
entropy_indicator_nn_output = 0.9933
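
For completeness, here is a minimal sketch of how such a normalised entropy reading can be computed with a simple histogram estimate (the 100 bin count is an assumption; my exact binning is not shown):

% minimal sketch of a histogram based, normalised entropy estimate; the
% choice of 100 bins is an illustrative assumption
function H = normalised_entropy( indicator, n_bins )
  counts = hist( indicator, n_bins ) ;
  p = counts ./ sum( counts ) ;
  p( p == 0 ) = [] ;                              % discard empty bins
  H = -sum( p .* log2( p ) ) / log2( n_bins ) ;   % 1.0 is maximum entropy
endfunction

H = normalised_entropy( indicator_nn_output, 100 ) ;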

An entropy reading of 0.9933 is probably as good as any trading indicator could hope to achieve (a perfect reading is 1.0), so the next step was a quick back test of the indicator's performance. Based on the logic of the indicator, the obvious long (short) signals are being above (below) the zero line, or equivalently the sign of the indicator, and for good measure I also tested the sign of the momentum and some simple moving averages thereof.

The following plot shows the equity curves of this quick test, where it is visually clear that the blue equity curves are "the best" when plotted against the black "buy and hold" equivalent equity curve. These represent the equity curves of a 3 bar simple moving average of the 1 bar momentum of both the original formulation of the indicator and the neural net implementation. I would point out that these equity curves represent the theoretical equity resulting from trading the London session after 9:00am BST, stopping at 7:00am EST (usually about noon BST), and then trading again from 9:00am EST until 5:00pm EST. This schedule avoids the opening sessions (7:00 to 9:00am) in both London and New York because, from my observations of many OHLC charts such as the one shown above, these hours frequently see wild swings where price is pushed to significant points such as volume profile clusters, accumulations of buy/sell orders and the like, and in my opinion no indicator can be expected to perform well in that sort of environment. Also avoided are the hours prior to 7:00am BST, i.e. the Asian or overnight session.

Although these equity curves might not, at first glance, be that impressive, especially as they do not account for trading costs etc., my intent in doing these tests was to determine the configuration of a final "decision" output layer to be added to the neural net implementation of the indicator. A 3 bar simple moving average of the 1 bar momentum implies the necessity of including 4 consecutive readings of the indicator output as inputs to that final "decision" layer. The following simple, hand-drawn sketch shows what I mean, and the snippet below it checks the arithmetic:
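The check relies on the fact that the moving average telescopes: the 3 bar SMA of the 1 bar momentum equals ( y(t) - y(t-3) ) / 3, i.e. fixed weights of [ 1 0 0 -1 ] / 3 on 4 consecutive readings. In the snippet y stands in for the indicator output series:

% verify that a 3 bar SMA of 1 bar momentum is a fixed weighting of 4
% consecutive readings; y stands in for the indicator output series
mom = diff( y ) ;                                 % 1 bar momentum
sma3 = filter( ones( 3, 1 ) ./ 3, 1, mom ) ;      % 3 bar simple moving average
telescoped = ( y( 4 : end ) - y( 1 : end - 3 ) ) ./ 3 ;
max( abs( sma3( 3 : end ) - telescoped ) )        % zero to machine precision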
A discussion of this will be the subject of my next post.

Thursday, 21 March 2024

Standard "Volume based" Indicators Replaced with PositionBook Data

In my previous post I suggested three different approaches to using PositionBook data other than directly using this data to create new, unique indicators. This post is about the first of the aforementioned ideas: modifying existing indicators that somehow incorporate volume in their construction.

The indicators I've chosen to look at are the Accumulation and Distribution index, On Balance Volume, Money Flow index, Price Volume Trend and, for comparative purposes, an indicator similar to these utilising PositionBook data. For illustrative purposes these indicators have been calculated on this OHLC data,

which shows a 20 minute bar chart of the EUR_USD forex pair. The chart starts at the New York close of 4 January 2024 and ends at the New York close on 5 January 2024. The green vertical lines span 7am to 9am London time and the red lines are 7am to 9am New York time. This second chart shows the indicators individually max-min scaled from zero to one so that they can be more easily visually compared.

 
As in the OHLC chart, the vertical lines mark the London and New York opening sessions. The four "traditional" indicators more or less tell the same story, which is not surprising since their calculations involve bar to bar price changes, or open to close intra-bar changes, which are then multiplied by bar volume. Effectively they are all just differently scaled versions of the underlying price movement or, alternatively, accumulated sums of different fractions of the bar volume. The PositionBook data version, called Pos Change Ind, does not use any OHLC information at all but rather uses the accumulated difference(s) between position changes multiplied by volume. For most of the day the general story told by the Pos Change Ind indicator agrees with the other indicators; however, during the big run up which started just after 9am New York time there is a significant difference between Pos Change Ind and the others.
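For reference, here are minimal sketches of three of these traditional calculations in their standard textbook forms (my own implementations may differ in small details such as seeding; the Money Flow index follows the same accumulate-by-volume pattern using the typical price):

% standard textbook forms of three volume based indicators
delta_c = [ 0 ; diff( close ) ] ;
obv = cumsum( sign( delta_c ) .* volume ) ;                % On Balance Volume
range_hl = max( high - low, eps ) ;                        % guard flat bars
clv = ( ( close - low ) - ( high - close ) ) ./ range_hl ;
ad = cumsum( clv .* volume ) ;                             % Accumulation/Distribution
prev_c = [ close( 1 ) ; close( 1 : end - 1 ) ] ;
pvt = cumsum( ( delta_c ./ prev_c ) .* volume ) ;          % Price Volume Trend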

In hindsight, by looking at my order levels chart and volume profile chart, it is easy to speculate about what market participants were thinking during this trading day, especially if the following PositionBook chart is taken into account.
 
For the purpose of the following brief "stream of consciousness" narrative, imagine it's 7am New York time. Looking back at the day's action so far, the downward drift of the day seems to have halted with a mini double bottom, and we are now moving up, with some new heavy tick volumes having accumulated over the last hour or so to form a new volume profile point of control (POC). Over the next hour prices continue the new slight drift up with accumulating long positions, and at about 8.30am we see a doji bar form on the 10 minute chart at the level of the rolling vwap for the day. Suddenly there is the big down bar, which could conceivably be a shake-out of the recently added longs targeting the stop orders below, and which finishes with an extended lower wick. This seems to be an ideal set-up for a long trade targeting either the old POC, which also happens to be the current high of the day, or the accumulated orders which coincide with the level at which, currently, the greatest proportion of long positions have been entered.
 
Of course it can be seen, in hindsight, that this was a great set-up for an intra-day trade that could have caught almost the entire high-low range of the day, dependent of course on exact entry and exit levels. This set-up is a synthesis of observations from the volume profile chart, the order levels chart and the position levels chart, along with the vwap indicator. The Pos Change Ind indicator does not seem to add much value, over that provided by the more traditional volume based indicators, in the set-up phase.

This is not necessarily the case for the exit. It can be seen that the Pos Change Ind indicator turns down sharply several bars before all the other indicators, and this movement is already evident by the close of the bar with the long upper wick which makes the high of the day. This sharp downturn shows that there was a mass exit of longs during the formation of this bar, made clearer by the following chart, which shows the two components of the Pos Change Ind indicator: the "Outside Change" and the "Inside Change." The outside change is the total net position change at the price levels that lie outside the range of the bar, and the inside change is the net change at price levels that lie within the bar range. The greater change of the two is obviously the (red) inside change, and looking at the position levels plot we can see why. The previously mentioned level of "greatest proportion of long positions" suddenly loses that distinction - a large number of the longs at this level obviously liquidated their positions. This is important information, showing that sentiment favouring long positions had changed, and it can be surmised that many long position holders were happy to get out of their trade at more or less break even prices. Also noticeable in the position levels chart is the change in blue shade from darker to lighter at the price levels within the range of the large price run-up. This reduction in colour intensity shows that those traders who entered during the run-up also exited near the top of the move. Taken together, these observations could have been used as a nice short set-up targeting, for example, the then lower level of vwap, which was in fact subsequently hit, with the day closing at this level.

As I had previously suspected, there is value in PositionBook data but it is perhaps tricky to operationalise or to easily automate within a trading system. It can be used to indicate a directional bias, or as above to show when traders exit positions. Again, as shown above, it can be used to put a new, useful twist on existing indicators, but in general it appears that use of this data is primarily visual by way of my PositionBook chart and subsequent, subjective evaluation. Whilst I am pleased with the potential insights provided, I would prefer a more structured, algorithmic use of this data, as in the third point of my previous post.
 
More in due course.


Wednesday, 5 November 2014

A New MFE/MAE Indicator.

Having stopped my investigation of tools for spectral analysis, over the last few weeks I have been doing another MOOC, this time Learning from Data, and also working on the idea from one of my earlier posts.

In the above linked post there is a video showing the idea as a "paint bar" study. However, I thought it would be a good idea to render it as an indicator, the C++ Octave .oct code for which is shown in the code box below.
DEFUN_DLD ( adjustable_mfe_mae_from_open_indicator, args, nargout,
"-*- texinfo -*-\n\
@deftypefn {Function File} {} adjustable_mfe_mae_from_open_indicator (@var{open,high,low,close,lookback_length})\n\
This function takes four input series for the OHLC and a value for lookback length. The main outputs are\n\
two indicators, long and short, that show the ratio of the MFE over the MAE from the open of the specified\n\
lookback in the past. The indicators are normalised to the range 0 to 1 by a sigmoid function and a MFE/MAE\n\
ratio of 1:1 is shifted in the sigmoid function to give a 'neutral' indicator reading of 0.5. A third output\n\
is the max high - min low range over the lookback_length normalised by the range of the daily support and\n\
resistance levels S1 and R1 calculated for the first bar of the lookback period. This is also normalised to\n\
give a reading of 0.5 in the sigmoid function if the ratio is 1:1. The point of this third output is to give\n\
some relative scale to the unitless MFE/MAE ratio and to act as a measure of strength or importance of the\n\
MFE/MAE ratio.\n\
@end deftypefn" )

{
octave_value_list retval_list ;
int nargin = args.length () ;

// check the input arguments
if ( nargin != 5 )
   {
   error ( "Invalid arguments. Arguments are price series for open, high, low and close and value for lookback length." ) ;
   return retval_list ;
   }

if ( args(4).length () != 1 )
   {
   error ( "Invalid argument. Argument 5 is a scalar value for the lookback length." ) ;
   return retval_list ;
   }
   
   int lookback_length = args(4).int_value() ;

if ( args(0).length () < lookback_length )
   {
   error ( "Invalid argument lengths. Argument lengths for open, high, low and close vectors should be >= lookback length." ) ;
   return retval_list ;
   }
   
if ( args(1).length () != args(0).length () )
   {
   error ( "Invalid argument lengths. Argument lengths for open, high, low and close vectors should be equal." ) ;
   return retval_list ;
   }
   
if ( args(2).length () != args(0).length () )
   {
   error ( "Invalid argument lengths. Argument lengths for open, high, low and close vectors should be equal." ) ;
   return retval_list ;
   }
   
if ( args(3).length () != args(0).length () )
   {
   error ( "Invalid argument lengths. Argument lengths for open, high, low and close vectors should be equal." ) ;
   return retval_list ;
   }   

if (error_state)
   {
   error ( "Invalid arguments. Arguments are price series for open, high, low and close and value for lookback length." ) ;
   return retval_list ;
   }
// end of input checking
  
// inputs
ColumnVector open = args(0).column_vector_value () ;
ColumnVector high = args(1).column_vector_value () ;
ColumnVector low = args(2).column_vector_value () ;
ColumnVector close = args(3).column_vector_value () ;
// outputs
ColumnVector long_mfe_mae = args(0).column_vector_value () ;
ColumnVector short_mfe_mae = args(0).column_vector_value () ;
ColumnVector range = args(0).column_vector_value () ;

// variables
double max_high = *std::max_element( &high(0), &high( lookback_length ) ) ;
double min_low = *std::min_element( &low(0), &low( lookback_length ) ) ;
double pivot_point = ( high(0) + low(0) + close(0) ) / 3.0 ;
double s1 = 2.0 * pivot_point - high(0) ;
double r1 = 2.0 * pivot_point - low(0) ;

for ( octave_idx_type ii (0) ; ii < lookback_length ; ii++ ) // initial ii loop
    {
      // long_mfe_mae
      if ( open(0) > min_low ) // the "normal" situation
      {
        long_mfe_mae(ii) = 1.0 / ( 1.0 + exp( -( ( max_high - open(0) ) / ( open(0) - min_low ) - 1.0 ) ) ) ;
      }
      else if ( open(0) == min_low )
      {
        long_mfe_mae(ii) = 1.0 ;
      }
      else
      {
        long_mfe_mae(ii) = 0.5 ;
      }

      // short_mfe_mae
      if ( open(0) < max_high ) // the "normal" situation
      {
        short_mfe_mae(ii) = 1.0 / ( 1.0 + exp( -( ( open(0) - min_low ) / ( max_high - open(0) ) - 1.0 ) ) ) ;
      }
      else if ( open(0) == max_high )
      {
        short_mfe_mae(ii) = 1.0 ;
      }
      else
      {
        short_mfe_mae(ii) = 0.5 ;
      }

      range(ii) = 1.0 / ( 1.0 + exp( -( ( max_high - min_low ) / ( r1 - s1 ) - 1.0 ) ) ) ;

    } // end of initial ii loop

for ( octave_idx_type ii ( lookback_length ) ; ii < args(0).length() ; ii++ ) // main ii loop
    {
    // assign variable values
    max_high = *std::max_element( &high( ii - lookback_length + 1 ), &high( ii + 1 ) ) ;
    min_low = *std::min_element( &low( ii - lookback_length + 1 ), &low( ii + 1 ) ) ;
    pivot_point = ( high(ii-lookback_length) + low(ii-lookback_length) + close(ii-lookback_length) ) / 3.0 ;
    s1 = 2.0 * pivot_point - high(ii-lookback_length) ;
    r1 = 2.0 * pivot_point - low(ii-lookback_length) ;

      // long_mfe_mae
      if ( open( ii - lookback_length + 1 ) > min_low && open( ii - lookback_length + 1 ) < max_high ) // the "normal" situation
      {
        long_mfe_mae(ii) = 1.0 / ( 1.0 + exp( -( ( max_high - open( ii - lookback_length + 1 ) ) / ( open( ii - lookback_length + 1 ) - min_low ) - 1.0 ) ) ) ;
      }
      else if ( open( ii - lookback_length + 1 ) == min_low )
      {
        long_mfe_mae(ii) = 1.0 ;
      }
      else
      {
        long_mfe_mae(ii) = 0.0 ;
      }

      // short_mfe_mae
      if ( open( ii - lookback_length + 1 ) > min_low && open( ii - lookback_length + 1 ) < max_high ) // the "normal" situation
      {
        short_mfe_mae(ii) = 1.0 / ( 1.0 + exp( -( ( open( ii - lookback_length + 1 ) - min_low ) / ( max_high - open( ii - lookback_length + 1 ) ) - 1.0 ) ) ) ;
      }
      else if ( open( ii - lookback_length + 1 ) == max_high )
      {
        short_mfe_mae(ii) = 1.0 ;
      }
      else
      {
        short_mfe_mae(ii) = 0.0 ;
      }

      range(ii) = 1.0 / ( 1.0 + exp( -( ( max_high - min_low ) / ( r1 - s1 ) - 1.0 ) ) ) ;

    } // end of main ii loop

retval_list(2) = range ;    
retval_list(1) = short_mfe_mae ;
retval_list(0) = long_mfe_mae ;

return retval_list ;
  
} // end of function
The way to interpret this is as follows:
  • if the "long" indicator reading is above 0.5, go long
  • if the "short" is above 0.5, go short
  • if both are below 0.5, go flat
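In Octave this rule set, together with the alternative mentioned below, might be expressed as follows (a minimal sketch using the function's output names):

% minimal sketch of the rule set: +1 long, -1 short, 0 flat
position = zeros( size( long_mfe_mae ) ) ;
position( long_mfe_mae > 0.5 ) = 1 ;
position( short_mfe_mae > 0.5 ) = -1 ;
% alternative: maintain the previous non flat position when flat is indicated
held = position ;
for ii = 2 : numel( held )
  if ( held( ii ) == 0 )
    held( ii ) = held( ii - 1 ) ;
  endif
endfor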
An alternative, when flat is indicated, is to maintain any previous non-flat position. I won't show a chart of the indicator itself, as it just looks like a very noisy oscillator, but the equity curve(s) of trading it, without the benefit of foresight, on the EURUSD forex pair are shown below.
The yellow equity curve is the cumulative, close to close, tick returns of a buy and hold strategy, the blue is the return from going flat when indicated, and the red from maintaining the previous position when flat is indicated. Not much to write home about. However, this second chart shows the return when one has the benefit of the "peek into the future" discussed in my earlier post.
The colours of the curves are as before, except for the addition of the green equity curve, which is the cumulative, vwap value to vwap value, tick returns - a simple representation of what an equity curve with realistic slippage might look like. This second set of equity curves shows the promise of what could be achievable if a neural net can be trained to accurately predict future values of the above indicator. More in an upcoming post.


Sunday, 15 September 2013

Savitzky-Golay Filter Convolution Coefficients

My investigation so far has had me reading General Least Squares Smoothing And Differentiation By The Convolution Method, a paper in the reference section of the Wikipedia page on Savitzky-Golay filters. In this paper there is Pascal code for calculating the convolution coefficients of a Savitzky-Golay filter, and in the code box below is my C++ translation of that Pascal code.
// Calculate Savitzky-Golay Convolution Coefficients
#include <iostream>
using namespace std;

double GenFact (int a, int b)
{ // calculates the generalised factorial (a)(a-1)...(a-b+1)

  double gf = 1.0 ;
  
  for ( int jj = (a-b+1) ; jj < a+1 ; jj ++ ) 
  {
    gf = gf * jj ; 
  }  
    return (gf) ;
    
} // end of GenFact function

double GramPoly( int i , int m , int k , int s )
{ // Calculates the Gram Polynomial ( s = 0 ), or its s'th
  // derivative evaluated at i, order k, over 2m + 1 points
  
double gp_val ;  

if ( k > 0 )
{   
gp_val = (4.0*k-2.0)/(k*(2.0*m-k+1.0)) * ( i*GramPoly(i,m,k-1,s) + s*GramPoly(i,m,k-1,s-1) )
         - ((k-1.0)*(2.0*m+k))/(k*(2.0*m-k+1.0)) * GramPoly(i,m,k-2,s) ;
}
else
{  
  if ( ( k == 0 ) && ( s == 0 ) )
  {
     gp_val = 1.0 ;
  }     
  else
  {
     gp_val = 0.0 ;
  } // end of if k = 0 & s = 0  
} // end of if k > 0

return ( gp_val ) ;

} // end of GramPoly function

double Weight( int i , int t , int m , int n , int s )
{ // calculates the weight of the i'th data point for the t'th Least-square
  // point of the s'th derivative, over 2m + 1 points, order n

double sum = 0.0 ;

for ( int k = 0 ; k < n + 1 ; k++ )
{
sum += (2.0*k+1.0) * ( GenFact(2*m,k) / GenFact(2*m+k+1,k+1) ) * GramPoly(i,m,k,0) * GramPoly(t,m,k,s) ;
} // end of for loop

return ( sum ) ;
                                               
} // end of Weight function

int main ()
{ // calculates the weight of the i'th data point (1st variable) for the t'th
  // Least-square point (2nd variable) of the s'th derivative (5th variable),
  // over 2m + 1 points (3rd variable), order n (4th variable)
  
  double z ;
  
  z = Weight (4,4,4,2,0) ; // adjust input variables for required output
  
  cout << "The result is " << z ;
  
  return 0 ;
}
Whilst I don't claim to understand all the mathematics behind this, using the output of this code is conceptually not much more difficult than using a FIR filter, i.e. multiply each value in a look back window by some coefficient, except that a polynomial of a given order is being least squares fitted to all the points in the window rather than, for example, an average of all the points being calculated. The code gives the user the ability to calculate the value of the polynomial at each point in the window, or its derivative(s) at that point.
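As a sketch of this use, assuming the weights produced by the program above have been collected into an Octave column vector coeffs (ordered i = -m to m, for the centre point t = 0 and derivative s = 0), the application is a dot product per window, exactly like any other FIR filter:

% apply precomputed Savitzky-Golay convolution weights over centred windows;
% coeffs is assumed to be a ( 2m + 1 ) element column vector of Weight()
% outputs for t = 0 and s = 0
m = ( numel( coeffs ) - 1 ) / 2 ;
smoothed = nan( size( close ) ) ;
for ii = m + 1 : numel( close ) - m
  smoothed( ii ) = coeffs' * close( ii - m : ii + m ) ; % dot product per window
endfor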

I would particularly draw readers' attention to the derivative(s) link above. On this page there is an animation of the slope (1st derivative) of a waveform. If this were applied to a price time series the possible applications become apparent: a slope of zero will pick out local highs and lows to identify local support and resistance levels; an oscillator zero crossing will give signals to go long or short; oscillator peaks will give signals to tighten stops or place profit taking orders. And why stop there? Why not use the second derivative as a measure of the acceleration of a price move? And how about the jerk and the jounce of a price series? I don't know about readers, but I have never seen a trading article, forum post or blog talk about the jerk and jounce of a price series. Of course, if one were to plot all of these as indicators I'm sure it would just be confusing and nigh impossible to come up with a cogent rule set to trade with. However, one doesn't have to do this oneself, and this is what I was alluding to in my earlier post when I said that the Savitzky-Golay framework might provide unique and informative inputs to my neural net trading system. Provided enough informative data is available for training, the rule set will be encapsulated within the weights of any trained neural net.

Tuesday, 10 September 2013

Savitzky-Golay Filters

Recently I was contacted by a reader who asked me if I had ever looked into Savitzky-Golay filters and my reply was that I had and that, in my ignorance, I hadn't found them useful. However, thinking about it, I think that I should look into them again. The first time I looked was many years ago when I was an Octave coding novice and also my general knowledge about filters was somewhat limited.

As a prelude to this new investigation, I have plotted below two charts of real ES Mini closes in cyan, along with a Savitzky-Golay filter in red and the "Super Smoother" in green for comparative purposes.

The Savitzky-Golay filter is easily available in Octave by a simple call such as

filter = sgolayfilt( close , 3 , 11 ) ;

which is the actual call used for the plots above: an 11 bar cubic polynomial fit. As can be seen, the Savitzky-Golay filter is generally as smooth as the super smoother but additionally seems to have less lag and doesn't overshoot at turning points like the super smoother does. Given a choice between the two, I'd prefer to use the Savitzky-Golay filter.

However, there is a serious caveat to the use of the S-G filter - it's a centred filter, which means that its calculation requires "future" values. This feature is well demonstrated by the animated figure at the top right of the above linked Wikipedia page. Although I can't accurately recall the reason(s) I abandoned the S-G filter all those years ago, I'm pretty sure this was an important factor. The Wikipedia page suggests how this might be overcome in its "Treatment of first and last points" section. Another way to approach this could be to adapt the methodology I employed in creating my perfect oscillator indicator, and this is in fact what I intend to do over the coming days and weeks. In addition to this I shall endeavour to make the length of the filter adaptive rather than having a fixed bar length polynomial calculation. 
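As a sketch of the sort of end point treatment I have in mind (this trailing window approach is my assumption, not the method described on the Wikipedia page), a polynomial can be least squares fitted to the most recent bars only and evaluated at the newest bar:

% minimal sketch of an end point ("causal") alternative: fit a cubic to the
% trailing 11 bars and evaluate the fit at the most recent bar only
n = 11 ; p = 3 ;
sg_end = nan( size( close ) ) ;
for ii = n : numel( close )
  c = polyfit( ( 1 : n )', close( ii - n + 1 : ii ), p ) ;
  sg_end( ii ) = polyval( c, n ) ; % polynomial value at the newest bar
endfor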

And the reason for setting myself this task? Apart from perhaps creating an improved smoother following my recent disappointment with frequency domain smoothing, there is the possibility of creating useful inputs for my Neural Net trading algorithm: the framework of S-G filters allows for first, second, third and fourth derivatives to be easily calculated and I strongly suspect that, if my investigations and coding turn out to be successful, these will be unique and informative trading inputs.

Wednesday, 16 November 2011

Update on the Perfect Moving Average and Oscillator

It has been some time since my last post and the reason for this was, in large part, due to my changing over from an Ubuntu to a Kubuntu OS and the resultant teething problems etc. However, all that is over so now I return to the Perfect Moving Average (PMA) and the Perfect Oscillator (PO).

In my previous post I stated that I would show these on real world data, but before doing this there was one thing I wanted to investigate - the accuracy of the price projection algorithm upon which the average and oscillator calculations are based. Below are four sample screenshots of these investigations.

Sideways Market

Up Trending Market

Down Trending Market

The charts show a simulated "ideal" time series (cyan) with two projected price series of 10 points each (red and green), plotted concurrently with the actual series points of which they are projections, or predictions. Both projections start at the point at which the red line "emerges" from the actual series. As can be seen, the green projection is much more accurate than the red; the green is the improved projection algorithm I have recently been working on.

As a result of this improvement, and due to the nature of the internal calculations, I expect that the PMA will follow prices much more closely and that the PO will lead prices by a quarter cycle, compared with my initial price projection algorithm. More on this in the next post.

Friday, 7 October 2011

The Theoretically Perfect Moving Average and Oscillator

Readers of this blog will probably have noticed that I am partial to using concepts from digital signal processing in the development of my trading system. Recently I "rediscovered" this PDF, written by Tim Tillson, on Google Docs, which has a great opening introduction:

" 'Digital filtering includes the process of smoothing, predicting, differentiating,
integrating, separation of signals, and removal of noise from a signal. Thus many people
who do such things are actually using digital filters without realizing that they are; being
unacquainted with the theory, they neither understand what they have done nor the
possibilities of what they might have done.'

This quote from R. W. Hamming applies to the vast majority of indicators in technical
analysis."

The purpose of this blog post is to outline my recent work in applying some of the principles discussed in the linked PDF.

Long time readers of this blog may remember that back in April this year I abandoned work on the AFIRMA trend line because I was dissatisfied with the results. The code for that required projecting prices forward using my leading signal code, and I now find that I can reuse the bulk of this code to create a theoretically perfect moving average and oscillator. I have projected prices forward by 10 bars and then taken the average of the forward and backward exponential moving averages, as per the linked PDF, to create a series of averages that theoretically are in phase with the underlying price; the mean of all these averages produces my "perfect" moving average. Similarly, I have done the same to create a "perfect" oscillator, and the results are shown in the images below.
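A minimal sketch of the forward/backward averaging step for a single EMA length is below; proj stands in for the 10 bar forward projection, the algorithm for which is not shown:

% forward/backward EMA averaging on the projection-extended series; proj
% stands in for the 10 bar forward price projection
alpha = 2 / ( 10 + 1 ) ;
extended = [ price ; proj ] ;                    % actual series plus projection
fwd = filter( alpha, [ 1, -( 1 - alpha ) ], extended, ( 1 - alpha ) * extended( 1 ) ) ;
rev = flipud( filter( alpha, [ 1, -( 1 - alpha ) ], flipud( extended ), ( 1 - alpha ) * extended( end ) ) ) ;
zero_phase_ma = ( fwd + rev ) ./ 2 ;             % theoretically in phase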
Sideways Market
Trending up Market
Trending down Market

The upper panes show "price" and its perfect moving average, the middle panes show the perfect oscillator, its one bar leading signal, and exponential standard deviation bands set at 1 standard deviation above and below an exponential moving average of the oscillator. The lower panes show the %B indicator of the oscillator within the bands, offset so that the exponential standard deviation bands are at 1 and -1 levels.
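For clarity, here is a minimal sketch of that offset %B calculation, assuming osc_ema and osc_esd are the precomputed exponential moving average and exponential standard deviation of the oscillator:

% offset %B of the oscillator within the bands: 0 at the mid line, +1 and -1
% at the upper and lower exponential standard deviation bands
upper_band = osc_ema + osc_esd ;
lower_band = osc_ema - osc_esd ;
percent_b = 2 .* ( osc - lower_band ) ./ ( upper_band - lower_band ) - 1 ;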

Even a cursory examination of many charts such as these shows the efficacy of the signals generated by the crossovers of price with its moving average, of the oscillator with its leading signal, and of the %B indicator crossing the 1 and -1 levels, when applied to idealised data. My next post will discuss the application to real price series.