
Monday, 15 July 2013

My NN Input Tweak

As I said in my previous post, I wanted to come up with a "principled" method of transforming market data into inputs more suitable for my classification NN. This post was going to be about my investigation of various band-pass filters, Fast Fourier transform filters set to eliminate cycles shorter than the dominant cycle in the data, FIR filters, and a form of the Hilbert-Huang transform. However, while looking for a link to another site I happened upon a so-called "super smoother" filter; more on this later.

Firstly, I would like to explain how I added realistic "noise" to my normal, idealised market data. Some time ago I looked at, and then abandoned, the idea of using an AFIRMA smoothing algorithm, an example of which is shown below:
The cyan line is the actual data to be smoothed and the red line is an 11 bar, Blackman window function AFIRMA smooth of this data. If one imagines this red smooth to be one realisation of my idealised market data, i.e. the real underlying signal, then the cyan data is the signal plus noise. I have taken the measure of the amount of noise present to be the difference between the red and cyan lines, normalised by the standard deviation of Bollinger bands applied to the red AFIRMA smooth. I did this over all the historical data I have and combined the results into one noise distribution file. When creating my idealised data it is then a simple matter to apply a Bollinger band to it and add un-normalised random samples from this file as noise, creating a realistically noisy, idealised time series for testing purposes.
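The extraction and re-application steps above can be sketched in plain C++. This is a minimal sketch, not my actual code: all the function names are hypothetical, and a simple rolling-window standard deviation stands in for the Bollinger band calculation.

```cpp
#include <cassert>
#include <cmath>
#include <cstdlib>
#include <vector>

// Rolling (population) standard deviation over a 'period' bar window,
// standing in for the Bollinger band standard deviation.
std::vector<double> rolling_std(const std::vector<double>& x, int period) {
    std::vector<double> out(x.size(), 0.0);
    for (size_t i = period - 1; i < x.size(); ++i) {
        double mean = 0.0;
        for (int j = 0; j < period; ++j) mean += x[i - j];
        mean /= period;
        double var = 0.0;
        for (int j = 0; j < period; ++j) var += (x[i - j] - mean) * (x[i - j] - mean);
        out[i] = std::sqrt(var / period);
    }
    return out;
}

// Build the noise distribution: (price - smooth) normalised by the
// rolling standard deviation of the smooth.
std::vector<double> extract_noise(const std::vector<double>& price,
                                  const std::vector<double>& smooth, int period) {
    std::vector<double> sd = rolling_std(smooth, period);
    std::vector<double> noise;
    for (size_t i = period - 1; i < price.size(); ++i)
        if (sd[i] > 0.0) noise.push_back((price[i] - smooth[i]) / sd[i]);
    return noise;
}

// Make an idealised series realistically noisy: draw random samples from
// the stored distribution and un-normalise them with the idealised
// series' own rolling standard deviation.
std::vector<double> add_noise(const std::vector<double>& ideal,
                              const std::vector<double>& noise_dist, int period) {
    std::vector<double> sd = rolling_std(ideal, period);
    std::vector<double> out = ideal;
    for (size_t i = period - 1; i < ideal.size(); ++i)
        out[i] += noise_dist[std::rand() % noise_dist.size()] * sd[i];
    return out;
}
```

The key point is that the noise samples are stored in normalised units, so the same distribution file can be re-scaled to any idealised series via that series' own Bollinger band width.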

Now for the "super smoother" filter. Details are freely available in the white paper linked here, and my Octave C++ .oct file implementation is given in the code box below.
#include <octave/oct.h>
#include <octave/dColVector.h>
#include <math.h>
#define PI 3.14159265

DEFUN_DLD ( super_smoother, args, nargout,
  "-*- texinfo -*-\n\
     @deftypefn {Function File} {} super_smoother (@var{n})\n\
     This function smooths the input column vector by using Ehlers'\n\
     super smoother algorithm set for a 10 bar critical period.\n\
     @end deftypefn" )
{
  octave_value_list retval_list ;
  int nargin = args.length () ;

// check the input arguments
    if ( nargin != 1 )
    {
        error ("Invalid argument. Argument is a column vector of price") ;
        return retval_list ;
    }

    if (args(0).length () < 7 )
    {
        error ("Invalid argument. Input vector must contain at least 7 elements") ;
        return retval_list ;
    }

    if (error_state)
    {
        error ("Invalid argument. Argument is a column vector of price") ;
        return retval_list ;
    }
// end of input checking

  ColumnVector input = args(0).column_vector_value () ;
  ColumnVector output = args(0).column_vector_value () ;
  
  // Declare variables
  double a1 = exp( -1.414 * PI / 10.0 ) ;
  double b1 = 2.0 * a1 * cos( (1.414*180.0/10.0) * PI / 180.0 ) ;
  double c2 = b1 ;
  double c3 = -a1 * a1 ;
  double c1 = 1.0 - c2 - c3 ;
 
  // main loop
  for ( octave_idx_type ii (2) ; ii < args(0).length () ; ii++ )
      {
      output(ii) = c1 * ( input(ii) + input(ii-1) ) / 2.0 + c2 * output(ii-1) + c3 * output(ii-2) ; 
      } // end of for loop
     
  retval_list(0) = output ;

  return retval_list ;
  
} // end of function
I like this filter because it smooths away all cycles below the 10 bar critical period, and long-time readers of this blog may recall from this post that there are no dominant cycle periods in my EOD data that are this short. The filter produces very nice, smooth curves which act as a good model for the idealised market: smooth curves of the kind my NN system was trained on. A typical result on synthetic data is shown below,
where the red line is a typical example of "true underlying" NN training data, the cyan is the red plus realistic noise as described above, and the yellow is the final "super smoother" filter of the cyan "price". There is a bit of lag in this filter, but I'm not unduly concerned by this as it is intended as input to the classification NN, not the market timing NN. Below is a short film of the performance of this filter on the last two years or so of the E-mini S&P contract. The cyan line is the raw input and the red line is the super smoother output.
Non-embedded view here.

Friday, 7 October 2011

The Theoretically Perfect Moving Average and Oscillator

Readers of this blog will probably have noticed that I am partial to using concepts from digital signal processing in the development of my trading system. Recently I "rediscovered" this PDF, written by Tim Tillson, on Google Docs, which has a great opening introduction:

" 'Digital filtering includes the process of smoothing, predicting, differentiating,
integrating, separation of signals, and removal of noise from a signal. Thus many people
who do such things are actually using digital filters without realizing that they are; being
unacquainted with the theory, they neither understand what they have done nor the
possibilities of what they might have done.'

This quote from R. W. Hamming applies to the vast majority of indicators in technical
analysis."

The purpose of this blog post is to outline my recent work in applying some of the principles discussed in the linked PDF.

Long-time readers of this blog may remember that back in April this year I abandoned work I was doing on the AFIRMA trend line because I was dissatisfied with the results. The code for this required projecting prices forward using my leading signal code, and I now find that I can reuse the bulk of this code to create a theoretically perfect moving average and oscillator. I have projected prices forward by 10 bars and then taken the average of the forward and backward exponential moving averages, as per the linked PDF, to create a series of averages that are theoretically in phase with the underlying price; the mean of all these averages is my "perfect" moving average. Similarly, I have created a "perfect" oscillator, and the results are shown in the images below.
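The forward/backward EMA averaging can be sketched as follows. This is a simplified illustration with hypothetical function names: the real calculation runs on a series extended with 10 projected bars, whereas this sketch assumes the whole series is already available.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Forward (causal) exponential moving average.
std::vector<double> ema_forward(const std::vector<double>& x, double alpha) {
    std::vector<double> out(x.size());
    out[0] = x[0];
    for (size_t i = 1; i < x.size(); ++i)
        out[i] = alpha * x[i] + (1.0 - alpha) * out[i - 1];
    return out;
}

// Backward EMA: the same recursion run from the last bar towards the first.
std::vector<double> ema_backward(const std::vector<double>& x, double alpha) {
    std::vector<double> out(x.size());
    size_t n = x.size();
    out[n - 1] = x[n - 1];
    for (size_t i = n - 1; i-- > 0; )
        out[i] = alpha * x[i] + (1.0 - alpha) * out[i + 1];
    return out;
}

// The forward EMA lags the signal and the backward EMA leads it by the
// same amount, so their average is (in the interior of the series)
// in phase with the underlying price.
std::vector<double> zero_phase_ma(const std::vector<double>& x, double alpha) {
    std::vector<double> f = ema_forward(x, alpha);
    std::vector<double> b = ema_backward(x, alpha);
    std::vector<double> out(x.size());
    for (size_t i = 0; i < x.size(); ++i) out[i] = 0.5 * (f[i] + b[i]);
    return out;
}
```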
Sideways Market
Trending up Market
Trending down Market

The upper panes show "price" and its perfect moving average, the middle panes show the perfect oscillator, its one bar leading signal, and exponential standard deviation bands set at 1 standard deviation above and below an exponential moving average of the oscillator. The lower panes show the %B indicator of the oscillator within the bands, offset so that the exponential standard deviation bands are at 1 and -1 levels.
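The offset %B in the lower panes can be computed as below. This is my construction spelled out as a sketch (`percent_b_offset` is a hypothetical name): rescaling the classic %B from [0, 1] to [-1, 1] puts the exponential standard deviation bands at the 1 and -1 levels.

```cpp
#include <cassert>

// %B of a value within exponential standard deviation bands, offset so
// that the bands sit at +1 and -1 rather than the classic 1 and 0.
double percent_b_offset(double value, double ema, double esd) {
    double upper = ema + esd;   // band 1 standard deviation above
    double lower = ema - esd;   // band 1 standard deviation below
    double pb = (value - lower) / (upper - lower); // classic %B: 0 at lower band, 1 at upper
    return 2.0 * pb - 1.0;      // rescale so the bands are at -1 and +1
}
```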

Even cursory examination of many charts such as these shows the efficacy of the signals generated by the crossovers of price with its moving average, the oscillator with its leading signal and the %B indicator crossing the 1 and -1 levels, when applied to idealised data. My next post will discuss the application to real price series.

Sunday, 10 April 2011

Coding and testing of AFIRMA completed

Following on from the previous post, the coding of the AFIRMA trend line using the leading function values as a proxy for the "peek into the future" values of price is complete, and I have to say that the results are quite disappointing. Using the rules outlined in the previous post, it soon became obvious from cursory scanning of the equity curves that my version of the AFIRMA trend line did not live up to its early, potential promise. In fact the equity curves were so disappointing that I did not even bother to do the Monte Carlo permutation and bootstrap tests. The equity curves were no better than those produced by my simple benchmark suite of "systems," and the drawdowns were such that I would never trade the AFIRMA as a stand-alone "system," at least with the rules outlined in the previous post.

Below is a screen shot of the AFIRMA with a window length of 21 (peeks 10 days into the future) shown on the last 150 daily bars of the S&P E-mini. The red line is my version of it, with a Blackman-Harris window, and the yellow and green lines are two original versions with a Blackman and Blackman-Harris window. As can be seen, the original versions are smooth and accurately pick out major turning points whilst my version is not as smooth and gives many false signals that result in losses, even during a trending period.

Simple analysis of this chart, knowing the reasoning and coding behind it, shows why my version fails as a directional system. Each time the price moves contrary to any immediately prevailing trend, even those of short duration, the leading functions project this small movement as if it were the turning point of a major cycle and hence one ends up with trades in the opposite direction of the major trend, the result being that one is whipsawed in and out of this major trend. My version of the AFIRMA is simply too sensitive to minor price direction changes and I do not really have an idea as to how I can dampen this sensitivity. Smoothing it would probably be pointless as one might as well just smooth the prices directly.

However, all is not completely lost. The fact that my AFIRMA is so sensitive could be useful in identifying pullbacks in trends, acting as a set up to add to positions or for continuation trades. This is something I may investigate in the future, and this idea has been added to my "to do" list. For the moment I do not think that working more on the AFIRMA would be productive.

For interest, this second chart shows AFIRMA trend lines with a window length of nine (peeks four days into the future). It can be seen that my version of AFIRMA is quite robust in that the two trend lines (this chart and the one above) have different length windows but are almost identical.

Thursday, 31 March 2011

AFIRMA trend line

I recently came across a blog post here that talks about an Autoregressive Finite Impulse Response MA, with a link to Metatrader 4 code here. Whilst the blogger in question dismisses the AFIRMA as being "completely useless," I think that in fact it could be quite useful for me. I can guess that s/he dismisses it because the windowing function "peeks into the future" (10 days in the case of the default setting of 21).

However, using the leading functions I have talked about in my recent posts, I believe it might be possible to use their values as a proxy for this "peek into the future." Essentially, the idea is to take the current bar's 1 bar leading function value of the Cybercycle as a proxy for the next bar's actual Cybercycle value, and add this to the current bar's Instantaneous Trendline value plus its 1 bar momentum to arrive at a "best guess" for the next bar's estimated value. All the indicator calculations are then re-run using this estimated value, a new 1 bar leading function is calculated and added, and so on: wash, rinse and repeat as required.
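The loop structure of this wash-rinse-repeat projection might look like the following sketch. The `estimate_next` callback is a hypothetical stand-in for the real combination of the 1 bar leading Cybercycle value, the Instantaneous Trendline value and its 1 bar momentum; only the recompute-and-append scaffolding is shown.

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Append 'bars' one-step-ahead estimates to the series, recomputing the
// estimate from the extended series each time round the loop.
std::vector<double> project_forward(
    std::vector<double> series, int bars,
    const std::function<double(const std::vector<double>&)>& estimate_next)
{
    for (int k = 0; k < bars; ++k)
        series.push_back(estimate_next(series)); // wash, rinse, repeat
    return series;
}
```

Because each new estimate is fed back in before the next one is computed, errors compound with the projection depth, which is one reason to test the AFIRMA with a perfect "peek" first.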

I can envisage that the coding of this will be difficult and time consuming, so before attempting to do so I will conduct some basic tests of the AFIRMA as is, with the luxury of this perfect "peek into the future" on historical data. I will have to be satisfied that the AFIRMA is possibly worth using before I put time and effort in to this coding task.