Thursday, 20 March 2014

Update on Recent Work

It has been almost two months since my last post, and during this time I have been working on a few different things, all related in one way or another to my desire to create a rolling NN training regime. First off, I have been giving some thought to the exact methodology to use, and two approaches came to mind:
  1. a rolling look back period of n bars, similar to a moving average
  2. selecting non-consecutive periods of price history that are similar to the most recent history
I have not done any work on the first because I feel it might lead to an unbalanced training set, so I have been working on the second idea. It seemed natural to revisit my earlier work on market classification, with the idea of training a NN for the current market regime using the most recent n bars that have the same market regime classification as the current bar. However, having been somewhat disappointed with the results of my previous work in this area, I have been looking at SVM classifiers, in particular the libsvm library. To facilitate this, and following on from my previous post where I mentioned the Comp-Engine timeseries website, I have been hand-engineering features for inputs to the SVM. Below is a short video which shows the four features I have come up with. The x and y axes show the same two features in both parts of the video, with the z axis showing the third and fourth features respectively. The data are those obtained from my usual idealised market types, with added noise to try to simulate more realistic market conditions. The different colours indicate the five market types being modelled.
As can be seen, there is nice separation between the market types, and the SVM achieves over 98% cross-validation accuracy on this training set. Despite this, when applied to real market data the performance is yet again disappointing, so for now I have chosen not to pursue this avenue of investigation any further.
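For anyone curious, the cross-validation figure quoted above can be obtained in just a couple of lines with the libsvm Octave/MATLAB interface. The sketch below is illustrative only: "features", "labels" and "new_features" are placeholder names, and the parameter values are not tuned.
% minimal sketch of SVM cross-validation with the libsvm Octave/MATLAB interface.
% "features" is an N x 4 matrix of the hand-engineered features and "labels" an
% N x 1 vector of market type labels ( 1 to 5 ) -- both placeholders. With the
% -v option svmtrain returns the n-fold cross-validation accuracy instead of a model.
cv_accuracy = svmtrain( labels , features , '-s 0 -t 2 -c 1 -g 0.25 -v 5' ) ;

% once satisfied, train on the full set and classify new feature vectors; the
% label argument to svmpredict can be a dummy vector if the true labels are unknown
model = svmtrain( labels , features , '-s 0 -t 2 -c 1 -g 0.25' ) ;
predicted = svmpredict( zeros( rows( new_features ) , 1 ) , new_features , model ) ;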

In addition to all the above, I have "discovered" the Octave SourceForge nan package, which I may investigate more fully in due course. I have also been working through another Massive Open Online Course, this time Statistical Learning, which is currently in its final week. This last part of the course has alerted me to a possible new area of investigation, distance correlation, which I had heard of before but not fully appreciated.
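As a pointer to myself for the coming posts, my current understanding is that the sample distance correlation between two vectors can be computed along the lines of the minimal Octave sketch below. This is written from the textbook definition, is unoptimised, and is not taken from any of my working code.
% minimal sketch of the sample distance correlation between two vectors x and y
function dcor = distance_correlation( x , y )
  x = x(:) ; y = y(:) ;
  a = abs( bsxfun( @minus , x , x' ) ) ; % pairwise distance matrices
  b = abs( bsxfun( @minus , y , y' ) ) ;
  A = bsxfun( @minus , a , mean( a , 1 ) ) ;                  % double centring
  A = bsxfun( @minus , A , mean( a , 2 ) ) + mean( a(:) ) ;
  B = bsxfun( @minus , b , mean( b , 1 ) ) ;
  B = bsxfun( @minus , B , mean( b , 2 ) ) + mean( b(:) ) ;
  dcov2  = mean( A(:) .* B(:) ) ;                             % squared distance covariance
  dvar_x = mean( A(:) .^ 2 ) ;
  dvar_y = mean( B(:) .^ 2 ) ;
  dcor = sqrt( dcov2 / sqrt( dvar_x * dvar_y ) ) ;
endfunction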

Finally, I have also been reassessing the code I use for calculating dominant cycle periods. It is these last two, distance correlation and the period code, that I'm going to look at more fully over the coming days.


Monday, 27 January 2014

Creating Neural Net Training Features

In my last post I wrote about how I had successfully coded a basic walk forward neural net training regime and that I was then going to work on creating useful training features. To that end, for the past few weeks I have been doing a lot of online searching and have luckily come across the Comp-Engine timeseries website. On this site there is a code tab with a plethora of ways of extracting time series features for classification purposes, and what's more the code is MATLAB code, which is almost fully compatible with the Octave software I do my development in. The only drawback I can see is that some of the code references, or depends on, MATLAB toolboxes, which might not be freely available to non-MATLAB license holders. It is my intention for the immediate future to investigate this resource more fully.
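To give a flavour of the sort of scalar features such code extracts from a window of prices, here is a trivial hand-rolled Octave sketch of my own. These are purely illustrative and are not taken from the Comp-Engine code base; "close" is a placeholder price vector.
% toy examples of scalar time series features from a window of closes
x = close( end-99 : end ) ;      % the most recent 100 closes ( placeholder data )
r = diff( x ) ;                  % one bar differences
rc = r - mean( r ) ;
ac1 = sum( rc(1:end-1) .* rc(2:end) ) / sum( rc .^ 2 ) ;       % lag 1 autocorrelation
vol_ratio = std( r(end-24:end) ) / std( r ) ;                  % recent vs. whole window volatility
range_pos = ( x(end) - min( x ) ) / ( max( x ) - min( x ) ) ;  % where the last close sits in the window's range
feature_vector = [ ac1 vol_ratio range_pos ] ;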

Thursday, 9 January 2014

Neural Net Walk Forward Octave Training Code

Following on from my last post I have now completed the basic refactoring of my existing NN Octave code to a "Walk Forward" training regime. After some simple housekeeping code to load/extract data and create inputs, the basic nuts and bolts of this new walk forward Octave code are given in the code box below.
% get the corresponding targets - map the target labels as binary vectors of 1's and 0's for the class labels
n_classes = 3 ; % 3 outputs in softmax layer
targets = eye( n_classes )( l_s_n_pos_vec, : ) ; % there are 3 labels

% get NN features for all data
[ binary_features , scaled_features ] = windowed_nn_input_features( open, high, low, close, tick ) ;

tic ;

% a uniformly distributed randomness source for seeding & reproducibility purposes
global randomness_source
load randomness_source.mat

% vector to hold final NN classification results
nn_classification = zeros( length(close) , 2 ) ;

ii_begin = 2000 ;
ii_end = 2500 ;

for ii = ii_begin : ii_end

% get the look back period 
lookback_period = max( period(ii-2,1)+2, period(ii,1) ) ;

% extract features and targets from windowed_nn_input_features output
training_binary_features = binary_features( ii-lookback_period:ii , : ) ;
training_scaled_features = scaled_features( ii-lookback_period:ii , : ) ;
training_targets = targets( ii-lookback_period:ii , : ) ;

% set some default values for NN training
n_hid = 2.0 * ( size( training_binary_features, 2 ) + size( training_scaled_features, 2 ) ) ; % units in hidden layer
lr_rbm = 0.02 ;       % default value for learning rate for rbm training
learning_rate = 0.5 ; % default value for learning rate for back prop training
n_iterations = 500 ; % number of training iterations to perform

% get the rbm trained weights for the input layer to hidden layer for training_binary_features
rbm_w_binary = train_rbm_binary_features( training_binary_features, n_hid, lr_rbm, n_iterations ) ;

% and now get the rbm trained weights of the input layer to hidden layer for training_scaled_features
rbm_w_scaled = train_rbm_scaled_features( training_scaled_features, n_hid, lr_rbm, n_iterations ) ;

% now stack these to form the initial input layer to hidden layer weight matrix
initial_input_weights = [ rbm_w_binary ; rbm_w_scaled ] ;

% now feedforward through hidden layer to get the hidden layer output
hidden_layer_output = logistic( [ training_binary_features training_scaled_features ] * initial_input_weights ) ;

% now add the hidden layer bias unit to the above output of the hidden layer and RBM train
hidden_layer_output = [ ones( size(hidden_layer_output,1), 1 ) hidden_layer_output ] ;
initial_softmax_weights = train_rbm_output_w( hidden_layer_output, n_classes, lr_rbm, n_iterations ) ;

% now using the above rbm trained weight matrices as initial weights instead of random initialisation,
% do the backpropagation training with targets by dropping the last two sets of features for which 
% there are no targets and adding in a column of ones for the input layer bias unit
% First, extract the relevant features
all_features = [ training_binary_features(1:end-2,:) training_scaled_features(1:end-2,:) ] ;
training_targets = training_targets(1:end-2,:) ;

% and do the backprop training
[ final_input_w, final_softmax_w ] = rbm_backprop_training( all_features, training_targets, initial_input_weights, initial_softmax_weights, n_hid, n_iterations, learning_rate, 0.9, false) ;

% get the NN classification of the ii-th bar for this iteration
val_1 = logistic( [ training_binary_features(end-1,:) training_scaled_features(end-1,:) ] * final_input_w ) ;
val_2 = [ 1 val_1 ] * final_softmax_w ;
class_normalizer = log_sum_exp_over_cols( val_2 ) ;
log_class_prob = val_2 - repmat( class_normalizer , [ 1 , size( val_2 , 2 ) ] ) ;
class_prob = exp( log_class_prob ) ;
[ dump , choice_1 ] = max( class_prob , [] , 2 ) ;

if ( choice_1 == 3 )
   choice_1 = ff_pos_vec(ii-2) ;
end

val_1 = logistic( [ training_binary_features(end,:) training_scaled_features(end,:) ] * final_input_w ) ;
val_2 = [ 1 val_1 ] * final_softmax_w ;
class_normalizer = log_sum_exp_over_cols( val_2 ) ;
log_class_prob = val_2 - repmat( class_normalizer , [ 1 , size( val_2 , 2 ) ] ) ;
class_prob = exp( log_class_prob ) ;
[ dump , choice ] = max( class_prob , [] , 2 ) ;

nn_classification( ii , 1 ) = choice ;

if ( choice == 3 )
   choice = choice_1 ;
end

nn_classification( ii , 2 ) = choice ;

end % end of for ii loop

toc ;

% write to file for plotting in Gnuplot
axis = (ii_begin:1:ii_end)';
output_v2 = [axis,open(ii_begin:ii_end,1),high(ii_begin:ii_end,1),low(ii_begin:ii_end,1),close(ii_begin:ii_end,1),nn_classification(ii_begin:ii_end,2),ff_pos_vec(ii_begin:ii_end,:) ] ;
dlmwrite('output_v2',output_v2)
This code is quite heavily commented, but to make things clearer here is what it does:-
  1. creates a matrix of binary features and a matrix of scaled features ( in the range 0 to 1 ) by calling the C++ .oct function "windowed_nn_input_features." The binary features matrix includes the input layer bias unit
  2. RBM trains separately on each of the above features matrices to get weight matrices
  3. horizontally stacks the two weight matrices from step 2 to create a single, initial input layer to hidden layer weight matrix
  4. matrix multiplies the input features by the combined RBM trained matrix from step 3 and feeds forward through the logistic function hidden layer to create a hidden layer output
  5. adds a bias unit to the hidden layer output from step 4 and then RBM trains to get a Softmax weight matrix
  6. uses the initial weight matrices from steps 3 and 5 instead of random initialisation for backpropagation training of a Feedforward neural network via the "rbm_backprop_training" function
  7. uses the trained NN from step 6 to predict/classify the most recent candlestick bar and records this NN prediction/classification. Steps 2 to 7 are contained in a for loop which slides a moving window across the input data
  8. finally writes output to file
At the moment the features I'm using are very simplistic, just for unit testing purposes. My next post will show the results of the above with a fuller set of features.

Monday, 9 December 2013

Preliminary Tests of Walk Forward NN Training Idea

In my "Suspending Work on Brownian Bands" post I suggested that I wanted to do some preliminary tests of the idea of training a neural net on a walk forward basis. The test I had in mind was a Monte Carlo permutation test as described in David Aronson's book "Evidence Based Technical Analysis." Before I go to the trouble of refactoring all my NN code I would like to be sure that, if a NN can be successfully trained in this way, the end result will have been worth the effort. With this aim in mind I used the implementation of the MC test routine below to test the idea from the above-linked post. The code, as usual, is Octave .oct function C++ code, but takes advantage of Octave's in-built libraries to avoid loops where possible.
DEFUN_DLD ( position_vector_permutation_test, args, nargout, 
"-*- texinfo -*-\n\
@deftypefn {Function File} {} position_vector_permutation_test (@var{returns,position_vector_matrix,iters})\n\
This function takes as inputs a vector of returns, a position vector matrix the columns of which are\n\
the individual position vectors that are to be part of the test and the number of iterations for the test.\n\
If the matrix consists only of one position vector the test performed is the single position vector permutation\n\
test. If there are two or more position vectors the test performed is the data mining bias adjusted permutation\n\
test. The null hypothesis that is tested is that the position vector(s) give results that are no better than those\n\
a random ordering of equivalent net positions would give. The alternative hypothesis is that the position vector(s) give\n\
results that are better than a random ordering would give. The function returns a p-value(s) for the position vector(s)\n\
being tested. If small enough, we can reject the null hypothesis and accept the alternative hypothesis. The returned\n\
p-values are expressed as decimal fractions, i.e. 0.05 means a 5% significance level. The test statistic is the sum of returns.\n\
*IMPORTANT NOTE* Apart from basic input argument checks to avoid error messages and seg faults this function does not\n\
check that the returns vector and position vector matrix are properly aligned etc. with regard to accuracy of test results\n\
and makes no assumptions about this. It is the user's responsibility to ensure that the position vector(s) are offset to\n\
match the returns for the particular test in question.\n\
@end deftypefn" )

{
octave_value_list retval_list ; // the return value(s) are p value(s)
int nargin = args.length () ;

// check the input arguments
if ( nargin != 3 )
   {
   error ("Insufficient arguments. Inputs are a returns vector, a position vector matrix and a scalar value for iters.") ;
   return retval_list ;
   }

if ( args(0).length () != args(1).rows () )
   {
   error ("Dimensions mismatch. Length of returns vector should == No.rows of position_vector_matrix.") ;
   return retval_list ;
   }
   
if ( args(2).rows() > 1 )
   {
   error ("Invalid 3rd argument. This should be a scalar value for the number of permutations to perform.") ;
   return retval_list ;
   }
   
if ( args(2).length () != args(2).rows () )
   {
   error ("Invalid 3rd argument. This should be a scalar value for the number of permutations to perform.") ;
   return retval_list ;
   }   

if ( error_state )
   {
   error ("Error state. See usage") ;
   return retval_list ;
   }
// end of input checking

//disable_warning ( "Octave:broadcast" ) ;
set_warning_state ( "Octave:broadcast" , "off" ) ;

RowVector returns = args(0).row_vector_value () ; // Returns vector input
Matrix position_vector = args(1).matrix_value () ; // Positions vector input

// compute the return(s) of the candidate model(s) 
Matrix model_returns ( 1 , position_vector.cols() ) ;
model_returns = returns * position_vector ; // matrix multiplication

// normalised returns multiplier
Matrix normalised_multiplier ( 1 , position_vector.cols() ) ;
normalised_multiplier.fill ( 0.0 ) ;
   
if ( position_vector.cols() > 1 ) // more than one position vector
   {
   for ( octave_idx_type ii (0) ; ii < position_vector.cols() ; ii++ )
       { 
  for ( octave_idx_type jj (0) ; jj < position_vector.rows() ; jj++ )
      {
      normalised_multiplier( 0 , ii ) += fabs( position_vector( jj , ii ) ) ;  
      } // end of jj loop     
  normalised_multiplier( 0 , ii ) = sqrt( position_vector.rows() ) / sqrt( normalised_multiplier( 0 , ii ) ) ; 
       } // end of ii loop   
   } // end of if ( position_vector.cols() > 1 )   
   else // only one position vector
   {
   normalised_multiplier( 0 , 0 ) = 1.0 ;
   }

// use this normalised_multiplier to adjust the model returns
model_returns = product( model_returns , normalised_multiplier ) ;  

// Now do the Monte-Carlo replications
double temp ; // must be a double, as the returns being shuffled are doubles
int k1 ;
int k2 ;
Matrix trial_returns ( 1 , position_vector.cols() ) ;
double max_trial_return ;
Matrix counts ( 1 , position_vector.cols() ) ;
counts.fill ( 0.0 ) ;
MTRand mtrand1 ; // Declare the Mersenne Twister Class - will seed from system time

 for ( octave_idx_type ii (0) ; ii < args(2).int_value() ; ii++ ) // args(2).int_value() is the no. of permutations to perform 
     {
       trial_returns.fill( 0.0 ) ; // set trial returns to zero

       k1 = args(0).length () - 1 ; // initialise prior to shuffling the returns vector

       while ( k1 > 0 ) // While at least 2 left to shuffle
             {          
             k2 = mtrand1.randInt ( k1 ) ; // Pick an int from 0 through k1 

               if (k2 > k1)     // check if random vector index no. k2 is > than max vector index - should never happen 
                  {
                  k2 = k1 - 1 ; // But this is cheap insurance against disaster if it does happen
                  }

             temp = returns ( k1 ) ;           // allocate the last vector index content to temp
             returns ( k1 ) = returns ( k2 ) ; // allocate random pick to the last vector index
             returns ( k2 ) = temp ;           // allocate temp content to old vector index position of random pick
             k1 = k1 - 1 ;                     // count down 
             }                                 // Shuffling is complete when this while loop exits
             
      // Compute returns for this random shuffle
      trial_returns = returns * position_vector ;
      
      // and now normalise these returns
      trial_returns = product( trial_returns , normalised_multiplier ) ;
      
      max_trial_return = *std::max_element( &trial_returns( 0,0 ), &trial_returns( 0,trial_returns.cols() ) ) ;

        for ( octave_idx_type jj (0) ; jj < trial_returns.cols() ; jj++ )
     {
            if ( max_trial_return >= model_returns(0,jj) ) // If the best random system(s) beats the candidate system(s)
               {
               counts(0,jj) += 1 ; // Count it
               }
     }
     
     } // end of main ii loop

// now write out the results     
Matrix p_values ( 1 , position_vector.cols() ) ;
Matrix iters = args(2).matrix_value () ;   
p_values = quotient( counts , iters ) ; // count divided by number of permutations
 
retval_list(0) = p_values ;
 
set_warning_state ( "Octave:broadcast" , "on" ) ;

return retval_list ; // Return the output to Octave

} // end of function
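As a usage illustration only ( the variable names below are hypothetical ), a single position vector test might be called from the Octave prompt as follows, with the signal offset by one bar so that it lines up with the return it acts on, as the alignment note in the help text requires.
rets = diff( close ) ;                      % one bar point returns
position_vector = sig( 1 : end-1 ) ;        % +1/0/-1 signal, offset to match rets
p_value = position_vector_permutation_test( rets , position_vector , 5000 )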
I am happy to report that the tests were passed with flying colours, although of course this shouldn't really be surprising as there is the benefit of peeking a few days into the future in the historical record.

My next task will be to implement the walk forward version of the NN, which I anticipate will take at least several weeks to get working satisfactorily in Octave. More in due course.

Friday, 6 December 2013

Brownian Oscillator

In my previous post I said I was thinking about an oscillator based on Brownian bands, and the lower pane in the video below shows said oscillator in action. The upper pane simply shows the prices with superimposed Brownian bands. The oscillator ranges between 0 and 1 and is effectively a measurement of the randomness of price movement. The calculation is very simple: for each look back length from 1 up to the dominant cycle period, a counter is incremented by 1 if price lies between the Brownian bands for that look back length, and the final count is then divided by the dominant cycle period. The logic should be easily discernible from the code given in the code box below. According to this logic, the higher the oscillator value the more random the price movement is, and the lower the value the higher the "trendiness" of the price series. The value of the oscillator could also be interpreted as the percentage of look back periods which indicate random movement. N.B. I have changed the calculation of the bands to use absolute price differences rather than log differences - I find that this change leads to better behaved bands.

I have not done any testing of this oscillator as an indicator but I think it might have some value as a filter for a trend following system, or as a way of switching between trend following and mean reversion types of systems. The fact that the oscillator values lie in the range 0 to 1 also makes it suitable as an input to a neural net.

The Octave .oct code
DEFUN_DLD ( brownian_oscillator, args, nargout, 
"-*- texinfo -*-\n\
@deftypefn {Function File} {} brownian_oscillator (@var{price,period})\n\
This function takes price and period vector inputs and outputs a value\n\
normalised by the max look back period for the number of look back periods\n\
from 1 to max look back period inclusive for which the price is between\n\
Brownian bands.\n\
@end deftypefn" )

{
octave_value_list retval_list ;
int nargin = args.length () ;

// check the input arguments
if ( nargin != 2 )
   {
   error ( "Invalid arguments. Type help to see usage." ) ;
   return retval_list ;
   }
   
if ( args(0).length () < 50 ) // check length of price vector input
   {
   error ( "Invalid arguments. Type help to see usage." ) ;
   return retval_list ;
   }   
   
if ( args(1).length () != args(0).length () ) // lookback period length argument
   {
   error ( "Invalid arguments. Type help to see usage." ) ;
   return retval_list ;
   }

if ( error_state )
   {
   error ( "Invalid arguments. Type help to see usage." ) ;
   return retval_list ;
   }
// end of input checking   

ColumnVector price = args(0).column_vector_value () ;
ColumnVector period = args(1).column_vector_value () ; 
ColumnVector abs_price_diff = args(0).column_vector_value () ;
ColumnVector osc = args(0).column_vector_value () ; osc(0) = 0.0 ;
double sum ;
double osc_sum ;
double up_bb ;
double low_bb ;
int jj ;

for ( octave_idx_type ii (1) ; ii < 50 ; ii++ ) // initialising loop 
    {
    abs_price_diff(ii) = fabs( price(ii) - price(ii-1) ) ;
    osc(ii) = 0.0 ;
    }
    
for ( octave_idx_type ii (50) ; ii < args(0).length () ; ii++ ) // main loop
    {    
    // initialise calculation values  
    sum = 0.0 ;
    osc_sum = 0.0 ;
    up_bb = 0.0 ;
    low_bb = 0.0 ;

    // compute the absolute price difference for the current bar ( this does not depend on jj )
    abs_price_diff(ii) = fabs( price(ii) - price(ii-1) ) ;

    for ( jj = 1 ; jj < period(ii) + 1 ; jj++ ) // nested jj loop
        {
        sum += abs_price_diff(ii-jj+1) ;
        up_bb = price(ii-jj) + (sum/double(jj)) * sqrt(double(jj)) ;
        low_bb = price(ii-jj) - (sum/double(jj)) * sqrt(double(jj)) ;
      
        if ( price(ii) <= up_bb && price(ii) >= low_bb )
           {
           osc_sum += 1.0 ;
           }

        } // end of nested jj loop  
    
    osc(ii) = osc_sum / period(ii) ;
    
    } // end of main loop   

    retval_list(0) = osc ;

return retval_list ;

} // end of function
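For completeness, a hypothetical call from Octave might look like the lines below, where "period" is a vector of dominant cycle periods of the same length as close, produced by whatever cycle period code is in use, and the 0.5 threshold is an arbitrary choice for illustration.
osc = brownian_oscillator( close , period ) ;

% e.g. a crude regime filter: only allow trend following entries when fewer than
% half of the look back lengths flag the recent movement as indistinguishable from noise
trend_ok = ( osc < 0.5 ) ;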

Wednesday, 27 November 2013

Brownian Bands revisited?

Further to my most recent post there have been a few anonymous comments suggesting that Brownian bands are more useful than I supposed. In response I have added a contact form to this blog ( on the right ) so these anonymous readers may contact me if they wish. I am curious to learn how they may be using Brownian bands. I might also add that I have thought of creating a new indicator ( a kind of randomness measurement oscillator ) based on the Brownian bands concept.

Sunday, 24 November 2013

Suspending Work on Brownian Bands

Recently I have been doing a lot of work looking at various uses of the Brownian bands, but I have been unable to come up with anything unique that I feel can help me, so for now I am going to discontinue this work. There may be a time in the future when I return to this, but for now the Brownian bands are going to be put on the shelf along with my other indicators.

I have also started to become disillusioned with my attempts at creating a market classifier neural net; I think it may very well be too large a project to complete satisfactorily, so I am going to try something that is, in concept at least, simpler. In large part this change of heart is due to some recent reading I have been doing, particularly this article, in which I was struck by the phrase

"As an aside, the ideal "trades" formed by the turning points might make a good training set of trades for a neural network-based system. Rather than scanning a chart manually to come up with trades to feed into the neural network, the method described here could be used to automatically find the training set."

Also, over on the Mechanical Forex site there is a whole series of articles on neural nets which I have found useful and which has given me some new insights. I am now going to ask readers to suspend their disbelief for a few moments while I explain.

Imagine it is a few minutes before the trading open and you have to decide whether to enter long, short or remain out of the market when the market opens. However, unlike any other market participant, the Gods have given you a gift - the ability to see the near future - which means you know the OHLC for today, tomorrow and the next trading day. But, as with all gifts from the Gods, there is a catch - you can only enter and exit trades at the market open; no buying the low and selling the high of a candlestick bar, and no intraday trading, is permitted by the Gods. What would such a blessed but responsible trader do? How would they make their trade decision? Being responsible, s/he might consider the reward to risk ratio and only take a trade if this ratio is greater than one. On the long side the reward is the higher of tomorrow's and the day after's opens minus today's open, and the risk is today's open minus the minimum low of today, tomorrow and the day after; similar reasoning, in mirror image, applies on the short side. If neither of these ratios is greater than one, no new trade will be entered today.
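To make the above concrete, a rough Octave sketch of this labelling logic is given below. The 1/2/3 class numbering ( with 3 as neutral ) and the tie break in favour of the long side are my own assumptions for illustration, not a description of the code used in the video.
% label each bar with the action to take at its open, given perfect foresight
% of the current and next two bars: 1 = long, 2 = short, 3 = neutral
labels = 3 .* ones( size( close ) ) ;

for ii = 1 : length( close ) - 2

    long_reward  = max( open( ii+1 : ii+2 ) ) - open( ii ) ;  % best later open to exit at
    long_risk    = open( ii ) - min( low( ii : ii+2 ) ) ;     % worst adverse excursion

    short_reward = open( ii ) - min( open( ii+1 : ii+2 ) ) ;
    short_risk   = max( high( ii : ii+2 ) ) - open( ii ) ;

    if ( long_risk > 0 && long_reward / long_risk > 1 )
       labels( ii ) = 1 ;
    elseif ( short_risk > 0 && short_reward / short_risk > 1 )
       labels( ii ) = 2 ;
    end

end % end of ii loop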

The upper pane in the video below shows the coding of such logic. The decision to go long is rendered in blue, short in red and neutral in green. The colour of the bar indicates the action to be taken at the open of the next bar. However, it might be that when a neutral signal is given there may already be a position held, in which case the existing position is held for the duration of the neutral signal. This is effectively an always in the market, stop and reverse signal, and this is shown in the lower pane of the video. To make things slightly easier to see, the entry/exit action occurs at the open of a new colour bar, i.e. if the bars change from blue to red the long is exited and the short initiated at the open of the first red bar.
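A minimal sketch of how the neutral bars can be carried forward into the always in the market position vector of the lower pane, using the hypothetical labels vector from the sketch above, might be:
% carry the last long / short label forward through neutral bars to give the
% always in the market, stop and reverse position
position = labels ;

for ii = 2 : length( position )
    if ( position( ii ) == 3 )            % neutral -- keep the existing position
       position( ii ) = position( ii-1 ) ;
    end
end % end of ii loop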
Even with a cursory viewing it can be seen what a great "system" this would be, and using neural nets to create such a "system" is now my ambition.

The plan is simple: roll a moving window along the price series and use the known relationships between bars and indicators within this window to train a locally optimised neural net. The purpose of the training will be to classify the bars as long entry, short entry or neutral as in the chart in the upper pane of the above video. At the hard right edge of the chart the last three bars will be unavailable to the neural net for training purposes, but the hope is that the neural net, sufficiently well trained on all data in the window immediately prior to these three bars, will have predictive ability for them. After all, in the main, market dynamics slowly evolve over a few bars rather than dramatically leap.

Before I embark on this new work there are a few optimisation tests I would like to conduct, and these tests will form the subject matter of my next few posts.