Tuesday, 13 October 2015

Giving up on Runge-Kutta Methods (for now?)

Over the last few weeks I have been looking at using Runge-Kutta methods for the creation of features, but I have decided to give up on this for now, simply because I think I have found a better way to accomplish what I want. I was alerted to this possible approach by this post over at http://dsp.stackexchange.com/, and following up on it I remembered that a few years ago I coded up John Ehlers' Linear Prediction Filter (the white paper for which might still be available here); my crudely transcribed code is given below:
#include <octave/oct.h>

DEFUN_DLD (linearpredict, args, , "Help String")
{
octave_value retval;

 ColumnVector a = args(0).column_vector_value ();                                   // The input price                          
 ColumnVector b(a);                                                                 // This will be the output column vector returned to Octave by "retval"
 const int n = args(1).int_value();
 int lngth = 10;                                                                 
 double g[30];
 int zz;
 for (zz = 0; zz < 30; zz++)
     g[zz] = 0.0;

 double sigPredict[30];
 for (zz = 0; zz < 30; zz++)
     sigPredict[zz] = 0.0;

 double sigPower = 0.0;
 double mu = 0.0;
 double xBar = 0.0;
 int jj = 0;                                                          
 for (octave_idx_type ii (10); ii < a.length (); ii++)                              // Loop over the input price series
     {
      //Compute the average signal power for the normalization factor
      sigPower = 0.0;
      for (jj = 1; jj <= lngth; jj++)
          sigPower = sigPower + a(ii - jj) * a(ii - jj);
      sigPower = sigPower / lngth;

      //Compute the convergence factor
      if (sigPower > 0)
          mu = 0.25 / (sigPower * 10);

      //Compute signal estimate
      xBar = 0;
      for (jj = 1; jj <= lngth; jj++)
          xBar = xBar + a(ii - jj) * g[jj];

     //Compute gain coefficients
     for (jj = 1; jj <= lngth; jj++)
         g[jj] = g[jj] + (mu * (a(ii) - xBar) * a(ii - jj));

     //Compute signal prediction waveform
     for (jj = 0; jj <= lngth; jj++)
         sigPredict[jj] = a(ii - (10 - jj));

     //Extend signal prediction into the future
     int kk = 0;
     for (jj = lngth + 1; jj <= lngth + 5; jj++)
         {
          sigPredict[jj] = 0;

          for (kk = 1; kk <= lngth; kk++)
              sigPredict[jj] = sigPredict[jj] + sigPredict[jj - kk] * g[kk];
         }

      b(ii) = sigPredict[lngth + n];                                                // The n bar ahead prediction written to the output
      }                                                                             // End of loop over the price series

retval = b;                                                                         // Assign the output column vector to the return value

return retval;                                                                       // Return the output to Octave
}
This is very similar in concept to Burg's method, and I think some application of these methods shows more promise than concentrating on my naive implementation of Runge-Kutta.
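For anyone who wants to experiment with the Burg's method connection, below is a minimal Octave sketch of the same idea using the arburg function, which I assume is available from the Octave-Forge signal package; the series, model order and prediction horizon are placeholders for illustration only, not my actual feature code.

pkg load signal                                    %  assumed: Octave-Forge signal package providing arburg

%  a minimal Burg's method prediction sketch, placeholders only
price = cumsum( randn( 200 , 1 ) ) ;               %  placeholder price series
p = 10 ;                                           %  AR model order, analogous to lngth above
n_ahead = 5 ;                                      %  how many bars ahead to predict

ar_coeffs = arburg( price , p ) ;                  %  [ 1 , a1 , ... , ap ] AR coefficients

extended = price ;
for ii = 1 : n_ahead
  %  one step ahead prediction: x(t) ~ -a1*x(t-1) - ... - ap*x(t-p)
  next_val = -ar_coeffs( 2 : end ) * extended( end : -1 : end-p+1 ) ;
  extended = [ extended ; next_val ] ;
endfor

predictions = extended( end-n_ahead+1 : end ) ;    %  the 1 to n_ahead bar ahead projections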

Monday, 28 September 2015

Runge-Kutta Example and Code

Following on from my last post I thought I would, as a first step, code up a "straightforward" Runge-Kutta function and show how to deal with the fact that there is no "magic mathematical formula" to calculate the slopes that are an integral part of Runge-Kutta.

My approach is to fit a quadratic function to the last n_bar bars of price and take the slope of this via my Savitzky-Golay filter convolution code, and in doing so the Runge-Kutta value k1 can easily be obtained. The extrapolation beyond the k1 point to the points k2 and k3 is, in effect, trying to fit to points that have a half bar lead over the last available price. To accommodate this I use the last n_bar - 1 points of a 2 bar simple moving average, plus the "position" of points k2 and k3, to calculate the slopes at k2 and k3. A 2 bar simple moving average is used because it has a half bar lag and is effectively an interpolation of the known prices at the "half bar intervals," and therefore points k2 and k3 are one h step ahead of the last half bar interval. The k4 point is again calculated directly from prices plus the half bar interval projection from point k3. If all this seems confusing, hopefully the Octave code below will clear things up.
clear all

%  create the raw price series
period = input( 'Period? ' ) ;
sideways = zeros( 1 , 2*period ) ;
uwr = ( 1 : 1 : 2*period ) .* 1.333 / period ; uwr_end = uwr(end) ;
unr = ( 1 : 1 : 2*period ) .* 4 / period ; unr_end = unr(end) ;
dwr = ( 1 : 1 : 2*period ) .* -1.333 / period ; dwr_end = dwr(end) ;
dnr = ( 1 : 1 : 2*period ) .* -4 / period ; dnr_end = dnr(end) ;

trends = [ sideways , uwr , unr.+uwr_end , sideways.+uwr_end.+unr_end , dnr.+uwr_end.+unr_end , dwr.+uwr_end.+unr_end.+dnr_end , sideways ] .+ 2 ;
noise = randn( 1 , length(trends) ) .* 0.0 ;
price = sinewave( length( trends ) , period ) .+ trends .+ noise ;
ma_2 = sma( price , 2 ) ;
%  regress over 'n_bar' bars
n_bar = 9 ;
%  and a 'p' order fit
p = 2 ;

%  get the relevant coefficients
slope_coeffs = generalised_sgolay_filter_coeffs( n_bar , p , 1 ) ; 

%  container for 1 bar ahead projection
projection_1_bar = zeros( 1 , length( price ) ) ;

for ii = n_bar : length( price )

%  calculate k1 value i.e. values at price(ii), the most recent price
k1 = price( ii-(n_bar-1) : ii ) * slope_coeffs( : , end ) ;
projection_of_point_k2 = price(ii) + k1 / 2 ;

%  calculate k2 value
k2 = [ ma_2( ii-(n_bar-2) : ii ) ; projection_of_point_k2 ]' * slope_coeffs( : , end ) ;
projection_of_point_k3 = price(ii) + k2 / 2 ;

%  calculate k3 value
k3 = [ ma_2( ii-(n_bar-2) : ii ) ; projection_of_point_k3 ]' * slope_coeffs( : , end ) ;
projection_of_point_k4 = price(ii) + k3 / 2 ;

%  calculate k4 value
k4 = [ price( ii-(n_bar-2) : ii ) , projection_of_point_k4 ] * slope_coeffs( : , end ) ;

%  the runge-kutta weighted moving average
projection_1_bar(ii) = price(ii) + ( k1 + 2 * ( k2 + k3 ) + k4 ) / 6 ;

endfor
%  shift for plotting
projection_1_bar = shift( projection_1_bar , 1 ) ;
projection_1_bar( : , 1:n_bar ) = price( : , 1:n_bar ) ;

plot( price , 'c' , projection_1_bar , 'r' ) ;
This code produces a plot like this, without noise,

and this with noise added ( by changing the 0.0 multiplier on line 12 of the code ).

The cyan line is the underlying price and the red line is the Runge-Kutta 1 bar ahead projection. As can be seen, when the price is moving in rather straight lines the projection is quite accurate, however, at turnings there is some overshoot, which is to be expected. I'm not unduly concerned about this overshoot as my intention is simply to get the various k values as features, but this overshoot does have some implications which I will discuss in a future post.
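As an aside, the slope coefficients used in the code above come from my generalised_sgolay_filter_coeffs function, but an end-of-window slope filter of this kind can be derived directly from a least-squares polynomial fit. The sketch below shows that derivation; it is my guess at what the last column of the coefficient matrix amounts to, on the assumption that the third argument of 1 requests first-derivative (slope) coefficients, and it is not the function itself.

%  derive a Savitzky-Golay slope filter for the most recent bar of an n_bar window
n_bar = 9 ; p = 2 ;                                %  window length and polynomial order, as above
t = ( -(n_bar-1) : 0 )' ;                          %  put the most recent bar at t = 0
A = t .^ ( 0 : p ) ;                               %  Vandermonde design matrix of the quadratic fit
pinv_A = pinv( A ) ;                               %  least squares fit: coefficients c = pinv_A * y
slope_filter = pinv_A( 2 , : )' ;                  %  f(t) = c1 + c2*t + c3*t^2, so the slope at t = 0 is c2

%  then, as in the loop above, the k1 slope estimate would be
%  k1 = price( ii-(n_bar-1) : ii ) * slope_filter ;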

Friday, 25 September 2015

Runge-Kutta Methods

As stated in my previous post I have been focusing on getting some meaningful features as possible inputs to my machine learning based trading system, and one of the possible ideas that has caught my attention is using Runge-Kutta methods to project ( otherwise known as "guessing" ) future price evolution. I have used this sort of approach before in the construction of my perfect oscillator ( links here, here, here and here ) but I deem this approach unsuitable for the type of "guessing" I want to do for my mfe-mae indicator as I need to "guess" rolling maximum highs and minimum lows, which quite often are actually flat for consecutive price bars. In fact, what I want to do is predict/guess upper and lower bounds for future price evolution, which might actually turn out to be a more tractable problem than predicting price itself.

The above wiki link to Runge-Kutta methods is a pretty dense mathematical read and readers may be wondering how approximation of solutions to ordinary differential equations can possibly relate to my stated aim; however, the following links visualise Runge-Kutta in an accessible way:
Readers should hopefully see that what Runge-Kutta basically does is compute a "future" position given a starting value and a function to calculate slopes. When it comes to prices our starting value can be taken to be the most recent price/OHLC/bid/offer/tick in the time series, and whilst I don't possess a mathematical function to exactly calculate the future evolution of price slopes, I can use my Savitzky-Golay filter convolution code to approximate these slopes. The final icing on this cake is the weighting scheme for the Runge-Kutta method, as shown in the buttersblog link above, i.e.
y_{n+1} = y_n + b_1 k_1 + b_2 k_2 + b_3 k_3 + b_4 k_4 + \cdots + b_s k_s
which is just linear regression on the k slope values with the most recent price set as the intercept term! Hopefully this will be a useful way to generate features for my conditional restricted Boltzmann machine, and if I use regularized linear regression I might finally be able to use my particle swarm optimisation code.
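To make that last remark concrete, below is a minimal sketch of ridge-regularized linear regression of realised one-bar-ahead price changes on the k slope values, so that the most recent price enters with a fixed unit coefficient as in the formula above; the feature matrix, targets and penalty value are all illustrative placeholders rather than anything I have actually run.

%  ridge regression sketch: K is an (N x 4) matrix of [ k1 k2 k3 k4 ] features and
%  y is an (N x 1) vector of realised price(ii+1) - price(ii) changes; placeholders here
N = 500 ;
K = randn( N , 4 ) ;                               %  placeholder k slope features
y = randn( N , 1 ) ;                               %  placeholder 1 bar ahead price changes

lambda = 0.1 ;                                     %  ridge penalty, to be chosen by cross validation
b = ( K' * K .+ lambda .* eye( 4 ) ) \ ( K' * y ) ;

%  the fitted b plays the role of the b1...b4 weights, so the projection would be
%  projected_price = price_now .+ K_now * b ;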

More in due course.

Thursday, 10 September 2015

Recent reading

In my last post I mentioned that I was going away for the summer, but now I'm back. During the summer I didn't get to do as much reading etc. as I had hoped, but I did manage to play around with the Rssa package for singular spectrum analysis and this is still an ongoing investigation. I also briefly looked at independent component analysis and the FastICA package.

The common theme of the above is the extraction of meaningful time series features, and this general area is what I will be looking into for my next set of posts. 

Wednesday, 24 June 2015

Results of Permutation tests on Cauchy-Schwarz

Following on from my previous post, I have to say that the results have been quite disappointing - in all the tests I have conducted so far I have been unable to reject the null hypothesis. These tests are still on-going and I may yet find some gold, but it is looking increasingly unlikely and so I won't bore readers with the results of these negative tests. Unless something drastically changes I am planning on abandoning the Cauchy-Schwarz matching algorithm, at least in its current form.

For those who are interested, the test I am conducting is the data mining bias adjusted random permutation test, which is the position_vector_permutation_test.cc file on my Data Snooping Tests page on Github.

Soon I shall be going away for the summer and although I will continue working on a borrowed laptop, I am not sure what internet access I will have and so this post may be my last substantive post until some time in September. I am taking a lot of reading with me, all my Octave code and data and I have loaded up my R installation with lots of interesting packages to play around with some new ideas, so hopefully there will be some interesting new things to post about in the autumn.

Tuesday, 23 June 2015

Cauchy-Schwarz Matching Algo Revisited: More Tests

Some time ago I blogged about my Cauchy-Schwarz inequality inspired matching algorithm and some tests of it here and here. More recently I have come across a nice paper about developing and back testing systematic trading strategies here, courtesy of the quantnews.com aggregating site, and, being motivated by the exhortation in said paper to conduct hypothesis driven development and separate evaluation of each component of a strategy, I have decided to perform some more tests of the matching algorithm.

The above mentioned tests were of the effect size of differences in means between random matches of price and algorithm matched prices for the 5 bars following a test point, with the test statistic being the Cauchy-Schwarz value itself. This was intended to be a test of the similarity of the evolution of price after any particular test point. However, a more pertinent test is whether this similarity can be exploited for profit, and doubly so since I intend the matching algorithm to select training examples for my machine learning development of a trading system. If there is no potential to extract profit from the basic selection of matched training examples, it would be naive to expect any machine learning algorithm to somehow magic such profit from these same examples.

The first (set) of test(s) I have in mind is a simple Monte Carlo Permutation test, which will be the subject of my next post.
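To give a flavour of what such a test involves, here is a minimal Octave sketch of a Monte Carlo permutation test of matched-example forward returns against randomly selected examples; the choice of mean return as the test statistic and all the variable contents are illustrative assumptions only, and this is a generic illustration rather than the actual test I intend to run.

%  Monte Carlo permutation test sketch: do algorithm matched points lead to better
%  5 bar forward returns than randomly chosen points? All values are placeholders.
all_fwd_returns = randn( 1000 , 1 ) .* 0.01 ;      %  forward returns after every possible test point
matched_ix = ( 1 : 50 )' ;                         %  pretend these are the algorithm matched points
actual_stat = mean( all_fwd_returns( matched_ix ) ) ;

iters = 5000 ;
null_stats = zeros( iters , 1 ) ;
for ii = 1 : iters
  rand_ix = randperm( length( all_fwd_returns ) ) ;
  rand_ix = rand_ix( 1 : length( matched_ix ) ) ;  %  a random selection of the same size
  null_stats( ii ) = mean( all_fwd_returns( rand_ix ) ) ;
endfor

%  one sided p value: how often does a random selection do at least as well?
p_value = mean( null_stats >= actual_stat ) ;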

Monday, 25 May 2015

Accounting for Data Mining Bias

I've recently subscribed to this forexfactory thread, which is about using machine learning to develop trading systems, and the subject of data mining/data dredging has come up. This post is a short description of how mining/dredging can be accounted for, but readers should be aware that what follows is not a precise description of any particular test with accompanying code, but rather a hand-wavy explanation of a general but important principle.

Suppose one has conducted a whole series of tests on a particular set of data with a view to developing a trading system. The precise nature of this is not really important - it could be some machine learning approach, a grid search of moving average crossover parameter values, a series of elimination contests to find the "best" indicator, or whatever. While doing this we keep a record of all our results and when the search is complete we plot a histogram thus:-
which is the result of 160,000 distinct tests plotted in 200 bins. Naturally, having done this, we select the best system found, represented by the vertical cursor line at x-axis value 5.2. This 5.2 is our test metric of choice, be it Sharpe ratio, win to loss ratio or whatever. But then we ask ourselves whether we have truly found a world beating system, or whether this discovery is just the result of data mining.

To test this, we create a random set of data which has the same attributes as the real data used above. The random data can be obtained by Bootstrapping, random permutation, application of a Markov chain with state spaces derived from the original data etc. The actual choice of which to use will depend on the null hypothesis one wants to test. Having obtained our random data set, we then perform the exact same search as we did above and record the test metric of best performing system found on this random data set. We repeat this 160,000 times and then plot a histogram ( in red ) of the best test results over all these random data sets:-
We find that this random set has a mean value of 0.5 and a standard deviation of 0.2. What this red test set represents is the ability/power of our machine learning algo, grid search criteria etc. to uncover "good" systems in even meaningless data, where all relationships are, in effect, spurious and contain no predictive ability.

We must now suppose that this power to uncover spurious relationships also exists in our original set of tests on the real data, and it must be accounted for. For purposes of illustration I'm going to take a naive approach: add 4 times the standard deviation to the mean of the red distribution and shift our original green distribution to the right by an amount equal to this sum, a value of 1.3, thus:-
We now see that our original test metric value of 5.2, which was well out in the tail of the non-shifted green distribution, is comfortably within the tail of the shifted distribution, and depending on our choice of p-value etc. we may not be able to reject our null hypothesis, whatever it may have been.

As I warned readers above, this is not supposed to be a mathematically rigorous exposition of how to account for data mining bias, but rather an illustrative explanation of the principle(s) behind accounting for it. The main takeaway is that the red distribution, whatever it is for the test(s) you are running, needs to be generated, and then the tests on real data need to be appropriately discounted by the relevant measures taken from the red distribution before any inferences are drawn about the efficacy of the results on the real data.
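For anyone who wants to see the principle in code, the following Octave sketch builds a toy version of the red distribution: it runs the same naive grid search for the best Sharpe-like metric on the real data and then on many random permutations of that data, recording the best result found each time. The momentum rule, the lookback grid and all the numbers are deliberately simplistic placeholders and not a recommendation of any particular test.

rets = randn( 1000 , 1 ) .* 0.01 ;                 %  placeholder "real" returns
lookbacks = 2 : 50 ;                               %  the parameter grid being mined

%  the best metric the grid search finds on the real data
real_best = -Inf ;
for lookback = lookbacks
  signal = sign( filter( ones( lookback , 1 ) ./ lookback , 1 , rets ) ) ;   %  a simple momentum rule
  strat_rets = signal( 1 : end-1 ) .* rets( 2 : end ) ;                      %  yesterday's signal times today's return
  real_best = max( real_best , mean( strat_rets ) / std( strat_rets ) ) ;
endfor

%  the "red" distribution: the best the same search can do on meaningless data
iters = 1000 ;
null_best = zeros( iters , 1 ) ;
for ii = 1 : iters
  shuffled = rets( randperm( length( rets ) ) ) ;  %  random permutation destroys any real structure
  best = -Inf ;
  for lookback = lookbacks
    signal = sign( filter( ones( lookback , 1 ) ./ lookback , 1 , shuffled ) ) ;
    strat_rets = signal( 1 : end-1 ) .* shuffled( 2 : end ) ;
    best = max( best , mean( strat_rets ) / std( strat_rets ) ) ;
  endfor
  null_best( ii ) = best ;
endfor

%  discount the real result by what mining alone achieves on noise
p_value = mean( null_best >= real_best ) ;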

For more information about data mining tests readers might care to visit a Github repository I have created, which contains code and some academic papers on the subject.