Thursday, 11 December 2014

Following on from the previous post, the test outlined there wasn't very satisfactory, which I put down to the fact that the Sigmoid transformation of the raw MFE/MAE indicator values is not amenable to the standard deviation being used as a meaningful measure. Instead, I changed the test to one based on the standard error of the mean, an example screen shot of which is shown below:-
The top pane shows the long version of the indicator and the bottom pane the short version. In each there are upper and lower limits of the sample standard error of the mean above and below the population mean (the mean of all values of the indicator), along with the cumulative mean value of the top N matches shown along the x-axis. In this particular example it can be seen that around the 170-180 sample mark the cumulative mean moves inside the standard error limits, never to leave them again. The meaning I ascribe to this is that, for this example, there is no value to be gained from using more than approximately 180 samples for machine learning purposes, as using more samples would be akin to training on all available data, which makes the use of my Cauchy-Schwarz matching algo superfluous. I repeated the above on all instances of sigmoid transformed and untransformed MFE/MAE indicator values to get an average of 325 samples for the transformed, and an average of 446 samples for the untransformed indicator values across the 4 major forex pairs. Based on this, I have decided to use the top 450 Cauchy-Schwarz matches for training purposes, which has ramifications for model complexity that will be discussed shortly.
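To make the mechanics of this test concrete, below is a minimal Octave sketch of the standard error bands and the cumulative mean; the indicator and match_ix variables are synthetic stand-ins for the real indicator history and the ordered Cauchy-Schwarz match indices, so this illustrates the calculation only and is not the exact code used to produce the screen shot.

% minimal sketch of the standard error of the mean test
% "indicator" and "match_ix" are synthetic stand-ins purely for illustration
indicator = 0.5 + 0.4 * rand( 5000, 1 ) ;                  % all (long) MFE/MAE indicator values
match_ix = randperm( 5000, 450 )' ;                        % indices of the top 450 matches, best first
pop_mean = mean( indicator ) ;                             % population mean of all indicator values
pop_std = std( indicator ) ;                               % population standard deviation
N = numel( match_ix ) ;
cum_mean = cumsum( indicator( match_ix ) ) ./ ( 1 : N )' ; % cumulative mean of the top N matches
sem = pop_std ./ sqrt( ( 1 : N )' ) ;                      % standard error of the mean for each N
plot( ( 1 : N )', [ cum_mean, pop_mean + sem, pop_mean - sem ] ) ;
legend( 'cumulative mean of top N matches', '+1 std error', '-1 std error' ) ;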
Returning to the above screen shot, the inset (figure 2) shows the price bars that immediately follow the price bar for which the main screen shows the top N matches. Looking at the extreme left of the main screen it can be seen that the lower pane, short indicator has an almost maximum reading of 1, whilst the upper pane, long indicator shows a value of approx. 0.27, which is not much above the global minimum for this indicator and well below the 0.5 neutral level. This strongly suggests a short position, and looking at the inset figure it can be seen that over the 3 days following the extreme left matched bar a short position was indeed the best position to hold. This is a pattern that seems to present itself frequently during visual inspection of charts, although I am unable to quantify this in any way.
On the matter of model complexity alluded to above, I found the Learning From Data course I have recently completed on the edX platform to be very enlightening, particularly the concept of the VC dimension, which is nicely explained in the Learning From Data Video library. I'll leave it to interested readers to follow the links, but the big take away for me is that using 450 samples as described above implies that my final machine learning model must have an upper bound of approximately 45 on the VC dimension, which in turn implies a maximum of 45 weights in the neural net. This is a design constraint that I will discuss in a future post.
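For what it's worth, the arithmetic behind that design constraint is just the usual rule of thumb from the course, namely that the number of training examples should be roughly ten times the VC dimension, combined with the simplifying assumption that the VC dimension of the net is roughly its number of free weights:

% rule of thumb: number of training examples N should be at least ~10 x the VC dimension
N = 450 ;                        % top Cauchy-Schwarz matches used as training data
d_vc_bound = floor( N / 10 )     % implied upper bound on the VC dimension, i.e. 45
% equating the VC dimension with the number of free weights caps the net at roughly 45 weights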
"Trading is statistics and time series analysis." This blog details my progress in developing a systematic trading system for use on the futures and forex markets, with discussion of the various indicators and other inputs used in the creation of the system. Also discussed are some of the issues/problems encountered during this development process. Within the blog posts there are links to other web pages that are/have been useful to me.
Pages
▼
Thursday, 11 December 2014
Wednesday, 26 November 2014
Test of the MFE/MAE Indicator
Continuing from my last post, wherein I stated I was going to conduct a more pertinent statistical test of the returns of the bar(s) immediately following the N best bars matched by the Cauchy-Schwarz matching algorithm in the price history, readers may recall that the basic premise behind this algorithm is that by matching current price action to the N best matches in the past, the price action that followed those matches can be used to infer what will occur after the current price action. However, rather than test the price action directly I have decided to apply the test to the MFE/MAE indicator. There are several reasons for this, which are enumerated below.
- I intend to use the indicator as the target function for future Neural net training
- the indicator represents a reward to risk ratio, which indirectly reflects price action itself, but without the noise of said action
- this reward to risk ratio is of much more direct concern, from a trading perspective, than accurately predicting price
- since the indicator is now included as a feature in the matching algorithm, testing the indicator is, very indirectly, a test of the matching algorithm too
This shows two sampling distributions of the mean for Long MFE/MAE indicator values > 0.5, the upper pane for sample sizes of 20 and the lower pane for sample sizes of 75. For simplicity I shall only discuss the Long > 0.5 version of the indicator, but everything that follows applies equally to the Short version. As expected the upper pane shows greater variance, and for the envisioned test a whole series of these sampling distributions will be produced for different sample sizes. The way I intend it to work is as follows:
- take a single bar in the history and see what the value of the MFE/MAE indicator is 3 bars later (assume > 0.5 for this exposition, so we compare to the long sampling distributions only)
- get the top 20 matched bars for the above selected bar and the corresponding 20 indicator values for 3 bars later and take the mean of these 20 indicator values
- check if this mean falls within the sampling distribution of the mean of 20, as shown in the upper pane above by the vertical black line at 0.8 on the x axis. If it does fall within the sampling distribution, we accept the null hypothesis that the future indicator values of the 20 best matches in history and the value of the indicator after the bar to be matched come from the same distribution
- repeat the immediately preceding step for means of 21, 22, ... etc. until such time as the null hypothesis can be rejected, shown in the lower pane above. At this point, we then declare an upper bound on the historical number of matches for the bar to be predicted
where the cyan and red lines are the +/- 2 standard deviations above/below a notional mean value for the whole distribution of approximately 0.85, and the chart can be considered to be a type of control chart. The upper and lower control lines converge towards the right, reflecting the decreasing variance of increasingly large N sample means, as shown in the first chart above. The green line represents the cumulative N sample mean of the best N historical matches' future values. I have shown it as decreasing as it is to be expected that as more N matches are included, the greater the chance that incorrect matches, unexpected price reversals etc. will be caught up in this mean calculation, resulting in the mean value moving into the left tail of the sampling distribution. This effect combines with the shrinking variance to reach a critical point (rejection of the null hypothesis) at which the green line exits below the lower control line.
The purpose of all the above is to provide a principled manner in which to choose the number N of matches from the Cauchy-Schwarz matching algorithm to supply instances of training data for the envisioned neural net training. An incidental benefit of this approach is that it is indirectly a hypothesis test of the fundamental assumption underlying the matching algorithm, namely that past price action has predictive ability for future price action, and furthermore, it is a test of the MFE/MAE indicator. The results of these tests will be discussed in a future post; a sketch of the test mechanics is given below.
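The sketch below illustrates the intended mechanics in Octave; the indicator and matched_future vectors are randomly generated stand-ins (the real inputs would be the full indicator history and the future indicator values of the ordered matches), so it demonstrates the control chart logic only and not the production test.

% sketch of the N matches hypothesis test using the lower 2 standard deviation
% control line of the sampling distribution of the mean
indicator = 0.5 + 0.5 * rand( 5000, 1 ) ;       % stand-in for all Long > 0.5 indicator values
matched_future = 0.5 + 0.5 * rand( 300, 1 ) ;   % stand-in for the future indicator values of the ordered matches
pop_mean = mean( indicator ) ;
pop_std = std( indicator ) ;
reject_at = NaN ;                               % will hold the N at which the null hypothesis is rejected
for N = 20 : numel( matched_future )
  cum_mean = mean( matched_future( 1 : N ) ) ;           % cumulative mean of the best N matches
  lower_control = pop_mean - 2.0 * pop_std / sqrt( N ) ; % lower control line, shrinking with N
  if ( cum_mean < lower_control )
    reject_at = N ;                             % upper bound on the number of matches worth using
    break ;
  endif
endfor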
Wednesday, 12 November 2014
First Use for the MFE/MAE Indicator
This first use is as an input to my Cauchy-Schwarz matching algorithm, previous posts about which can be read here, here and here. The screen shot below shows what I would characterise as a "good" set of matches:
The top left pane shows the original section of the price series to be matched, and the panes labelled #1, #5, etc. are the best match, 5th best match and so on respectively. The last 3 rightmost bars in each pane are "future" price bars, i.e. the 4th bar in from the right is the target bar that is being matched, matched over all the bars to the left of (in the past of) this target bar. I consider the above to be a set of "good" matches because, for the "future" bars of the #1 through #25 matches:
- if one considers the logic of the MFE/MAE indicator, each pane gives an indicator reading of "long," which agrees with the original "future" bars
- similarly the mae (maximum adverse excursion) occurs on the day immediately following the matched day
- the mfe (maximum favourable excursion) occurs on the 3rd "future" bar, with the slight exception of pane #10
- the marked to market returns of an entry at the open of the 1st "future" bar to the close of the 3rd "future" bar all show a profit, as does the original pane
In the above linked posts the test statistic used to judge the predictive efficacy of the matching algorithm was effect size. However, I think a more pertinent test statistic to use would be the average bar return over the bars immediately following a matched bar, and a discussion of this will be the subject of my next post.
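As a rough Octave sketch of what that statistic would look like (synthetic close prices and match indices are used as stand-ins; the real test would use the bar indices returned by the matching algorithm):

% average return over the 3 bars immediately following each matched bar
close_p = cumsum( randn( 1000, 1 ) ) + 100 ;   % stand-in close price series
match_ix = ( 100 : 50 : 900 )' ;               % stand-in indices of matched bars
fwd_ret = ( close_p( match_ix + 3 ) - close_p( match_ix ) ) ./ close_p( match_ix ) ;
avg_fwd_ret = mean( fwd_ret )                  % the proposed test statistic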
Wednesday, 5 November 2014
A New MFE/MAE Indicator.
After stopping my investigation of tools for spectral analysis over the last few weeks I have been doing another mooc, this time Learning from Data, and also working on the idea of one of my earlier posts.
In the above linked post there is a video showing the idea as a "paint bar" study. However, I thought it would be a good idea to render it as an indicator, the C++ Octave .oct code for which is shown in the code box below.
DEFUN_DLD ( adjustable_mfe_mae_from_open_indicator, args, nargout,
"-*- texinfo -*-\n\
@deftypefn {Function File} {} adjustable_mfe_mae_from_open_indicator (@var{open,high,low,close,lookback_length})\n\
This function takes four input series for the OHLC and a value for lookback length. The main outputs are\n\
two indicators, long and short, that show the ratio of the MFE over the MAE from the open of the specified\n\
lookback in the past. The indicators are normalised to the range 0 to 1 by a sigmoid function and a MFE/MAE\n\
ratio of 1:1 is shifted in the sigmoid function to give a 'neutral' indicator reading of 0.5. A third output\n\
is the max high - min low range over the lookback_length normalised by the range of the daily support and\n\
resistance levels S1 and R1 calculated for the first bar of the lookback period. This is also normalised to\n\
give a reading of 0.5 in the sigmoid function if the ratio is 1:1. The point of this third output is to give\n\
some relative scale to the unitless MFE/MAE ratio and to act as a measure of strength or importance of the\n\
MFE/MAE ratio.\n\
@end deftypefn" )
{
octave_value_list retval_list ;
int nargin = args.length () ;
// check the input arguments
if ( nargin != 5 )
{
error ( "Invalid arguments. Arguments are price series for open, high, low and close and value for lookback length." ) ;
return retval_list ;
}
if ( args(4).length () != 1 )
{
error ( "Invalid argument. Argument 5 is a scalar value for the lookback length." ) ;
return retval_list ;
}
int lookback_length = args(4).int_value() ;
if ( args(0).length () < lookback_length )
{
error ( "Invalid argument lengths. Argument lengths for open, high, low and close vectors should be >= lookback length." ) ;
return retval_list ;
}
if ( args(1).length () != args(0).length () )
{
error ( "Invalid argument lengths. Argument lengths for open, high, low and close vectors should be equal." ) ;
return retval_list ;
}
if ( args(2).length () != args(0).length () )
{
error ( "Invalid argument lengths. Argument lengths for open, high, low and close vectors should be equal." ) ;
return retval_list ;
}
if ( args(3).length () != args(0).length () )
{
error ( "Invalid argument lengths. Argument lengths for open, high, low and close vectors should be equal." ) ;
return retval_list ;
}
if (error_state)
{
error ( "Invalid arguments. Arguments are price series for open, high, low and close and value for lookback length." ) ;
return retval_list ;
}
// end of input checking
// inputs
ColumnVector open = args(0).column_vector_value () ;
ColumnVector high = args(1).column_vector_value () ;
ColumnVector low = args(2).column_vector_value () ;
ColumnVector close = args(3).column_vector_value () ;
// outputs
ColumnVector long_mfe_mae = args(0).column_vector_value () ;
ColumnVector short_mfe_mae = args(0).column_vector_value () ;
ColumnVector range = args(0).column_vector_value () ;
// variables
double max_high = *std::max_element( &high(0), &high( lookback_length ) ) ;
double min_low = *std::min_element( &low(0), &low( lookback_length ) ) ;
double pivot_point = ( high(0) + low(0) + close(0) ) / 3.0 ;
double s1 = 2.0 * pivot_point - high(0) ;
double r1 = 2.0 * pivot_point - low(0) ;
for ( octave_idx_type ii (0) ; ii < lookback_length ; ii++ ) // initial ii loop
{
// long_mfe_mae
if ( open(0) > min_low ) // the "normal" situation
{
long_mfe_mae(ii) = 1.0 / ( 1.0 + exp( -( ( max_high - open(0) ) / ( open(0) - min_low ) - 1.0 ) ) ) ;
}
else if ( open(0) == min_low )
{
long_mfe_mae(ii) = 1.0 ;
}
else
{
long_mfe_mae(ii) = 0.5 ;
}
// short_mfe_mae
if ( open(0) < max_high ) // the "normal" situation
{
short_mfe_mae(ii) = 1.0 / ( 1.0 + exp( -( ( open(0) - min_low ) / ( max_high - open(0) ) - 1.0 ) ) ) ;
}
else if ( open(0) == max_high )
{
short_mfe_mae(ii) = 1.0 ;
}
else
{
short_mfe_mae(ii) = 0.5 ;
}
range(ii) = 1.0 / ( 1.0 + exp( -( ( max_high - min_low ) / ( r1 - s1 ) - 1.0 ) ) ) ;
} // end of initial ii loop
for ( octave_idx_type ii ( lookback_length ) ; ii < args(0).length() ; ii++ ) // main ii loop
{
// assign variable values
max_high = *std::max_element( &high( ii - lookback_length + 1 ), &high( ii + 1 ) ) ;
min_low = *std::min_element( &low( ii - lookback_length + 1 ), &low( ii + 1 ) ) ;
pivot_point = ( high(ii-lookback_length) + low(ii-lookback_length) + close(ii-lookback_length) ) / 3.0 ;
s1 = 2.0 * pivot_point - high(ii-lookback_length) ;
r1 = 2.0 * pivot_point - low(ii-lookback_length) ;
// long_mfe_mae
if ( open( ii - lookback_length + 1 ) > min_low && open( ii - lookback_length + 1 ) < max_high ) // the "normal" situation
{
long_mfe_mae(ii) = 1.0 / ( 1.0 + exp( -( ( max_high - open( ii - lookback_length + 1 ) ) / ( open( ii - lookback_length + 1 ) - min_low ) - 1.0 ) ) ) ;
}
else if ( open( ii - lookback_length + 1 ) == min_low )
{
long_mfe_mae(ii) = 1.0 ;
}
else
{
long_mfe_mae(ii) = 0.0 ;
}
// short_mfe_mae
if ( open( ii - lookback_length + 1 ) > min_low && open( ii - lookback_length + 1 ) < max_high ) // the "normal" situation
{
short_mfe_mae(ii) = 1.0 / ( 1.0 + exp( -( ( open( ii - lookback_length + 1 ) - min_low ) / ( max_high - open( ii - lookback_length + 1 ) ) - 1.0 ) ) ) ;
}
else if ( open( ii - lookback_length + 1 ) == max_high )
{
short_mfe_mae(ii) = 1.0 ;
}
else
{
short_mfe_mae(ii) = 0.0 ;
}
range(ii) = 1.0 / ( 1.0 + exp( -( ( max_high - min_low ) / ( r1 - s1 ) - 1.0 ) ) ) ;
} // end of main ii loop
retval_list(2) = range ;
retval_list(1) = short_mfe_mae ;
retval_list(0) = long_mfe_mae ;
return retval_list ;
} // end of function
The way to interpret this is as follows, with a short usage sketch given after the list:
- if the "long" indicator reading is above 0.5, go long
- if the "short" is above 0.5, go short
- if both are below 0.5, go flat
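Below is a minimal usage sketch of the above, assuming the code has been compiled with mkoctfile (the source file name is hypothetical) and using synthetic OHLC data purely for illustration; the position logic simply encodes the three rules just listed.

% compile once from the Octave prompt with: mkoctfile adjustable_mfe_mae_from_open_indicator.cc
% synthetic OHLC series purely for illustration
n = 500 ;
close_p = cumsum( randn( n, 1 ) ) + 100 ;
open_p = [ 100 ; close_p( 1 : end - 1 ) ] ;
high_p = max( open_p, close_p ) + 0.1 * abs( randn( n, 1 ) ) ;
low_p = min( open_p, close_p ) - 0.1 * abs( randn( n, 1 ) ) ;
lookback = 3 ;
[ long_ind, short_ind, range_ind ] = adjustable_mfe_mae_from_open_indicator( open_p, high_p, low_p, close_p, lookback ) ;
position = zeros( n, 1 ) ;
position( long_ind > 0.5 ) = 1 ;    % go long
position( short_ind > 0.5 ) = -1 ;  % go short; both below 0.5 leaves the position flat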
The yellow equity curve is the cumulative, close to close, tick returns of a buy and hold strategy, the blue is the return going flat when indicated, and the red maintaining the previous position when flat is indicated. Not much to write home about. However, this second chart shows the return when one has the benefit of the "peek into the future" as discussed in my earlier post.
The colours of the curves are as before except for the addition of the green equity curve, which is the cumulative, vwap value to vwap value tick returns, a simple representation of what an equity curve with realistic slippage might look like. This second set of equity curves shows the promise of what could be achievable if a neural net can be trained to accurately predict future values of the above indicator. More in an upcoming post.
Tuesday, 23 September 2014
High Resolution Tools for Spectral Analysis - Update
Following on from my initial enthusiasm for the code on the High Resolution Tools for Spectral Analysis page, I have to say that I have been unable to get the code performing as I would like for my intended application to price time series.
My original intent was to use the zero crossing period estimation function, the subject of my last few posts, to get a rough idea of the dominant cycle period and then use the most recent data in a rolling window of this length as input to the high resolution code. This approach, however, ran into problems.
Firstly, windows of just the dominant cycle length (approximately 10 to 30 data points only) would lead to all sorts of errors being thrown from the toolkit functions as well as core Octave functions, such as divide by zero warnings and cryptic error messages that even now I don't understand. My best guess here is that the amount of data available in such short windows is simply insufficient for the algorithm to work, in much the same way as the Fast Fourier transform may fail to work if given too little data that is not a power of 2 in length. It might be possible to substantially rewrite the relevant functions, but my understanding of the algorithm and the inner workings of Octave means this is well beyond my pay grade.
My second approach was to simply extend the amount of data available by using the Octave repmat function on the windowed data so that all the above errors disappeared. This had very hit and miss results - sometimes they were very accurate, other times just fair to middling, and on occasion just way off target. I suspect here that the problem is the introduction of signal artifacts via the use of the repmat function, which results in Aliasing of the underlying signal.
As a result I shall not continue with investigating this toolbox for now. Although I only investigated the use of the me.m function (there are other functions available) I feel that my time at the moment can be used more productively.
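For the record, the data extension that was tried amounts to nothing more complicated than the sketch below (the window length and tiling factor are illustrative only), and it is this tiling that introduced the periodic artifacts mentioned above:

% tile the most recent dominant-cycle-length window before passing it to the spectral code
detrended = randn( 500, 1 ) ;                  % stand-in for the roofing filtered price
period = 21 ;                                  % stand-in dominant cycle estimate
window = detrended( end - period + 1 : end ) ; % most recent window of data
extended = repmat( window, 4, 1 ) ;            % 4 copies laid end to end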
Friday, 19 September 2014
Inputs For Zero Crossing Function
Continuing with the work on the zero crossing function, for now I have decided to use a "roofing filter" to detrend and smooth the raw price data and then apply some simple cycle extractor code
DEFUN_DLD ( cycle_extractor, args, nargout,
"-*- texinfo -*-\n\
@deftypefn {Function File} {} cycle_extractor (@var{price})\n\
This function takes a single price vector input and outputs\n\
a vector of the cycle extracted from the input.\n\
@end deftypefn" )
{
octave_value_list retval_list ;
int nargin = args.length () ;
int vec_length = args(0).length () ;
// check the input argument
if ( nargin != 1 )
{
error ("Invalid argument. Input is a single price vector.") ;
return retval_list ;
}
if ( vec_length < 50 )
{
error ("Invalid argument. Input is a single price vector.") ;
return retval_list ;
}
if ( error_state )
{
error ("Invalid argument. Input is a single price vector.") ;
return retval_list ;
}
// end of input checking
// inputs
ColumnVector price = args(0).column_vector_value () ;
ColumnVector cycle = args(0).column_vector_value () ; cycle(0) = 0.0 ; cycle(1) = 0.0 ;
double alpha = 0.07 ;
for ( octave_idx_type ii (2) ; ii < vec_length ; ii ++ ) // Start the main loop
{
cycle(ii) = ( 1.0 - 0.5 * alpha ) * ( 1.0 - 0.5 * alpha ) * ( price(ii) - 2.0 * price(ii-1) + price(ii-2) ) + 2.0 * ( 1.0 - alpha ) * cycle(ii-1) - ( 1.0 - alpha ) * ( 1.0 - alpha ) * cycle(ii-2) ;
}
retval_list(0) = cycle ;
return retval_list ;
} // end of function
to create the inphase input for the zero cross function. For the imaginary or quadrature input I have decided to use the simple trigonometric identity of the derivative of a sine wave being the cosine (i.e. a 90 degree phase lead), easily implemented using the bar to bar difference of the sine wave, or in our case the above simple cycle. I might yet change this, but for now it seems to work. The upper pane in the screen shot below shows the raw price in blue, the extracted cycle in black and the cycle derivative in red. It can be seen that most of the time the cycle (inphase) either leads or is approximately in sync with the price, whilst the quadrature is nicely spaced between the zero crossings of the cycle.
The lower pane shows the measured periods as in the previous posts. The zero crossing measured periods are now a bit more erratic than before, but some simple tests show that, on average, the zero crossing measurement is consistently closer to the real period than the sine wave indicator period; however, this improvement cannot be said to be statistically significant at any p-value.
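A minimal sketch of how these pieces are wired together is shown below; the roofing filter step is not reproduced (a synthetic detrended series stands in for it), and it assumes the cycle extractor code above and the zero crossing function from my other posts have been compiled with mkoctfile.

% stand-in for the roofing filtered, detrended price
detrended = sin( 2 * pi * ( 0 : 499 )' / 20 ) + 0.1 * randn( 500, 1 ) ;
inphase = cycle_extractor( detrended ) ;       % the simple cycle code above
quadrature = [ 0 ; diff( inphase ) ] ;         % bar to bar difference as the 90 degree lead
periods = iq_zc_period_indicator_v1( inphase, quadrature ) ; % zero crossing period measurement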
Now I would like to introduce some other work I've been doing recently. For many years I have wanted to measure the instantaneous period using Maximum entropy spectral estimation, which has been popularised over the years by John Ehlers. Unfortunately I had never found any code in the public domain which I could use or adapt, until now that is. My discovery is High Resolution Tools For Spectral Analysis. This might not actually be the same as Ehlers' MESA, but it certainly covers the same general area and, joy of joys, it has freely downloadable MATLAB code along with an accessible description of the theoretical background and some academic papers.
As is usual in situations like this, I have had to refactor some of the MATLAB code so that it can run in Octave without error messages; specifically this non Octave function
function [msg,A,B,C,D] = abcdchk(A,B,C,D)
%ABCDCHK Checks dimensional consistency of A,B,C,D matrices.
% ERROR(ABCDCHK(A,B,C,D)) checks that the dimensions of A,B,C,D
% are consistent for a linear, time-invariant system model.
% An error occurs if the nonzero dimensions are not consistent.
%
% [MSG,A,B,C,D] = ABCDCHK(A,B,C,D) also alters the dimensions
% any 0-by-0 empty matrices to make them consistent with the others.
% Copyright 1984-2001 The MathWorks, Inc.
% $Revision: 1.21 $ $Date: 2001/04/15 11:59:11 $
if nargin < 4, D = []; end
if nargin < 3, C = []; end
if nargin < 2, B = []; end
if nargin < 1, A = []; end
[ma,na] = size(A);
[mb,nb] = size(B);
[mc,nc] = size(C);
[md,nd] = size(D);
if mc==0 & nc==0 & (md==0 | na==0)
mc = md; nc = na; C = zeros(mc,nc);
end
if mb==0 & nb==0 & (ma==0 | nd==0)
mb = ma; nb = nd; B = zeros(mb,nb);
end
if md==0 & nd==0 & (mc==0 | nb==0)
md = mc; nd = nb; D = zeros(md,nd);
end
if ma==0 & na==0 & (mb==0 | nc==0)
ma = mb; na = nc; A = zeros(ma,na);
end
if ma~=na & nargin>=1
msg = 'The A matrix must be square';
elseif ma~=mb & nargin>=2
msg = 'The A and B matrices must have the same number of rows.';
elseif na~=nc & nargin>=3
msg = 'The A and C matrices must have the same number of columns.';
elseif md~=mc & nargin>=4
msg = 'The C and D matrices must have the same number of rows.';
elseif nd~=nb & nargin>=4
msg = 'The B and D matrices must have the same number of columns.';
else
msg = '';
end
should be in the loadpath, and in the function file dlsim.m, on line 71, the call to the internal MATLAB function
x = ltitr( a , b , u , x0 ) ;
should be replaced by
x = lsim ( ss (a , b , [ ] , [ ] , -1 ) , u , [ ] , x0 ) ;
(thanks to Lukas for this)
Having made these changes, this much simplified demo.m script
%% SIGNAL = 2 x sinusoids + noise
% Setting up the signal parameters and time history
N = 100 ;
mag0 = 1.8 ; mag1 = 1.5 ; o1 = 1.3 ; mag2 = 2 ; o2 = 1.35 ;
t = 0 : N - 1 ; t = t(:) ;
y0 = mag0 * randn(N,1) ;
y1 = mag1 * exp( 1i * ( o1 * t + 2 * pi * rand) ) ;
y2 = mag2 * exp( 1i * ( o2 * t + 2 * pi * rand )) ;
y = real( y0 .+ y1 .+ y2 ) ;
NN = 2048 ; th = linspace( 0, 2 * pi, NN ) ;
%% setting up filter parameters and the svd of the input-to-state response
thetamid = 1.325 ;
[ A , B ] = rjordan( 5, 0.88 * exp( thetamid * 1i ) ) ;
%% obtaining state statistics
R = dlsim_real( A, B, y' ) ;
%% maximum entropy
spectrum = me( R, A, B, th ) ;
spectrum = spectrum / max( spectrum ) * 1.2 ;
figure(1) ;
subplot(211) ; plot( real(y1), 'r', real(y2), 'm', y, 'b' ) ; legend( 'Underlying Sinewave 1', 'Underlying Sinewave 2', '2 Sinewaves Plus Noise' ) ; title( 'Noisy Price series' ) ;
subplot(212) ; plot( th(400:475), spectrum(400:475) ) ; title( 'The Spectrum - Peaks should be centred at 1.3 and 1.35' ) ;
produces this plot. The upper pane shows two sine waves (red and magenta), very close to each other in period and amplitude, combined with random noise to create price (in blue). Looking at just the blue price, it would seem almost impossible that there are in fact two sine waves hidden within the noise in this short time series, yet the algorithm clearly picks these out, as shown by the two peaks in the spectral plot in the lower pane. Powerful stuff!
It is my intention over the coming days to investigate using the zero crossing function to select the data length prior to using this spectral analysis to determine the instantaneous period of price.
Wednesday, 17 September 2014
Zero Crossing Frequency/Period Measuring Function
Following on from my last post I have now finished coding the first two versions of this function, which are shown in the code boxes below.
DEFUN_DLD ( iq_zc_period_indicator_v1, args, nargout,
"-*- texinfo -*-\n\
@deftypefn {Function File} {} iq_zc_period_indicator_v1 (@var{Realpart,Quadrature})\n\
This function takes two input vectors of the real part and quadrature of price,\n\
the analytic signal, and outputs the period calculated from the zero crossings of\n\
the two input vectors.\n\
@end deftypefn" ) {
octave_value_list retval_list ;
int nargin = args.length () ;
int vec_length = args(0).length () ;
// check the input arguments
if ( nargin != 2 )
{
error ( "Invalid input argument(s). Inputs are two vectors, the realpart and quadrature." ) ;
return retval_list ;
}
if ( vec_length < 50 )
{
error ( "Invalid input argument length(s). Input vectors must be > 50 in length." ) ;
return retval_list ;
}
if ( args(1).length () != vec_length )
{
error ( "Invalid input argument length(s). Input vectors differ in length." ) ;
return retval_list ;
}
if ( error_state )
{
error ( "Invalid input argument(s). Inputs are two vectors, the realpart and quadrature." ) ;
return retval_list ;
}
// end of input checking
// inputs
ColumnVector inphase_1 = args(0).column_vector_value () ;
ColumnVector quadrature_1 = args(1).column_vector_value () ;
ColumnVector f_hat ( vec_length ) ;
ColumnVector inphase_1_state ( vec_length ) ;
ColumnVector quadrature_1_state ( vec_length ) ;
for ( octave_idx_type ii (0) ; ii < 50 ; ii ++ ) // Start the initialising loop
{
f_hat(ii) = 20.0 ; // a default period length
inphase_1_state(ii) = 0.0 ; // assuming exactly on zero cross line
quadrature_1_state(ii) = 0.0 ; // assuming exactly on zero cross line
} // end of initialising ii loop
double latest_i_1_zc_time = 49.0 ; // as the "current" bar is bar no. 49
double previous_i_1_zc_time = 49.0 ; // as the "current" bar is bar no. 49
double latest_q_1_zc_time = 49.0 ; // as the "current" bar is bar no. 49
double previous_q_1_zc_time = 49.0 ; // as the "current" bar is bar no. 49
// get the initial states of the inphase_1 and quadrature_1 vectors. A value of
// 1 is assigned if > 0, -1 for < 0 & 0.0 if == 0
inphase_1_state(49) = inphase_1(49) > 0.0 ? 1.0 : ( inphase_1(49) < 0.0 ? -1.0 : 0.0 ) ;
quadrature_1_state(49) = quadrature_1(49) > 0.0 ? 1.0 : ( quadrature_1(49) < 0.0 ? -1.0 : 0.0 ) ;
for ( octave_idx_type ii (50) ; ii < vec_length ; ii ++ ) // Start the main loop
{
// first, assign current states for the inphase_1 and quadrature_1 vectors on this
// new (ii) bar
inphase_1_state(ii) = inphase_1(ii) >= 0.0 ? 1.0 : ( inphase_1(ii) < 0.0 ? -1.0 : 0.0 ) ;
quadrature_1_state(ii) = quadrature_1(ii) >= 0.0 ? 1.0 : ( quadrature_1(ii) < 0.0 ? -1.0 : 0.0 ) ;
// check for a change in state, which indicates a zero crossing
if ( inphase_1_state(ii) == 1.0 && inphase_1_state(ii-1) == -1.0 )
{ // inphase_1 has crossed up over the zero line
latest_i_1_zc_time = double(ii-1) - inphase_1(ii-1) / ( inphase_1(ii) - inphase_1(ii-1) ) ; // calculate time of zero cross
f_hat(ii) = ( latest_i_1_zc_time - previous_i_1_zc_time ) * 2.0 ; // calculate the period from times of zero crosses
previous_i_1_zc_time = latest_i_1_zc_time ; // update the previous_zc_time to equal the latest_zc_time
}
else if ( inphase_1_state(ii) == -1.0 && inphase_1_state(ii-1) == 1.0 )
{ // inphase_1 has crossed down over the zero line
latest_i_1_zc_time = double(ii-1) + inphase_1(ii-1) / ( inphase_1(ii-1) - inphase_1(ii) ) ; // calculate time of zero cross
f_hat(ii) = ( latest_i_1_zc_time - previous_i_1_zc_time ) * 2.0 ; // calculate the period from times of zero crosses
previous_i_1_zc_time = latest_i_1_zc_time ; // update the previous_zc_time to equal the latest_zc_time
}
else if ( quadrature_1_state(ii) == 1.0 && quadrature_1_state(ii-1) == -1.0 )
{ // quadrature_1 has crossed up over the zero line
latest_q_1_zc_time = double(ii-1) - quadrature_1(ii-1) / ( quadrature_1(ii) - quadrature_1(ii-1) ) ; // calculate time of zero cross
f_hat(ii) = ( latest_q_1_zc_time - previous_q_1_zc_time ) * 2.0 ; // calculate the period from times of zero crosses
previous_q_1_zc_time = latest_q_1_zc_time ; // update the previous_zc_time to equal the latest_zc_time
}
else if ( quadrature_1_state(ii) == -1.0 && quadrature_1_state(ii-1) == 1.0 )
{ // quadrature_1 has crossed down over the zero line
latest_q_1_zc_time = double(ii-1) + quadrature_1(ii-1) / ( quadrature_1(ii-1) - quadrature_1(ii) ) ; // calculate time of zero cross
f_hat(ii) = ( latest_q_1_zc_time - previous_q_1_zc_time ) * 2.0 ; // calculate the period from times of zero crosses
previous_q_1_zc_time = latest_q_1_zc_time ; // update the previous_zc_time to equal the latest_zc_time
}
else
{ // neither the inphase_1 or quadrature_1 have crossed the zeroline on this bar
f_hat(ii) = f_hat(ii-1) ; // set to most recently calculated zero crossing period length
}
} // end of main ii loop
retval_list(0) = f_hat ;
return retval_list ;
} // end of function
and, almost identical to the above
DEFUN_DLD ( iq_zc_period_indicator_v1_1, args, nargout,
"-*- texinfo -*-\n\
@deftypefn {Function File} {} iq_zc_period_indicator_v1_1 (@var{Realpart,Quadrature})\n\
This function takes two input vectors of the real part and quadrature of price,\n\
the analytic signal, and outputs the period calculated from the zero crossings of\n\
the two input vectors.\n\
@end deftypefn" )
{
octave_value_list retval_list ;
int nargin = args.length () ;
int vec_length = args(0).length () ;
// check the input arguments
if ( nargin != 2 )
{
error ( "Invalid input argument(s). Inputs are two vectors, the realpart and quadrature." ) ;
return retval_list ;
}
if ( vec_length < 50 )
{
error ( "Invalid input argument length(s). Input vectors must be > 50 in length." ) ;
return retval_list ;
}
if ( args(1).length () != vec_length )
{
error ( "Invalid input argument length(s). Input vectors differ in length." ) ;
return retval_list ;
}
if ( error_state )
{
error ( "Invalid input argument(s). Inputs are two vectors, the realpart and quadrature." ) ;
return retval_list ;
}
// end of input checking
// inputs
ColumnVector inphase_1 = args(0).column_vector_value () ;
ColumnVector quadrature_1 = args(1).column_vector_value () ;
ColumnVector f_hat ( vec_length ) ;
ColumnVector inphase_1_state ( vec_length ) ;
ColumnVector quadrature_1_state ( vec_length ) ;
for ( octave_idx_type ii (0) ; ii < 50 ; ii ++ ) // Start the initialising loop
{
f_hat(ii) = 20.0 ; // a default period length
inphase_1_state(ii) = 0.0 ; // assuming exactly on zero cross line
quadrature_1_state(ii) = 0.0 ; // assuming exactly on zero cross line
} // end of initialising ii loop
double latest_iq_1_zc_time = 49.0 ; // as the "current" bar is bar no. 49
double previous_iq_1_zc_time = 49.0 ; // as the "current" bar is bar no. 49
// get the initial states of the inphase_1 and quadrature_1 vectors. A value of
// 1 is assigned if > 0, -1 for < 0 & 0.0 if == 0
inphase_1_state(49) = inphase_1(49) > 0.0 ? 1.0 : ( inphase_1(49) < 0.0 ? -1.0 : 0.0 ) ;
quadrature_1_state(49) = quadrature_1(49) > 0.0 ? 1.0 : ( quadrature_1(49) < 0.0 ? -1.0 : 0.0 ) ;
for ( octave_idx_type ii (50) ; ii < vec_length ; ii ++ ) // Start the main loop
{
// first, assign current states for the inphase_1 and quadrature_1 vectors on this
// new (ii) bar
inphase_1_state(ii) = inphase_1(ii) >= 0.0 ? 1.0 : ( inphase_1(ii) < 0.0 ? -1.0 : 0.0 ) ;
quadrature_1_state(ii) = quadrature_1(ii) >= 0.0 ? 1.0 : ( quadrature_1(ii) < 0.0 ? -1.0 : 0.0 ) ;
// check for a change in state, which indicates a zero crossing
if ( inphase_1_state(ii) == 1.0 && inphase_1_state(ii-1) == -1.0 )
{ // inphase_1 has crossed up over the zero line
latest_iq_1_zc_time = double(ii-1) - inphase_1(ii-1) / ( inphase_1(ii) - inphase_1(ii-1) ) ; // calculate time of zero cross
f_hat(ii) = ( latest_iq_1_zc_time - previous_iq_1_zc_time ) * 4.0 ; // calculate the period from times of zero crosses
previous_iq_1_zc_time = latest_iq_1_zc_time ; // update the previous_zc_time to equal the latest_zc_time
}
else if ( inphase_1_state(ii) == -1.0 && inphase_1_state(ii-1) == 1.0 )
{ // inphase_1 has crossed down over the zero line
latest_iq_1_zc_time = double(ii-1) + inphase_1(ii-1) / ( inphase_1(ii-1) - inphase_1(ii) ) ; // calculate time of zero cross
f_hat(ii) = ( latest_iq_1_zc_time - previous_iq_1_zc_time ) * 4.0 ; // calculate the period from times of zero crosses
previous_iq_1_zc_time = latest_iq_1_zc_time ; // update the previous_zc_time to equal the latest_zc_time
}
else if ( quadrature_1_state(ii) == 1.0 && quadrature_1_state(ii-1) == -1.0 )
{ // quadrature_1 has crossed up over the zero line
latest_iq_1_zc_time = double(ii-1) - quadrature_1(ii-1) / ( quadrature_1(ii) - quadrature_1(ii-1) ) ; // calculate time of zero cross
f_hat(ii) = ( latest_iq_1_zc_time - previous_iq_1_zc_time ) * 4.0 ; // calculate the period from times of zero crosses
previous_iq_1_zc_time = latest_iq_1_zc_time ; // update the previous_zc_time to equal the latest_zc_time
}
else if ( quadrature_1_state(ii) == -1.0 && quadrature_1_state(ii-1) == 1.0 )
{ // quadrature_1 has crossed down over the zero line
latest_iq_1_zc_time = double(ii-1) + quadrature_1(ii-1) / ( quadrature_1(ii-1) - quadrature_1(ii) ) ; // calculate time of zero cross
f_hat(ii) = ( latest_iq_1_zc_time - previous_iq_1_zc_time ) * 4.0 ; // calculate the period from times of zero crosses
previous_iq_1_zc_time = latest_iq_1_zc_time ; // update the previous_zc_time to equal the latest_zc_time
}
else
{ // neither the inphase_1 or quadrature_1 have crossed the zeroline on this bar
f_hat(ii) = f_hat(ii-1) ; // set to most recently calculated zero crossing period length
}
} // end of main ii loop
retval_list(0) = f_hat ;
return retval_list ;
} // end of function
The first measures the period every quarter of a cycle, based on a half period measure of zero crossings, and the second every quarter of a cycle based on quarter cycle measurements. The screen shot below shows typical performance on ideal sine and cosine inputs (top pane) with the period measurements shown in the lower pane. It can be seen that they both perform admirably within their respective theoretical constraints on this ideal input. The subject of my next post will concentrate on the inputs that can be realistically extracted from live price data.
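For completeness, the sort of ideal input test shown in the screen shot can be reproduced along the lines of the sketch below, assuming the two .oct functions above have been compiled with mkoctfile:

% ideal analytic signal: a pure sine for the inphase and its 90 degree lead for the quadrature
true_period = 20 ;
t = ( 0 : 999 )' ;
inphase = sin( 2 * pi * t / true_period ) ;
quadrature = cos( 2 * pi * t / true_period ) ;
p_v1 = iq_zc_period_indicator_v1( inphase, quadrature ) ;     % half period measure x 2
p_v1_1 = iq_zc_period_indicator_v1_1( inphase, quadrature ) ; % quarter period measure x 4
plot( [ p_v1, p_v1_1 ] ) ; legend( 'v1', 'v1\_1' ) ;          % both should settle at 20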
Thursday, 4 September 2014
Update on Zero Crossing Frequency Measurement
Having now returned from a long summer hiatus I've got back to work on the subject matter of my last post and I think I've made some significant progress. Below is a screen shot of the fruits of my work so far.
The top pane shows, in blue, a series of randomly selected sine waves stitched together to form a sideways "price" channel of various periods of 10 to 30 inclusive, whilst the bottom pane shows the period measurements of the "price." The blue period line is the actual period of the blue "price" in the upper pane whilst the red line is the measured period of "price" using my current implementation of a zero crossing period measurement indicator using the actual phase of "price" as input.
Of course in reality one would not know the actual phase but would have to estimate it, which I'm currently doing via a Hilbert Sine Wave indicator, shown in green in the upper pane above. It can be seen that this is not exactly in sync with the blue price, and the period of price as extracted via the sine wave indicator calculations is shown in green in the lower pane above. The phase as extracted by the Sine Wave indicator is compared with the actual phase in the screen shot below, where blue is the actual and red the extracted phase.
Obviously there is a difference, which manifests itself as an extremely erratic zero crossing measurement when this extracted phase is used as input to the zero crossing indicator, shown in magenta in the screen shot below.
It is very encouraging that the zero crossing indicator performs in an almost theoretically perfect manner when supplied with good inputs, but it is disappointing that such inputs might not be able to be supplied by the Sine Wave indicator. I surmise that the delay induced by the calculations of the Hilbert transform might be partly to blame, but there might also be some improvements I can make to my implementation of the Sine Wave indicator (elimination of coding bugs or perhaps some coding kluge?). I will focus on this over the coming days.
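The stitched "price" series in the top pane can be generated along the lines of the sketch below; the exact construction used for the screen shot may differ slightly, so treat this as illustrative only.

% stitch together randomly chosen sine wave segments with periods from 10 to 30,
% each segment being one complete cycle, to form a sideways "price" channel
price = [] ;
true_period = [] ;
while ( numel( price ) < 1500 )
  p = randi( [ 10, 30 ] ) ;                      % random period for this segment
  segment = sin( 2 * pi * ( 0 : p - 1 )' / p ) ; % one full cycle of a sine wave
  price = [ price ; segment ] ;
  true_period = [ true_period ; p * ones( p, 1 ) ] ;
endwhile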
Monday, 16 June 2014
A New Method to Measure Instantaneous Frequency/Periods?
A few posts ago I stated that I was reassessing my dominant cycle code and to that end I have been doing a fair bit of research on the web and found some really cool stuff, such as working Octave/MATLAB code for Empirical Mode Decomposition, available here, and a couple of toolboxes here and here. However, the purpose of this post is to introduce a new idea that is inspired by a Master's thesis, authored by Yizheng Liao.
The big idea contained in this thesis is online instantaneous frequency measurement by means of measuring zero crossings of a signal and its Hilbert transform, otherwise known as the analytic signal. A simple demonstration is shown below
where the cyan line is the original (real part) signal and red the Hilbert transform (imaginary part) of a noisy sine wave signal. The Hilbert transform of sin(x) is -cos(x), which is the equivalent of sin(x-90) using degrees rather than radian notation for the phase of x. Since we can delay the phase by any amount, it is possible to have an array of phase delays to produce more zero crossings per full cycle period than just the 4 that occur with the analytic signal only. This is shown below
where, as before, the cyan is the noisy sine wave signal, the dark blue is the original smooth sine wave signal to which the noise is added, the red is the measured sine wave extracted from the noisy signal and the green, magenta and yellow lines are sine plots of the phase of the red sine delayed by 30, 60, 90, 120, 150 and 180 degrees. This last 180 degree delay line is actually superfluous as it crosses the zero line at the same time as the original signal, but I show it just for completeness. The white cursor line shows the zero line and the beginning of a complete cycle. Apart from being able to increase the number of measurable zero crossings, another advantage I can see from using this sine wave extraction method is that the extracted sine waves are much smoother than the analytic signal, hence the zero crossings themselves can be used rather than the crossings of hysteresis lines (read the paper!), avoiding any delays due to waiting to cross said hysteresis lines.
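As a bare-bones illustration of the zero crossing idea, using the hilbert function from the Octave signal package to form the analytic signal of a clean sine wave (with noisy real data spurious crossings appear, which is exactly why the thesis introduces hysteresis lines):

pkg load signal ;                                % for hilbert()
period = 25 ;
t = ( 0 : 499 )' ;
x = sin( 2 * pi * t / period ) ;                 % clean signal for illustration
analytic = hilbert( x ) ;                        % x + i * HilbertTransform( x )
re = real( analytic ) ;
im = imag( analytic ) ;
zc = @( v ) find( sign( v( 2 : end ) ) ~= sign( v( 1 : end - 1 ) ) ) ; % indices of sign changes
crossings = sort( [ zc( re ) ; zc( im ) ] ) ;    % four crossings per full cycle
estimated_period = 4 * mean( diff( crossings ) ) % should come out close to 25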
For those who are interested, the sine wave indicator I used for this is John Ehler's sine wave indicator, a nice exposition of which is here and code available here.
More on this in due course.
Sunday, 25 May 2014
Updated Cauchy-Schwarz Matching Algorithm
Following on from my previous post, below is a code box showing a slightly improved Cauchy-Schwarz matching algorithm, improved in the sense that this implementation has a slightly better effect size over random when its test runs are compared with those of the previous post's version.
function [ top_matches ] = rolling_cauchy_schwarz_matching_algo_2( open_ch, high_ch, low_ch, close_ch, period )
% pre-allocate vectors in memory
cauchy_schwarz_values = zeros( size(close_ch,1) , 1 ) ;
top_matches = zeros( size(close_ch,1), 100 ) ;
% select price bar to train nn on
for jj = size(close_ch,1)-250 : size(close_ch,1)
lookback = period( jj ) ;
sample_to_match = [ close_ch( jj-lookback : jj )' high_ch( jj-9 : jj )' low_ch( jj-9 : jj )' ( close_ch( jj-4 : jj ).-open_ch( jj-4 : jj ) )' ] ;
norm_sample_to_match = norm( sample_to_match ) ;
% for this jj train_bar, calculate cauchy_schwarz matching values in the historical record up to index jj-2
for ii = 50 : jj - 2
cauchy_schwarz_values(ii) = abs( sample_to_match * [ close_ch( ii-lookback : ii ) ; high_ch( ii-9 : ii ) ; low_ch( ii-9 : ii ) ; ( close_ch( ii-4 : ii ).-open_ch( ii-4 : ii ) ) ] ) / ( norm_sample_to_match * norm( [ close_ch( ii-lookback : ii , 1 ) ; high_ch( ii-9 : ii ) ; low_ch( ii-9 : ii ) ; ( close_ch( ii-4 : ii ).-open_ch( ii-4 : ii ) ) ] ) ) ;
end % end of ii loop
% get the top 100 matches for this price bar
[ s, sort_index ] = sort( cauchy_schwarz_values ) ;
top_matches( jj, : ) = sort_index( end-99 : end )' ;
end % end of jj loop
end % end of function
The inputs are channel normalised prices, with the length of the channel being adaptive to the dominant cycle period. This function is called as part of a rolling neural net training regime to select the top n (n = 100 in this case) matches in the historical record as training data. The actual NN training code is a close adaptation of the code in my neural net walkforward training post, but with a couple of important caveats which are discussed below.

Firstly, when training a feedforward neural network it is normal that a certain number of samples are held out of the training set for use as a cross validation set. The point of this is to ensure that the trained NN will generalise well to as yet unseen data. In the case of my rolling training regime this does not apply. The NN that is being trained for the "current bar" will be used once to classify the "current bar" and then thrown away. The "next bar" will have a completely new NN trained specifically for it, which in its turn will be discarded, and so on along the whole price history. There is no need to ensure generalisation of any specifically trained NN. This being the case, all the training set examples are used in the training, and early stopping is implemented by a crude heuristic of classification accuracy on the training set: training stops when the classification error rate on the whole training set is <= 5%. Further experience with this in the future may lead me to make some adjustments, but for now this is what I am going with.
A second reason for adopting this approach stems from my reading of this book, wherein it is stated that on financial time series the "traditional" machine learning error metrics can be misleading. It cites a (theoretical?) example of a profitable trading system that has been trained/optimised for maximum profit but has a counter-intuitive, negative R-squared. The explanation for this lies in the heavy tails of price distribution(s). It is in these tails that the extreme returns reside and where the big profits/losses are to be made. However, by using a more traditional error metric such as least squares, a ML algorithm might concentrate on the central area of a price distribution in order to reduce the error metric on the majority of price instances and thereby ignore the tails, producing a nice, low error but a useless system. The converse can be true for a good system, in that the ML least squares metric can be rubbish whilst the relevant performance metric (max profit, min drawdown, risk adjusted return etc.) of the system is great.
It is for these reasons that I have adopted my current approach.
Wednesday, 16 April 2014
Effect Size of Cauchy-Schwarz Matching Algorithm
In my last post I talked about using the Cauchy-Schwarz Inequality to match similar periods of price history to one another. This post is about the more rigorous testing of this idea.
I decided to use the Effect size as the test of choice, for which there are nice introductions here and here. A basic description of the way I implemented the test is as follows:-
- Randomly pick a section of price history, which will be used as the price history for the selection algorithm to match
- Take the 5 consecutive bars immediately following the above section of price history and store as the "target"
- Create a control group of random matches to the above "target" by randomly selecting 10 separate 5 bar pieces of price history, calculating the Cauchy-Schwarz values of these 10 compared to the target, and recording the average of these values. Repeat this step N times to create a distribution of randomly matched, average target-to-random-price Cauchy-Schwarz values. By virtue of the Central limit theorem this distribution can be expected to be approximately normal
- Using the matching algorithm (as described in the previous post), get the closest 10 matches in the price history to the random selection from step 1
- Get the 5 consecutive bars immediately following each of the 10 matches from step 4, calculate their Cauchy-Schwarz values vis-à-vis the "target", and record the average of these 10 values. This average is the "experimental" value
- Using the mean and standard deviation of the control group distribution from step 3, calculate the effect size of the experimental value and record it (see the short sketch after this list)
- Repeat all the above steps M times to form an effect size value distribution
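In essence, the effect size in step 6 is just the experimental average expressed in standard deviations of the control distribution, i.e. in Octave terms
% effect size of one experimental average vs. the control distribution of step 3
effect_size = ( experimental_average - mean( control_averages ) ) / std( control_averages ) ;
where experimental_average and control_averages are generic stand-ins for the values described in the steps above. The full test code, which repeats the whole procedure 750 times, is given in the code box below:-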
clear all
% load price file of interest
filename = input( 'Enter filename for prices, e.g. es or esmatrix: ' , 's' ) ;
data = load( "-ascii" , filename ) ;
% get tick size
switch filename
case { "cc" }
tick = 1 ;
case { "gc" "lb" "pl" "sm" "sp" }
tick = 0.1 ;
case { "ausyen" "bo" "cl" "ct" "dx" "euryen" "gbpyen" "sb" "usdyen" }
tick = 0.01 ;
case { "c" "ng" }
tick = 0.001 ;
case { "auscad" "aususd" "euraus" "eurcad" "eurchf" "eurgbp" "eurusd" "gbpchf" "gbpusd" "ho" "rb" "usdcad" "usdchf" }
tick = 0.0001 ;
case { "c" "o" "s" "es" "nd" "w" }
tick = 0.25 ;
case { "fc" "lc" "lh" "pb" }
tick = 0.025 ;
case { "ed" }
tick = 0.0025 ;
case { "si" }
tick = 0.5 ;
case { "hg" "kc" "oj" "pa" }
tick = 0.05 ;
case { "ty" "us" }
tick = 0.015625 ;
case { "ccmatrix" }
tick = 1 ;
case { "gcmatrix" "lbmatrix" "plmatrix" "smmatrix" "spmatrix" }
tick = 0.1 ;
case { "ausyenmatrix" "bomatrix" "clmatrix" "ctmatrix" "dxmatrix" "euryenmatrix" "gbpyenmatrix" "sbmatrix" "usdyenmatrix" }
tick = 0.01 ;
case { "cmatrix" "ngmatrix" }
tick = 0.001 ;
case { "auscadmatrix" "aususdmatrix" "eurausmatrix" "eurcadmatrix" "eurchfmatrix" "eurgbpmatrix" "eurusdmatrix" "gbpchfmatrix" "gbpusdmatrix" "homatrix" "rbmatrix" "usdcadmatrix" "usdchfmatrix" }
tick = 0.0001 ;
case { "cmatrix" "omatrix" "smatrix" "esmatrix" "ndmatrix" "wmatrix" }
tick = 0.25 ;
case { "fcmatrix" "lcmatrix" "lhmatrix" "pbmatrix" }
tick = 0.025 ;
case { "edmatrix" }
tick = 0.0025 ;
case { "simatrix" }
tick = 0.5 ;
case { "hgmatrix" "kcmatrix" "ojmatrix" "pamatrix" }
tick = 0.05 ;
case { "tymatrix" "usmatrix" }
tick = 0.015625 ;
endswitch
open = data( : , 4 ) ;
high = data( : , 5 ) ;
low = data( : , 6 ) ;
close = data( : , 7 ) ;
price = vwap( open, high, low, close, tick ) ;
clear -exclusive price tick
% first, get the lookback parameters on real prices
[ sine, sinelead, period ] = sinewave_indicator( price ) ;
[ max_price, min_price, channel_price ] = adaptive_lookback_max_min( price, period, tick ) ;
smooth_price = smooth_2_5( price ) ;
[ max_smooth_price, min_smooth_price, smooth_channel_price ] = adaptive_lookback_max_min( smooth_price, period, tick ) ;
cauchy_schwarz_values = zeros( size(channel_price,1) , 1 ) ;
cauchy_schwarz_values_smooth = zeros( size(channel_price,1) , 1 ) ;
% set up all recording vectors
N = 10 ; % must be >= 10
% record these values
matches_values = zeros( N, 1 ) ;
matches_smooth_values = zeros( N, 1 ) ;
distcorr_values = zeros( N, 1 ) ;
distcorr_values_smooth = zeros( N, 1 ) ;
% vectors to record averages
random_matches_values_averages = zeros( 750, 1 ) ;
random_matches_smooth_values_averages = zeros( 750, 1 ) ;
random_distcorr_averages = zeros( 750, 1 ) ;
random_distcorr_smooth_averages = zeros( 750, 1 ) ;
% effect size vectors
effect_size = zeros( 750, 1 ) ;
effect_size_smooth = zeros( 750, 1 ) ;
effect_size_distcorr = zeros( 750, 1 ) ;
effect_size_distcorr_smooth = zeros( 750, 1 ) ;
for kk = 1 : 750
% first, get a random pick from the price history and all its associated values
sample_index = randperm( (size(price,1)-55), 1 ) .+ 50 ;
lookback = period( sample_index ) ;
sample_to_match = channel_price( sample_index-lookback : sample_index )' ;
sample_to_match_smooth = smooth_channel_price( sample_index-lookback : sample_index )' ;
projection_to_match = ( ( price( (sample_index+1):(sample_index+5) ) .- min_price(sample_index) ) ./ ( max_price(sample_index)-min_price(sample_index) ) )' ;
projection_to_match_smooth = ( ( price( (sample_index+1):(sample_index+5) ) .- min_smooth_price(sample_index) ) ./ ( max_smooth_price(sample_index)-min_smooth_price(sample_index) ) )' ;
% for this pick, calculate cauchy_schwarz_values
for ii = 50 : size( price, 1 )
cauchy_schwarz_values(ii) = abs( sample_to_match * channel_price( ii-lookback : ii ) ) / ( norm(sample_to_match) * norm( channel_price( ii-lookback : ii , 1 ) ) ) ;
cauchy_schwarz_values_smooth(ii) = abs( sample_to_match_smooth * smooth_channel_price( ii-lookback : ii ) ) / ( norm(sample_to_match_smooth) * norm( smooth_channel_price( ii-lookback : ii , 1 ) ) ) ;
end
% now set the values for sample_to_match +/- 2 to zero to avoid matching with itself
cauchy_schwarz_values( sample_index-2 : sample_index+2 ) = 0.0 ;
cauchy_schwarz_values_smooth( sample_index-2 : sample_index+2 ) = 0.0 ;
% set the last six values to zero to allow for projections
cauchy_schwarz_values( end-5 : end ) = 0.0 ;
cauchy_schwarz_values_smooth( end-5 : end ) = 0.0 ;
% get the top N matches
for ii = 1 : N
[ max_val, ix ] = max( cauchy_schwarz_values ) ;
norm_price_proj_match = ( ( price( ((ix)+1):((ix)+5) ) .- min_price(ix) ) ./ ( max_price(ix)-min_price(ix) ) ) ;
matches_values(ii) = abs( projection_to_match * norm_price_proj_match ) / ( norm(projection_to_match) * norm( norm_price_proj_match ) ) ;
cauchy_schwarz_values( ix-2 : ix+2 ) = 0.0 ;
[ max_val, ix ] = max( cauchy_schwarz_values_smooth ) ;
norm_price_smooth_proj_match = ( ( price( ((ix)+1):((ix)+5) ) .- min_smooth_price(ix) ) ./ ( max_smooth_price(ix)-min_smooth_price(ix) ) ) ;
matches_smooth_values(ii) = abs( projection_to_match_smooth * norm_price_smooth_proj_match ) / ( norm(projection_to_match_smooth) * norm( norm_price_smooth_proj_match ) ) ;
cauchy_schwarz_values_smooth( ix-2 : ix+2 ) = 0.0 ;
distcorr_values(ii) = distcorr( projection_to_match', norm_price_proj_match ) ;
distcorr_values_smooth(ii) = distcorr( projection_to_match_smooth', norm_price_smooth_proj_match ) ;
end % end of top N matches loop
% get and record averages for the top N matches
matches_values_average = mean( matches_values ) ;
matches_smooth_values_average = mean( matches_smooth_values ) ;
distcorr_average = mean( distcorr_values ) ;
distcorr_smooth_average = mean( distcorr_values_smooth ) ;
% now create a null distribution of random price projections
% randomly chosen from prices
for jj = 1 : 750
random_index = randperm( (size(price,1)-55), 10 ) .+ 50 ;
for ii = 1 : 10
norm_price_proj_match = ( ( price( (random_index(ii)+1):(random_index(ii)+5) ) .- min_price(random_index(ii)) ) ./ ( max_price(random_index(ii))-min_price(random_index(ii)) ) ) ;
matches_values(ii) = abs( projection_to_match * norm_price_proj_match ) / ( norm(projection_to_match) * norm( norm_price_proj_match ) ) ;
norm_price_smooth_proj_match = ( ( price( (random_index(ii)+1):(random_index(ii)+5) ) .- min_smooth_price(random_index(ii)) ) ./ ( max_smooth_price(random_index(ii))-min_smooth_price(random_index(ii)) ) ) ;
matches_smooth_values(ii) = abs( projection_to_match_smooth * norm_price_smooth_proj_match ) / ( norm(projection_to_match_smooth) * norm( norm_price_smooth_proj_match ) ) ;
distcorr_values(ii) = distcorr( projection_to_match', norm_price_proj_match ) ;
distcorr_values_smooth(ii) = distcorr( projection_to_match_smooth', norm_price_smooth_proj_match ) ;
end % end of random index ii loop
random_matches_values_averages(jj) = mean( matches_values ) ;
random_matches_smooth_values_averages(jj) = mean( matches_smooth_values ) ;
random_distcorr_averages(jj) = mean( distcorr_values ) ;
random_distcorr_smooth_averages(jj) = mean( distcorr_values_smooth ) ;
end % end jj loop
effect_size(kk) = ( matches_values_average - mean( random_matches_values_averages ) ) / std( random_matches_values_averages ) ;
effect_size_smooth(kk) = ( matches_smooth_values_average - mean( random_matches_smooth_values_averages ) ) / std( random_matches_smooth_values_averages ) ;
effect_size_distcorr(kk) = ( distcorr_average - mean( random_distcorr_averages ) ) / std( random_distcorr_averages ) ;
effect_size_distcorr_smooth(kk) = ( distcorr_smooth_average - mean( random_distcorr_smooth_averages ) ) / std( random_distcorr_smooth_averages ) ;
end % end kk loop
all_effect_sizes = [ effect_size, effect_size_smooth, effect_size_distcorr, effect_size_distcorr_smooth ] ;
dlmwrite( 'all_effect_sizes', all_effect_sizes )
Results
Running the code on the EURUSD forex pair and plotting histograms gives this:
where figures 1 and 2 are for the Cauchy-Schwarz values and figures 3 and 4 are the Distance correlation values, included for comparative purposes, which I won't discuss in this post.
On seeing this for the first time I was somewhat surprised, as I had expected the effect size distribution(s) to be approximately normal because all the test calculations are based on averages. However, it was a pleasant surprise, due to the peak in values at the right-hand side showing a possibly substantial effect size. To make things clearer, here are the percentiles of the four histograms above:
0.00000 -5.08931 -4.79836 -3.05912 -3.65668
0.01000 -3.61724 -3.20229 -2.46932 -2.45201
0.02000 -3.39841 -2.81969 -2.21764 -2.20515
0.03000 -3.00404 -2.49009 -1.89562 -2.05380
0.04000 -2.66393 -2.35174 -1.80412 -1.91032
0.05000 -2.52514 -2.03670 -1.68800 -1.71335
0.06000 -2.22298 -1.91877 -1.59624 -1.61089
0.07000 -2.07188 -1.88256 -1.52058 -1.48763
0.08000 -1.93247 -1.79727 -1.45786 -1.42828
0.09000 -1.71065 -1.66522 -1.36500 -1.35917
0.10000 -1.59803 -1.58943 -1.31570 -1.31809
0.11000 -1.44325 -1.53087 -1.24996 -1.28199
0.12000 -1.38234 -1.44477 -1.20741 -1.21903
0.13000 -1.22440 -1.32961 -1.17397 -1.17619
0.14000 -1.14728 -1.29863 -1.12755 -1.10768
0.15000 -1.05431 -1.19564 -1.09108 -1.08591
0.16000 -0.93505 -1.10204 -1.06018 -1.04149
0.17000 -0.88272 -1.05314 -1.00478 -1.00248
0.18000 -0.79723 -1.01394 -0.96389 -0.97786
0.19000 -0.66914 -0.98012 -0.92679 -0.96108
0.20000 -0.58700 -0.88085 -0.89990 -0.91932
0.21000 -0.52548 -0.84929 -0.86971 -0.87901
0.22000 -0.44446 -0.82412 -0.83585 -0.84796
0.23000 -0.40282 -0.76732 -0.80526 -0.82919
0.24000 -0.36407 -0.68691 -0.75698 -0.80794
0.25000 -0.32960 -0.65915 -0.73488 -0.77562
0.26000 -0.21295 -0.61977 -0.64435 -0.73739
0.27000 -0.13202 -0.57937 -0.60995 -0.70502
0.28000 -0.07516 -0.50076 -0.54194 -0.67219
0.29000 -0.00845 -0.43592 -0.51490 -0.61872
0.30000 0.04592 -0.35829 -0.49879 -0.59214
0.31000 0.08091 -0.29488 -0.47284 -0.56236
0.32000 0.11649 -0.24116 -0.44727 -0.52599
0.33000 0.20059 -0.20343 -0.38769 -0.48137
0.34000 0.29594 -0.17594 -0.32956 -0.46426
0.35000 0.33832 -0.12867 -0.31033 -0.44284
0.36000 0.38473 -0.10445 -0.28196 -0.41119
0.37000 0.42759 -0.07363 -0.25178 -0.37141
0.38000 0.45809 -0.03128 -0.21921 -0.33732
0.39000 0.51545 0.00103 -0.19434 -0.30017
0.40000 0.56191 0.05818 -0.16896 -0.26556
0.41000 0.60728 0.09308 -0.15057 -0.23521
0.42000 0.63342 0.13244 -0.13961 -0.21845
0.43000 0.67951 0.17094 -0.11061 -0.20428
0.44000 0.69882 0.22192 -0.05734 -0.19437
0.45000 0.75193 0.25773 -0.03497 -0.16183
0.46000 0.79911 0.30891 -0.00695 -0.13580
0.47000 0.84183 0.35623 0.01927 -0.11969
0.48000 0.91024 0.38352 0.05030 -0.10521
0.49000 0.94791 0.42460 0.06230 -0.07570
0.50000 1.01034 0.48288 0.08379 -0.05241
0.51000 1.04269 0.54956 0.11360 -0.03448
0.52000 1.07527 0.62407 0.13003 -0.00864
0.53000 1.10908 0.65434 0.16910 0.01793
0.54000 1.12665 0.69819 0.19257 0.03546
0.55000 1.13850 0.75071 0.20893 0.05331
0.56000 1.17187 0.78859 0.24099 0.08191
0.57000 1.19397 0.82243 0.25359 0.10432
0.58000 1.22162 0.87152 0.26988 0.13012
0.59000 1.24032 0.91341 0.29813 0.16376
0.60000 1.26567 0.96977 0.32279 0.20620
0.61000 1.29286 1.00221 0.36456 0.23991
0.62000 1.32750 1.03669 0.37966 0.28647
0.63000 1.35170 1.07326 0.43526 0.31652
0.64000 1.38017 1.12882 0.45922 0.35653
0.65000 1.39101 1.15719 0.47552 0.37813
0.66000 1.41716 1.17241 0.49585 0.41064
0.67000 1.44582 1.21725 0.50760 0.42996
0.68000 1.46310 1.26081 0.56082 0.44876
0.69000 1.47664 1.27710 0.58793 0.49889
0.70000 1.49066 1.31164 0.60148 0.54122
0.71000 1.49891 1.34165 0.64747 0.57689
0.72000 1.50470 1.36688 0.67315 0.59469
0.73000 1.51436 1.38746 0.70662 0.63938
0.74000 1.52604 1.41351 0.75330 0.66263
0.75000 1.54430 1.43842 0.78925 0.67884
0.76000 1.55633 1.46536 0.81250 0.69540
0.77000 1.56282 1.48012 0.84801 0.72899
0.78000 1.57245 1.49574 0.86657 0.73934
0.79000 1.58277 1.51564 0.90696 0.76147
0.80000 1.59149 1.53226 0.93265 0.81038
0.81000 1.59883 1.54450 0.97456 0.85287
0.82000 1.60587 1.55777 1.00809 0.90534
0.83000 1.61216 1.56334 1.02570 0.96566
0.84000 1.61803 1.57583 1.05052 1.02102
0.85000 1.62568 1.58589 1.07218 1.03485
0.86000 1.63091 1.59593 1.11747 1.09383
0.87000 1.64307 1.60745 1.14659 1.16075
0.88000 1.65033 1.61638 1.17268 1.21484
0.89000 1.65691 1.62442 1.21196 1.24922
0.90000 1.66307 1.63321 1.25644 1.30013
0.91000 1.67429 1.64781 1.30644 1.33641
0.92000 1.68702 1.66001 1.34919 1.37382
0.93000 1.69829 1.67226 1.39081 1.41904
0.94000 1.70893 1.68142 1.47874 1.48799
0.95000 1.72625 1.70083 1.62107 1.58719
0.96000 1.73656 1.71328 1.82299 1.63232
0.97000 1.77279 1.74188 1.99231 1.72630
0.98000 1.89750 1.79882 2.19662 1.94227
0.99000 2.34395 2.06873 2.34937 2.24499
1.00000 3.73384 4.27923 4.11659 2.74557
where the first column contains the percentiles, and the 2nd, 3rd, 4th and 5th columns contain the effect size values corresponding to figures 1, 2, 3 and 4 above. Looking at the first column of effect size values (figure 1), it can be seen that if Cohen's "scale" is applied, over 50% of the effect size values can be described as "large," with approximately a further 15% being a "medium" effect.
All in all a successful test, which encourages me to adopt the Cauchy-Schwarz inequality, but before I do there are one or two more tweaks I would like to test. This will be the subject of my next post.
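As a postscript, the above classification can be reproduced from the saved effect size file with a few lines of Octave, assuming Cohen's conventional thresholds of 0.2, 0.5 and 0.8 for small, medium and large effects:
% classify the saved effect size values by Cohen's conventional thresholds
all_effect_sizes = dlmread( 'all_effect_sizes' ) ;
es = all_effect_sizes( : , 1 ) ;   % 1st effect size column, i.e. figure 1, raw price Cauchy-Schwarz matching
printf( 'large  ( >= 0.8 )     : %.1f%%\n' , 100 * mean( es >= 0.8 ) ) ;
printf( 'medium ( 0.5 to 0.8 ) : %.1f%%\n' , 100 * mean( es >= 0.5 & es < 0.8 ) ) ;
printf( 'small  ( 0.2 to 0.5 ) : %.1f%%\n' , 100 * mean( es >= 0.2 & es < 0.5 ) ) ;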
Sunday, 6 April 2014
The Cauchy-Schwarz Inequality
In my previous post I said I was looking into my code for the dominant cycle, mostly with a view to either improving my code or perhaps replacing/augmenting it with some other method of calculating the cycle period. To this end I have recently enrolled on a discrete time signals and systems course offered by edX. One of the lectures was about the Cauchy-Schwarz inequality, which is what this post is about.
The basic use I have in mind is to use the inequality to select sections of price history that are most similar to one another and use these as training cases for neural net training. My initial Octave code is given in the code box below:-
clear all
% load price file of interest
filename = 'eurusdmatrix' ; %input( 'Enter filename for prices, e.g. es or esmatrix: ' , 's' ) ;
data = load( "-ascii" , filename ) ;
% get tick size
switch filename
case { "cc" }
tick = 1 ;
case { "gc" "lb" "pl" "sm" "sp" }
tick = 0.1 ;
case { "ausyen" "bo" "cl" "ct" "dx" "euryen" "gbpyen" "sb" "usdyen" }
tick = 0.01 ;
case { "c" "ng" }
tick = 0.001 ;
case { "auscad" "aususd" "euraus" "eurcad" "eurchf" "eurgbp" "eurusd" "gbpchf" "gbpusd" "ho" "rb" "usdcad" "usdchf" }
tick = 0.0001 ;
case { "c" "o" "s" "es" "nd" "w" }
tick = 0.25 ;
case { "fc" "lc" "lh" "pb" }
tick = 0.025 ;
case { "ed" }
tick = 0.0025 ;
case { "si" }
tick = 0.5 ;
case { "hg" "kc" "oj" "pa" }
tick = 0.05 ;
case { "ty" "us" }
tick = 0.015625 ;
case { "ccmatrix" }
tick = 1 ;
case { "gcmatrix" "lbmatrix" "plmatrix" "smmatrix" "spmatrix" }
tick = 0.1 ;
case { "ausyenmatrix" "bomatrix" "clmatrix" "ctmatrix" "dxmatrix" "euryenmatrix" "gbpyenmatrix" "sbmatrix" "usdyenmatrix" }
tick = 0.01 ;
case { "cmatrix" "ngmatrix" }
tick = 0.001 ;
case { "auscadmatrix" "aususdmatrix" "eurausmatrix" "eurcadmatrix" "eurchfmatrix" "eurgbpmatrix" "eurusdmatrix" "gbpchfmatrix" "gbpusdmatrix" "homatrix" "rbmatrix" "usdcadmatrix" "usdchfmatrix" }
tick = 0.0001 ;
case { "cmatrix" "omatrix" "smatrix" "esmatrix" "ndmatrix" "wmatrix" }
tick = 0.25 ;
case { "fcmatrix" "lcmatrix" "lhmatrix" "pbmatrix" }
tick = 0.025 ;
case { "edmatrix" }
tick = 0.0025 ;
case { "simatrix" }
tick = 0.5 ;
case { "hgmatrix" "kcmatrix" "ojmatrix" "pamatrix" }
tick = 0.05 ;
case { "tymatrix" "usmatrix" }
tick = 0.015625 ;
endswitch
open = data( : , 4 ) ;
high = data( : , 5 ) ;
low = data( : , 6 ) ;
close = data( : , 7 ) ;
period = data( : , 12 ) ;
price = vwap( open, high, low, close, tick ) ;
[ max_adj, min_adj, channel_price ] = adaptive_lookback_max_min( price, period, tick ) ;
smooth_price = smooth_2_5( price ) ;
[ max_adj, min_adj, smooth_channel_price ] = adaptive_lookback_max_min( smooth_price, period, tick ) ;
clear -exclusive channel_price smooth_channel_price period price
% randomly choose vwap prices to match
sample_index = randperm( size(channel_price,1), 1 )
lookback = period( sample_index )
sample_to_match = channel_price( sample_index-lookback : sample_index )' ;
sample_to_match_smooth = smooth_channel_price( sample_index-lookback : sample_index )' ;
cauchy_schwarz_values = zeros( size(channel_price,1) , 1 ) ;
cauchy_schwarz_values_smooth = zeros( size(channel_price,1) , 1 ) ;
for ii = 50 : size(channel_price,1)
% match_size = size( channel_price( ii-lookback : ii ) )
cauchy_schwarz_values(ii) = abs( sample_to_match * channel_price( ii-lookback : ii ) ) / ( norm(sample_to_match) * norm( channel_price( ii-lookback : ii , 1 ) ) ) ;
cauchy_schwarz_values_smooth(ii) = abs( sample_to_match_smooth * smooth_channel_price( ii-lookback : ii ) ) / ( norm(sample_to_match_smooth) * norm( smooth_channel_price( ii-lookback : ii , 1 ) ) ) ;
end
% now set the values for sample_to_match +/- 2 to zero to avoid matching with itself
cauchy_schwarz_values( sample_index-2 : sample_index+2 ) = 0.0 ;
cauchy_schwarz_values_smooth( sample_index-2 : sample_index+2 ) = 0.0 ;
N = 10 ; % must be >= 10
% get index values of the top N matches
matches = zeros( N, 1 ) ;
matches_smooth = zeros( N, 1 ) ;
% record these values
matches_values = zeros( N, 1 ) ;
matches_smooth_values = zeros( N, 1 ) ;
for ii = 1: N
[ max_val, ix ] = max( cauchy_schwarz_values ) ;
matches(ii) = ix ;
matches_values(ii) = cauchy_schwarz_values(ix) ;
cauchy_schwarz_values( ix-2 : ix+2 ) = 0.0 ;
[ max_val, ix ] = max( cauchy_schwarz_values_smooth ) ;
matches_smooth(ii) = ix ;
matches_smooth_values(ii) = cauchy_schwarz_values_smooth(ix) ;
cauchy_schwarz_values_smooth( ix-2 : ix+2 ) = 0.0 ;
end
% Plot for visual inspection
clf ;
% the matched index values
figure(1) ;
subplot(2,1,1) ; plot( cauchy_schwarz_values, 'c' ) ;
subplot(2,1,2) ; plot( cauchy_schwarz_values_smooth, 'c' ) ;
set( gcf() , 'color' , [0 0 0] )
% the top N matched price sequences
figure(2) ;
subplot(5, 2, 1) ; plot( sample_to_match, 'c', channel_price( matches(1)-lookback : matches(1) ), 'r', channel_price( matches_smooth(1)-lookback : matches_smooth(1) ), 'y' ) ;
subplot(5, 2, 2) ; plot( sample_to_match, 'c', channel_price( matches(2)-lookback : matches(2) ), 'r', channel_price( matches_smooth(2)-lookback : matches_smooth(2) ), 'y' ) ;
subplot(5, 2, 3) ; plot( sample_to_match, 'c', channel_price( matches(3)-lookback : matches(3) ), 'r', channel_price( matches_smooth(3)-lookback : matches_smooth(3) ), 'y' ) ;
subplot(5, 2, 4) ; plot( sample_to_match, 'c', channel_price( matches(4)-lookback : matches(4) ), 'r', channel_price( matches_smooth(4)-lookback : matches_smooth(4) ), 'y' ) ;
subplot(5, 2, 5) ; plot( sample_to_match, 'c', channel_price( matches(5)-lookback : matches(5) ), 'r', channel_price( matches_smooth(5)-lookback : matches_smooth(5) ), 'y' ) ;
subplot(5, 2, 6) ; plot( sample_to_match, 'c', channel_price( matches(6)-lookback : matches(6) ), 'r', channel_price( matches_smooth(6)-lookback : matches_smooth(6) ), 'y' ) ;
subplot(5, 2, 7) ; plot( sample_to_match, 'c', channel_price( matches(7)-lookback : matches(7) ), 'r', channel_price( matches_smooth(7)-lookback : matches_smooth(7) ), 'y' ) ;
subplot(5, 2, 8) ; plot( sample_to_match, 'c', channel_price( matches(8)-lookback : matches(8) ), 'r', channel_price( matches_smooth(8)-lookback : matches_smooth(8) ), 'y' ) ;
subplot(5, 2, 9) ; plot( sample_to_match, 'c', channel_price( matches(9)-lookback : matches(9) ), 'r', channel_price( matches_smooth(9)-lookback : matches_smooth(9) ), 'y' ) ;
subplot(5, 2, 10) ; plot( sample_to_match, 'c', channel_price( matches(10)-lookback : matches(10) ), 'r', channel_price( matches_smooth(10)-lookback : matches_smooth(10) ), 'y' ) ;
set( gcf() , 'color' , [0 0 0] )
figure(3)
subplot(5, 2, 1) ; plotyy( (1:1:length(price( sample_index-lookback : sample_index ))) , price( sample_index-lookback : sample_index ) , (1:1:length(price( sample_index-lookback : sample_index ))) , price( matches(1)-lookback : matches(1) ) ) ;
subplot(5, 2, 2) ; plotyy( (1:1:length(price( sample_index-lookback : sample_index ))) , price( sample_index-lookback : sample_index ) , (1:1:length(price( sample_index-lookback : sample_index ))) , price( matches(2)-lookback : matches(2) ) ) ;
subplot(5, 2, 3) ; plotyy( (1:1:length(price( sample_index-lookback : sample_index ))) , price( sample_index-lookback : sample_index ) , (1:1:length(price( sample_index-lookback : sample_index ))) , price( matches(3)-lookback : matches(3) ) ) ;
subplot(5, 2, 4) ; plotyy( (1:1:length(price( sample_index-lookback : sample_index ))) , price( sample_index-lookback : sample_index ) , (1:1:length(price( sample_index-lookback : sample_index ))) , price( matches(4)-lookback : matches(4) ) ) ;
subplot(5, 2, 5) ; plotyy( (1:1:length(price( sample_index-lookback : sample_index ))) , price( sample_index-lookback : sample_index ) , (1:1:length(price( sample_index-lookback : sample_index ))) , price( matches(5)-lookback : matches(5) ) ) ;
subplot(5, 2, 6) ; plotyy( (1:1:length(price( sample_index-lookback : sample_index ))) , price( sample_index-lookback : sample_index ) , (1:1:length(price( sample_index-lookback : sample_index ))) , price( matches(6)-lookback : matches(6) ) ) ;
subplot(5, 2, 7) ; plotyy( (1:1:length(price( sample_index-lookback : sample_index ))) , price( sample_index-lookback : sample_index ) , (1:1:length(price( sample_index-lookback : sample_index ))) , price( matches(7)-lookback : matches(7) ) ) ;
subplot(5, 2, 8) ; plotyy( (1:1:length(price( sample_index-lookback : sample_index ))) , price( sample_index-lookback : sample_index ) , (1:1:length(price( sample_index-lookback : sample_index ))) , price( matches(8)-lookback : matches(8) ) ) ;
subplot(5, 2, 9) ; plotyy( (1:1:length(price( sample_index-lookback : sample_index ))) , price( sample_index-lookback : sample_index ) , (1:1:length(price( sample_index-lookback : sample_index ))) , price( matches(9)-lookback : matches(9) ) ) ;
subplot(5, 2, 10) ; plotyy( (1:1:length(price( sample_index-lookback : sample_index ))) , price( sample_index-lookback : sample_index ) , (1:1:length(price( sample_index-lookback : sample_index ))) , price( matches(10)-lookback : matches(10) ) ) ;
figure(4)
subplot(5, 2, 1) ; plotyy( (1:1:length(price( sample_index-lookback : sample_index ))) , price( sample_index-lookback : sample_index ) , (1:1:length(price( sample_index-lookback : sample_index ))) , price( matches_smooth(1)-lookback : matches_smooth(1) ) ) ;
subplot(5, 2, 2) ; plotyy( (1:1:length(price( sample_index-lookback : sample_index ))) , price( sample_index-lookback : sample_index ) , (1:1:length(price( sample_index-lookback : sample_index ))) , price( matches_smooth(2)-lookback : matches_smooth(2) ) ) ;
subplot(5, 2, 3) ; plotyy( (1:1:length(price( sample_index-lookback : sample_index ))) , price( sample_index-lookback : sample_index ) , (1:1:length(price( sample_index-lookback : sample_index ))) , price( matches_smooth(3)-lookback : matches_smooth(3) ) ) ;
subplot(5, 2, 4) ; plotyy( (1:1:length(price( sample_index-lookback : sample_index ))) , price( sample_index-lookback : sample_index ) , (1:1:length(price( sample_index-lookback : sample_index ))) , price( matches_smooth(4)-lookback : matches_smooth(4) ) ) ;
subplot(5, 2, 5) ; plotyy( (1:1:length(price( sample_index-lookback : sample_index ))) , price( sample_index-lookback : sample_index ) , (1:1:length(price( sample_index-lookback : sample_index ))) , price( matches_smooth(5)-lookback : matches_smooth(5) ) ) ;
subplot(5, 2, 6) ; plotyy( (1:1:length(price( sample_index-lookback : sample_index ))) , price( sample_index-lookback : sample_index ) , (1:1:length(price( sample_index-lookback : sample_index ))) , price( matches_smooth(6)-lookback : matches_smooth(6) ) ) ;
subplot(5, 2, 7) ; plotyy( (1:1:length(price( sample_index-lookback : sample_index ))) , price( sample_index-lookback : sample_index ) , (1:1:length(price( sample_index-lookback : sample_index ))) , price( matches_smooth(7)-lookback : matches_smooth(7) ) ) ;
subplot(5, 2, 8) ; plotyy( (1:1:length(price( sample_index-lookback : sample_index ))) , price( sample_index-lookback : sample_index ) , (1:1:length(price( sample_index-lookback : sample_index ))) , price( matches_smooth(8)-lookback : matches_smooth(8) ) ) ;
subplot(5, 2, 9) ; plotyy( (1:1:length(price( sample_index-lookback : sample_index ))) , price( sample_index-lookback : sample_index ) , (1:1:length(price( sample_index-lookback : sample_index ))) , price( matches_smooth(9)-lookback : matches_smooth(9) ) ) ;
subplot(5, 2, 10) ; plotyy( (1:1:length(price( sample_index-lookback : sample_index ))) , price( sample_index-lookback : sample_index ) , (1:1:length(price( sample_index-lookback : sample_index ))) , price( matches_smooth(10)-lookback : matches_smooth(10) ) ) ;
% Print results to terminal
results = zeros( N , 2 ) ;
for ii = 1 : N
results(ii,1) = distcorr( sample_to_match', channel_price( matches(ii)-lookback : matches(ii) ) ) ;
results(ii,2) = distcorr( sample_to_match', channel_price( matches_smooth(ii)-lookback : matches_smooth(ii) ) ) ;
end
results = [ matches_values matches_smooth_values results ]
After some basic "housekeeping" code to load the price file of interest and normalise the prices, a random section of the price history is selected and then, in a loop, the top N matches in the history are found using the inequality as the metric for matching. A value of 0 means that the price series being compared are orthogonal, and hence as dissimilar to each other as possible, whilst a value of 1 means the opposite. There are two types of matching; the raw price matched with raw price, and a smoothed price matched with smoothed price.First off, although the above code randomly selects a section of price history to match, I deliberately hand chose a section to match for illustrative purposes in this post. Below is the section
where the section ends at the point where the vertical cursor crosses the price and begins at the high just below the horizontal cursor, for a look back period of 16 bars. For context, here is a zoomed out view.
I chose this section because it represents a "difficult" set of prices, i.e. moving sideways at the end of a retracement and perhaps reacting to a previous low acting as resistance, as well as being in a Fibonacci retracement zone.
The first set of code outputs is this chart
which shows the Cauchy-Schwarz values for the whole range of the price series, with the upper pane being values for the raw price matching and the lower pane being the smoothed price matching. Note that in the code the values are set to zero after the max function has selected each best match, so the spikes down to zero show the points in time from which the top N (in this case 10) matches were taken.
The next chart output shows the normalised prices that the matching is done against, with the cyan being the original sample (the same in all subplots), the red being the raw price matches and the yellow being the smoothed price matches.
The closest match is the top left subplot, and then reading horizontally and down to the 10th best in the bottom right subplot.
The next plot shows the price matches un-normalised, for the raw price matching, with the original sample being blue,
and next for the smoothed matching,
and finally, side by side for easy visual comparison.
N.b. For all the smoothed plots above, although the matching is done on smoothed prices, the unsmoothed, raw prices for these matches are plotted.
After plotting all the above, the code prints to terminal some details thus:
lookback = 16
results =
0.95859 0.98856 0.89367 0.86361
0.95733 0.98753 0.93175 0.86839
0.95589 0.98697 0.87398 0.67945
0.95533 0.98538 0.85346 0.83079
0.95428 0.98293 0.91212 0.77225
0.94390 0.98292 0.79350 0.66563
0.93908 0.98150 0.71753 0.77458
0.93894 0.97992 0.86839 0.72492
0.93345 0.97969 0.74456 0.79060
0.93286 0.97940 0.86361 0.61103
which, column-wise, are the Cauchy-Schwarz values for the raw price matching and the smoothed price matching, and the Distance correlation values for the raw price matching and the smoothed price matching, respectively.
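For reference, the Cauchy-Schwarz values in the first two columns are simply the normalised absolute inner products computed in the matching loop above. Pulled out as a small standalone helper (a sketch only, not part of the code above), the calculation is
% similarity derived from the Cauchy-Schwarz inequality:
% 1 = as similar as possible, 0 = orthogonal and hence as dissimilar as possible
function similarity = cs_similarity( a, b )
  % a and b are equal-length vectors, e.g. two normalised price sections
  similarity = abs( a(:)' * b(:) ) / ( norm( a ) * norm( b ) ) ;
endfunction
so that, for example, cs_similarity( sample_to_match, channel_price( matches(1)-lookback : matches(1) ) ) should reproduce the top left value in the results above.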
The code used to calculate the Distance correlation is given below.
% Copyright (c) 2013, Shen Liu
% All rights reserved.
% Redistribution and use in source and binary forms, with or without
% modification, are permitted provided that the following conditions are
% met:
% * Redistributions of source code must retain the above copyright
% notice, this list of conditions and the following disclaimer.
% * Redistributions in binary form must reproduce the above copyright
% notice, this list of conditions and the following disclaimer in
% the documentation and/or other materials provided with the distribution
% THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
% AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
% IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
% ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
% LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
% CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
% SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
% INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
% CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
% ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
% POSSIBILITY OF SUCH DAMAGE.
function dcor = distcorr(x,y)
% This function calculates the distance correlation between x and y.
% Reference: http://en.wikipedia.org/wiki/Distance_correlation
% Date: 18 Jan, 2013
% Author: Shen Liu (shen.liu@hotmail.com.au)
% Check if the sizes of the inputs match
if size(x,1) ~= size(y,1) ;
error('Inputs must have the same number of rows')
end
% Delete rows containing unobserved values
N = any([isnan(x) isnan(y)],2) ;
x(N,:) = [] ;
y(N,:) = [] ;
% Calculate doubly centered distance matrices for x and y
a = pdist([x,x]) ; % original MATLAB call is to pdist2( x, x )
mcol = mean(a) ;
mrow = mean(a,2) ;
ajbar = ones(size(mrow))*mcol ;
akbar = mrow*ones(size(mcol)) ;
abar = mean(mean(a))*ones(size(a)) ;
A = a - ajbar - akbar + abar ;
b = pdist([y,y]) ;
mcol = mean(b) ;
mrow = mean(b,2) ;
bjbar = ones(size(mrow))*mcol ;
bkbar = mrow*ones(size(mcol)) ;
bbar = mean(mean(b))*ones(size(b)) ;
B = b - bjbar - bkbar + bbar ;
% Calculate squared sample distance covariance and variances
dcov = sum(sum(A.*B))/(size(mrow,1)^2) ;
dvarx = sum(sum(A.*A))/(size(mrow,1)^2) ;
dvary = sum(sum(B.*B))/(size(mrow,1)^2) ;
% Calculate the distance correlation
dcor = sqrt(dcov/sqrt(dvarx*dvary)) ;
These results show promise, and I intend to apply a more rigorous test to them, which will be the subject of a future post.