Friday, 17 September 2021

Matrix Profile and Weakly Labelled Data - Update 1

This is the first post in a short series detailing my recent work, which follows on from my previous post. This post covers some problems I have had and how I partially solved them.

The main problem was simply the speed at which the code (available from the companion website) runs. The first stage Matrix Profile code runs in a few seconds and the second, individual evaluation stage takes no more than a few minutes, but the third stage, the greedy search, which uses Golden Section Search over the pattern candidates, can take many, many hours. My approach was simply to optimise the code to the best of my ability. My optimisations, all in the compute_f_meas.m function, are shown in the following code boxes. This while loop

i = 1;
while true

    if i >= length(anno_st)
        break;
    endif

    first_part = anno_st(1:i);
    second_part = anno_st(i+1:end);
    bad_st = abs(second_part - anno_st(i)) < sub_len;
    second_part = second_part(~bad_st);
    anno_st = [first_part; second_part];
    i = i + 1;

endwhile
is replaced by this .oct compiled version of the same while loop
#include <octave/oct.h>
#include <octave/dColVector.h>

DEFUN_DLD ( stds_f_meas_while_loop_replace, args, nargout,
"-*- texinfo -*-\n\
@deftypefn {Function File} {} stds_f_meas_while_loop_replace (@var{input_vector,sublen})\n\
This function takes an input vector and a scalar sublen\n\
length. The function sets to zero those elements in the\n\
input vector that are closer to the preceding value than\n\
sublen. This function replaces a time consuming .m while loop\n\
in the stds compute_f_meas.m function.\n\
@end deftypefn" )

{
octave_value_list retval_list ;
int nargin = args.length () ;

// check the input arguments
if ( nargin != 2 ) // there must be a vector and a scalar sublen
   {
   error ("Invalid arguments. Inputs are a column vector and a scalar value sublen.") ;
   return retval_list ;
   }

if ( args(0).length () < 2 )
   {
   error ("Invalid 1st argument length. Input is a column vector of length > 1.") ;
   return retval_list ;
   }
   
if ( args(1).length () > 1 )
   {
   error ("Invalid 2nd argument length. Input is a scalar value for sublen.") ;
   return retval_list ;
   }
// end of input checking  
  
ColumnVector input = args(0).column_vector_value () ;
double sublen = args(1).double_value () ;
double last_iter ;

// initialise last_iter value
last_iter = input( 0 ) ;
     
for ( octave_idx_type ii ( 1 ) ; ii < args(0).length () ; ii++ )
    {
    
      if ( input( ii ) - last_iter >= sublen )
      {
        last_iter = input( ii ) ;
      }
      else
      {
        input( ii ) = 0.0 ;
      }
      
    } // end for loop
   
retval_list( 0 ) = input ;

return retval_list ;

} // end of function
and called thus
anno_st = stds_f_meas_while_loop_replace( anno_st , sub_len ) ;
anno_st( anno_st == 0 ) = [] ;
This for loop
is_tp = false(length(anno_st), 1);
for i = 1:length(anno_st)
    if anno_ed(i) > length(label)
        anno_ed(i) = length(label);
    end
    if sum(label(anno_st(i):anno_ed(i))) > 0.8*sub_len
        is_tp(i) = true;
    end
end
tp_pre = sum(is_tp);
is replaced by use of cellslices.m and cellfun.m thus
label_length = length( label ) ;
anno_ed( anno_ed > label_length ) = label_length ;
cell_slices = cellslices( label , anno_st , anno_ed ) ;
cell_sums = cellfun( @sum , cell_slices ) ;
tp_pre = sum( cell_sums > 0.8 * sub_len ) ;
and a further for loop
is_tp = false(length(pos_st), 1);
for i = 1:length(pos_st)
    if sum(anno(pos_st(i):pos_ed(i))) > 0.8*sub_len
        is_tp(i) = true;
    end
end
tp_rec = sum(is_tp);
is replaced by
cell_slices = cellslices( anno , pos_st , pos_ed ) ;
cell_sums = cellfun( @sum , cell_slices ) ;
tp_rec = sum( cell_sums > 0.8 * sub_len ) ;

Although the above measurably improves running times, the third stage code is still sluggish overall. I have found that the best way to deal with this, on the advice of the original paper's author, is to limit the number of patterns to search for, the "pat_max" variable, to the minimum needed to achieve a satisfactory result. By this I mean that if pat_max = 5 and the result returned also has 5 identified patterns, incrementally increase pat_max until the number of identified patterns is less than pat_max. This does, by necessity, mean running the whole routine a few times, but it is still quicker than drastically overestimating pat_max, i.e. choosing a value of, say, 50 to finally identify maybe only 5 or 6 patterns.
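
A minimal sketch of this incremental approach is shown below. The function name run_stds_stage3 is a hypothetical stand-in of my own for whatever wrapper calls the third, greedy search stage of the companion code and returns the identified patterns; it is not part of the original code base.

pat_max = 5 ;
while ( true )
  ## run the third, greedy search stage with the current pat_max
  patterns = run_stds_stage3( data , labels , sub_len , pat_max ) ; ## hypothetical wrapper
  if ( numel( patterns ) < pat_max )
    break ; ## fewer patterns identified than allowed, so pat_max was not the binding constraint
  endif
  pat_max = pat_max + 1 ; ## all pat_max slots were used, so increase and re-run
endwhile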

More in due course.

Saturday, 4 September 2021

"Matrix profile: Using Weakly Labeled Time Series to Predict Outcomes" Paper

Back in May of this year I posted about how I had intended to use Matrix Profile (MP) to somehow cluster the "initial balance" of Market Profile charts with a view to getting a heads up on immediately following price action. Since then, my thinking has evolved as a result of learning about the paper "Matrix profile: Using Weakly Labeled Time Series to Predict Outcomes" and its companion website. This very much seems to accomplish the same end I had envisaged with my clustering of initial balances, so I am going to try to use this approach instead.

As a preliminary, I have decided to "weakly label" my time series data using the simple code loop shown below.

for ii = 1 : numel( ix )

  ## price series for the session immediately following the initial balance
  y_values = train_data( ix( ii ) + 1 : ix( ii ) + 19 , 1 ) ;
  london_session_ret = y_values( end ) - y_values( 1 ) ;

  [ max_y , max_ix ] = max( y_values ) ;
  max_long_ex = max_y - y_values( 1 ) ;   ## largest rise from the session open

  [ min_y , min_ix ] = min( y_values ) ;
  max_short_ex = min_y - y_values( 1 ) ;  ## largest fall from the session open (negative)

  if ( london_session_ret > 0 && ( max_long_ex / ( -1 * max_short_ex ) ) >= 3 && max_ix > min_ix )
    labels( ix( ii ) - 11 : ix( ii ) , 1 ) = 1 ;  ## weak long label on the 12 bars ending at ix( ii )
  elseif ( london_session_ret < 0 && ( max_short_ex / max_long_ex ) <= -3 && max_ix < min_ix )
    labels( ix( ii ) - 11 : ix( ii ) , 1 ) = -1 ; ## weak short label on the 12 bars ending at ix( ii )
  endif

endfor
What this essentially does (for the long side) is ensure that: price is higher at the end of y_values than at the beginning; there is a reward/risk opportunity of at least 3:1 for at least one trade during the period covered by the time range of y_values (either the London a.m. session or the combined New York a.m./London p.m. session) following a 7 a.m. to 8.50 a.m. (local time) formation of an opening Market Profile/initial balance; and the maximum adverse excursion occurs before the maximum favourable excursion. A typical chart on the long side looks like this.
This would have the "weak" label for a long trade, and the label would be applied to the Market Profile data that immediately precedes this price action. On the other side, a short labelled chart typically looks like this.
As can be seen, trading "against the label" offers few opportunities for profitable entries/exits. My hope is that a "dictionary" of long/short biased Market Profile patterns can be discovered using the ideas/code in the links above. For completeness, the following chart is typical of price action which does not meet the looped code's criteria for either a long or a short label.

It is easy to envisage trading this type of price action by fading moves that go outside the "value area" of a Market Profile chart.

More in due course.


Friday, 27 August 2021

Another Iterative Improvement of my Volume/Market Profile Charts

Below is a screenshot of this new chart version, showing today's (Friday's) price action at a 10 minute bar scale:

Just by looking at the chart it might not be obvious to readers what has changed, so the changes are detailed below.

The first change is in how the volume profile (the horizontal histogram on the left) is calculated. The "old" version of the chart calculates the profile by assuming the "model" that tick volume for each 10 minute bar is normally distributed across the high/low range of the bar, and then the profile histogram is the accumulation of these individual, 10 minute, normally distributed "mini profiles." A more complete description of this is given in my Market Profile Chart in Octave blog post, with code.
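
As a rough illustration only, the old, model based allocation for a single 10 minute bar can be sketched as below; the variable names and the choice of density parameters are assumptions for this sketch, not the actual code from the linked post.

## spread one bar's tick volume across its high/low range under a normal density
pip = 0.0001 ;
levels = ( round( bar_low / pip ) : round( bar_high / pip ) )' .* pip ; ## price levels spanned by the bar
mu = ( bar_high + bar_low ) / 2 ;          ## centre the density on the bar's mid price
sigma = ( bar_high - bar_low ) / 4 ;       ## assumed spread of the density
dens = exp( -0.5 .* ( ( levels - mu ) ./ sigma ) .^ 2 ) ;
mini_profile = bar_volume .* dens ./ sum( dens ) ; ## the bar's tick volume allocated per level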

The new approach is more data centric rather than model based. Every 10 minutes, instead of downloading the 10 minute OHLC and tick volume, the last 10 minutes' worth of 5 second OHLC and tick volume is downloaded. The whole tick volume of each 5 second period is assigned to a price level equivalent to the typical price (rounded to the nearest pip) of that 5 second period, and the volume profile is then the accumulation of these volume ticks per price level. I think this is a much more accurate reflection of the price levels at which tick volume actually occurred compared to the old, model based charts; a minimal sketch of this accumulation is shown after the chart comparison below. This second screenshot is of the old chart over the exact same price data as the first, improved version of the chart.

It can be seen that the two volume profile histograms of the respective charts differ from each other in terms of their overall shape and the number and price levels of peaks (Points of Control) and troughs (Low Volume Nodes).
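
A minimal sketch of the new, data centric accumulation described above, assuming ohlc_5s is an N x 4 matrix of the downloaded 5 second [open high low close] prices, vol_5s is the matching N x 1 tick volume vector, the typical price is taken as (high + low + close) / 3 and one pip is 0.0001, is:

typical = ( ohlc_5s( : , 2 ) + ohlc_5s( : , 3 ) + ohlc_5s( : , 4 ) ) ./ 3 ;
pip = 0.0001 ;
price_levels = round( typical ./ pip ) .* pip ;              ## round typical prices to the nearest pip
[ uniq_levels , ~ , level_ix ] = unique( price_levels ) ;
profile = accumarray( level_ix , vol_5s ) ;                  ## total tick volume per price level

The profile vector, plotted horizontally against uniq_levels, gives the histogram on the left of the chart.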

The second change in the new chart is in how the background heatmap is plotted. The heatmap is a different presentation of the volume profile whereby higher volume price levels are shown by the brighter yellow colours. The old chart only displays the heatmap associated with the latest calculated volume profile histogram, which is projected back in time. This is, of course, a form of lookahead bias when plotting past prices over the latest heatmap. The new chart solves this by plotting a "rolling" version of the heatmap which reflects the volume profile that was in force at the time each 10 minute OHLC candle formed. It is easy to see how the Points of Control and Low Volume Nodes price levels ebb and flow throughout the trading day.
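
The rolling idea can be sketched as below (this is not the actual plotting code); profile_at_bar is a hypothetical helper returning the volume per price level as it stood when 10 minute bar t closed, so no bar is ever shaded by information from later in the day.

heat = zeros( numel( uniq_levels ) , n_bars ) ;
for t = 1 : n_bars
  heat( : , t ) = profile_at_bar( t ) ;      ## hypothetical: volume per level as at bar t's close
endfor
imagesc( 1 : n_bars , uniq_levels , heat ) ; ## brighter colours where more volume has traded
set( gca , 'ydir' , 'normal' ) ;             ## price increasing upwards
colormap( 'hot' ) ;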

The third change, which naturally followed on from the downloading of 5 second data, is in the plotting of the candlesticks. Rather than having a normal, open to close candlestick body, the candlesticks show the "mini volume profiles" of the tick volume within each bar, plotted via Octave's patch function. The white candlestick wicks indicate the usual high/low range, and the open and close levels are shown by grey and black dots respectively. This is more clearly seen in the zoomed in screenshot below.
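
As a rough, self-contained sketch of the idea (not the actual chart code), one candle's mini volume profile body could be drawn with Octave's patch function as follows, where levels and level_vol are the price levels and tick volumes within that bar and bar_x is the candle's x position; all of these names are illustrative.

max_half_width = 0.4 ;                               ## half-width of the widest level, in bar units
widths = max_half_width .* level_vol ./ max( level_vol ) ;
level_h = min( diff( levels ) ) ;                    ## height of one price level
for jj = 1 : numel( levels )
  patch( bar_x + [ -widths(jj) widths(jj) widths(jj) -widths(jj) ] , ...
         levels(jj) + [ -level_h -level_h level_h level_h ] ./ 2 , ...
         [ 0.7 0.7 0.7 ] , 'edgecolor' , 'none' ) ;  ## one horizontal slab per price level
endfor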

I wanted to plot these types of bars because recently I have watched some trading webcasts, which talked about "P", "b" and "D" shaped bar profiles at "areas of interest." The upshot of these webcasts is that, in general, a "P" bar is bullish, a "b" is bearish and a "D" is "in balance" when they intersect an "area of interest" such as Point of Control, Low Volume Node, support and resistance etc. This is supposed to be indicative of future price direction over the immediate short term. With this new version of chart, I shall be in a position to investigate these claims for myself.

Monday, 5 July 2021

Market Profile Low Volume Node Chart

As a diversion from my recent work with Matrix Profile I have completed work on a new chart type in Octave, namely a Market Profile Low Volume Node (LVN) chart, two slightly different versions of which are shown below.

This first one is derived from a TPO chart, whilst the next
is derived from a Volume profile chart.

The horizontal lines are drawn at levels which are considered to be "lows" in the underlying, but not shown, TPO/Volume profiles. The yellow lines are "stronger lows" than the green lines, and the blue lines are extensions of the previous day's "strong lows" in force at the end of that day's trading.
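
Exactly how the "lows" and their relative strength are scored is not shown here, but a minimal sketch of flagging LVN candidates as local minima of a profile histogram, where profile holds the TPO counts or tick volume per price level and profile_levels the matching prices, might look like this:

is_lvn = false( size( profile ) ) ;
for jj = 2 : numel( profile ) - 1
  if ( profile( jj ) < profile( jj - 1 ) && profile( jj ) < profile( jj + 1 ) )
    is_lvn( jj ) = true ;                   ## strictly lower than both neighbouring levels
  endif
endfor
lvn_levels = profile_levels( is_lvn ) ;     ## price levels at which to draw the horizontal lines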

The point of all this, according to online guru theory, is that price is expected to be "rejected" at LVNs, either by bouncing off them, à la support or resistance, or by powering through the LVN level, usually on increased volume. The charts show the rolling development of the LVNs as the underlying profiles change throughout the day, hence lines can appear, disappear and change colour. As this is a new avenue of investigation for me I feel it is too soon to comment on these lines' efficacy, but it does seem uncanny how often price seems to react to these levels.

More in due course.

Wednesday, 26 May 2021

Update on Recent Matrix Profile Work

Since my previous post, on Matrix Profile (MP), I have been doing a lot of online reading about MP and going back to various source papers and code that are available at the UCR Matrix Profile page. I have been doing this because, despite my initial enthusiasm, the R tsmp package didn't turn out to be suitable for what I wanted to do, or, perhaps more correctly, I couldn't hack it to get the sort of results I wanted, hence my need to go back to "first principles" and the code from the UCR page.

Readers may recall that my motivation was to look for time series motifs that form "initial balance (IB)" set ups of Market Profile charts. The rationale for this is that different IBs are precursors to specific market tendencies which may provide a clue or an edge in subsequent market action. A typical scenario from the literature on Market Profile might be "an Open Test Drive can often indicate one of the day's extremes." If this is actually true, one could go long/short with a high confidence stop at the identified extreme. Below is a screenshot of some typical IB profiles:

where each letter typically represents a 30 minute period of market action. The problem is that Market Profile charts, to me at least, are inherently visual and therefore do not easily lend themselves to an algorithmic treatment, which makes it difficult to back test in a robust fashion. This is why I have been trying to use MP.

The first challenge I faced was how to preprocess price action data such as OHLC and volume such that I could use MP. In the end I resorted to using the mid-price, the high-low range and (tick) volume as proxies for market direction, market volatility and market participation. Because IBs occur over market opens, I felt it was important to use the volatility and participation proxies as these are important markers for the sentiment of subsequent price action. This choice necessitated using a multivariate form of MP, and I used the basic MP STAMP code that is available at Matrix Profile VI: Meaningful Multidimensional Motif Discovery, with some slight tweaks for my use case.

Having the above tools in hand, what should they be used for? I decided that cluster analysis is what is needed, i.e. clustering using the motifs that MP can discover. For this purpose, I used the approach outlined in section 3.9 of the paper "The Swiss Army Knife of Time Series Data Mining." The reasoning behind this choice is that if, for example, an "Open Test Drive IB" is a real thing, it should occur frequently enough that time series sub-sequences of it can be clustered or associated with an "Open Test Drive IB" motif. If all such prototype motifs can be identified and all IBs can be assigned to one of them, subsequent price action can be investigated to check the anecdotal claims, such as the one quoted above.

My Octave code implementation of the linked Swiss Army Knife routine is shown in the code box below.

data = dlmread( '/path/to/mv_data' ) ;
skip_loc = dlmread( '/path/to/skip_loc' ) ;
skip_loc_copy = find( skip_loc ) ; skip_loc_copy2 = skip_loc_copy ; skip_loc_copy3 = skip_loc_copy ;
sub_len = 9 ;
data_len = size( data , 1 ) ;
data_to_use = [ (data(:,2).+data(:,3))./2 , data(:,2).-data(:,3) , data(:,5) ] ;

must_dim = [] ;
exc_dim = [] ;
[ pro_mul , pro_idx , data_freq , data_mu , data_sig ] = multivariate_stamp( data_to_use, sub_len, must_dim, exc_dim, skip_loc ) ;
original_single_MP = pro_mul( : , 1 ) ; ## just mid price
original_single_MP2 = original_single_MP .+ pro_mul( : , 2 ) ; ## mid price and hi-lo range
original_single_MP3 = original_single_MP2 .+ pro_mul( : , 3 ) ; ## mid price, hi-lo range and volume

## Swiss Army Knife Clustering
RelMP = original_single_MP ; RelMP2 = original_single_MP2 ; RelMP3 = original_single_MP3 ;
DissMP = inf( length( RelMP ) , 1 ) ; DissMP2 = DissMP ; DissMP3 = DissMP ; 
minValStore = [] ; minIdxStore = [] ; minValStore2 = [] ; minIdxStore2 = [] ; minValStore3 = [] ; minIdxStore3 = [] ;
## set up a recording matrix 
all_dist_pro = zeros( size( RelMP , 1 ) , size( data_to_use , 2 ) ) ;

for ii = 1 : 500
## reset recording matrix for this ii loop  
all_dist_pro( : , : ) = 0 ;

## just mid price
[ minVal , minIdx ] = min( RelMP ) ;
minValStore = [ minValStore ; minVal ] ; minIdxStore = [ minIdxStore ; minIdx ] ;
DissmissRange = data_to_use( minIdx : minIdx + sub_len - 1 , : ) ;
[ dist_pro , ~ ] = multivariate_mass (data_freq(:,1), DissmissRange(:,1), data_len, sub_len, data_mu(:,1), data_sig(:,1), data_mu(minIdx,1), data_sig(minIdx,1) ) ;
all_dist_pro( : , 1 ) = real( dist_pro ) ;
JMP = all_dist_pro( : , 1 ) ;
DissMP = min( DissMP , JMP ) ; ## dismiss all motifs discovered so far
RelMP = original_single_MP ./ DissMP ;
skip_loc_copy = unique( [ skip_loc_copy ; ( minIdx : 1 : minIdx + sub_len - 1 )' ] ) ;
RelMP( skip_loc_copy ) = 1 ;

## mid price and hi-lo range
[ minVal , minIdx ] = min( RelMP2 ) ;
minValStore2 = [ minValStore2 ; minVal ] ; minIdxStore2 = [ minIdxStore2 ; minIdx ] ;
DissmissRange = data_to_use( minIdx : minIdx + sub_len - 1 , : ) ;
[ dist_pro , ~ ] = multivariate_mass (data_freq(:,1), DissmissRange(:,1), data_len, sub_len, data_mu(:,1), data_sig(:,1), data_mu(minIdx,1), data_sig(minIdx,1) ) ;
all_dist_pro( : , 2 ) = real( dist_pro ) ;
[ dist_pro , ~ ] = multivariate_mass (data_freq(:,2), DissmissRange(:,2), data_len, sub_len, data_mu(:,2), data_sig(:,2), data_mu(minIdx,2), data_sig(minIdx,2) ) ;
all_dist_pro( : , 2 ) = all_dist_pro( : , 2 ) .+ real( dist_pro ) ;
JMP2 = all_dist_pro( : , 2 ) ;
DissMP2 = min( DissMP2 , JMP2 ) ; ## dismiss all motifs discovered so far
RelMP2 = original_single_MP2 ./ DissMP2 ;
skip_loc_copy2 = unique( [ skip_loc_copy2 ; ( minIdx : 1 : minIdx + sub_len - 1 )' ] ) ;
RelMP2( skip_loc_copy2 ) = 1 ;

## mid price, hi-lo range and volume
[ minVal , minIdx ] = min( RelMP3 ) ;
minValStore3 = [ minValStore3 ; minVal ] ; minIdxStore3 = [ minIdxStore3 ; minIdx ] ;
DissmissRange = data_to_use( minIdx : minIdx + sub_len - 1 , : ) ;
[ dist_pro , ~ ] = multivariate_mass (data_freq(:,1), DissmissRange(:,1), data_len, sub_len, data_mu(:,1), data_sig(:,1), data_mu(minIdx,1), data_sig(minIdx,1) ) ;
all_dist_pro( : , 3 ) = real( dist_pro ) ;
[ dist_pro , ~ ] = multivariate_mass (data_freq(:,2), DissmissRange(:,2), data_len, sub_len, data_mu(:,2), data_sig(:,2), data_mu(minIdx,2), data_sig(minIdx,2) ) ;
all_dist_pro( : , 3 ) = all_dist_pro( : , 3 ) .+ real( dist_pro ) ;
[ dist_pro , ~ ] = multivariate_mass (data_freq(:,3), DissmissRange(:,3), data_len, sub_len, data_mu(:,3), data_sig(:,3), data_mu(minIdx,3), data_sig(minIdx,3) ) ;
all_dist_pro( : , 3 ) = all_dist_pro( : , 3 ) .+ real( dist_pro ) ;
JMP3 = all_dist_pro( : , 3 ) ;
DissMP3 = min( DissMP3 , JMP3 ) ; ## dismiss all motifs discovered so far
RelMP3 = original_single_MP3 ./ DissMP3 ;
skip_loc_copy3 = unique( [ skip_loc_copy3 ; ( minIdx : 1 : minIdx + sub_len - 1 )' ] ) ;
RelMP3( skip_loc_copy3 ) = 1 ;

endfor ## end ii loop

There are a few things to note about this code:

  • the use of a skip_loc vector 
  • a sub_len value of 9
  • 3 different calculations for DissMP and RelMP vectors

i) The skip_loc vector is a vector of time series indices (Idx) at which the MP and possible cluster motifs should not be calculated. This avoids identifying motifs from sub-sequences that do not actually occur in the underlying data, but only exist because of the way I concatenated the data during pre-processing, i.e. 7am to 9am, 7am to 9am, ... etc.

ii) A sub_len value of 9 means 9 x 10 minute OHLC bars, i.e. 90 minutes, to match the 30 minute A, B and C periods of the above IB screenshot.

iii) There are 3 different calculations because different combinations of the underlying data are used.

This last part probably needs more explanation. A multivariate RelMP is created by adding together individual dist_pros (distance profiles), and cluster motif identification is achieved by finding minimums in the RelMP; however, a minimum in a multivariate RelMP is generally a different minimum to the minimums of the individual, univariate RelMPs. What my code does is use a univariate RelMP of the mid price alone, plus 2 multivariate RelMPs: one of mid price and high-low range, and one of mid price, high-low range and volume. This gives 3 sets of minValues and minValueIdxs, one for each combination of data. The idea is to run the ii loop for, e.g. 500 iterations, and to then identify possible "robust" IB cluster motifs by using the Octave intersect function to get the minIdx that are common to all 3 sets of Idx data.
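
A minimal sketch of this final intersection step, using the minIdxStore vectors accumulated in the ii loop above, is:

## indices that were identified as motif locations by all 3 RelMP variants
common_idx = intersect( intersect( minIdxStore , minIdxStore2 ) , minIdxStore3 ) ;

## plot the mid price sub-sequence of each common candidate motif
for jj = 1 : numel( common_idx )
  figure( jj ) ;
  plot( data_to_use( common_idx( jj ) : common_idx( jj ) + sub_len - 1 , 1 ) ) ;
endfor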

By way of example, setting the ii loop to just 100 iterations results in only one intersect Idx value on some EUR_USD forex data, the plot of which is shown below:

Comparing this with the IB screenshot above, I would say it represents a typical "Open Auction" process: prices rotating upwards/downwards with no real conviction either way, followed by either a possible long breakout on the last bar or, alternatively, a last upwards test before a price plunge.

My intent is to use the above methodology to get a set of candidate IB motifs upon which a clustering algorithm can be based. This clustering algorithm will be the subject of my next post.

Friday, 26 March 2021

Market/Volume Profile and Matrix Profile

A quick preview of what I am currently working on: using Matrix Profile to search for time series motifs, using the R tsmp package. The exact motifs I'm looking for are the various "initial balance" set ups of Market Profile charts. 

To do so, I'm concentrating the investigation around both the London and New York opening times, with a custom annotation vector (av). Below is a simple R function to set up this custom av, which is produced separately in Octave and then loaded into R.

mp_adjusted_by_custom_av <- function( mp_object , custom_av ){
## https://stackoverflow.com/questions/66726578/custom-annotation-vector-with-tsmp-r-package
mp_object$av <- custom_av
class( mp_object ) <- tsmp:::update_class( class( mp_object ) , "AnnotationVector" )
mp_adjusted_by_custom_av <- tsmp::av_apply( mp_object )
return( mp_adjusted_by_custom_av )
}
This animated GIF shows plots of short, exemplar adjusted market profile objects highlighting the London only, New York only and combined results of the relevant annotation vectors.
This is currently a work in progress and so I shall report results in due course.

Friday, 5 February 2021

A Forex Pair Snapshot Chart

After yesterday's Heatmap Plot of Forex Temporal Clustering post I thought I would consolidate all the chart types I have recently created into one easy, snapshot overview type of chart. Below is a typical example of such a chart, this being today's 10 minute EUR_USD forex pair chart up to a few hours after the London session close (the red vertical line).


The top left chart is a Market/Volume Profile Chart with added rolling Value Area upper and lower bounds (the cyan, red and white lines) and also rolling Volume Weighted Average Price with upper and lower standard deviation lines (magenta).
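
For reference, the rolling Volume Weighted Average Price and a pair of standard deviation bands can be sketched as below, assuming tp and tv are vectors of the session's 10 minute typical prices and tick volumes; the names and the use of a single standard deviation are illustrative only.

cum_vol = cumsum( tv ) ;
vwap = cumsum( tp .* tv ) ./ cum_vol ;                     ## rolling volume weighted average price
var_vw = cumsum( tv .* tp .^ 2 ) ./ cum_vol - vwap .^ 2 ;  ## running volume weighted variance
sd = sqrt( max( var_vw , 0 ) ) ;                           ## guard against tiny negative values
upper_band = vwap + sd ;
lower_band = vwap - sd ;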

The bottom left chart is the turning point heatmap chart as described in yesterday's post.

The two rightmost charts are also Market/Volume Profile charts, but of my Currency Strength Candlestick Charts based on my Currency Strength Indicator. The upper one is the base currency, i.e. EUR, and the lower is the quote currency. 

The following charts are the same day's charts for:

GBP_USD,

USD_CHF
and finally USD_JPY
The regularity of the turning points is easily seen in the lower left-hand charts although, of course, this is to be expected as they all share the USD as a common currency. However, there are also subtle differences to be seen in the "shadows" of the lighter areas.

For the immediate future my self-assigned task will be to observe the forex pairs, in real time, through the prism of the above style of chart and do some mental paper trading, and perhaps some really small size, discretionary live trading, in addition to my normal routine of research and development.