Friday, 1 November 2019

Preliminary Results from Weight Agnostic Training

Following on from my last post, below is a selection of typical output from the Bayesopt Library minimisation:
    3    3    2    2    2    8   99   22   30    1
    3    3    2    3    2   39    9   25   25    1
    2    2    3    2    2   60   43   83   54    3
    2    1    2    2    2    2    0   90   96   43
    3    2    3    2    2    2    2   43   33    1
    2    3    2    3    2    2    0   62   98   21
    2    2    2    2    2   18   43   49    2    2
    2    3    2    4    1    2    0   23    0    0
    2    2    1    2    3    2    0   24   63   65
    3    2    2    2    3    5   92   49    1    0
    2    3    2    1    1    7   84   22   17    1
    3    2    4    1    1   46    1    0   99    7
    2    2    3    2    2    2    0   74   82   50
    3    3    2    2    2   45   14   81   23    2
    2    3    3    2    2    2    0   99   79    4
    2    2    2    2    2    2    0    0   68    0
    3    3    3    2    2   67   17   37   84    1
    3    2    3    2    2   24   39   56   55    1
    3    3    4    3    2    2   30   62   67    1
    2    2    2    2    2    2    1    0    4    0
    2    2    2    2    2    2    9    8   45    1
    2    3    3    2    2   48    1   18   28    1
    2    3    3    2    2    2    0   34   42   18
    2    2    2    3    2    2    0   70   81   10
    2    2    3    3    2    2    0   85   23   11
where each row is the result of a separate run, the first five columns show the type of activation function per hidden layer and the last five columns show the number of neurons per hidden layer ( see the function code in my last post for details ).

Some quick takeaways from this are:
  1. the sigmoid activation function ( bounded 0 to 1 ) is not favoured, with either the tanh or the "Lecun" sigmoid ( see section 4.4 of this paper ) being preferred
  2. 40% of the network architectures have a single hidden layer with just 2 neurons in it
  3. 8% have only two hidden layers, with the second hidden layer having just one neuron
I would say these preliminary results suggest that a deep architecture is not necessary for the features/targets being tested, and that the standard sigmoid/logistic activation function should be avoided.
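For anyone who wants to check these figures, the percentages can be recomputed from the table above with a few lines of Octave. The matrix name "results" below is simply an assumed placeholder for the 25 rows loaded as a 25x10 matrix:

## assumes the 25 result rows above are loaded into a 25x10 matrix called "results"
## columns 1:5 = activation function per layer, columns 6:10 = neurons per layer
activations = results( : , 1 : 5 ) ;
neurons = results( : , 6 : 10 ) ;

## takeaway 1: activation choice in the first hidden layer ( which is always active )
## 1 == sigmoid, 2 == tanh, 3 == Lecun sigmoid; no run above chooses the sigmoid
first_layer_activation_counts = [ sum( activations( : , 1 ) == 1 ) , sum( activations( : , 1 ) == 2 ) , sum( activations( : , 1 ) == 3 ) ]

## takeaway 2: percentage of runs with a single hidden layer of just 2 neurons
## ( a zero in the second layer size column truncates the network to one hidden layer )
pct_single_layer_2_neurons = 100 * mean( ( neurons( : , 2 ) == 0 ) & ( neurons( : , 1 ) == 2 ) )

## takeaway 3: percentage with exactly two hidden layers, the second having 1 neuron
pct_two_layers_one_neuron = 100 * mean( ( neurons( : , 2 ) == 1 ) & ( neurons( : , 3 ) == 0 ) )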

As is my wont whilst waiting for lengthy computer tests to complete, I have also been browsing online, motivated by the early results above, and discovered Random Vector Functional Link ( RVFL ) networks, which seem to be a precursor to Extreme Learning Machines ( ELMs ). However, there appears to be some controversy about whether or not Extreme Learning Machines are just plagiarism of earlier ideas, such as RVFL networks.
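For readers unfamiliar with them, the essence of an RVFL network is a single hidden layer whose weights are randomly fixed rather than trained, direct links from the inputs to the output, and output weights obtained by a linear solve. The minimal sketch below is purely illustrative ( the function name, the ridge penalty and the omission of bias terms are my own simplifications, not taken from any particular toolbox ):

## minimal RVFL sketch: a random, untrained hidden layer plus direct input to
## output links, with only the output weights being fitted by ridge regression
function [ out_weights , predictions ] = rvfl_train( features , targets , n_hidden , lambda )
  rand( "state" , 1 ) ; ## fix the random hidden weights for repeatability
  hidden_weights = 2 .* rand( size( features , 2 ) , n_hidden ) .- 1 ; ## random, never trained
  hidden_out = tanh( features * hidden_weights ) ;
  ## direct links: concatenate the original features with the hidden layer output
  design_matrix = [ features , hidden_out ] ;
  ## ridge regression solution for the output weights only
  out_weights = ( design_matrix' * design_matrix + lambda .* eye( size( design_matrix , 2 ) ) ) \ ( design_matrix' * targets ) ;
  predictions = design_matrix * out_weights ;
endfunction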

Readers may remember that I have used ELMs before with the Bayesopt library ( see my post here ), and now that the above results point towards the use of shallow networks I intend to replicate the above exercise using RVFL networks. This will be the subject of my next post.

Wednesday, 30 October 2019

Weight Agnostic Neural Net Training

I have recently come across the idea of weight agnostic neural net training and have implemented a crude version of it, combined with the recent work I have been doing on Takens' Theorem ( see my posts here, here and here ) and the statistical mechanics approach to creating synthetic data.

Using the simple Octave function below with the Akaike Information Criterion as the minimisation objective
## Copyright (C) 2019 dekalog
## 
## This program is free software: you can redistribute it and/or modify it
## under the terms of the GNU General Public License as published by
## the Free Software Foundation, either version 3 of the License, or
## (at your option) any later version.
## 
## This program is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
## GNU General Public License for more details.
## 
## You should have received a copy of the GNU General Public License
## along with this program.  If not, see
## <https://www.gnu.org/licenses/>.

## -*- texinfo -*- 
## @deftypefn {} {@var{J} =} wann_training_of_cyclic_embedding (@var{x})
##
## @seealso{}
## @end deftypefn

## Author: dekalog 
## Created: 2019-10-26

function J = wann_training_of_cyclic_embedding( x )
global sample_features ; global sample_targets ;
epsilon = 1e-15 ; ## to ensure log() does not give out a nan

## get the parameters from input x
activation_funcs = floor( x( 1 : 5 ) ) ; ## get the activations, 1 == sigmoid, 2 == tanh, 3 == Lecun sigmoid
layer_size = floor( x( 6 : 10 ) ) ;

[ min_layer_size , ix_min ] = min( layer_size ) ;
  if( min_layer_size > 0 ) ## to be expected most of the time
    nn_depth = length( layer_size ) ;
  elseif( min_layer_size == 0 ) ## one layer has no nodes, hence limits depth of nn
    nn_depth = ix_min - 1 ;
  endif 

length_jj_loop = 25 ;
all_aic_values = zeros( length_jj_loop , 1 ) ;  
  
for jj = 1 : length_jj_loop 
  
  previous_layer_out = sample_features ;
  sum_of_k = 0 ;
    
  for ii = 1 : nn_depth
    
    new_weight_matrix = ones( size( previous_layer_out , 2 ) , layer_size( ii ) ) ./ sqrt( size( previous_layer_out , 2 ) ) ;
    sum_of_k = sum_of_k + numel( new_weight_matrix ) ;
    prior_to_activation_input = previous_layer_out * new_weight_matrix ;

    ## select the activation function 
    if( activation_funcs( ii ) == 1 ) ## sigmoid activation
      previous_layer_out = 1.0 ./ ( 1.0 .+ exp( -prior_to_activation_input ) ) ;
    elseif( activation_funcs( ii ) == 2 ) ## tanh activation
      previous_layer_out = tanh( prior_to_activation_input ) ;
    elseif( activation_funcs( ii ) == 3 ) ## lecun sigmoid activation
      previous_layer_out = sigmoid_lecun_m( prior_to_activation_input ) ;
    endif 
    
  endfor

  ## the final logistic output
  new_weight_matrix = ones( size( previous_layer_out , 2 ) , 1 ) ./ sqrt( size( previous_layer_out , 2 ) ) ;
  sum_of_k = sum_of_k + numel( new_weight_matrix ) ;
  final_output = previous_layer_out * new_weight_matrix ;
  final_output = 1.0 ./ ( 1.0 .+ exp( -final_output ) ) ;

  max_likelihood = sum( log( final_output .+ epsilon ) .* sample_targets + log( 1 .- final_output .+ epsilon ) .* ( 1 .- sample_targets ) ) ;
  
  ## get Akaike Information criteria
  all_aic_values( jj ) = 2 * sum_of_k - 2 * max_likelihood ;

endfor ## end of jj loop

J = mean( all_aic_values ) ;

endfunction
and the Octave interface of the Bayesopt Library, I am currently iterating over different architectures ( up to 5 hidden layers deep, with a max of 100 nodes per layer and a choice of 3 hidden activation functions ) for a simple logistic regression model to predict turning points in different sets of statistical mechanics synthetic data, using just features based on Takens' embedding.
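For completeness, the optimisation call itself looks something like the sketch below. This assumes the bayesoptcont entry point of the BayesOpt Matlab/Octave interface; the parameter struct fields, bounds and the feature/target variable names are illustrative placeholders and should be checked against the library documentation and adapted to your own data:

## illustrative sketch of the architecture search call ( placeholder names and
## values; check the BayesOpt Matlab/Octave interface docs for exact usage )
global sample_features ; global sample_targets ;
sample_features = features_from_takens_embedding ; ## placeholder: feature matrix created elsewhere
sample_targets = turning_point_targets ;           ## placeholder: 0/1 turning point labels

n_dims = 10 ;                                      ## 5 activation codes + 5 layer sizes
lower_bound = [ ones( 1 , 5 ) , zeros( 1 , 5 ) ] ; ## activation codes from 1, layer sizes from 0
upper_bound = [ 4 .* ones( 1 , 5 ) , 100 .* ones( 1 , 5 ) ] ; ## floored inside the objective function

params.n_iterations = 200 ;                        ## illustrative values only
params.n_init_samples = 50 ;
params.verbose_level = 1 ;

## assumed outputs: the minimising parameter vector and its objective ( AIC ) value
[ x_min , fx_min ] = bayesoptcont( "wann_training_of_cyclic_embedding" , n_dims , params , lower_bound , upper_bound ) ;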

More in due course.

Saturday, 12 October 2019

Another Method of Creating Synthetic Data

Over the years I have posted about several different methodologies for creating synthetic data and I have recently come across yet another one which readers may find useful.

One of my first posts was Creation of Synthetic Data, which essentially is a random scrambling of historic data for a single time series with an attempt to preserve some of the bar to bar dependencies based upon a bar's position in relation to upper and lower price envelopes, a la Bollinger Bands, although the code provided in this post doesn't actually use Bollinger Bands. Another post, Creating Synthetic Data Using the Fast Fourier Transform, randomly scrambles data in the frequency domain.

Rather than random scrambling of existing data, another approach is to take measurements from existing data and then use these measurements to recreate new data series with similar characteristics. My Hidden Markov Modelling of Synthetic Periodic Time Series Data post utilises this approach and can be used to superimpose known sinusoidal waveforms onto historical trends. The resultant synthetic data can be used, as I have used it, in a form of controlled experiment whereby indicators etc. can be measured against known cyclic prices.

All of the above share the fact that they are applied to univariate time series only, although I have no doubt that they could probably be extended to the multivariate case. However, the new methodology I have come across is Statistical Mechanics of Time Series and its Matlab Central File Exchange "toolbox." My use of this is to create ensembles of my Currency Strength Indicator, from which I can then produce 50/60+ separate, synthetic time series representing, for example, all the forex pairs, preserving their inter-relationships whilst only incurring the computational burden of applying the methodology to a dozen or so underlying "fundamental" time series.
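Purely by way of illustration, and assuming the strength indices are treated as log series so that a cross rate is approximately the difference between its base and quote currency strengths, the recombination step might look something like the sketch below. The variable names, the random walk stand-in for the toolbox output and the log-difference construction are all my own simplifications rather than part of the linked toolbox or my actual indicator code:

## illustrative recombination of a dozen or so simulated "fundamental" currency
## strength series into a full set of synthetic pair series, assuming log strength
## indices so that log( pair ) ~ log( base strength ) - log( quote strength )
currencies = { "EUR" , "GBP" , "USD" , "JPY" , "CHF" , "AUD" , "CAD" , "NZD" } ;
n_obs = 1000 ;
## stand-in for the ensemble output of the statistical mechanics toolbox:
## one simulated log strength series per currency ( one column each )
log_strengths = cumsum( 0.001 .* randn( n_obs , numel( currencies ) ) ) ;

synthetic_pairs = struct() ;
for base = 1 : numel( currencies )
  for quote = 1 : numel( currencies )
    if ( base != quote )
      pair_name = [ currencies{ base } , currencies{ quote } ] ;
      synthetic_pairs.( pair_name ) = exp( log_strengths( : , base ) .- log_strengths( : , quote ) ) ;
    endif
  endfor
endfor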

This statistical mechanics approach may very well become my "go to" methodology for generating unlimited amounts of training data.

Tuesday, 1 October 2019

Ideal Cyclic Tau Embedding as Time Series Features

Continuing on from my Ideal Tau for Time Series Embedding post, I have now written an Octave function based on these ideas to produce features for time series modelling. The function outputs are two slightly different versions of features, examples of which are shown in the following two plots, which show up and down trends in black, following a sinusoidal sideways market partially visible to the left:

The function outputs are the normalised price and the price delayed by a quarter and a half of the cycle period (in this case 20). The trend slopes have been chosen to exemplify the difference in the features between up and down trends. The function outputs are identical for the unseen cyclic prices to the left.

When the sets of function outputs are plotted as 3D phase space plots, typical results are
with the green, blue and red phase trajectories corresponding to the cyclic, up-trending and down-trending portions of the above synthetic price series. The markers in this plot correspond to points in the phase trajectories at which turns in the underlying price series occur. The following plot is the above plot rotated in phase space such that the green cyclic price phase trajectory is horizontally orientated.
It is clearer in this second phase space plot that there is a qualitative difference in the phase space trajectories of the different, underlying market types.

As a final check of feature relevance I used the Boruta R package, with the above turning point markers as classification targets (a turning point vs. not a turning point), to assess the utility of this approach in general. These tests were conducted on real price series and also on indices created by my currency strength methodology. In all such Boruta tests the features were deemed "relevant" up to a delay of five bars on daily data (I ceased the tests at the five bar mark because it was becoming tedious to carry on).

In summary, therefore, it can be concluded that features derived using Takens' theorem are useful on financial time series provided that:
  1. the underlying time series are normalised (detrended), and
  2. the embedding delay (Tau) is set to the theoretical optimum of a quarter (and multiples thereof) of the measured cyclic period of the time series; a minimal sketch of this feature construction follows below.
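The core of the feature construction can be sketched as follows. This is a simplified, illustrative version only: the cycle period is assumed to have already been measured (e.g. by an autocorrelation periodogram) and held constant, whereas my actual function adapts per bar, and the function name below is just a placeholder:

## sketch of the cyclic tau embedding features: rolling max-min normalised price
## plus the same series delayed by a quarter and a half of the measured cycle period
function features = cyclic_tau_embedding_features( price , period )
  lookback = period ;                               ## normalisation window of one cycle
  n = length( price ) ;
  norm_price = zeros( n , 1 ) ;
  for ii = lookback : n
    window = price( ii - lookback + 1 : ii ) ;
    norm_price( ii ) = 2 * ( price( ii ) - min( window ) ) / ( max( window ) - min( window ) + eps ) - 1 ; ## scale to [ -1 +1 ]
  endfor
  tau_1 = round( period / 4 ) ;                     ## quarter cycle delay
  tau_2 = round( period / 2 ) ;                     ## half cycle delay
  features = [ norm_price , shift( norm_price , tau_1 ) , shift( norm_price , tau_2 ) ] ;
  features( 1 : lookback + tau_2 , : ) = 0 ;        ## zero out rows without a full history
endfunction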

Monday, 16 September 2019

The Ideal Tau for Time Series Embedding?

In my Preliminary Test Results of Time Series Embedding post I got a bit ahead of myself and mistakenly quoted the ideal embedding length (Tau) to be half a period for cyclic prices. This should actually have been a quarter of a period and I have now corrected my earlier post.

This post is about my further investigations, which uncovered the above, and is written in a logical progression rather than the order in which I actually did things.

It was important to look at the Tau value for ideal sinusoidal data to confirm the quarter period result, which checked out because I was easily able to recreate the 2D Lissajous curves in the Novel Method for Topological Embedding of Time-Series Data paper. I then extended this approach to 3D to produce plots such as this one, which may require some explaining.
This is a 3D phase space plot of adaptively max-min normalised "sinusoidal prices," scaled to fall in the range [ -1 +1 ] with the normalisation adaptive to the known, underlying period of the sine wave. This was done for prices with and without a "trend." The circular orbits are for the pure sine wave, representing cyclic price action plotted in 3D as current price, price delayed by a quarter of a cycle (Tau 1) and price delayed by half a cycle (Tau 2). The mini blue and red "constellations" outside the circular orbits are the same pure sine wave(s) with an uptrend and a downtrend added. The utility of this representation for market classification etc. is obvious, but is not the focus of this post.
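The circular orbit itself is easy to reproduce; the snippet below plots the pure sine wave case for a period of 20 (the blue and red "constellations" additionally require the adaptive max-min normalisation of the trended series described above, which is omitted here for brevity):

## reproduce the circular phase space orbit for a pure sine wave of period 20,
## embedded as ( price , price delayed by a quarter cycle , price delayed by a half cycle )
period = 20 ;
t = ( 1 : 10 * period )' ;
pure_sine = sin( 2 * pi .* t ./ period ) ;
tau_1 = period / 4 ;                                ## quarter cycle delay
tau_2 = period / 2 ;                                ## half cycle delay
ix = ( tau_2 + 1 ) : length( t ) ;                  ## rows with a full delay history
plot3( pure_sine( ix ) , pure_sine( ix - tau_1 ) , pure_sine( ix - tau_2 ) , "g" ) ;
grid on ;
xlabel( "price" ) ; ylabel( "price delayed by tau 1" ) ; zlabel( "price delayed by tau 2" ) ;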

Looking at Tau values for ideal prices with various trends taken from real market data, normalised as described above, across a range of known periods (because they were synthetically generated) the following results were obtained.
This is a plot of periods 10 to 47 (left to right along the x-axis) vs the global tau value (y-axis) measured using my Octave version of the mdDelay.m function from the mdembedding GitHub repository. Each individual, coloured line plot is based on the underlying trend of one of 66 different, real time series. Each point per period per tradeable is the median value of 20 Monte Carlo runs over that tradeable's trend + synthetic, ideal sine wave price. The thick, black line is the median of the median values per period. Although noisy, it can be seen that the median line(s) straddle the theoretical, quarter cycle Tau (0.25) quite nicely. It should be remembered that this is on normalised prices. For comparison, the plot below shows the same on the unnormalised prices,
where the Tau value starts at over 0.5 at a cyclic period of 10 at the left and then slowly decreases to just over 0.1 at period 47.

More in due course.

Thursday, 5 September 2019

Preliminary Test Results of Time Series Embedding

Following on from my post yesterday, this post presents some preliminary results from the test I was running while writing yesterday's post. However, before I get to these results I would like to talk a bit about the hypothesis being tested.

I had an inkling that the dominant cycle period might have some bearing on tau, the time delay for the time series embedding implied by Takens' theorem, so I set up the test described below. After writing yesterday's post, and while the test was still running, I did some online research to see if there is any theoretical justification available.

One of the papers I found online is A Novel Method for Topological Embedding of Time-Series Data. From the abstract

"...we propose a novel method for embedding one-dimensional, periodic time-series data into higher-dimensional topological spaces to support robust recovery of signal features via topological data analysis under noisy sampling conditions...To provide evidence for the viability of this method, we analyze the simple case of sinusoidal data..."

One of the conclusions drawn is that for sinusoidal data the ideal embedding length is a quarter of the cycle period, i.e. a phase shift of π/2 radians.

Another paper I found was Topological time-series analysis with delay-variant embedding. This paper investigates "delay-variant embedding," essentially making the embedding length tau variable. From the concluding remarks

"We have demonstrated that the topological features that are constructed using delay-variant embedding can capture the topological variation in a time series when the time-delay value changes... These results indicate that the topological features that are deduced with delay-variant embedding can be used to reveal the representative features of the original time series."

My inkling was that tau should be adaptive to the measured dominant cycle, and both the above linked papers do provide some justification. Anyway, on to the test.

Reusing the code from my earlier post here, I created synthetic price series with known but variable dominant cycle periods, based on real price trends; this results in synthetic series that are highly correlated with real prices but whose underlying characteristics are exactly known. I then used my version of the mdembedding code to calculate tau for numerous Monte Carlo replications of price series based on a range of real forex pairs, gold and silver prices and different indices created by my currency strength indicator methodology. As the test statistic I used the ratio tau/average_measured_period, measured as a global value across the whole of each individual time series replication, with the period being measured by an autocorrelation periodogram function: a ratio value of 0.5, for example, means that the tau for that series is half the average period over the whole series.

Below is a plot of the histogram of these ratios:
which shows the ratios being centred approximately around 0.4 but with a long right tail. The following histogram is of the means of the bootstrap with replacement of the above distribution,
which is centred around a mean value of 0.436. Using this bootstrapped mean ratio of tau/period implies, for example, that the ideal tau for an average period of 20 is 8.72 which, when rounded up to 9, is almost twice the theoretical ideal of a 5 day tau for 20 day cyclic period prices.
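For reference, the bootstrap of the means is nothing more elaborate than the following, assuming the tau/period ratios from all the Monte Carlo runs have been collected in a vector called ratios (a placeholder name):

## bootstrap ( with replacement ) of the mean of the tau / period ratios,
## assuming the ratios from all Monte Carlo runs are in a column vector "ratios"
n_boot = 5000 ;
boot_means = zeros( n_boot , 1 ) ;
for ii = 1 : n_boot
  resample_ix = randi( length( ratios ) , length( ratios ) , 1 ) ; ## resample with replacement
  boot_means( ii ) = mean( ratios( resample_ix ) ) ;
endfor
hist( boot_means , 50 ) ;                  ## histogram of the bootstrapped means
bootstrap_mean_ratio = mean( boot_means )
implied_tau_for_period_20 = bootstrap_mean_ratio * 20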

Personally I think this is an encouraging set of preliminary test results, with some caveats. More in due course.

Wednesday, 4 September 2019

Takens' Theorem and Time Series Embedding

I am now back from my summer break and am currently looking at using Takens' theorem, with a version of this mdembedding code adapted to run smoothly in Octave. At the time of writing this post I have a Monte Carlo test running in the background on my computer, the results of which shall be the subject of my next blog post within the next few days.