Friday, 29 November 2019

RVFL Network Results

Having let the tests outlined in my previous post run, I can say that the results are inconclusive as the optimised number of neurons in the hidden layer turns out to be one, the minimum possible to even have a hidden layer. This leads me to believe that the raw features, the ideal cyclic tau embedding, are sufficient in and of themselves for the purpose I have in mind.

To that end, I have indulged in a bit of feature engineering and written the following Octave function,
## Copyright (C) 2019 dekalog
## 
## This program is free software: you can redistribute it and/or modify it
## under the terms of the GNU General Public License as published by
## the Free Software Foundation, either version 3 of the License, or
## (at your option) any later version.
## 
## This program is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
## GNU General Public License for more details.
## 
## You should have received a copy of the GNU General Public License
## along with this program.  If not, see
## <https://www.gnu.org/licenses/>.

## -*- texinfo -*- 
## @deftypefn {} {@var{emb1}, @var{emb2}, @var{prob_matrix} =} cyclic_embedding (@var{price}, @var{period})
##
## Inputs are a price vector and a period vector.
##
## The function normalises the price between -1 and +1 over an adaptive period lookback.
##
## The first two outputs are matrices of 3 columns each - the first column is normalised
## price and the second and third are delay embeddings of Tau and 2 x Tau, Tau being equal
## to one quarter of the adaptive period vector, the theoretical ideal Tau for sinusoidal
## waveforms.
##
## The first output matrix, EMB1, normalises the Tau and 2 x Tau columns according to the
## most recent max high/min low; EMB2's Tau and 2 x Tau columns are normalised according to
## the max high/min low in force at the delay embedding time.
##
## The third output, PROB_MATRIX, holds Weibull based probabilities of price being at a
## cyclic high, at a cyclic low, or at neither.
##
## @seealso{}
## @end deftypefn

## Author: dekalog 
## Created: 2019-09-18

function [ emb1 , emb2 , prob_matrix ] = cyclic_embedding( price , period )
  
price_smooth = price ;
emb1 = repmat( price , 1 , 3 ) ;
emb2 = emb1 ;
coeffs = generalised_sgolay_filter_coeffs( 5 , 2 , 0 ) ; coeffs = coeffs' ;
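## Note: generalised_sgolay_filter_coeffs( 5 , 2 , 0 ) is assumed here to return the 5 x 5
## projection matrix of a quadratic Savitzky-Golay fit over a 5 bar window, each row giving
## the fitted value at one position in the window. The centre row smooths interior points,
## the outer rows give the fitted values at the ends of the series, and after deleting the
## first two rows below only the centre and right hand edge rows are used in the main loop.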

## initialising loop
price_smooth( 1 : 3 ) = coeffs( 1 : 3 , : ) * price( 1 : 5 ) ;  
for ii = 4 : 48 
  price_smooth( ii ) = coeffs( 3 , : ) * price( ii - 2 : ii + 2 ) ;
endfor
price_smooth( 49 : 50 ) = coeffs( 4 : 5 , : ) * price( 46 : 50 ) ;
## end initialising loop 

coeffs( 1 : 2 , : ) = [] ;

for ii = 51 : size( price , 1 )

  price_smooth( ii - 2 : ii ) = coeffs * price( ii - 4 : ii ) ;
  max_r = max( price_smooth( ii - period( ii ) : ii ) ) ;
  min_r = min( price_smooth( ii - period( ii ) : ii ) ) ;
  
  ## period is exactly divisible by 4 ( and 2 ), e.g. 8 12 16 20 24 28 32 36 40 44 48 etc.
  if ( rem( period( ii ) , 4 ) == 0 ) 
    
  emb1( ii , 1 ) = 2 * ( ( price_smooth( ii ) - min_r ) / ( max_r - min_r ) - 0.5 ) ;
  emb1( ii , 2 ) = 2 * ( ( price_smooth( ii - round( period( ii ) / 4 ) ) - min_r ) / ( max_r - min_r ) - 0.5 ) ;
  emb1( ii , 3 ) = 2 * ( ( price_smooth( ii - round( period( ii ) / 2 ) ) - min_r ) / ( max_r - min_r ) - 0.5 ) ;
  emb2( ii , 1 ) = emb1( ii , 1 ) ;
  emb2( ii , 2 ) = emb2( ii - round( period( ii ) / 4 ) , 1 ) ;
  emb2( ii , 3 ) = emb2( ii - round( period( ii ) / 2 ) , 1 ) ;
  
  ## periods 10 14 18 22 26 30 34 38 42 46 50
  elseif ( rem( period( ii ) , 2 ) == 0 && rem( period( ii ) , 4 ) == 2 )

  emb1( ii , 1 ) = 2 * ( ( price_smooth( ii ) - min_r ) / ( max_r - min_r ) - 0.5 ) ;
  emb1( ii , 2 ) = 2 * ( ( ( 0.5*price_smooth( ii - round( period( ii ) / 4 ) ) + 0.5*price_smooth( ii - round( period( ii ) / 4 ) + 1 ) ) - min_r )...
                       / ( max_r - min_r ) - 0.5 ) ;
  emb1( ii , 3 ) = 2 * ( ( price_smooth( ii - round( period( ii ) / 2 ) ) - min_r ) / ( max_r - min_r ) - 0.5 ) ;
  emb2( ii , 1 ) = emb1( ii , 1 ) ;
  emb2( ii , 2 ) = 0.5*emb2( ii - round( period( ii ) / 4 ) , 1 ) + 0.5*emb2( ii - round( period( ii ) / 4 ) + 1 , 1 ) ;
  emb2( ii , 3 ) = emb2( ii - round( period( ii ) / 2 ) , 1 ) ;

  ## periods 9 13 17 21 25 29 33 37 41 45 49
  elseif ( rem( period( ii ) , 2 ) == 1 && rem( period( ii ) , 4 ) == 1 )

  emb1( ii , 1 ) = 2 * ( ( price_smooth( ii ) - min_r ) / ( max_r - min_r ) - 0.5 ) ;
  emb1( ii , 2 ) = 2 * ( ( ( 0.75*price_smooth( ii - round( period( ii ) / 4 ) ) + 0.25*price_smooth( ii - round( period( ii ) / 4 ) - 1 ) ) - min_r )...
                       / ( max_r - min_r ) - 0.5 ) ;
  emb1( ii , 3 ) = 2 * ( ( ( 0.5*price_smooth( ii - round( period( ii ) / 2 ) ) + 0.5*price_smooth( ii - round( period( ii ) / 2 ) + 1 ) ) - min_r )...
                       / ( max_r - min_r ) - 0.5 ) ;
  emb2( ii , 1 ) = emb1( ii , 1 ) ;
  emb2( ii , 2 ) = 0.75*emb2( ii - round( period( ii ) / 4 ) , 1 ) + 0.25*emb2( ii - round( period( ii ) / 4 ) - 1 , 1 ) ;
  emb2( ii , 3 ) = 0.5*emb2( ii - round( period( ii ) / 2 ) , 1 ) + 0.5*emb2( ii - round( period( ii ) / 2 ) + 1 , 1 ) ;
  
  ## periods 11 15 19 23 27 31 35 39 43 47
  elseif ( rem( period( ii ) , 2 ) == 1 && rem( period( ii ) , 4 ) == 3 )

  emb1( ii , 1 ) = 2 * ( ( price_smooth( ii ) - min_r ) / ( max_r - min_r ) - 0.5 ) ;
  emb1( ii , 2 ) = 2 * ( ( ( 0.75*price_smooth( ii - round( period( ii ) / 4 ) ) + 0.25*price_smooth( ii - round( period( ii ) / 4 ) + 1 ) ) - min_r )...
                       / ( max_r - min_r ) - 0.5 ) ;
  emb1( ii , 3 ) = 2 * ( ( ( 0.5*price_smooth( ii - round( period( ii ) / 2 ) ) + 0.5*price_smooth( ii - round( period( ii ) / 2 ) + 1 ) ) - min_r )...
                       / ( max_r - min_r ) - 0.5 ) ;
  emb2( ii , 1 ) = emb1( ii , 1 ) ;
  emb2( ii , 2 ) = 0.75*emb2( ii - round( period( ii ) / 4 ) , 1 ) + 0.25*emb2( ii - round( period( ii ) / 4 ) + 1 , 1 ) ;
  emb2( ii , 3 ) = 0.5*emb2( ii - round( period( ii ) / 2 ) , 1 ) + 0.5*emb2( ii - round( period( ii ) / 2 ) + 1 , 1 ) ;
  
  endif
  
endfor ## end of embedding features creation

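## emb2 * [ 1 0 ; -1 1 ; 0 -1 ] gives the column differences [ col1 - col2 , col2 - col3 ] and
## post-multiplying by [ 1 ; 1 ] sums them, so feature_peak is simply the normalised price minus
## its half period ( 2 x Tau ) delay: positive near cyclic highs and negative near cyclic lows.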
feature_peak = emb2 * [ 1 0 ; -1 1 ; 0 -1 ] * [ 1 ; 1 ] ;
feature_trough = zeros( size( feature_peak ) ) ;
ix = find( feature_peak < 0 ) ; 
feature_trough( ix ) = abs( feature_peak( ix ) ) ;
feature_peak( feature_peak <= 0 ) = 0 ;

## https://stats.stackexchange.com/questions/132652/how-to-determine-which-distribution-fits-my-data-best
## Weibull distribution with shape = 219.68 and scale = 1.94 for turn +/- 1 bar
## Weibull distribution with shape = 85.88 and scale = 1.84 for turn +/- 2 bar
## This comes from bayesian testing of cutoff value of feature_peak/feature_trough for highs/lows +/- 1 bar.
## The function used to get these values is "bayes_train_cyclic_turn_prob_of_embedding.m" which calls
## "bayes_optim_of_cyclic_embedding_conv_function" in /home/dekalog/Documents/octave/turning_points
scale = 1.84 ; shape = 85.88 ;
prob_matrix = zeros( size( feature_peak , 1 ) , 3 ) ;
prob_matrix( : , 1 ) = wblcdf( feature_peak , scale , shape ) ;
prob_matrix( : , 2 ) = wblcdf( feature_trough , scale , shape ) ;
prob_matrix( : , 3 ) = 1 .- sum( prob_matrix( : , 1 : 2 ) , 2 ) ;

endfunction
which again is a work in progress.

This takes as input a price vector and a period vector and outputs features as per the plots in the above linked ideal cyclic tau post, plus a matrix of probabilities for price action being at a cyclic high/low. This matrix is the result of Monte Carlo Bayesian optimisation over ideal sine wave prices to get a cutoff value for the feature derived in the above function. The probability distribution used for this probability matrix is the Weibull distribution, which was determined by following the routine outlined in this "how to determine which distribution fits my data best" forum post and using the R statistical software platform.
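As a minimal usage sketch (assuming the function above, its generalised_sgolay_filter_coeffs helper and the statistics package's wblcdf are all on the Octave path, and with illustrative variable names only), it can be called on an ideal sine wave price like this:
## minimal usage sketch of cyclic_embedding on an ideal, noise free sine wave price
pkg load statistics ;                               ## for wblcdf
N = 500 ;
period = 20 * ones( N , 1 ) ;                       ## constant, known cyclic period
price = 2 + sin( 2 * pi * ( 1 : N )' ./ period ) ;  ## sine wave shifted to be all positive
[ emb1 , emb2 , prob_matrix ] = cyclic_embedding( price , period ) ;
plot( prob_matrix( 101 : end , 1 : 2 ) ) ;          ## probabilities of cyclic high and cyclic low
legend( 'P( high )' , 'P( low )' ) ;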

The hard coded values for the cutoff in the above code are from the results of optimisation on pure sine waves. As I write this post there are optimisation routines running on sine waves with noise added at a 20 dB signal to noise ratio.
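For reference, adding noise at a given signal to noise ratio is simple: 20 dB means the noise variance is one hundredth of the signal variance. A minimal sketch, assuming a unit amplitude sine wave, is:
## add white Gaussian noise to a pure sine wave at a 20 dB signal to noise ratio
snr_db = 20 ;
clean = sin( 2 * pi * ( 1 : 500 )' ./ 20 ) ;              ## unit amplitude, period 20 sine wave
noise_var = var( clean ) / 10 ^ ( snr_db / 10 ) ;         ## noise variance = signal variance / 100
noisy = clean + sqrt( noise_var ) .* randn( size( clean ) ) ;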

More in due course.

Wednesday, 13 November 2019

Random Vector Functional Link Networks

In my last post I briefly mentioned Random Vector Functional Link networks and that this post would be about them, inspired by the fact that the preliminary results of the last post suggest a shallow rather than a deep network structure.

The idea of RVFL networks is over two decades old and has perhaps been overshadowed by the more recent Extreme Learning Machine, although, as mentioned in my last post, there is some controversy about plagiarism with regard to ELMs. An RVFL network is basically an ELM with additional direct connections from the input layer to the output layer. The connections from the input layer to the single hidden layer are randomly generated and then fixed; the hidden layer is concatenated with the original input layer to form a new layer, H; and the connections from H to the output layer are solved in one step using the Moore-Penrose inverse to get the linear least squares solution, or alternatively using regularised least squares. The advantage of this closed-form approach is the fast training time compared with more general optimisation routines such as gradient descent.
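To make the mechanics concrete, below is a bare bones sketch of RVFL training along the lines just described. It is a minimal sketch only, assuming uniform random weights, a radial basis activation and the standard ridge penalty of lambda times the identity; it is not the objective function code given later in this post.
## minimal RVFL training sketch: X is an N x Nfea feature matrix, y an N x 1 target vector
function [ Weight , Bias , beta ] = rvfl_train_sketch( X , y , n_hidden , lambda )
  [ N , Nfea ] = size( X ) ;
  Weight = rand( Nfea , n_hidden ) * 2 - 1 ;               ## random, fixed input to hidden weights
  Bias = rand( 1 , n_hidden ) ;                            ## random, fixed hidden layer biases
  H = exp( -abs( X * Weight + repmat( Bias , N , 1 ) ) ) ; ## radial basis function activation
  H = [ H , X ] ;                                          ## direct links: concatenate original inputs
  beta = ( H' * H + lambda * eye( size( H , 2 ) ) ) \ ( H' * y ) ; ## ridge regression output weights
endfunction
## new data would then be predicted as
## [ exp( -abs( Xnew * Weight + repmat( Bias , rows( Xnew ) , 1 ) ) ) , Xnew ] * beta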

The above linked paper details a series of comparative tests run on various configurations of RVFL networks across a number of different data sets. Some of the main conclusions drawn are:
  1. the direct links from input to output enhance network performance
  2. whether to include output bias or not is a data dependent tunable factor
  3. the radial basis function for the hidden units always leads to better performance
  4. regularised least squares (ridge regression) performs better than the Moore Penrose inverse
Based on the code provided by the study's authors I have written the following two objective functions for use with the BayesOpt Library.
## Copyright (C) 2019 dekalog
## 
## This program is free software: you can redistribute it and/or modify it
## under the terms of the GNU General Public License as published by
## the Free Software Foundation, either version 3 of the License, or
## (at your option) any later version.
## 
## This program is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
## GNU General Public License for more details.
## 
## You should have received a copy of the GNU General Public License
## along with this program.  If not, see
## <https://www.gnu.org/licenses/>.

## -*- texinfo -*- 
## @deftypefn {} {@var{J} =} rvfl_training_of_cyclic_embedding_with_cv (@var{x})
##
## Function for Bayesian training of RVFL networks with fixed parameters of:
##
##    direct links,
##    radial basis function activation,
##    ridge regression for regularized least squares,
##
## and optimisable parameters of:
##
##    number of neurons in hidden layer,
##    type of weight randomisation,
##    scaling mode and scale of hidden layer inputs,
##    with or without an output bias,
##    lambda for the regularised least squares regression.
##
## The input X is a vector of 6 values to be optimised by the BayesOpt library
## function 'bayesoptcont.'
## The output J is the Brier Score for the test fold cross validated data.
## @seealso{}
## @end deftypefn

## Author: dekalog 
## Created: 2019-11-04

function J = rvfl_training_of_cyclic_embedding_with_cv ( x )
global sample_features ; global sample_targets ;
epsilon = 1e-15 ; ## to ensure log() does not give out a nan

Nfea = size( sample_features , 2 ) ;

## check input x
if ( numel( x ) != 6 )
  error( 'The input vector x must be of length 6.' ) ;
endif

## get the parameters from input x
hidden_layer_size = floor( x( 1 ) ) ;    ## number of neurons in hidden layer
randomisation_type = floor( x( 2 ) ) ;   ## 1 == uniform, 2 == Gaussian
scale_mode = floor( x( 3 ) ) ;           ## 1 will scale the features for all neurons, 2 will scale the features for each hidden
##                                          neuron separately, 3 will scale the range of the randomization for uniform distribution
scale = x( 4 ) ;                         ## Linearly scale the random features before feeding into the nonlinear activation function. 
##                                          In this implementation, we consider the threshold which leads to 0.99 of the maximum/minimum 
##                                          value of the activation function as the saturating threshold.
##                                          scale = 0.9 means all the random features will be linearly scaled
##                                          into 0.9 * [ lower_saturating_threshold , upper_saturating_threshold ].
if_output_bias = floor( x( 5 ) + 0.5 ) ; ## Use output bias, or not? 1 == yes , 0 == no.
lambda = x( 6 ) ;                        ## the regularization coefficient lambda 

length_jj_loop = 25 ;
all_brier_values = zeros( length_jj_loop , 1 ) ;  

##rand( 'seed' , 0 ) ;
##randn( 'seed' , 0 ) ;
##U_sample_targets = unique( sample_targets ) ;
##nclass = numel( U_sample_targets ) ;

##sample_targets_temp = zeros( numel( sample_targets ) , nclass ) ; 
##
#### get the 0 - 1 one hot coding for the target,  
##for i = 1 : nclass
##  idx = sample_targets == U_sample_targets( i ) ;
##  sample_targets_temp( idx , i ) = 1 ;
##endfor

###### information for splitting into training and test sets ###############
ix_positive_targets = find( sample_targets == 1 ) ;
ix_negative_targets = ( 1 : numel( sample_targets ) )' ; 
ix_negative_targets( ix_positive_targets ) = [] ;
## split 20/80
split_no1 = round( 0.2 * numel( ix_positive_targets ) ) ;
split_no2 = round( 0.2 * numel( ix_negative_targets ) ) ;

######### get type of randomisation from input x #################
if ( randomisation_type == 1 ) ## uniform randomisation 
  
  if ( scale_mode == 3 ) ## range scaled for uniform randomisation
     Weight = scale * ( rand( Nfea , hidden_layer_size ) * 2 - 1 ) ; ## scaled uniform random input weights to hidden layer
     Bias = scale * rand( 1 , hidden_layer_size ) ;                  ## scaled random bias weights to hidden layer
  else
     Weight = rand( Nfea , hidden_layer_size ) * 2 - 1 ; ## unscaled random input weights to hidden layer
     Bias = rand( 1 , hidden_layer_size ) ;              ## unscaled random bias weights to hidden layer
  endif
    
elseif ( randomisation_type == 2 ) ## gaussian randomisation
  Weight = randn( Nfea , hidden_layer_size ) ; ## gaussian random input weights to hidden layer
  Bias = randn( 1 , hidden_layer_size ) ;      ## gaussian random bias weights to hidden layer
else
  error( 'only Gaussian and Uniform are supported' )
endif
############################################################################

## Activation Function    
Saturating_threshold = [ -2.1 , 2.1 ] ;
Saturating_threshold_activate = [ 0 , 1 ] ;

for jj = 1 : length_jj_loop

## shuffle
randperm1 = randperm( numel( ix_positive_targets) ) ;
randperm2 = randperm( numel( ix_negative_targets) ) ;
test_ix1 = ix_positive_targets( randperm1( 1 : split_no1 ) ) ;
test_ix2 = ix_negative_targets( randperm2( 1 : split_no2 ) ) ;
test_ix = [ test_ix1 ; test_ix2 ] ;
train_ix1 = ix_positive_targets( randperm1( split_no1 + 1 : end ) ) ;
train_ix2 = ix_negative_targets( randperm2( split_no2 + 1 : end ) ) ;
train_ix = [ train_ix1 ; train_ix2 ] ;

sample_targets_train = sample_targets( train_ix ) ;
sample_features_train = sample_features( train_ix , : ) ;

Nsample = size( sample_features_train , 1 ) ;

Bias_train = repmat( Bias , Nsample , 1 ) ;
H = sample_features_train * Weight + Bias_train ;

if ( scale_mode == 1 )
  ## scale the features for all neurons 
  [ H , k , b ] = Scale_feature( H , Saturating_threshold , scale ) ;
elseif ( scale_mode == 2 ) 
  ## else scale the features for each hidden neuron separately
  [ H , k , b ] = Scale_feature_separately( H , Saturating_threshold , scale ) ;
endif
  
## actual activation, the radial basis function
H = exp( -abs( H ) ) ;

if ( if_output_bias == 1 )
  ## we will use an output bias
  H = [ H , ones( Nsample , 1 ) ] ; 
endif

## the direct link scaling options, concatenate hidden layer and sample_features_train 
if ( scale_mode == 1 )
  ## scale the features for all neurons
  sample_features_train = sample_features_train .* k + b ;
  H = [ H , sample_features_train ] ;
elseif ( scale_mode == 2 )
  ## else scale the features for each hidden neuron separately
  [ sample_features_train , ktr , btr ] = Scale_feature_separately( sample_features_train , Saturating_threshold_activate , scale ) ;
  H = [ H , sample_features_train ] ;
else
  H = [ H , sample_features_train ] ;
endif

H( isnan( H ) ) = 0 ; ## avoids any 'blowups' due to nans in H

## do the regularized least squares for concatenated hidden layer output
## and the original, possibly scaled, input sample_features
if ( hidden_layer_size < Nsample )
 beta = ( eye( size( H , 2 ) ) / lambda + H' * H ) \ H' * sample_targets_train ;
else
 beta = H' * ( ( eye( size( H , 1 ) ) / lambda + H * H' ) \ sample_targets_train ) ; 
endif

############# now the test on test data ####################################
Bias_test = repmat( Bias , numel( sample_targets( test_ix ) ) , 1 ) ;
H_test = sample_features( test_ix , : ) * Weight + Bias_test ;

if ( scale_mode == 1 )
  ## scale the features for all neurons
  H_test = H_test .* k + b ;
elseif ( scale_mode == 2 )
  ## else scale the features for each hidden neuron separately
  nSamtest = size( H_test , 1 ) ; 
  kt = repmat( k , nSamtest , 1 ) ;
  bt = repmat( b , nSamtest , 1 ) ;
  H_test = H_test .* kt + bt ;
endif

## actual activation, the radial basis function
H_test = exp( -abs( H_test ) ) ;

if ( if_output_bias == 1 )
  ## we will use an output bias
  H_test = [ H_test , ones( numel( sample_targets( test_ix ) ) , 1 ) ] ; 
endif

## the direct link scaling options, concatenate hidden layer and sample_features_train 
if ( scale_mode == 1 )
  ## scale the features for all neurons 
  testX_temp = sample_features( test_ix , : ) .* k + b ;
  H_test = [ H_test , testX_temp ] ;
elseif ( scale_mode == 2 )
  ## else scale the features for each hidden neuron separately
  nSamtest = size( H_test , 1 ) ; 
  kt = repmat( ktr , nSamtest , 1 ) ;
  bt = repmat( btr , nSamtest , 1 ) ;
  testX_temp = sample_features( test_ix , : ) .* kt + bt ;   
  H_test = [ H_test , testX_temp ] ; 
else
  H_test = [ H_test , sample_features( test_ix , : ) ] ; 
endif

H_test( isnan( H_test ) ) = 0 ; ## avoids any 'blowups' due to nans in H_test

## get the test predicted target output
test_targets = H_test * beta ;

##Y_temp = zeros( Nsample , 1 ) ;
##% decode the target output
##for i = 1 : Nsample
##    [ maxvalue , idx ] = max( sample_targets_temp( i , : ) ) ;
##    Y_temp( i ) = U_sample_targets( idx ) ;
##endfor

############################################################################

## the final logistic output
final_output = 1.0 ./ ( 1.0 .+ exp( -test_targets ) ) ;

## get the Brier_score
## https://en.wikipedia.org/wiki/Brier_score
all_brier_values( jj ) = mean( ( final_output .- sample_targets( test_ix ) ) .^ 2 ) ;

rand( 'state' ) ; randn( 'state' ) ; ## reset rng

endfor ## end of jj loop

J = mean( all_brier_values ) ;

endfunction

## Various measures of goodness
## https://stats.stackexchange.com/questions/312780/why-is-accuracy-not-the-best-measure-for-assessing-classification-models
## https://www.fharrell.com/post/classification/
## https://stats.stackexchange.com/questions/433628/what-is-a-reliable-measure-of-accuracy-for-logistic-regression
## https://www.jstatsoft.org/article/view/v090i12
## https://stats.idre.ucla.edu/other/mult-pkg/faq/general/faq-what-are-pseudo-r-squareds/
## https://stats.stackexchange.com/questions/319666/aic-with-test-data-is-it-possible
## https://www.learningmachines101.com/lm101-076-how-to-choose-the-best-model-using-aic-or-gaic/
## https://stackoverflow.com/questions/48185090/how-to-get-the-log-likelihood-for-a-logistic-regression-model-in-sklearn
## https://stats.stackexchange.com/questions/67903/does-down-sampling-change-logistic-regression-coefficients
## https://stats.stackexchange.com/questions/163221/whats-the-measure-to-assess-the-binary-classification-accuracy-for-imbalanced-d
## https://stats.stackexchange.com/questions/168929/logistic-regression-is-predicting-all-1-and-no-0
## https://stats.stackexchange.com/questions/435307/multiple-linear-regression-lse-when-one-of-parameter-is-known

These functions stand on the shoulders of the code accompanying the above linked paper and hard code direct links, radial basis function activation and ridge regression, with the number of neurons in the hidden layer, lambda for the ridge regression, the different scaling options and the inclusion or not of an output bias as the optimisable parameters. In this first function the minimisation objective is the Brier score.
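A hedged sketch of how this objective function might be handed to the BayesOpt library is shown below. It assumes the bayesoptcont calling convention of (objective function name, number of dimensions, parameter struct, lower bounds, upper bounds) as per the BayesOpt documentation; the particular parameter struct fields and bounds shown are illustrative assumptions only.
## hedged sketch of calling the BayesOpt library on the objective function above
global sample_features sample_targets ;   ## assumed to be set up beforehand

params.n_iterations = 190 ;               ## illustrative BayesOpt settings
params.n_init_samples = 10 ;
params.verbose_level = 1 ;

##     neurons  rand_type  scale_mode  scale  out_bias  lambda
lb = [    2   ,    1     ,    1      ,  0.1 ,    0    ,  1e-6 ] ;
ub = [  100   ,    2.99  ,    3.99   ,  1.0 ,    1    ,  1e3  ] ;

[ xmin , fmin ] = bayesoptcont( 'rvfl_training_of_cyclic_embedding_with_cv' , ...
                                numel( lb ) , params , lb , ub ) ;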

This second function is slightly different in that the Akaike Information Criterion is the minimisation objective, and there is the option to use the Netlab generalised linear model function to solve for the hidden to output weights (comment out the relevant code as necessary).
## Copyright (C) 2019 dekalog
## 
## This program is free software: you can redistribute it and/or modify it
## under the terms of the GNU General Public License as published by
## the Free Software Foundation, either version 3 of the License, or
## (at your option) any later version.
## 
## This program is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
## GNU General Public License for more details.
## 
## You should have received a copy of the GNU General Public License
## along with this program.  If not, see
## <https://www.gnu.org/licenses/>.

## -*- texinfo -*- 
## @deftypefn {} {@var{J} =} rvfl_training_of_cyclic_embedding (@var{x})
##
## Function for Bayesian training of RVFL networks with fixed parameters of:
##
##    direct links,
##    radial basis function activation,
##    ridge regression for regularized least squares,
##
## and optimisable parameters of:
##
##    number of neurons in hidden layer,
##    type of weight randomisation,
##    scaling mode and scale of hidden layer inputs.
##
## The input X is a vector of 4 values to be optimised by the BayesOpt library
## function 'bayesoptcont.'
## The output J is the AIC value for the tested model.
## @seealso{}
## @end deftypefn

## Author: dekalog 
## Created: 2019-11-04

function J = rvfl_training_of_cyclic_embedding ( x )
global sample_features ; global sample_targets ;
epsilon = 1e-15 ; ## to ensure log() does not give out a nan

## check input x
if ( numel( x ) != 4 )
  error( 'The input vector x must be of length 4.' ) ;
endif

## get the parameters from input x
hidden_layer_size = floor( x( 1 ) ) ;    ## number of neurons in hidden layer
randomisation_type = floor( x( 2 ) ) ;   ## 1 == uniform, 2 == Gaussian
scale_mode = floor( x( 3 ) ) ;           ## 1 will scale the features for all neurons, 2 will scale the features for each hidden
##                                          neuron separately, 3 will scale the range of the randomization for uniform distribution
scale = x( 4 ) ;                         ## Linearly scale the random features before feeding into the nonlinear activation function. 
##                                          In this implementation, we consider the threshold which leads to 0.99 of the maximum/minimum 
##                                          value of the activation function as the saturating threshold.
##                                          scale = 0.9 means all the random features will be linearly scaled
##                                          into 0.9 * [ lower_saturating_threshold , upper_saturating_threshold ].
##if_output_bias = floor( x( 5 ) + 0.5 ) ; ## Use output bias, or not? 1 == yes , 0 == no.
##lambda = x( 6 ) ;                        ## the regularization coefficient lambda 

##length_jj_loop = 25 ;
##all_aic_values = zeros( length_jj_loop , 1 ) ;  

rand( 'seed' , 0 ) ;
randn( 'seed' , 0 ) ;
##U_sample_targets = unique( sample_targets ) ;
##nclass = numel( U_sample_targets ) ;

##sample_targets_temp = zeros( numel( sample_targets ) , nclass ) ; 
##
#### get the 0 - 1 one hot coding for the target,  
##for i = 1 : nclass
##  idx = sample_targets == U_sample_targets( i ) ;
##  sample_targets_temp( idx , i ) = 1 ;
##endfor

sample_targets_temp = sample_targets ;

[ Nsample , Nfea ] = size( sample_features ) ;

######### get type of randomisation from input x #################
if ( randomisation_type == 1 ) ## uniform randomisation 
  
  if ( scale_mode == 3 ) ## range scaled for uniform randomisation
     Weight = scale * ( rand( Nfea , hidden_layer_size ) * 2 - 1 ) ; ## scaled uniform random input weights to hidden layer
     Bias = scale * rand( 1 , hidden_layer_size ) ;                  ## scaled random bias weights to hidden layer
  else
     Weight = rand( Nfea , hidden_layer_size ) * 2 - 1 ; ## unscaled random input weights to hidden layer
     Bias = rand( 1 , hidden_layer_size ) ;              ## unscaled random bias weights to hidden layer
  endif
    
elseif ( randomisation_type == 2 ) ## gaussian randomisation
  Weight = randn( Nfea , hidden_layer_size ) ; ## gaussian random input weights to hidden layer
  Bias = randn( 1 , hidden_layer_size ) ;      ## gaussian random bias weights to hidden layer
else
  error( 'only Gaussian and Uniform are supported' )
endif
############################################################################

Bias_train = repmat( Bias , Nsample , 1 ) ;
H = sample_features * Weight + Bias_train ;

k_parameters = numel( Weight ) + numel( Bias_train ) ;

## Activation Function    
Saturating_threshold = [ -2.1 , 2.1 ] ;
Saturating_threshold_activate = [ 0 , 1 ] ;

if ( scale_mode == 1 )
  ## scale the features for all neurons 
  [ H , k , b ] = Scale_feature( H , Saturating_threshold , scale ) ;
elseif ( scale_mode == 2 ) 
  ## else scale the features for each hidden neuron separately
  [ H , k , b ] = Scale_feature_separately( H , Saturating_threshold , scale ) ;
endif
  
## actual activation, the radial basis function
H = exp( -abs( H ) ) ;

## glm training always applies a bias, so comment out if training with netlab glm
##if ( if_output_bias == 1 )
##  ## we will use an output bias
##  H = [ H , ones( Nsample , 1 ) ] ; 
##endif

## the direct link scaling options, concatenate hidden layer and sample_features 
if ( scale_mode == 1 )
  ## scale the features for all neurons
  sample_features_temp = sample_features .* k + b ;
  H = [ H , sample_features_temp ] ;
elseif ( scale_mode == 2 )
  ## else scale the features for each hidden neuron separately
  [ sample_features_temp , ktr , btr ] = Scale_feature_separately( sample_features , Saturating_threshold_activate , scale ) ;
  H = [ H , sample_features_temp ] ;
else
  H = [ H , sample_features ] ;
endif

H( isnan( H ) ) = 0 ; ## avoids any 'blowups' due to nans in H

############ THE ORIGINAL REGULARISED LEAST SQUARES CODE ###################
## do the regularized least squares for concatenated hidden layer output            
## and the original, possibly scaled, input sample_features                         
##if ( hidden_layer_size < Nsample )                                                  
## beta = ( eye( size( H , 2 ) ) / lambda + H' * H ) \ H' * sample_targets_temp ;     
##else                                                                                
## beta = H' * ( ( eye( size( H , 1 ) ) / lambda + H * H' ) \ sample_targets_temp ) ; 
##endif                                                                               
############################################################################

##k_parameters = k_parameters + numel( beta ) ;

## get the model predicted target output
##sample_targets_temp = H * beta ;

## the final logistic output
##final_output = 1.0 ./ ( 1.0 .+ exp( -sample_targets_temp ) ) ;

############ REPLACED BY GLM TRAINING USING NETLAB #########################
net = glm( size( H , 2 ) , 1 , 'logistic' ) ; ## Create a generalized linear model structure.
options = foptions ; ## Set default parameters for optimisation routines, for compatibility with MATLAB's foptions()
options( 1 ) = -1 ;   ## change default value
## OPTIONS(1) is set to 1 to display error values during training. If
## OPTIONS(1) is set to 0, then only warning messages are displayed.  If
## OPTIONS(1) is -1, then nothing is displayed.
options( 14 ) = 5 ; ## change default value
## OPTIONS(14) is the maximum number of iterations for the IRLS
## algorithm;  default 100.
net = glmtrain( net , options , H , sample_targets ) ;

k_parameters = k_parameters + net.nwts ;

## get output of trained glm model
final_output = glmfwd( net , H ) ;
############################################################################

##Y_temp = zeros( Nsample , 1 ) ;
##% decode the target output
##for i = 1 : Nsample
##    [ maxvalue , idx ] = max( sample_targets_temp( i , : ) ) ;
##    Y_temp( i ) = U_sample_targets( idx ) ;
##endfor

############################################################################

##  https://machinelearningmastery.com/logistic-regression-with-maximum-likelihood-estimation/
##
##  likelihood = yhat * y + (1 - yhat) * (1 - y)
##
##  We can update the likelihood function using the log to transform it into a log-likelihood function:
##
##      log-likelihood = log(yhat) * y + log(1 - yhat) * (1 - y)

##  Finally, we can sum the likelihood function across all examples in the dataset to maximize the likelihood:
##
##      maximize sum i to n log(yhat_i) * y_i + log(1 - yhat_i) * (1 - y_i)

log_likelihood = sum( log( final_output .+ epsilon ) .* sample_targets + log( 1 .- final_output .+ epsilon ) .* ( 1 .- sample_targets ) ) ;

## get Akaike Information criteria
J = 2 * k_parameters - 2 * log_likelihood ;

## get the Brier_score
## https://en.wikipedia.org/wiki/Brier_score
##J = mean( ( final_output .- sample_targets_temp ) .^ 2 ) ;

rand( 'state' ) ; randn( 'state' ) ; ## reset rng

endfunction

## Various measures of goodness
## https://stats.stackexchange.com/questions/312780/why-is-accuracy-not-the-best-measure-for-assessing-classification-models
## https://www.fharrell.com/post/classification/
## https://stats.stackexchange.com/questions/433628/what-is-a-reliable-measure-of-accuracy-for-logistic-regression
## https://www.jstatsoft.org/article/view/v090i12
## https://stats.idre.ucla.edu/other/mult-pkg/faq/general/faq-what-are-pseudo-r-squareds/
## https://stats.stackexchange.com/questions/319666/aic-with-test-data-is-it-possible
## https://www.learningmachines101.com/lm101-076-how-to-choose-the-best-model-using-aic-or-gaic/
## https://stackoverflow.com/questions/48185090/how-to-get-the-log-likelihood-for-a-logistic-regression-model-in-sklearn
## https://stats.stackexchange.com/questions/67903/does-down-sampling-change-logistic-regression-coefficients
## https://stats.stackexchange.com/questions/163221/whats-the-measure-to-assess-the-binary-classification-accuracy-for-imbalanced-d
## https://stats.stackexchange.com/questions/168929/logistic-regression-is-predicting-all-1-and-no-0
## https://stats.stackexchange.com/questions/435307/multiple-linear-regression-lse-when-one-of-parameter-is-known
Both of these functions are working code; they are heavily commented and perhaps not very polished.

As I write this post I have various tests running in the background and will report on the results in due course.

Friday, 1 November 2019

Preliminary Results from Weight Agnostic Training

Following on from my last post, below is a selection of the typical resultant output from the Bayesopt Library minimisation:
    3    3    2    2    2    8   99   22   30    1
    3    3    2    3    2   39    9   25   25    1
    2    2    3    2    2   60   43   83   54    3
    2    1    2    2    2    2    0   90   96   43
    3    2    3    2    2    2    2   43   33    1
    2    3    2    3    2    2    0   62   98   21
    2    2    2    2    2   18   43   49    2    2
    2    3    2    4    1    2    0   23    0    0
    2    2    1    2    3    2    0   24   63   65
    3    2    2    2    3    5   92   49    1    0
    2    3    2    1    1    7   84   22   17    1
    3    2    4    1    1   46    1    0   99    7
    2    2    3    2    2    2    0   74   82   50
    3    3    2    2    2   45   14   81   23    2
    2    3    3    2    2    2    0   99   79    4
    2    2    2    2    2    2    0    0   68    0
    3    3    3    2    2   67   17   37   84    1
    3    2    3    2    2   24   39   56   55    1
    3    3    4    3    2    2   30   62   67    1
    2    2    2    2    2    2    1    0    4    0
    2    2    2    2    2    2    9    8   45    1
    2    3    3    2    2   48    1   18   28    1
    2    3    3    2    2    2    0   34   42   18
    2    2    2    3    2    2    0   70   81   10
    2    2    3    3    2    2    0   85   23   11
where each row is a separate run's result, the first five columns show the type of activation function per layer and the last five columns show the number of neurons per layer (see the function code in my last post for details).

Some quick take aways from this are:
  1. the sigmoid activation function ( bounded 0 to 1 ) is not favoured, with either the Tanh or "Lecun" sigmoid ( see section 4.4 of this paper ) being preferred
  2. 40% of the network architectures are single hidden layer with just 2 neurons in the hidden layer 
  3. 8% have only two hidden layers with the second hidden layer having just one neuron
I would say these preliminary results suggest that a deep architecture is not necessary for the features/targets being tested and obviously the standard sigmoid/logistic function should be avoided.

As is my wont whilst waiting for lengthy computer tests to complete I have also been browsing online, motivated by the early results of the above, and discovered Random Vector Functional Link Networks, which seem to be a precursor to Extreme Learning Machines. However, there appears to be some controversy about whether or not Extreme Learning Machines are just plagiarism of earlier ideas, such as RVFL networks. 

Readers may remember that I have used ELMs before with the Bayesopt library ( see my post here ) and now that the above results point towards the use of shallow networks I intend to replicate the above, but using RVFL networks. This will be the subject of my next post.   

Wednesday, 30 October 2019

Weight Agnostic Neural Net Training

I have recently come across the idea of weight agnostic neural net training and have implemented a crude version of this, combined with the recent work I have been doing on Takens' theorem (see my posts here, here and here) and using the statistical mechanics approach to creating synthetic data.

Using the simple Octave function below with the Akaike Information Criterion as the minimisation objective
## Copyright (C) 2019 dekalog
## 
## This program is free software: you can redistribute it and/or modify it
## under the terms of the GNU General Public License as published by
## the Free Software Foundation, either version 3 of the License, or
## (at your option) any later version.
## 
## This program is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
## GNU General Public License for more details.
## 
## You should have received a copy of the GNU General Public License
## along with this program.  If not, see
## <https://www.gnu.org/licenses/>.

## -*- texinfo -*- 
## @deftypefn {} {@var{J} =} wann_training_of_cyclic_embedding (@var{x})
##
## @seealso{}
## @end deftypefn

## Author: dekalog 
## Created: 2019-10-26

function J = wann_training_of_cyclic_embedding( x )
global sample_features ; global sample_targets ;
epsilon = 1e-15 ; ## to ensure log() does not give out a nan

## get the parameters from input x
activation_funcs = floor( x( 1 : 5 ) ) ; ## get the activations, 1 == sigmoid, 2 == tanh, 3 == Lecun sigmoid
layer_size = floor( x( 6 : 10 ) ) ;

[ min_layer_size , ix_min ] = min( layer_size ) ;
  if( min_layer_size > 0 ) ## to be expected most of the time
    nn_depth = length( layer_size ) ;
  elseif( min_layer_size == 0 ) ## one layer has no nodes, hence limits depth of nn
    nn_depth = ix_min - 1 ;
  endif 

length_jj_loop = 25 ;
all_aic_values = zeros( length_jj_loop , 1 ) ;  
  
for jj = 1 : length_jj_loop 
  
  previous_layer_out = sample_features ;
  sum_of_k = 0 ;
    
  for ii = 1 : nn_depth
    
    new_weight_matrix = ones( size( previous_layer_out , 2 ) , layer_size( ii ) ) ./ sqrt( size( previous_layer_out , 2 ) ) ;
    sum_of_k = sum_of_k + numel( new_weight_matrix ) ;
    prior_to_activation_input = previous_layer_out * new_weight_matrix ;

    ## select the activation function 
    if( activation_funcs( ii ) == 1 ) ## sigmoid activation
      previous_layer_out = 1.0 ./ ( 1.0 .+ exp( -prior_to_activation_input ) ) ;
    elseif( activation_funcs( ii ) == 2 ) ## tanh activation
      previous_layer_out = tanh( prior_to_activation_input ) ;
    elseif( activation_funcs( ii ) == 3 ) ## lecun sigmoid activation
      previous_layer_out = sigmoid_lecun_m( prior_to_activation_input ) ;
    endif 
    
  endfor

  ## the final logistic output
  new_weight_matrix = ones( size( previous_layer_out , 2 ) , 1 ) ./ sqrt( size( previous_layer_out , 2 ) ) ;
  sum_of_k = sum_of_k + numel( new_weight_matrix ) ;
  final_output = previous_layer_out * new_weight_matrix ;
  final_output = 1.0 ./ ( 1.0 .+ exp( -final_output ) ) ;

  max_likelihood = sum( log( final_output .+ epsilon ) .* sample_targets + log( 1 .- final_output .+ epsilon ) .* ( 1 .- sample_targets ) ) ;
  
  ## get Akaike Information criteria
  all_aic_values( jj ) = 2 * sum_of_k - 2 * max_likelihood ;

endfor ## end of jj loop

J = mean( all_aic_values ) ;

endfunction
and the Octave interface of the BayesOpt Library, I am currently iterating over different architectures (up to 5 hidden layers deep, with a maximum of 100 nodes per layer and a choice of 3 hidden activations) for a simple logistic regression model to predict turning points in different sets of statistical mechanics synthetic data, using just features based on Takens embedding.

More in due course.

Saturday, 12 October 2019

Another Method of Creating Synthetic Data

Over the years I have posted about several different methodologies for creating synthetic data and I have recently come across yet another one which readers may find useful.

One of my first posts was Creation of Synthetic Data, which essentially is a random scrambling of historic data for a single time series with an attempt to preserve some of the bar to bar dependencies based upon a bar's position in relation to upper and lower price envelopes, a la Bollinger Bands, although the code provided in this post doesn't actually use Bollinger Bands. Another post, Creating Synthetic Data Using the Fast Fourier Transform, randomly scrambles data in the frequency domain.

Rather than random scrambling of existing data, another approach is to take measurements from existing data and then use these measurements to recreate new data series with similar characteristics. My Hidden Markov Modelling of Synthetic Periodic Time Series Data post utilises this approach and can be used to superimpose known sinusoidal waveforms onto historical trends. The resultant synthetic data can be used, as I have used it, in a form of controlled experiment whereby indicators etc. can be measured against known cyclic prices.

All of the above share the fact that they are applied to univariate time series only, although I have no doubt that they could probably be extended to the multivariate case. However, the new methodology I have come across is Statistical Mechanics of Time Series and its matlab central file exchange "toolbox." My use of this is to produce ensembles of my Currency Strength Indicator, from which I am now able to produce 50/60+ separate, synthetic time series representing, for example, all the Forex pairs, and preserving their inter-relationships whilst only suffering the computational burden of applying this methodology to a dozen or so underlying "fundamental" time series.

This may very well become my "go to" methodology for generating unlimited amounts of training data.

Tuesday, 1 October 2019

Ideal Cyclic Tau Embedding as Time Series Features

Continuing on from my Ideal Tau for Time Series Embedding post, I have now written an Octave function based on these ideas to produce features for time series modelling. The function outputs are two slightly different versions of features, examples of which are shown in the following two plots, which show up and down trends in black, following a sinusoidal sideways market partially visible to the left:
and

The function outputs are the normalised price and the price delayed by a quarter and by a half of the cycle period (in this case 20). The trend slopes have been chosen to exemplify the difference in the features between up and down trends. The function outputs are identical for the unseen cyclic prices to the left.

When the sets of function outputs are plotted as 3D phase space plots typical results are
with the green, blue and red phase trajectories corresponding to the cyclic, up trending and down trending portions of the above synthetic price series. The markers in this plot correspond to points in the phase trajectories at which turns in the underlying price series occur. The following plot is the above plot rotated in phase space such that the green cyclic price phase trajectory is horizontally orientated.
It is clearer in this second phase space plot that there is a qualitative difference in the phase space trajectories of the different, underlying market types.

As a final check of feature relevance I used the Boruta R package, with the above turning point markers as classification targets (a turning point vs. not a turning point), to assess the utility of this approach in general. These tests were conducted on real price series and also on indices created by my currency strength methodology. In all such Boruta tests the features are deemed "relevant" up to a delay of five bars on daily data (I ceased the tests at the five bar mark because it was becoming tedious to carry on).

In summary, therefore, it can be concluded that features derived using Takens' theorem are useful on financial time series provided that:
  1. the underlying time series are normalised (detrended) and,
  2. the embedding delay (Tau) is set to the theoretical optimum of a quarter (and multiples thereof) of the measured cyclic period of the time series.

Monday, 16 September 2019

The Ideal Tau for Time Series Embedding?

In my Preliminary Test Results of Time Series Embedding post I got a bit ahead of myself and mistakenly quoted the ideal embedding length (Tau) to be half a period for cyclic prices. This should actually have been a quarter of a period and I have now corrected my earlier post.

This post is about my further investigations, which uncovered the above, and is written in a logical progression rather than the order in which I actually did things.

It was important to look at the Tau value for ideal sinusoidal data to confirm the quarter period, which checked out because I was able to easily recreate the 2D Lissajous curves in the Novel Method for Topological Embedding of Time-Series Data paper. I then extended this approach to 3D to produce plots such as this one, which may require some explaining.
This is a 3D phase space plot of adaptively max min normalised "sinusoidal prices," normalised to fall in the range [ -1 +1 ] over a lookback adaptive to the known, underlying period of the sine wave. This was done for prices with and without a "trend." The circular orbits are for the pure sine wave, representing cyclic price action plotted in 3D as current price, price delayed by a quarter of a cycle (Tau 1) and by half a cycle (Tau 2). The mini, blue and red "constellations" outside the circular orbits are the same pure sine wave(s) with an uptrend and downtrend added. The utility of this representation for market classification etc. is obvious, but is not the focus of this post.
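A rough sketch of how such a phase space plot can be produced for the pure sine wave case (with a fixed, known period of 20 rather than the adaptive normalisation described above, so the prices are already in the range [ -1 +1 ]) is:
## 3D phase space plot of a pure sine wave embedded with Tau = period / 4
period = 20 ; tau = period / 4 ;
price = sin( 2 * pi * ( 1 : 10 * period )' ./ period ) ;  ## already within [ -1 , +1 ]
x = price( 2 * tau + 1 : end ) ;                          ## current value
y = price( tau + 1 : end - tau ) ;                        ## delayed by Tau
z = price( 1 : end - 2 * tau ) ;                          ## delayed by 2 x Tau
plot3( x , y , z , 'g' ) ; grid minor ;
xlabel( 'price' ) ; ylabel( 'price delayed by Tau' ) ; zlabel( 'price delayed by 2 Tau' ) ;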

Looking at Tau values for ideal prices with various trends taken from real market data, normalised as described above, across a range of known periods (because they were synthetically generated) the following results were obtained.
This is a plot of periods 10 to 47 (left to right along the x-axis) vs the global tau value (y-axis) measured using my Octave version of the mdDelay.m function from the mdembedding github. Each individual, coloured line plot is based on the underlying trend of one of 66 different, real time series. Each point per period per tradeable is the median value of 20 Monte Carlo runs over that tradeable's trend + synthetic, ideal sine wave price. The thick, black line is the median of the median values per period. Although noisy, it can be seen that the median line(s) straddle the theoretical, quarter cycle Tau (0.25) quite nicely. It should be remembered that this is on normalised prices. For comparison, the below plot shows the same on the unnormalised prices.
where the Tau value starts at over 0.5 at a cyclic period of 10 at the left and then slowly decreases to just over 0.1 at period 47.

More in due course.

Thursday, 5 September 2019

Preliminary Test Results of Time Series Embedding

Following on from my post yesterday, this post presents some preliminary results from the test I was running while writing yesterday's post. However, before I get to these results I would like to talk a bit about the hypothesis being tested.

I had an inkling that the dominant cycle period might have some bearing on tau, the time delay for the time series embedding implied by Takens' theorem, so I set up the test (described below) and, after writing the post yesterday and while the test was still running, I did some online research to see if there is any theoretical justification available.

One of the papers I found online is A Novel Method for Topological Embedding of Time-Series Data. From the abstract

"...we propose a novel method for embedding one-dimensional, periodic time-series data into higher-dimensional topological spaces to support robust recovery of signal features via topological data analysis under noisy sampling conditions...To provide evidence for the viability of this method, we analyze the simple case of sinusoidal data..."

One of the conclusions drawn is that for sinusoidal data the ideal embedding length is a quarter of the cycle period, i.e. a phase shift of π/2.

Another paper I found was Topological time-series analysis with delay-variant embedding. This paper investigates "delay-variant embedding," essentially making the embedding length tau variable. From the concluding remarks

"We have demonstrated that the topological features that are constructed using delay-variant embedding can capture the topological variation in a time series when the time-delay value changes... These results indicate that the topological features that are deduced with delay-variant embedding can be used to reveal the representative features of the original time series."

My inkling was that tau should be adaptive to the measured dominant cycle, and both the above linked papers do provide some justification. Anyway, on to the test.

Reusing the code from my earlier post here, I created synthetic price series with known but variable dominant cycle periods, based on real price trends, which results in synthetic series that are highly correlated with real price series but whose underlying characteristics are exactly known. I then used my version of the mdembedding code to calculate tau for numerous Monte Carlo replications of time series based on a range of real forex pairs, gold and silver prices and different indices created by my currency strength indicator methodology. The test statistic is the ratio tau/average_measured_period, measured as a global value across the whole of each individual time series replication, with the period being measured by an autocorrelation periodogram function: a ratio value of 0.5, for example, means that the tau for that time series is half the average period over the whole series.

Below is a plot of the histogram of these ratios:
which shows the ratios centred approximately around 0.4 but with a long right tail. The following histogram is of the bootstrapped (with replacement) means of the above distribution,
which is centred around a mean value of 0.436. Using this bootstrapped mean ratio of tau/period implies, for example, that the ideal tau for an average period of 20 is 8.72 which, when rounded up to 9, is almost twice the theoretical ideal of a 5 day tau for 20 day cyclic period prices.
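The bootstrap of the means is straightforward; a minimal sketch, assuming the measured ratios are held in a hypothetical vector called ratios, is:
## bootstrap ( with replacement ) of the mean of the tau / period ratios
n_boot = 5000 ;
n = numel( ratios ) ;
all_bootstrap_means = zeros( n_boot , 1 ) ;
for ii = 1 : n_boot
  ix = randi( n , n , 1 ) ;                         ## sample n indices with replacement
  all_bootstrap_means( ii ) = mean( ratios( ix ) ) ;
endfor
hist( all_bootstrap_means , 50 ) ;                  ## histogram of the bootstrapped means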

Personally I think this is an encouraging set of preliminary test results, with some caveats. More in due course.

Wednesday, 4 September 2019

Takens' Theorem and Time Series Embedding

I am now back from my summer break and am currently looking at using Takens' theorem, with a version of this mdembedding code adapted to run smoothly in Octave. At the moment of writing this post I have a Monte Carlo test running in the background on my computer, the results of which shall be the subject of my next blog post within the next few days.

Friday, 21 June 2019

Process Noise Covariance Matrix Q for a Kalman Filter

Since my last post I have been working on the process noise covariance matrix Q, with a view to optimising both the Q and R matrices for an Extended Kalman filter to model the cyclic component of price action as a sine wave. However, my work to date has produced unsatisfactory results and I have decided to give up trying to make it work.

The reasons for this failure are unclear to me, and I don't intend to spend any more time investigating, but some educated guesses are that the underlying sinusoidal model is mismatched and/or my estimation of the process and measurement noise variances is lacking; either way, the end result is that the EKF diverges, and my earlier leading signal 1, leading signal 2 and leading signal 3 posts outline what I think will be a more promising line of investigation in the future.

Nevertheless, below I provide the rough working code that I have been using in my EKF work, and maybe readers will find something of value in it. There is a lot of commenting and some blocked out code and I'm afraid readers will have to wade through this as best they can.
clear all ;
1 ;

pkg load signal ;

% function declarations
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function Y = ekf_sine_h( x )
% Measurement model function for the sine signal.
% Takes the state input vector x of sine, phase, angular frequency and amplitude and 
% calculates the current value of the sine given the state vector values.

f = x( 2 , : ) ;    % phase value in radians
a = x( 4 , : ) ;    % amplitude
Y = a .* sin( f ) ; % the sine value

endfunction
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [ cost ] = sine_ekf_Q_optim( Q_mult , data , targets , Q , R )
  
% initial state from noisy data input measurements
s = data( : , 1 ) ;

n = size( data , 1 ) ;

% initial state covariance 
P = eye( n ) ;

% allocate memory
xV = zeros( size( data ) ) ;  % record estimates of states

% basic Jacobian of state transition matrix 
A = [ 0 0 0 0 ;   % sine value
      0 1 1 0 ;   % phase
      0 0 1 0 ;   % angular frequency
      0 0 0 1 ] ; % amplitude
      
H = eye( n ) ; % measurement matrix

Q = Q_mult .* Q ;

  for k = 2 : size( data , 2 )

  % do ekf
  % nonlinear update x and linearisation at current state s
  x = s ;
  x( 2 ) = x( 2 ) + x( 3 ) ;        % advance phase value by angular frequency
  x( 2 ) = mod( x( 2 ) , 2 * pi ) ; % limit phase values to range 0 --- 2 * pi
  x( 1 ) = x( 4 ) * sin( x( 2 ) ) ; % sine value calculation

  % update the 1st row of the jacobian matrix at state vector s values
  A( 1 , : ) = [ 0 s( 4 ) * cos( s( 2 ) ) 0 sin( s( 2 ) ) ] ;

  P = A * P * A' + Q ; % state transition model update of covariance matrix P

  measurement_residual = [ data( 1 , k ) - x( 1 ) ;   % sine residual
                           data( 2 , k ) - x( 2 ) ;   % phase residual
                           data( 3 , k ) - x( 3 ) ;   % angular frequency residual
                           data( 4 , k ) - x( 4 ) ] ; % amplitude residual
                           
  innovation_residual_covariance = H * P * H' + R ;
  kalman_gain = P * H' / innovation_residual_covariance ;

  % update the state vector s with kalman_gain 
  s = x + kalman_gain * measurement_residual ;

  % some reality based post hoc adjustments
  s( 2 ) = abs( s( 2 ) ) ; % no negative phase values
  s( 3 ) = abs( s( 3 ) ) ; % no negative angular frequencies
  s( 4 ) = abs( s( 4 ) ) ; % no negative amplitudes

  % update the state covariance matrix P
  % NOTE
  % The Joseph formula is given by P+ = ( I − KH ) P− ( I − KH )' + KRK', where I is the identity matrix,
  % K is the gain, H is the measurement mapping matrix, R is the measurement noise covariance matrix, 
  % and P−, P+ are the pre and post measurement update estimation error covariance matrices, respectively.  
  % The optimal linear unbiased estimator (equivalently the optimal linear minimum mean square error estimator)  
  % or Kalman filter often utilizes simplified covariance update equations such as P+ = (I−KH)P− and P+ = P− −K(HP−H'+R)K'.  
  % While these alternative formulations require fewer computations than the Joseph formula, they are only valid 
  % when K is chosen as the optimal Kalman gain. In engineering applications, situations arise where the optimal 
  % Kalman gain is not utilized and the Joseph formula must be employed to update the estimation error covariance.  
  % Two examples of such a scenario are underweighting measurements and considering states. 
  % Even when the optimal gain is used, the Joseph formulation is still preferable because it possesses 
  % greater numerical accuracy than the simplified equations.
  P = ( eye( n ) - kalman_gain * H ) * P * ( eye( n ) - kalman_gain * H )' + kalman_gain * R * kalman_gain' ;
   
  xV( : , k ) = s ; % save estimated updated states
   
  endfor
  
Y = ekf_sine_h( xV ) ;
cost = mean( ( Y( 4 : end - 3 ) .- targets( 4 : end - 3 ) ).^2 ) ;
%cost = mean( ( Y .- targets ).^2 ) ;
%output = xV( 1 , : ) ;

endfunction
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function output = run_sine_ekf_Q_optim( data , Q , R )
  
% initial state from noisy data input measurements
s = data( : , 1 ) ;

n = size( data , 1 ) ;

% initial state covariance 
P = eye( n ) ;

% allocate memory
xV = zeros( size( data ) ) ;  % record estimates of states

% basic Jacobian of state transition matrix 
A = [ 0 0 0 0 ;   % sine value
      0 1 1 0 ;   % phase
      0 0 1 0 ;   % angular frequency
      0 0 0 1 ] ; % amplitude
      
H = eye( n ) ; % measurement matrix

  for k = 2 : size( data , 2 )

  % do ekf
  % nonlinear update x and linearisation at current state s
  x = s ;
  x( 2 ) = x( 2 ) + x( 3 ) ;        % advance phase value by angular frequency
  x( 2 ) = mod( x( 2 ) , 2 * pi ) ; % limit phase values to range 0 --- 2 * pi
  x( 1 ) = x( 4 ) * sin( x( 2 ) ) ; % sine value calculation

  % update the 1st row of the jacobian matrix at state vector s values
  A( 1 , : ) = [ 0 s( 4 ) * cos( s( 2 ) ) 0 sin( s( 2 ) ) ] ;

  P = A * P * A' + Q ; % state transition model update of covariance matrix P

  measurement_residual = [ data( 1 , k ) - x( 1 ) ;   % sine residual
                           data( 2 , k ) - x( 2 ) ;   % phase residual
                           data( 3 , k ) - x( 3 ) ;   % angular frequency residual
                           data( 4 , k ) - x( 4 ) ] ; % amplitude residual
                           
  innovation_residual_covariance = H * P * H' + R ;
  kalman_gain = P * H' / innovation_residual_covariance ;

  % update the state vector s with kalman_gain 
  s = x + kalman_gain * measurement_residual ;

  % some reality based post hoc adjustments
  s( 2 ) = abs( s( 2 ) ) ; % no negative phase values
  s( 3 ) = abs( s( 3 ) ) ; % no negative angular frequencies
  s( 4 ) = abs( s( 4 ) ) ; % no negative amplitudes

  % update the state covariance matrix P
  % NOTE
  % The Joseph formula is given by P+ = ( I − KH ) P− ( I − KH )' + KRK', where I is the identity matrix,
  % K is the gain, H is the measurement mapping matrix, R is the measurement noise covariance matrix, 
  % and P−, P+ are the pre and post measurement update estimation error covariance matrices, respectively.  
  % The optimal linear unbiased estimator (equivalently the optimal linear minimum mean square error estimator)  
  % or Kalman filter often utilizes simplified covariance update equations such as P+ = (I−KH)P− and P+ = P− −K(HP−H'+R)K'.  
  % While these alternative formulations require fewer computations than the Joseph formula, they are only valid 
  % when K is chosen as the optimal Kalman gain. In engineering applications, situations arise where the optimal 
  % Kalman gain is not utilized and the Joseph formula must be employed to update the estimation error covariance.  
  % Two examples of such a scenario are underweighting measurements and considering states. 
  % Even when the optimal gain is used, the Joseph formulation is still preferable because it possesses 
  % greater numerical accuracy than the simplified equations.
  P = ( eye( n ) - kalman_gain * H ) * P * ( eye( n ) - kalman_gain * H )' + kalman_gain * R * kalman_gain' ;
   
  xV( : , k ) = s ; % save estimated updated states
   
  endfor 

output = xV ;

endfunction
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% load data
cd /home/dekalog/Documents/octave/indices ;

raw_data = dlmread( 'raw_data_for_indices_and_strengths' ) ;
delete_hkd_crosses = [ 3 9 12 18 26 31 34 39 43 51 61 ] .+ 1 ; % +1 to account for date column
raw_data( : , delete_hkd_crosses ) = [] ;
raw_data( : , 1 ) = [] ; % delete date vector
% so now
%    1       2       3       4       5       6       7       8      9      10      11      12      13     14
% aud_cad aud_chf aud_jpy aud_nzd aud_sgd aud_usd cad_chf cad_jpy cad_sgd chf_jpy eur_aud eur_cad eur_chf eur_gbp
%
%   15      16      17      18      19      20      21      22      23      24      25      26      27      28
% eur_jpy eur_nzd eur_sgd eur_usd gbp_aud gbp_cad gbp_chf gbp_jpy gbp_nzd gbp_sgd gbp_usd nzd_cad nzd_chf nzd_jpy
%
%    29      30      31      32      33      34      35      36      37      38     39       40      41      42       
% nzd_sgd nzd_usd sgd_chf sgd_jpy usd_cad usd_chf usd_jpy usd_sgd xag_aud xag_cad xag_chf xag_eur xag_gbp xag_jpy
%
%    43      44     45      46      47      48      49      50      51      52     53       54      55
% xag_nzd xag_sgd xag_usd xau_aud xau_cad xau_chf xau_eur xau_gbp xau_jpy xau_nzd xau_sgd xau_usd xau_xag 

% aud_x = x(1) ; cad_x = x(2) ; chf_x = x(3) ; eur_x = x(4) ; gbp_x = x(5) ; hkd_x = x(6) ;
% jpy_x = x(7) ; nzd_x = x(8) ; sgd_x = x(9) ; usd_x = x(10) ; gold_x = x(11) ; silver_x = x(12) ;
all_g_c = dlmread( "all_g_mults_c" ) ; % the currency g mults
all_g_c( : , 7 ) = [] ; % delete hkd index
all_g_s = dlmread( "all_g_sv" ) ;      % the gold and silver g mults   
all_g_c = [ all_g_c all_g_s(:,2:3) ] ; % a combination of the above 2
all_g_c( : , 2 : end ) = cumprod( all_g_c( : , 2 : end) , 1 ) ;
all_g_c( : , 1 ) = [] ; % delete date vector
% so now index ix are
% aud_x = 56 ; cad_x = 57 ; chf_x = 58 ; eur_x = 59 ; gbp_x = 60 ; jpy_x = 61 ; 
% nzd_x = 62 ; sgd_x = 63 ; usd_x = 64 ; gold_x = 65 ; silver_x = 66 ;

all_raw_data = [ raw_data all_g_c ] ; clear -x all_raw_data ;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

cd /home/dekalog/Documents/octave/kalman/ekf ;

##  >> bs_phase_error = mean( all_bootstrap_means(:,1)) + 2 * std( all_bootstrap_means(:,1))
##  bs_phase_error =  0.50319
##  >> bs_period_ang_frequency_error = mean( all_bootstrap_means(:,2)) + 2 * std( all_bootstrap_means(:,2))
##  bs_period_ang_frequency_error =  0.17097
##  >> bs_amp_error = mean( all_bootstrap_means(:,3)) + 2 * std( all_bootstrap_means(:,3))
##  bs_amp_error =  0.18179

price_ix = 65 ;
price = all_raw_data( : , price_ix ) ; % plot(price) ;

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% get measurements of price action
period = autocorrelation_periodogram( price ) ;
measured_angular_frequency = ( 2 * pi ) ./ period ; % plot( measured_angular_frequency ) ;
trend = sma( price , period ) ;
cycle = price .- trend ;
%smooth_cycle = sgolayfilt( cycle , 2 , 7 ) ;
smooth_cycle = smooth_2_5( cycle ) ;
% plot( price,'k',trend,'r') ;
[ ~ , ~ , ~ , ~ , ~ , ~ , measured_phase ] = sinewave_indicator( cycle ) ; % figure(1) ; plot( deg2rad( measured_phase ) ) ;
% figure(2) ; plot( sin( deg2rad(measured_phase)) ) ;
measured_phase = mod( unwrap( deg2rad( measured_phase ) ) , 2  * pi ) ; % figure(1) ; hold on ; plot( measured_phase , 'r' ) ; hold off ;
% figure(2) ; hold on ; plot( sin( measured_phase)) ; hold off ;

measured_amplitude = cycle ;
for ii = 50 : length( cycle ) ;
measured_amplitude( ii ) = sqrt( 2 ) * sqrt( mean( cycle( ii - period( ii ) : ii ).^2 ) ) ; 
endfor % end ii loop
% plot(measured_amplitude) ;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% run the ekf optimisation
ekf_cycle = cycle ;

% complete values for R matrix for this data set
data = [ smooth_cycle( 101 : end ) measured_phase( 101 : end ) measured_angular_frequency( 101 : end ) measured_amplitude( 101 : end ) ]' ; 
%measured_sine = sgolayfilt( data( 1 , : ) , 2 , 7 ) ; % figure(1) ; plot( data( 1 , : ) , 'k' , measured_sine , 'r' ) ;
measured_sine = smooth_cycle( 101 : end )' ;
R( 1 , 1 ) = mean( ( data( 1 , : ) .- measured_sine ).^2 ) ; % variance of sine wave value measurements
R( 3 , 3 ) = mean( ( data( 3 , : ) .* 0.17 ).^2 ) ;          % variance of angular frequency measurements
R( 4 , 4 ) = mean( ( data( 4 , : ) .* 0.18 ).^2 ) ;          % variance of amplitude measurements

% get initial process covariance matrix Q
Q = analytical_shrinkage( data' ) ;
Q = 0.1 .* Q ;

lookback = 200 ;

%for ii = 301 : length( price )
  
% complete values for R matrix for this data set
data = [ smooth_cycle(ii-lookback:ii) measured_phase(ii-lookback:ii) measured_angular_frequency(ii-lookback:ii) measured_amplitude(ii-lookback:ii) ]' ; 
%measured_sine = sgolayfilt( data( 1 , : ) , 2 , 7 ) ; % figure(1) ; plot( data( 1 , : ) , 'k' , measured_sine , 'r' ) ;
measured_sine = cycle( ii - lookback : ii )' ;
R( 1 , 1 ) = mean( ( data( 1 , : ) .- measured_sine ).^2 ) ; % variance of sine wave value measurements
R( 3 , 3 ) = mean( ( data( 3 , : ) .* 0.17 ).^2 ) ;          % variance of angular frequency measurements
R( 4 , 4 ) = mean( ( data( 4 , : ) .* 0.18 ).^2 ) ;          % variance of amplitude measurements
  
%  Q = analytical_shrinkage( data' ) ;
   
% optimise the Q matrix for this data set
% declare optimisation function
f = @( Q_mult ) sine_ekf_Q_optim( Q_mult , data , smooth_cycle( ii - lookback : ii )' , Q , R ) ;
% Set options for fminunc
options = optimset( "MaxIter" , 50 ) ;
% initial value
Q_mult = 1 ;
Q_mult = fminunc( f , Q_mult , options ) ;
% adjust Q by optimised Q_mult
Q = Q_mult .* Q ;

  output = run_sine_ekf_Q_optim( data , Q , R ) ;
  %ekf_cycle( ii ) = ekf_sine_h( output( : , end ) ) ;
  ekf_cycle = ekf_sine_h( output ) ;
  
figure(1) ; subplot( 4 , 1 , 1 ) ; plot( data( 1 , : ) , 'k' , 'linewidth' , 2 , ekf_sine_h( output ) , 'r' , 'linewidth' , 2 ) ;
title( 'High Pass' ) ; legend( 'Measured' , 'EKF Estimated' ) ;
figure(1) ; subplot( 4 , 1 , 2 ) ; plot( data( 2 , : ) , 'k' , 'linewidth' , 2 , output( 2 , : ) , 'r' , 'linewidth' , 2 ) ;
title( 'High Pass Phase' ) ; legend( 'Measured' , 'EKF Estimated' ) ;
figure(1) ; subplot( 4 , 1 , 3 ) ; plot( data( 3 , : ) , 'k' , 'linewidth' , 2 , output( 3 , : ) , 'r' , 'linewidth' , 2 ) ;
title( 'High Pass Angular Frequency' ) ; legend( 'Measured' , 'EKF Estimated' ) ;
figure(1) ; subplot( 4 , 1 , 4 ) ; plot( data( 4 , : ) , 'k' , 'linewidth' , 2 , output( 4 , : ) , 'r' , 'linewidth' , 2 ) ;
title( 'High Pass Amplitude' ) ; legend( 'Measured' , 'EKF Estimated' ) ;
  
%endfor

figure(1) ; plot( cycle(101:end)' , 'k' , 'linewidth' , 2 , ekf_cycle , 'r' , 'linewidth' , 2 , output(1,:) , 'b' ) ;
figure(2) ; plot( price(101:end) , 'k' , 'linewidth' , 2 , trend(101:end).+ekf_cycle' , 'r' , 'linewidth' , 2 ) ;
One thing I will point out is the use of a function called analytical_shrinkage, which I have taken directly from a recent paper, Analytical Nonlinear Shrinkage of Large-Dimensional Covariance Matrices, the MATLAB code for which is provided as an appendix to the paper. Readers may find it useful well beyond the use to which I have put it here.
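For anyone wanting to try it in isolation, a minimal sketch might look like the following, assuming analytical_shrinkage.m from the paper's appendix is on the Octave path and, as in the optimisation code above, is called with a data matrix whose rows are observations and returns a covariance matrix estimate (the variable names X, sigma_sample and sigma_shrunk are mine, for illustration only).
% minimal sketch, assuming analytical_shrinkage.m from the paper's appendix
% is on the path and accepts an observations-by-variables data matrix
X = randn( 500 , 4 ) ;                       % toy data: 500 observations of 4 variables
sigma_sample = cov( X ) ;                    % ordinary sample covariance, for comparison
sigma_shrunk = analytical_shrinkage( X ) ;   % nonlinear shrinkage estimate
disp( sigma_sample ) ; disp( sigma_shrunk ) ;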

Next week I shall go on my customary summer working holiday and will be away from home until August, so there will be a hiatus in blog posts until my return.

Thursday, 6 June 2019

Determining the Noise Covariance Matrix R for a Kalman Filter

An important part of getting a Kalman filter to work well is tuning the process noise covariance matrix Q and the measurement noise covariance matrix R. This post is about obtaining the R matrix, with a post about the Q matrix to come in due course.

In my last post, about the alternative version of the extended Kalman filter used to model a sine wave, I explained that the 4 measurements are the sine wave value itself and the phase, angular frequency and amplitude of the sine wave. The phase and angular frequency are "measured" using outputs of the sinewave indicator, and the amplitude is calculated as an RMS value over a measured period. The "problem" with this is that the accuracies of such measurements on real data are unknown, and therefore so is the R matrix, because the underlying cycle properties are themselves unknown. My solution to this is offline Monte Carlo simulation.

As a first step the Octave code in the box immediately below
clear all ;
pkg load statistics ;

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% load data

raw_data = dlmread( 'raw_data_for_indices_and_strengths' ) ;
delete_hkd_crosses = [ 3 9 12 18 26 31 34 39 43 51 61 ] .+ 1 ; % +1 to account for date column
raw_data( : , delete_hkd_crosses ) = [] ;
raw_data( : , 1 ) = [] ; % delete date vector 

all_g_c = dlmread( "all_g_mults_c" ) ; % the currency g mults
all_g_c( : , 7 ) = [] ; % delete hkd index

all_g_s = dlmread( "all_g_sv" ) ;      % the gold and silver g mults   
all_g_c = [ all_g_c all_g_s(:,2:3) ] ; % a combination of the above 2
all_g_c( : , 2 : end ) = cumprod( all_g_c( : , 2 : end) , 1 ) ;
all_g_c( : , 1 ) = [] ; % delete date vector

all_raw_data = [ raw_data all_g_c ] ;

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% set up recording vectors
amplitude = zeros( size( all_raw_data , 1 ) , 1 ) ;
all_amplitudes = zeros( size( all_raw_data ) ) ;
all_periods = all_amplitudes ;

for ii = 1 : size( all_raw_data , 2 )
  
  price = all_raw_data( : , ii ) ;
  period = autocorrelation_periodogram( price ) ;
  all_periods( : , ii ) = period ;
  [ hp , ~ ] = highpass_filter_basic( price ) ; hp( 1 : 99 ) = 0 ;

  for jj = 100 : size( all_raw_data , 1 )
    amplitude( jj ) = sqrt( 2 ) * sqrt( mean( hp( round( jj - period( jj ) / 2 ) : jj , 1 ).^2 ) ) ;
  endfor
  
  % store information about amplitude changes as a multiplicative constant
  all_amplitudes( : , ii ) = amplitude ./ shift( amplitude , 1 ) ;

endfor

% truncate
all_amplitudes( 1 : 101 , : ) = [] ;
all_periods( 1 : 101 , : ) = [] ;

max_period = max( max( all_periods ) ) ;
min_period = min( min( all_periods ) ) ;

% unroll
all_amplitudes = all_amplitudes( : ) ; all_periods = all_periods( : ) ;

amplitude_changes = cell( max_period , 1 ) ;

for ii = min_period : max_period
  ix = find( all_periods == ii ) ;
  amplitude_changes{ ii , 1 } = all_amplitudes( ix , 1 ) ;
endfor

save all_amplitude_changes amplitude_changes ;
is used to get information about cycle amplitudes from real data and store it in a cell array.

This next box shows code that uses this cell array, plus the earlier hidden Markov modelling code for sine wave modelling, to produce synthetic price data
clear all ;
pkg load signal ;
pkg load statistics ;

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% load data

raw_data = dlmread( 'raw_data_for_indices_and_strengths' ) ;
delete_hkd_crosses = [ 3 9 12 18 26 31 34 39 43 51 61 ] .+ 1 ; % +1 to account for date column
raw_data( : , delete_hkd_crosses ) = [] ;
raw_data( : , 1 ) = [] ; % delete date vector
all_g_c = dlmread( "all_g_mults_c" ) ; % the currency g mults
all_g_c( : , 7 ) = [] ; % delete hkd index
all_g_s = dlmread( "all_g_sv" ) ;      % the gold and silver g mults   
all_g_c = [ all_g_c all_g_s(:,2:3) ] ; % a combination of the above 2
all_g_c( : , 2 : end ) = cumprod( all_g_c( : , 2 : end) , 1 ) ;
all_g_c( : , 1 ) = [] ; % delete date vector
all_raw_data = [ raw_data all_g_c ] ; clear -x all_raw_data ;

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

load all_hmm_periods_daily ;
load all_amplitude_changes ;

% choose an ix for all_raw_data
ix = input( "ix? " ) ;
price = all_raw_data( : , ix ) ;
period = autocorrelation_periodogram( price ) ;
[ hp , trend ] = highpass_filter_basic( price ) ;

% estimate variance of cycle noise
hp_filt = sgolayfilt( hp , 2 , 7 ) ;
noise = hp( 101 : end ) .- hp_filt( 101 : end ) ; noise_var = var( noise ) ;
% zero pad beginning of noise vector
noise = [ zeros( 100 , 1 ) ; noise ] ;

% set for plotting purposes
hp( 1 : 50 ) = 0 ; trend( 1 : 50 ) = price( 1 : 50 ) ;

% create sine wave with known phase and period/angular frequency 
[ gen_period , ~ ] = hmmgenerate( length( price ) , transprobest , outprobest ) ;
gen_period = gen_period .+ hmm_min_period_add ;
phase = cumsum( ( 2 * pi ) ./ gen_period ) ; phase = mod( phase , 2 * pi ) ;
gen_sine = sin( phase ) ;

% amplitude vector
amp = ones( size( hp ) ) ;

% RMS initial amplitude 
amp( 100 ) = sqrt( 2 ) * sqrt( mean( hp( 100 - period( 100 ) : 100 ).^2 ) ) ;

% alter sine wave with known amplitude
for ii = 101 : length( hp )
  % get multiple for this period
  random_amp_mults = amplitude_changes{ gen_period( ii ) } ;
  rand_mult = random_amp_mults( randi( size( random_amp_mults , 1 ) , 1 ) ) ;
  amp( ii ) = rand_mult * sqrt( 2 ) * sqrt( mean( hp( ii - 1 - period( ii - 1 ) : ii - 1 ).^2 ) ) ;
  gen_sine( ii ) = amp( ii ) * gen_sine( ii ) ; 
endfor

% estimate variance of generated cycle noise
gen_filt = sgolayfilt( gen_sine , 2 , 7 ) ;
gen_noise = gen_sine( 101 : end ) .- gen_filt( 101 : end ) ; gen_noise_var = var( gen_noise ) ;

% a crude adjustment of generated cycle noise to match "real" noise by adding
% noise
gen_sine = gen_sine' .+ noise ;

% set beginning of gen_sine to real cycle for plotting purposes
gen_sine( 1 : 100 ) = hp( 1 : 100 ) ;
new_price = trend .+ gen_sine ;

% a simple correlation check
correlation_cycle_gen_sine = corr( hp , gen_sine ) ;
correlation_price_gen_price = corr( price , new_price ) ;

figure(1) ; plot( gen_sine , 'k' , 'linewidth' , 2 ) ; grid minor on ; title( 'The Generated Cycle' ) ; 
figure(2) ; plot( price , 'b' , 'linewidth' , 2 , trend , 'k' , 'linewidth' , 1 , new_price , 'r' , 'linewidth', 2 ) ; 
title( 'Comparison of Real and Generated Prices' ) ; legend( 'Real Price' , 'Real Trend' , 'Generated Price' ) ; grid minor on ;
figure(3) ; plot( hp , 'k' , 'linewidth' , 2 , gen_sine , 'r' , 'linewidth' , 2 ) ; title( 'Generated Cycle compared with Real Cycle' ) ; 
legend( 'Real Cycle' , 'Generated Cycle' ) ; grid minor on ;
Some typical chart output of this code shows the synthetic price series in red compared with the real price series in blue, together with the underlying cycles.
For this particular run the correlation between the synthetic price series and the real price series is 0.99331, whilst the correlation between the corresponding cycles is only 0.19988, over 1500 data points (not all of which are shown in the charts).

Because the synthetic cycle is, by construction, definitely a sine wave with a known phase, angular frequency and amplitude, I can run the above sinewave_indicator and RMS measurements on the synthetic prices, compare them with the known synthetic cycle, and thus estimate the measurement noise covariance matrix R.
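To make that final step concrete, the sketch below shows the principle only: meas_sine, meas_phase, meas_ang_freq and meas_amp stand for the indicator measurements taken on the synthetic prices, and true_sine, true_phase, true_ang_freq and true_amp for the known synthetic values they are compared against. These are placeholder names of mine, assumed to be equal-length vectors, and not variables from the code boxes above.
% minimal sketch of the principle: each diagonal entry of R is the variance
% of the measurement error, i.e. var( measurement - known synthetic truth )
% ( meas_* and true_* are assumed, equal-length placeholder vectors )
R = zeros( 4 , 4 ) ;
R( 1 , 1 ) = var( meas_sine - true_sine ) ;         % sine value measurement noise
R( 2 , 2 ) = var( meas_phase - true_phase ) ;       % phase measurement noise
R( 3 , 3 ) = var( meas_ang_freq - true_ang_freq ) ; % angular frequency measurement noise
R( 4 , 4 ) = var( meas_amp - true_amp ) ;           % amplitude measurement noise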

More in due course. 

Friday, 31 May 2019

Extended Kalman Filter, Alternative Version

Below is alternative code for an Extended Kalman filter for a sine wave, which has 4 states: the sine wave value, the phase, the angular frequency and the amplitude, together with measurements of each. I have found it necessary to implement this version because I couldn't adjust the code of my earlier version to accept and measure the additional states without the Cholesky decomposition function chol() exiting and giving errors about the input not being a positive definite matrix.
clear all ;
1 ;

function Y = ekf_sine_h( x )
% Measurement model function for the sine signal.
% Takes the state input vector x of sine, phase, angular frequency and amplitude and 
% calculates the current value of the sine given the state vector values.

f = x( 2 , : ) ;    % phase value in radians
a = x( 4 , : ) ;    % amplitude
Y = a .* sin( f ) ; % the sine value

endfunction

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
pkg load statistics ;
load all_snr ;
load all_hmm_periods_daily ;

N = 250 ; % total dynamic steps, i.e. length of data to generate
[ gen_period , gen_states ] = hmmgenerate( N , transprobest , outprobest ) ;
gen_period = gen_period .+ hmm_min_period_add ;
angular_freq_rads = ( 2 * pi ) ./ gen_period ;
real_phase = cumsum( angular_freq_rads ) ; real_phase = mod( real_phase , 2 * pi ) ;
gen_sine = sin( real_phase ) ;

noise_val = mean( [ all_snr(:,1) ; all_snr(:,2) ] ) ;
my_sine_noisy = awgn( gen_sine , noise_val ) ;

[ ~ , ~ , ~ , ~ , ~ , ~ , phase ] = sinewave_indicator( my_sine_noisy ) ;
phase = deg2rad( phase' ) ;
period = autocorrelation_periodogram( my_sine_noisy ) ;
measured_angular_frequency = ( 2 * pi ) ./ period' ;
measured_angular_frequency( 1 : 50 ) = 2 * pi / 20 ;

% loop for measured amplitude values
measured_amplitude = ones( 1 , N ) ;
for ii = 50 : N
  measured_amplitude( ii ) = sqrt( 2 ) * sqrt( mean( my_sine_noisy( ii - period(ii) : ii ).^2 ) ) ;  
endfor

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
h = @ekf_sine_h ; 

n = 4 ; % number of states - sine value, phase, angular frequency and amplitude

% matrix for covariance of process
Q = eye( n ) ; 
Q( 1 , 1 ) = var( my_sine_noisy ) ;
Q( 2 , 2 ) = var( phase( 50 : end ) ) ;
Q( 3 , 3 ) = var( measured_angular_frequency( 50 : end ) ) ;
Q( 4 , 4 ) = var( measured_amplitude( 50 : end ) ) ;
% scale the covariance matrix, a tunable factor?
Q = 0.001 .* Q ; 

% measurement noise variance matrix
R = zeros( 4 , 4 ) ; 
R( 1 , 1 ) = var( gen_sine .- my_sine_noisy ) ;                                               % the sine values
R( 2 , 2 ) = var( real_phase( 50 : end ) .- phase( 50 : end ) ) ;                             % the phase values
R( 3 , 3 ) = var( angular_freq_rads( 50 : end ) .- measured_angular_frequency( 50 : end ) ) ; % angular frequency values
R( 4 , 4 ) = var( measured_amplitude( 50 : end ) .- 1 ) ;                                     % amplitude values

% initial state from noisy measurements
s = [ my_sine_noisy( 50 ) ;              % sine
      phase( 50 ) ;                      % phase ( Rads )
      measured_angular_frequency( 50 ) ; % angular_freq_rads
      measured_amplitude( 50 ) ] ;       % amplitude               
 
% initial state covariance 
P = eye( n ) ; 
 
% allocate memory
xV = zeros( n , N ) ;  % record estimates of states        
pV1 = zeros( n , N ) ; % projection measurements
pV2 = zeros( n , N ) ; % projection measurements
pV3 = zeros( n , N ) ; % projection measurements

% basic Jacobian of state transition matrix 
A = [ 0 0 0 0 ;   % sine value
      0 1 1 0 ;   % phase
      0 0 1 0 ;   % angular frequency
      0 0 0 1 ] ; % amplitude
      
H = eye( n ) ; % measurement matrix
      
for k = 51 : N

% do ekf
% nonlinear update x and linearisation at current state s
x = s ;
x( 2 ) = x( 2 ) + x( 3 ) ;        % advance phase value by angular frequency
x( 2 ) = mod( x( 2 ) , 2 * pi ) ; % limit phase values to range 0 --- 2 * pi
x( 1 ) = x( 4 ) * sin( x( 2 ) ) ; % sine value calculation

% update the 1st row of the jacobian matrix at state vector s values
A( 1 , : ) = [ 0 s( 4 ) * cos( s( 2 ) ) 0 sin( s( 2 ) ) ] ;

P = A * P * A' + Q ; % state transition model update of covariance matrix P

measurement_residual = [ my_sine_noisy( k ) - x( 1 ) ;              % sine residual
                         phase( k ) - x( 2 ) ;                      % phase residual
                         measured_angular_frequency( k ) - x( 3 ) ; % angular frequency residual
                         measured_amplitude( k ) - x( 4 ) ] ;       % amplitude residual
                         
innovation_residual_covariance = H * P * H' + R ;
kalman_gain = P * H' / innovation_residual_covariance ;

% update the state vector s with kalman_gain 
s = x + kalman_gain * measurement_residual ;

% some reality based post hoc adjustments
s( 2 ) = abs( s( 2 ) ) ; % no negative phase values
s( 3 ) = abs( s( 3 ) ) ; % no negative angular frequencies
s( 4 ) = abs( s( 4 ) ) ; % no negative amplitudes

% update the state covariance matrix P
% NOTE
% The Joseph formula is given by P+ = ( I − KH ) P− ( I − KH )' + KRK', where I is the identity matrix,
% K is the gain, H is the measurement mapping matrix, R is the measurement noise covariance matrix, 
% and P−, P+ are the pre and post measurement update estimation error covariance matrices, respectively.  
% The optimal linear unbiased estimator (equivalently the optimal linear minimum mean square error estimator)  
% or Kalman filter often utilizes simplified covariance update equations such as P+ = (I−KH)P− and P+ = P− −K(HP−H'+R)K'.  
% While these alternative formulations require fewer computations than the Joseph formula, they are only valid 
% when K is chosen as the optimal Kalman gain. In engineering applications, situations arise where the optimal 
% Kalman gain is not utilized and the Joseph formula must be employed to update the estimation error covariance.  
% Two examples of such a scenario are underweighting measurements and considering states. 
% Even when the optimal gain is used, the Joseph formulation is still preferable because it possesses 
% greater numerical accuracy than the simplified equations.
P = ( eye( n ) - kalman_gain * H ) * P * ( eye( n ) - kalman_gain * H )' + kalman_gain * R * kalman_gain' ;
 
xV( : , k ) = s ;                                                 % save estimated updated states
pV1( : , k ) = s ; pV1( 2 , k ) = pV1( 2 , k ) + pV1( 3 , k ) ;   % for plotting projections
pV2( : , k ) = s ; pV2( 2 , k ) = pV2( 2 , k ) + 2*pV2( 3 , k ) ; % for plotting projections
pV3( : , k ) = s ; pV3( 2 , k ) = pV3( 2 , k ) + 3*pV3( 3 , k ) ; % for plotting projections
 
endfor

% Plotting
figure(1) ; subplot(3,1,1) ; plot( real_phase , 'k', 'linewidth' , 2 , phase , 'b' , 'linewidth' , 2 , xV( 2 , : ) , 'r' , 'linewidth' , 2 ) ; grid minor on ;
title( 'Phase State' ) ; legend( 'Actual Phase State' , 'Measured Phase State' , 'Estimated Phase State' ) ; ylabel( 'Radians' ) ;

figure(1) ; subplot(3,1,2) ; plot(angular_freq_rads, 'k', 'linewidth' , 2 , measured_angular_frequency , 'b' , 'linewidth' , 2 , xV( 3 , : ) , 'r' , 'linewidth' , 2 ) ; 
grid minor on ; 
title( 'Angular Frequency State in Radians' ) ; legend( 'Actual Angular Frequency' , 'Measured Angular Frequency' , 'Estimated Angular Frequency' ) ;

figure(1) ; subplot(3,1,3) ; plot( ones(1,N) , 'k', 'linewidth' , 1 , measured_amplitude , 'b' , 'linewidth' , 2 , xV( 4 , : ) , 'r' , 'linewidth' , 2 ) ; grid minor on ;
title( 'Amplitude State' ) ; legend( 'Actual Amplitude' , 'Measured Amplitude' , 'Estimated Amplitude' ) ;

sine_est = h( xV ) ;
sine_1 = h( pV1 ) ; sine_1 = shift( sine_1 , 1 ) ;
sine_2 = h( pV2 ) ; sine_2 = shift( sine_2 , 2 ) ;
sine_3 = h( pV3 ) ; sine_3 = shift( sine_3 , 3 ) ;

figure(2) ; plot( gen_sine , 'k' , 'linewidth' , 2 , my_sine_noisy , '*b' , 'linewidth' , 2 , sine_est , 'r' , 'linewidth' , 2 ) ; grid minor on ;
title( 'Underlying and its Estimate' ) ; legend( 'Real Underlying Sine' , 'Noisy Measured Sine' , 'EKF Estimate of Real Underlying Sine' ) ;

figure(3) ; plot( gen_sine , 'k' , 'linewidth' , 2 , my_sine_noisy , '*b' , 'linewidth' , 2 , sine_1 , 'r' , 'linewidth' , 2 , sine_2 , 'g' , 'linewidth' , 2 , ...
sine_3 , 'b' , 'linewidth' , 2 ) ; grid minor on ;
title( 'Plot of the actual sine wave vs projected estimated sines' ) ; 
legend( 'Actual' , 'Noisy Measured Sine' , 'EKF Projected 1' , 'EKF Projected 2' , 'EKF Projected 3' ) ;
The measurement for the phase is an output of the sinewave indicator function, the period is an output of the autocorrelation periodogram function, and the amplitude is the square root of two multiplied by the root mean square (RMS) of the sine wave over the period measured by the autocorrelation periodogram.
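As a quick illustration of why the square root of two appears, the RMS of a pure sine of amplitude A over a whole number of cycles is A divided by the square root of two, so multiplying the measured RMS by sqrt( 2 ) recovers the amplitude. A throwaway check of this, my own and purely for illustration, is:
% illustrative check: RMS of a pure sine of amplitude A is A / sqrt( 2 ),
% so sqrt( 2 ) * RMS recovers the amplitude over a whole number of cycles
A = 1.7 ;                              % arbitrary known amplitude
t = ( 0 : 99 )' ;
x = A .* sin( 2 * pi .* t ./ 20 ) ;    % 5 complete cycles of period 20
rms_x = sqrt( mean( x.^2 ) ) ;         % root mean square
amp_est = sqrt( 2 ) * rms_x ;          % should be very close to A = 1.7
disp( [ A amp_est ] ) ;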

As usual the code is liberally commented. More soon.