Thursday, 23 April 2020

Visualising Oanda's Orderbook

My earlier post of 26th March shows code to visualise the most recent instantaneous snapshot of Oanda's order book, realised as a horizontal bar chart superimposed over a price chart. Below is a screenshot of a different type of chart, designed to show the historical order book, similar in spirit to the proprietary Bookmap software. The background of the chart is a heatmap of the 20 order book levels above and below the order book price, with the lighter colours representing a higher percentage of the total order book volume. The spheres are sized proportionally to the tick volume of the relevant OHLC bar, and the blue and red lines represent the high and low of the bar respectively. The lighter colours nicely highlight the areas where orders have accumulated for potential support and resistance.

However, the above screenshot is actually a two dimensional view, from above, of a three dimensional surface plot, as can be seen in the short video below.

The first 25 seconds or so show panning along the two dimensional view, whilst the remainder of the video shows panning in 3 dimensions. I have whimsically named this a "Snake and Canyon" plot as it resembles watching a river flowing/snaking along a canyon/valley floor and bouncing off the cliffs/hills that are the support and resistance zones. The higher the peaks, the higher the percentage of resting orders, and thus a greater number of incoming market orders is needed to break out of the canyon/valley.

I created this so that I can visually look for patterns in the historical data, because my statistical search for useful features, using the Boruta package, has not yet shown any useful results. The Octave code to produce a "Snake and Canyon" plot is given below. Of course, use of this code presupposes that you have the data available for loading from csv files. Enjoy!
clear all ;
cd /home/dekalog/Documents/octave/oanda_data/20m ;

orderbook_snapshots_files = glob( '*_historical_orderbook_snapshots' ) ;
## {
##   [1,1] = aud_jpy_historical_orderbook_snapshots
##   [2,1] = aud_usd_historical_orderbook_snapshots
##   [3,1] = eur_aud_historical_orderbook_snapshots
##   [4,1] = eur_chf_historical_orderbook_snapshots
##   [5,1] = eur_gbp_historical_orderbook_snapshots
##   [6,1] = eur_jpy_historical_orderbook_snapshots
##   [7,1] = eur_usd_historical_orderbook_snapshots
##   [8,1] = gbp_chf_historical_orderbook_snapshots
##   [9,1] = gbp_jpy_historical_orderbook_snapshots
##   [10,1] = gbp_usd_historical_orderbook_snapshots
##   [11,1] = nzd_usd_historical_orderbook_snapshots
##   [12,1] = usd_cad_historical_orderbook_snapshots
##   [13,1] = usd_chf_historical_orderbook_snapshots
##   [14,1] = usd_jpy_historical_orderbook_snapshots
##   [15,1] = xag_usd_historical_orderbook_snapshots
##   [16,1] = xau_usd_historical_orderbook_snapshots
## }

file_no = input( 'Enter file no. from list, a number 1 to 16 inclusive. ' ) ;
filename = orderbook_snapshots_files{ file_no } ;
str_split = strsplit( filename , "_" ) ;
price_name = strjoin( str_split( 1 : 2 ) , "_" ) ;
ohlc_20m = dlmread( [ price_name , '_ohlc_20m' ] ) ; ## price data
orders = dlmread( [ price_name , '_historical_orderbook_snapshots' ] ) ; ## get latest orderbook levels

if ( file_no == 1 || file_no == 6 || file_no == 9 || file_no == 14 )
 bucket_size = 0.05 ;
elseif ( file_no == 16 )
 bucket_size = 0.5 ;
else
 bucket_size = 0.0005 ;
endif

order_price = orders( : , 6 ) ;
price_length = 250 ;
plot_price = order_price( end - price_length : end ) ;
rounded_price = round( plot_price ./ bucket_size ) .* bucket_size ;
x_length = ( 1 : 1 : ( price_length + 1 ) )' ;

y_max = max( rounded_price .+ ( 20 * bucket_size ) ) ;
y_min = min( rounded_price .- ( 20 * bucket_size ) ) ;
y = ( y_min : bucket_size : y_max )' ;

z = zeros( size( y , 1 ) , size( x_length , 1 ) ) ; ## heatmap matrix: price buckets x time
## combined long + short order percentages for the 41 levels of each snapshot
zz = orders( end - price_length : end , 7 : 47 ) .+ orders( end - price_length : end , 48 : 88 ) ;

for ii = 1 : size( z , 2 )
 ## find the bucket in the global price grid nearest to this bar's orderbook price
 [ ~ , ix ] = min( abs( y .- plot_price( ii ) ) ) ;
 z( ix - 20 : ix + 20 , ii ) = zz( ii , : ) ; ## centre the 41 levels on that bucket
endfor

z_high = max( max( z ) ) ;
mid_y = ( ohlc_20m( end - price_length : end , 19 ) .+ ohlc_20m( end - price_length : end , 20 ) ) ./ 2 ;
vol = ohlc_20m( end - price_length : end , 22 ) ; vol = vol ./ max( vol ) ;

up_scatter_ix = find( ohlc_20m( end - price_length : end , 21 ) >= ohlc_20m( end - price_length : end , 18 ) ) ;
down_scatter_ix = find( ohlc_20m( end - price_length : end , 21 ) < ohlc_20m( end - price_length : end , 18 ) ) ;

figure(1) ;
colormap( 'cubehelix' ) ; surf( x_length , y , z , 'edgecolor' , 'none' , 'facecolor' , 'interp' , 'facealpha' , 0.75 ) ; view( 2 ) ; 
axis( [ 1 price_length y_min y_max 0 z_high ] ) ; grid off ;
hold on ;
scatter3( up_scatter_ix , mid_y( up_scatter_ix ) , ones( size( up_scatter_ix ) ) .* z_high ./ 2 , 1000.*vol( up_scatter_ix ) , 'c' , 'o' , 'filled' ) ;
scatter3( down_scatter_ix , mid_y( down_scatter_ix ) , ones( size( down_scatter_ix ) ) .* z_high ./ 2 , 1000.*vol( down_scatter_ix ) , 'm' , 'o' , 'filled' ) ;
line( 'xdata' , x_length , 'ydata' , ohlc_20m( end - price_length : end , 19 ) , 'zdata' , ones(size(x_length)).*z_high ./ 2 , 'color','b' , 'linewidth' , 5 ) ;
line( 'xdata',x_length ,'ydata', ohlc_20m( end - price_length : end , 20 ) ,'zdata', ones(size(x_length)).*z_high ./ 2 , 'color','r' , 'linewidth' , 5 ) ;
xlabel( 'Time' ) ; ylabel( 'Levels - Price' ) ; zlabel( 'OrderBook Percent' ) ; title( strjoin( str_split( 1 : 2 ) , "-" ) ) ;
hold off ;

Wednesday, 15 April 2020

Generic Octave_Oanda_API Function

My last two posts have shown Octave functions that use the Oanda API to access and download data. In the first of these posts I said that I would post more code for further functions as and when I write them. However, on further reflection this would be unnecessary as the generic form of any such function is:

1) create the required headers
## set up the headers
query = [ 'curl -s --compressed -H "Content-Type: application/json"' ] ; ## -s is silent mode for Curl for no paging to terminal
query = [ query , ' -H "Authorization: Bearer XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"' ] ;
2) create the required API call
## construct the actual API call for instrument, year, month, day, hour and min
query = [ query , ' "https://api-fxtrade.oanda.com/v3/instruments/' , toupper( cross ) , "/orderBook?time=" , num2str( year ) , '-' , ...
num2str( month , "%02d" ) , '-' , num2str( day , "%02d" ) , 'T' , num2str( hour , "%02d" ) , '%3A' , num2str( min , "%02d" ) , '%3A00.000Z"' ] ;
3) extract the data from the returned JSON object
## call to use external Unix systems/Curl and return result
[ ~ , ret_JSON ] = system( query , RETURN_OUTPUT = 'TRUE' ) ;

## convert the returned JSON object to Octave structure
ret_JSON = load_json( ret_JSON ) ;
Provided the required libraries are installed on your system (see the first post), the above is all that is needed for any API call: just change the details in the second code box. In the example above the code downloads the historical order book for a given forex currency cross/tradable at a specific date/time, given as function input variables.
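
On the assumption that the load_json wrapper from octave-rapidjson is on the load path, a minimal generic sketch of these three steps might look like the following, where the function name and example endpoint are mine and the token is a placeholder:

## a generic sketch of the above three step pattern; 'endpoint' is any valid
## v3 endpoint string, e.g. 'instruments/EUR_USD/orderBook'
function S = generic_oanda_call ( endpoint )

## 1) set up the headers
query = [ 'curl -s --compressed -H "Content-Type: application/json"' ] ;
query = [ query , ' -H "Authorization: Bearer XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"' ] ;

## 2) construct the actual API call
query = [ query , ' "https://api-fxtrade.oanda.com/v3/' , endpoint , '"' ] ;

## 3) call the external Curl via system() and convert the returned JSON object
[ ~ , ret_JSON ] = system( query , RETURN_OUTPUT = 'TRUE' ) ;
S = load_json( ret_JSON ) ;

endfunction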

I have found that the main difficulty lies in parsing the returned data structure, which can consist of nested Structures/Cell-Arrays. Below is code, liberally commented, that shows how to parse the above order book API call and add the 20 order book levels above/below the order book price to historical order book files already on disc.
clear all ;
cd /home/dekalog/Documents/octave/oanda_data/20m ;

orderbooks = glob( '*_historical_orderbook_snapshots' ) ;
## {
##   [1,1] = aud_jpy_historical_orderbook_snapshots
##   [2,1] = aud_usd_historical_orderbook_snapshots
##   [3,1] = eur_aud_historical_orderbook_snapshots
##   [4,1] = eur_chf_historical_orderbook_snapshots
##   [5,1] = eur_gbp_historical_orderbook_snapshots
##   [6,1] = eur_jpy_historical_orderbook_snapshots
##   [7,1] = eur_usd_historical_orderbook_snapshots
##   [8,1] = gbp_chf_historical_orderbook_snapshots
##   [9,1] = gbp_jpy_historical_orderbook_snapshots
##   [10,1] = gbp_usd_historical_orderbook_snapshots
##   [11,1] = nzd_usd_historical_orderbook_snapshots
##   [12,1] = usd_cad_historical_orderbook_snapshots
##   [13,1] = usd_chf_historical_orderbook_snapshots
##   [14,1] = usd_jpy_historical_orderbook_snapshots
##   [15,1] = xag_usd_historical_orderbook_snapshots
##   [16,1] = xau_usd_historical_orderbook_snapshots
## }
data_20m = glob( '*_ohlc_20m' ) ;

for ii = 1 : 1 ## begin the instrument loop

str_split = strsplit( data_20m{ ii } , "_" ) ;
cross = strjoin( str_split( 1 : 2 ) , "_" ) ; ## get the tradable, e.g. 'eur_usd'

## create a unix system command to read the last row of 20 min ohlc of filename
unix_command = [ "tail -1" , " " , data_20m{ ii } ] ; 
[ ~ , data ] = system( unix_command ) ;
data = strsplit( data , { "," , "\n" } ) ; ## gives a cell array
## convert last date/time to numeric format
data_20m_last = [ str2num(data{1}) , str2num(data{2}) , str2num(data{3}) , str2num(data{4}) ,str2num(data{5}) ] ;

## create a unix system command to read the last row of historical_orderbook_snapshots of filename
unix_command = [ "tail -1" , " " , orderbooks{ ii } ] ; 
[ ~ , data ] = system( unix_command ) ;
data = strsplit( data , { "," , "\n" } ) ; ## gives a cell array
## convert last date/time to numeric format
data_ordbk_last = [ str2num(data{1}) , str2num(data{2}) , str2num(data{3}) , str2num(data{4}) ,str2num(data{5}) ] ;

time_diff = datenum( data_20m_last ) - datenum( data_ordbk_last ) ;
## only run following code if there is more orderbook data to download to match 20min data already on file
 if ( time_diff > 0 )

 no_rows_diff = ceil( time_diff * 72 ) ; ## there are 72 x 20 minute bars per day
 ## create a unix system command to read the last no_rows_diff of 20 min ohlc of filename
 unix_command = [ "tail -" , num2str( no_rows_diff ) , " " , data_20m{ ii } ] ; 
 [ ~ , data ] = system( unix_command ) ;
 data = strsplit( data , { "," , "\n" } ) ; ## gives a cell array
 data = data( 1 : size( data , 2 ) - 1 ) ; ## get rid of last empty cell
 data = reshape( data , 22 , no_rows_diff )' ;
 
  for jj = 1 : no_rows_diff
   if ( str2double(data{jj,1})==data_ordbk_last(1) && str2double(data{jj,2})==data_ordbk_last(2) && ...
        str2double(data{jj,3})==data_ordbk_last(3) && str2double(data{jj,4})==data_ordbk_last(4) && ...
        str2double(data{jj,5})==data_ordbk_last(5) )
    begin_ix = jj + 1 ;
    break ;
   endif
  endfor ## end of jj = 1 : no_rows_diff loop

 new_orderbook_data = zeros( no_rows_diff - ( begin_ix - 1 ) , 88 ) ;

 kk = 0 ; ## initialise kk counter to loop over new_orderbook_data rows
  for jj = begin_ix : no_rows_diff ## loop over structure S from begin_ix to no_rows_diff
   kk = kk + 1 ; ## increment counter
   
   ## write dates and times to new_orderbook_data
   new_orderbook_data( kk , 1 ) = str2double( data{ jj , 1 } ) ;
   new_orderbook_data( kk , 2 ) = str2double( data{ jj , 2 } ) ;
   new_orderbook_data( kk , 3 ) = str2double( data{ jj , 3 } ) ;
   new_orderbook_data( kk , 4 ) = str2double( data{ jj , 4 } ) ;
   new_orderbook_data( kk , 5 ) = str2double( data{ jj , 5 } ) ;
   
   ## download the orderbook structure, S
   S = get_historical_orderbook( cross , new_orderbook_data( kk , 1 ) , new_orderbook_data( kk , 2 ) , new_orderbook_data( kk , 3 ) , ...
                                  new_orderbook_data( kk , 4 ) , new_orderbook_data( kk , 5 ) ) ;

   ## write the orderBook Price to new_orderbook_data
   new_orderbook_data( kk , 6 ) = str2double( S.orderBook.price ) ;

    ######## find where str2double( S.orderBook.price ) is within S ########
    for ix = 1 : size( S.orderBook.buckets , 2 )
     if ( str2double( S.orderBook.buckets{ ix }.price ) >= str2double( S.orderBook.price ) )
      mid_ix = ix ;  
      break ;
     endif
    endfor ## end ix loop

    if ( ( str2double( S.orderBook.price ) - str2double( S.orderBook.buckets{ mid_ix - 1 }.price ) ) < ...
         ( str2double( S.orderBook.buckets{ mid_ix }.price ) - str2double( S.orderBook.price ) ) ) ## refine accuracy of mid_ix
      mid_ix = mid_ix - 1 ;
    endif
    ########## index for str2double( S.orderBook.price ) found #############

  ## actual writing to file: +/- 20 lines around mid_ix, the orderbook_price
  orderbook_begin_ix = mid_ix - 20 ; orderbook_end_ix = mid_ix + 20 ;
  
  ## format of file to write is:
  ## year month day hour min orderbook_price long% short%
   xx = 7 ; ## initialise column counter
   for zz = orderbook_begin_ix : orderbook_end_ix
    new_orderbook_data( kk , xx ) = str2double( S.orderBook.buckets{ zz }.longCountPercent ) ;
    new_orderbook_data( kk , xx + 41 ) = str2double( S.orderBook.buckets{ zz }.shortCountPercent ) ;
    xx = xx + 1 ; ## increment column counter
   endfor ## end of zz filling loop
   
  endfor ## end of jj for loop to fill one line of new_orderbook_data

 dlmwrite( orderbooks{ ii } , new_orderbook_data , '-append' ) ; ## append whilst still inside the if statement so nothing is written when there is no new data

 endif ## end of ( time_diff > 0 ) if statement

endfor ## end of ii instrument for loop

## The downloaded structure S, which the code above parses and writes to
## new_orderbook_data, looks like this:
##  isstruct(S)
##  ans = 1
##  fieldnames(S)
##  ans =
##  {
##    [1,1] = orderBook
##  }
##
##  >> fieldnames(S.orderBook)
##  ans =
##  {
##    [1,1] = instrument
##    [2,1] = time
##    [3,1] = unixTime
##    [4,1] = price
##    [5,1] = bucketWidth
##    [6,1] = buckets
##  }
##
##  S.orderBook.instrument
##  ans = AUD_JPY
##
##  S.orderBook.time
##  ans = 2020-04-05T21:00:00Z
##
##  S.orderBook.unixTime
##  ans = 1586120400
##
##  S.orderBook.price
##  ans = 65.080
##
##  S.orderBook.bucketWidth
##  ans = 0.050
##
##  iscell( S.orderBook.buckets )
##  ans = 1
The code uses loops, which is usually frowned upon in favour of vectorised code, but I am not aware of a way to vectorise the parsing of the structure. This code also shows the use of a unix_command with tail to read just the last 'n' lines of the files on disc and thus avoid loading complete, and perhaps very large, files.
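
As a standalone illustration of that idea, this sketch reads just the last n rows of a hypothetical comma separated file, 'some_file', laid out like the 22 column OHLC files above:

n = 5 ; ## read only the last 5 rows rather than loading the whole file
[ ~ , data ] = system( [ "tail -" , num2str( n ) , " some_file" ] ) ;
data = strsplit( data , { "," , "\n" } ) ;       ## gives a cell array
data = data( 1 : end - 1 ) ;                     ## get rid of last empty cell
data = reshape( str2double( data ) , 22 , n )' ; ## n rows x 22 numeric columns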

Saturday, 11 April 2020

Get Latest Pricing Octave_Oanda_API Function

Following on from my earlier, simple account summary API function, here is a function which downloads the latest pricing information for a given currency cross/tradable.
## Copyright (C) 2020 dekalog
## 
## This program is free software: you can redistribute it and/or modify it
## under the terms of the GNU General Public License as published by
## the Free Software Foundation, either version 3 of the License, or
## (at your option) any later version.
## 
## This program is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
## GNU General Public License for more details.
## 
## You should have received a copy of the GNU General Public License
## along with this program.  If not, see
## <https://www.gnu.org/licenses/>.

## -*- texinfo -*- 
## @deftypefn {} {@var{best_bid}, @var{best_ask} =} get_current_pricing (@var{currency_cross})
##
## Returns the current pricing (best bid and ask prices) for CURRENCY_CROSS given by the Oanda API call,
## for example,
##
## "https://api-fxtrade.oanda.com/v3/accounts//pricing?instruments=EUR_USD"
##
## for CURRENCY_CROSS == 'eur_usd' (input is a character vector)
##
## Internally the function runs system() which calls the Curl
## library for the actual API download. The function is hard coded
## with the account token and account ID.
##
## @seealso{}
## @end deftypefn

## Author: dekalog 
## Created: 2020-04-05

function [ best_bid , best_ask ] = get_current_pricing ( cross )
 
if ( ischar( cross ) == 0 )
   error( 'Input must be a character vector for currency cross/tradable, e.g. "eur_usd"' ) ;
endif
 
## set up the headers
query = [ 'curl -s -H "Content-Type: application/json"' ] ; ## -s is silent mode for Curl for no paging to terminal
query = [ query , ' -H "Authorization: Bearer 63926d856d2f6d7ed16ff014c8227042-3ceb540b828a7380b02dfdb273e7ab68"' ] ;

## construct the API call
query = [ query , ' "https://api-fxtrade.oanda.com/v3/accounts/001-004-225017-001/pricing?instruments=' ] ;
query = [ query , toupper( cross ) , '"' ] ;

## call to use external Unix systems/Curl and return result
[ ~ , ret_JSON ] = system( query , RETURN_OUTPUT = 'TRUE' ) ;

## convert the returned JSON object to Octave structure
s = load_json( ret_JSON ) ;

best_bid = s.prices{1}.bids{1}.price ;
best_ask = s.prices{1}.asks{1}.price ;

endfunction

## Typically, the retval output structure s will look like this:-
##
## s =
##
##   scalar structure containing the fields:
##
##     time = 2020-04-10T17:42:13.317424312Z
##     prices =
##     {
##       [1,1] =
##
##         scalar structure containing the fields:
##
##           type = PRICE
##           time = 2020-04-10T17:42:09.734257300Z
##           bids =
##           {
##             [1,1] =
##
##               scalar structure containing the fields:
##
##                 price: 1x7 sq_string
##                 liquidity: 1x1 uint32 scalar
##
##             [1,2] =
##
##               scalar structure containing the fields:
##
##                 price: 1x7 sq_string
##                 liquidity: 1x1 uint32 scalar
##
##             [1,3] =
##
##               scalar structure containing the fields:
##
##                 price: 1x7 sq_string
##                 liquidity: 1x1 uint32 scalar
##
##             [1,4] =
##
##               scalar structure containing the fields:
##
##                 price: 1x7 sq_string
##                 liquidity: 1x1 uint32 scalar
##
##           }
##
##           asks =
##           {
##             [1,1] =
##
##               scalar structure containing the fields:
##
##                 price: 1x7 sq_string
##                 liquidity: 1x1 uint32 scalar
##
##             [1,2] =
##
##               scalar structure containing the fields:
##
##                 price: 1x7 sq_string
##                 liquidity: 1x1 uint32 scalar
##
##             [1,3] =
##
##               scalar structure containing the fields:
##
##                 price: 1x7 sq_string
##                 liquidity: 1x1 uint32 scalar
##
##             [1,4] =
##
##               scalar structure containing the fields:
##
##                 price: 1x7 sq_string
##                 liquidity: 1x1 uint32 scalar
##
##           }
##
##           closeoutBid = 1.09358
##           closeoutAsk = 1.09388
##           status = tradeable
##           tradeable = 1
##           unitsAvailable =
##
##             scalar structure containing the fields:
##
##               default: 1x1 scalar struct
##               openOnly: 1x1 scalar struct
##               reduceFirst: 1x1 scalar struct
##               reduceOnly: 1x1 scalar struct
##
##           quoteHomeConversionFactors =
##
##             scalar structure containing the fields:
##
##               positiveUnits: 1x10 sq_string
##               negativeUnits: 1x10 sq_string
##
##           instrument = EUR_USD
##
##     }
##
## where the bid and ask fields etc. just give field type information rather than
## values. The values can be accessed by, e.g. b = s.prices
## to get
##
## b =
## {
##   [1,1] =
##
##     scalar structure containing the fields:
##
##       type = PRICE
##       time = 2020-04-10T17:48:17.265291655Z
##       bids =
##       {
##         [1,1] =
##
##           scalar structure containing the fields:
##
##             price = 1.09352
##             liquidity = 1000000
##
##         [1,2] =
##
##           scalar structure containing the fields:
##
##             price = 1.09351
##             liquidity = 2000000
##
##         [1,3] =
##
##           scalar structure containing the fields:
##
##             price = 1.09350
##             liquidity = 2000000
##
##         [1,4] =
##
##           scalar structure containing the fields:
##
##             price = 1.09348
##             liquidity = 5000000
##
##       }
##
##       asks =
##       {
##         [1,1] =
##
##           scalar structure containing the fields:
##
##             price = 1.09393
##             liquidity = 1000000
##
##         [1,2] =
##
##           scalar structure containing the fields:
##
##             price = 1.09395
##             liquidity = 2000000
##
##         [1,3] =
##
##           scalar structure containing the fields:
##
##             price = 1.09396
##             liquidity = 2000000
##
##         [1,4] =
##
##           scalar structure containing the fields:
##
##             price = 1.09397
##             liquidity = 5000000
##
##       }
##
##       closeoutBid = 1.09348
##       closeoutAsk = 1.09397
##       status = tradeable
##       tradeable = 1
##       unitsAvailable =
##
##         scalar structure containing the fields:
##
##           default =
##
##             scalar structure containing the fields:
##
##               long: 1x6 sq_string
##               short: 1x6 sq_string
##
##           openOnly =
##
##             scalar structure containing the fields:
##
##               long: 1x6 sq_string
##               short: 1x6 sq_string
##
##           reduceFirst =
##
##             scalar structure containing the fields:
##
##               long: 1x6 sq_string
##               short: 1x6 sq_string
##
##           reduceOnly =
##
##             scalar structure containing the fields:
##
##               long: 1x1 sq_string
##               short: 1x1 sq_string
##
##
##       quoteHomeConversionFactors =
##
##         scalar structure containing the fields:
##
##           positiveUnits = 0.80113441
##           negativeUnits = 0.80178317
##
##       instrument = EUR_USD
##
## }
##
## where some of the values are now "viewable" in terminal.
##
## to do this directly do
##
## s.prices{1}.asks{1}.price, which will give 1.09393
##
## and 
##
## s.prices{1}.asks{1}.liquidity, which will give 1000000
The function returns the best bid and ask prices; however, it would be a simple enough task for readers to edit the function to return further levels, à la market depth, the liquidity at these levels and some other price metrics. There is a comment section at the end of the function which shows how to do this.
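
For example, a sketch of such an edit, using only the structure fields shown in the comments above, might gather all the returned levels and their liquidity into two matrices (the variable names are mine):

## inside get_current_pricing, after s = load_json( ret_JSON ) ;
n_levels = numel( s.prices{ 1 }.bids ) ; ## four levels in the example above
bids = zeros( n_levels , 2 ) ; asks = zeros( n_levels , 2 ) ; ## cols: price , liquidity
for ii = 1 : n_levels
 bids( ii , 1 ) = str2double( s.prices{ 1 }.bids{ ii }.price ) ;
 bids( ii , 2 ) = double( s.prices{ 1 }.bids{ ii }.liquidity ) ;
 asks( ii , 1 ) = str2double( s.prices{ 1 }.asks{ ii }.price ) ;
 asks( ii , 2 ) = double( s.prices{ 1 }.asks{ ii }.liquidity ) ;
endfor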

I wrote this function with a view to it perhaps becoming the basis of a client-side trailing stop functionality; a speculative sketch of what that might look like follows. Enjoy!
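
The stop distance, polling interval and exit action below are illustrative assumptions only, not working order management code:

stop_distance = 0.0020 ; ## an illustrative 20 pip trailing distance
[ best_bid , best_ask ] = get_current_pricing( 'eur_usd' ) ;
stop_level = str2double( best_bid ) - stop_distance ;

while ( true )
 pause( 5 ) ; ## an illustrative polling interval in seconds
 [ best_bid , best_ask ] = get_current_pricing( 'eur_usd' ) ;
 ## ratchet the stop up only, never down
 stop_level = max( stop_level , str2double( best_bid ) - stop_distance ) ;
 if ( str2double( best_bid ) <= stop_level )
  disp( 'Trailing stop hit - close the long position here.' ) ;
  break ;
 endif
endwhile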

Sunday, 5 April 2020

First Octave Function using Oanda API

As part of my on-going code revision I have written my first Octave function to use the Oanda API. This is just a simple "proof of concept" function which downloads an account summary.
## Copyright (C) 2020 dekalog
## 
## This program is free software: you can redistribute it and/or modify it
## under the terms of the GNU General Public License as published by
## the Free Software Foundation, either version 3 of the License, or
## (at your option) any later version.
## 
## This program is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
## GNU General Public License for more details.
## 
## You should have received a copy of the GNU General Public License
## along with this program.  If not, see
## <https://www.gnu.org/licenses/>.

## -*- texinfo -*- 
## @deftypefn {} {@var{retval} =} account_summary ()
##
## Returns the Oanda account summary given by the Oanda API call
##
## "https://api-fxtrade.oanda.com/v3/accounts//summary"
##
## Internally the function runs system() which calls the Curl
## library for the actual API download. The function is hard coded
## with the account token and account ID.
## 
## @seealso{}
## @end deftypefn

## Author: dekalog 
## Created: 2020-04-04

function retval = account_summary ()

## set up the headers
query = [ 'curl -s -H "Content-Type: application/json"' ] ; ## -s is silent mode for Curl for no paging to terminal
query = [ query , ' -H "Authorization: Bearer XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"' ] ;

## construct the API call
query = [ query , ' "https://api-fxtrade.oanda.com/v3/accounts/XXX-XXX-XXXXXX-XXX/summary"' ] ;

## call to use external Unix systems/Curl and return result
[ ~ , retval ] = system( query , RETURN_OUTPUT = 'TRUE' ) ;

## convert the returned json object to Octave structure
retval = load_json( retval ) ;

endfunction
The function uses the Curl library, so obviously this must be installed on your system. To convert the returned JSON object to a native Octave structure, the Octave wrapper function available from this Github, https://github.com/Andy1978/octave-rapidjson, is used. For this wrapper function to work, rapidjson must also be installed.
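
A minimal usage sketch, assuming valid credentials have been hard coded into the function and load_json is on the Octave load path:

s = account_summary () ; ## returns a native Octave structure
fieldnames( s )          ## inspect the top level fields of the returned summary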

I shall probably write more such Octave-OandaAPI functions for my particular use cases and will blog about them as and when I do so.

Thursday, 26 March 2020

Some Basic Code Housekeeping

Since my last post, back in late November last year, I have been doing a few disparate things such as:
  • improving the coding of some functions in R to use the Oanda API to automatically download data using cronjobs
  • coding some Octave functions to plot/visualise the above data
  • more work on Random Vector Functional Link networks 
  • trying my hand at some discretionary day trading to take advantage of the increased market volatility due to the current Covid-19 pandemic
I blogged about the relevant R functions back in 2017 here, and my recent work has been to improve them by implementing some basic price sanity checks on the OHLC bars, checking the integrity of the date handling prior to writing to file, and also writing to file in a format more amenable to being read by Octave.

A new addition to this work is to also download Oanda's forex order and position books using the API, which I was inspired to do by this online guide. At the moment I am testing this data using ideas from the literature on Limit Order Books and using the R Boruta package to test if the features described in the literature are suitable for this data.

To help visualise this, I have written the following Octave code
clear all ;
cd /home/dekalog/Documents/octave/oanda_data/20m ;

files_10m = glob( '*_ohlc_10m' ) ;
## files_10m =
## {
##   [1,1] = aud_jpy_ohlc_10m
##   [2,1] = aud_usd_ohlc_10m
##   [3,1] = eur_aud_ohlc_10m
##   [4,1] = eur_chf_ohlc_10m
##   [5,1] = eur_gbp_ohlc_10m
##   [6,1] = eur_jpy_ohlc_10m
##   [7,1] = eur_usd_ohlc_10m
##   [8,1] = gbp_chf_ohlc_10m
##   [9,1] = gbp_jpy_ohlc_10m
##   [10,1] = gbp_usd_ohlc_10m
##   [11,1] = nzd_usd_ohlc_10m
##   [12,1] = usd_cad_ohlc_10m
##   [13,1] = usd_chf_ohlc_10m
##   [14,1] = usd_jpy_ohlc_10m
##   [15,1] = xag_usd_ohlc_10m
##   [16,1] = xau_usd_ohlc_10m
## }
file_no = input( 'Enter file no. from list, a number 1 to 16 inclusive. ' ) ;
filename = files_10m{ file_no } ;
str_split = strsplit( filename , "_" ) ;
price = strjoin( str_split( 1 : 2 ) , "_" ) ;
orders = dlmread( [ price , '_orderbook_snapshot' ] ) ; ## get latest orderbook levels
latest_price = orders( 1 , 1 ) ;
[ ~ , ix ] = min( abs( orders( 1 , 1 ) .- orders( 2 : end , 1 ) ) ) ; ix = ix + 1 ;
order_levels = 10 ;
order_max = max( max( orders( ix - order_levels : ix + order_levels , 2 : 3 ) ) ) ;
max10 = 20 / order_max ; max20 = 10 / order_max ;

## create a unix system command to read the last 'n' rows of ohlc of filename
unix_command = [ "tail -100" , " " , filename ] ; 
[ ~ , data ] = system( unix_command ) ;
data = strsplit( data , { "," , "\n" } ) ; ## gives a cell arrayfun
if ( isempty( data{ end } ) )
 data( end ) = [] ;
endif
data = reshape( str2double( data' ) , 22 , 100 )' ;
## data cols 
## open == 18, high == 19, low == 20, close == 21, vol == 22

## plot the 10 minute bars
h = findobj( 'type' , 'figure' ) ;
if ( !isempty( h ) && any( h == 10 ) )
clf( 10 ) ;
endif
figure( 10 ) ; candle( data(:,19) , data(:,20) , data(:,21) , data(:,18) ) ;
hold on ; figure( 10 ) ; barh( orders( ix - order_levels : ix + order_levels , 1 ) , orders( ix - order_levels : ix + order_levels , 2 ) .* max10 , 0.7 , 'c' ) ; 
barh( orders( ix - order_levels : ix + order_levels , 1 ) , orders( ix - order_levels : ix + order_levels , 3 ) .* max10 , 0.4 , 'm' ) ;
hold off ;
hline( latest_price , 'r-' ) ; ## the latest snapshot price
for ii = ix - order_levels : ix + order_levels
hline( orders( ii , 1 ) , 'k:' , num2str( orders( ii , 1 ) ) ) ;
endfor
title( strjoin( str_split( 1 : 2 ) , " " ) ) ;

## get ix for forming 20 minute bars
ix0 = find( ( data(:,4) == shift( data(:,4) , -1 ) ) .* ( ( data(:,5) == 0 ) .* ( shift( data(:,5) , -1 ) == 10 ) ) ) ;
ix20 = find( ( data(:,4) == shift( data(:,4) , -1 ) ) .* ( ( data(:,5) == 20 ) .* ( shift( data(:,5) , -1 ) == 30 ) ) ) ;
ix40 = find( ( data(:,4) == shift( data(:,4) , -1 ) ) .* ( ( data(:,5) == 40 ) .* ( shift( data(:,5) , -1 ) == 50 ) ) ) ;

## bars beginning on the hour
## the highs
data( ix0 , 19 ) = max( [ data( ix0 , 19 ) , data( ix0.+1 , 19 ) ] , [] , 2 ) ;
## the lows
data( ix0 , 20 ) = min( [ data( ix0 , 20 ) , data( ix0.+1 , 20 ) ] , [] , 2 ) ;
## the close
data( ix0 , 21 ) = data( ix0.+1 , 21 ) ;
## the vol
data( ix0 , 22 ) = data( ix0 , 22 ) + data( ix0.+1 , 22 ) ;

## bars beginning 20 past the hour
## the highs
data( ix20 , 19 ) = max( [ data( ix20 , 19 ) , data( ix20.+1 , 19 ) ] , [] , 2 ) ;
## the lows
data( ix20 , 20 ) = min( [ data( ix20 , 20 ) , data( ix20.+1 , 20 ) ] , [] , 2 ) ;
## the close
data( ix20 , 21 ) = data( ix20.+1 , 21 ) ;
## the vol
data( ix20 , 22 ) = data( ix20 , 22 ) + data( ix20.+1 , 22 ) ;

## bars beginning 40 past the hour
## the highs
data( ix40 , 19 ) = max( [ data( ix40 , 19 ) , data( ix40.+1 , 19 ) ] , [] , 2 ) ;
## the lows
data( ix40 , 20 ) = min( [ data( ix40 , 20 ) , data( ix40.+1 , 20 ) ] , [] , 2 ) ;
## the close
data( ix40 , 21 ) = data( ix40.+1 , 21 ) ;
## the vol
data( ix40 , 22 ) = data( ix40 , 22 ) + data( ix40.+1 , 22 ) ;

## delete rows of 10, 30 and 50 minutes past the hour
data( [ ix0.+1 ; ix20.+1 ; ix40.+1 ] , : ) = [] ;

## plot 20 minute candles
if ( !isempty( h ) && any( h == 20 ) )
clf( 20 ) ;
endif
figure( 20 ) ; candle( data(:,19) , data(:,20) , data(:,21) , data(:,18) ) ;
hold on ; figure( 20 ) ; barh( orders( ix - order_levels : ix + order_levels , 1 ) , orders( ix - order_levels : ix + order_levels , 2 ) .* max20 , 0.7 , 'c' ) ; 
barh( orders( ix - order_levels : ix + order_levels , 1 ) , orders( ix - order_levels : ix + order_levels , 3 ) .* max20 , 0.4 , 'm' ) ;
hold off ;
hline( latest_price , 'r-' ) ; ## the latest snapshot price
for ii = ix - order_levels : ix + order_levels
hline( orders( ii , 1 ) , 'k:' , num2str( orders( ii , 1 ) ) ) ;
endfor
title( strjoin( str_split( 1 : 2 ) , " " ) ) ;
which takes in 10 minute OHLC data, creates 20 minute bars from these 10 minute bars, and plots both with the 10 order book levels above and below the latest order book price level. The horizontal blue bars are the percentage of buy orders whilst the red bars are the percentage of sell orders. This can be compared with the closest Oanda screenshot.
There are differences between the two, namely
  • My plot above is of 20 minute bars vs Oanda's 15 minute bars
  • My open order histogram is fixed at the latest downloaded open order snapshot corresponding to the rightmost bar close, whereas the Oanda platform allows you to scroll back and see the order book in historical time
The 20 minute timeframe was deliberately chosen to match the maximum "refresh rate" of the API download availability and gives a nice, slightly different perspective to the 15 and 30 minute granularity provided on the Oanda platform.

Friday, 29 November 2019

RVFL Network Results

Having let the tests outlined in my previous post run, I can say that the results are inconclusive as the optimised number of neurons in the hidden layer turns out to be one, the minimum possible to even have a hidden layer. This leads me to believe that the raw features, the ideal cyclic tau embedding, are sufficient in and of themselves for the purpose I have in mind.

To that end, I have indulged in a bit of feature engineering and written the following Octave function,
## Copyright (C) 2019 dekalog
## 
## This program is free software: you can redistribute it and/or modify it
## under the terms of the GNU General Public License as published by
## the Free Software Foundation, either version 3 of the License, or
## (at your option) any later version.
## 
## This program is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
## GNU General Public License for more details.
## 
## You should have received a copy of the GNU General Public License
## along with this program.  If not, see
## <https://www.gnu.org/licenses/>.

## -*- texinfo -*- 
## @deftypefn {} {@var{emb1}, @var{emb2}, @var{prob_matrix} =} cyclic_embedding (@var{price}, @var{period})
##
## Inputs are a price vector and a period vector.
##
## The function normalises the price between -1 and +1 over an adaptive period lookback.
##
## The outputs are two matrices of 3 columns each - the first column is normalised price
## and the second and third are delay embeddings of Tau and 2 x Tau, Tau being equal to
## one quarter of the adaptive period vector, the theoretical ideal Tau for sinusoidal
## waveforms.
##
## The first output matrix, EMB1, normalises the Tau and 2 x Tau columns according to the
## most recent max high/min low; EMB2's Tau and 2 x Tau columns are normalised according to
## the max high/min low in force at the delay embedding time. The third output, PROB_MATRIX,
## holds the probabilities of price being at a cyclic peak, at a cyclic trough, or neither.
##
## @seealso{}
## @end deftypefn

## Author: dekalog 
## Created: 2019-09-18

function [ emb1 , emb2 , prob_matrix ] = cyclic_embedding( price , period )
  
price_smooth = price ;
emb1 = repmat( price , 1 , 3 ) ;
emb2 = emb1 ;
coeffs = generalised_sgolay_filter_coeffs( 5 , 2 , 0 ) ; coeffs = coeffs' ;

## initialising loop
price_smooth( 1 : 3 ) = coeffs( 1 : 3 , : ) * price( 1 : 5 ) ;  
for ii = 4 : 48 
  price_smooth( ii ) = coeffs( 3 , : ) * price( ii - 2 : ii + 2 ) ;
endfor
price_smooth( 49 : 50 ) = coeffs( 4 : 5 , : ) * price( 46 : 50 ) ;
## end initialising loop 

coeffs( 1 : 2 , : ) = [] ;

for ii = 51 : size( price , 1 )

  price_smooth( ii - 2 : ii ) = coeffs * price( ii - 4 : ii ) ;
  max_r = max( price_smooth( ii - period( ii ) : ii ) ) ;
  min_r = min( price_smooth( ii - period( ii ) : ii ) ) ;
  
  ## period is exactly divisible by 4 ( and 2 ), e.g. 8 12 16 20 24 28 32 36 40 44 48 etc.
  if ( rem( period( ii ) , 4 ) == 0 ) 
    
  emb1( ii , 1 ) = 2 * ( ( price_smooth( ii ) - min_r ) / ( max_r - min_r ) - 0.5 ) ;
  emb1( ii , 2 ) = 2 * ( ( price_smooth( ii - round( period( ii ) / 4 ) ) - min_r ) / ( max_r - min_r ) - 0.5 ) ;
  emb1( ii , 3 ) = 2 * ( ( price_smooth( ii - round( period( ii ) / 2 ) ) - min_r ) / ( max_r - min_r ) - 0.5 ) ;
  emb2( ii , 1 ) = emb1( ii , 1 ) ;
  emb2( ii , 2 ) = emb2( ii - round( period( ii ) / 4 ) , 1 ) ;
  emb2( ii , 3 ) = emb2( ii - round( period( ii ) / 2 ) , 1 ) ;
  
  ## periods 10 14 18 22 26 30 34 38 42 46 50
  elseif ( rem( period( ii ) , 2 ) == 0 && rem( period( ii ) , 4 ) == 2 )

  emb1( ii , 1 ) = 2 * ( ( price_smooth( ii ) - min_r ) / ( max_r - min_r ) - 0.5 ) ;
  emb1( ii , 2 ) = 2 * ( ( ( 0.5*price_smooth( ii - round( period( ii ) / 4 ) ) + 0.5*price_smooth( ii - round( period( ii ) / 4 ) + 1 ) ) - min_r )...
                       / ( max_r - min_r ) - 0.5 ) ;
  emb1( ii , 3 ) = 2 * ( ( price_smooth( ii - round( period( ii ) / 2 ) ) - min_r ) / ( max_r - min_r ) - 0.5 ) ;
  emb2( ii , 1 ) = emb1( ii , 1 ) ;
  emb2( ii , 2 ) = 0.5*emb2( ii - round( period( ii ) / 4 ) , 1 ) + 0.5*emb2( ii - round( period( ii ) / 4 ) + 1 , 1 ) ;
  emb2( ii , 3 ) = emb2( ii - round( period( ii ) / 2 ) , 1 ) ;

  ## periods 9 13 17 21 25 29 33 37 41 45 49
  elseif ( rem( period( ii ) , 2 ) == 1 && rem( period( ii ) , 4 ) == 1 )

  emb1( ii , 1 ) = 2 * ( ( price_smooth( ii ) - min_r ) / ( max_r - min_r ) - 0.5 ) ;
  emb1( ii , 2 ) = 2 * ( ( ( 0.75*price_smooth( ii - round( period( ii ) / 4 ) ) + 0.25*price_smooth( ii - round( period( ii ) / 4 ) - 1 ) ) - min_r )...
                       / ( max_r - min_r ) - 0.5 ) ;
  emb1( ii , 3 ) = 2 * ( ( ( 0.5*price_smooth( ii - round( period( ii ) / 2 ) ) + 0.5*price_smooth( ii - round( period( ii ) / 2 ) + 1 ) ) - min_r )...
                       / ( max_r - min_r ) - 0.5 ) ;
  emb2( ii , 1 ) = emb1( ii , 1 ) ;
  emb2( ii , 2 ) = 0.75*emb2( ii - round( period( ii ) / 4 ) , 1 ) + 0.25*emb2( ii - round( period( ii ) / 4 ) - 1 , 1 ) ;
  emb2( ii , 3 ) = 0.5*emb2( ii - round( period( ii ) / 2 ) , 1 ) + 0.5*emb2( ii - round( period( ii ) / 2 ) + 1 , 1 ) ;
  
  ## periods 11 15 19 23 27 31 35 39 43 47
  elseif ( rem( period( ii ) , 2 ) == 1 && rem( period( ii ) , 4 ) == 3 )

  emb1( ii , 1 ) = 2 * ( ( price_smooth( ii ) - min_r ) / ( max_r - min_r ) - 0.5 ) ;
  emb1( ii , 2 ) = 2 * ( ( ( 0.75*price_smooth( ii - round( period( ii ) / 4 ) ) + 0.25*price_smooth( ii - round( period( ii ) / 4 ) + 1 ) ) - min_r )...
                       / ( max_r - min_r ) - 0.5 ) ;
  emb1( ii , 3 ) = 2 * ( ( ( 0.5*price_smooth( ii - round( period( ii ) / 2 ) ) + 0.5*price_smooth( ii - round( period( ii ) / 2 ) + 1 ) ) - min_r )...
                       / ( max_r - min_r ) - 0.5 ) ;
  emb2( ii , 1 ) = emb1( ii , 1 ) ;
  emb2( ii , 2 ) = 0.75*emb2( ii - round( period( ii ) / 4 ) , 1 ) + 0.25*emb2( ii - round( period( ii ) / 4 ) + 1 , 1 ) ;
  emb2( ii , 3 ) = 0.5*emb2( ii - round( period( ii ) / 2 ) , 1 ) + 0.5*emb2( ii - round( period( ii ) / 2 ) + 1 , 1 ) ;
  
  endif
  
endfor ## end of embedding features creation

feature_peak = emb2 * [ 1 0 ; -1 1 ; 0 -1 ] * [ 1 ; 1 ] ;
feature_trough = zeros( size( feature_peak ) ) ;
ix = find( feature_peak < 0 ) ; 
feature_trough( ix ) = abs( feature_peak( ix ) ) ;
feature_peak( feature_peak <= 0 ) = 0 ;

## https://stats.stackexchange.com/questions/132652/how-to-determine-which-distribution-fits-my-data-best
## Weibull distribution with shape = 219.68 and scale = 1.94 for turn +/- 1 bar
## Weibull distribution with shape = 85.88 and scale = 1.84 for turn +/- 2 bar
## This comes from bayesian testing of cutoff value of feature_peak/feature_trough for highs/lows +/- 1 bar.
## The function used to get these values is "bayes_train_cyclic_turn_prob_of_embedding.m" which calls
## "bayes_optim_of_cyclic_embedding_conv_function" in /home/dekalog/Documents/octave/turning_points
scale = 1.84 ; shape = 85.88 ;
prob_matrix = zeros( size( feature_peak , 1 ) , 3 ) ;
prob_matrix( : , 1 ) = wblcdf( feature_peak , scale , shape ) ;
prob_matrix( : , 2 ) = wblcdf( feature_trough , scale , shape ) ;
prob_matrix( : , 3 ) = 1 .- sum( prob_matrix( : , 1 : 2 ) , 2 ) ;

endfunction
which again is a work in progress.

This takes as input a price vector and a period vector and outputs features as per the plots in the above linked ideal cyclic tau post, plus a matrix of probabilities for price action being at a cyclic high/low. This matrix is the result of Monte Carlo Bayesian optimisation over ideal sine wave prices to get a cutoff value for the feature derived in the above function. The probability distribution used for this probability matrix is the Weibull distribution, which was determined by following the routine outlined in this "how to determine which distribution fits my data best" forum post and using the R statistical software platform.
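
As a quick sanity check, the function can be run on an ideal, noise free sine wave. This sketch assumes the function above, its generalised_sgolay_filter_coeffs dependency and the statistics package (for wblcdf) are all available:

period = 20 ; ## an ideal, fixed cyclic period
price = sin( 2 * pi .* ( 1 : 500 )' ./ period ) .+ 2 ; ## offset sine wave as a price proxy
period_vec = repmat( period , 500 , 1 ) ;
[ emb1 , emb2 , prob_matrix ] = cyclic_embedding( price , period_vec ) ;
## plot the peak and trough probabilities over a window after initialisation
plot( prob_matrix( 100 : 200 , 1 ) , 'g' , prob_matrix( 100 : 200 , 2 ) , 'r' ) ;
legend( 'cyclic high probability' , 'cyclic low probability' ) ;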

The hard coded values for the cutoff in the above code are from the results of optimisation on pure sine waves. As I write this post, optimisation routines are running on sine waves with 20 dB of added noise.

More in due course.

Wednesday, 13 November 2019

Random Vector Functional Link Networks

In my last post I briefly mentioned Random Vector Functional Link networks and that this post would be about them, inspired by the fact that the preliminary results of the last post suggest a shallow rather than a deep network structure.

The idea of RVFL networks is over two decades old and has perhaps been overshadowed by the more recent Extreme Learning Machine, although as mentioned in my last post there is some controversy about plagiarism with regard to ELMs. An RVFL network is basically an ELM with additional direct connections from the input layer to the output layer. The connections from the input layer to the single hidden layer are randomly generated and then fixed, the hidden layer is concatenated with the original input layer to form a new layer, H, and the connections from H to the output layer are solved in one step, using the Moore-Penrose inverse to get the linear least squares solution, or alternatively using regularised least squares. The advantage of this closed-form approach is the fast training times compared with more general optimisation routines such as gradient descent.
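
In Octave the whole training step therefore reduces to a few lines. Below is a minimal sketch on toy data, using a radial basis activation and the same regularised least squares form as the code later in this post; all the names and values are illustrative:

X = randn( 200 , 5 ) ; y = double( sum( X , 2 ) > 0 ) ; ## toy features and binary targets
hidden_layer_size = 10 ; lambda = 0.01 ;

Weight = rand( size( X , 2 ) , hidden_layer_size ) * 2 - 1 ; ## random, fixed input to hidden weights
Bias = rand( 1 , hidden_layer_size ) ;                       ## random, fixed hidden biases
H = exp( -abs( X * Weight .+ Bias ) ) ;                      ## radial basis function activation
H = [ H , X ] ;                                              ## direct links: concatenate original inputs

## solve the hidden-plus-direct to output weights in one step (ridge regression)
beta = ( eye( size( H , 2 ) ) / lambda + H' * H ) \ ( H' * y ) ;
y_hat = H * beta ; ## in-sample predictions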

The above linked paper details a series of comparative tests run over various configurations of RVFL networks over a number of different data sets. Some of the main conclusions drawn are:
  1. the direct links from input to output enhance network performance
  2. whether to include output bias or not is a data dependent tunable factor
  3. the radial basis function for the hidden units always leads to better performance
  4. regularised least squares (ridge regression) performs better than the Moore Penrose inverse
Based on the code provided by the study's authors I have written the following two objective functions for use with the BayesOpt Library.
## Copyright (C) 2019 dekalog
## 
## This program is free software: you can redistribute it and/or modify it
## under the terms of the GNU General Public License as published by
## the Free Software Foundation, either version 3 of the License, or
## (at your option) any later version.
## 
## This program is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
## GNU General Public License for more details.
## 
## You should have received a copy of the GNU General Public License
## along with this program.  If not, see
## <https://www.gnu.org/licenses/>.

## -*- texinfo -*- 
## @deftypefn {} {@var{J} =} rvfl_training_of_cyclic_embedding_with_cv (@var{x})
##
## Function for Bayesian training of RVFL networks with fixed parameters of:
##
##    direct links,
##    radial basis function activation,
##    ridge regression for regularized least squares,
##
## and optimisable parameters of:
##
##    number of neurons in hidden layer,
##    type of randomisation (uniform or Gaussian),
##    scaling mode and scale of hidden layer inputs,
##    lambda for the least squares regression,
##    with or without an output bias.
##
## The input X is a vector of 6 values to be optimised by the BayesOpt library
## function 'bayesoptcont.'
## The output J is the Brier Score for the test fold cross validated data.
## @seealso{}
## @end deftypefn

## Author: dekalog 
## Created: 2019-11-04

function J = rvfl_training_of_cyclic_embedding_with_cv ( x )
global sample_features ; global sample_targets ;
epsilon = 1e-15 ; ## to ensure log() does not give out a nan

Nfea = size( sample_features , 2 ) ;

## check input x
if ( numel( x ) != 6 )
  error( 'The input vector x must be of length 6.' ) ;
endif

## get the parameters from input x
hidden_layer_size = floor( x( 1 ) ) ;    ## number of neurons in hidden layer
randomisation_type = floor( x( 2 ) ) ;   ## 1 == uniform, 2 == Gaussian
scale_mode = floor( x( 3 ) ) ;           ## 1 will scale the features for all neurons, 2 will scale the features for each hidden
##                                          neuron separately, 3 will scale the range of the randomization for uniform distribution
scale = x( 4 ) ;                         ## Linearly scale the random features before feeding into the nonlinear activation function. 
##                                          In this implementation, we consider the threshold which leads to 0.99 of the maximum/minimum 
##                                          value of the activation function as the saturating threshold.
##                                          scale = 0.9 means all the random features will be linearly scaled
##                                          into 0.9 * [ lower_saturating_threshold , upper_saturating_threshold ].
if_output_bias = floor( x( 5 ) + 0.5 ) ; ## Use output bias, or not? 1 == yes , 0 == no.
lambda = x( 6 ) ;                        ## the regularization coefficient lambda 

length_jj_loop = 25 ;
all_brier_values = zeros( length_jj_loop , 1 ) ;  

##rand( 'seed' , 0 ) ;
##randn( 'seed' , 0 ) ;
##U_sample_targets = unique( sample_targets ) ;
##nclass = numel( U_sample_targets ) ;

##sample_targets_temp = zeros( numel( sample_targets ) , nclass ) ; 
##
#### get the 0 - 1 one hot coding for the target,  
##for i = 1 : nclass
##  idx = sample_targets == U_sample_targets( i ) ;
##  sample_targets_temp( idx , i ) = 1 ;
##endfor

###### information for splitting into training and test sets ###############
ix_positive_targets = find( sample_targets == 1 ) ;
ix_negative_targets = ( 1 : numel( sample_targets ) )' ; 
ix_negative_targets( ix_positive_targets ) = [] ;
## split 20/80
split_no1 = round( 0.2 * numel( ix_positive_targets ) ) ;
split_no2 = round( 0.2 * numel( ix_negative_targets ) ) ;

######### get type of randomisation from input x #################
if ( randomisation_type == 1 ) ## uniform randomisation 
  
  if ( scale_mode == 3 ) ## range scaled for uniform randomisation
     Weight = scale * ( rand( Nfea , hidden_layer_size ) * 2 - 1 ) ; ## scaled uniform random input weights to hidden layer
     Bias = scale * rand( 1 , hidden_layer_size ) ;                  ## scaled random bias weights to hidden layer
  else
     Weight = rand( Nfea , hidden_layer_size ) * 2 - 1 ; ## unscaled random input weights to hidden layer
     Bias = rand( 1 , hidden_layer_size ) ;              ## unscaled random bias weights to hidden layer
  endif
    
elseif ( randomisation_type == 2 ) ## gaussian randomisation
  Weight = randn( Nfea , hidden_layer_size ) ; ## gaussian random input weights to hidden layer
  Bias = randn( 1 , hidden_layer_size ) ;      ## gaussian random bias weights to hidden layer
else
  error( 'only Gaussian and Uniform are supported' )
endif
############################################################################

## Activation Function    
Saturating_threshold = [ -2.1 , 2.1 ] ;
Saturating_threshold_activate = [ 0 , 1 ] ;

for jj = 1 : length_jj_loop

## shuffle
randperm1 = randperm( numel( ix_positive_targets) ) ;
randperm2 = randperm( numel( ix_negative_targets) ) ;
test_ix1 = ix_positive_targets( randperm1( 1 : split_no1 ) ) ;
test_ix2 = ix_negative_targets( randperm2( 1 : split_no2 ) ) ;
test_ix = [ test_ix1 ; test_ix2 ] ;
train_ix1 = ix_positive_targets( randperm1( split_no1 + 1 : end ) ) ;
train_ix2 = ix_negative_targets( randperm2( split_no2 + 1 : end ) ) ;
train_ix = [ train_ix1 ; train_ix2 ] ;

sample_targets_train = sample_targets( train_ix ) ;
sample_features_train = sample_features( train_ix , : ) ;

Nsample = size( sample_features_train , 1 ) ;

Bias_train = repmat( Bias , Nsample , 1 ) ;
H = sample_features_train * Weight + Bias_train ;

if ( scale_mode == 1 )
  ## scale the features for all neurons 
  [ H , k , b ] = Scale_feature( H , Saturating_threshold , scale ) ;
elseif ( scale_mode == 2 ) 
  ## else scale the features for each hidden neuron separately
  [ H , k , b ] = Scale_feature_separately( H , Saturating_threshold , scale ) ;
endif
  
## actual activation, the radial basis function
H = exp( -abs( H ) ) ;

if ( if_output_bias == 1 )
  ## we will use an output bias
  H = [ H , ones( Nsample , 1 ) ] ; 
endif

## the direct link scaling options, concatenate hidden layer and sample_features_train 
if ( scale_mode == 1 )
  ## scale the features for all neurons
  sample_features_train = sample_features_train .* k + b ;
  H = [ H , sample_features_train ] ;
elseif ( scale_mode == 2 )
  ## else scale the features for each hidden neuron separately
  [ sample_features_train , ktr , btr ] = Scale_feature_separately( sample_features_train , Saturating_threshold_activate , scale ) ;
  H = [ H , sample_features_train ] ;
else
  H = [ H , sample_features_train ] ;
endif

H( isnan( H ) ) = 0 ; ## avoids any 'blowups' due to nans in H

## do the regularized least squares for concatenated hidden layer output
## and the original, possibly scaled, input sample_features
if ( hidden_layer_size < Nsample )
 beta = ( eye( size( H , 2 ) ) / lambda + H' * H ) \ H' * sample_targets_train ;
else
 beta = H' * ( ( eye( size( H , 1 ) ) / lambda + H * H' ) \ sample_targets_train ) ; 
endif

############# now the test on test data ####################################
Bias_test = repmat( Bias , numel( sample_targets( test_ix ) ) , 1 ) ;
H_test = sample_features( test_ix , : ) * Weight + Bias_test ;

if ( scale_mode == 1 )
  ## scale the features for all neurons
  H_test = H_test .* k + b ;
elseif ( scale_mode == 2 )
  ## else scale the features for each hidden neuron separately
  nSamtest = size( H_test , 1 ) ; 
  kt = repmat( k , nSamtest , 1 ) ;
  bt = repmat( b , nSamtest , 1 ) ;
  H_test = H_test .* kt + bt ;
endif

## actual activation, the radial basis function
H_test = exp( -abs( H_test ) ) ;

if ( if_output_bias == 1 )
  ## we will use an output bias
  H_test = [ H_test , ones( numel( sample_targets( test_ix ) ) , 1 ) ] ; 
endif

## the direct link scaling options, concatenate hidden layer and sample_features_train 
if ( scale_mode == 1 )
  ## scale the features for all neurons 
  testX_temp = sample_features( test_ix , : ) .* k + b ;
  H_test = [ H_test , testX_temp ] ;
elseif ( scale_mode == 2 )
  ## else scale the features for each hidden neuron separately
  nSamtest = size( H_test , 1 ) ; 
  kt = repmat( ktr , nSamtest , 1 ) ;
  bt = repmat( btr , nSamtest , 1 ) ;
  testX_temp = sample_features( test_ix , : ) .* kt + bt ;   
  H_test = [ H_test , testX_temp ] ; 
else
  H_test = [ H_test , sample_features( test_ix , : ) ] ; 
endif

H_test( isnan( H_test ) ) = 0 ; ## avoids any 'blowups' due to nans in H_test

## get the test predicted target output
test_targets = H_test * beta ;

##Y_temp = zeros( Nsample , 1 ) ;
##% decode the target output
##for i = 1 : Nsample
##    [ maxvalue , idx ] = max( sample_targets_temp( i , : ) ) ;
##    Y_temp( i ) = U_sample_targets( idx ) ;
##endfor

############################################################################

## the final logistic output
final_output = 1.0 ./ ( 1.0 .+ exp( -test_targets ) ) ;

## get the Brier_score
## https://en.wikipedia.org/wiki/Brier_score
all_brier_values( jj ) = mean( ( final_output .- sample_targets( test_ix ) ) .^ 2 ) ;

rand( 'state' , 'reset' ) ; randn( 'state' , 'reset' ) ; ## reset the rng

endfor ## end of jj loop

J = mean( all_brier_values ) ;

endfunction

## Various measures of goodness
## https://stats.stackexchange.com/questions/312780/why-is-accuracy-not-the-best-measure-for-assessing-classification-models
## https://www.fharrell.com/post/classification/
## https://stats.stackexchange.com/questions/433628/what-is-a-reliable-measure-of-accuracy-for-logistic-regression
## https://www.jstatsoft.org/article/view/v090i12
## https://stats.idre.ucla.edu/other/mult-pkg/faq/general/faq-what-are-pseudo-r-squareds/
## https://stats.stackexchange.com/questions/319666/aic-with-test-data-is-it-possible
## https://www.learningmachines101.com/lm101-076-how-to-choose-the-best-model-using-aic-or-gaic/
## https://stackoverflow.com/questions/48185090/how-to-get-the-log-likelihood-for-a-logistic-regression-model-in-sklearn
## https://stats.stackexchange.com/questions/67903/does-down-sampling-change-logistic-regression-coefficients
## https://stats.stackexchange.com/questions/163221/whats-the-measure-to-assess-the-binary-classification-accuracy-for-imbalanced-d
## https://stats.stackexchange.com/questions/168929/logistic-regression-is-predicting-all-1-and-no-0
## https://stats.stackexchange.com/questions/435307/multiple-linear-regression-lse-when-one-of-parameter-is-known

These functions stand on the shoulders of the above paper's code and hard code direct links, radial basis function activation and ridge regression, with the number of neurons in the hidden layer, lambda for the ridge regression, the different scaling options and the inclusion or not of an output bias as the optimisable parameters. The function minimisation objective is the Brier score.
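
For reference, the Brier score being minimised is simply the mean squared difference between the predicted probabilities and the 0/1 targets, e.g.

p = [ 0.9 ; 0.2 ; 0.7 ] ; ## predicted probabilities of the positive class
y = [ 1 ; 0 ; 0 ] ;       ## actual binary targets
brier_score = mean( ( p .- y ) .^ 2 ) ## ( 0.01 + 0.04 + 0.49 ) / 3 = 0.18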

This second function is slightly different in that the Akaike information criterion is the minimisation objective, and there is the option to use the Netlab generalised linear model function to solve for the hidden to output weights (comment out the relevant code as necessary).
## Copyright (C) 2019 dekalog
## 
## This program is free software: you can redistribute it and/or modify it
## under the terms of the GNU General Public License as published by
## the Free Software Foundation, either version 3 of the License, or
## (at your option) any later version.
## 
## This program is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
## GNU General Public License for more details.
## 
## You should have received a copy of the GNU General Public License
## along with this program.  If not, see
## <https://www.gnu.org/licenses/>.

## -*- texinfo -*- 
## @deftypefn {} {@var{J} =} rvfl_training_of_cyclic_embedding (@var{x})
##
## Function for Bayesian training of RVFL networks with fixed parameters of:
##
##    direct links,
##    radial basis function activation,
##    ridge regression for regularized least squares,
##
## and optimisable parameters of:
##
##    number of neurons in hidden layer,
##    type of randomisation (uniform or Gaussian),
##    scaling mode and scale of hidden layer inputs.
##
## The input X is a vector of 4 values to be optimised by the BayesOpt library
## function 'bayesoptcont' (the output bias and lambda parameters of the companion
## function above are commented out below and so are not optimised here).
## The output J is the AIC value for the tested model.
## @seealso{}
## @end deftypefn

## Author: dekalog 
## Created: 2019-11-04

function J = rvfl_training_of_cyclic_embedding ( x )
global sample_features ; global sample_targets ;
epsilon = 1e-15 ; ## to ensure log() does not give out a nan

## check input x
if ( numel( x ) != 4 )
  error( 'The input vector x must be of length 4.' ) ;
endif

## get the parameters from input x
hidden_layer_size = floor( x( 1 ) ) ;    ## number of neurons in hidden layer
randomisation_type = floor( x( 2 ) ) ;   ## 1 == uniform, 2 == Gaussian
scale_mode = floor( x( 3 ) ) ;           ## 1 will scale the features for all neurons, 2 will scale the features for each hidden
##                                          neuron separately, 3 will scale the range of the randomization for uniform distribution
scale = x( 4 ) ;                         ## Linearly scale the random features before feeding into the nonlinear activation function. 
##                                          In this implementation, we consider the threshold which leads to 0.99 of the maximum/minimum 
##                                          value of the activation function as the saturating threshold.
##                                          scale = 0.9 means all the random features will be linearly scaled
##                                          into 0.9 * [ lower_saturating_threshold , upper_saturating_threshold ].
##if_output_bias = floor( x( 5 ) + 0.5 ) ; ## Use output bias, or not? 1 == yes , 0 == no.
##lambda = x( 6 ) ;                        ## the regularization coefficient lambda 

##length_jj_loop = 25 ;
##all_aic_values = zeros( length_jj_loop , 1 ) ;  

rand( 'seed' , 0 ) ;
randn( 'seed' , 0 ) ;
##U_sample_targets = unique( sample_targets ) ;
##nclass = numel( U_sample_targets ) ;

##sample_targets_temp = zeros( numel( sample_targets ) , nclass ) ; 
##
#### get the 0 - 1 one hot coding for the target,  
##for i = 1 : nclass
##  idx = sample_targets == U_sample_targets( i ) ;
##  sample_targets_temp( idx , i ) = 1 ;
##endfor

sample_targets_temp = sample_targets ;

[ Nsample , Nfea ] = size( sample_features ) ;

######### get type of randomisation from input x #################
if ( randomisation_type == 1 ) ## uniform randomisation 
  
  if ( scale_mode == 3 ) ## range scaled for uniform randomisation
     Weight = scale * ( rand( Nfea , hidden_layer_size ) * 2 - 1 ) ; ## scaled uniform random input weights to hidden layer
     Bias = scale * rand( 1 , hidden_layer_size ) ;                  ## scaled random bias weights to hidden layer
  else
     Weight = rand( Nfea , hidden_layer_size ) * 2 - 1 ; ## unscaled random input weights to hidden layer
     Bias = rand( 1 , hidden_layer_size ) ;              ## unscaled random bias weights to hidden layer
  endif
    
elseif ( randomisation_type == 2 ) ## gaussian randomisation
  Weight = randn( Nfea , hidden_layer_size ) ; ## gaussian random input weights to hidden layer
  Bias = randn( 1 , hidden_layer_size ) ;      ## gaussian random bias weights to hidden layer
else
  error( 'only Gaussian and Uniform are supported' )
endif
############################################################################

Bias_train = repmat( Bias , Nsample , 1 ) ;
H = sample_features * Weight + Bias_train ;

k_parameters = numel( Weight ) + numel( Bias_train ) ;

## Activation Function    
Saturating_threshold = [ -2.1 , 2.1 ] ;
Saturating_threshold_activate = [ 0 , 1 ] ;

if ( scale_mode == 1 )
  ## scale the features for all neurons 
  [ H , k , b ] = Scale_feature( H , Saturating_threshold , scale ) ;
elseif ( scale_mode == 2 ) 
  ## else scale the features for each hidden neuron separately
  [ H , k , b ] = Scale_feature_separately( H , Saturating_threshold , scale ) ;
endif
  
## actual activation, the radial basis function
H = exp( -abs( H ) ) ;

## glm training always applies a bias, so comment out if training with netlab glm
##if ( if_output_bias == 1 )
##  ## we will use an output bias
##  H = [ H , ones( Nsample , 1 ) ] ; 
##endif

## the direct link scaling options, concatenate hidden layer and sample_features 
if ( scale_mode == 1 )
  ## scale the features for all neurons
  sample_features_temp = sample_features .* k + b ;
  H = [ H , sample_features_temp ] ;
elseif ( scale_mode == 2 )
  ## else scale the features for each hidden neuron separately
  [ sample_features_temp , ktr , btr ] = Scale_feature_separately( sample_features , Saturating_threshold_activate , scale ) ;
  H = [ H , sample_features_temp ] ;
else
  H = [ H , sample_features ] ;
endif

H( isnan( H ) ) = 0 ; ## avoids any 'blowups' due to nans in H

############ THE ORIGINAL REGULARISED LEAST SQUARES CODE ###################
## do the regularized least squares for concatenated hidden layer output            
## and the original, possibly scaled, input sample_features                         
##if ( hidden_layer_size < Nsample )                                                  
## beta = ( eye( size( H , 2 ) ) / lambda + H' * H ) \ H' * sample_targets_temp ;     
##else                                                                                
## beta = H' * ( ( eye( size( H , 1 ) ) / lambda + H * H' ) \ sample_targets_temp ) ; 
##endif                                                                               
############################################################################

##k_parameters = k_parameters + numel( beta ) ;

## get the model predicted target output
##sample_targets_temp = H * beta ;

## the final logistic output
##final_output = 1.0 ./ ( 1.0 .+ exp( -sample_targets_temp ) ) ;

############ REPLACED BY GLM TRAINING USING NETLAB #########################
net = glm( size( H , 2 ) , 1 , 'logistic' ) ; ## Create a generalized linear model structure.
options = foptions ; ## Set default parameters for optimisation routines, for compatibility with MATLAB's foptions()
options( 1 ) = -1 ;   ## change default value
## OPTIONS(1) is set to 1 to display error values during training. If
## OPTIONS(1) is set to 0, then only warning messages are displayed.  If
## OPTIONS(1) is -1, then nothing is displayed.
options( 14 ) = 5 ; ## change default value
## OPTIONS(14) is the maximum number of iterations for the IRLS
## algorithm;  default 100.
net = glmtrain( net , options , H , sample_targets ) ;

k_parameters = k_parameters + net.nwts ;

## get output of trained glm model
final_output = glmfwd( net , H ) ;
############################################################################

##Y_temp = zeros( Nsample , 1 ) ;
##% decode the target output
##for i = 1 : Nsample
##    [ maxvalue , idx ] = max( sample_targets_temp( i , : ) ) ;
##    Y_temp( i ) = U_sample_targets( idx ) ;
##endfor

############################################################################

##  https://machinelearningmastery.com/logistic-regression-with-maximum-likelihood-estimation/
##
##  likelihood = yhat * y + (1 – yhat) * (1 – y)
##
##  We can update the likelihood function using the log to transform it into a log-likelihood function:
##
##      log-likelihood = log(yhat) * y + log(1 – yhat) * (1 – y)

##  Finally, we can sum the likelihood function across all examples in the dataset to maximize the likelihood:
##
##      maximize sum i to n log(yhat_i) * y_i + log(1 – yhat_i) * (1 – y_i)

log_likelihood = sum( log( final_output .+ epsilon ) .* sample_targets + log( 1 .- final_output .+ epsilon ) .* ( 1 .- sample_targets ) ) ;

## get Akaike Information criteria
J = 2 * k_parameters - 2 * log_likelihood ;

## get the Brier_score
## https://en.wikipedia.org/wiki/Brier_score
##J = mean( ( final_output .- sample_targets_temp ) .^ 2 ) ;

rand( 'state' , 'reset' ) ; randn( 'state' , 'reset' ) ; ## reset the rng

endfunction

## Various measures of goodness
## https://stats.stackexchange.com/questions/312780/why-is-accuracy-not-the-best-measure-for-assessing-classification-models
## https://www.fharrell.com/post/classification/
## https://stats.stackexchange.com/questions/433628/what-is-a-reliable-measure-of-accuracy-for-logistic-regression
## https://www.jstatsoft.org/article/view/v090i12
## https://stats.idre.ucla.edu/other/mult-pkg/faq/general/faq-what-are-pseudo-r-squareds/
## https://stats.stackexchange.com/questions/319666/aic-with-test-data-is-it-possible
## https://www.learningmachines101.com/lm101-076-how-to-choose-the-best-model-using-aic-or-gaic/
## https://stackoverflow.com/questions/48185090/how-to-get-the-log-likelihood-for-a-logistic-regression-model-in-sklearn
## https://stats.stackexchange.com/questions/67903/does-down-sampling-change-logistic-regression-coefficients
## https://stats.stackexchange.com/questions/163221/whats-the-measure-to-assess-the-binary-classification-accuracy-for-imbalanced-d
## https://stats.stackexchange.com/questions/168929/logistic-regression-is-predicting-all-1-and-no-0
## https://stats.stackexchange.com/questions/435307/multiple-linear-regression-lse-when-one-of-parameter-is-known
Both of these functions are working code; they are heavily commented but perhaps not very polished.

As I write this post I have various tests running in the background and will report on the results in due course.