Following on from my previous post, I have been experimenting with various tweaks to the basic set-up of expressing an indicator in the form of a neural net, shown again below to save readers having to flip between posts.
The tweaks explored relate to the weight matrices, activation functions, targets and loss functions, the end results of which are briefly summarized in turn below.

The weight matrices are sparse, with shared weights within each separate matrix, because they simply calculate the 4 indicator outputs which represent the indicator and 3 lagged versions of it, and hence share the same weights. The input weights matrix is a 40 by 20 matrix with only 10 unique parameters to train/update (expressed during training by averaging the updates across the 40 non-zero weights in the matrix). Similarly, the output weights matrix has only 1 unique trainable weight. Together with the bias units (not shown in the diagram above) and the 4 decision weights, the whole neural net has a total of 16 trainable weights.
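To make the weight-sharing concrete, here is a minimal NumPy sketch of building the full 40 by 20 matrix from 10 shared parameters and averaging the dense gradient back onto them. The placement of the non-zero entries below is purely illustrative, an assumption of mine rather than the indicator's actual sparsity pattern:

```python
import numpy as np

# Hypothetical tie map: each non-zero position in the 40x20 input weights
# matrix points at one of 10 shared parameters; -1 marks a structural zero.
# The placement below is illustrative only -- the real pattern follows
# from the indicator's own calculation.
n_shared = 10
tie_idx = np.full((40, 20), -1, dtype=int)
for lag in range(4):                              # indicator + 3 lagged copies
    for k in range(n_shared):
        tie_idx[lag * 10 + k, (lag + k) % 20] = k

params = np.random.default_rng(0).normal(scale=0.1, size=n_shared)

def expand(params, tie_idx):
    """Build the full 40x20 weight matrix from the 10 shared parameters."""
    W = np.zeros(tie_idx.shape)
    mask = tie_idx >= 0
    W[mask] = params[tie_idx[mask]]
    return W

def shared_grad(dL_dW, tie_idx, n_shared):
    """Average a dense gradient over all positions tied to each parameter."""
    g = np.zeros(n_shared)
    mask = tie_idx >= 0
    np.add.at(g, tie_idx[mask], dL_dW[mask])
    counts = np.bincount(tie_idx[mask], minlength=n_shared)
    return g / np.maximum(counts, 1)
```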
The activation units are exactly as discussed in my previous post and shown in the diagram above, namely tanh on the 2 hidden layers and a sigmoid on the final output layer.
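As a rough sketch of that forward pass (the layer sizes and parameter names are my own assumptions based on the description above, not a definitive implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative shapes only: 20 inputs -> 40 tanh units -> 4 tanh indicator
# outputs -> 1 sigmoid decision unit. All names are assumptions.
def forward(x, W_in, b_in, W_out, b_out, w_dec, b_dec):
    h = np.tanh(W_in @ x + b_in)           # first hidden layer, tanh
    ind = np.tanh(W_out @ h + b_out)       # indicator + 3 lags, tanh
    return sigmoid(w_dec @ ind + b_dec)    # final long/short probability
```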
For the targets I used the sign of the slope of a sliding, centred smooth of the OHLC bar opens, on the premise that the first trade opportunity would be acted upon at the opening price of the bar following a buy/sell signal calculated on the close of the previous bar. The smoothing eliminates the whipsaws that would result from a single adverse return in an otherwise smooth run of returns.
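A hedged sketch of how such targets might be built, assuming a simple centred moving average as the smooth (the actual smoothing method and window length are not specified here, so `span` is a hypothetical parameter):

```python
import numpy as np

def make_targets(opens, span=5):
    """Sign of the slope of a centred moving-average smooth of bar opens.
    `span` (odd) is a hypothetical smoothing length. Returns 1 for an
    upward-sloping smooth, 0 for downward."""
    pad = span // 2
    padded = np.pad(np.asarray(opens, dtype=float), pad, mode="edge")
    smooth = np.convolve(padded, np.ones(span) / span, mode="valid")
    slope = np.gradient(smooth)            # central differences
    return (np.sign(slope) > 0).astype(float)
```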
The loss is a combination of a weighted cross-entropy loss and a Sortino ratio loss. The weights for the weighted cross-entropy loss are simply the absolute values of the returns, the idea being that it is more important to get the direction right on the big returns. The Sortino loss is implemented by minimizing the negative of the Sortino ratio, i.e. maximizing the ratio. There are 2 versions of this Sortino loss: an analytical-gradient version that spreads the cost over each training sample per training epoch, similar to the cross-entropy loss, and a global version which uses finite differences over the 4 output decision weights. Both of these Sortino losses were coded with the help of DeepSeek. The two losses are mixed via a mixing parameter, Lambda, which varies from 0 to 1, with 0 being pure cross-entropy loss and 1 being pure Sortino loss.
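Below is a sketch of how the mixed loss might look, assuming positions are derived from the net's sigmoid output (e.g. as 2p - 1); the helper names, the minimum acceptable return of zero and the finite-difference step are all illustrative assumptions:

```python
import numpy as np

def weighted_xent(p, y, returns, eps=1e-12):
    """Cross-entropy weighted by |return|, so big moves dominate the loss."""
    w = np.abs(returns)
    ce = -(y * np.log(p + eps) + (1.0 - y) * np.log(1.0 - p + eps))
    return np.sum(w * ce) / np.sum(w)

def sortino_loss(positions, returns, mar=0.0, eps=1e-12):
    """Negative Sortino ratio of the position-weighted returns; minimizing
    this maximizes the ratio. `mar` is the minimum acceptable return."""
    r = positions * returns
    downside = np.sqrt(np.mean(np.minimum(r - mar, 0.0) ** 2)) + eps
    return -(np.mean(r) - mar) / downside

def mixed_loss(p, y, positions, returns, lam):
    """Lambda = 0 -> pure weighted cross-entropy; Lambda = 1 -> pure Sortino."""
    return (1.0 - lam) * weighted_xent(p, y, returns) \
           + lam * sortino_loss(positions, returns)

def fd_grad(loss_fn, w_dec, h=1e-5):
    """Central finite differences over the 4 decision weights -- the
    'global' Sortino variant described above."""
    g = np.zeros_like(w_dec)
    for i in range(w_dec.size):
        e = np.zeros_like(w_dec)
        e[i] = h
        g[i] = (loss_fn(w_dec + e) - loss_fn(w_dec - e)) / (2.0 * h)
    return g
```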
The following .gif shows the training data equity curves for each type of loss (see the titles of each plot) for every hundredth epoch up to epoch 600, over a week's worth of 10-minute forex data limited to the trading hours described in my previous post. The legend in the top left corner identifies each equity curve.
These plots show that it is possible to improve results over those of the original indicator (shown in red in the above .gif), but of course these are in-sample results. My next post will be about cross-validation and out-of-sample test results.