Wednesday, 27 February 2013

Restricted Boltzmann Machine

In an earlier post I said that I would write about Restricted Boltzmann machines, and now that I have begun adapting the code from the Geoffrey Hinton course, this is the first of possibly several posts on the topic.

Essentially, I am going to use the RBM to conduct unsupervised learning on unlabelled real market data, using some of the indicators I have developed, in order to extract relevant features and initialise the input-to-hidden layer weights of my market classifying neural net. I will then conduct backpropagation training of this feedforward neural network using the labelled data from my usual, idealised market types.
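As a purely illustrative aside, below is a minimal Octave sketch of the single-step contrastive divergence (CD-1) update at the heart of RBM training; the variable names and sizes are my own inventions for this sketch and are not taken from the course code.

% Minimal CD-1 sketch with dummy data (illustrative names and sizes only)
num_visible = 6; num_hidden = 4; learning_rate = 0.1;
data = rand(10, num_visible);              % 10 dummy training cases, scaled 0-1
W = 0.01 * randn(num_visible, num_hidden); % input-to-hidden weights
hid_bias = zeros(1, num_hidden);
vis_bias = zeros(1, num_visible);
sigmoid = @(x) 1 ./ (1 + exp(-x));

% Positive phase: hidden probabilities given the data, plus a binary sample
pos_hid_probs = sigmoid(data * W + repmat(hid_bias, rows(data), 1));
pos_assoc = data' * pos_hid_probs;
hid_states = pos_hid_probs > rand(size(pos_hid_probs));

% Negative phase: reconstruct the visible units, then re-infer the hiddens
recon = sigmoid(hid_states * W' + repmat(vis_bias, rows(data), 1));
neg_hid_probs = sigmoid(recon * W + repmat(hid_bias, rows(data), 1));
neg_assoc = recon' * neg_hid_probs;

% CD-1 weight and bias updates: positive minus negative phase statistics
W = W + learning_rate * (pos_assoc - neg_assoc) / rows(data);
hid_bias = hid_bias + learning_rate * mean(pos_hid_probs - neg_hid_probs);
vis_bias = vis_bias + learning_rate * mean(data - recon);

Repeating this update over many iterations, on the real feature data rather than the dummy matrix above, is what produces the trained weights discussed below.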

Readers may well ask, "What's the point of doing this?" Well, taken from my course assignment notes, and edited by me for relevance to this post, we have:-

In the previous assignment we tried to reduce overfitting by learning less (early stopping, fewer hidden units, etc.). RBMs, on the other hand, reduce overfitting by learning more: the RBM is being trained unsupervised so it's working to discover a lot of relevant regularity in the input features, and that learning distracts the model from excessively focusing on class labels. This is much more constructive distraction: instead of early stopping the model after a little learning we instead give the model something much more meaningful to do. ...it works great for regularisation, as well as training speed. ... In the previous assignment we did a lot of work selecting the right number of training iterations, the right number of hidden units, and the right weight decay. ... Now we don't need to do that at all, ... the unsupervised training of the RBM provides all the regularisation we need. If we select a decent learning rate, that will be enough, and we'll use lots of hidden units because we're much less worried about overfitting now.

Of course, a picture is worth a thousand words, so below are a 2D and a 3D picture.

These two pictures show the weights of the input-to-hidden layer after only two iterations of RBM training, and effectively represent a "typical" random initialisation of weights prior to backpropagation training. It is from this type of random start that the class labelled data would normally be used to train the NN.

These next two pictures tell a different story.

These show weights after 50,000 iterations of RBM training. Quite a difference, and it is from this sort of start that I will now train my market classifier NN using the class labelled data.

Some features are easily seen. Firstly, the six columns on the "left" sides of these pictures result from the cyclic period features in the real data, expressed in binary form, and effectively form the weights that will attach to the NN bias units. Secondly, the "right" side shows the most recent data in the look back window applied to the real market data. The weights here have greater magnitude than those further back, reflecting the fact that shorter periods are more prevalent than longer ones and that, as is perhaps intuitively obvious, more recent data has greater importance than older data. Finally, the colour mapping shows that across the entire weight matrix the magnitude of the values has been decreased by the RBM training, demonstrating its regularisation effect.
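For anyone wishing to produce similar pictures, a minimal Octave sketch of the kind of plotting involved is shown below; the weight matrix here is just a dummy random one standing in for the actual input-to-hidden weights.

% Visualise an input-to-hidden weight matrix (dummy data for illustration)
W = randn(20, 50);          % rows = input features, columns = hidden units

figure(1);                  % 2D view: colour-mapped weight values
imagesc(W); colorbar;
xlabel('Hidden units'); ylabel('Input features');
title('Input-to-hidden weights (2D)');

figure(2);                  % 3D view of the same matrix
surf(W);
xlabel('Hidden units'); ylabel('Input features'); zlabel('Weight value');
title('Input-to-hidden weights (3D)');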

Saturday, 23 February 2013

Regime Switching Article

Readers might be interested in this article about Regime Switching, from the IFTA journal, which in intent somewhat mirrors my attempts at market classification via neural net modelling.

Sunday, 27 January 2013

Softmax Neural Net Classifier "Half" Complete

Over the last few weeks I have been busy working on the neural net classifier with a Softmax output layer, and I have now trained it on enough of my usual training data that approximately half of the real data I have would be classified by it, rather than by my previously and incompletely trained "reserve" neural net.

It has taken this long to get this far for a few reasons: the need to substantially adapt the code from the Geoff Hinton neural net course, the grid searches over the hyper-parameter space for the "optimum" learning rate and number of neurons in the hidden layer, and the incorporation of some changes to the feature set used as input to the classifier. At this halfway stage I thought I would subject the classifier to the cross validation test of my recent post of 5th December 2012, and the results, which speak for themselves, are shown in the box below.
Random NN
Complete Accuracy percentage: 99.610000

"Acceptable" Mis-classifications percentages 
Predicted = uwr & actual = unr: 0.000000
Predicted = unr & actual = uwr: 0.062000
Predicted = dwr & actual = dnr: 0.008000
Predicted = dnr & actual = dwr: 0.004000
Predicted = uwr & actual = cyc: 0.082000
Predicted = dwr & actual = cyc: 0.004000
Predicted = cyc & actual = uwr: 0.058000
Predicted = cyc & actual = dwr: 0.098000

Dubious, difficult to trade mis-classification percentages 
Predicted = uwr & actual = dwr: 0.000000
Predicted = unr & actual = dwr: 0.000000
Predicted = dwr & actual = uwr: 0.000000
Predicted = dnr & actual = uwr: 0.000000

Completely wrong classifications percentages 
Predicted = unr & actual = dnr: 0.000000
Predicted = dnr & actual = unr: 0.000000

End NN
Complete Accuracy percentage: 98.518000

"Acceptable" Mis-classifications percentages 
Predicted = uwr & actual = unr: 0.002000
Predicted = unr & actual = uwr: 0.310000
Predicted = dwr & actual = dnr: 0.006000
Predicted = dnr & actual = dwr: 0.036000
Predicted = uwr & actual = cyc: 0.272000
Predicted = dwr & actual = cyc: 0.036000
Predicted = cyc & actual = uwr: 0.344000
Predicted = cyc & actual = dwr: 0.210000

Dubious, difficult to trade mis-classification percentages 
Predicted = uwr & actual = dwr: 0.000000
Predicted = unr & actual = dwr: 0.000000
Predicted = dwr & actual = uwr: 0.000000
Predicted = dnr & actual = uwr: 0.000000

Completely wrong classifications percentages 
Predicted = unr & actual = dnr: 0.000000
Predicted = dnr & actual = unr: 0.000000

This classifier has 3 sigmoid logistic neurons in its single hidden layer, and early stopping was employed during training. I also tried adding L2 regularisation, but this didn't seem to have any appreciable effect, so after a while I dropped it. All in all, I'm very pleased with my efforts and the classifier's performance so far. Over the next few weeks I shall continue with the training, and when it is complete I shall post again.
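For completeness, here is a minimal Octave sketch of the softmax output computation and the cross-entropy cost used with it; the layer sizes match the description above (3 sigmoid hidden neurons, 5 market type classes), but the variable names and random inputs are illustrative only and not my actual classifier code.

% Forward pass through a one-hidden-layer net with a softmax output
% (illustrative sizes: 10 inputs, 3 sigmoid hidden neurons, 5 classes)
sigmoid = @(x) 1 ./ (1 + exp(-x));
input = rand(1, 10);                     % one dummy feature vector
W1 = 0.1 * randn(10, 3);  b1 = zeros(1, 3);
W2 = 0.1 * randn(3, 5);   b2 = zeros(1, 5);

hidden = sigmoid(input * W1 + b1);

% Softmax: subtract the max for numerical stability before exponentiating
logits = hidden * W2 + b2;
logits = logits - max(logits);
class_probs = exp(logits) ./ sum(exp(logits));   % sums to 1 over the 5 classes

% Cross-entropy cost against a one-hot target (class 2 as an example)
target = [0 1 0 0 0];
cost = -sum(target .* log(class_probs));

The predicted market type is simply the class with the highest value in class_probs.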

On a related note, I have recently added another blog to the blogroll because I was impressed with a series of posts over the last couple of years concerning that particular blogger's investigations into neural nets for trading, especially the last two posts here and here. The ideas covered in these last two posts chime with my post here, where I first talked about using a neural net as a market classifier, based on the work I did in Andrew Ng's course on recognising handwritten digits from pixel values. I shall follow this new blogroll addition with interest!

Thursday, 3 January 2013

The Coin Toss Experiment

" 'The coin toss experiment' provides an indication that when one comes across a process that generates many system alternatives with many equity curves, some acceptable and some unacceptable, one may get fooled by randomness. Minimizing data-mining and selection bias is a very involved process for the most part outside the capabilities of the average user of such processes," taken from a recent addition to the blogroll. Interesting!

Wednesday, 5 December 2012

Neural Net Market Classifier to Replace Bayesian Market Classifier

I have now completed the cross validation test I wanted to run, which compares my current Bayesian classifier with the recently retrained "reserve neural net," the results of which are shown in the code box below. The test consists of 50,000 random iterations of my usual "ideal" 5 market types, with the market classifications from both of the above classifiers being compared with the actual, known market type. There are two points of comparison in each iteration: the last price bar in the sequence, identified as "End," and a randomly picked price bar from the four immediately preceding it, identified as "Random."
Number of times to loop: 50000
Elapsed time is 1804.46 seconds.

Random NN
Complete Accuracy percentage: 50.354000

"Acceptable" Mis-classifications percentages 
Predicted = uwr & actual = unr: 1.288000
Predicted = unr & actual = uwr: 6.950000
Predicted = dwr & actual = dnr: 1.268000
Predicted = dnr & actual = dwr: 6.668000
Predicted = uwr & actual = cyc: 3.750000
Predicted = dwr & actual = cyc: 6.668000
Predicted = cyc & actual = uwr: 2.242000
Predicted = cyc & actual = dwr: 2.032000

Dubious, difficult to trade mis-classification percentages 
Predicted = uwr & actual = dwr: 2.140000
Predicted = unr & actual = dwr: 2.140000
Predicted = dwr & actual = uwr: 2.500000
Predicted = dnr & actual = uwr: 2.500000

Completely wrong classifications percentages 
Predicted = unr & actual = dnr: 0.838000
Predicted = dnr & actual = unr: 0.716000

End NN
Complete Accuracy percentage: 48.280000

"Acceptable" Mis-classifications percentages 
Predicted = uwr & actual = unr: 1.248000
Predicted = unr & actual = uwr: 7.630000
Predicted = dwr & actual = dnr: 0.990000
Predicted = dnr & actual = dwr: 7.392000
Predicted = uwr & actual = cyc: 3.634000
Predicted = dwr & actual = cyc: 7.392000
Predicted = cyc & actual = uwr: 1.974000
Predicted = cyc & actual = dwr: 1.718000

Dubious, difficult to trade mis-classification percentages 
Predicted = uwr & actual = dwr: 2.170000
Predicted = unr & actual = dwr: 2.170000
Predicted = dwr & actual = uwr: 2.578000
Predicted = dnr & actual = uwr: 2.578000

Completely wrong classifications percentages 
Predicted = unr & actual = dnr: 1.050000
Predicted = dnr & actual = unr: 0.886000

Random Bayes
Complete Accuracy percentage: 19.450000

"Acceptable" Mis-classifications percentages 
Predicted = uwr & actual = unr: 7.554000
Predicted = unr & actual = uwr: 2.902000
Predicted = dwr & actual = dnr: 7.488000
Predicted = dnr & actual = dwr: 2.712000
Predicted = uwr & actual = cyc: 5.278000
Predicted = dwr & actual = cyc: 2.712000
Predicted = cyc & actual = uwr: 0.000000
Predicted = cyc & actual = dwr: 0.000000

Dubious, difficult to trade mis-classification percentages 
Predicted = uwr & actual = dwr: 5.730000
Predicted = unr & actual = dwr: 5.730000
Predicted = dwr & actual = uwr: 5.642000
Predicted = dnr & actual = uwr: 5.642000

Completely wrong classifications percentages 
Predicted = unr & actual = dnr: 0.162000
Predicted = dnr & actual = unr: 0.128000

End Bayes
Complete Accuracy percentage: 24.212000

"Acceptable" Mis-classifications percentages 
Predicted = uwr & actual = unr: 8.400000
Predicted = unr & actual = uwr: 2.236000
Predicted = dwr & actual = dnr: 7.866000
Predicted = dnr & actual = dwr: 1.960000
Predicted = uwr & actual = cyc: 6.142000
Predicted = dwr & actual = cyc: 1.960000
Predicted = cyc & actual = uwr: 0.000000
Predicted = cyc & actual = dwr: 0.000000

Dubious, difficult to trade mis-classification percentages 
Predicted = uwr & actual = dwr: 5.110000
Predicted = unr & actual = dwr: 5.110000
Predicted = dwr & actual = uwr: 4.842000
Predicted = dnr & actual = uwr: 4.842000

Completely wrong classifications percentages 
Predicted = unr & actual = dnr: 0.048000
Predicted = dnr & actual = unr: 0.040000

A Quick Analysis
  • Looking at the figures for complete accuracy, it can be seen that the Bayesian classifier performs at about the level of random guessing (roughly 20% across the five market types), with 19.45% and 24.21% for "Random Bayes" and "End Bayes" respectively. The corresponding accuracy figures for the NN are 50.35% and 48.28%.
  • In the "dubious, difficult to trade" mis-classification category Bayes gets approx. 22% and 20% of classifications wrong, whilst for the NN these figures fall to approx. 9.5% and 9.5%.
  • In the "acceptable" mis-classification category Bayes gets approx. 29% and 29%, with the NN more or less the same at approx. 31% and 32%.
Although this is not a completely rigorous test, I am satisfied that the NN has shown its superiority over the Bayesian classifier. I also believe that there is significant scope to improve the NN even further by adding features, changing the architecture, using Softmax units etc. As a result, I have decided to gracefully retire the Bayesian classifier and deploy the NN classifier in its place.
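Purely for reference, below is a much simplified Octave sketch of the kind of tallying loop behind the percentages above; the dummy classifier and placeholder price series are stand-ins, so all names and numbers here are illustrative rather than my actual test code.

% Simplified tally loop for one classifier (all names are stand-ins)
num_iters = 1000;                        % 50,000 in the actual test
num_types = 5;                           % unr, uwr, cyc, dwr, dnr
confusion = zeros(num_types, num_types); % rows = actual, columns = predicted

% Dummy classifier so the sketch runs; the real call would be to the NN
classify = @(prices) randi(num_types);

for ii = 1:num_iters
  actual_type = randi(num_types);        % pick one of the 5 ideal market types
  prices = cumsum(randn(100, 1));        % placeholder for an ideal market series
  predicted_type = classify(prices);
  confusion(actual_type, predicted_type) += 1;
end

printf('Complete Accuracy percentage: %f\n', 100 * sum(diag(confusion)) / num_iters);
% Each off-diagonal entry of 100 * confusion / num_iters gives one of the
% predicted/actual mis-classification percentages reported above.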

Wednesday, 28 November 2012

Geoff Hinton's Coursera Course Almost Ended

I am now in the final week of the course (see previous post) and just have the final exam to complete. The course has been very intensive, very interesting and much more difficult than the first machine learning course I took. Personally, the big takeaways from this course for the things that I want to do are:
  • Softmax activation function for output layers. I intend to replace my current use of the Sigmoid function in the output layer of my standby neural net with this Softmax function. The Softmax is far more suitable for my intended classification purposes.
  • Octave code for using momentum to speed up the training of a neural net (a minimal sketch of the idea follows this list).
  • Restricted Boltzmann machines, the stacking thereof and deep learning, and unsupervised learning. I shall talk more about this in a future post.
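A minimal Octave sketch of the momentum idea mentioned above is given below, applied to a toy error surface rather than a neural net; the variable names are my own and this is not the course code itself.

% Gradient descent with momentum on a toy quadratic error, sum(weights.^2)
learning_rate = 0.01;
momentum = 0.9;
weights = randn(5, 1);                 % illustrative parameter vector
velocity = zeros(size(weights));       % decayed running average of past updates

for iter = 1:200
  gradient = 2 * weights;              % gradient of the toy error
  velocity = momentum * velocity - learning_rate * gradient;
  weights = weights + velocity;        % each step carries a fraction of previous steps
end

printf('Final error: %f\n', sum(weights .^ 2));

Because each update reuses a fraction of the previous one, consistent gradient directions build up speed, which is where the training speed-up comes from.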
With regard to the training of my standby neural net, it is mostly complete. I say mostly because, as soon as I learned about the above mentioned items, I stopped the training once the net had seen sufficient data to cover almost 99% of the dominant cycle periods to be found in the data. It seemed pointless to continue with increasing training times and diminishing returns, particularly since the net is destined to be remodelled and retrained using what I've just learned. For now I will subject it to cross validation testing and, if it passes, I shall deploy it for a short period until it is replaced by the neural net I have in mind following on from the course.

Thursday, 4 October 2012

Change in Neural Net Training Plans

After having spent the last few days training my NNs and seeing how long it is taking on my new data, I have decided to change my training plans. I had been simultaneously training (on two separate computers) my decision tree idea alongside a more "normal" multi-class NN, in the hope of eventually comparing the two. However, I anticipate that if I continued with this two-pronged approach it would take about a month to finish, and I'd like quicker results than that. Also, my attempt to use the hyperbolic tangent activation function hasn't been very successful, and I'm not sure whether it's my coding or some deeper theoretical reason why it isn't working satisfactorily. Another reason is that the Coursera Neural Nets for Machine Learning course has just started, the syllabus for which is shown below:-

Lecture 1: Introduction
Lecture 2: The Perceptron learning procedure
Lecture 3: The backpropagation learning procedure
Lecture 4: Learning feature vectors for words
Lecture 5: Object recognition with neural nets
Lecture 6: Optimisation: How to make the learning go faster
Lecture 7: Recurrent neural networks and advanced optimisation
Lecture 8: How to make neural networks generalise better
Lecture 9: Combining multiple neural networks to improve generalisation
TOPICS TO BE COVERED IN LECTURES 10-16
Deep Autoencoders (including semantic hashing and image search with binary codes)
Hopfield Nets and Simulated Annealing
Boltzmann machines and the general learning algorithm
Restricted Boltzmann machines and contrastive divergence learning
Applications of Restricted Boltzmann machines to collaborative filtering and document modelling.
Stacking restricted Boltzmann machines or shallow autoencoders to make deep nets.
The wake-sleep algorithm and its contrastive version
Recent applications of generatively pre-trained deep nets
Deep Boltzmann machines and how to pre-train them
Modelling hierarchical structure with neural nets

I think that rather than ploughing on with the training of my decision tree NN it would perhaps be better to finish this course before I get too carried away with new NN ideas; for example, lecture 9, or the "stacking of Boltzmann machines," might give me much better insight into the issues involved.

For these reasons I have decided to retrain my "reserve NN" on my enlarged data set with my new feature set, using both computers available to me, whilst I work through the above course. I expect that this reserve NN will be fully trained before the course ends, so then I will be free to experiment with my newly acquired knowledge.