Wednesday, 15 August 2012

Progress Report on Neural Net

Well, reducing the number of nodes in the hidden layer didn't help much; if anything, it made things look slightly more erratic. So instead I decided to increase the number of input features to 102, which gave much more pleasing results. A screen shot of this newer NN, in the bottom pane, is shown below.
Comparing this with the earlier version shown in my previous post, for example by looking at the smooth uptrend in the middle, you can see that there are far fewer "false" market types indicated - a definite improvement. The moral seems to be that adding more informative features is the way to go.

However, this raises the problem of training time - it took about 30 hours to train this model using my current Octave scripts, which is far too long for me. Because of this I have decided to use the FANN library, fanntool and the octave-fann bindings for my future development of NNs. I've recently been playing around with these and I think that, in the long run, a lot of time will be saved, even though I will have to write a certain amount of "glue code" to achieve what I want. The 102-input-feature NN above will be my reserve NN in the event that I can't get the FANN library, fanntool and octave-fann to work to my satisfaction.
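To give an idea of the sort of code involved, below is a minimal sketch in C (FANN's native language) of creating and training a network with the FANN library. The layer sizes, file names and training parameters are illustrative assumptions only, not my final settings, and the training data file would be in FANN's own plain-text format.

#include "fann.h"

int main(void)
{
    /* three layers: 102 inputs, 25 hidden neurons, 1 output -
       sizes chosen purely for illustration */
    struct fann *ann = fann_create_standard(3, 102, 25, 1);

    fann_set_activation_function_hidden(ann, FANN_SIGMOID_SYMMETRIC);
    fann_set_activation_function_output(ann, FANN_SIGMOID_SYMMETRIC);

    /* "market_train.data" is a hypothetical file in FANN's training format */
    fann_train_on_file(ann, "market_train.data",
                       5000,    /* maximum epochs */
                       100,     /* epochs between progress reports */
                       0.001);  /* desired mean squared error */

    /* save the trained net so it can be loaded and run elsewhere */
    fann_save(ann, "market_nn.net");
    fann_destroy(ann);
    return 0;
}

This would be compiled with something like gcc train_nn.c -o train_nn -lfann, and the saved .net file can then be reloaded with fann_create_from_file for running the network on new inputs.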

Thursday, 2 August 2012

Results of Comparative Cross Validation Tests

As expected, the NN achieved 100% accuracy, and my prediction of 20% to 30% accuracy for my current Naive Bayesian Classifier was more or less right - in various runs with sample sizes of up to 50,000 it achieved accuracy rates of 30% to 33%. A screen shot of both classifiers applied to the last 200 days' worth of S&P futures prices is shown below, with the Naive Bayesian in the upper pane and the NN in the lower pane.
However, despite its vastly superior performance in the tests, I don't really like the look of the NN on real data - it appears to be more erratic, or noisier, than the Bayesian classifier. I suspect that the NN may be overly complex, with 54 nodes in its single hidden layer. I shall try to improve the NN by reducing the number of hidden-layer nodes to 25 and then seeing how that looks on real data.
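Purely for illustration of the kind of comparison I have in mind, here is a much-simplified Octave sketch of a k-fold cross validation of a one-hidden-layer network for two candidate hidden-layer sizes. The data below are random placeholders standing in for my real features and market type labels, the dimensions are illustrative, and the training loop is a bare-bones batch gradient descent rather than my actual scripts.

% k-fold cross validation of a one-hidden-layer NN for two hidden-layer sizes.
% X and y are random placeholders; in practice they would be the real
% feature matrix and market type labels.
n_obs = 1000; n_feat = 102; n_class = 5;   % illustrative dimensions
X = randn(n_obs, n_feat);
y = randi(n_class, n_obs, 1);

k_folds = 5;
hidden_sizes = [54 25];
folds = mod((1:n_obs)' - 1, k_folds) + 1;  % simple fold assignment
sigmoid = @(z) 1 ./ (1 + exp(-z));
T = eye(n_class)(y, :);                    % one-hot target matrix

for h = hidden_sizes
  acc = zeros(k_folds, 1);
  for f = 1:k_folds
    tr = (folds != f); te = !tr;
    Xtr = [ones(sum(tr), 1), X(tr, :)];    % add bias column
    Ttr = T(tr, :);
    W1 = 0.1 * randn(n_feat + 1, h);       % input-to-hidden weights
    W2 = 0.1 * randn(h + 1, n_class);      % hidden-to-output weights
    alpha = 0.1;                           % learning rate
    for epoch = 1:200                      % plain batch gradient descent
      H  = sigmoid(Xtr * W1);
      Hb = [ones(rows(H), 1), H];
      O  = sigmoid(Hb * W2);
      dO = (O - Ttr) .* O .* (1 - O);      % squared-error output delta
      dH = (dO * W2(2:end, :)') .* H .* (1 - H);
      W2 = W2 - alpha * Hb' * dO / rows(Xtr);
      W1 = W1 - alpha * Xtr' * dH / rows(Xtr);
    endfor
    Xte = [ones(sum(te), 1), X(te, :)];
    Hte = sigmoid(Xte * W1);
    Ote = sigmoid([ones(rows(Hte), 1), Hte] * W2);
    [mx, pred] = max(Ote, [], 2);          % predicted class = largest output
    acc(f) = mean(pred == y(te));
  endfor
  printf("hidden nodes = %2d  mean CV accuracy = %.3f\n", h, mean(acc));
endfor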