"Trading is statistics and time series analysis." This blog details my progress in developing a systematic trading system for use on the futures and forex markets, with discussion of the various indicators and other inputs used in the creation of the system. Also discussed are some of the issues/problems encountered during this development process. Within the blog posts there are links to other web pages that are/have been useful to me.

Tuesday, 13 October 2015

Giving up on Runge-Kutta Methods (for now?)

Over the last few weeks I have been looking at using Runge-Kutta methods for the creation of features, but I have decided to give up on this for now, simply because I think I have found a better way to accomplish what I want. I was alerted to this possible approach by this post over at http://dsp.stackexchange.com/ and, following up on it, I remembered that a few years ago I coded up John Ehlers' Linear Prediction Filter (the white paper for which might still be available here). My crudely transcribed code is given below:

#include <octave/oct.h>

DEFUN_DLD (linearpredict, args, , "Help String")
{
  octave_value retval;
  ColumnVector a = args(0).column_vector_value (); // the input price
  ColumnVector b (a);                              // the output column vector returned to Octave by "retval"
  const int n = args(1).int_value ();              // number of bars ahead to predict (1 to 5)
  int lngth = 10;

  double g[30];
  int zz;
  for (zz = 0; zz < 30; zz++)
    g[zz] = 0.0;

  double sigPredict[30];
  for (zz = 0; zz < 30; zz++)
    sigPredict[zz] = 0.0;

  double sigPower = 0.0;
  double mu = 0.0;
  double xBar = 0.0;
  int jj = 0;

  for (octave_idx_type ii (10); ii < a.length (); ii++) // loop start at 10
    {
      // Compute the average power for the normalization factor
      sigPower = 0.0;
      for (jj = 1; jj <= lngth; jj++)
        sigPower = sigPower + a(ii - jj) * a(ii - jj);

      // Convergence factor
      if (sigPower > 0)
        mu = 0.25 / (sigPower * 10);

      // Compute signal estimate
      xBar = 0;
      for (jj = 1; jj <= lngth; jj++)
        xBar = xBar + a(ii - jj) * g[jj];

      // Compute gain coefficients
      for (jj = 1; jj <= lngth; jj++)
        g[jj] = g[jj] + (mu * (a(ii) - xBar) * a(ii - jj));

      // Compute signal prediction waveform
      for (jj = 0; jj <= lngth; jj++)
        sigPredict[jj] = a(ii - (10 - jj));

      // Extend signal prediction into the future
      int kk = 0;
      for (jj = lngth + 1; jj <= lngth + 5; jj++)
        {
          sigPredict[jj] = 0;
          for (kk = 1; kk <= lngth; kk++)
            sigPredict[jj] = sigPredict[jj] + sigPredict[jj - kk] * g[kk];
        }

      b(ii) = sigPredict[lngth + n];
    }

  retval = b;    // assign the output column vector to the return value
  return retval; // return the output to Octave
}
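For anyone who wants to try it, the function can be compiled and called from the Octave prompt along the lines sketched below. The file name linearpredict.cc and the prices.csv input are just illustrative assumptions, and my reading of the code is that the second argument is the number of bars ahead (1 to 5) at which the prediction is taken; the first ten output values are simply copies of the input because the main loop starts at index 10.

mkoctfile linearpredict.cc          # compile the C++ source into linearpredict.oct

price = csvread ("prices.csv");     # hypothetical column vector of closes
pred = linearpredict (price, 3);    # 3 bar ahead prediction at each bar
plot ([price pred]);
legend ("price", "3 bar ahead prediction");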
This filter is very similar in concept to Burg's method. I think some application of these methods shows more promise than concentrating on my naive implementation of Runge-Kutta.
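For comparison, a rough n bar ahead predictor based on Burg's method can be sketched in Octave itself using the arburg function from the Octave-Forge signal package (assuming it is installed). The burg_predict function below is only an illustrative sketch of the idea, not part of the filter above; its name is my own, and the order of 10 simply mirrors the lngth value hard-coded in the oct-file.

pkg load signal

function pred = burg_predict (x, order, n_ahead)
  ## fit an AR(order) model to the column vector x by Burg's method
  a = arburg (x(:), order);    # a = [1, a1, ..., a_order]
  buf = x(:);
  for k = 1:n_ahead            # iterate the one step ahead predictor
    xhat = -sum (a(2:end)(:) .* buf(end:-1:end-order+1)); # xhat(t) = -sum_i a_i * x(t-i)
    buf = [buf; xhat];         # treat the prediction as the next observation
  endfor
  pred = buf(end);
endfunction

## e.g. a 3 bar ahead prediction from the last 50 closes
## prediction = burg_predict (price(end-49:end), 10, 3);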
"Trading is statistics and time series analysis." This blog details my progress in developing a systematic trading system for use on the futures and forex markets, with discussion of the various indicators and other inputs used in the creation of the system. Also discussed are some of the issues/problems encountered during this development process. Within the blog posts there are links to other web pages that are/have been useful to me.
Tuesday, 13 October 2015
Giving up on Runge-Kutta Methods (for now?)
Over the last few weeks I have been looking at using Runge-Kutta methods for the creation of features, but I have decided to give up on this for now simply because I think I have found a better way to accomplish what I want. I was alerted to this possible approach by this post over at http://dsp.stackexchange.com/ and following up on this I remembered that a few years ago I coded up John Ehler's Linear Prediction Filter (the white paper for which might still be available here) and my crudely transcribed code given below: