Firstly, I am going to use the statistical concept of cross-validation, whereby a model is trained on one set of data and tested for accuracy on a separate set of data, which is quite common in the world of back testing. Since the "Delta solutions" I will be testing come from late 2006 (see Testing the Delta Phenomenon, Part 2), the four years from 2008 to 2011 inclusive can be considered the validation set for this test.
The test(s) will assess the accuracy of the predicted highs and lows for this period, on each Delta time frame for which I have solutions, by measuring the difference in days between actual highs and lows in the data and their predicted occurrences, and then averaging these differences. This average error will be the test statistic. Using R, a Null Hypothesis distribution of this average error for random predictions on the same data will be created, using the same number of predicted turning points as the Delta solution being tested. The actual average error test statistic on real data will then be compared with this Null Hypothesis distribution and the Null Hypothesis rejected or not, as the case may be. The Null Hypothesis may be stated as
- given a postulated number of turning points, the accuracy of the Delta Phenomenon in correctly predicting when these turning points will occur, using the average error test statistic described above as the measure of accuracy, is no better than random guessing as to where the turning points will occur,
and the Alternative Hypothesis as
- given a postulated number of turning points, the accuracy of the Delta Phenomenon in correctly predicting when these turning points will occur, using the average error test statistic described above as the measure of accuracy, is better than could be expected from random guessing as to where the turning points will occur.
All of this might be made clearer for readers by following the commented R code below.
# Assume a 365 trading day year, with 4 Delta turning points in this year
# First, create a Delta Turning points solution vector, the projected
# days on which the market will make a high or a low
proj_turns <- c(47,102,187,234) # day number of projected turning points
# now assume we apply the above Delta solution to future market data
# and identify, according to the principles of Delta, the actual turning
# points in the future unseen "real data"
real_turns <- c(42,109,193,226) # actual market turns occur on these days
# calculate the distance between the real_turns and the days on which
# the turn was predicted to occur and calculate the test statistic
# of interest, the avge_error_dist
avge_error_dist <- mean( abs(proj_turns - real_turns) )
print(avge_error_dist) # print for viewing
# calculate the theoretical probability of randomly picking 4 turning points
# in our 365 trading day year and getting an avge_error_dist that is equal
# to or better than the actual avge_error_dist calculated above.
# Taking the first projected turning point at 47 and the actual turning
# point that occurs at 42, to get an error for this point that is as small as or
# smaller than that which actually occurs, we must randomly choose one of
# the following days: 42,43,44,45,46,47,48,49,50,51 or 52. The probability of
# randomly picking one of these numbers out of 1 to 365 inclusive is
a <- 11/365
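# a quick sanity check of the 11-day count above (an added illustration,
# not part of the original calculation): the days d in 1:365 whose error
# abs(47 - d) is no worse than the actual error of 5 are exactly 42 to 52
print( length( which( abs(47 - (1:365)) <= 5 ) ) ) # prints 11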
# and similarly for the other 3 turning points
b <- 15/364 # turning point 2
c <- 13/363 # turning point 3
d <- 17/362 # turning point 4
# Note that the denominator decreases by 1 each time because we are
# sampling without replacement i.e. it is not possible to pick the same
# day more than once. Combining the 4 probabilities above, we get
rdn_prob_as_good <- a*b*c*d # the combined theoretical probability
print( rdn_prob_as_good ) # a very small probability !!!
# but rather than rely on theoretical calculations, we are actually
# going to repeatedly choose 4 random turning points and compare their
# accuracy with the "real" accuracy, as measured by avge_error_dist
# Create our year vector to sample, consisting of 365 numbered days
year_vec <- 1:365
# predefine vector to hold results
result_vec <- numeric(100000) # because we are going to resample 100000 times
# count how many times a random selection of 4 turning points is as good
# as or better than our "real" results
as_good_as <- 0
# do the random turning point guessing, resampling year_vec, in a loop
for(i in 1:100000) {
# randomly choose 4 days from year_vec as turning points
this_sample <- sample( year_vec , size=4 , replace=FALSE )
# sort this_sample so that it is in increasing order
sorted_sample <- sort( this_sample , decreasing=FALSE )
# calculate this_sample_avge_error_dist, our test statistic
this_sample_avge_error_dist <- mean( abs(proj_turns - sorted_sample) )
# if the test statistic is as good as or better than our real result
if( this_sample_avge_error_dist <= avge_error_dist ) {
as_good_as <- as_good_as + 1 # increment as_good_as count
}
# assign this sample result to result_vec
result_vec[i] <- this_sample_avge_error_dist
}
# express as_good_as as a proportion of all resamples, i.e. an empirical p-value
as_good_as_prop <- as_good_as/100000
# some summary statistics of result_vec
mean_of_result_vec <- mean( result_vec )
standard_dev_of_result_vec <- sd( result_vec )
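# express how far the real result lies from the mean of the random
# distribution, measured in standard deviations (a z-score-like figure)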
real_result_from_mean <- ( mean_of_result_vec - avge_error_dist )/standard_dev_of_result_vec
print( as_good_as ) # print for viewing
print( as_good_as_prop ) # print for viewing
print( mean_of_result_vec ) # print for viewing
print( standard_dev_of_result_vec ) # print for viewing
print( real_result_from_mean ) # print for viewing
# plot histogram of the result_vec
hist( result_vec , freq=FALSE, col='yellow' )
abline( v=avge_error_dist , col='red' , lwd=3 )
Typical output of this code is a plot which shows a histogram of the distribution of random prediction average errors in yellow, with the actual average error shown in red. This is for the illustrative, hypothetical values used in the code box above. Terminal prompt output for this run is
[1] 6.5
[1] 2.088655e-06
[1] 38
[1] 0.00038
[1] 63.78108
[1] 32.33727
[1] 1.771364
where
6.5 is the actual average error in days
2.088655e-06 is the "theoretical" probability of Delta being this accurate
38 is the number of times a random prediction is as good as or better than 6.5
0.00038 is 38 expressed as a proportion of the 100000 random predictions made, i.e. the empirical p-value
63.78108 is the mean of the random distribution histogram
32.33727 is the standard deviation of the random distribution histogram
1.771364 is the difference between 63.78108 and 6.5, expressed as a multiple of the 32.33727 standard deviation
This would be an example of the Null Hypothesis being rejected: only 38 of the 100000 random predictions, a proportion of 0.00038, were as accurate as the actual prediction; in statistical parlance, a low p-value. Note, however, the gross difference between this empirical figure and the "theoretical" figure; the theoretical calculation demands that every individual turning point error be at least as small as the actual one, a much stricter condition than merely matching the average error. Also note that, despite the Null being rejected, the actual average error falls well within 2 standard deviations of the mean of the random distribution. This, of course, is due to the extremely heavy right-tailedness of the distribution, which inflates the standard deviation.
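This right-tailedness can be checked numerically as well as visually, for example by printing a few quantiles of result_vec from the run above (assuming result_vec is still in the R session):
# for a heavily right-tailed distribution the upper quantiles sit much
# further above the median than the lower quantiles sit below it
print( quantile( result_vec , probs=c(0.05,0.25,0.5,0.75,0.95) ) )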
This second plot, of the same form as the first,
and its terminal prompt output
[1] 77.75
[1] 2.088655e-06
[1] 48207
[1] 0.48207
[1] 79.85934
[1] 27.60137
[1] 0.07642148
show what a typical failure to reject the Null Hypothesis looks like - a 0.48 p-value - and an actual average error that is indistinguishable from random, typified by it falling well within a nice-looking bell-curve distribution.
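To see what such a failure looks like without waiting for real data, a failing case can be fabricated by treating the "actual" turns as themselves random; this is a purely hypothetical illustration, and fake_real_turns and fake_avge_error are names of my own invention rather than anything from the Delta methodology.
# a hypothetical illustration (not real market data): if the "actual" turns
# are themselves just random days, the resulting average error typically
# lands well inside the random distribution, as in the failure case above
fake_real_turns <- sort( sample( 1:365 , size=4 , replace=FALSE ) )
fake_avge_error <- mean( abs( proj_turns - fake_real_turns ) )
print( fake_avge_error ) # usually of the same order as the random mean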
So there it is, the procedure I intend to follow to objectively test the accuracy of the Delta Phenomenon.
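As a final aside, the explicit loop in the code box above can be written more compactly in idiomatic R using replicate(). The sketch below assumes the proj_turns and avge_error_dist values from that code box, and the set.seed() value is an arbitrary choice of mine, included only to make the run repeatable:
# a more compact version of the same Monte Carlo resampling
set.seed(42) # arbitrary seed, for repeatability only
result_vec <- replicate( 100000 ,
    mean( abs( proj_turns - sort( sample( 1:365 , size=4 , replace=FALSE ) ) ) ) )
# the empirical p-value is the proportion of random guesses that are at
# least as accurate as the real result
print( mean( result_vec <= avge_error_dist ) )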