"Snake-Eyes" Joe introduced a die of his own into a game of chance.
He was subsequently accused of using a biased die.
Test very rigorously whether there are grounds to substantiate this claim; don't accept just two or three trial runs. Are you able to offer a theoretical model consistent with your findings?
Test "Snake-Eyes" Joe's Die with this simulator which has a run of 60,000 at a time:
No:    | 1 | 2 | 3 | 4 | 5 | 6 | Total |
Scores | 0 | 0 | 0 | 0 | 0 | 0 |     0 |
Note: the data changes with each subsequent visit (mouse-over) to the link.
If you look at the source, you'll see that the process is equivalent to rolling the die normally and then deleting every 11th 1: we count ten 1s, then delete the next 1 and reset the count. So if we imagine rolling the die n times, we get only about n/6*(1 - 1/11) 1s on average.
Furthermore, n/6*(1/11) of those n rolls were deleted and don't increment the roll counter, so in reality only n*(1 - 1/66) rolls are counted. This means the fraction of 1s is actually
[n/6*(1 - 1/11)] / [n*(1 - 1/66)] = (10/66) / (65/66) = 10/65 = 2/13
after a large number of rolls. For 60,000 rolls this gives about 9230.8 1s, and if you do several trials you'll see that this matches the simulator's output much better than the 10,000 1s a fair die would give. Doing 10000 trials in C, I get an average of 9232 1s.
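For concreteness, here is a minimal C sketch of such a simulation (not the original program mentioned above; the constants, variable names, and use of rand() are illustrative assumptions). It rolls a fair die, silently deletes every 11th 1 without counting it as a roll, and averages the number of 1s shown over many 60,000-roll trials.

/*
 * Minimal sketch of the "delete every 11th 1" process described above.
 * Not the original program referenced in the post; constants and the
 * use of rand() are illustrative.  Takes a few seconds to run.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define ROLLS_PER_TRIAL 60000   /* rolls actually counted per trial */
#define TRIALS          10000   /* number of 60,000-roll trials     */

int main(void)
{
    srand((unsigned)time(NULL));
    long long total_ones = 0;

    for (int t = 0; t < TRIALS; t++) {
        int shown = 0;           /* rolls that increment the roll counter */
        int ones = 0;            /* 1s actually shown in this trial       */
        int ones_since_cut = 0;  /* 1s shown since the last deleted 1     */

        while (shown < ROLLS_PER_TRIAL) {
            int face = rand() % 6 + 1;   /* fair underlying die */
            if (face == 1) {
                if (ones_since_cut == 10) {
                    /* 11th 1: delete it, reset the count,
                       and don't count this as a roll */
                    ones_since_cut = 0;
                    continue;
                }
                ones_since_cut++;
                ones++;
            }
            shown++;
        }
        total_ones += ones;
    }

    printf("average 1s per %d rolls: %.1f (predicted %.1f)\n",
           ROLLS_PER_TRIAL,
           (double)total_ones / TRIALS,
           ROLLS_PER_TRIAL * 2.0 / 13.0);
    return 0;
}

If the argument above is right, the printed average should land near the predicted 9230.8 rather than the 10,000 a fair die would give.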
Edited on July 31, 2008, 10:52 am
Posted by Eigenray on 2008-07-31 10:27:26