"Snake-Eyes" Joe introduced a die of his own into a game of chance.
He was subsequently challenged that the die was biased.
Very rigorously test to see if there are grounds to substantiate this claim; don't accept just two or three trial runs. Are you able to offer a theoretical model consistent with your findings?
Test "Snake-Eyes" Joe's Die with this simulator which has a run of 60,000 at a time:
No:    | 1 | 2 | 3 | 4 | 5 | 6 | Total |
Scores | 0 | 0 | 0 | 0 | 0 | 0 |   0   |
Note: the data changes with each subsequent mouse-over of the link.
(In reply to re: error in solution - fallacy? by brianjn)
The counter tn doesn't reach 10 until after the 10th 1 is rolled. So it is the 11th 1 that is skipped. Then tn is reset to 0, and only reaches 10 again after 10 more 1s. Then the 22nd 1 is skipped.
By 22nd 1 I mean the 22nd time the die comes up 1 (g == 0 in the program below); the number of 1s actually counted, t, is still only 20 at this point. Between consecutive resets of tn there are, on average, 60 counted rolls to accumulate ten 1s, 5 more counted non-1 rolls before the next 1 appears, and the one skipped roll itself: 65 counted rolls but 66 actual rolls, 11 of which are 1s. So if we count 60000 rolls, we will have actually made on average 60000*66/65 ~ 60923 rolls. Of these, 60000*11/65 ~ 10154 were 1s, but we only counted 10/11 of them, or ~9231. The other ~923 1s were skipped.
In the limit, the fraction of counted rolls that are 1s is 10/65 = 2/13. For a finite number of rolls n, though, the expected number of 1s should be slightly higher: at the extreme, if n < 11, then the expected number of 1s is just n/6, since no skip can have affected those rolls yet. For larger n this effect is small, though. Without working it out, I think the exact expected number of 1s in n rolls should satisfy a 12th-order linear recurrence, so the solution should have one term equal to 2n/13 and the rest decaying exponentially.
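As a quick check on that claim, the exact expectation can be computed by tracking the probability distribution of the counter tn over its 11 states, one counted roll at a time. Here is a minimal C sketch of that dynamic program, assuming the skip rule described above; for n = 60000 it should come out a hair above 2n/13 ~ 9230.8, consistent with the ~9231 figure.

#include <stdio.h>

int main() {
    /* p[s] = P(tn == s) after the rolls so far; start with tn = 0 */
    double p[11] = {1.0};
    double expected = 0.0;
    int n = 60000, i, s;

    for (i = 0; i < n; i++) {
        double q[11] = {0.0};
        /* probability that this counted roll shows a 1 */
        for (s = 0; s < 10; s++) expected += p[s] / 6.0;
        expected += p[10] / 36.0;   /* a skip, then the reroll is a 1 */
        /* states 0..9: a 1 (prob 1/6) advances tn, a non-1 keeps it */
        for (s = 0; s < 10; s++) {
            q[s + 1] += p[s] / 6.0;
            q[s]     += p[s] * 5.0 / 6.0;
        }
        /* state 10: a non-1 (5/6) stays put; a 1 (1/6) is skipped,
           tn resets, and the reroll lands in state 1 or state 0 */
        q[10] += p[10] * 5.0 / 6.0;
        q[1]  += p[10] / 36.0;
        q[0]  += p[10] * 5.0 / 36.0;
        for (s = 0; s <= 10; s++) p[s] = q[s];
    }
    printf("exact E[1s in %d rolls] = %f  (2n/13 = %f)\n",
           n, expected, 2.0 * n / 13.0);
    return 0;
}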
Anyway, here is a C program to empirically find the average very quickly:
#include <stdio.h>
#include <stdlib.h>

int main() {
    int run, t, tn, c, g;
    /* t = total 1s counted over all runs; tn = 1s since the last skip */
    for (run = t = 0; run < 1000; run++)
        for (c = tn = 0; c < 60000; c++) {
            g = rand() % 6;            /* g == 0 represents a roll of 1 */
            if (g == 0 && tn == 10) {  /* every 11th 1 is skipped... */
                tn = 0;
                g = rand() % 6;        /* ...and the die rerolled */
            }
            if (g == 0) { t++; tn++; }
        }
    printf("mean=%lf\n", 1.0 * t / run);  /* average 1s per run of 60000 */
    return 0;
}
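Each of the 1000 runs counts the 1s in 60000 rolls, so the printed mean should come out near the ~9231 predicted above, i.e. close to 2/13 of 60000.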
Edited on August 1, 2008, 9:00 pm
Posted by Eigenray on 2008-08-01 20:56:51