Although there isn’t evidence in the fact that the statistician eventually stopped, there is (statistical) evidence in the amount of time it took them to stop.
I haven’t worked it out formally, but maybe this explains the “paradox” about the beliefs of the statistician vs. the beliefs of a Bayesian watching them. Learning that they stopped eventually tells you nothing. Learning that they stopped at time N as opposed to N/10000 or N*10000 does tell you something. So the outsider can update on that information, and update further if they get to see the data. (This solves the problem of “I knew you were going to produce some data set leading you to have that posterior, so the data set you have doesn’t move me closer to believing that posterior” – you didn’t know the amount of time it would take to produce that data set.)
I’m not sure this makes sense at all. (Exactly what information do you learn from the stopping time alone? I’d have to work it out.)
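One way to see what the stopping time alone carries is a quick Monte Carlo. This is a hypothetical sketch, not worked out formally: a coin-flip experiment peeked at every 100 flips with a one-sided z-test, stopping at the first significant look. The function name, batch size, and effect sizes are all illustrative assumptions.

```python
import math
import random

def stopping_time(p_heads, batch=100, max_batches=50):
    """Flip a coin in batches of `batch`; after each batch, run a
    one-sided z-test for 'p > 0.5' and stop at the first significant
    look. Returns the total number of flips, or None if we give up."""
    z_crit = 1.645  # one-sided 5% critical value (assumed threshold)
    heads = 0
    for b in range(1, max_batches + 1):
        heads += sum(random.random() < p_heads for _ in range(batch))
        n = b * batch
        # z-statistic under the null p = 0.5
        z = (heads - 0.5 * n) / math.sqrt(0.25 * n)
        if z > z_crit:
            return n
    return None  # never reached significance within max_batches

random.seed(0)
# Stronger effect (p = 0.6) vs. weaker effect (p = 0.52):
strong = [stopping_time(0.60) for _ in range(200)]
weak = [stopping_time(0.52) for _ in range(200)]
```

Under these assumptions, the stronger effect tends to stop after far fewer flips than the weaker one, so the stopping time by itself shifts your beliefs about the effect size even before you see the data.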
This is what I’d kind of been thinking. The weaker the effect, the longer you’d expect to have to run the trial before getting a positive result.
Like, the odds of getting a positive after 500 trials are what they are. And if we do another 100 trials every time we don’t get a positive, then ending up with 2000 trials tells us the test failed fifteen times (500 + 15 × 100 = 2000).
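The arithmetic in that design can be sketched directly. A minimal illustration, assuming a hypothetical protocol that starts at 500 trials and adds 100 more after each failed significance check:

```python
def failures_before_success(total_trials, initial=500, batch=100):
    """Given the total trial count at which a sequential design stopped,
    recover how many times the significance test failed along the way.
    Assumes the (hypothetical) protocol: run `initial` trials, then add
    `batch` more after every failure until a positive result."""
    extra = total_trials - initial
    if extra < 0 or extra % batch != 0:
        raise ValueError("total inconsistent with this stopping protocol")
    return extra // batch

print(failures_before_success(2000))  # 15 failed checks before stopping
```

So the total of 2000 trials encodes fifteen failures, which is exactly the information an outside observer can update on.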
