Replies to this post
The questioner is just talking nonsense. He is assuming, seemingly for no reason, that you know that AC=>BC. If he does not make this false, completely unwarranted assumption, nothing strange happens at all.
(Also it is, of course, wrong to assume any individual person will assign consistent probabilities to events. By “consistent” I mean consistent with what the person knows, not with what some aliens know.)
I’m not really a Bayesian, but the obvious response is that learning that AC implies BC should cause you to update the probability of AC downwards and the probability of BC upwards.
Knowing that a statement is a named conjecture tells you something about how likely it is to be true. Knowing that it’s a named conjecture with a stronger form which is also a named conjecture tells you more.
The “questioner” framework was just a story. The real situations I am thinking about here are ones in which there is no one around to tell you that AC=>BC. Either you simply don’t know it, or you could figure it out but may not currently realize it, or you know it but just don’t have it in mind.
I think I was actually clearer in my earlier post on this subject, so maybe I should have just found and linked that. Or maybe I should rewrite that one.
I just tried to type a clear explanation of my point and failed and deleted it, which is frustrating because it’s clear in my head. I think I’m just too tired.
But my point is something like: if you only have a hazy sense of the area you’re talking about, you’ll be tempted to give most events a sort of “reasonable, conservative” probability estimate, if someone asks you about them or you happen to ask yourself. But since there is all this implication structure (or just non-independence) among the various events, you’re actually implying a whole bunch of nontrivial things by doing this. You may not know about this structure, or only know about some of it, or know about a lot of it but not be able to hold it all in your head at the same time (quick, name every necessary condition for “cupcakes are still being sold in 2050”).
This seems wrong to me. Maybe I am misunderstanding it. Let me try to get my head around it and explain where I’m coming from.
Suppose that Omega comes to Alice and says: “Here is a coin that I have biased to either always come up heads or always come up tails. I will not tell you which. But I will tell you this. I know every truth about human history, and I have arranged for this coin to be biased heads if 9-11 was truly an inside job, and tails otherwise.”
Maybe Alice believes there was only a 0.1% chance that 9-11 was an inside job, so she believes 99.9% chance a flip will land tails, and 0.1% chance it will land heads.
Now Alice goes to Bob, who knows nothing about any of this, and says “What’s your probability distribution over how this coin lands when I flip it in a second?” Bob says “Fifty-fifty heads/tails, obviously.”
Alice says “Aha, so you think there’s an equal chance that 9-11 was or wasn’t an inside job? You’re pretty dumb.”
Obviously this is unfair to Bob. But equally obviously, Bob did exactly the right thing, from his position, to say that the coin flip odds were 50-50.
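The arithmetic behind this can be made explicit. A minimal sketch (using the numbers from the example; both agents are coherent relative to what they each know):

```python
# Alice's prior that 9-11 was an inside job (from the example above).
p_inside_job = 0.001

# Alice knows Omega set the coin to land heads iff the inside-job claim
# is true, so her credence in heads just equals her credence in the claim:
alice_p_heads = p_inside_job

# Bob knows nothing linking the coin to anything, so he puts a symmetric
# prior over the two possible bias settings:
bob_p_heads = 0.5

print(alice_p_heads, bob_p_heads)  # 0.001 0.5
```

Neither number is a mistake; they differ only because Alice and Bob condition on different information.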
But I feel like you’re doing the same thing. Confronting the human for having beliefs that imply surprising facts about the relationships between the alien conjectures makes no more sense than confronting Bob for having beliefs that imply surprising facts about the US government’s relationship with terrorism. But that means your argument proves too much: it suggests we can’t even assign a probability of 50% to a coin toss.
It seems like the example I gave was really bad for communicating what I actually wanted to communicate.
The alien conjectures are meant to be an extreme case of “a situation where we have incomplete information about the dependence relationships between different events.” (In the example, we had no information.) The example is meant to distill a phenomenon that happens in other, less contrived situations.
I’m too tired right now to come up with a good single example, but the prototype case I have in mind is relatively ordinary statements about the future, like “cupcakes are still being sold in 2050.” This seems pretty likely, and at first glance I’d just give it a probability I associate with “pretty likely.” But then you can make various more specific statements, involving people in 2050 doing things with cupcakes they’ve bought, which also seem “pretty likely,” but technically require the first statement and should have lower probability than it, unless they absolutely must happen if cupcakes are still sold.
All these statements are sort of “hidden conjunctions,” which depend on all sorts of prerequisites, many of which may not come to mind directly when thinking about the statement. When everything’s a conjunction of things you’re very unsure about, which are themselves conjunctions of things you’re very unsure about, etc., it becomes hard to keep the probabilities ordered in a way that respects this structure.
Are you just saying most people are bad and inconsistent in their probability assignments and commit the conjunction fallacy? Such that someone might assign 20% to “Linda is a feminist bank teller” but only 10% to “Linda is a bank teller”, which would be absurd? If so, I agree, but I think that’s an argument that humans are bad at this, not that it’s not a good theoretical framework.
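The coherence constraint being violated here is simple to state: a conjunction can never be more probable than either of its conjuncts. A minimal check, using the illustrative numbers from the Linda example (not real survey data):

```python
# Someone's stated credences:
p_bank_teller = 0.10           # P(Linda is a bank teller)
p_feminist_bank_teller = 0.20  # P(Linda is a feminist bank teller)

# "Feminist bank teller" entails "bank teller", so any coherent
# assignment must satisfy P(conjunction) <= P(conjunct):
coherent = p_feminist_bank_teller <= p_bank_teller
print(coherent)  # False -- this pair of credences commits the conjunction fallacy
```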
I think the point is that without a fairly detailed grasp of the situation, there’s no way your credence assignments aren’t going to lead to all sorts of conjunction fallacies. Nostalgebraist is trying to give some examples where this is really clear, but it winds up making the examples not be very convincing.
I think a better example is the statement: “California will (still) be a US state in 2100.” Where if you make me give a probability I’ll say something like “Almost definitely! But I guess it’s possible it won’t. So I dunno, 98%?”
But if you’d asked me to rate the statement “The US will still exist in 2100”, I’d probably say something like “Almost definitely! But I guess it’s possible it won’t. So I dunno, 98%?”
And of course that precludes the possibility that the US will exist but not include California in 2100.
And for any one example you could point to this as an example of “humans being bad at this”. But the point is that if you don’t have a good sense of the list of possibilities, there’s no way you’ll avoid systematically making those sorts of errors.
Consider the following list of statements:
1) In 2100, the US will exist.
2) In 2100, the US will contain states.
3) In 2100, the US will contain states west of the Mississippi.
4) In 2100, the US will contain states west of the Rockies.
5) In 2100, the US will contain California.
In my judgment, all of those statements are “almost certainly true.” And there’s content to that, as a matter of “giving credence to propositions about the future.” But if you want me to assign “probabilities” then you want me to assign numbers to all of those statements in a way that’s consistent across all those statements. And there’s no possible way to do that unless you have a list of all the possible propositions.
Try it. And then ask what you think the probability is that in 2100, the US contains any states bordering the Pacific.
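A sketch of the coherence constraints this chain imposes, assuming the naive move of cashing out “almost certainly” as the same number (say 0.98) for every statement. Statement 5 entails 4, which entails 3, and so on, so probabilities must be non-increasing down the chain:

```python
# The naive "98% each" assignment for statements 1-5:
p = {1: 0.98, 2: 0.98, 3: 0.98, 4: 0.98, 5: 0.98}

# Coherence requires P(5) <= P(4) <= P(3) <= P(2) <= P(1):
chain_ok = all(p[i + 1] <= p[i] for i in range(1, 5))
print(chain_ok)  # True -- equal numbers are technically coherent...

# ...but each gap, e.g. P(US exists AND contains no states) = P(1) - P(2),
# is then forced to zero for every intermediate possibility:
gaps = [round(p[i] - p[i + 1], 10) for i in range(1, 5)]
print(gaps)  # [0.0, 0.0, 0.0, 0.0]

# And since "California is a state" entails "some state borders the
# Pacific", a coherent answer to the final question is bounded below:
p_pacific_lower_bound = p[5]
print(p_pacific_lower_bound)  # 0.98
```

The point is not that equal numbers are incoherent in isolation, but that they silently assign probability zero to every possibility strictly between adjacent statements.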
