Sunday, September 30, 2012

How to want to want to change your mind

Back in February, Julia Galef posted a wonderful video full of tips on how to want to change your mind.  When I first watched it, I was excited to gain so many useful tools to share with others, but I must admit I harbored some doubts as to whether I really needed them myself.  I've put so much work, I thought, into learning to be rational.  Surely I already have such a basic skill down pat.

Not so!  Over the past several months, I've paid much closer attention to the phenomenology of disagreement and what it's like from the inside to change my mind.  I've found that even after having been raised by scientists, earning a degree in philosophy, and putting Julia's tricks to frequent use, it really is incredibly difficult.

Along the way, I adopted the following mantra.  Just because I'm more rational than those around me does not mean I am in fact rational.

It's a little tricky to discover instances of my own irrationality for a stubbornly obvious reason: If I were fully aware of being wrong, I'd already be right!  But it's not impossible once you get used to trying.  It's just a matter of recognizing what self-deception feels like.  At first, though, it's easiest to catch irrationality in retrospect, so here's a little exercise that taught me to be on the lookout for resistance to learning the truth.

Exercise One

Next time you change your mind about something, make a study of what led you to do it.   
  1. Get a piece of paper and fold it in half.  In great big numbers in the top left, write down an estimate of how certain you were of your belief before you started the process of changing your mind.  Beneath that, write out, in as much detail as possible, why you held the false belief in the first place.  If there were several pieces of evidence, make a list.   
  2. On the other side, write down all the evidence you collected or considered that ultimately led you to abandon your former hypothesis.  Circle the one that finally did the trick.   
  3. Then, distance yourself from the situation.  Pretend that it's a story about someone else entirely.  Consider each piece of weakening evidence individually, and estimate how much less certain it would make a fully rational Bayesian reasoner on its own and in conjunction with the other pieces of evidence you already had when you started considering this new one.  If you want to be really fancy about it, plug it into Bayes' theorem and run the numbers.  Write those estimations in a column.   
  4. Finally, in another column, estimate how much each piece of evidence really did decrease your certainty of your false belief, and compare those numbers to the Bayesian reasoner's column.
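If you'd like to run the numbers in step 3 mechanically, here's a small sketch. Everything in it is hypothetical: the starting certainty and the likelihoods are made-up stand-ins for whatever your own case looks like.

```python
def update(prior, p_evidence_if_true, p_evidence_if_false):
    """One application of Bayes' theorem: returns the new certainty."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

# How sure you were before the weakening evidence started arriving.
certainty = 0.95

# One pair per piece of evidence:
# (P(seeing it | belief true), P(seeing it | belief false)).
# These numbers are invented for illustration.
evidence = [(0.3, 0.9), (0.2, 0.8), (0.4, 0.9)]

for p_if_true, p_if_false in evidence:
    certainty = update(certainty, p_if_true, p_if_false)
    print(round(certainty, 3))
```

Notice how quickly the ideal reasoner's certainty drains away; that's the column to compare your actual felt certainty against.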

Now, perhaps you’re a whole lot more rational than I am.  But here’s what I find almost every time: my certainty barely changes at all until the final piece of evidence, even though the Bayesian reasoner’s certainty about the false hypothesis falls way below 50% long before that.  

This is what it means to cling to a belief, and it's all the more difficult to overcome in the course of a debate.  Even the most rational among us have human brains full of cognitive biases; defending yourself against them takes serious effort, no matter how far you've come as a rationalist.

But you don't have to take my word for it!  Go do science to it.  I'll see you next time.  ;-)

Tuesday, September 18, 2012

Science is better with Bayes.

My goal here is to explain how to approach science in everyday life through Bayes' theorem.  I promise it'll be fun.

(Made you look.)
One of the (several) problems with falsificationism (Popper's approach to science, which I laid out in a previous post) is that it doesn't give a useful account of degrees of certainty.  It encourages this idea that either you know a thing is true, you know it's false, or you're completely in the dark about it and can't make any rational decisions based on it.  In reality, if you're 90% certain about something, you should mostly act as though it's true, but give yourself a little wiggle room in case you turn out to be wrong.  We're almost never 100% certain about things, and that's perfectly fine.  We can still do good science and make rational decisions while working with probabilities, especially if we take a little advice from Bayes.

Remember back to when you were a little kid and you were just starting to doubt the existence of the tooth fairy.  It was a difficult question, because if there's no tooth fairy then your parents are liars.  And that's bad.  But you can't shake the feeling that this tooth fairy business doesn't quite match up with your understanding of the way the world works.  So you say to the world, "Stand back.  I'm going to try science."

You start with a question.  You want to know how it is that money appears under your pillow whenever you lose a tooth.  The theory you want to test is that the tooth fairy flies into your room, carefully reaches under your pillow, takes the tooth, and leaves money.  So your theory seems to predict that you ought to be able to catch her on camera.  Your test consists of leaving your freshly liberated tooth under your pillow, pointing your webcam at your bed, setting it to record all night, going to sleep, and watching the video the next day.  Your hypothesis is that there will be a fairy somewhere in the video.  Good old capital "S", capital "M" Scientific Method, as usual.

Suppose you get exactly the result you hypothesized.  Sure enough, three hours into the video you see a light from outside, the window opens, and a small shiny woman with wings floats in.  She reaches under your pillow for the tooth, replaces it with money, and then leaves.  The intuitive response to this result is to become wholeheartedly certain that the tooth fairy exists.  Popper's falsificationism tells us it's going to take a whole lot more tests before we should be really certain that the tooth fairy exists, because even though this is a legitimately scientific theory, confirmation isn't nearly as strong as falsification.  But it doesn't tell us how sure we *should* be.  Just that we shouldn't be completely sure.  Should we be 20% sure?  50% sure?  90% sure?

How we should act when we're 20% sure vs. 90% sure is very different indeed.  If you're only 20% sure the tooth fairy exists even though your parents insist she does, you should probably have an important talk with them about honesty, whether they themselves actually believe in her, and maybe skepticism if they really do.  If you're 90% sure, you might want to set up your computer to sound an alarm when it registers a certain amount of light so you can wake up and ask her to let you visit fairyland.  So how do you know how much certainty is rational?

Have no fear.  Bayes is here.

First, you're going to have to guesstimate your certainty about a few things.  You should definitely do this before you even run the experiment.  If you want to be really hardcore about it, convince other people, and generally run things with the rigor of a professional scientist, guesstimating isn't quite going to do the trick.  But everyday science like this is necessarily messy, and that doesn't mean you shouldn't do it.  It's perfectly fine and useful to be somewhere in the ballpark of correct.  So here are the numbers you need.
  • How certain are you that you really will catch her on film if she exists?  You reason that she probably is visible.  Otherwise she wouldn't have to come at night.  And fairies are supposed to glow or something, right?  You can't be invisible if you glow.  On the other hand, you don't really know how magic things interact with the rest of the world, so maybe she's like a vampire and simply can't be caught on film.  Let's call it 80% certainty, or 0.8.
  • How certain are you about the existence of the tooth fairy in the first place, before the experiment?  Since you were definitely becoming a tooth fairy doubter, but still thought it was pretty up-in-the-air, you figure you were about 40% certain that there's a tooth fairy.  You can express that as the decimal 0.4.
  • How likely is it that you'll see a fairy on the recording even if the tooth fairy doesn't actually exist?  It seems really unlikely.  But you can imagine other things that would cause this.  You mentioned to your older brother earlier that you were doubting the tooth fairy, so maybe he'll find out about your plan and play a prank with his film school buddies.  Or maybe some fluke will damage the file so it looks like there's a glowy, person-shaped thing in the recording that was never actually in your room.  So it's imaginable, but unlikely.  Let's say there's a 5% chance something like that could happen.  0.05.
  • Finally, how likely is it that there's no tooth fairy?  Well this one's easy.  You already decided you're 40% sure there's a tooth fairy, so you must be 60% sure there isn't one.  0.6.
Bayes' theorem is all about finding out how much the evidence should change your beliefs, and whether it should change them at all.  It weighs all those factors we just estimated against each other and comes up with a degree of certainty that actually makes sense when you put them together.  Human brains are really bad at weighing probabilities rationally.  They just aren't built to do it.  But that's ok, because we have powerful statistical tools like this to help us out--provided we know how to use them.

If you want to know the nitty gritties of what's really going on inside Bayes' theorem, check out Eliezer Yudkowsky's "excruciatingly gentle introduction to Bayes' theorem".  He's already got that covered (beautifully).  I just want to show you how it ends up working in real life.  So let's run the numbers.

We're looking for the probability that there's a tooth fairy after accounting for having (apparently) caught her on camera.  That's P(A|B), read "probability of A given B", where A is "there's a tooth fairy" and B is "she's in the recording", so "probability that there's a tooth fairy given that she's in the recording". 

In the numerator, we start with P(B|A), which is how likely it is that we really will see her on camera if she exists--probability "she's in the recording" given "there's a tooth fairy".  And that's 0.8.  Next, we multiply that by how sure we were that there's a tooth fairy before we caught her on film, simply probability "there's a tooth fairy".  And that's 0.4, for a total of 0.32 on top.

For the denominator, we start with a value we already have.  "P(B|A) P(A)" is what we just worked out to be 0.32.  So that's on one side of the addition sign.  Next, we want the probability that we'd see the tooth fairy in the recording even if the tooth fairy didn't actually exist.  The squiggly ~ symbol means "not"; P(B|~A) is probability "she's in the recording" given "she doesn't exist".  And that's 0.05.  Then we multiply that by P(~A), the probability that there isn't a tooth fairy, which is 0.6, for a total of 0.03 on the other side of the addition sign.  Add that up, and it's 0.35 on the bottom.

Finally, divide the top by the bottom: 0.32 divided by 0.35 equals 0.914ish.  What does that mean?  It means that if you started out thinking it's a bit less likely that there's a tooth fairy than that there isn't one, and then you caught her on camera, you should change your beliefs so that you're just a little over 90% certain that there's a tooth fairy.
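If you'd rather let a computer do the arithmetic, the whole calculation fits in a few lines. This is just the calculation above written out as Python; the variable names are mine.

```python
# Posterior probability of the tooth fairy, using the estimates from the text.
p_fairy = 0.4                  # P(A): prior that the tooth fairy exists
p_video_given_fairy = 0.8      # P(B|A): she'd show up on film if she exists
p_video_given_no_fairy = 0.05  # P(B|~A): prank, corrupted file, etc.

numerator = p_video_given_fairy * p_fairy                         # 0.32
denominator = numerator + p_video_given_no_fairy * (1 - p_fairy)  # 0.32 + 0.03
posterior = numerator / denominator

print(round(posterior, 3))  # 0.914
```

Change the estimates at the top and rerun to see how sensitive your conclusion is to your guesstimates.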

In other words, you're growing up into an excellent rationalist who just made a groundbreaking discovery.  Go show the world your tooth fairy video, and see about having tea with the faeries.

Everything's better with science, and science is better with Bayes.


Problem Set: No, really, run the numbers.

1) Your power is out. It's storming. Use Bayes' theorem to decide how sure you are that a line is down.

2) A person you're attracted to smiles at you. Are they into you too?

3) (For this one, intuit the answer first. Make your best guess before applying the theorem, and WRITE IT DOWN. It's ok if you're way off. Just about all of us are. That's the point. Human brains aren't built for this kind of problem. I just don't want you falling prey to hindsight bias.) 1% of women at age forty who participate in routine screening have breast cancer. 80% of women with breast cancer will get positive mammographies. 9.6% of women without breast cancer will also get positive mammographies. A woman in this age group had a positive mammography in a routine screening. What is the probability that she actually has breast cancer?
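Once you've written your gut answer down, you can check problem 3 the same way we checked the tooth fairy. The numbers below are exactly the ones given in the problem:

```python
# Problem 3: probability of breast cancer given a positive mammography.
p_cancer = 0.01              # prior: 1% of women screened have breast cancer
p_pos_given_cancer = 0.8     # 80% of women with cancer test positive
p_pos_given_healthy = 0.096  # 9.6% of women without cancer also test positive

numerator = p_pos_given_cancer * p_cancer
denominator = numerator + p_pos_given_healthy * (1 - p_cancer)
posterior = numerator / denominator

print(round(posterior, 3))  # about 0.078 -- under 8%, far lower than most people guess
```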

Monday, September 17, 2012

"Science as Falsification" by Karl Popper: a simple English rendition (with a bit of artistic license)

Just to be clear, I'm not endorsing anything the author is saying.  I'm just trying to make a paper that was highly influential in academia accessible to everybody else too.  The paper of which the following is a rendition was originally published in 1963 in Conjectures and Refutations.  You can read the original version here.

Karl Popper, possibly in need of some simple English.
For the past year or so, I've been worried about the question, "What makes a theory count as scientific?"  I'm not worried about what makes something true or acceptable, just what makes it scientific as opposed to unscientific.  Science often gets things wrong, and people often stumble on things that are right without the help of science, so this can't be just about truth. 

Lots of people think that what makes something count as science is the fact that it came from observation and testing.  But I don't buy that.  Plenty of stuff that doesn't count as science is all about observation.  People believe in astrology, for instance, because they observe that astrologers make predictions that turn out to be true.  So why isn't astrology science?  How is the theory of astrology different from, say, Einstein's theory of general relativity?

The difference is that Einstein's theory might turn out to be wrong, and if it is, we'll eventually know.  We'll know because one day we'll make observations about the world that aren't in line with his theory.  What makes theories like Astrology, Freudian analysis, and other sorts of pseudo-science unscientific is that they can explain everything.  Usually, when we see that a theory is confirmed over and over again, we believe in it even more.  But if there's no way at all, even in principle, to make an observation that isn't in line with the theory, then all those confirmations don't actually mean anything.  Theories like that would be in line with all the same observations even if the theories were false--so if the theory is false, there's no way to find that out.

General relativity, evolution, Newtonian mechanics, and Mendelian genetics are all scientific theories not because there's lots of evidence confirming them, but because they make falsifiable predictions.  They predict certain things about the world, and the predictions are risky because we can check to see if the world really is that way.  If the world doesn't turn out to be the way the theory predicts, then we know the theory is false.  For pseudo-science, we get all the same predictions whether the theory is true or not.  There's no observation we could make to find out whether the theory's false.  Unscientific theories are unfalsifiable, unable to be shown false.

Observations that support a theory only really count as support if the theory makes risky predictions.  If a theory is scientific, you should be able to make a test so that if you get one result, you can continue believing the theory just as much as you did before--but if you get another result, you have to conclude that the theory is false.  Pseudoscience doesn't let you make these kinds of tests, because there's never any result you could possibly get that would make you change your mind and stop believing the theory.

Sometimes people have theories that really are testable, but when the test results don't come out the way they want, they either find some excuse to throw those results away, or they change their theory to match the results so it looks like they were right all along.  That's not science either, because it's impossible to find out that the theory is false when you do things that way, too.

This philosophy of science is called falsificationism, and I came up with it because it draws a line between what is science and what isn't.

Thursday, September 13, 2012

Rationalism Precludes Theism

I just had a long Facebook discussion about what it would take for a rationalist to believe in god.  I raised the question because the better we know exactly what sort of evidence would be required for rational theism, the more justified we are in not being theists.  It turned out to be very difficult to imagine what evidence would suffice.  In the end, I was able to prove that there are no conditions under which it would be rational to believe in god.  This surprised me, so I thought I’d share my argument.

I'll start with bunnies. One person said they’d believe in god given fossil evidence of Cambrian rabbits.  That seemed pretty weak to me at first, but I thought I should at least think it through.  I'm imagining that tomorrow morning I wake up to coffee and NPR, and find that the main story of the day is a claim that paleontologists have uncovered rabbit fossils in Cambrian strata. My first thought is, "Simple mistake. Someone misrepresented information, got confused, fabricated evidence, etc." I do some research. It probably is a simple mistake. But suppose it isn't. Next, I think, "Earthquake anomaly." That seems pretty likely. More research. Along these lines, I entertain increasingly unlikely hypotheses (in careful order). "God did it" is nowhere near the beginning of the list. Part of that is because I'm not sure what it means, but I'll get back to that. I'd be getting near the neighborhood of god territory about the time I started hypothesizing that Earth is an alien science fair project and the rabbit fossil is left over from a test run that got a little messy and wasn't cleaned up all the way. That would indeed involve an intelligent creator of the human race, but it's quite a long way from, say, omnipotence, omniscience, omnipresence, and omnibenevolence.

The first problem with imagining sufficient evidence for belief in god is this: There are a whole lot of things we could mean when we say "god exists".  Not all of them are equally likely. Nor does one kind of evidence justify belief in all of them. "God" is fuzzy. Much like bunnies. It's semantically ambiguous and vague.  So if we want to know what it would take to reasonably believe in god, we’re going to have to figure out what it would take to reasonably believe in a pretty diverse range of entities individually.

That's one of the most frustrating things about talking with theists; they're quick to tell you what they don't mean once they've determined you're arguing for a god in whom they don't believe either, but they usually aren't so quick to pin down what they really do mean. When you try to reason with a theist, therefore, it’s a good idea to ask them explicitly what they mean by god even before you tell them that he doesn’t exist.  With many you get the impression that they themselves don't know what they mean. You'll talk with them for a long while, thinking you're getting somewhere, and then when you bring them to a conclusion they don't like but can't avoid, they say, "Well sure, but that's not what I mean by 'god'. What if god is really x?"

Legend has it that Paul Spade was once teaching a seminar on the philosophy of theology when someone pulled one of these. Another student gave an exasperated sigh, turned to the first student, and remarked, "Look, what if god is a garage in New Jersey?"

This succinctly expresses a rationalist’s frustrations with fuzzy notions of god, but let’s see what happens when we take the question seriously.  If god is a garage in New Jersey, convincing me of his existence is a fairly simple matter. I already have an awful lot of good reasons to think that there are garages in New Jersey, so showing me a picture of the particular one you're talking about would be plenty.  But this form of theism is neither interesting nor useful.  I really hope conceptions of god never get so boring as to be confined to garages in New Jersey.

So now let’s look at the somewhat more serious kinds of gods who are merely responsible for purposefully creating humans.  In light of the many observations about the universe we've so far made and systematically evaluated through science, it is tremendously unlikely that the human race was intelligently created.  Finding rabbit fossils would indeed be evidence for intelligent creation, because the probability of intelligent creation would be slightly higher after throwing large chunks of our model of biology into doubt.  But it's horribly weak evidence, especially relative to its strength for alternative hypotheses that are far more in line with the vast majority of what we've so far observed. It would be utterly irrational to believe even in the very weak meanings of god on the basis of Cambrian rabbits.  (Obviously, this isn’t evidence at all for garage-gods, since garages are equally likely to exist whether or not there were rabbits in the Cambrian.)

If god is simply any conscious thing that purposefully created the human race, then here is an example of what would convince me. A very long-lived alien could land on Earth, show us the blueprints, and explain how it did it and why. Well, that wouldn't quite be enough, because the alien could be lying. (I mean, come on, you're a brilliant alien who's run into an extremely credulous species that likes to worship even evil gods. Honesty, or godhood? I could see lying.) But if we took those blueprints, showed that they account for all pre-existing observations, and made some predictions based on them whose truth would be in direct contradiction with our current model, then we could test those predictions and the right results would convince me that we were in fact created intelligently by this alien. Which, by that definition, would mean I'd become a theist.

But for meanings of god that are bigger than this (for instance, a being that is omnipotent), I run into the following problem. It is much, much more likely that there exists a being who is capable of causing me to experience whatever it chooses, regardless of what's actually going on outside of my head, than it is that there's a being who really does possess such properties as omnipotence and omniscience. Why?  Because of conjunction. 

For any events x and y, the probability that x and y both happen can never be greater than the probability that x happens on its own.  If x and y are independent, you find the probability that both happen by multiplying the probability of x by the probability of y.  Probabilities are expressed as percentages or fractions, so you’re multiplying something less than one by something less than one, which makes the product even smaller than either factor. 
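The arithmetic is worth seeing with concrete numbers. Both probabilities below are invented for illustration; the point is only that the conjunction can never beat either conjunct, assuming x and y are independent:

```python
# Conjunction: two individually likely events are jointly less likely.
p_x = 0.9  # hypothetical probability of x on its own
p_y = 0.8  # hypothetical probability of y on its own

p_both = p_x * p_y  # assuming x and y are independent
assert p_both <= p_x and p_both <= p_y

print(round(p_both, 2))  # 0.72
```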

It would take a definite, finite amount of power and/or knowledge to appear infinitely powerful or knowledgeable.  There’s a certain set of things you’d have to know or be able to do in order, say, to run a computer simulation of a lifetime’s worth of human experience.  There is probably a very large number of things you’d have to do, and many of them may be awfully improbable, but because the set isn’t infinite, the probability isn’t infinitesimal (provided the set is well founded—that is, no item on the list requires that you be able to do all the things on the list).

A being with those powers could cause me to experience what I would ordinarily take to be evidence of extraordinary things. There is a certain degree of extraordinariness beyond which it becomes less likely that the thing I’m experiencing is actually happening than that someone is purposefully monkeying with my subjectivity. For instance, perhaps I am actually a program running on the hard drive of some human’s computer from the future.  Perhaps the future human is amused by the game of creating consciousnesses solely for the purpose of messing with them. That would have to be sort of an evil person, but I must admit it's exactly my kind of evil.

But is a creature with the power to create such a simulation rightly called a god? If so, then any experience (or group of experiences) beyond the subjectivity-monkeying threshold would make me a theist. But this god is infinitely less powerful than an omnipotent god, so again, that's a long way from the god most theists seem to believe in.  They want a god who can do anything.

I'd planned to claim next that only an a priori proof would do for any god less likely than the monkeying version, but it now occurs to me that even that would be insufficient.  With a slight modification, the monkeying god becomes Descartes' evil demon.  

Descartes described a demon whose only purpose in life is to make us miscount the number of sides on a triangle.  It could be that there are not actually three sides to a triangle, provided that every time we try to count the sides of a triangle, we make a mistake.  This problem is bigger than triangles.  If the monkeying god can control every aspect of my subjectivity by changing lines of computer code, he could cause me to reason incorrectly about even an apparently iron-clad mathematical proof.  And this, too, would be much more likely than anything even close to the god(s) of the theists.

Note, by the way, that even the first version of the monkeying god isn't necessary for experiences of direct revelation. If an experience could possibly be caused by a malfunctioning (or strangely functioning) human brain, it's not sufficient evidence for theism. Simple hallucination happens all the time. I came up with the monkeying god to account for experience that couldn't be pathological. Here's an example of the kind of experience I'm talking about (adapted from a splendid scene by Eliezer Yudkowsky in Harry Potter and the Methods of Rationality).

You hand a very large list of prime numbers to a friend and tell him to select two four-digit prime numbers (without telling you what they are) and write down their product. He returns a paper on which is written "16307597". You walk outside directly afterward, grab a shovel, pick a random chunk of ground, and start digging. Five feet down, you hit a rock. Upon examining the rock, you find that it contains fossilized crinoid stems on the surface (and may or may not contain a rabbit in the middle, presumably from the Paleozoic this time). On one side, the crinoid stems are configured to write out "2213". On the other side, the crinoid stems say "7369".  Actually imagine that this has happened, and imagine how you would react.  “I must be hallucinating” probably wouldn’t satisfy you, for you lack the ability to factor eight-digit numbers in your head.

Now, this isn't a perfect example, because it wouldn't be impossible to hallucinate this of your own accord. But it would indeed be incredibly unlikely (literally), far more so than anything people experience when they claim to communicate directly with god.  I'm not sure whether it would be more likely that an external agent is messing with your mind than that you happened to hallucinate it accidentally.  Or that you're actually that damn good at prime factorization.  Or that you multiplied every pair of four-digit primes until you happened upon 16307597 without noticeably aging and then promptly forgot about it.  But if it happened several times in a row, or many similar things happened, at some point the pathology position becomes untenable and it's time for the monkeying god hypothesis to step in.
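For what it's worth, the asymmetry in that story is easy to demonstrate: a computer can multiply and factor numbers this size instantly, while a head can't. The pair of four-digit primes below (2213 and 7369) is chosen here just for the sketch.

```python
def is_prime(n):
    """Trial division: fine for numbers this small."""
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

p, q = 2213, 7369
assert is_prime(p) and is_prime(q)

product = p * q  # the eight-digit number your friend writes down

# Brute-force factoring: trivial for a computer, hopeless in your head.
factor = next(d for d in range(2, int(product ** 0.5) + 1) if product % d == 0)
print(product, factor, product // factor)
```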

Therefore, it's never rational to believe in an Allah or New-Testament-style god, because whatever your reason for suspecting that god is responsible, it’s always more likely that one of the less powerful versions of a god is the cause.  

I'd originally intended to figure out exactly what it would take to convince me of the existence of something like the Catholic god, but it appears this really is a special case.  Even if god does exist, there simply are no conditions under which it's rational to believe in him (unless you're willing to give the name god to something more like a garage in New Jersey).