The Slog Through the Swamp: What Science Is, And Why It Works, And Why I Care

I talk about science a lot in this blog. I am passionate about science, especially for someone who’s only studied it as a humanities major and an educated layperson. Scientists are my heroes — most obviously scientists like Galileo or Darwin, who’ve forced people to radically rethink the universe and our place in it, but also Joe and Jane Nerdiac slogging away in a lab or a swamp, trying to figure out some minute detail about the world with more patience and diligence than I could ever muster up.

And periodically, both in this blog and elsewhere, I run into people who try to convince me that my faith in science is misplaced. I hear/read people say things like, “Scientists are human, therefore science is flawed… therefore science is not to be trusted, and/or can’t really tell us anything useful about the world.”

The thing is? The first part of that is absolutely true. Science isn’t perfect. It’s a human endeavor, and it’s therefore fraught with imperfection. It’s shaped by bias, and arrogance, and the intense desire to be right, and the ability to be fooled, and the difficulty people have in seeing or imagining what they don’t expect.

I’ve never met, or read, a scientist who thought otherwise.

Which is exactly why the scientific method has developed the way it has.

People talk a lot about science as if it were a set of beliefs — like a religion, a body of theories and opinions about how things are. But while there’s some truth to this on a practical day-to-day basis, it really isn’t the big picture, or even the medium-sized picture. What science is, ultimately, is a method — a method for observing the world, and trying to explain it.

And here’s the thing about the scientific method: It’s been developed over the years to do one very specific thing — to minimize the effects of human error and bias, as much as is humanly possible.

See, scientists KNOW that they, like the rest of the human race, are arrogant, stubborn bastards who crave recognition and have axes to grind. Believe me: when you point out that many scientists are arrogant, you’ll get a dozen or more scientists laughing and saying, “Buddy, you don’t know the half of it.” And they have therefore developed this method for trying to figure out what is and isn’t real about the world — one which minimizes, as far as we know how, the effects of that arrogance and stubbornness and the rest of it.

It doesn’t do it perfectly. And it takes time, not to mention extremely hard, often tedious work. But I would argue that it does this job better than any other method we have of gathering information about the world and coming up with theories to explain it.

So I want to talk a little about the scientific method — what exactly it is, and how it works, and why it’s done the way it’s done. (FYI, this isn’t meant to be a comprehensive summary of the scientific method — just a quickie tour of the features that I think are most pertinent to these conversations.)


Transparency, of both results and methodology. When scientists publish papers, they don’t just report the results of experiments. They also report — in mind-numbingly boring detail — exactly how those experiments were done.

They do this for two reasons. They do it so other people can repeat the experiment and see if they get the same results (see Replicability below). And they do it so other people can examine and analyze their methodology, and point out any problems there might be with it. Scientists know that outside observers can often spot mistakes that an insider can’t — especially when that insider has been working on their research for years, and has a certain rabid attachment to the outcome.

Replicating results. One of the first things that happens when a scientist reports a surprising result is that a hundred other scientists run to their labs to repeat the experiment and see if they get the same result. So even if one scientist gets a particular result because they expected or wanted it and somehow skewed their experiment to make it happen… when the hundred other scientists repeat the experiment and try to replicate the results, it’s not going to come out the same. (BTW, this doesn’t just work to screen out bias — it also works to screen out fraud.)

Peer review. Again, scientists know that outside observers can often spot mistakes that an insider can’t, either because that insider cares too passionately about the outcome, or because they’re simply too close to the work to have perspective on it. So before it’s even published, research has to be reviewed by other scientists in the field — scientists who don’t have the same personal stake in the outcome as the researcher, and some of whom may even have opposing or competing stakes.

Careful control groups. As much as is humanly possible, scientists set up control groups for their experiments that are identical in every way to the testing group except in the area being tested. (And if they don’t do a good job with this, it’s likely to get caught in the peer review process — and even more likely to get caught in the attempts to replicate the research.) It’s impossible to do this perfectly — especially when you’re doing your testing on human beings and not, say, hydrogen atoms — but they do it as well as they can, and they run it by their peers to see if they missed anything (see Peer Review above). They do this because they know, from experience and history, that a hundred different variables can affect the outcome of an experiment — and a variable that you thought was trivial could turn out to be crucial.

I learned about a wonderful example of the importance of careful controls when I was in middle-school science class. We were learning about the polio vaccine, and our teacher explained that when the vaccine was first being tested, the researchers went to the schools and asked parents for permission to test this experimental vaccine on their kids. Some parents said yes, some said no… so the researchers said, “Great. We’ll test the vaccine on the kids whose parents said Yes, and the ones whose parents said No will be our control group.” But when they went to publish their results, they were told that the experiment was flawed and they had to repeat it. There was an important difference between their control group and their testing group, one that hadn’t occurred to them — namely, whether the parents had said Yes or No to the experiment. So they repeated the study, this time splitting the kids whose parents said Yes into a testing group and a control group.

And when they compared their results to the results of the original experiment, they found that, in fact, kids whose parents had refused the experiment WERE more likely to get polio than kids whose parents allowed it. Regardless of whether they’d gotten the vaccine or not. They would never in a hundred years have expected that outcome — but that’s the outcome they got. And they got it — as well as an accurate answer to the rather more important question of whether the polio vaccine worked — because of the combination of peer review and careful use of controls. (I don’t have space here to go into why they think this outcome happened — if you’re curious, ask me in the comments.)

Double-blind and placebo-controlled testing. Scientists know — especially when it comes to doing tests on people, such as medical or psychological research — that unconscious biases of the testers can influence the results of the tests. (You jiggle the test tubes of your experimental group just a little harder than your control group, and your results are fucked.) And when it comes to medical testing, scientists know about the placebo effect. So as much as possible, experiments are carefully set up so that even the researchers don’t know, for instance, which batch of blood samples came from the group that got the drug, and which batch came from the group that got the placebo — until the testing is all completed.

Falsifiability. This is one of the most important principles of science. If you have a theory that can’t be disproven — if any evidence at all can be made to fit into your theory — then you don’t have a useful theory. It has no predictive power, no explanatory power. So when you offer a theory, you have to be willing to say, “If A, B, or C happens, that would support my theory; if X, Y, or Z happens, that would contradict it.”

This is one of the reasons so many science-lovers and skeptics get so frustrated with so many religious or spiritual beliefs (not all of those beliefs, but many). Anything at all that could ever happen can get twisted around somehow to fit into the belief system. And from a scientific method point of view, that makes the belief system useless.

Which is what I was trying to get at before (somewhat clumsily) in my Lattice of Coincidence post, when I was asking, “If paranormal phenomena were ‘shy’ (i.e., inconsistent and unpredictable and tending to disappear when tested) but real, how would that information be useful?” If you have a theory about the paranormal or metaphysical (or about anything else), and no possible result or evidence — or lack thereof — could contradict that theory or convince you that it’s wrong… then it’s not a useful theory. It has no power to explain past results or predict future ones. And that’s not just a practical problem. It’s a philosophical problem, and a big one. If you have no way of knowing whether you’re wrong, then you have no way of knowing whether you’re right.


Does this system sometimes screw up? Fuck, yeah. Especially in the short run. Early results can seem promising but don’t pan out. Surprising new evidence gets explained by boatloads of new theories that turn out to be ca-ca. And I’m sure everyone can probably think of (or Google) many, many examples of times when scientists have taken one or more of the abovementioned principles and massively screwed it up.

But when the method is followed, it works. Slowly, in the long run, with lots of stops and slowdowns and detours along the way, it works. And even when it isn’t carefully followed by an individual scientist, the method works in the long run to catch that scientist’s mistakes — and to catch mistaken assumptions and incorrect theories made by all scientists, and provide a new and more accurate theory.

And maybe more to the point:

What else do we have? What other method do we have for gathering information about the world, and coming up with explanations of what that information means, that has anywhere near the same power to minimize bias, and the desire to be right, and the difficulty in seeing what you don’t expect, and all the other obstacles our brains put in the way of understanding the world?

Intuition and inspiration are great. Scientists rely on them heavily to come up with ideas in the first place. But intuition is a starting place — not a final answer. We KNOW that intuition is heavily slanted by bias and expectations and what we want to be true. Intuition gives us ideas, gets us started on roads to explore — but if we want to be really, really sure that our ideas reflect reality, as sure as we can be with our imperfect brains and our huge and mystifying world, then we need a method to test those inspired, intuitive ideas. And as imperfect as it is, I think the scientific method is the best one we have.

In tomorrow’s post: Common objections to science and the scientific method — and my replies to them. If you have arguments against my little love letter, I’d like to ask you to hold them until then.


10 thoughts on “The Slog Through the Swamp: What Science Is, And Why It Works, And Why I Care”

  1.

    Ooh, great subject!
    Personally, I like to distinguish between the true heart of science, and the scientific method used to achieve it.
    Now, the scientific method is a good thing, for all of the reasons you’ve listed, but it’s merely a well-tested and reliable way of getting to a desired goal. It’s not the goal itself.
    The goal is a good theory. A theory that works. But what, precisely, is a scientific theory?
    A theory is a recipe (written procedure that anyone can, in principle, follow) for making predictions. Science is the search for theories that work.
    Right and wrong are easy to tell apart, but they are not a dichotomy. There are would-be theories that do not fit into either category. These are things that do not make predictions.
    This is where falsifiability comes in. A prediction that no possible observation could falsify is not a meaningful prediction at all. Predicting anything is predicting nothing. It’s neither right nor wrong, it’s simply useless.
    A really good theory makes very specific predictions. “things fall down, accelerating at 9.8 m/s²” is more specific, and thus more useful, than “things fall down”. By virtue of being more specific, it’s easier to disprove. And so if you test it and fail to disprove it, that’s more significant.
    The big thing I hope to get across is that prediction is the true heart of science. In particular, adjectives like “natural” or “supernatural” have absolutely no bearing on whether a theory is scientific or not. What matters is if it makes predictions, and if the predictions check out.
    If you can manage that, you can make a respected science out of demonology.
    This is what pisses scientists off about folks trying to present creationism as a science. You can look for flaws in evolution precisely because it’s a scientific theory and thus sticks its neck out and makes predictions. You can look for evidence that contradicts those predictions.
    But to demonstrate the advantage of one theory over another, you need to find a circumstance under which the two make conflicting predictions. Then you look and see which is right. Or maybe they’re both wrong.
    It’s easiest and cheapest to start by making your theory explain past observations. If you manage this well, you can hope that you’re on to something good. But an explanation, however beautiful, that fails to take the next step and predict is, in Wolfgang Pauli’s words, “not even wrong”. Creationism fails this basic criterion of adequacy.

  2.

    That’s a great summary of the scientific method. Can I point my students this way come next semester? Two comments: First, not all science is hypothesis-driven; some is “discovery science”. For example, the human genome project wasn’t looking to test a hypothesis, but was simply (in a highly technical way) gathering some observations about our DNA. The other thing I’d like to point out is that even science not performed in the laboratory with white coats on follows these methods. Research on the environment, or evolution, doesn’t always have the luxury of being able to set up an experiment, but researchers can still find situations that have created a natural experiment by comparing different environments or different organisms. The mice living on dark lava flows surrounded by pale sand in the Southwest come to mind. Eagerly looking forward to the next installment!

  3.

    “I run into people who try to convince me that my faith in science is misplaced”. The science/faith argument is a false dichotomy. It’s not one or the other. The extremists who argue against science because it doesn’t match their literal view of a sacred text are wrong. We live in a predictable and knowable world (which is quite a nice gift to have, and those of us who believe in God should thank him for that feature on a regular basis). At the same time, it is equally wrong for someone to say that people who are devoted to a religion are doomed to be superstitious unscientific fanatics. There are many professional scientists who are religious and their religion does not affect their objectivity.

  4.

    “At the same time, it is equally wrong for someone to say that people who are devoted to a religion are doomed to be superstitious unscientific fanatics.”
    Well, first of all: I didn’t say that they were. In fact, if you’ll read these two posts about science again, you’ll see that I barely talked about religion at all. And when I did talk about conflicts between religion and science, I was always very careful to say things like “many,” “often,” “much of the time,” “more likely,” etc. I do think that many religious beliefs reject, contradict, or are even flat-out hostile to science — half the people in this country believe that people were created in more or less our current form about 10,000 years ago — but I didn’t say that all of them were. I didn’t say it, and I don’t think it.
    But I do think this.
    I’ve read a lot of debates between non-believers and believers in religion or spirituality. Not just creationist extremists, but moderate and progressive believers who do value science and the scientific method. And what I have seen time and time again — not every time, but an awful lot of the time — is believers getting backed into a corner in a debate, and replying by saying things like, “I just know it’s true,” or “I feel it in my heart.”
    And that, I think, IS in direct conflict with the scientific method. At least when you’re talking about objective questions about what is or is not true in the world (as opposed to subjective questions about your own experience of the world). Try to imagine, say, Stephen Jay Gould arguing for punctuated equilibrium because he just felt it to be true.
    There’s a really good piece about this on the Daylight Atheism blog, called “The Theist’s Guide to Converting Atheists.” It’s at:
    In it, he provides a list of everything he can think of that he would accept as proof/evidence that a given religion is true. (This is exactly what I meant when I was talking about falsifiability in this post.) He invites theists to do the same — to provide a list of what would convince them that their faith was mistaken. And he points out that the response, very often (he says “almost invariably,” but I think that might be a trifle strong) is, “Nothing could convince me that my faith in God is mistaken.”
    That kind of faith IS in conflict with the principles of science and the scientific method. And while I don’t think it’s universal, I do think it’s very common, even among moderate believers who generally support science. It’s not the same thing as being a “superstitious fanatic” — but it is a conflict. And I think religious believers who do have this kind of “Nothing could convince me that my faith is mistaken” faith need to acknowledge that.

  5.

    Eclectic and Kris — good points. (And yes, Kris, I’d be thrilled to have you point your students to my blog!)
    Eclectic, you make a good point about the ultimate purpose of science being to come up with theories that make successful predictions. (Although I think Kris also makes a good point about there being types of science that exist solely to collect information, not to test theories.)
    I think my point is that the scientific method is what makes a scientific theory and its predictions different from — and more trustworthy than — your everyday garden-variety theories (like the ones that my answer-syndrome brain comes up with on an almost hourly basis).
    The theory of evolution, for instance, is fundamentally different from my theory that bisexuals of both genders are more likely to get into long-term relationships with women than with men. It’s different because the theory of evolution is supported by an enormous body of carefully gathered, peer-reviewed, replicable results from dozens of scientific fields… and my theory about bisexuality is based on my personal observations of my circle of friends and acquaintances.

  6.

    Actually, I agree that your theory about bisexuals’ relationships is fundamentally different from the theory of evolution, but NOT for the reason you give.
    The amount of supporting evidence doesn’t _fundamentally_ change a theory, only how much credence we give it. There are no formal graduation ceremonies where a baby conjecture becomes a hypothesis, then a theory and is finally declared a Natural Law.
    The fundamental difference between the two theories is that your relationship theory depends on a great deal of social context. In your time, country, and culture it may be true, but it could be different in different circumstances. And you’re not even sure what the controlling circumstances are.
    Evolution, on the other hand, is universally applicable. Whether the selection pressure is natural or artificial, if you have a system with reproduction, selection pressure, and heritable variation, then evolution will happen.
    That’s what makes it _fundamentally_ a more powerful theory than yours. The fact that it’s also better validated is a minor matter of degree in comparison.

  7.

    I just found a wonderful quote that I have to share, that rather neatly explains why people who understand science have little patience with the complaint that something or other is “only a theory”:
    “all the great truths which have been established by the experience of all ages and nations, and which are taken for granted in all reasonings, may be said to be theories. It is a theory in the same sense in which it is a theory that day and night follow each other, that lead is heavier than water, that bread nourishes, that arsenic poisons, that alcohol intoxicates.”
    – Speech on Copyright extension by Thomas Macaulay to Parliament, 5 February 1841.
    I snarfed this from

  8.

    Hey! Seems like a great post to me, but to be honest.. Halfway through I stopped reading. Sure, it’d be interesting to read and all that, but I really, really wanted to learn more about why the kids of the parents that refused vaccination were more likely to get the disease than those whose parents agreed but who didn’t get the vaccination.
    Pleeaase do tell, if you got some idea! 😀
    (I should know the stuff about the scientific method already, student of psychology here.. ‘Should’, I somehow doubt my fellow students concern themselves much with the basics of science.. Eh, whatever. Nice to see someone do something like you did. =))

  9.

    …but I really, really wanted to learn more about why the kids of the parents that refused vaccination were more likely to get the disease than those whose parents agreed but who didn’t get the vaccination.

    I’ve been waiting for someone to ask me that. I can’t believe it took over a year.
    Here’s what I was taught (and remember, this was over 30 years ago, and in a grade-school science class, so you might want to get confirmation):
    Parents who refused to have an experimental vaccine tested on their kids tended to be generally more cautious and protective when it came to their kids. So they were also, what with the polio epidemic, more likely to keep their kids from being exposed: not letting them swim in public pools, play in public parks, etc.
    But this sort of exposure often acted as a form of inoculation. So the kids with the less protective parents (the ones who let their kids get an experimental vaccine) were more likely to be exposed to small amounts of polio, and (sometimes) got resistance. Whereas the kids with more protective parents were less likely to get exposed to small amounts of polio, and less likely to get resistance… and more likely to get polio.
    Thanks for asking!

  10.

    Oh, and P.S.: One of the reasons I think this is important is that it’s kind of counter-intuitive. I, for one, would have thought that the kids with more protective parents and less exposure to polio would be less likely to get polio, not more. Which shows another reason for the importance of careful control groups — you can never be sure which uncontrolled factors will turn out to be important, and in what direction.
