The Moon Isn’t Out During the Day – Observer Bias

The other night my seven-year-old son and I were getting home late from one of those all-night Science raves you read about – you know, the ones where scientists indoctrinate their children to live a Godless life lacking all morals and also there is froyo – when he looked up at the sky and said, “I think we get more nights with the Moon than the other side of the Earth does.”

(Just a few weeks before he had asked where the Moon goes at night when he can’t see it and I explained that it is on the other side of the Earth.)

“Why do you think that?” I asked.

“Because I see it more than half the nights I look outside.”

That, my friends, is what we call “observer bias,” and it is an excellent topic for this essay because it is something that we are all guilty of.

Let us establish a few basic facts, the first and foremost being that the Moon is sometimes out during the day. The Moon orbits the Earth, and as a result it spends half its time over the day side of the Earth and half over the night side. I know this may be obvious to you, but I have personally encountered grown humans who are surprised that the Moon can be seen during the day. Not only is this interesting in its own right, it is a perfect example of observer bias. Why would someone think that the Moon is never out during the day? The obvious answer is that they’ve never seen the Moon out during the day. I grant you that the Moon is a bit harder to spot during the day than at night, what with the Sun being so bright, but it is hardly hidden – provided it is there to see. The reason a given person has never seen the Moon during the day is that they’ve never looked for it. If someone doesn’t expect to see the Moon, why would they look for it?

Observer bias occurs when an observer’s expectations somehow influence the observation, introducing a bias toward a certain conclusion. Because our observer doesn’t expect to see the Moon during the day, they never look for it or notice it, and as a result they conclude that the Moon is not out during the day. It’s important to realize that observer bias does not come from a place of ignorance; it comes from being human.

The Scientific Method is built upon first having an expectation of what the yet-to-be-done observation will show. Without an expectation there is no way to design an experiment that will show what one does – or does not – expect to see. The construction of a well-thought-out experiment is one of the hardest things that scientists do, and in general is a more complicated topic than I can cover in an essay, or even a whole book. There is a great historic example concerning a classic experiment, but let’s speak more generally about observer bias first.

The most understandable examples of observer bias come from surveys of people’s attitudes or opinions. Construction and execution of an unbiased survey is incredibly hard. First, the survey questions must be written without any implied bias. Second, the people administering the survey must not add any emphasis to the questions – it is this part that can be incredibly hard to control, given the number of people involved in collecting large numbers of opinions. Let’s say that you’re taking a verbally administered survey concerning your government representative.

“How would you rate your representative’s performance?”

“Okay, I guess.”

“There isn’t anything you don’t like about them?”

The second question prompts the survey taker to think that negative opinions are desired. If this is done often enough, the survey will have a bias towards a specific type of response, introducing the questioner’s observer bias into the results. This is one of the many reasons why surveys of this nature should only be trusted if they come from long-established and reputable institutions, ones that are forthcoming about how many people were queried and where they were from. If questions about car ownership are asked only in New York City, the results are not indicative of car ownership in the United States as a whole. This introduces something similar to observer bias called selection bias: a specific subset of the population is surveyed, and its views are then misrepresented as those of the general population.

Observer and selection bias are human traits. We naturally congregate with like-minded people who are most likely from our same socioeconomic group. At work we are likely to encounter and associate with people who share our educational backgrounds, which can also represent a shared economic status. Our friends, by selection, likely share most of our political and social opinions. A very popular, though frequently abused, term these days is “living in a bubble,” where someone comes to think that the entire world conforms to their viewpoint because all their closest associates share the same viewpoint, democratically confirming its validity. Scientists are only human, so we also suffer from observer and selection bias, and in one classic experiment “living in the bubble” took the form of an oil droplet.

In 1909 Robert Millikan began a series of experiments that were so monumental in their result that the technique he invented came to be called the Millikan Oil Drop Experiment. Millikan sought to answer a fundamental question about the Universe: what is the electric charge of a single electron? For his success he was awarded the Nobel Prize in Physics in 1923. Millikan’s experiment was ingenious, and as an undergraduate I spent eight hours performing it again and again to collect a sample large enough to yield a reasonable answer.

Here’s how it works.

A fine spray of oil is injected into a cylindrical chamber. The flat plates at the top and bottom of the chamber can be given opposite electric charges in a controlled amount by applying a voltage. The oil drops can then be given a small electric charge by exposing them briefly to a source of ionizing radiation. By adjusting the electric field in the chamber it is possible to make the oil drops float against the pull of gravity. From the voltage applied to the chamber, a simple equation gives an estimate of the charge on the drop.
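That balance condition can be sketched in a few lines of code. The numbers below are illustrative stand-ins, not Millikan’s data: assuming a spherical drop of known density and radius suspended between plates a known distance apart, setting the electric force qV/d equal to the drop’s weight mg gives the charge.

```python
import math

# Illustrative values only (not Millikan's actual data).
rho_oil = 900.0   # oil density, kg/m^3
r = 1.0e-6        # drop radius, m (typical scale for the experiment)
g = 9.81          # gravitational acceleration, m/s^2
d = 0.01          # separation between the charged plates, m
V = 550.0         # voltage that holds the drop stationary, volts

# Mass of a spherical drop of oil.
m = rho_oil * (4.0 / 3.0) * math.pi * r**3

# At balance, the electric force equals gravity: q * (V / d) = m * g,
# so the charge on the drop is q = m * g * d / V.
q = m * g * d / V
print(f"estimated charge on drop: {q:.3e} C")  # about 6.7e-19 C here
```

With these made-up numbers the result is roughly four times the charge of a single electron, which is exactly the puzzle the next step has to untangle: each drop carries some unknown whole-number multiple of the fundamental charge.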

The charge of a single electron is the fundamental unit of electric charge in the Universe; no charge can be smaller, and any larger charge must be an integer multiple of that fundamental charge. It’s not possible to have two and a half times the charge of the electron; we have to have four times, eighty-six times, one thousand fifty-seven times the charge, or some other whole-number multiple. Each oil drop must carry a multiple of the electron charge, and given the size of an electron and the size of an oil drop, it probably carries a large multiple. It’s impossible to know what that multiple is for an individual oil drop without already knowing the charge of the electron, but we do know it has to be a whole number. If we measure enough oil drops in this way – and I’m talking tens of thousands of oil drops – statistics can give us a value for the electron charge, because it must be a common factor among all the measurements.

Through this method, Millikan (and his team of graduate students who did the experiments over countless hours) first estimated the electron charge as 99.39% of the currently accepted value – an astounding feat of accuracy.

Enter observer bias. We now know that one of the physical values Millikan needed in his calculation (the viscosity of air) was incorrect. This was not Millikan’s fault; it was simply the best measurement available at the time. This inaccuracy introduced a very small but systematic error in his calculation, resulting in his lower-than-actual (but still incredibly accurate) value for the electron charge. After Millikan published his result, others repeated the experiment. Subsequent experiments found the electron charge to be slightly higher than Millikan’s value. Later the number got bigger still, until finally it reached the currently accepted value. Why did the accepted value slowly rise over time rather than jump to the final value?

Groundbreaking physicist and science writer Richard Feynman, in his 1974 commencement address at Caltech, observed that scientists took Millikan’s number to be more accurate than it deserved. When subsequent experimenters found a number (statistically) much higher than Millikan’s, they looked for sources of error to blame. They might discard data from some oil drops, or adjust parameters like the viscosity of air, until the number was closer to Millikan’s, even if it was still larger than his original value. As a result, the accepted value of the electron charge crept up over time more slowly than the accuracy of the experimental data warranted. The scientists working after Millikan had observer bias because they were looking for a specific number, or something close to it; any number much bigger or smaller than The Great Millikan’s must be incorrect.

Observer bias is part of human nature, but it is something that scientists must confront head on in order to not get trapped by it. Experiments must be designed to demonstrate whether a hypothesis is correct, and should also provide data to demonstrate whether an alternative hypothesis is correct, helping to prove one theory by disproving another. It may not be possible to escape the “bubbles” in our personal lives, but becoming aware of them is the first step to finding a way around observer bias.


About Andrew Porwitzky

Dr Andrew Porwitzky is a professional scientist, comic book junkie, and freelance writer. He is also on Twitter way too much.
