Confessions of a Quackbuster

This blog deals with healthcare consumer protection, and is therefore about quackery, healthfraud, chiropractic, and other forms of so-Called "Alternative" Medicine (sCAM).

Tuesday, October 31, 2006

The Scientific Method from P. White (UofO)

1.1: What is the "scientific method"?
The scientific method is the best way yet discovered for winnowing the truth from
lies and delusion. The simple version looks something like this:

1. Observe some aspect of the universe.

2. Invent a theory that is consistent with what you have observed.

3. Use the theory to make predictions.

4. Test those predictions by experiments or further observations.

5. Modify the theory in the light of your results.

6. Go to step 3.
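The predict–test–modify loop in steps 3–6 can be sketched in code. This is a toy illustration only: the "theory" here is a made-up falling-object model with one adjustable parameter, and the "experiment" is a simulated noisy measurement.

```python
# Toy sketch of the observe-predict-test-modify loop (steps 3-6).
import math
import random

random.seed(0)

def predict(height_m, g=9.8):
    """Theory: predicted fall time (s) for a drop from height_m metres."""
    return math.sqrt(2 * height_m / g)

def run_experiment(height_m):
    """Pretend measurement: the 'true' fall time plus a little noise."""
    return math.sqrt(2 * height_m / 9.80665) + random.gauss(0, 0.01)

g_estimate = 9.0                             # initial (wrong) theory parameter
for trial in range(50):
    h = random.uniform(1, 20)
    predicted = predict(h, g_estimate)       # 3. use the theory to predict
    measured = run_experiment(h)             # 4. test by experiment
    if abs(measured - predicted) > 1e-6:     # prediction disagrees with data?
        implied_g = 2 * h / measured ** 2
        g_estimate += 0.1 * (implied_g - g_estimate)  # 5. modify the theory
    # 6. go to step 3: the loop repeats

print(round(g_estimate, 2))                  # settles near 9.8
```

Each pass through the loop nudges the theory's parameter toward what the experiments imply, which is the whole point of step 5.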

This leaves out the co-operation between scientists in building theories, and the
fact that it is impossible for every scientist to independently do every experiment
to confirm every theory. Because life is short, scientists have to trust other
scientists. So a scientist who claims to have done an experiment and obtained
certain results will usually be believed, and most people will not bother to repeat
the experiment.

Experiments do get repeated as part of other experiments. Most scientific papers
contain suggestions for other scientists to follow up. Usually the first step in doing
this is to repeat the earlier work. So if a theory is the starting point for a
significant amount of work then the initial experiments will get replicated a
number of times.

Some people talk about "Kuhnian paradigm shifts". This refers to the observed
pattern of the slow extension of scientific knowledge with occasional sudden
revolutions. This does happen, but it still follows the steps above.

Many philosophers of science would argue that there is no such thing as the
scientific method.

1.2: What is the difference between a fact, a theory and a hypothesis?

In popular usage, a theory is just a vague and fuzzy sort of fact. But to a scientist
a theory is a conceptual framework that explains existing facts and predicts new
ones. For instance, today I saw the Sun rise. This is a fact. This fact is explained
by the theory that the Earth is round and spins on its axis while orbiting the sun.
This theory also explains other facts, such as the seasons and the phases of the
moon, and allows me to make predictions about what will happen tomorrow.
This means that in some ways the words fact and theory are interchangeable.
The organisation of the solar system, which I used as a simple example of a
theory, is normally considered to be a fact that is explained by Newton's theory of
gravity. And so on.

A hypothesis is a tentative theory that has not yet been tested. Typically, a
scientist devises a hypothesis and then sees if it "holds water" by testing it
against available data. If the hypothesis does hold water, the scientist declares it
to be a theory.

An important characteristic of a scientific theory or hypothesis is that it be
"falsifiable". This means that there must be some experiment or possible
discovery that could prove the theory untrue. For example, Einstein's theory of
Relativity made predictions about the results of experiments. These experiments
could have produced results that contradicted Einstein, so the theory was (and
still is) falsifiable.

On the other hand the theory that "there is an invisible snorg reading this over
your shoulder" is not falsifiable. There is no experiment or possible evidence that
could prove that invisible snorgs do not exist. So the Snorg Hypothesis is not
scientific. On the other hand, the "Negative Snorg Hypothesis" (that they do not
exist) is scientific. You can disprove it by catching one. Similar arguments apply
to yetis, UFOs and the Loch Ness Monster. See also question 5.2 on the age of
the Universe.

1.3: Can science ever really prove anything?

Yes and no. It depends on what you mean by "prove".

For instance, there is little doubt that an object thrown into the air will come back
down (ignoring spacecraft for the moment). One could make a scientific
observation that "Things fall down". I am about to throw a stone into the air. I use
my observation of past events to predict that the stone will come back down.
Wow - it did!

But next time I throw a stone, it might not come down. It might hover, or go
shooting off upwards. So not even this simple fact has been really proved. But
you would have to be very perverse to claim that the next thrown stone will not
come back down. So for ordinary everyday use, we can say that the theory is
true.

You can think of facts and theories (not just scientific ones, but ordinary everyday
ones) as being on a scale of certainty. Up at the top end we have facts like
"things fall down". Down at the bottom we have "the Earth is flat". In the middle
we have "I will die of heart disease". Some scientific theories are nearer the top
than others, but none of them ever actually reach it. Skepticism is usually
directed at claims that contradict facts and theories that are very near the top of
the scale. If you want to discuss ideas nearer the middle of the scale (that is,
things about which there is real debate in the scientific community) then you
would be better off asking on the appropriate specialist group.

1.4: If scientific theories keep changing, where is the Truth?

In 1687 Isaac Newton published his theory of gravitation. This was one of the
greatest intellectual feats of all time. The theory explained all the observed facts,
and made predictions that were later tested and found to be correct within the
accuracy of the instruments being used. As far as anyone could see, Newton's
theory was the Truth.

During the nineteenth century, more accurate instruments were used to test
Newton's theory, and found some slight discrepancies (for instance, the orbit of
Mercury wasn't quite right). Albert Einstein proposed his theories of Relativity,
which explained the newly observed facts and made more predictions. Those
predictions have now been tested and found to be correct within the accuracy of
the instruments being used. As far as anyone can see, Einstein's theory is the
Truth.

So how can the Truth change? Well the answer is that it hasn't. The Universe is
still the same as it ever was, and Newton's theory is as true as it ever was. If you
take a course in physics today, you will be taught Newton's Laws. They can be
used to make predictions, and those predictions are still correct. Only if you are
dealing with things that move close to the speed of light do you need to use
Einstein's theories. If you are working at ordinary speeds outside of very strong
gravitational fields and use Einstein, you will get (almost) exactly the same
answer as you would with Newton. It just takes longer because using Einstein
involves rather more maths.
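The claim that Newton and Einstein agree at everyday speeds is easy to check numerically. The sketch below computes the relativistic correction gamma − 1 for a few speeds (the particular speeds are arbitrary choices):

```python
# How different are Newton and Einstein at everyday speeds?
# The relativistic factor gamma = 1/sqrt(1 - v^2/c^2) measures the
# correction; for v << c it is approximately 1 + v^2/(2c^2).
import math

c = 299_792_458.0          # speed of light, m/s

def gamma_minus_one(v):
    """Relativistic correction gamma - 1, computed stably for small v."""
    x = (v / c) ** 2
    # 1/sqrt(1-x) - 1 = expm1(-0.5 * log1p(-x)), accurate even for tiny x
    return math.expm1(-0.5 * math.log1p(-x))

for v in (30.0, 250.0, 0.5 * c):       # a car, a jet, half light-speed
    print(f"v = {v:14.1f} m/s   correction = {gamma_minus_one(v):.3e}")
```

For a car at 30 m/s the correction is a few parts in 10^15, utterly negligible; it only becomes appreciable at a large fraction of light-speed.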

One other note about truth: science does not make moral judgements. Anyone
who tries to draw moral lessons from the laws of nature is on very dangerous
ground. Evolution in particular seems to suffer from this. At one time or another it
seems to have been used to justify Nazism, Communism, and every other -ism in
between. These justifications are all completely bogus. Similarly, anyone who
says "evolution theory is evil because it is used to support Communism" (or any
other -ism) has also strayed from the path of Logic.

1.5: "Extraordinary evidence is needed for an extraordinary claim"

An extraordinary claim is one that contradicts a fact that is close to the top of the
certainty scale discussed above. So if you are trying to contradict such a fact,
you had better have facts available that are even higher up the certainty scale.

1.6: What is Occam's Razor?

Ockham's Razor ("Occam" is a Latinised variant) is the principle proposed by
William of Ockham in the fourteenth century that "Pluralitas non est ponenda sine
necessitate", which translates as "entities should not be multiplied
unnecessarily". Various other rephrasings have been incorrectly attributed to him.

In more modern terms, if you have two theories which both explain the observed
facts then you should use the simplest until more evidence comes along. See
W.M. Thorburn, "The Myth of Occam's Razor," Mind 27:345-353 (1918) for a
detailed study of what Ockham actually wrote and what others wrote after him.
The reason behind the razor is that for any given set of facts there are an infinite
number of theories that could explain them. For instance, if you have a graph
with four points in a line then the simplest theory that explains them is a linear
relationship, but you can draw an infinite number of different curves that all pass
through the four points. There is no evidence that the straight line is the right one,
but it is the simplest possible solution. So you might as well use it until someone
comes along with a point off the straight line.
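The four-points example can be made concrete with a quick numerical sketch (using NumPy; the data are invented): a straight line and a cubic both fit four collinear points exactly, but the cubic's extra parameters buy nothing.

```python
# Four collinear points: a straight line explains them, but so does a
# higher-degree curve. Occam's razor says prefer the line until a point
# falls off it.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0                  # points that happen to lie on a line

line = np.polyfit(x, y, 1)         # 2 parameters
cubic = np.polyfit(x, y, 3)        # 4 parameters: also fits exactly

print("line  max residual:", np.abs(np.polyval(line, x) - y).max())
print("cubic max residual:", np.abs(np.polyval(cubic, x) - y).max())
print("cubic's extra coefficients:", cubic[:2])  # essentially zero
```

Both fits pass through every point; the cubic's two extra coefficients come out as (numerically) zero, so they add complexity without adding explanatory power.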

Also, if you have a few thousand points on the line and someone suggests that
there is a point that is off the line, it's a pretty fair bet that they are wrong.

The following argument against Occam's Razor is sometime proposed:

This simple hypothesis was shown to be false;
the truth was more complicated. So Occam's Razor doesn't work.

This is a strawman argument. The Razor doesn't tell us anything about the truth
or otherwise of a hypothesis, but rather it tells us which one to test first. The
simpler the hypothesis, the easier it is to shoot down.

A related rule, which can be used to slice open conspiracy theories, is Hanlon's
Razor: "Never attribute to malice that which can be adequately explained by
stupidity". This definition comes from "The Jargon File" (edited by Eric
Raymond), but one poster attributes it to Robert Heinlein, in a 1941 story called
"Logic of Empire".

1.7: Galileo was persecuted, just like fringe researchers today.

People putting forward extraordinary claims often refer to Galileo as an example
of a great genius being persecuted by the establishment for heretical theories.
They claim that the scientific establishment is afraid of being proved wrong, and
hence is trying to suppress the truth.

This is a classic conspiracy theory. The Conspirators are all those scientists who
have bothered to point out flaws in the claims put forward by the researchers.
The usual rejoinder to someone who says "They laughed at Columbus, they
laughed at Galileo" is to say "But they also laughed at Bozo the Clown". (From
Carl Sagan, Broca's Brain, Coronet 1980, p79).

Incidentally, stories about the persecution of Galileo Galilei and the ridicule
Christopher Columbus had to endure should be taken with a grain of salt.
During the early days of Galileo's theory church officials were interested and
sometimes supportive, even though they had yet to find a way to incorporate it
into theology. His main adversaries were established scientists - since he was
unable to provide HARD proofs they didn't accept his model. Galileo became
more agitated, declared them ignorant fools and publicly stated that his model
was the correct one, thus coming in conflict with the church.

When Columbus proposed to take the "Western Route" the spherical nature of
the Earth was common knowledge, even though the diameter was still debatable.
Columbus simply believed that the Earth was a lot smaller, while his adversaries
claimed that the Western Route would be too long. If America hadn't been in his
way, he most likely would have failed. The myth that "he was laughed at for
believing that the Earth was a globe" stems from the American author Washington
Irving, who embellished the story in his 1828 biography of Columbus.

1.8: What is the "Experimenter effect"?

It is unconscious bias introduced into an experiment by the experimenter. It can
occur in one of two ways:

· Scientists doing experiments often have to look for small effects or differences
between the things being experimented on.

· Experiments require many samples to be treated in exactly the same way in order
to get consistent results.

Note that neither of these sources of bias require deliberate fraud.

A classic example of the first kind of bias was the "N-ray", reported early in the
twentieth century. Detecting them required the investigator to look for very faint flashes of
light on a scintillator. Many scientists reported detecting these rays. They were
fooling themselves. For more details, see "The Mutations of Science" in Science
Since Babylon by Derek Price (Yale Univ. Press).

A classic example of the second kind of bias was the detailed investigations into
the relationship between race and brain capacity in the nineteenth century. Skull
capacity was measured by filling the empty skull with lead shot or mustard seed,
and then measuring the volume of the filling. A significant difference in the results
could be obtained by ensuring that the filling in some skulls was better settled
than others. For more details on this story, read Stephen Jay Gould's The
Mismeasure of Man.
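The settling bias can be illustrated with a small simulation (all numbers here are hypothetical): two groups of "skulls" share the same true mean capacity, but one group's filling is settled 2% better, and an apparent difference emerges with no fraud anywhere.

```python
# Simulated "experimenter effect": two groups of skulls with the SAME
# true mean capacity; a small unconscious bias in how carefully the
# filling is settled (+2% for one group) creates an apparent difference.
import random
import statistics

random.seed(1)
TRUE_CAPACITY = 1400.0   # cm^3, identical for both groups (hypothetical)

def measure(settling_bias):
    """One measurement: random noise plus a systematic settling bias."""
    return TRUE_CAPACITY * (1 + settling_bias) + random.gauss(0, 15)

group_a = [measure(0.00) for _ in range(50)]
group_b = [measure(0.02) for _ in range(50)]  # better-settled filling

diff = statistics.mean(group_b) - statistics.mean(group_a)
print(round(diff, 1))   # a sizeable "difference" despite equal true means
```

The roughly 2% gap in the output is entirely an artifact of the handling, which is exactly the pattern Gould describes.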

For more detail see:
T.X. Barber, Pitfalls of Human Research, 1976.
Robert Rosenthal, Pygmalion in the Classroom.
[These were recommended by a correspondent. Sorry I have no more information.]

1.9: How much fraud is there in science?

In its simplest form this question is unanswerable, since undetected fraud is by
definition unmeasurable. Of course there are many known cases of fraud in
science. Some use this to argue that all scientific findings (especially those they
dislike) are worthless.

This ignores the replication of results which is routinely undertaken by scientists.
Any important result will be replicated many times by many different people. So
an assertion that (for instance) scientists are lying about carbon-14 dating
requires that a great many scientists are engaging in a conspiracy. See the
previous question.

In fact the existence of known and documented fraud is a good illustration of the
self-correcting nature of science. It does not matter if a proportion of scientists
are fraudsters because any important work they do will not be taken seriously
without independent verification. Hence they must confine themselves to
pedestrian work which no-one is much interested in, and obtain only the
expected results. For anyone with the talent and ambition necessary to get a
Ph.D. this is not going to be an enjoyable career.

Also, most scientists are idealists. They perceive beauty in scientific truth and
see its discovery as their vocation. Without this most would have gone into
something more lucrative.

These arguments suggest that undetected fraud in science is both rare and
unimportant.

The above arguments are weaker in medical research, where companies
frequently suppress or distort data in order to support their own products.
Tobacco companies regularly produce reports "proving" that smoking is
harmless, and drug companies have both faked and suppressed data related to
the safety or effectiveness of major products.

For more detail on more scientific frauds than you ever knew existed, see False
Prophets by Alexander Kohn.

The standard textbook used in North America is Betrayers of the Truth: Fraud
and Deceit in Science by William Broad and Nicholas Wade (Oxford 1982).
There is a mailing list SCIFRAUD for the discussion of fraud and questionable
behaviour in science. To subscribe, send "sub scifraud " to
listserv@uacsc2.albany.edu.

1.9.1: Did Mendel fudge his results?

Gregor Mendel was a nineteenth-century monk who discovered the laws of inheritance
(dominant and recessive genes, etc.). More recent analysis of his results suggests
that they are "too good to be true". Mendelian inheritance involves the random
selection of possible traits from parents, with particular probabilities of particular
traits. It seems from Mendel's raw data that chance played a smaller part in his
experiments than it should. This does not imply fraud on the part of Mendel.

First, the experiments were not "blind" (see the questions about double blind
experiments and the experimenter effect). Deciding whether a particular pea is
wrinkled or not needs judgement, and this could bias Mendel's results towards
the expected. This is an example of the "experimenter effect".

Second, Mendel's Laws are only approximations. In fact it does turn out that in
some cases inheritance is less random than his Laws state.

Third, Mendel might have neglected to publish the results of "failed" experiments.
It is interesting to note that all 7 of the characteristics measured in his published
work are controlled by single genes. He did not report any experiments with more
complicated characteristics. Mendel later began experiments with a more
complex plant, hawkweed, but could not interpret the results, became discouraged,
and abandoned plant science.

See The Human Blueprint by Robert Shapiro (New York: St. Martin's, 1991) p.
17.

1.10: Are scientists wearing blinkers?

One of the commonest allegations against mainstream science is that its
practitioners only see what they expect to see. Scientists often refuse to test
fringe ideas because "science" tells them that this will be a waste of time and
effort. Hence they miss ideas which could be very valuable.

This is the "blinkers" argument, by analogy with the leather shields placed over
horses' eyes so that they only see the road ahead. It is often put forward by
proponents of new-age beliefs and alternative health.

It is certainly true that ideas from outside the mainstream of science can have a
hard time getting established. But on the other hand the opportunity to create a
scientific revolution is a very tempting one: wealth, fame and Nobel prizes tend to
follow from such work. So there will always be one or two scientists who are
willing to look at anything new.

If you have such an idea, remember that the burden of proof is on you. Posting
an explanation of your idea to sci.skeptic is a good start. Many readers of this
group are professional scientists. They will be willing to provide constructive
criticism and pointers to relevant literature (along with the occasional raspberry).
Listen to them. Then go away, read the articles, improve your theory in the light
of your new knowledge, and then ask again. Starting a scientific revolution is a
long, hard slog. Don't expect it to be easy. If it were, we would have them every
week.

Bill Latura
Twenty Science Attitudes
From the Rational Enquirer, Vol 3, No. 3, Jan 90.

1. Empiricism. Simply said, a scientist prefers to "look and see." You do not argue
about whether it is raining outside--just stick a hand out the window. Underlying
this is the belief that there is one real world following constant rules in nature, and
that we can probe that real world and build our understanding--it will not change
on us. Nor does the real world depend upon our understanding--we do not "vote"
on science.

2. Determinism. "Cause-and-effect" underlies everything. In simple mechanisms, an
action causes a reaction, and effects do not occur without causes. This does not
deny that some processes are random or chaotic, but a given causative agent does
not produce one effect today and a different effect tomorrow.

3. A belief that problems have solutions. Major problems have been tackled in the
past, from the Manhattan Project to sending a man to the moon. Other problems
such as pollution, war, poverty, and ignorance are seen as having real causes and
are therefore solvable--perhaps not easily, but possible.

4. Parsimony. Prefer the simple explanation to the complex: when both the complex
earth-centered system with epicycles and the simple Copernican sun-centered
system explain apparent planetary motion, we choose the simpler.

5. Scientific manipulation. Any idea, even though it may be simple and conform to
apparent observations, must usually be confirmed by work that teases out the
possibility that the effects are caused by other factors.

6. Skepticism. Nearly all statements make assumptions of prior conditions. A
scientist often reaches a dead end in research and has to go back and determine if
all the assumptions made are true to how the world operates.

7. Precision. Scientists are impatient with vague statements: A virus causes disease?
How many viruses are needed to infect? Are any hosts immune to the virus?
Scientists are very exact and very "picky".

8. Respect for paradigms. A paradigm is our overall understanding about how the
world works. Does a concept "fit" with our overall understanding or does it fail to
weave in with our broad knowledge of the world? If it doesn't fit, it is
"bothersome" and the scientist goes to work to find out if the new concept is
flawed or if the paradigm must be altered.

9. A respect for the power of theoretical structure. Diederich describes how a
scientist is unlikely to adopt the attitude: "That is all right in theory but it won't
work in practice." He notes that theory is "all right" only if it does work in
practice. Indeed the rightness of the theory is in the end what the scientist is
working toward; no science facts are accumulated at random. (This is an
understanding that many science fair students must learn!)

10. Willingness to change opinion. When Harold Urey, author of one textbook
theory on the origin of the moon's surface, examined the moon rocks brought
back from the Apollo mission, he immediately recognized this theory did not fit
the hard facts lying before him. "I've been wrong!" he proclaimed without any
thought of defending the theory he had supported for decades.

11. Loyalty to reality. Dr. Urey above did not convert to just any new idea, but
accepted a model that matched reality better. He would never have considered
holding to an opinion just because it was associated with his name.

12. Aversion to superstition and an automatic preference for scientific
explanation. No scientist can know all of the experimental evidence underlying
current science concepts and therefore must adopt some views without
understanding their basis. A scientist rejects superstition and prefers science
paradigms out of an appreciation for the power of reality based knowledge.

13. A thirst for knowledge, an "intellectual drive." Scientists are addicted
puzzle-solvers. The little piece of the puzzle that doesn't fit is the most interesting.
However, as Diederich notes, scientists are willing to live with incompleteness
rather than "...fill the gaps with off-hand explanations."

14. Suspended judgment. Again Diederich describes: "A scientist tries hard not to
form an opinion on a given issue until he has investigated it, because it is so hard
to give up opinions already formed, and they tend to make us find facts that
support the opinions... There must be, however, a willingness to act on the best
hypothesis that one has time or opportunity to form."

15. Awareness of assumptions. Diederich describes how a good scientist starts by
defining terms, making all assumptions very clear, and reducing necessary
assumptions to the smallest number possible. Often we want scientists to make
broad statements about a complex world. But usually scientists are very specific
about what they "know" or will say with certainty: "When these conditions hold
true, the usual outcome is such-and-such."

16. Ability to separate fundamental concepts from the irrelevant or
unimportant. Some young science students get bogged down in observations and
data that are of little importance to the concept they want to investigate.

17. Respect for quantification and appreciation of mathematics as a language of
science. Many of nature's relationships are best revealed by patterns and
mathematical relationships when reality is counted or measured; and this beauty
often remains hidden without this tool.

18. An appreciation of probability and statistics. Correlations do not prove
cause-and-effect, but some pseudoscience arises when a chance occurrence is taken as
"proof." Individuals who insist on an all-or-none world and who have little
experience with statistics will have difficulty understanding the concept of an
event occurring by chance.
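How easily can a chance occurrence look like "proof"? A quick simulation (the parameters are arbitrary choices) estimates how often a seemingly striking pattern, a run of six heads, shows up somewhere in 100 fair coin flips:

```python
# How often does a "striking" pattern -- a run of six heads somewhere
# in 100 fair coin flips -- occur purely by chance?
import random

random.seed(42)

def has_run(n_flips, run_len):
    """True if a sequence of n_flips fair flips contains a head-run."""
    streak = 0
    for _ in range(n_flips):
        streak = streak + 1 if random.random() < 0.5 else 0
        if streak >= run_len:
            return True
    return False

trials = 10_000
hits = sum(has_run(100, 6) for _ in range(trials))
frac = hits / trials
print(f"{frac:.0%}")   # roughly half of all runs contain one
```

A majority of trials contain such a run, so meeting one in real data is no evidence of anything at all.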

19. An understanding that all knowledge has tolerance limits. All careful analyses
of the world reveal values that scatter at least slightly around the average point; a
human's core body temperature is about so many degrees and objects fall with a
certain rate of acceleration, but there is some variation. There is no absolute
certainty.
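The point about tolerance limits can be shown in a few lines (the measurements below are invented): repeated timings of the same event scatter around a mean, and the standard deviation quantifies the spread.

```python
# Repeated measurements of the same quantity scatter around a mean;
# the standard deviation quantifies the "tolerance limits".
# Hypothetical repeated timings of the same 2 m drop, in seconds:
import statistics

times = [0.641, 0.638, 0.645, 0.640, 0.636, 0.643, 0.639, 0.638]

mean = statistics.mean(times)
spread = statistics.stdev(times)
print(f"{mean:.3f} s +/- {spread:.3f} s")
```

No single timing is "the" value; the honest report is the mean together with its scatter.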

20. Empathy for the human condition. Contrary to popular belief, there is a value
system in science, and it is based on humans being the only organisms that can
"imagine" things that are not triggered by stimuli present at the immediate time in
their environment; we are, therefore, the only creatures to "look" back on our past
and plan our future. This is why when you read a moving book, you imagine
yourself in the position of another person and you think "I know what the author
meant and feels." Practices that ignore this empathy and resultant value for human
life produce inaccurate science. (See Bronowski for more examples of this
controversial "scientific attitude.")





