The Scientific Method

EP Editorial Staff | July 1, 2009


“The human mind is prone to suppose
the existence of more order and
regularity in the world than it finds.”

…Sir Francis Bacon, 1620

This article is based on an excerpt from Chapter 2, “The Fundamental Method,” of the new book, Scientific Method: Applications in Failure Investigation and Forensic Science. It is used with the kind permission of the publisher, CRC Press (www.crcpress.com).

There are historically two versions of the basic scientific method. The first and older version involves collecting verifiable facts and observations about an effect, event, item or subject. A person then assesses these facts and observations and posits a general proposition consistent with the data. As new facts and observations accumulate, the general proposition may be modified or changed to maintain consistency with the known body of verifiable information.

For instance, suppose data concerning the wingspan, weight and flying ability of various bird species were collected and correlated. Based upon these correlations, generalizations might be drawn about the ratio of wingspan to weight necessary to achieve long-distance flight, medium-distance flight, short-distance flight or, perhaps, no flight at all.
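Such an empirical generalization can be sketched in a few lines of code. The species data, ratio cutoffs and the `flight_class` rule below are all invented placeholders, not figures from the book; they merely illustrate how an empiricist might turn a correlation into a working classification.

```python
# Hypothetical bird data: (species, wingspan in meters, weight in kilograms).
# These numbers are illustrative placeholders, not real measurements.
birds = [
    ("albatross", 3.0, 8.0),
    ("pigeon", 0.7, 0.35),
    ("penguin", 0.7, 4.5),
]

def flight_class(wingspan_m, weight_kg, long_cutoff=0.3, short_cutoff=0.2):
    """Crude empirical rule: classify flight ability by wingspan/weight ratio.

    The cutoffs are made-up thresholds standing in for the kind of
    correlation an empiricist might derive from a large data set."""
    ratio = wingspan_m / weight_kg
    if ratio >= long_cutoff:
        return "long-distance"
    if ratio >= short_cutoff:
        return "short-distance"
    return "flightless"

for species, span, weight in birds:
    print(species, flight_class(span, weight))
```

The point is not the particular cutoffs but the direction of inference: the rule is read off from the data, and it would be revised the moment new observations contradicted it.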

This basic method, sometimes called empiricism, was advocated by Lord Francis Bacon, who lived in the late sixteenth and early seventeenth centuries. He should not be confused with Friar Roger Bacon, who also had a hand in developing the scientific method several centuries before Lord Bacon. The role each played in developing the scientific method will be noted in the next chapter.

Fundamentally, empiricism is the tenet that knowledge is derived from measurements, verifiable facts, observations and actual—as opposed to spiritual or supernatural—experiences, feelings or beliefs. Furthermore, as they are collected, these things are to be reported and documented without being modified or interpreted by preconceived notions.

For example, if the ignition temperature of a certain kind of paper is reported in reference books to be 451 F, this fact can be verified by anyone, in any laboratory, by recreating the same conditions as specified in the reference and measuring the temperature when ignition occurs. This is an example of what is meant by the terms repeatable and verifiable.

If an investigator, however, finds that three samples of the same kind of paper listed in the reference book ignite variously at 454 F, 450 F and 447 F, after ensuring that the specified conditions have been faithfully recreated and that the differences are not due to instrumentation variances, then these are the ignition temperatures that should be reported.

But, if our dauntless investigator has a preconceived notion that the correct answer must be 451 F, since this is the value listed in the reference book, then he might be tempted to report that he found that the ignition point was indeed 451 F, and to ignore the small differences he actually observed. The investigator might presume that he must have done something slightly wrong to have missed, but come close to, the correct answer. After all, 454 F, 450 F and 447 F are all close to 451 F, and taken together, the three values average to just over 450 F.
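The arithmetic here is worth checking directly: the mean of the three readings is about 450.3 F, close to but not exactly the book value, and the readings span a 7 F range. A short sketch of sound reporting practice:

```python
# The three observed ignition temperatures (degrees F) from the example.
readings = [454.0, 450.0, 447.0]

mean = sum(readings) / len(readings)
spread = max(readings) - min(readings)

print(f"mean   = {mean:.1f} F")   # about 450.3 F, near the book value of 451 F
print(f"spread = {spread:.1f} F")  # 7.0 F of real, reportable variation

# Empiricist practice: report the raw readings and the spread,
# not a single value nudged to match the reference book.
```

Reporting only the nudged single value discards exactly the information (the spread) that later turns out to matter.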

The outcome of this is that the investigator nudges the data to fit his preconceived notion of what he expected to find. This is called confirmation bias. In doing so, he fails to report what was plainly in front of his eyes: that the kind of paper he tested has small but measurable variations in ignition temperature from that reported in the reference book.

This variation in ignition temperature may become important to the investigation in ways the investigator cannot imagine at this point. Unfortunately, in his eagerness to confirm the presumed correct value in the reference book and not appear wrong, the investigator perhaps threw away a very useful nugget of knowledge and compromised sound laboratory practice. A good rule for field or laboratory work is to report what is actually observed, not what a person thinks should be observed or concludes that he/she has observed.

Consider the following story researched and reported by Lisa Jardine in the April 28, 2006, online BBC News. In 1664, sea trials were done by the British Navy to test the use of pendulum clocks to determine longitude. The clocks were taken on a nine-month voyage southward down the west coast of Africa. Upon their return to London in 1665, the leader of the expedition, Captain Robert Holmes, reported that the clocks had worked so well, they had actually saved the lives of the crews of the four ships.

According to Holmes, on the return journey, he and the expedition had to sail several hundred miles westward in order to catch a favorable wind. In doing so, all four ships had run low on drinking water—so low, in fact, that the crews thought they were in danger of running out of water. While their traditional calculations told them the expedition was far from any freshwater supply point, based on the pendulum clocks, they were only a short distance west of Fuego, an island in the Cape Verde group. Thus, the expedition sailed due east and made landfall at Fuego the next day, as predicted. News of the clocks’ success in accurately determining longitude took London by storm, impressing the Fellows of the Royal Society. Orders for the devices began to come in.

However, the pendulum clock’s inventor, Christiaan Huygens, had doubts about the Holmes claim. He didn’t think his clock could be so accurate. Ms. Jardine reports that Huygens said the following in a letter shown to the Royal Society:

“I have to confess that I had not expected such a spectacular result from these clocks. I beg you to tell me if the said Captain seems a sincere man whom one can absolutely trust. For it must be said that I am amazed that the clocks were sufficiently accurate to allow him by their means to locate such a tiny island.”

So, the Royal Society asked Samuel Pepys to check the evidence supporting the claim—especially the captain’s ship log. Pepys’s scrutiny of the matter found cause for Huygens’s doubts. The pendulum clocks had not been any more accurate than the traditional calculations. The sailors had simply been lucky. Captain Holmes had falsified the evidence in order to please the Fellows of the Royal Society. In other words, he told them what they wanted to hear and what the evidence appeared to support.

It is commendable that Huygens was scrupulously logical. Even though he had invented them, he knew that his pendulum clocks could not be as accurate as was being claimed. In fact, his pendulum-type ship’s clocks never proved particularly valuable in accurately determining longitude. An effective ship’s clock for determining longitude was not invented until the mid-1700s, by John Harrison. Thus, the moral of this story is that even when faced with agreeable evidence, it is important to maintain a questioning attitude. How many sailors might have died from fatal navigation errors if the clocks had been presumed to be accurate?

Empiricism is contrasted with rationalism. Rationalism is the tenet that knowledge can be derived from the power of intellect alone. In other words, knowledge can be derived by deductive reasoning from basic principles that people sometimes take for granted, without physical proof or justifying evidence.

Sometimes, these basic principles are said to be self-evident; that is, they are considered to be so obvious that no proof is required. Alternately, they are characterized as self-evident because the principles are so fundamental, their proof would constitute a tautology—they can only be proven by referring to themselves. Notions about beauty, symmetry, religion, ideology, truth, good and evil and philosophy are often used to justify self-evident principles.

On a practical level, most notions about beauty, symmetry, etc. do appear to change with time and vary from culture to culture. Despite protestations to the contrary from “true believers,” such notions are not held absolute, even within the same culture. In literate cultures, the evolution of principles held to be absolute is well documented in the historical record. In other cultures, it is indirectly documented through their art, mythology and traditions.

After 2000 years of acceptance, for example, Euclid’s self-evident principles of geometry were eventually challenged by 19th-century mathematicians as NOT really being self-evident. The result was non-Euclidean geometry, which was later used to describe the space-time relationships embedded in relativity. Euclid’s geometry was simply one among many.

Unlike empiricism, rationalism allows preconceived notions to provide a framework into which new facts and data are fitted. Some fundamental principle is usually considered inviolate—the principle can’t be changed or modified because it is assumed to be absolutely true. Consequently, in order for the fundamental principle to remain inviolate, all data, facts, and observations must be made to be consistent with the fundamental principle.

Data that appear to contradict the assumed fundamental principle have to be discredited or must be explained away by the invention of new rules. Sometimes the new rules apply only to that particular fact or group of facts. In the preceding example involving the ignition temperature of paper, the investigator did just that: he discredited his own experimental data when it did not agree with the value he expected. He presumed he had made some small mistake that caused the variation and then nudged the figures to make his laboratory results agree with the value listed in the reference book.

Consider the following simple example about disease to highlight the difference between empiricism and rationalism. Empiricists might note the statistical distribution of a disease, the various age groups it affects and their occupations, when it occurs, where it occurs, etc. In this way perhaps a useful correlation about the disease may be learned that allows the disease to be avoided, prevented, predicted or perhaps cured. (This was how the connection between yellow fever and mosquitoes was made.)

Rationalists, though, might propose as a fundamental principle that disease is a malady sent by God to punish people for wrongdoing. Even if it appears that good people are getting sick, which would contradict the basic principle if taken at face value, rationalists might conclude that disease is striking otherwise good people because they have done bad things that only God knows about. Disease, therefore, provides an excellent way for mortals to distinguish between secret sinners and people who are genuinely good.

Thus, the observation that seemingly good people are also struck down by the disease is explained away by the invention of a special rule. Note that the new rule, like the fundamental principle, does not lend itself to being either proven or disproven. It is a matter of faith, not evidence. Further, because God is presumed to be the controlling factor with respect to disease, some people might conclude that there is little point in attempting to do anything to understand or avoid disease. Mere mortals should not interfere with God’s will.

This was the basic premise of Reverend Edmund Massey’s famous (or infamous) 1722 sermon, The Dangerous and Sinful Practice of Inoculation. As a result of opposition by various religious leaders to smallpox inoculation, thousands of Europeans and American Colonists died, people who could otherwise have been saved. Further, despite the reported successes of the Jenner method of vaccination, which removed the risks associated with the live inoculation method, in 1798 various clergymen in Boston formed the Anti-Vaccination Society. It called on people to refuse vaccination because it was “bidding defiance to Heaven itself, even to the will of God,” and that “the law of God prohibits the practice.” Jenner was still being regularly vilified from the pulpit at the University of Cambridge in 1803.

Lest some think that the preceding notion about disease is absurd or just an academic discussion of a belief no longer harbored by modern civilization, please scan the Internet under topics such as disease and punishment or disease as punishment by God.

In short, empiricists accept what is observed at face value and try to find order to explain it. Rationalists propose up front what the order should be and then sort their observations and facts to fit within that framework. More will be said about this later when a priori and a posteriori reasoning are discussed.

The second, more modern version of the basic scientific method also involves collecting data about a particular effect and looking for a principle common to the observations. After developing a working hypothesis consistent with the available data, however, the hypothesis is then applied to anticipate additional consequences or effects that have not yet been observed. In short, the hypothesis is tested to determine if it has predictive value.

These additional consequences or effects are then sought out: perhaps by experimentation, by additional observations or by reexamining evidence and data already collected. Often, a good working hypothesis allows one to predict the presence of evidence or effects that were present from the beginning—but overlooked, misunderstood or considered unimportant during the initial evidence-gathering period.

If the additional consequences or effects predicted by the hypothesis are confirmed, this does not automatically verify the hypothesis; it is only a tentative confirmation. Even when a prediction and its subsequent verification support the hypothesis, do not contradict it, and demonstrate that it has useful predictive ability, the hypothesis is still not conclusively proven.

Verification of a hypothesis occurs when observations, experiments and data that could show the hypothesis to be false fail to do so. This is the principle of falsification. Thus, there must be not only verifiable evidence that supports a hypothesis, but also no verifiable evidence that falsifies it.
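The asymmetry of falsification can be sketched as a simple check. The prediction, readings and tolerance below are invented for illustration (reusing the paper-ignition figures from earlier); the shape of the logic is what matters: one contradicting observation falsifies, while any number of agreeing observations only tentatively confirm.

```python
def survives_falsification(predicted, observations, tolerance):
    """Return True only if NO verifiable observation contradicts the
    prediction within the stated tolerance.

    A single observation outside the tolerance falsifies the hypothesis;
    agreement alone never conclusively proves it."""
    return all(abs(obs - predicted) <= tolerance for obs in observations)

# Hypothetical check: does "this paper ignites at 451 F" survive a set of
# ignition tests, allowing +/- 5 F for instrument and sample variation?
print(survives_falsification(451.0, [454.0, 450.0, 447.0], 5.0))  # True
print(survives_falsification(451.0, [454.0, 450.0, 439.0], 5.0))  # False
```

Note the use of `all`: the hypothesis must clear every observation that could falsify it, which is exactly the "reasonable and unbiased effort to discredit" described here.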

The last point is important and is often the most forgotten or neglected step in the scientific method. A reasonable and unbiased effort must be expended to discredit the hypothesis. Verifiable evidence that appears to falsify a hypothesis should not be overlooked, neglected or dismissed. If a hypothesis truthfully and faithfully explains the phenomenon or event that occurred, it will stand against tough, critical scrutiny. Conclusions based upon inductive reasoning alone may be flawed unless they are well tested by falsification. MT

Randall Noon has been sorting out industrial failures and other kinds of mayhem for more than 30 years. A popular contributor to this publication, he is the author of several articles and texts and a licensed professional engineer in several states.



