What do reliable results mean in science?

Can a single study be enough?

The simple answer is: hardly ever. Only rarely will a single fair therapy comparison provide findings (evidence) reliable enough to support a decision between various therapy options. Occasionally, however, it can happen. These rare individual studies include one showing that taking aspirin during a heart attack reduces the risk of premature death [1]; a second showing that the use of steroids in acute traumatic brain injury increases the risk of death (see below and Chapter 7, "What does a 'significant difference' between therapies mean?", paragraph 2); and a third in which caffeine was identified as the only drug that can prevent cerebral palsy in premature infants (see Chapter 5, "Antibiotics for premature labor", section 2). Usually, however, a single study is just one of several comparisons investigating the same or similar questions. The results from individual studies should therefore always be evaluated together with the results from other, similar studies.

The British statistician Austin Bradford Hill, one of the pioneers of fair tests of therapies, insisted in the 1960s that research reports should answer the following four questions:

  • Why was the investigation started?
  • What was done?
  • What was found out?
  • And what do the results mean anyway?

Why was the investigation started?
“Few principles are more important to the scientific and ethical validity of medical research than the principle that studies should investigate questions that urgently need answering, and that they should be designed in such a way that they can provide meaningful answers to those questions. Both goals require that relevant previous research be identified. ... An incomplete picture of the existing knowledge violates the unspoken ethical contract with study participants, namely that the information to be obtained with their help is necessary and will be useful to other people.”

Robinson KA, Goodman SN. A systematic examination of the citation of prior research in reports of randomized, controlled trials. Annals of Internal Medicine 2011; 154: 50-55.

These key questions have lost none of their importance, yet even today they are all too often insufficiently addressed or overlooked entirely. The answer to the last question - what do the results mean? - is particularly important, because it is very likely to influence both therapy decisions and decisions about future research projects.

Let us take the example of the short-term administration of an inexpensive steroid-containing drug to women at risk of giving birth prematurely. The first fair test of this therapy, reported in 1972, found that mortality among premature infants fell after mothers received such a steroid-containing drug. Ten years later, further studies had been carried out, but these were small, and their individual results were confusing because none of them systematically took into account the similar studies conducted before them. Had they done so, it would have become clear that together these studies provided very sound evidence of a beneficial effect of these drugs. Because such a systematic assessment was not carried out until 1989, most obstetricians, midwives, paediatricians and neonatal nurses were in the meantime not even aware of how effective this therapy was. As a result, tens of thousands of premature babies suffered and died needlessly. [2]

To answer the question "What do the results mean?", the findings from a single fair therapy comparison must be evaluated together with the results of other, similar fair comparisons. Publishing new study results without interpreting them in the light of other relevant results, summarized in systematic reviews, can delay the identification of both useful and harmful therapies and lead to unnecessary research.

Summarizing information from research

Rayleigh, Lord. In: Report of the fifty-fourth meeting of the British Association for the Advancement of Science; held at Montreal in August and September 1884. London: John Murray, 1884: pp. 3-23.