It would be fair to say that Wikipedia is a peer-reviewed source of information, although many of the reviewers are not experts. These reviewers can feel strongly about an issue and are willing to devote the time to making their opinion count in an article. In my view, Wikipedia is an excellent source of basic information on topics that are not controversial. The main advantages of Wikipedia over review articles published in scientific journals are the following: (a) Wikipedia articles are available free of charge (although some scientific articles are freely available, too); (b) they usually cover all possible aspects of a topic; and (c) they are written in lay language. The main disadvantages of Wikipedia compared to review articles published in scientific journals are these: (1) Wikipedia articles occasionally contain inaccuracies, and (2) the quality of sources is sometimes poor (an article may cite key facts from a book, website, or another non-peer-reviewed source). In my experience, most reviewers of scientific articles do not allow authors to cite Wikipedia as a source of information. For a scientific article’s essential information (definitions, argumentation in favor of a theory, and the rationale for a study), authors can cite only studies published in peer-reviewed journals. This raises another important issue. If an author reports research data in a book, and the whole study is based on information from other books and similar non-peer-reviewed sources, you can disregard this information as untrustworthy. Citing peer-reviewed sources is one of the main criteria for considering scientific information reliable.
The sixth factor is the track record of the author or authors of the experiment. You need to be skeptical of authors with a history of fraud (accusations or convictions). Authors who have no scientific publications and no academic degrees are unlikely to produce valid research data. Authors who are on the blacklist of the website QuackWatch.org usually fall into the category of unscrupulous peddlers of dangerous or untrustworthy information or of useless health-related products. QuackWatch is a website that contains information on quackery and health fraud and helps consumers make intelligent decisions about various unconventional and conventional medical treatments. The website contains a lot of useful information, although I occasionally disagree with articles posted on QuackWatch. For example, the website used to list the late Robert Atkins as a quack, despite several studies having shown that his diet causes the fastest weight loss among many diets. Nonetheless, overall the information presented on QuackWatch is sound and trustworthy.
Finally, the credibility of a scientific study’s results is higher when the authors do not have financial conflicts of interest (also known as competing financial interests) associated with publication of the study. Let’s say a research article demonstrates beneficial effects of a drug, and the authors are employees of the pharmaceutical company that makes the drug. This article will be less credible than a similar one published by academic researchers with no financial ties to the drug company. Research shows that studies funded by the pharmaceutical industry are more likely to report beneficial effects of a drug produced by the sponsor and to underreport adverse effects [34]. For example, suppose a research article compares the effectiveness of drugs A and B. The article is likely to show that drug A is better than drug B if the funding for the study comes from drug A’s manufacturer. Another group of investigators, whose research received funding from the manufacturer of drug B, publishes a similar study, but this paper reports that drug B is superior to drug A. Both articles can use rigorous statistics and methodology, pass peer review, and be accepted for publication in respectable journals. There are real-life examples of this kind of biased research [35], known as the “funding bias.” Even double-blind randomized controlled trials are not always immune to this bias. In 2007, a research division of the pharmaceutical company Eli Lilly & Co. published a promising study of a novel antipsychotic drug in a prestigious scientific journal [935]. This double-blind randomized controlled trial included 196 schizophrenic patients, and the results were both clinically and statistically significant. Many in the scientific community viewed this study as revolutionary because it reported a new class of antipsychotic drugs, based on a mechanism different from that of all previous classes of these drugs. Sometime later, in April 2009, Eli Lilly announced that the second trial of this drug in schizophrenia failed to show benefits beyond placebo. Thus, either the results of the first clinical trial were a rare coincidence or the funding bias had crept into that first trial. We can conclude that several independent groups of investigators should repeat an experiment and reproduce the findings in order to prove the validity of any type of scientific result. Note that the presence of competing financial interests does not mean that the reported results are necessarily biased; it means only that there is a chance that the funding bias is present.
An author’s royalties from a book, for example, do not by themselves constitute a competing financial interest. Authors of scientific articles do not receive royalties from the publisher’s sales, but these authors nonetheless have clear economic rewards from publishing. For someone working in academia, the amount of published work determines job promotions, access to research funding, and the size of one’s salary. Health care authorities do not consider these economic motives conflicts of interest. On the other hand, if a research paper or book presents in a favorable light some product that the author sells, this is a competing financial interest. So is promotion of a product made by a company in which the author is a shareholder.
To summarize, an experiment will produce strong evidence of a beneficial effect of a treatment (a drug, diet, or medical procedure) if: (1) the study includes a control group; (2) it is randomized and blinded (placebo-controlled where possible); (3) the number of test subjects is large enough for the results to be statistically significant; (4) the results are published in a peer-reviewed scientific journal; (5) the authors have a credible track record and no financial conflicts of interest; and (6) independent groups of investigators have reproduced the findings.
According to these criteria, most of the evidence from my self-experimentation presented in this book is weak. Let’s say a single participant reports using the mental clarity questionnaire twice: before embarking on the modified high-protein diet and after three weeks of the diet. This study has a control group: the subject serves as his own control when tested without treatment, that is, before the diet. The study, however, is not randomized and not blinded, which is a minus. Even though a placebo control is not always possible in nonpharmacological studies, the author could still have conducted a blinded study with a well-designed control diet; he did not. He is reporting the results in a book, not in a scientific journal, which is a minus. The book does not direct readers to buy any goods or services from the author, and all proposed techniques are non-proprietary (not protected by patents): a plus. The author has some background in biomedical sciences, and QuackWatch.org has not blacklisted him (at least not yet): a plus. The number of test subjects is small (one), and therefore the results are not statistically significant. But the author claims that if he repeats the experiment 10 times or more, it produces the same result. This is somewhat (but not exactly) similar to testing the diet once on 10 different test subjects. These results are more convincing than a single trial on a single person, but the evidence is still weak.
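To see why repeated single-subject results remain weak evidence, here is a minimal sketch of how such repetitions could be analyzed with a simple sign test. The counts are hypothetical and this is not the author’s actual analysis; it only assumes that each repetition of the diet experiment is scored as either “improved” or “not improved.”

```python
# A sketch, not the author's method: one-sided sign test for hypothetical
# repeated n-of-1 trials. Under the null hypothesis that "improved" and
# "not improved" are equally likely, the p-value is the probability of
# seeing at least this many improvements by chance.
from math import comb

def sign_test_p_value(successes: int, trials: int) -> float:
    """P(X >= successes) when X ~ Binomial(trials, 0.5)."""
    return sum(comb(trials, k) for k in range(successes, trials + 1)) / 2 ** trials

print(round(sign_test_p_value(9, 10), 3))  # ~0.011: 9 "improvements" out of 10 repetitions
print(round(sign_test_p_value(2, 3), 3))   # 0.5: a few repetitions prove very little
```

Even a small p-value in such a calculation would only suggest that the effect in this one person is unlikely to be random noise; it says nothing about whether the diet would work for anyone else, which is why the evidence stays weak.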
In conclusion, it would be relevant to mention epidemiological studies, a different category of study on human subjects. A detailed discussion of epidemiology is outside the scope of this book (and I am not an expert either). In brief, an epidemiological study involves no actual experiment; instead, it explores in various ways existing statistics about some segments of the population. The goal is to identify correlations (associations) among factors; for example, between smoking and life expectancy or between consumption of red meat and cancer. Only a tiny minority of these studies can show that one factor is likely to cause the other: the studies that satisfy many or all of the so-called Bradford Hill criteria [36]. Most show a correlation, not causation. (Reference [37] contains a list of epidemiology-based lifestyle recommendations that were later refuted by randomized controlled trials.) The mass media occasionally report epidemiological studies in misleading ways. A journalist may report a statistical correlation as a causal relationship, even though the authors of the research paper draw no such conclusion. For example, a study may show a correlation between some personal habit and some disease; that is, people who have the habit are more likely to have the disease. Your evening TV news program may report this as “so-and-so habit can cause such-and-such disease” or “such-and-such habit increases the risk of so-and-so disease.” In actuality, if the research article does not prove that the habit is likely to cause the disease, then the reported correlation does not necessarily imply causation. The habit and the disease may have no causal relationship whatsoever. A third, unidentified factor may be the real cause of both. Alternatively, the disease may have symptoms that make patients more likely to adopt the habit; in other words, it is the disease that causes the habit, not the other way around. Tobacco smoking and schizophrenia are a good example: as many as 80 to 90% of schizophrenic patients are smokers, but to date it remains unclear which causes which. Some studies suggest that smoking is a form of self-medication with nicotine, and thus it is possible that schizophrenia leads patients to smoke.
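The “third factor” scenario is easy to demonstrate with a short simulation. The sketch below is illustrative only; the variable names and probabilities are invented and do not come from any real study. By construction, the habit has no effect on the disease, yet people with the habit show a higher disease rate because an unmeasured trait raises the probability of both.

```python
# A sketch with made-up numbers: a hidden confounder creates a correlation
# between a habit and a disease even though neither causes the other.
import random

random.seed(0)
n = 200_000
with_habit = [0, 0]       # [people with the habit, of whom diseased]
without_habit = [0, 0]    # [people without the habit, of whom diseased]

for _ in range(n):
    confounder = random.random() < 0.3                          # unmeasured trait
    habit = random.random() < (0.6 if confounder else 0.2)      # trait raises habit odds
    disease = random.random() < (0.3 if confounder else 0.05)   # trait raises disease odds
    group = with_habit if habit else without_habit
    group[0] += 1
    group[1] += disease    # booleans count as 0 or 1

print("disease rate with habit:   ", round(with_habit[1] / with_habit[0], 3))     # ~0.19
print("disease rate without habit:", round(without_habit[1] / without_habit[0], 3))  # ~0.09
```

A survey of such a population would find a genuine statistical association, and only a randomized controlled trial (or a study satisfying criteria such as Bradford Hill’s) could tell whether the habit actually causes the disease.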
Keep in mind that a statistical correlation between a habit and a disease is necessary, but not sufficient, for the existence of a causal relationship between them. If there is no statistical association between a habit and a disease, then there is a 99% chance that the habit does not cause the disease. On the other hand, if the statistical correlation does exist, then the habit may cause the disease. A randomized controlled trial will be necessary to either prove or refute this hypothesis. We can conclude that one should exercise caution when interpreting the results of epidemiological studies.