The Truth About Clinical Studies | Health Newsletter

Date: 05/09/2016    Written by: Jon Barron

Everything You Always Wanted to Know about Clinical Studies

Between the Baseline of Health Foundation and Baseline Nutritionals, we get thousands of questions and comments every month. Most, in one form or another, ask us to play doctor, which we cannot legally do. But second to that in number are the questions about studies, which usually take one of two forms:

  • What do we think about some new study that challenges some aspect of alternative health or that claims that some herb or nutraceutical either doesn't work or is harmful to your health?
  • Where are the independent, third-party clinical studies that back up alternative health? (The snarky implication being that alternative health is bunkum and that only pharmaceutical drugs are backed by science.)

In fact, both types of questions are actually based on a gross misunderstanding of what studies--and particularly clinical studies--are, what they mean, and how much faith we should actually place in them. Over the years, we've addressed these issues in some detail--but in pieces, here and there, some in one newsletter and some in another. Today, let's put it all together and discuss everything you've ever wanted to know about clinical studies. Let's begin by taking a look at the three types of studies predominantly used in the health and medical fields: Case Control Studies, Clinical Studies, and Cohort Studies.

Case Control Studies

In Case Control Studies (also called Retrospective Studies), subjects are not tested. Rather, data is analyzed. Specifically, the risk factors of people with a certain disease (cases) are compared with those without the disease (control). In effect, researchers study the medical and lifestyle histories of the people in each group to learn what factors may be associated with the disease or condition being examined. For example, in studying cancer, one group identified in the data may have included lots of fruits and vegetables in their diets, whereas another group did not. In fact, virtually all previous studies trying to identify the relationship between diet and cancer were Case Control Studies.

There is a problem with Case Control Studies, however.

They can produce false results--indicating, let's say, that a fruit and vegetable diet is beneficial in preventing cancer--because "they are inherently biased." Since Case Control Studies, by definition, seek to identify the factors contributing to cancer by comparing people who have the disease with those who do not, but are otherwise similar, they can easily introduce bias if they do not adequately establish that the two groups being compared are, indeed, otherwise similar. For example, a study looking at the role of fruits and vegetables in preventing cancer can easily be more likely to "select" health-conscious people as its controls, since health-conscious people tend to eat more fruits and vegetables almost by definition. This makes the controls inherently dissimilar from the case group in ways beyond just their intake of fruits and vegetables--and thus, according to some researchers, invalidates the results.

Those researchers who distrust the bias factor in Case Control Studies tend to opt for Cohort Studies instead.

Cohort Studies

A Cohort Study is a research study in which a particular outcome, such as death from cancer, is compared across groups of people who are alike in most ways but differ by a certain characteristic--smokers and non-smokers, for example, or those who eat lots of fruits and vegetables and those who don't. To explain it another way, a cohort is any group of individuals who are linked in some way or who have experienced the same significant life event within a given period. There are many kinds of cohorts, defined by disease, death, location, ethnic background, sex, etc. Any study that measures some characteristic of one or more groups of similar individuals at two or more points in time is a cohort analysis. In general, Cohort Studies attempt to identify cohort effects: are changes in the dependent variable (cancer) due, for example, to different rates of consumption of fruits and vegetables? In other words, Cohort Studies are about the life histories of sections of populations, and the individuals who comprise them, over time.

At first glance, the difference between Case Control Studies and Cohort Studies might appear subtle, but in the real world, it is significant.

In a Case Control Study, only after the data is assembled by outcome (death from cancer, for example) do the researchers seek to identify the factors that might have contributed to the medical condition by comparing subjects. In a Cohort Study, groups are separated in advance (those who eat lots of fruits and vegetables and those who don't, for example), and then their outcomes are tracked over time. If the difference in outcomes is substantial, the differentiating factor is determined to be significant. If the outcomes are similar, the factor is considered insignificant.
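
The structural difference can be sketched in a few lines of code. This is a toy illustration with invented data (the records, field names, and rates are all made up, not drawn from any real study), but it shows the two directions of grouping: a Case Control Study groups by outcome first and then compares exposures, while a Cohort Study groups by exposure first and then compares outcomes.

```python
# Toy dataset: each record is (ate_lots_of_produce, developed_cancer).
records = [
    (True, False), (True, False), (True, True), (False, True),
    (False, True), (False, False), (True, False), (False, True),
]

# Case Control (retrospective): group by OUTCOME first, then compare exposures.
cases = [r for r in records if r[1]]          # developed cancer
controls = [r for r in records if not r[1]]   # did not
exposure_in_cases = sum(r[0] for r in cases) / len(cases)
exposure_in_controls = sum(r[0] for r in controls) / len(controls)

# Cohort (prospective): group by EXPOSURE first, then track outcomes over time.
eaters = [r for r in records if r[0]]
non_eaters = [r for r in records if not r[0]]
outcome_in_eaters = sum(r[1] for r in eaters) / len(eaters)
outcome_in_non_eaters = sum(r[1] for r in non_eaters) / len(non_eaters)

print(f"Exposure among cases: {exposure_in_cases:.0%} vs controls: {exposure_in_controls:.0%}")
print(f"Cancer among eaters: {outcome_in_eaters:.0%} vs non-eaters: {outcome_in_non_eaters:.0%}")
```

Either way of slicing the same records can surface the same association; the bias question in both designs is whether the two groups being compared really are "otherwise similar."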

In general, among researchers, a Cohort Study is considered to be less prone to bias and more likely to identify causation. Case Control Studies are simpler and cheaper, and although more prone to bias, they are highly effective at "indicating" causation and pointing researchers in a direction worthy of further study.

That's the theory, anyway.

In fact, Cohort Studies are highly subject to the vagaries of interpretation and can lead to some very bizarre "peer reviewed" conclusions based on the bias inserted during cohort selection. A great example can be found in the Cohort Studies on the effectiveness of the seasonal flu vaccine that indicate a 50-90% effectiveness rate for the vaccine and that are cited as gospel by doctors all over the world and by medical experts on television. However, even a casual examination of these studies reveals their absurdity. The Atlantic published a great article eviscerating these studies, concluding that the flu vaccine Cohort Studies are rendered irrelevant by the bias of their cohort selection.1 Consider:

Because it's virtually impossible to identify who has the flu and who doesn't, the researchers identified their cohort as those who died from all causes (flu, coronary events, lightning strikes, accidents, whatever) and then broke that cohort into two groups: those who received the flu vaccine and those who didn't. Choosing the cohort as deaths from all causes introduces a bias into the study. To be fair, the researchers figured that any difference in outcomes between those who had the flu vaccine and those who didn't would, by definition, sort itself out and be the result of the flu vaccine alone, since lightning strikes would obviously be the same in both groups. Unfortunately, what is obvious is not always true. In the end, you have to be absolutely blind to miss the anomalies once you look at the published results. What were these results? These studies show a "dramatic difference" between the death rates of those who get the vaccine vs. those who don't (50-90% less, depending on the study cited). And therein lies the problem.

According to the National Institute of Allergy and Infectious Diseases, deaths from influenza account for -- at most -- 10 percent of the total deaths during the flu season. Nevertheless, the Cohort Studies found that receiving a flu vaccine reduced total deaths by 50 percent -- five times the total number of flu deaths. That's one amazing vaccine!
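
The arithmetic behind that objection is worth making explicit. A minimal sketch, using only the 10 percent ceiling quoted above (everything else is simple division):

```python
# NIAID ceiling quoted above: flu accounts for AT MOST 10% of flu-season deaths.
flu_share_of_deaths = 0.10

# Even a perfect vaccine -- one preventing every single flu death -- could
# therefore cut total mortality by at most that share:
max_possible_reduction = flu_share_of_deaths   # 10%

# The cohort studies report a 50% reduction in all-cause mortality:
claimed_reduction = 0.50

# The claim exceeds the mathematical ceiling by a factor of five.
print(claimed_reduction / max_possible_reduction)  # 5.0
```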

To put this in perspective, according to Dr. Tom Jefferson of the Cochrane Collaboration, and as quoted in The Atlantic article, "For a vaccine to reduce mortality by 50 percent, and up to 90 percent in some studies, means it has to prevent deaths not just from influenza, but also from falls, fires, heart disease, strokes, and car accidents. That's not a vaccine, that's a miracle."

And as icing on the cake, there was also no difference in mortality rates based on whether the deaths occurred in flu season or out of it. Truly, a miraculous vaccine!

So is there any valid conclusion that we can make from the flu vaccine Cohort Studies? Probably! Since people taking the vaccine had fewer heart attacks, deaths by accident, lightning strikes, etc. (but not necessarily from the flu itself), we can probably conclude that people who choose to get vaccinated are likely to be more health and safety conscious than those who don't -- and thus avoid dangerous situations and risks. But as far as the primary purpose of the studies, the efficacy of the flu vaccine, no conclusions can be made because of the bias introduced into the studies by the flawed cohort selection of people who died from all causes.
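
The "health-conscious user" explanation can be demonstrated with a tiny simulation. This is a sketch with invented probabilities, not a model of any real study: the vaccine in it does nothing at all, yet the vaccinated group still shows roughly half the all-cause death rate, purely because health-conscious people are both more likely to vaccinate and less likely to die of anything.

```python
import random

random.seed(0)  # deterministic for reproducibility

# Invented probabilities for illustration only; the "vaccine" has zero effect.
N = 100_000
people = {"vaccinated": 0, "unvaccinated": 0}
deaths = {"vaccinated": 0, "unvaccinated": 0}

for _ in range(N):
    health_conscious = random.random() < 0.5
    # Health-conscious people are far more likely to get the shot...
    vaccinated = random.random() < (0.8 if health_conscious else 0.2)
    # ...and far less likely to die of ANY cause (accidents, heart disease, etc.).
    death_risk = 0.005 if health_conscious else 0.02
    group = "vaccinated" if vaccinated else "unvaccinated"
    people[group] += 1
    if random.random() < death_risk:
        deaths[group] += 1

for group in ("vaccinated", "unvaccinated"):
    rate = deaths[group] / people[group]
    print(f"{group}: all-cause death rate {rate:.2%}")
```

With these made-up numbers the vaccinated death rate comes out near 0.8% and the unvaccinated near 1.7%--a "protective effect" of roughly 50% from a vaccine that, in the simulation, does nothing.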

The bottom line is that Cohort Studies are not necessarily less biased than Case Control Studies. And they are not necessarily likely to produce more "accurate" results. It all depends on what biases are introduced and how they affect the data and conclusions drawn.

Observational vs. Interventional

Case and Cohort Studies can be further classified by whether they are observational or interventional.2

Observational studies, also called epidemiologic studies, draw their conclusions from a sample population where the independent variable is not under the control of the researcher because of ethical concerns or logistical constraints. Observational study designs are often retrospective and are used to assess potential causation in exposure-outcome relationships and therefore influence preventive measures.

Interventional studies, also called experimental studies, are often prospective and are specifically tailored to evaluate the direct impacts of treatment or preventive measures on disease. As the name implies, these are studies in which the participants undergo some kind of intervention in order to evaluate its impact. An intervention could include a medical or surgical procedure, a new drug, or a lifestyle change.

Clinical Studies

Clinical studies (or trials) are probably the type of study you're most familiar with since they're the standard for evaluating and approving new drugs and are frequently cited in advertisements promoting those drugs. The perceived "gold standard" in clinical trials is the double blind placebo control trial,3 and that's why people keep asking for the independent, third-party clinical trials in support of alternative health--as if their absence proved that alternative health doesn't work. But in fact, as we will see, most clinical trials for drugs are anything but independent--often having only the illusion of being third party--and, as we will discuss, are undergoing a major rethink by the very peer reviewed journals that have made clinical trials the "gold standard" people think they are. So let's take a closer look.

Clinical trials are often executed in phases as part of the drug development process, in which the effects of a drug are tested in people. In clinical studies, patients voluntarily participate in trials in which they actually use the drugs being evaluated, to verify the efficacy and safety of those drugs. In most such studies, some of the volunteers are unknowingly placed in a control group and receive a placebo to provide a baseline for evaluating the "real" drug. The three phases of a Clinical Study are:

  • Phase I: Testing in healthy volunteers, known as subjects. Although the number involved may be as few as five, these tests usually run between 20 and 100 test subjects and last a minimum of six weeks. I've actually employed Phase I testing to evaluate the efficacy of several of my own formulas. (And yes, the testing company I contracted with was both independent and third party.)
  • Phase II: Testing in patients in order to demonstrate the efficacy of the drug and find the optimum dose. Larger than Phase I testing, the number of subjects involved in Phase II testing will usually reach upwards of 300.
  • Phase III: Testing in a large number of patients (sometimes 3,000 or more) in order to test the safety and efficacy of a drug and detect any side effects.

It should be noted that clinical testing is stacked against alternative health remedies by virtue of the costs and requirements involved. Full scale testing, through all three phases, can often run $100-800 million per drug candidate.4 However, since drugs are patentable and can produce revenues of tens of billions of dollars over the life of a single drug, the cost of testing is not necessarily unbearable for the large pharmaceutical companies. In fact, as much as they complain about the costs, they actually like the costs involved in testing since it limits the competition. Only the big boys can afford this level of testing.

As for alternative health remedies, the costs are simply impossible for anything beyond Phase I. Natural remedies comprised of natural ingredients (unless they contain some absolutely unique combination never before seen or some major technical innovation) are extremely difficult to patent. And in March of 2014, it got even worse. The United States Patent and Trademark Office issued new guidelines that instructed patent examiners to reject any patent claims for purified natural products.5 That means if a company wishes to prove that the combination of herbs in a blood cleansing formula such as Essiac Tea, or the Hoxsey Formula, or Baseline Nutritionals' Blood Support Formula helps cure cancer, it would cost them well over $100 million, and if proved successful, they would be unable to patent the formula. That means that every other herbal company in the world could use those studies as the basis to produce and market an identical formula without having to bear the $100 million cost of performing the same studies. They could, therefore, effectively sell the formula at a huge discount and prevent the original company from ever recovering its testing costs -- thus driving them into bankruptcy. Bottom line: without the ability to patent natural formulas, full scale clinical testing is a financial impossibility.


It should also be mentioned that clinical testing is hardly flawless -- particularly since drug companies control all aspects of the tests they pay for. This includes being able to suppress any studies that contain negative results, with no requirement that those negative results ever be published. In other words, the law allows for clinical testers to cherry pick data. (We'll talk more about this later.) Is it any wonder that we regularly learn of drugs that passed clinical testing with flying colors, that were approved by the FDA, and that down the road turned out to kill people by the thousands? Can you say Vioxx, Avandia,6 and Celebrex?7

But when it comes to clinical trials, there's much worse to explore. Let's talk about that independent, third-party thing for a bit.

Ghostwriters in the Sky

Back in 2008, as part of an ongoing investigation led by Senator Charles Grassley, Congressional letters were sent to the pharmaceutical company, Wyeth, and to DesignWrite, a medical writing company, requesting that they disclose payments related to the preparation of journal articles and the activities of doctors who were recruited to put their names on them for publication.

Not surprisingly, Wyeth denied the allegations and claimed that Senator Grassley was recycling old arguments. According to Doug Petkus, a Wyeth spokesman, "The authors of the articles in question, none of whom were paid, exercised substantive editorial control over the content of the articles and had the final say, in all respects, over the content."

Well, that would be a clean response except that the facts on the ground do not bear him out. Mr. Grassley's staff on the Senate Finance Committee released dozens of pages of internal corporate documents gathered from lawsuits showing the central, previously undisclosed role of Wyeth and DesignWrite in creating articles promoting hormone therapy for menopausal women as far back as 1997.

For example, one cited article was published as an "Editors' Choice" feature in May 2003 in The American Journal of Obstetrics and Gynecology, more than a year after a big federal study called the Women's Health Initiative linked Wyeth's Prempro, a combination of estrogen and progestin, to breast cancer.8 Nevertheless, the "Wyeth" article claimed there was "no definitive evidence" that progestin caused breast cancer and added that hormone users had a better chance of surviving cancer.9 Although the article was signed by Dr. John Eden, an associate professor at the University of New South Wales and director of the Sydney Menopause Center in Australia, it turns out that Wyeth executives not only "suggested" that Dr. Eden write such a paper in 2000, but also had the outline and draft manuscript written for him.10 When it was published, there was no mention of Wyeth's or DesignWrite's contributions or connections.

The released documents further showed that Wyeth executives actually came up with ideas for medical journal articles, titled them, drafted outlines, paid writers to draft the manuscripts, recruited academic authors, and identified publications to run the articles -- all without disclosing the companies' roles to journal editors or readers.

Although the issue of pharmaceutical ghostwriting has indeed come up in the past in association with Wyeth and Merck, the documents released by Senator Grassley provided a detailed look at the practice -- from the conception of ideas for journal articles through the distribution of reprints.

In case you were wondering, The World Association of Medical Editors says that ghost authorship, which it defines as a substantial contribution not mentioned in the manuscript, is "dishonest and unacceptable." It also happens to be dangerous, and in some cases may result in actual "murder" as many patients may be prescribed drugs based on faulty, biased information. When this happens, it is not a victimless crime. You and others like you, in fact, are the victims.

I've addressed the unreliability of medical studies many times before, but to summarize:

  • We know for a fact that some of the studies and articles in peer reviewed journals are bought and paid for.
  • You have no way of knowing which articles those are.
  • The journals themselves are incapable of controlling the problem . . . if indeed they truly want to. After all, the pharmaceutical companies are their biggest advertisers.


Medical Journals Make a Big Discovery--Not

Yes but, Jon, everything you've mentioned so far happened several years ago. Surely the medical journals, once they discovered the problems, cleaned them up, and they are no longer an issue, right? If only that were true. In fact, medical journals have yet again, in the past few months, suddenly "discovered" what we've been talking about for years. (Really, how can you "discover" something that other people have been talking about for several decades?) So in answer to the question as to whether the peer reviewed journals have cleaned up their act, the answer is: no, they have not. In any case, they appear clueless as to past history and have only "recently" discovered that pharmaceutical companies are gaming the system--manipulating the way in which studies are published in peer reviewed journals in a way that dramatically exaggerates the benefits associated with their drugs. Aren't you just shocked? I know the peer reviewed journals say they are.

To understand what's going on, we need to once again go back in time and talk about Paxil. The antidepressant paroxetine (sold as Paxil in the US and Seroxat in the UK) was patented and released by SmithKline Beecham, now known as GlaxoSmithKline (GSK), to treat depression in 1992. It quickly became a huge moneymaker for GSK, earning them approximately $2 billion a year. On the basis of a clinical trial that was funded by GSK, known as Study 329, Paxil/Seroxat was eagerly prescribed by the medical community to millions of children and teenagers around the world as a "good, safe" treatment for depressed adolescents.11 But the study was later found to be flawed. The trial failed to report the true numbers of young people who considered suicide while on the drugs. In response, in 2003, the UK drug regulator instructed doctors not to prescribe paroxetine to adolescents.12 The FDA, in a concession to GSK, thought that an updated warning on the label was sufficient.13 Notably, although the study was deemed to be flawed, it was not retracted.

Not surprisingly, given the FDA's gift to GSK, over two million prescriptions were written for children and adolescents in the United States in 2002 alone. (So much for "black box" label warnings.)  In 2012, GSK was fined a record $3 billion, in part for fraudulently promoting paroxetine. For what it's worth, GSK had $11.7 billion in Paxil sales from 1997-2006, according to data the company supplied as a part of a lawsuit. And that was just in the U.S. for a nine-year period. $3 billion was a slap on the wrist, as they say.

Incidentally, as I pointed out in a previous article, the original manuscript for Study 329, as published in the Journal of the American Academy of Child & Adolescent Psychiatry, was not actually written by any of the 22 named authors but by an outside ghostwriter hired by GSK. In addition, "the paper's lead author--Brown University's chief of psychiatry, Martin Keller--had been the focus of a front page investigation in the Boston Globe in 1999 that documented his under-reporting of financial ties to drug companies." Even worse, the American Academy of Child and Adolescent Psychiatry refused to intervene and retract the paper once it was discovered to be fraudulent. And Brown University, the home of a number of the study's authors, including its lead author, chose to keep silent over its faculty's involvement in Study 329.

Anyway, since then, Study 329 has become the poster child for what is now known as "outcome switching"--changing a study's outcome criteria from the original objectives, because they fail testing, to more "agreeable" outcomes that seem to indicate success. In Study 329, for example, Paxil proved no better than a placebo at improving depression based on any of the original criteria for which it was tested. So the researchers came up with a whole new set of 19 criteria for measuring success halfway through the testing--criteria that they felt the drug could succeed with. Astoundingly, even when cherry picking criteria after the fact to favor the drug, Paxil still failed to produce results on 15 of those criteria. In summary, Paxil failed on every single one of the initial criteria used to evaluate it; it failed on 15 of the 19 criteria that were cherry picked in the middle of the study to guarantee its success; but it did show some benefit on four of the cherry picked criteria. In the paper--notably not written by the academics listed in the journal, but by a dubious ghostwriter hired by GSK, and published in the peer reviewed Journal of the American Academy of Child & Adolescent Psychiatry--those four "successes" were presented as if they had been the only measures of success ever evaluated. A 100% success rate!
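
Why is mid-study criteria switching so effective at manufacturing success? Because with enough outcomes, pure chance guarantees a few "hits." Here's a sketch (invented numbers, not a reanalysis of Study 329) of a drug with zero real effect being tested against 19 outcomes, where each outcome has the conventional 5% false-positive rate:

```python
import random

random.seed(1)  # deterministic for reproducibility

def run_trial(n_outcomes: int, alpha: float = 0.05) -> int:
    """Simulate testing a completely useless drug against n_outcomes
    endpoints; each has an `alpha` chance of a false-positive 'success'.
    Returns how many endpoints came up 'significant' by chance."""
    return sum(random.random() < alpha for _ in range(n_outcomes))

trials = 10_000
hit_at_least_once = sum(run_trial(19) >= 1 for _ in range(trials)) / trials
print(f"P(at least one 'significant' outcome out of 19): {hit_at_least_once:.0%}")
```

Analytically the probability is 1 - 0.95^19, about 62%. Report only the hits and bury the misses, and a drug that does nothing looks like a winner roughly three times out of five.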

And make no mistake, this is not limited to Study 329, which brings us up to what's happening now.

Outcome switching is the hot topic of the day in the medical community, as researchers are now finding that they are unable to reproduce results in a number of studies looking at everything from psych drugs to cancer treatments. You would think that, since this was exposed some 13 years ago, these kinds of games would no longer be possible--especially since serious trials are now required to register what they're studying before they begin, detailing exactly what they will be investigating, how they will go about it, and exactly what criteria they will use to conduct their evaluation. You would think that would take care of the problem! But two studies published in 2015 found that it ain't necessarily so. One study, published in BMC Medicine, found that 31% of the studies they looked at showed "substantial" variability between what the studies promised to do and what they actually looked at.14 And a second study, published in PLOS ONE, found that 18% of the studies they looked at altered their primary outcomes and 64% had altered their secondary outcomes.15 Or as the study's authors summarized it, "Discrepancies between registry entries and published articles for primary and non-primary outcomes were common among trials published in leading medical journals."

And just this year, the COMPare Project published the results of its study, which examined every trial published in the top five medical journals (New England Journal of Medicine, the Journal of the American Medical Association, Lancet, Annals of Internal Medicine, and BMJ) between October 2015 and January 2016.16 As they stated:

"We compared each clinical trial report with its protocol or registry entry. Some trials reported their outcomes perfectly. For the others, we counted how many of the outcomes pre-specified in the protocol or registry were never reported. We also counted how many new outcomes were silently added.

"Here's what we found.


  • Trials that were perfect: 9 (of 67)
  • Pre-specified outcomes not reported: 300
  • New outcomes silently added: 357

"On average, each trial reported just 62.1% of its specified outcomes. And on average, each trial silently added 5.3 new outcomes.

"When we detected unreported or added outcomes, we wrote a letter to the journal pointing them out. We tracked which journals published our letters -- and which did not."



  • Letters unpublished after 4 weeks
  • Letters rejected by editor
In summary, of the 67 studies they looked at, only nine were perfect. The other 58 were flawed and had switched outcomes without providing justification for those switches. In total, the studies did not report 300 relevant outcomes that actually had been uncovered by the studies, while at the same time adding 357 outcomes that were never part of the original protocol.

So once again, returning to the question we asked earlier: surely the medical journals, once they discovered the problems, cleaned them up, and they are no longer a factor, right? The answer, as of today, appears to be a resounding: NO!


Now, look. I'm not saying that clinical trials are useless. They are the foundation of modern medicine and modern science. Without them, we might still be treating diseases by trying to regulate bad humors and expel evil spirits. But, and this is very, very important to understand: clinical trials are not the be-all and end-all of scientific knowledge. They often contain serious flaws and biases, not to mention being subject to the vagaries of human greed and duplicity. In the end, individual clinical trials should be considered merely as divining rods that "possibly" point us in the direction of important conclusions. Certainly, multiple studies that come to the same conclusion are a better indicator than individual studies--but even then, the result is not guaranteed. If the same bias is carried from study to study (confusing synthetic vitamin E with full complex, natural vitamin E, for example), you still end up with a flawed conclusion, just reached multiple times. But once we understand the limitations of the different types of studies, we can see that decades, centuries, and even millennia of anecdotal information about herbal and nutraceutical remedies can be equally effective in "possibly" pointing the way to important healing discoveries.


Oh! And it's probably worth remembering, as we have discussed before, that only about 15% of medical treatments have ever been validated by clinical trials--as flawed as those trials might be. And, even more telling, according to studies conducted by the medical community itself, only about 11% of physicians even rely on those studies--flawed or otherwise--when prescribing treatments for their patients.


  • 1. Shannon Brownlee and Jeanne Lenzer. "Does the Vaccine Matter?" The Atlantic. November 2009. (Accessed 18 Apr 2016.)
  • 2. Matthew S. Thiese. "Observational and interventional study design types; an overview." Biochem Med (Zagreb). 2014 Jun; 24(2): 199--210.
  • 3. Shobha Misra. "Randomized double blind placebo control studies, the "Gold Standard" in intervention based studies." Indian J Sex Transm Dis. 2012 Jul-Dec; 33(2): 131--134.
  • 4. Robert Fee. "The Cost of Clinical Trials." Drug Discovery & Development Magazine. 09/06/2007. (Accessed 10 Apr 2016.)
  • 5. "The New Patent Policy on Natural Products Is a Game Changer for Universities and Life Sciences Companies." Bradley Arant Boult Cummings LLP Intellectual Property News. 9/16/2014. (Accessed 10 Apr 2016.)
  • 6. Associated Press. "Diabetes drug raises fears of another Vioxx." 5/22/2007. (Accessed 10 Apr 2016.)
  • 7. Diedtra Henderson. "How safe is Celebrex?" February 25, 2007. (Accessed 10 Apr 2016.)
  • 8. "Proposed Collection; Comment Request; Women's Health Initiative Observational Study." Federal Register. 11/07/2005. (Accessed 10 Apr 2016.)
  • 9. John Eden. "Progestins and breast cancer." American Journal of Obstetrics & Gynecology, Volume 188, Issue 5, 1123-1131.
  • 10. Duff Wilson. "Wyeth's Use of Medical Ghostwriters Questioned." December 13, 2008. (Accessed 10 Apr 2016.)
  • 11. Keller, Martin B. et al. "Efficacy of Paroxetine in the Treatment of Adolescent Major Depression: A Randomized, Controlled Trial." Journal of the American Academy of Child & Adolescent Psychiatry, Volume 40, Issue 7, 762-772.
  • 12. Sarah Boseley. "Mood drug Seroxat banned for under-18s." The Guardian. 11 June 2003. (Accessed 17 Sep 2015.)
  • 13. "Questions and Answers on Antidepressant Use in Children, Adolescents, and Adults: May, 2007." FDA May 2007. (Accessed 17 Sep 2015.)
  • 14. Jones, Christopher W., Keil, Lukas G., et al. "Comparison of registered and published outcomes in randomized controlled trials: a systematic review." BMC Medicine 2015 13:282.
  • 15. Padhraig S. Fleming, Despina Koletsi, Kerry Dwan, Nikolaos Pandis. "Outcome Discrepancies and Selective Reporting: Impacting the Leading Journals?" PLOS ONE. May 21, 2015. 10:5.
  • 16. "COMPare: Tracking Switched Outcomes in Clinical Trials." Compare Trials. (Accessed 18 Apr 2016.)
