If you want to make important decisions about your health, you need to understand the evidence hierarchy.
We are surrounded by misinformation, especially when it comes to health. Unless you have a strong background in science, it can prove difficult to make health decisions, such as:
- Are keto diets good or bad for you?
- Should you eat vegetable oils or not?
- Is gluten inflammatory?
- Do the new COVID-19 vaccines affect your genes?
- Are vegan diets safe?
Ask these questions in a public forum, and you will get a host of answers, most of which contradict each other. Also, most answers will contain little more than the opinions and experiences of the average person.
Ideally, someone will offer something more useful than opinions—they’ll provide evidence of their claims.
Evidence is important if you want to make good health decisions. And it’s necessary, even when the advice comes from experts or well-respected people.
If some famous actor spouts off on Twitter about the benefits of intermittent fasting, you know to take that with a grain of salt. But when someone with extensive experience and fancy degrees writes a book about intermittent fasting, filled with evidence… well, it’s far more convincing.
Except for one problem: as great as evidence is, not all evidence is created equal.
Good vs. Poor-Quality Evidence — Two Examples
On a health-related Facebook page (with tens of thousands of followers), somebody claimed that the mRNA vaccines were “gene therapy.” I stated they’re NOT gene therapy and posted a Medical News Today article explaining how they actually work. In response, a vaccine opponent posted a BitChute video featuring a scientific expert who had worked as a vaccine researcher.
Who’s more trustworthy? An expert who’s done research on the vaccines, or an article from MNT? They may seem equally compelling, but as you’ll learn below, one easily trumps the other in terms of evidence quality.
Debate exists on the healthiness of vegetable oils. Mainstream medicine touts their healthy qualities, but a group of other medical professionals claim they’re bad for you. Both will cite a huge slew of legitimate, peer-reviewed scientific studies supporting their case, so whom do you believe?
You believe the side with the superior evidence, and as you’ll see, one of these sides clearly has it.
I’ll provide the answer to both of these examples later. First, let’s discuss how we make these kinds of decisions.
When presented with evidence for any kind of health-related claim, you have to examine what kind of evidence you’re dealing with… and where it falls on the hierarchy of evidence.
The Evidence Hierarchy
If someone manages to provide evidence for their claims, cheers to them. But in the end, how much confidence you have in the claim depends on the type of evidence. Here are the most common kinds of evidence you’ll see, from weakest to strongest.
Anecdotal Evidence
This refers to people offering stories from their individual experience. For example, a friend lost weight and felt great on a keto diet and recommends it to you.
As compelling as a true story is, anecdotes are extremely weak forms of evidence. The results from one person are just that: one person. And we have no idea how that person approached their diet, what they ate, how much they ate, whether they exercised, or the state of their general health.
Expert Opinion
This one may seem surprising, especially if the expert has credentials (a PhD or MD, for example) and experience in the area of interest. But an expert is only as good as their evidence.
Do they cite scientific evidence at all, or do they merely provide logical-sounding explanations? For example, a book I read on the paleo diet provided a persuasive narrative for why humans aren’t designed to eat grains, then offered some dramatic anecdotes to buttress the argument.
Convincing stuff, until you learn that scientific evidence overwhelmingly shows that whole grains are good for your health.
If the expert cites actual studies, a higher form of evidence, where do they fall in the hierarchy? Let’s find out.
Case Studies
A case study is a detailed study of a single person, usually someone with a specific illness that researchers want to examine in depth. Unlike the one-person anecdote, a case study is conducted by experts under controlled conditions, and the results are shared with other professionals.
However, a case study has limited applicability because it’s still only one person, so the next step is to conduct a similar study with a larger group of participants.
Animal Studies
Animal studies represent a solid leap in the evidence hierarchy. They’re carefully controlled and conducted by experienced scientists, and provide a way to explore health-related questions more easily and quickly, which can lead to major breakthroughs.
However, animals—whether mice, rats, worms, or pigs—aren’t humans. Yes, animals have more in common with us biologically than you may think, and sometimes what works for them works for us. But often, what applies to a rat or pig doesn’t work in a human.
To make a better scientific argument, you need human studies.
Cross-Sectional Studies
These studies are “observational,” in that they examine a specific thing in a group of people at a given time, and that’s it. For example, a study that examines the prevalence of heart attack in men aged 50-60 in 2021 is cross-sectional, since you’re studying a “cross section” of the population.
These studies are relatively easy and cheap to conduct and can provide a basis for more in-depth study. However, they don’t tell us much about what causes or predicts heart attack, or what happens with these men long-term.
Case-Control Studies
These studies take cross-sectional studies to the next level. Here, you divide the participants into cases (those who’ve had, say, a heart attack) and controls (those who are the same age, race, etc. but haven’t had a heart attack), then compare them on some factor (e.g. family history of heart attack, diet, smoking). For example, did those who had a heart attack have a higher prevalence of smoking or a poorer diet?
The drawback? You can only estimate the odds of having a heart attack based on those factors studied, but you can’t make any claims about what causes heart attack or heart disease. Differences between case and control groups could be due to many factors not included in the study, known as “confounds.”
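The case-control comparison boils down to an odds ratio. Here’s a toy calculation with invented counts (not real data) just to show the arithmetic:

```python
# Toy odds-ratio calculation for a hypothetical case-control study.
# All counts are invented for illustration, not real data.

# 2x2 table: exposure = smoking, outcome = heart attack
cases_exposed = 60       # heart-attack patients who smoked
cases_unexposed = 40     # heart-attack patients who didn't
controls_exposed = 30    # matched controls who smoked
controls_unexposed = 70  # matched controls who didn't

# Odds of exposure among cases vs. among controls
odds_cases = cases_exposed / cases_unexposed        # 1.5
odds_controls = controls_exposed / controls_unexposed

odds_ratio = odds_cases / odds_controls
print(f"Odds ratio: {odds_ratio:.2f}")  # Odds ratio: 3.50
# An odds ratio above 1 means smoking is *associated* with higher odds
# of heart attack in this sample -- association, not causation.
```

Note that the code reports an association only; unmeasured confounds could still explain the difference, which is exactly the limitation described above.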
Cohort Studies
Cohort studies take a group of people and follow them over time. For example, you could take a group of men whose fathers developed heart disease and follow them to see if they develop it as well. These studies will collect data on many other factors that could contribute to disease.
Unlike cross-sectional and case-control studies, cohort studies don’t just look at a fixed period in time, but instead monitor how each participant changes over time. As such, they tell us much more about the factors that cause or contribute to disease because they have before-and-after data that can be used to predict health outcomes such as heart disease.
Cohort studies are a robust form of evidence, but they’re costly and they lack one thing: the ability to clearly establish cause for disease. That’s where RCTs come in.
Randomized Controlled Trials (RCTs)
RCTs are considered the gold standard for scientific evidence. In an RCT, you divide participants into treatment and control groups. The former receive a treatment or drug (e.g. a drug to lower cholesterol) while controls get a placebo.
To eliminate bias, neither the participants nor the researchers know who is assigned to which group; this is known as a double-blind experiment. When the study ends, if the treatment group’s cholesterol has dropped relative to the control group’s since the start of the study, you can conclude with confidence that the change is due to the drug. You can’t do that with any of the other study designs.
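The randomization step itself is simple. This sketch (with hypothetical participant IDs and a made-up trial size) shows random assignment with a coded key that only an unblinded third party would hold:

```python
import random

# Sketch of double-blind random assignment; participant IDs and the
# trial size are hypothetical.
random.seed(42)  # fixed seed so the sketch is reproducible

participants = [f"P{i:03d}" for i in range(1, 101)]  # 100 hypothetical IDs
shuffled = participants[:]
random.shuffle(shuffled)

# Split half-and-half. In a real trial, only an unblinded party (e.g. the
# trial pharmacist) holds this key; everyone else sees identical-looking
# coded kits, so neither participants nor researchers know the groups.
assignment_key = {pid: ("drug" if i < 50 else "placebo")
                  for i, pid in enumerate(shuffled)}

n_drug = sum(1 for arm in assignment_key.values() if arm == "drug")
print(n_drug, len(assignment_key) - n_drug)  # 50 50
```

Randomizing (rather than letting people choose their group) is what spreads unknown confounds evenly across both arms, which is why the RCT can support causal claims.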
For a drug or vaccine to receive FDA approval, researchers must test the efficacy (and safety) of the drug in a series of randomized controlled trials.
So, if RCTs are so great, why doesn’t everybody do them? They’re not always possible, for one thing.
If you want to know if Chemical B causes cancer, you can’t do an RCT with that chemical and see if the cases get more cancer than the controls, for ethical reasons. Or, if you want to know whether growing up poor impacts IQ, you can’t put children in poor environments to see how their IQs turn out compared to kids placed in privileged environs. Instead, you have to study kids already in those environments.
However, there is one type of evidence that surpasses even the RCT.
Meta-Analyses
A meta-analysis takes the results of numerous studies and combines them into one analysis. Many times, the results of one study have limited impact; it’s just one study, after all, and may only have a small sample or a weak result, even if it’s a good study.
Combining studies, however, offers more statistical power and can clarify the overall picture, especially when the contributing studies have conflicting results.
The one drawback: a meta-analysis is only as good as the studies it includes.
Which Evidence is Best?
Returning to the two examples I used at the beginning of this article, which evidence wins the day?
Example #1 (COVID Vaccines and Gene Therapy)
The Medical News Today article included links to actual human studies, which are a much stronger form of evidence than the video with the expert (expert opinion). Even if the expert worked on mRNA vaccine technology, his opinion stands in opposition to that of hundreds of other vaccine researchers, and, most importantly, his claims weren’t backed by good evidence.
MNT wins this round by a mile. Also, anyone who knows how gene therapy works would never claim the COVID vaccines are gene therapy.
Example #2 (Healthiness of Vegetable Oils)
Curious about this topic, I investigated the debate between mainstream medicine (which holds that these oils are healthy) and other experts (who claim they’re harmful), and wrote up my results here.
In a nutshell, I found a plethora of evidence high on the hierarchy indicating they’re safe, while the veg oil haters had mostly anecdotes, expert opinion, and animal studies backing their claims. An even more comprehensive article by The Nutrivore came to the same conclusion.
Mainstream medicine is the clear winner here.
Now you know how to evaluate the evidence people offer to back up their claims. When you do, keep a few things in mind:
- No form of evidence is perfect — all have their strengths and drawbacks.
- Not every health topic lends itself to higher forms of evidence. Again, we can employ RCTs only for certain research questions.
- Lower forms of evidence are better than none at all. In many cases, a new area of study must begin with animal or cross-sectional studies and work its way up.
- Higher forms of evidence cost more and take more time, which is why we see fewer of them.
- Finally, even higher forms of evidence can have significant limitations, even flaws. Just as no form of evidence is perfect, no study is perfect.