Is Steak Really That Bad? 7 Statistical Flaws in the Red Meat Study That Made Headlines

Duco van Rossem
11 min read · Sep 30, 2024


In June 2019, a group of researchers from Harvard and China published a paper associating red meat consumption with increased mortality in the US.

These headline-grabbing findings triggered a wave of media coverage with alarming titles such as “Extra red meat helpings linked to increased odds of death”:

Some of the headlines this red meat study generated in Newsweek, CNN, The Independent, and The Guardian in 2019.

Articles like these shape dietary choices and policy worldwide. However, the societal influence of the underlying studies often outweighs their statistical robustness.

This post examines a specific 2019 study to highlight some of the statistical pitfalls in observational research used to examine health outcomes associated with red meat consumption.

Observational Studies Are Based on Data, Not Medical Experiments

Before diving into the specifics of the red meat study, note that it is based on data from the Nurses’ Health Study and the Health Professionals Follow-up Study, two long-running investigations into health outcomes that have been tracking participants in the United States for more than 35 years.

These observational studies involve periodic surveys of participants (in this case nurses and health professionals) about their diet, lifestyle, and health outcomes.

Observational studies differ significantly from the gold standard in medical research: the randomized controlled trial. In a randomized controlled trial, participants are randomly assigned to different groups (e.g., one group gets a new medicine, another group gets a placebo), and all other factors are controlled as much as possible.
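
To make the distinction concrete, here is a toy simulation (all numbers invented for illustration) of why randomization matters: a coin-flip assignment balances a confounder such as smoking across groups by construction, while real-world self-selection does not.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# A confounder: whether a participant smokes (invented 30% rate).
smoker = rng.binomial(1, 0.3, size=n)

# Observational "assignment": smokers happen to be more likely exposed.
observational = rng.binomial(1, np.where(smoker == 1, 0.6, 0.3))

# Randomized assignment: a coin flip, independent of smoking.
randomized = rng.binomial(1, 0.5, size=n)

for name, grp in [("observational", observational), ("randomized", randomized)]:
    print(f"{name}: smoking rate in exposed {smoker[grp == 1].mean():.3f}, "
          f"in unexposed {smoker[grp == 0].mean():.3f}")
```

In the observational arm, the exposed group smokes far more than the unexposed group, so any mortality difference mixes the two causes; in the randomized arm, the smoking rates match and the comparison is clean.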

In contrast, this red meat study is an observational study that analyses data from people’s real-world behaviors and outcomes. As we will see, this makes it susceptible to confounding factors and biases that can skew results.

The Red Meat Study in Question

The particular study covered here is:

Association of changes in red meat consumption with total and cause specific mortality among US women and men: two prospective cohort studies.

The aim of the 2019 study was to answer the following question:

Do changes in red meat consumption have an impact on health-related mortality?

However, what the study actually measured was closer to this:

Do nurses and health professionals who self-report eating more of any type of ‘red meat’ have a higher incidence of any health-related deaths over two different time periods?

Understanding what is actually being tested highlights the key limitations of such an observational study and the conclusions that can be made from it.

First, a brief summary of what the study found; then we will explore its statistical pitfalls and why the headlines it generated stand on shaky statistical ground.

In simple terms, the study does the following:

  • It looks at two categories of red meat: ‘unprocessed’ and ‘processed’ red meat. It links an increase or decrease in consumption of both categories with more or less health-related mortality over a subsequent 8-year period.
  • Categorizes ‘processed red meat’ as: bacon, hot dogs, sausage, salami, bologna, and other processed red meat. So this is mainly pork-based processed meat.
  • Categorizes ‘unprocessed red meat’ as: beef, lamb, pork as main dish, hamburger, or as a sandwich or a mixed dish. So basically a filet mignon and a Wendy’s hamburger are in the same category.
  • The study applies a statistical model to control for lifestyle factors (which we will discuss later) to isolate the effect of eating more (or less) of the categories on any health-related mortality risk.
  • For unprocessed red meat: 2 of the 6 relevant models show a barely statistically significant relative risk of 1.09. This means a 9% higher mortality risk over the 8-year period following the increase in red meat consumption (see the sketch after this list for what such a relative risk means in absolute terms). The other 4 of the 6 relevant models are not statistically significant.
  • Processed red meat has a slightly stronger and slightly more statistically significant effect overall than unprocessed red meat: 5 of the 6 relevant models are statistically significant. The study suggests that eating more processed red meat carries a relative risk of 1.13, or a 13% higher mortality risk.
  • The study pools the processed and unprocessed categories into one ‘red meat’ category and finds a relative risk of 1.10 for increased red meat consumption (more on that later).
  • Based on this pooling of unprocessed+processed categories, the study goes on to conclude that “Increases in red meat consumption, especially processed red meat, were associated with higher overall mortality rates”.
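
To put such relative risks in absolute terms, here is a minimal sketch in Python. The 5% baseline 8-year mortality is an invented, illustrative figure, not a number taken from the study:

```python
# Minimal sketch: converting a relative risk into an absolute risk
# difference. ASSUMPTION: the 5% baseline 8-year mortality is purely
# illustrative and not a figure from the study.

def absolute_risk_increase(baseline_risk: float, relative_risk: float) -> float:
    """Absolute increase in risk implied by a relative risk."""
    return baseline_risk * (relative_risk - 1.0)

baseline = 0.05  # hypothetical 8-year mortality without the exposure

for label, rr in [("unprocessed red meat", 1.09),
                  ("processed red meat", 1.13),
                  ("pooled red meat", 1.10)]:
    ari = absolute_risk_increase(baseline, rr)
    print(f"{label}: RR {rr:.2f} -> {baseline:.0%} baseline becomes "
          f"{baseline + ari:.2%} (+{ari:.2%})")
```

Even taken at face value, a relative risk of 1.09 on a 5% baseline amounts to an absolute difference of less than half a percentage point.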

The Statistical Pitfalls of this Red Meat Study

Pitfall Number 1: The Relative Risk Is Very Small

Let’s start with the magnitude of the relative risk being discussed here: the relative risks reported in the red meat study range from roughly 1.09 to 1.15, with many findings statistically insignificant (close to 1.0). To put this in perspective, let’s compare them to other well-known risk factors:

+-----------------------------------+------------------------+---------------+
| Risk Factor | Health Outcome | Relative Risk |
+-----------------------------------+------------------------+---------------+
| Smoking | Lung Cancer | 20-30 |
| Smoking | Cardiovascular Disease | 2-4 |
| Obesity (BMI > 40) | All-cause Mortality | 2.5-3.0 |
| Heavy Alcohol Consumption | Liver Cirrhosis | 9-10 |
| Physical Inactivity | Cardiovascular Disease | 1.5-2.0 |
| Red Meat (per additional serving) | All-cause Mortality | 1.15 |
+-----------------------------------+------------------------+---------------+
These are approximate relative risk values just to get a sense of the magnitude.

The relative risk associated with red meat consumption in this study is substantially smaller than many other well-established risk factors.

A relative risk of 20 for smoking leading to lung cancer is so incredibly large that one can permit a less precise model specification; a relative risk of 1.15 affords no such margin for error.

This raises a key point: with such a small relative risk, it becomes more important to consider whether unmeasured factors could be responsible for the observed association.
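
One standard way to quantify this fragility is the E-value of VanderWeele and Ding (2017): the minimum strength of association an unmeasured confounder would need with both the exposure and the outcome to fully explain away an observed relative risk. A quick sketch:

```python
import math

def e_value(rr: float) -> float:
    """E-value (VanderWeele & Ding, 2017) for an observed RR > 1: the
    minimum association, on the risk-ratio scale, that an unmeasured
    confounder must have with BOTH exposure and outcome to fully
    explain away the observed relative risk."""
    return rr + math.sqrt(rr * (rr - 1.0))

print(f"RR 1.15 -> E-value {e_value(1.15):.2f}")  # ~1.57
print(f"RR 20   -> E-value {e_value(20.0):.1f}")  # ~39.5
```

An unmeasured lifestyle factor only modestly associated (around 1.6) with both meat intake and mortality could fully account for the red meat finding; no plausible confounder could explain away smoking’s relative risk of 20.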

Pitfall Number 2: Treating All Unprocessed Meat as the Same

This study lumps beef, pork, lamb, and hamburgers into the same ‘unprocessed meat’ category.

This means a hamburger at McDonald’s is grouped in the same category as an organic filet mignon. It also means that pork and beef, which have very different nutritional profiles, are grouped together into one category (beef is about 3 times higher in vitamin B12 than pork).

How the ‘unprocessed’ red meat category is defined in the paper

It is quite possible that much of the negative effect found in the study’s results for unprocessed meat consumption could be linked to the impact of eating fast-food hamburgers, which are far from truly unprocessed.
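
Here is a toy simulation of why this matters (all effect sizes invented): if only the fast-food component of a mixed category carries risk, the category as a whole still shows an association, implicating the harmless component by proxy.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Toy world: the 'unprocessed red meat' category mixes two foods, and
# ONLY the fast-food burgers carry any risk (all numbers invented).
burgers = rng.poisson(2, size=n)  # servings/week
steak = rng.poisson(2, size=n)
risk_score = 0.10 * burgers + rng.normal(size=n)

category_total = burgers + steak
print(f"corr(category total, risk): {np.corrcoef(category_total, risk_score)[0, 1]:+.3f}")
print(f"corr(steak alone, risk):    {np.corrcoef(steak, risk_score)[0, 1]:+.3f}")
```

The pooled category shows a clear association even though steak contributes nothing in this toy world, so a category-level finding says little about any single food inside it.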

By the way, some news articles depicted a steak as if it were the main object of the study. This is misleading: it is mainly processed red meat that shows the more negative association with mortality (insofar as that holds statistically).

Pitfall Number 3: Correlation vs Causation

Let’s assume that the study’s findings on eating processed red meat are completely sound and that eating this kind of food leads to slightly higher mortality. The next question would be: what exactly about processed red meat makes it unhealthy?

Imagine we create a category ‘Processed Potato’ (e.g. french fries) and link it to worse health outcomes. Does that mean the potato is bad, or is it the manner in which it is processed that is unhealthy?

Most would agree that industrial food processing degrades the nutritional quality of food — regardless of the quality of the food being used in that process.

In the case of meat, it could be the other ingredients, such as additives, that make it unhealthy. One could speculate endlessly, but the point is that just because a study statistically establishes a link, there could still be many causal mechanisms at work.

Biologically or nutritionally speaking, it could be the additives or even the cooking methods, and not the meat itself, that are the culprit.

Pitfall Number 4: Omitted Variable Bias (I): Healthy User Bias Not Captured by the Model

The participants who increased their meat intake went directly against what was widely recommended at the time. This is especially notable since the study is based on nurses and health professionals, who should be aware of general health advice. It suggests that people who increased their meat intake were also more likely not to follow other health advice.

For example, the study shows that people who increased their red meat intake were also more likely to smoke, relative to the people who did not increase their red meat intake.

The paper’s model does control for lifestyle choices where it can; however, not all bad health decisions can be tracked or controlled for. This means that increased meat intake could simply be correlated with an increase in bad health decisions, and the increased mortality captures that effect rather than the effect of eating more meat itself.
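
A minimal simulation of this mechanism, with entirely made-up effect sizes: an unobserved ‘carelessness about health’ trait drives both meat intake and mortality, and a model that cannot control for it attributes the risk to meat.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 100_000

# Unobserved trait: general carelessness about health (also drives
# smoking, inactivity, etc.). All effect sizes are invented.
careless = rng.normal(size=n)

# Careless people eat (and report) more red meat.
red_meat = 0.5 * careless + rng.normal(size=n)

# Mortality depends on carelessness, NOT on red meat itself.
p_death = 1.0 / (1.0 + np.exp(-(-3.0 + 0.8 * careless)))
died = rng.binomial(1, p_death)

# The analyst's model cannot include the unobserved trait:
fit = sm.Logit(died, sm.add_constant(red_meat)).fit(disp=0)
print(f"odds ratio for red meat: {np.exp(fit.params[1]):.2f}")  # > 1, spuriously
```

Red meat has zero causal effect in this toy world, yet the fitted odds ratio comes out well above 1 because it proxies for the unmeasured trait.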

Pitfall Number 5: Omitted Variable Bias (II): Why Doesn’t the Study Control for Other Food Groups?

The most important part of this study is what it does and does not account for.

The study accounts for consumption of these food groups: vegetables, fruits, whole grains, and sugar-sweetened beverages.

Critically, several important food categories are not included: ‘snack foods and sweets’, ‘refined grains’, and ‘desserts’. These are all valid food groups present in the underlying data, yet they are not controlled for in the analysis.

So if the study were to control for these omitted ‘junk food’ categories, would the results still be significant? It’s possible that the effects attributed to processed/unprocessed red meat consumption are actually capturing the effects of other sweets and snack foods — because they are correlated. In other words, people who frequently eat hamburgers at fast food restaurants are also more likely to consume sweets and snack foods.

Since the study does not control for sweets and snacks, it’s possible that the results actually reflect the effect of consuming sugar-rich food items rather than red meat itself.
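
The following sketch (illustrative numbers only) shows the mechanics: when a correlated ‘junk food’ variable is omitted from the regression, red meat absorbs its effect; once it is included, the red meat coefficient collapses toward zero.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 50_000

# Correlated lifestyle behaviors (illustrative numbers only).
junk_food = rng.normal(size=n)
red_meat = 0.6 * junk_food + rng.normal(size=n)

# In this toy world, only junk food drives the (linear) risk score.
risk = 0.5 * junk_food + rng.normal(size=n)

# Model 1 omits junk food; model 2 controls for it.
m1 = sm.OLS(risk, sm.add_constant(red_meat)).fit()
m2 = sm.OLS(risk, sm.add_constant(np.column_stack([red_meat, junk_food]))).fit()

print(f"red meat coefficient, junk food omitted:    {m1.params[1]:+.3f}")
print(f"red meat coefficient, junk food controlled: {m2.params[1]:+.3f}")  # ~0
```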

Pitfall Number 6: Self-Reported Data and the Hawthorne Effect

All the data in this study is based on self-reported dietary estimates collected every 4 years. Food Frequency Questionnaires (FFQs) are known to be inaccurate; that is, there is a gap between what people actually eat and what they report eating. Typically, the correlations between reported food intake and actual food intake are around 0.50–0.60.

Generally, with a large enough dataset, it’s possible to distinguish the ‘signal’ from the ‘noise.’ This means that even though there might be errors in the self-reported types and amounts of food consumed, with sufficient data, statistically significant links could still be established.
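
A quick sketch of that claim, assuming the reporting noise is purely random (classical measurement error) and an invented true effect: the estimate is attenuated, but with enough participants it is still clearly detectable.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 100_000

# Invented small true effect of intake on a health score.
true_intake = rng.normal(size=n)
outcome = 0.10 * true_intake + rng.normal(size=n)

# Self-report correlating ~0.55 with true intake (purely random noise).
reported = 0.55 * true_intake + np.sqrt(1 - 0.55**2) * rng.normal(size=n)

fit = sm.OLS(outcome, sm.add_constant(reported)).fit()
print(f"estimated slope {fit.params[1]:.3f} vs true 0.10, p = {fit.pvalues[1]:.1e}")
```

Random noise shrinks the estimated slope (to roughly 0.055 here), yet with 100,000 participants it remains highly significant; random error alone is not the real problem.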

However, this breaks down if there is structural bias in the self-reported surveys. Consider this scenario: nurses have been conditioned to believe that red meat (or even just hamburgers) is unhealthy. If they’ve become less healthy over the course of the study, they might be more likely to self-report higher intake of foods they assume to be unhealthy, in an attempt to make sense of their health outcomes. It’s as if they’re thinking, ‘I’ve gotten unhealthier, and yeah, I guess I have been eating a lot more hamburgers from Wendy’s.’

Similarly, nurses who have taken steps to reduce their meat intake, likely for health reasons, can also be expected to adopt other health-improving behaviors. They might think, “I believe eating a lot of hamburgers is bad, while yoga and taking hikes in nature are good for health.” However, ‘yoga’ and ‘hikes in nature’ are not lifestyle behaviors that are typically controlled for in these studies. Yet, they’re likely inversely correlated with the frequency of fast-food burger consumption at places like Wendy’s.

In other words, the self-reported red meat intake could be a proxy indicator for overall health consciousness, rather than the primary factor affecting health outcomes.
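
To see how such structural bias can manufacture a correlation out of nothing, here is a toy simulation (invented parameters) in which true intake is unrelated to health decline, but people whose health declined over-report their intake:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000

true_intake = rng.normal(size=n)      # what was actually eaten
health_decline = rng.normal(size=n)   # unrelated to true intake

# Structural bias: people whose health declined over-report foods
# they believe are unhealthy (parameters invented for illustration).
reported = true_intake + 0.3 * health_decline + rng.normal(scale=0.8, size=n)

print(f"corr(true intake, decline):     {np.corrcoef(true_intake, health_decline)[0, 1]:+.3f}")
print(f"corr(reported intake, decline): {np.corrcoef(reported, health_decline)[0, 1]:+.3f}")
```

A spurious correlation of about 0.23 appears between reported intake and declining health even though actual intake is unrelated, and no sample size can wash it out.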

Pitfall Number 7: Pooling and P-Hacking

Many of the models run on unprocessed red meat yield non-significant results. However, the study combines processed and unprocessed red meat into a single ‘red meat’ category. This allows the researchers to draw conclusions about ‘red meat’ in general, even though the effect seems to be primarily driven by the processed meat category.
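
The arithmetic of pooling is easy to illustrate. Below is a generic fixed-effect, inverse-variance pooling sketch with hypothetical standard errors (the study’s actual pooling method and uncertainties may differ): a category whose confidence interval crosses 1.0, combined with a significant one, yields a significant pooled ‘red meat’ effect.

```python
import math

# Hypothetical log-RR estimates with invented standard errors, loosely
# echoing the study's headline numbers (illustration only).
estimates = {
    "unprocessed red meat": (math.log(1.09), 0.045),  # CI crosses 1.0
    "processed red meat":   (math.log(1.13), 0.045),  # significant
}

# Fixed-effect inverse-variance pooling.
weights = {k: 1.0 / se**2 for k, (_, se) in estimates.items()}
pooled = sum(weights[k] * b for k, (b, _) in estimates.items()) / sum(weights.values())
pooled_se = math.sqrt(1.0 / sum(weights.values()))

def show(label, b, se):
    lo, hi = math.exp(b - 1.96 * se), math.exp(b + 1.96 * se)
    print(f"{label}: RR {math.exp(b):.2f} (95% CI {lo:.2f}-{hi:.2f})")

for k, (b, se) in estimates.items():
    show(k, b, se)
show("pooled 'red meat'", pooled, pooled_se)
```

With these invented uncertainties, the non-significant unprocessed category rides along with the processed one, and the pooled ‘red meat’ interval no longer crosses 1.0.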

How steak was unfairly implicated in health risks by news coverage of this study:

  1. The study categorizes steaks in an ‘unprocessed red meat’ group along with pork and hamburgers.
  2. The study combines the unprocessed red meat category with the more statistically significant processed red meat category (mainly pork-based sausage products).
  3. The paper reports a slightly higher mortality risk for the combined ‘red meat’ category, based on questionable statistical grounds.
  4. Newsweek publishes an article about ‘red meat’ with a picture of a steak, stating that increased red meat consumption could shorten lives.

The study’s conclusions depend on how the statistical model is set up. There are numerous ways to configure such a model — researchers can choose different food categories, group data in various ways, or combine different factors. This flexibility in model design is known as ‘multiplicity’ in statistics.

With so many possible configurations, potentially thousands or even millions of different model versions could be created. This raises an important question: How many of these different model versions would reach the same conclusions about red meat’s health effects?

Ideally, to ensure the reliability of their findings, researchers should perform what’s called a ‘sensitivity analysis.’ This involves testing how their results change when they alter various aspects of their model. Such an analysis would help determine if the conclusions about red meat are consistent across different model specifications or if they’re sensitive to particular choices in how the model was set up.
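
Here is a sketch of what such a sensitivity (or ‘specification curve’) analysis can look like, run on synthetic data in which the exposure has no true effect but shares lifestyle correlates with the outcome: many of the 64 possible covariate subsets still produce a ‘significant’ exposure effect.

```python
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 20_000

# Six correlated lifestyle covariates; the exposure has NO true effect
# on the outcome, but shares covariates with it (all numbers invented).
covs = rng.normal(size=(n, 6))
exposure = covs @ np.array([0.3, 0.2, 0.0, 0.1, 0.0, 0.2]) + rng.normal(size=n)
outcome = covs @ np.array([0.2, 0.0, 0.3, 0.0, 0.1, 0.2]) + rng.normal(size=n)

# Try every subset of covariates as the "adjustment set".
specs = [s for k in range(7) for s in itertools.combinations(range(6), k)]
hits = 0
for spec in specs:
    X = sm.add_constant(np.column_stack([exposure] + [covs[:, j] for j in spec]))
    hits += sm.OLS(outcome, X).fit().pvalues[1] < 0.05

print(f"{hits} of {len(specs)} specifications call the exposure 'significant'")
```

A reader should want to know where on this curve the published specification sits, and whether the conclusion survives the alternatives.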

Bonus Pitfall: Data Gatekeeping (No Access to the Data)

Unfortunately, it is impossible for just anyone to access this data. Researchers must go through a detailed request procedure to access the data and be able to publish anything about it.

I would like to examine the models applied in these papers myself: How was the data cleaned? Which statistical assumptions were made to construct the p-values? How sensitive are the results to different model specifications?

This data is important: it has clearly affected food decisions and health worldwide, so open-source it. Even heavily anonymized or synthesized versions of the data would be valuable for the sake of research transparency.

Final Commentary

Observational studies in general can be very helpful, for instance in establishing that exercise is good for you and that smoking is harmful. But the manner in which red meat studies have been set up seems too quick to point fingers.

Even if the study’s statistical basis were 100% bona fide, its conclusion is an overstatement of what was found.

Indeed, I wonder to what extent academic papers should present such ‘digestible’ conclusions on their first page. The claim of ‘higher overall mortality rates’ is not presented within the context of the relatively low hazard ratio. It is overambitious with its conclusions at best, ideology pushing at worst.

When the statistical margins are so thin, it becomes even more important to grill the statistical models used to arrive at the results — so as not to serve up half-baked conclusions.

Written by Duco van Rossem

Statistics and Data Science. Examining the nuances in analysing and gathering conclusions from real world data. ducovanrossem.com