You might have seen the following headlines in the news recently.
- “Avoid low-carb diets if you want to live longer, study suggests” (Fox23 News)
- “Low-carb diets could shorten life, study suggests” (BBC)
- “Meat-heavy low-carb diets can ‘shorten lifespan’: study” (DailyMail)
- “Low-carb diet linked to early death, medical study suggests” (USA Today)
If you’ve been implementing, or thinking about implementing, a low carb diet, these headlines may have led you to second guess or even forsake your low carb ways.
If you’re just trying to make sense of all the confusing and conflicting nutrition information out there, these headlines certainly don’t help.
Do low carb diets really shorten lifespan?
Are these headlines legit, or are they poppycock?
Let’s find out.
The first step we’ll take to assess whether these headlines are legit is to get a picture of what went on in the actual study.
“We studied 15 428 adults aged 45–64 years, in four US communities, who completed a dietary questionnaire at enrolment in the Atherosclerosis Risk in Communities (ARIC) study (between 1987 and 1989), and who did not report extreme caloric intake (<600 kcal or >4200 kcal per day for men and <500 kcal or >3600 kcal per day for women). The primary outcome was all-cause mortality.
We investigated the association between the percentage of energy from carbohydrate intake and all-cause mortality, accounting for possible non-linear relationships in this cohort.
We further examined this association, combining ARIC data with data for carbohydrate intake reported from seven multinational prospective studies in a meta-analysis.
Finally, we assessed whether the substitution of animal or plant sources of fat and protein for carbohydrate affected mortality.”
In a nutshell, they analyzed data from a study called ARIC for a relationship between carbohydrate intake and mortality, then combined that data with data from seven other studies in a meta-analysis (a study of studies), and finally examined whether substituting animal or plant sources of fat and protein for carbohydrate affected mortality.
Here’s what the researchers found:
“Both high and low percentages of carbohydrate diets were associated with increased mortality, with minimal risk observed at 50–55% carbohydrate intake.
Low carbohydrate dietary patterns favouring animal-derived protein and fat sources, from sources such as lamb, beef, pork, and chicken, were associated with higher mortality, whereas those that favoured plant-derived protein and fat intake, from sources such as vegetables, nuts, peanut butter, and whole-grain breads, were associated with lower mortality, suggesting that the source of food notably modifies the association between carbohydrate intake and mortality.”
The researchers did not say that low carbohydrate diets shorten lifespan.
The researchers did not even say that low carbohydrate diets “could” or “can” shorten lifespan.
This is something news headlines repeatedly get wrong when reporting new scientific findings.
Just because there’s an association between two variables does not mean one of those things causes the other.
Correlation does not imply causation.
For example, you might see a relationship between children’s shoe sizes and their reading abilities.
Obviously, the size of one’s feet does not cause one to read better and one’s ability to read does not encourage foot growth.
Rather, children with larger feet are often older and have had additional education, which is the real reason that they are better able to read.
The children’s level of education is a confounding variable, or “a variable that distorts the association between two other variables (the exposure and the outcome)”.
Sometimes there aren’t even any confounding variables contributing to a correlation (spurious correlations).
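To see how easily chance alone can produce a convincing correlation, here’s a small simulation on entirely hypothetical data (none of it from any study): generate a couple hundred unrelated random-walk “trends” and go hunting for the best-correlated pair.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical data: 200 unrelated "yearly trends", each a random walk
# over 20 years. None of them has anything to do with any other.
n_series, n_years = 200, 20
walks = rng.normal(size=(n_series, n_years)).cumsum(axis=1)

# Correlate every series against every other and find the strongest pair.
corr = np.corrcoef(walks)        # 200 x 200 matrix of pairwise correlations
np.fill_diagonal(corr, 0.0)      # ignore each series' correlation with itself
best = float(np.abs(corr).max())

print(f"strongest correlation between two unrelated series: {best:.2f}")
```

With enough variables to compare, some pair will correlate strongly by pure chance – which is exactly how those famous “spurious correlation” collections are built.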
The next time you see a news headline saying something causes something else, look for the study it references and get a feel for what the researchers actually concluded (often in the abstract or conclusion).
You’ll likely find that the headline misrepresents the researchers’ findings.
Observational studies in the hierarchy of evidence
Let’s start by taking a closer look at this ARIC study to see just how strong the evidence of this correlation is.
“The Atherosclerosis Risk in Communities (ARIC) study is an ongoing, prospective observational study of cardiovascular risk factors in four US communities (Forsyth County, NC; Jackson, MS; suburbs of Minneapolis, MN; and Washington County, MD), initially consisting of participants aged 45–64 years who were recruited between 1987 and 1989 (Visit 1). Study participants were examined at follow-up visits, with the second visit occurring between 1990 and 1992, the third between 1993 and 1995, the fourth between 1996 and 1998, the fifth between 2011 and 2013, and the sixth between 2016 and 2017.”
The ARIC researchers met with study participants six times over roughly 30 years, collecting various data but not intervening in the participants’ lives in any way (aside from the six meetings).
This is what’s called an observational study, since the participants were observed with no specific intervention and no control of variables.
In the hierarchy of evidence, not all studies are created equal and observational studies have their limitations compared to other study designs.
At the top of the evidence hierarchy are randomized controlled trials (RCTs), studies in which participants are randomly divided into separate groups, exposed to different interventions with as many other variables as possible controlled, and observed for an effect from the intervention.
At the bottom of the evidence hierarchy are anecdotes or opinions based on fundamental theories.
Observational studies such as ARIC fall somewhere in the middle of the hierarchy, although they are considered by many to be “low quality”.
The applicability of nutritional epidemiology
The use of large, population-based observational studies like ARIC to identify health outcomes across populations falls under a type of research known as “epidemiology”.
While epidemiology plays a major role in shaping public health initiatives, it is not without its critics, particularly when it comes to nutrition.
In his recent paper, “The Challenge of Reforming Nutritional Epidemiologic Research”, Dr. John P. Ioannidis of Stanford University pointed out the following:
“Individuals consume thousands of chemicals in millions of possible daily combinations. For instance, there are more than 250 000 different foods and even more potentially edible items, with 300 000 edible plants alone. Seemingly similar foods vary in exact chemical signatures (eg, more than 500 different polyphenols).
Much of the literature silently assumes disease risk is modulated by the most abundant substances; for example, carbohydrates or fats. However, relatively uncommon chemicals within food, circumstantial contaminants, serendipitous toxicants, or components that appear only under specific conditions or food preparation methods (eg, red meat cooking) may be influential.
Risk-conferring nutritional combinations may vary by an individual’s genetic background, metabolic profile, age, or environmental exposures. Disentangling the potential influence on health outcomes of a single dietary component from these other variables is challenging, if not impossible.”
I’m by no means saying that epidemiology is worthless or that we should completely discount observational studies.
However, these kinds of studies must be done properly in order for us to take any meaningful information from them, especially when it comes to making suggestions about mortality.
The food frequency questionnaire
Now would be a good time to discuss how the researchers collected the participants’ dietary data.
“Participants completed an interview that included a 66-item semi-quantitative food frequency questionnaire (FFQ), modified from a 61-item FFQ designed and validated by Willett and colleagues,16 at Visit 1 (1987–89) and Visit 3 (1993–95). Participants reported the frequency with which they consumed particular foods and beverages in nine standard frequency categories (extending from never or less than one time per month, to six or more times per day).”
ARIC participants filled out what are called “food frequency questionnaires” once between 1987-1989 (Visit 1) and again between 1993-1995 (Visit 3).
Food frequency questionnaires are forms participants fill out to report what they eat and how frequently they eat it.
Similar to observational studies, food frequency questionnaires can be useful but have some serious limitations in terms of the strength of the evidence they provide.
In the paper, “Controversy and Debate: Memory based Methods Paper 1: The Fatal Flaws of Food Frequency Questionnaires and other Memory-Based Dietary Assessment Methods”, six reasons are offered for why food frequency questionnaires (FFQs) and other memory-based dietary assessment methods (M-BMs) “are invalid and inadmissible for scientific research and cannot be employed in evidence-based policy making”:
“Herein, we present the empirical evidence, and theoretic and philosophic perspectives that render M-BMs data both fatally flawed and pseudo-scientific.
First, the use of M-BMs is founded upon two inter-related logical fallacies: a category error and reification.
Second, human memory and recall are not valid instruments for scientific data collection.
Third, in standard epidemiologic contexts, the measurement errors associated with self-reported data are non-falsifiable (i.e., pseudo-scientific) because there is no way to ascertain if the reported foods and beverages match the respondent’s actual intake.
Fourth, the assignment of nutrient and energy values to self-reported intake (i.e., the pseudo-quantification of qualitative/anecdotal data) is impermissible and violates the foundational tenets of measurement theory.
Fifth, the proxy-estimates created via pseudo-quantification are physiologically implausible (i.e. meaningless numbers) and have little relation to actual nutrient and energy consumption.
Finally, investigators engendered a fictional discourse on the health effects of dietary sugar, salt, fat and cholesterol when they failed to cite contrary evidence or address decades of research demonstrating the fatal measurement, analytic, and inferential flaws presented herein.”
To get a sense of what participants were asked, here are a few example items of the kind found on these questionnaires:
- During the past year, how often on average did you eat tomatoes?
- Beans, cooked or dried?
- Beef, pork, or lamb as a sandwich or mixed dish, e.g. stew, casserole, lasagna, etc.?
- Cold breakfast cereal?
- Rice or pasta?
What’s not asked in this FFQ are questions about the ingredients in items like “cake”, “cereal”, “stew”, “casserole”, “lasagna”, and items like “rice or pasta” are considered equivalent.
We arguably don’t have many better options for assessing dietary intake, but just because crappy data is available doesn’t mean we should use it.
The adjustments (or lack thereof)
Remember confounding variables?
Well, fortunately, researchers can make statistical adjustments/corrections to their analysis to try to account for these confounding variables.
For example, if you were studying the relationship between shoe size and reading ability, you could make statistical adjustments to account for the influence of education levels to get a feel for if there is actually a relationship between the two.
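A quick simulation on made-up numbers (illustrative only, not real data) shows how such an adjustment works: age drives both shoe size and reading score, producing a strong raw correlation that disappears once each variable is regressed on age and only the leftovers are compared.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical data: age (the confounder) drives BOTH variables.
age = rng.uniform(6, 12, n)                  # years
shoe = 0.8 * age + rng.normal(0, 0.5, n)     # shoe size grows with age
reading = 10 * age + rng.normal(0, 5, n)     # reading score improves with age

# The raw correlation looks impressive...
raw = np.corrcoef(shoe, reading)[0, 1]

# ...but it vanishes once we adjust for age: regress each variable on age
# and correlate the residuals (a partial correlation).
def residuals(y, x):
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

adjusted = np.corrcoef(residuals(shoe, age), residuals(reading, age))[0, 1]

print(f"raw: {raw:.2f}, age-adjusted: {adjusted:.2f}")
```

The catch, of course, is that researchers can only adjust for confounders they thought to measure.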
Let’s take a look at how well the researchers accounted for potential confounding variables in the ARIC study.
“We adjusted the ARIC analyses for demographics (age, sex, self-reported race), energy intake (kcal per day), study centre, education, exercise during leisure activity, income level, cigarette smoking, and diabetes.”
Want to know what was not adjusted for? Factors like alcohol intake, stress, sleep, and overall food quality, for starters.
I’m not sure what data was or was not collected during the ARIC study, but variables such as these could – and likely did, as we’ll discuss shortly – have an effect on the outcomes of this study.
Recall that the ARIC data was collected at six points over a period of roughly thirty years, but that the participants’ eating habits were only recorded twice, over a period of roughly six years.
How did the researchers get thirty years of data out of six years of data?
“We did a time varying sensitivity analysis: between baseline ARIC Visit 1 and Visit 3, carbohydrate intake was calculated on the basis of responses from the baseline FFQ. From Visit 3 onwards, the cumulative average of carbohydrate intake was calculated on the basis of the mean of baseline and Visit 3 FFQ responses.
We did not update carbohydrate exposures of participants that developed heart disease, diabetes, and stroke before Visit 3, to reduce potential confounding from changes in diet that could arise from the diagnosis of these diseases.”
From 1995 onwards, the researchers assumed that the participants’ carbohydrate intake matched the average of Visit 1 and Visit 3, unless they were diagnosed with a disease between Visit 1 and Visit 3.
Not only are we depending on an inaccurate tool (FFQs), but we’re assuming that the participants’ diets did not change at all for roughly twenty years.
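In code, the exposure scheme the authors describe might look something like this (a minimal sketch; the year cutoff and function shape are my own illustration, not the paper’s actual implementation):

```python
def carb_exposure(baseline_pct, visit3_pct, year, diagnosed_before_v3=False):
    """Carbohydrate exposure (% of energy) assigned to a participant in a
    given year, following the paper's time-varying scheme (illustrative).

    - Before Visit 3 (~1995): use the baseline FFQ value.
    - From Visit 3 onward: use the mean of the baseline and Visit 3 values,
      unless the participant was diagnosed with heart disease, diabetes, or
      stroke before Visit 3, in which case baseline is carried forward.
    """
    if year < 1995 or diagnosed_before_v3:
        return baseline_pct
    return (baseline_pct + visit3_pct) / 2

# A participant reporting 40% carbs at baseline and 50% at Visit 3 is
# assumed to eat 45% carbs every year from 1995 until the study's end.
print(carb_exposure(40, 50, 1990))                            # 40
print(carb_exposure(40, 50, 2010))                            # 45.0
print(carb_exposure(40, 50, 2010, diagnosed_before_v3=True))  # 40
```

Two snapshots, averaged once, stand in for decades of actual eating.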
The authors next integrated their data from the ARIC study with the data from other studies in a meta-analysis.
Here’s how the studies were assessed for eligibility in the analysis:
“Briefly, papers were eligible for inclusion if they were a published full-text report, observational study, or randomised controlled trial with a minimum of 1 year follow-up, reporting relative risks (ie, HRs, risk ratios, or odds ratios with CIs), and adjusted for at least three of the following factors: age, sex, obesity, smoking status, diabetes, hypertension, hypercholesterolaemia, history of cardio-vascular disease, and family history of cardiovascular disease.”
The authors identified nine variables that could affect mortality:
- Age
- Sex
- Obesity
- Smoking status
- Diabetes
- Hypertension
- Hypercholesterolemia (high cholesterol)
- History of cardiovascular disease
- Family history of cardiovascular disease
If a study controlled for at least three of these factors, it was good enough to be included in the analysis.
That is, if a study controlled for age, sex, and obesity, but didn’t correct for whether participants smoked or had a history of cardiovascular disease, it was still eligible for analysis.
Similar to the ARIC analysis, factors like alcohol intake and food quality were not considered.
Adjusting for animal- vs. plant-based protein/fat
The authors do attempt to address food quality by assessing the quantity of plant-based or animal-based protein and fat the participants were eating.
“We created animal-based and plant-based scores by dividing participants into deciles for either animal-derived or plant-derived fat and protein, and carbohydrate intake, expressed as a percentage of energy as previously described.”
This is a pretty myopic view of food quality, however, as we’ll discuss shortly.
The authors report that the average caloric intakes for all five levels of carbohydrate intake in the ARIC data were between 1558 kcal/day and 1660 kcal/day.
These values are significantly lower than average caloric intakes reported by several other studies, suggesting that the FFQs were either inadequately designed or inaccurately completed.
The lowest quantile for carbohydrate intake was reported to be 37% carbohydrate content by energy intake.
If we assume that the reported energy intakes are correct, this results in an average carbohydrate intake of roughly 144 grams per day.
This is hardly what most low-carbers would consider “low carb”.
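The arithmetic behind that figure is straightforward, using the standard 4 kcal per gram of carbohydrate:

```python
# Back-of-the-envelope arithmetic for the "low carb" group: 37% of energy
# from carbohydrate at the reported average intakes of 1558-1660 kcal/day.
KCAL_PER_G_CARB = 4  # standard Atwater factor for carbohydrate

def carb_grams(total_kcal, carb_fraction):
    """Grams of carbohydrate implied by a total intake and a carb fraction."""
    return total_kcal * carb_fraction / KCAL_PER_G_CARB

low = carb_grams(1558, 0.37)
high = carb_grams(1660, 0.37)
print(f"{low:.0f}-{high:.0f} g of carbohydrate per day")  # 144-154 g/day
```

For comparison, ketogenic diets are typically defined as well under 50 grams per day.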
Does this distinction detract from the results of the study?
Not necessarily, but it does demonstrate that the folks writing headlines referring to “Super-Low-Carb” diets, or warning “keto lovers” and “keto dieters”, either don’t understand what they’re writing about or are knowingly misleading their readers.
Additionally, since we’re using percentages and not absolute carbohydrate intake, this could mean that some folks in the “low carb” group weren’t actually eating fewer carbohydrates so much as they were eating more of other stuff that’s negatively affecting their health.
This may be speculative, but we’ll discuss next how the results of the ARIC study and the meta-analysis support this notion.
Healthy User Bias
There’s a reason I harped on the adjustment methods used in the ARIC study and the meta-analysis.
“Participants who consumed a relatively low percentage of total energy from carbohydrates (ie, participants in the lowest quantiles) were more likely to be young, male, a self-reported race other than black, college graduates, have high body-mass index, exercise less during leisure time, have high household income, smoke cigarettes, and have diabetes.
Overall, mean consumption of energy from animal fat and protein was higher than from plant fat and protein across all carbohydrate quantiles (table 1). Participants in the lowest carbohydrate quantile had higher average consumption of animal fat and protein and lower average consumption of plant protein and dietary fibre than participants in the other quantiles.”
Now is a good time to discuss “healthy user bias”.
Healthy user bias is the concept that individuals who engage in one behavior that is perceived to be healthy – like following nutrition recommendations, for example – are likely to live healthier lifestyles in general.
Thus, when assessing the effects of these individuals’ nutrition habits through observational studies, it’s difficult, if not impossible, to correct for all of the other lifestyle factors that might affect health outcomes.
The ARIC study suggests this is a valid concern.
Not only were the participants in the lowest carbohydrate group more likely to have had a high BMI, exercise less, smoke, and have diabetes (which were corrected for) but they were also more likely to be eating more animal-based foods and less plant-based foods at a time when such dietary habits were (are) repeatedly discouraged.
I don’t think it’s out of the question to assume that many of those eating more carbohydrates, more plants, and fewer animal foods – behaviors perceived to be “healthy” – were also prioritizing minimally processed food, drinking less alcohol, managing stress, spending time outside, and prioritizing sleep.
Similarly, I don’t think it’s out of the question to assume that those who ate fewer carbohydrates, fewer plants, and more animal foods – behaviors perceived to be “unhealthy” – were likely eating more processed food, drinking more alcohol, not managing stress, spending more time indoors, and slacking on sleep.
This is why I don’t think the “adjustment” for animal vs. plant-based protein and fat provides an adequate account of food quality.
Did the participants die early because they swapped out refined grains, added sugar, and processed vegetable oils for pasture-raised beef, wild caught fish, leafy, colorful vegetables, fruit, and root vegetables?
The data suggest otherwise.
I doubt these folks were eating paleo-style diets, as suggested by the headline…
Not to mention the fact that paleo diets are not defined by their carbohydrate content.
The result left out of the headlines
“High carbohydrate consumption was associated with a significantly higher risk of all-cause mortality compared with moderate carbohydrate consumption.”
While some article headlines acknowledged that both low and high carbohydrate intakes were associated with increased mortality, were there any headlines about high-carbohydrate diets alone?
Low carbohydrate diets were the only ones to be singled out in headlines.
I have my tin-foil-hat-worthy suspicions as to why this is case, but I don’t have much (if any) evidence to support my suspicions, and I’d prefer to keep this blog as legitimate as possible.
The mechanism (or lack thereof)
Finally, I am not aware of any mechanism by which the restriction or elimination of dietary carbohydrate could reduce lifespan.
Carbohydrates are not essential nutrients.
Restricting – or even eliminating – carbohydrates isn’t like giving up an essential vitamin or mineral.
…like vitamin B12, for example…
This is where I think the failure to account for food quality is most unfortunate.
Is the increased rate of mortality in the groups eating fewer carbohydrates a result of carbohydrate deficiency, or is it because they were eating other trash?
The authors acknowledge in their discussion that it likely is not the macronutrient composition of the diet, but rather other nutritional factors, at the root of this association:
“There are several possible explanations for our main findings. Low carbohydrate diets have tended to result in lower intake of vegetables, fruits, and grains and increased intakes of protein from animal sources as observed in the ARIC cohort, which has been associated with higher mortality.
It is likely that different amounts of bioactive dietary components in low carbohydrate versus balanced diets, such as branched-chain amino acids, fatty acids, fibre, phytochemicals, haem iron, and vitamins and minerals are involved.28 Long-term effects of a low carbohydrate diet with typically low plant and increased animal protein and fat consumption have been hypothesised to stimulate inflammatory pathways, biological ageing, and oxidative stress.
On the other end of the spectrum, high carbohydrate diets, which are common in Asian and less economically advantaged nations, tend to be high in refined carbohydrates, such as white rice; these types of diets might reflect poor food quality and confer a chronically high glycaemic load that can lead to negative metabolic consequences.”
None of these nuanced considerations made their way into the news headlines.
What you can do with this information
Do I think this study is totally worthless?
Actually, I kinda do.
I had written that question expecting to follow it up with, “not entirely…”, but I just couldn’t bring myself to do it.
This was an observational study.
The data was based on Food Frequency Questionnaires.
The authors knowingly left several confounding variables unaccounted for.
To be clear, I couldn’t care less how many carbs people eat – some of us do great with a ton of carbs, whereas some of us do great with no carbs at all.
What I do care about, however, are headlines and articles that use scare tactics to discourage us from making changes that might be just what we need to improve our health.
Such headlines serve no purpose other than to get clicks and continue to push the status quo, one-size-fits-all, nutritional advice that’s been failing us for decades.
Stop getting your nutrition advice from the news.
Eat real food.
Find the carbohydrate intake that works best for you.
You’ve got this.