Wits & Weights | Fat Loss, Nutrition, & Strength Training for Lifters

How Can We Trust Hypertrophy and Nutrition Science? (Eric Helms) | Ep 391

Dr. Eric Helms Episode 391

Build muscle and lose fat with evidence-based fitness. Free custom plan when you join Physique University (code: FREEPLAN): witsandweights.com/physique

How can you tell when science is solid and when it’s just being sold to you?

Dr. Eric Helms returns for his third appearance to unpack how we interpret fitness research, why “evidence-based” doesn’t always mean “accurate,” and what it really takes to think critically about the information you consume.

We break down the philosophy of knowledge and why understanding how we know things leads to better results. Eric and I dig into skepticism vs. cynicism, spotting red flags in “sciencey” claims, and balancing real-world experience with research. You’ll also learn a simple framework to stay curious without getting misled.

Today, you’ll learn all about:

2:25 – Setting the stage for scientific thinking
10:50 – Why critical thinking beats blind belief
15:07 – The meaning of epistemology
25:01 – How empiricism changed modern science
34:52 – What black swans teach us about truth
48:27 – Cynicism vs. healthy skepticism
59:50 – Making sense of the hierarchy of evidence
1:12:56 – Turning data into practical results
1:28:50 – Where to find credible fitness research

Episode resources:


Support the show


📱 Get Fitness Lab - Philip’s science-based AI app for fat loss, muscle building, and strength training for people over 40. It adapts to your nutrition, recovery, and training to improve body composition without guesswork.

🎓 Try Physique University - Evidence-based nutrition coaching and strength training to help you lose fat, build muscle, and master your metabolism with support and accountability (free custom nutrition plan with code FREEPLAN).

👥 Join our Facebook community - Free fat loss, muscle building, and body recomposition strategies for adults over 40 who want practical, science-backed fitness guidance.

👋 Ask a question or find Philip Pape on Instagram

Philip Pape:

If you've been following evidence-based fitness, you probably feel confident about what works. You trust the research on protein, progressive overload, and calorie deficits. But what if that trust is at least partially misplaced? What if the very foundation of how we evaluate fitness science has important blind spots we often fail to acknowledge? My guest today, Dr. Eric Helms, reveals why our relationship with research teeters between strong and fragile, how the studies that guide us have limitations that could mislead us if we're unaware, and why the difference between knowledge and understanding determines whether you actually get results. You'll also discover why seemingly obvious truths often crumble under scrutiny, how to navigate the gap between what studies suggest and what works for you personally, and a framework for evaluating fitness knowledge that will better inform your choices in the future. Welcome to Wits and Weights, the show that helps you build a strong, healthy physique using evidence, engineering, and efficiency. I'm your host, Philip Pape, and today I'm joined by the magnanimous Dr. Eric Helms for his third appearance on the show. Eric is a lot of things, so I'm going to narrow it down to just five: a WNBF Pro Natural Bodybuilder, a senior research fellow at AUT, co-founder of 3D Muscle Journey, co-author of the Muscle and Strength Pyramids, and part of the MASS Research Review team. And I have to say, what I personally appreciate about this gentleman is two things. The first is his ability to bridge research and practice so that it's understandable, but in a way that also respects the nuance and, I'll say, the self-efficacy or self-determination to personalize the information. And second, he's a kind man. He is generous as a human being. He gives time to platforms like ours, but he also nudges, educates, and role-models the fitness industry into something more positive and welcoming. And I think we need more of that. So Eric is the ultimate trust builder. What better man to help us answer the questions: How can we trust what we hear, see, and read about fitness research? What are the hidden assumptions and blind spots in evidence-based practice? And why is understanding the philosophy of knowledge itself so important to calling ourselves science-based? That's right, folks. Today it's all about epistemology. Eric, welcome back to the show.

Dr. Eric Helms:

That was an awesome introduction. I am uh I just want to sit and listen to you, but apparently I have to do something on this show.

Philip Pape:

You have to do something, you have to teach us something. And hopefully I don't trip over my own feet too much when it comes to philosophy. And I mean, I guess it starts with understanding what the heck we're talking about. Like, what is epistemology? Is this something that you think about in your sleep? Why should the average, albeit highly intelligent, listener care in the context of hashtag science?

Dr. Eric Helms:

And are all of you just things I dream about in my sleep? Is this the reality I exist in, or is this a simulation? No, it's a great question. And, you know, despite the fact that technically anyone with a PhD has a doctorate in philosophy, we're so far away from that reality in modern science, where we have philosophers with specific degrees in philosophy who are taught this, and then we have an extreme amount of variability in how much exposure scientists in different fields get to epistemology. So, do I think about this in my sleep? Only recently, I think, is the answer to that question. And I would say that sports science and exercise science are relatively young fields, and even medical science, which isn't so young, has probably focused more on methods than methodology in a lot of the spaces that are relevant to us, those of us who are interested in, you know, listening to people with PhDs, MDs, RDs, CSCSs, or whatever in the fitness space, health space, nutrition space. Think about where we operate. You've got your kind of top-tier science communicator, and by top tier I mean purely from an exposure perspective. You have your Hubermans, everything on down to us little guys, you know, trying to communicate to very niche audiences, or even just on a one-on-one basis. Someone who's been in the game a while as a fitness professional might have gone back to school to get their bachelor's, opened their own gym, and now they're at the Thanksgiving dinner, and Uncle Tom says something about, hey, tryptophan makes you sleepy, and you go, hey, let's talk about that. So I think this is something that pervades our life, and it's a good problem to have right now, because big science is sexy. So there's an increasing interest in science and in using science for fitness. At the same time, there's an increasing skepticism. So we have multiple competing things. One is always bad: an anti-intellectual, anti-science movement. There are only slight slivers of a silver lining there, and that's me being very generous. Maybe it enhances skepticism and makes people want to think for themselves. But I think cynicism is a very different thing than skepticism. Another thing that's maybe reflective of a good thing, but is purely bad, is that you can use science to get very, very popular. And unfortunately, that means that when you approach the pursuit of knowledge from a bottom-line, capitalist, marketing angle, let me sell you something, let me get something out of this, let me extract something from everyone else, you really need to have some safeguards in place so that it's not a misuse of science to take that wonder and interest people have and then just pull the old rug out from underneath them and have them buy your book or follow some silly protocol. And that interacts with social media and what it rewards, which is controversy, the double take, the wait-what, doctors-hate-him kind of messaging, which often is inherently unscientific or leads you there. And I say there are some safeguards, and I say that with a sense of humility. I make money off the fitness industry: I sell books, I sell a research review, I sell coaching. I think the guardrails that are important, so you can assess the different people you're interacting with, come down to: what exactly are they selling? Is it Dr. Carnivore Tom, who has married himself and his image and his ego and his wallet to a specific protocol?
Or is it someone who is selling science communication? That makes it a little easier for me to operate in a space where my inherent biases and my need to pay my mortgage don't come at the cost of manipulating my audience. I need to get your attention, I want you to get your information from me, but ultimately what I'm selling you is: does this work? Does this help you be a better practitioner? Does this help you achieve your goals? And does this keep you informed? So, you know, we don't balk at paying universities tuition fees for going to their classes. We are still operating in a space where there are many conflicts of interest, but at the very least, I think the interest in science, and there being a lot of money in it, is potentially a good thing, though also something that can be manipulated. And the final thing, the third thing I said that's just good, is hey, people are interested in science, which is fantastic. It's a good time to be alive, but it is also a time where folks like myself are no longer operating as the people with a water bottle in the desert, giving fluids to the dehydrated because there's so little relevant science or access to it, the way it was in, say, 2007, '08, or '09, when I first got into this space. In 2025, we are all drowning, and we're not sure whether we're being nourished by the hydration of accurate information or drowned in a sea of misinformation and sometimes disinformation. So that's my little preamble. But to understand that: the traditional methods that science communicators have used in our space to combat misinformation, disinformation, and pseudoscience are very short-term, whack-a-mole strategies. Basically, every time something pops up, I tell you, no, that's not right. It is the "um, actually" warrior. And that's tough, because you can get blocked, you can just get shouted down, and it's a constant job. It's still an important one, and you do have to do some of it, but to some degree, I think there is value in exactly what we're gonna do on this podcast: teaching people how to learn. You've got to teach them how to learn so that they can go out into that wide world and figure out: how do I discern the difference between the pseudoscientists and the scientists? How do I fact-check this? And how do I give myself a more robust BS detector? So that when they hear something from Eric Helms and then come across someone who's slightly more confident, with slightly bigger biceps, and I say slightly because it's hard to imagine, maybe a second PhD or a more legitimate PhD or an MD, a double doctorate, they don't just go, well, I'm gonna take this appeal to authority more. And also, he did say something about a PubMed ID; I'm not gonna go ahead and read that, so let's just go with it. And I think sometimes people in our space, and I'm almost done with this preamble, I swear to God, Philip, we throw up our hands. We go: listen, if you're not gonna read the full text, if you're just gonna read the abstract, then I can't help you. I can give you the keys, but if someone else comes by and you're not willing to read the full text, then you're not evidence-based.
And what I'm gonna tell you is that with an understanding of just the very basics of the philosophy of science, which is about all that I have anyway, like I said, because I wasn't properly trained as a philosopher even though I have a PhD, you can identify some red flags, and you can identify what should be done when people are constructing knowledge and logical inferences and using research in an appropriate way, to where, almost in an interdisciplinary manner, you can gauge who is maybe not necessarily a bad actor, but just a misinformed actor. Because, like I said, there are plenty of actual scientists who are not well informed on this, who make errors and fall victim to their own biases and then go on to inadvertently pass on misinformation. But there are also a lot of snake oil salesmen and women out there who are actively giving you disinformation because it will help their bottom line, either through ad revenue or by directly selling you a protocol or a product.

Philip Pape:

You know, as a parent of two daughters who know way more about logical fallacies than I ever learned, because my wife does a great job, they would have identified probably eight of them in the examples you gave of the pitfalls, which I think we're going to learn more about today, of what to look out for. Because we're not trying to transform the listener into a double-PhD researcher, right? That's kind of what you have in mind. And there's a sense of humility and self-efficacy here, self-awareness, just like in the fitness space, where you come to this thinking there's a protocol and method that everybody follows, and then you realize: no, it depends, it depends, it depends. And at the end of the day, it comes down to, you know, me and my ability to evaluate what's coming at me and make sense of it. I think of, you know, when you said methods versus methodology, and you talked about the relative at the dinner table having the chat about science. I feel like the more humility we build in ourselves, the more we realize we know less and less. And that's a good thing, because then it opens us up to more and more skepticism, to the point where I almost want to, you know, hide in a hole and not answer anybody's question for fear of saying the wrong thing. So, the way I structured this episode, and we can go totally off track, but I was thinking we could tackle areas of epistemology and then kind of go off of that. But you're welcome to take it wherever you want, Eric. So the first one, which I think is maybe simpler to start with, is depth, epistemic depth, which is knowledge versus understanding. And even I have a hard time making sense of this one. It sounds simple, deceptively simple, I guess. But I think of it as: there are facts, and I guess a priori facts that we say are true, and that's a different part of epistemology. And then there's how we make sense of those, I guess, in context, but maybe I've got that wrong. What do we mean when we say knowledge versus understanding?

Dr. Eric Helms:

Yeah, if I may reframe or redirect, I think perhaps a better way to start is this: first, I think some people are going, well, what the hell is epistemology, right? So we probably want to get to a place where the listener has agreed with us on a certain view of the world. Because it's very difficult, and I don't think it's productive, to talk about all the ways of potentially viewing how the world could work or how we can get knowledge, because then we never really help the listener. I think they would just want to hide in that hole even deeper. But maybe we can give a little bit of background on what ontology and epistemology are, just so they know what we're talking about. And then we get to: hey, in the hard sciences, which are not necessarily superior to something like sociology, but in the world we live in as fitness professionals who are interested in physiology, behavior, and human-based research, where we're actually testing hypotheses and looking at outcomes to infer something we do in practice, which is rooted in empiricism, I think we have to understand what empiricism is, where it came from, why we use it, and how to identify when people are making claims about what you should do, claims which stem naturally from empiricism, but they're not actually empiricists, and how that can essentially operate as an intentional or unintentional bait and switch. Are you good with that, Philip?

Philip Pape:

Yeah, I'm good with that. And empiricism, so we're going to be talking about, for example, objectivism versus subjectivism, or what we think of as truth.

Dr. Eric Helms:

Yeah, I think we probably want to get to: which ontological commitment do we have, and what epistemological commitment comes from that? And then how should that look when we see it in terms of, you know what, this is the program you should follow, or, you know what, this is what you do. We can put that into some specific boxes; we'll simplify it. But I think there is a natural flow here if we want to talk about the end user, whether that end user is a personal trainer who wants to read research and then help Tom, or whether it's Tom. I think both can benefit from that structure, if you're okay with me kind of leading in that direction.

Philip Pape:

Please lead it because then uh I'll pull the threads from there.

Dr. Eric Helms:

All right. So, first off, I think epistemology follows naturally from ontology. And at a very high level, ontology is simply the question of: what is there to know? This is our basic understanding of how we think about the nature of reality. So you could imagine that two different people who have a different view of the nature of reality could fundamentally disagree about your ability to even know things about it, right? So I think we have to start with an acceptance that we believe in a formation of reality where we can know facts, like you were talking about. Because if we don't start there, we get to this point of, well, how do you even know, like I was joking about: are you all just something I'm dreaming?

Philip Pape:

The Matrix. Yep.

Dr. Eric Helms:

Exactly. So if we start with that, then we go: okay, once we have agreed upon a given nature of reality, that there are things knowable within it, and that they can be at least to some degree objective, then how do we access that? That is where we get to epistemology. So epistemology is: what can we understand, and how can we understand it, given the reality we've agreed is what's going on here? There are several epistemological stances you can adopt, and there are way more than several, but I think it's useful to put them in somewhat constrained boxes. There is essentially objectivism and subjectivism, to some degree. That's a decent place to start. And what we're basically asking there is: okay, is there an objective reality, or is it not objective? By an objective reality, what I mean is that if you had a god-brained computer that could quantify every single atom and molecule in the universe, its current direction and position, and know everything about it, which of course is impossible because it would take more energy than there is in the universe to measure it, we could therefore predict the next things that are going to happen. And if you manipulate one thing, something else happens. It's essentially a belief in cause and effect, right? If someone is not on board with that, carry on and good luck to you. You know, that's also the premise of the Foundation series, right? Great books if you want to dive in, and a not-quite-as-great show, but still good. Anyway, if we start there, then we go: okay, what kinds of traditions can access that? For example, three big boxes: positivism, constructivism, and critical realism. You might have heard of those. Positivism is kind of your pretty hardline objectivist approach, saying the world consists of real things and we can gain objective knowledge about it without a whole lot of caveats. A different view is constructivism, which says everything is constructed in our mind. There may or may not be an objective reality, and even if there were, we couldn't access it. So constructivists don't necessarily deny the existence of an external world, but they are highly skeptical of the degree to which we can access it without influencing it, and, you know, your perspective of reality is therefore inherently different than mine. And when you think about annoying debates between philosophers that go nowhere, if you're a pragmatic person, it's generally because someone at the table is a constructivist. I'm not saying it's a bad thing, I'm just saying it is what it is. And if you're someone who's like, so what do I do in the gym? It's like, well, is the gym even real? It's like, okay, stop it. And I'm being a little critical; obviously you're getting a hint at what my stance is, but I am not simply a positivist. I am probably what you would describe, and I think honestly most people would fall under, the umbrella of critical realism, which essentially says there is an objective reality out there, and it can be accessed, but unfortunately it's accessed through these flawed meat sacks we operate through, called the human existence.
So it is framed through our senses and the devices we use, and then it's interpreted, and it's influenced by bias. And it is what it is; everybody's kind of the blind men all touching an elephant, but maybe with the right approach, we can do some things that are useful. And I think you can take a pragmatic or even utilitarian approach and say, okay, if we care about outcomes, are we still in a really rough spot where there are only a couple hundred thousand of us around and we're an ice age away from joining the extinction list of the many species that have tried to make it on Earth and are not here? Or have we succeeded as a species? There are debates around that as well, as to what defines success. But we have, at the very least, like an infectious disease, expanded, and we've gone through the Age of Enlightenment. We have done a lot of useful things. We have new problems because of that; we might be damaging the planet, we might be handing future generations some new problems. But as a survival machine, we're in a much better spot now than we were hundreds of thousands of years ago. So you could say, hey, we've developed combustion, we've developed the circuit, we have electricity, all this good stuff. We have modern health care. We're at the point now where people have more complexity in their phone than NASA had when it sent us to the moon; even a Gen 2 smartphone exceeds a hundredfold the supercomputer that controlled the initial Cold War missile system, right? So that's where we're at. We're at the point now where people are so disconnected from the level of advancement we have that they've actually regressed. And now, because they don't even understand it, people are like, well, the world's probably flat, and I'm gonna go ahead and type that on Facebook, because I can't actually tell with my own eyes that the world is round. And they don't realize that the timestamp when they post it on Facebook is actually using non-Newtonian physics, via GPS tracking data, to know where you are and the actual time you're at, not the wrong time you'd get if we were still operating with pure Newtonian physics. So there's an irony in just our existence as humans. So that is what we need to get to first, Philip: there's an objective reality, and most people in our space, whether they realize it or not, are critical realists. We're saying, yes, we all have different interpretations of it and different abilities to access and interpret it, but there are probably some systems we should use to determine what to do in this objective reality and to get facts from it, if you will. And that is where we get to the tradition of what we would describe as empiricism, which is the empirical testing of hypotheses. We form a hypothesis, we say, hey, I think this is the way it operates, and we attempt to falsify it, which is a whole other discussion. So, how do you like that as an intro?

Philip Pape:

That was a great intro. I had a lot of thoughts, but I'll just hit on a couple. When you got into critical realism, two things came to mind. One is a conversation my wife and I have had a lot over the dinner table: how do we know that red is red? Right? Never mind colorblindness. We think of objectivism like we all understand that what we're seeing there is red, that apple is red. But what do you see, and what do I see? And because of the human meat sacks we are, and interpretation and sensory input and all that, can you ever prove it? And that's where my mind then goes: well, what are the objective things outside of us, like wavelengths of light, frequencies and reflection and things like that? That definitely intrigues me, because when we talk about nutrition science or fitness, a lot of times I think people make it about right or wrong in the context of individuals' responses, as opposed to outside the individual: what doesn't change versus what changes once the individual meat sack gets into the feedback loop, so to speak. The other thing I thought about, when you were talking about how we've regressed but also advanced a lot as a civilization: again, I have similar conversations with people about whether we were better off as tribal, pre-agricultural human beings or not. There's a podcast called How to Change the World that just came out recently by Sam, a guy I know personally. And Eric, I think you'd love it. He's going to spend the next decade going through every major, like class nine, class ten, invention of man in all of human history. I just thought of that as an aside. So, empiricism, the scientific method. I was thinking of my physics teacher in high school, who taught us how in the Middle Ages people thought the trajectory of something thrown was half a parabola and then a straight vertical line, before anyone had tested it. So anyway, take that where you will.

Dr. Eric Helms:

You know, that's one I haven't been exposed to. But when I think of empiricism and where we were beforehand, it kind of comes down to, and really stems from, what we can measure, you know. If we go back to the pre-empiricists, which is actually a long time ago, there were, you know, empirical traditions before the Middle Ages in India. In the West, we're a lot more informed about the empiricist debates that most recently ended with, at least among the hard scientists: hey, we're going with Karl Popper. That's kind of where we got to. It's the closest thing to a consensus, which is by no means a consensus, but sufficient to create an entire field of quantitative research. Quantitative, by the way, just means that you can put numbers to something. Qualitative means you can't put numbers to it; it has to do with the human experience. With your point about what is red: red is quantifiable if we look at wavelengths, but it is also qualitative. Like, what number red is that? I don't know, but I know that is red, and that knowing is a little bit different between people. So you use this information differently. And qualitative experiences can inform quantitative ones and vice versa. So when we deal with human research, we are addressing both qualitative and quantitative frameworks. And critical realists are not purely quantitative researchers. It's very common in my field, as a sports scientist and sports nutritionist, and in public health, to interview people to get their experiences, because that can help form a hypothesis. And if we can measure something, then we can see what leads to a better outcome. It's the whole thing you'll hear in popular discussions of, well, what about the lived experience? What does that mean? Without connecting with people's qualitative experiences, it's very easy to ask questions that you can quantify an answer to, and then provide information that simply no one cares about or that isn't useful. So that's the link there. You can't be committed to critical realism without acknowledging the value of people's qualitative experiences, as well as needing to quantify what can and should be quantified to provide objective answers, at least about the nature of a relationship, if not necessarily what to do on a day-to-day basis, but often what actually to do when you have a very specific question. So empiricism was something we could really only start doing once we had the ability to measure things, you know. If we think about the stereotype of the Greek traditions and some of the things they came up with, like the four humors explaining the way the body works: blood, phlegm, yellow bile, black bile, these four fluids, and it's the mixture of them. It is a logically consistent, for the most part, set of rules that explains what we see sufficiently well that our own biases allow it to operate for decades and even centuries, where we go, yep, that's the way things work. So if we look at the history of medicine, basically until empiricism became a little more established and then rooted in medicine, people were doing some real effed-up stuff to humans. You know, leeches, bloodletting, lead.
There's a long list, if you just want to see why going to a hospital was just as much rolling the dice as letting a disease carry itself out, until about 1930, 1940, right? Bad times. It varies by region, it varies by the specific subfield, and there's a lot more complexity to it. But when you are operating in a space where you can't actually discern cause and effect, the subjective nature of reality takes hold, and humans can do some things that are less than ideal. But we also have to acknowledge that humans aren't so bad, because empiricism didn't come out of nowhere. We got to it from observation. So science is not this separate thing from us. It's just that the way we're built is not necessarily to always discern cause and effect; it is to survive. So we will accept things that are less efficient if it just allows us to pass on our genes. If we don't want to walk around with existential dread, we're going to have certain biases. A good example is that if you're a hunter-gatherer in the forest and you hear a twig snap, you're going to be attuned to immediately thinking it's something dangerous first. And this is where negativity bias stems from, and why social media can be crippling. You know, you get one negative comment and a hundred positive comments, you focus on the negative, it drives you down, and then you become less effective. Social media makes you depressed, even though you got ninety-nine positive comments. I may or may not resemble that example. Another example is, you know, we need to have the motivation, the motive, to do things and progress our human existence. And you could argue that religion or a belief in God or something of that nature, and I'm not making a claim as to whether it's accurate or not, is important, or why do anything, right? And of course there are ways to have values and morals and be an atheist, and there are plenty of atheists who have lots of meaning in their lives. But that can lead to certain behaviors. You know, we believe, based upon observations made through this limited, subjective nature of our reality, that the sun comes up because I pray to the sun god. And we really shouldn't not pray to the sun god, because what if the sun doesn't come up? That's a big problem. And each time I pray to the sun god, the sun does come up. So therefore, praying to the sun god causes the sun to come up. So there's a lot of muddying, you know. This is before an understanding of something like a control group or a placebo. Not that you could have a placebo prayer, placebo sun, or placebo sun god, but you get my point. So the fact that we had thousands of years of pre-empiricism, forming these rationalist logical constructions that are internally consistent and sound legit based upon what you observe, so we often attach ourselves to them, but that are in fact false, is why we need empiricism. And the most salient example that gets used, if we skip ahead to Karl Popper, is the claim that was present in Europe that all swans are white, right? This was just an objective truth. And you know, we have this in society now in very simple terms: there's stuff that we just agree is true and we never question it. And some of these are just fitness myths, you know, like, I heard blank. Like, of course we do that, and you're not supposed to question it.
And a lot of times it's accurate, sometimes it's semi-accurate, but this was one of those things in Europe. They'd never seen a black swan, because in Europe there are no black swans, and it wasn't until colonialism continued and ships went out to Australia that they were like, huh, that's a black swan. And now all of a sudden this objective truth, in quotes, is not true anymore, and you have to modify it and say all European swans are white, but some swans are black. And this comes down to the fundamentals of logic, which I don't think we need to go through, but it does say: all right, we can't just have these baseline agreed-upon truths. The best thing we can do, instead of trying to prove things, is try to falsify things. And this is basically the idea of: well, I'm gonna set up a hypothesis, I'm gonna say I think all swans are white, and then I need to form different experiments, different tests of this hypothesis, to try to falsify it. And that's why we use weird language in science, why we say things the way we do if you ever go through a master's degree or a PhD or something. Like, why can't we just say this is better than that, or these things are the same? Well, we didn't test that, and there's actually a different statistical model for that. So the statistics we use generally, in most of the science people are trying to apply, are based upon the idea that we're trying to falsify a hypothesis. And the terminology of a hypothesis and a theory differs from what we use colloquially. When we say theory colloquially, we think, you know, I've got a theory, and it's basically a hot take. But the theory of gravity is a little more than a hot take, and we can test it right now. I just dropped a little earphone thing I was playing with. It still fell to the ground, so, you know, Newton's doing all right. The theory of gravity is still holding up, and that's because attempts at falsifying it have all failed. So a theory is basically a hypothesis that has survived multiple, multiple, multiple attempts at falsification. It's moved from being a model to an accepted theory. So things like the theory of evolution, the theory of gravity. And even in qualitative research, there are theories of behavior change, you know, theories that are quite useful, which you can then go and empirically test, and you start to slightly modify these models. So that's kind of where we get to.
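To make the falsification logic above concrete, here is a minimal Python sketch of how it shows up in the statistics Eric references, namely null-hypothesis testing. All the numbers here, the assumed true effect, the group size of 10, and the 0.05 threshold, are illustrative assumptions, not values from the episode:

```python
# Minimal sketch: null-hypothesis testing as attempted falsification.
# All numbers (true effect, group size, alpha) are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def attempt_falsification(true_diff, n=10, sd=1.0, alpha=0.05):
    """Simulate one two-group trial and try to falsify 'no difference'."""
    control = rng.normal(0.0, sd, n)        # group with no intervention
    treatment = rng.normal(true_diff, sd, n)  # group with a (simulated) effect
    p = stats.ttest_ind(treatment, control).pvalue
    return p, p < alpha  # True means the null was rejected at this threshold

p, rejected = attempt_falsification(true_diff=0.5)
print(f"p = {p:.3f}, null rejected: {rejected}")
# If rejected is False, the null merely survived this attempt: we did NOT
# prove the groups are the same; we only failed to reject 'no effect'.
```

The takeaway mirrors Eric's point: a non-significant result means the null survived one attempt at falsification, not that the groups were shown to be equal, and a hypothesis that keeps surviving well-designed attempts is what gradually earns the label "theory."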

Philip Pape:

And can I add in there, the theory thing is fascinating, because there are also general versus specific theories, right? Like you mentioned gravity. Einstein found that Newtonian gravity didn't hold under all circumstances, when you go to the level of galaxies or planets, right? But that didn't falsify that it held at the level of Earth, kind of like the white swan: that swans were white was the truth for that one area, but it wasn't the bigger truth, and so you had to refine it. I think that's fascinating, because that's where we can apply it. What you mentioned at the very beginning, in your opening, I'll say, monologue, was that the practical nature of applying fitness research to our lives has to have a constraint, right? We have to have that special theory of fitness relativity for our lives. So that came up, and it's a great example.

Dr. Eric Helms:

Yeah. If you don't mind, just to bolster you, and then I'll shut up. Newtonian physics was great until we started trying to leave the gravity well, right? And we started noticing, well, hold on, we're trying to put stuff in orbit and we're missing, or it's not working out, because we didn't realize there's a time delay, you know. So special relativity became a necessity once you're operating at NASA. The difference between NASA, and you and I trying to help Tom, though, is that at NASA the least educated person in the room has a master's degree in mechanical engineering, you know? But for us, it's everybody. Everyone is interested in health and fitness. So it's very challenging to provide a level of baseline knowledge and education that doesn't go over people's heads and isn't unnecessary. And I know we've already lost a lot of people, but hopefully there are enough trainers listening who can pass this on and maybe do a better job than I at simplifying it further.

Philip Pape:

You know, Tony is listening. Tony, you're still listening. Thank you, Tony.

Dr. Eric Helms:

So yeah, I think essentially what I'm getting at is that some scientific fields are fully encapsulated within the halls of science, and it's experts talking to each other and trying to figure out how to do things better for a specific purpose, like putting a satellite into geosynchronous orbit. And other times we deal with the very frustrating reality where we've spent ten years educating ourselves on how to study humans and come to objective truth, and someone who has a high school education and has been duped by a science-y-sounding cult of personality is on the tip of Mount Dunning-Kruger, and they're like, nah. And you're like, ah, god damn it. And we have things like people being unwilling to get vaccinated and having a child die despite their best efforts, thinking they're doing the right thing. And that's the modern world we live in, and it's really tough. So, just to put that out there.

Philip Pape:

Yeah, that's true. And you mentioned the amount of information; now layer on top of that AI, which itself is built on misinformation to some extent, because of how it's trained. And so every day I'll get somebody saying, I've been listening to your podcast for, you know, 300 episodes, but then I asked Gemini XYZ and it said something different. Like, okay, what do I do with that? Good conversation starter. But the other thing you talked about was the history of this. Now, I guess my question to you, when you analyze the history of the scientific method and empiricism: why was there such a long stretch of time? And again, I think of the Western world with the Middle Ages, which weren't as dark as people make them out to be; there was a lot of innovation happening. But was it the power structures, and the very things we're talking about that are still relevant today, like religion, you know, the Catholic Church or whatever, that simply disincentivized or suppressed any progression in the use of empiricism, even though there were pockets of it always happening?

Dr. Eric Helms:

This is happening constantly, and in fact, I would say it's in all of us. You know, for example, I would say that generally, people who are left-leaning in Western cultures are pro-science, but they also tend to be people who trend towards activism and motivated use of science. And that is starting with a conclusion, because you're seeing a population in need. I'm not making a claim about whether this is good or bad, but we go: this group is struggling in this way, I want to assist, and I think we need to do this. And this can result in motivated, biased interpretation of research. So, as a fitness example, if you take some of the more extreme proponents of Health at Every Size, you can interpret some of the data in such a way, if you cherry-pick, that you can make a seemingly consistent claim that body fat at high levels is not causative of, or even predictive of, health issues. And you can claim that it is a correlation, it's a marker, that it's really other things, and that the real issue is stigma. And it's not that stigma isn't an issue, and it's not that the association isn't weak at certain body fat levels. But they will deny that a given person plucked from a population with a BMI of over 35 has less than a 1% chance of being metabolically healthy, and that there is actually a contributory role of adipose tissue, especially visceral fat, leading to that. And that motivated reasoning comes from a great place: trying to make the medical space better and kinder to people who are living in large bodies due to no fault of their own, through a combination of genetics, how they were raised, and a changing society that makes moving unnecessary, dysregulates our hunger and our metabolism, and provides us with a constant supply of highly palatable, easy-to-consume foods that are affordable, which is exactly what our human bodies are engineered to seek. And when we look at primates in captivity, they slowly gain weight too, and we're not sitting here claiming, you know, that orangutans are lacking the willpower to be good people, right? So I think that's an example where even in groups that self-identify as science-forward and science-open, we have desires. We are operating in this critical-realism space where we want change. And as soon as you want to do something subjective with objective information, it goes through a lens of interpretation, and there's no way around it, even if the intentions are great. I come to the table thinking science is positive: I want people to use this, empiricism is the best tool we have, all this stuff. But no matter how much I try to give people the best information, I am also starting with that premise. So if you really sit down, like at a philosophy conference, everyone starts with a positionality statement: here's who I am, where I was born, what I believe, and here's what I bring to the table, so everyone can be aware of your biases and your motivations and things like that. The degree to which you want to spend time on that is questionable in different contexts, but I think it's important to understand, because then we don't just go, oh, well, the Dark Ages, blah, blah, blah, or the Catholic Church, or systems of oppression, or what have you.
You can even see, in, for example, communist regimes in China, the idea of: hey, we're just going with science, but oh, these scientists who have any beliefs related to religion, we're actually going to kill them. And we essentially create a similar level of fervent zealotry and lack of questioning around the state. So: yeah, we are objective, and we're not dealing with this silly religion stuff that has ruined society, look at the Catholic Church, but the science needs to support the state, you know. And it's the same problem, just through another medium. So I think starting with a conclusion, or starting with an understanding of the world that you accept without being willing to question it, is the issue, but it's also kind of the natural state of humanity if people don't align themselves towards empiricism. That is what you see in some pockets of the fitness industry now. I have a quote-unquote model, not a scientific model, but a certain set of truths that I've come to accept, which I may even have formed based upon cherry-picked data, and that leads me to a conclusion. And if anything conflicts with that conclusion, I question that data, I question the people who produced it, and I apply a higher level of skepticism to it rather than questioning the model. And then that's confounded by the fact that I am now selling you something, I have a belief or identity wrapped up in it, Dr. Carnivore Tom, for example, I'm part of an in-group, and/or I am just so deep down the rabbit hole because I got exposed to a certain way of viewing things. And sometimes we don't realize that the person who exposed us to this information is not an empiricist, even if we are. But the people who still kind of align themselves with empiricism, who just don't realize that the person they've learned from doesn't, can find a way out. A really good common example, I think, for both of our age groups and the time we've spent in the industry, is people who got exposed to Gary Taubes. Gary Taubes was one of the biggest proponents and popularizers of, you know, low-carb, high-fat diets. An investigative journalist, he started with a belief, and he positioned people like Ancel Keys, and institutions like the World Health Organization, or just, you know, the dietetic associations of all the Western countries, as having motivated reasons for villainizing fat, industry connections and so on. And then he would take specifically the research that logically fit together, if you ignore all the other research, and could form a mechanism, could form an outcome, and it looked like an empiricist walking you through how it is actually carbohydrate, and specifically sugar, through the mechanism of insulin, that has led to obesity, and how power structures, like you said, have kept us in a state of unhealth. And it's logically consistent, but it's also incorrect. And you only know that, not necessarily by questioning whether he's an empiricist, but by actually looking at the research and asking: well, is that all of it? Because in human research we have a tremendous number of false positives, and we have to look at the overall body of data, and we have things like the hierarchy of evidence, and there are systems in place to address these biases so we don't just cherry-pick our way to a false conclusion. So it appears to be empiricism, but it's not.
And how does the average person address that? How do they deal with it? That comes down to: well, let's take a look at the attempts at falsification and how they respond to them. So there are almost these two things you have to do. You have to know this philosophy we're getting at, and then you have to look at, okay, for a given paradigm, whether that's low-carb ketogenic dieting, or a certain way of training, or a certain supplement, or a certain approach to menopause: how do they respond to attempts at falsification? Is there immediate dismissal, safeguarding, siloing, and silencing of critics? Or is there openness and a relatively fluid shift in what you're recommended to do? And that is kind of what you have to do in the real world.

Philip Pape:

Yeah. So we'll get into the hierarchy of evidence, but what came to mind was the hierarchy of skepticism when you're new to the industry, you're just, you know, general population, and you don't have many premises other than what would be called, I guess, the illusion of obviousness, or the justified-but-false beliefs we talked about, where you just kind of grew up with them and they're rattling around in your brain until proven otherwise. And what you're saying is that there are a lot of people doing the call-outs and the hot takes where the criticism is not of the model or the evidence; it's of some other component or actor or institution or something like that. And then the average person, how do they know what to be skeptical of? Should they be skeptical of that person? Because on one hand, I could be hearing you say, well, don't just immediately attack the funder of the research. But should you, at some point, be skeptical of the funding or the funder of the research, right? And so you never quite know what to be skeptical of. So maybe, I don't know if the hierarchy of evidence allows us to do a step-through, kind of a "start here, and here's the flowchart." You know, I just woke up today, a babe in the woods, and I want to know what to eat and how much protein to get, and I have no idea where to even begin, right? That's just one example. Where do we start?

Dr. Eric Helms:

No, I think that's great. One thing, and I'm gonna take a lateral step here because it's really important, is that there's actually research on this. In modern times, especially the post-COVID era with its conspiracy theories, there are a lot of things that masquerade as skepticism. And there have been some attempts to identify how people with relatively high IQs, who are relatively well educated in certain ways, fall victim to conspiracy theories. And it's because of that motivated reasoning we all default to, and the fact that we don't necessarily train people in this type of thinking, empiricist thinking, right? There can be a constellation of factors related to impulsive logical thinking and cynicism, which can actually be measured on a scale. And cynicism, which you can measure on a validated scale in, say, social psychology, correlates pretty strongly with holding false beliefs and with gullibility, which you can also measure on a scale. Now, cynicism, from a scientific definition perspective, is when, instead of questioning the information or questioning whether there's an issue with the logic, you jump to making inferences about the source of it, the person, and essentially you poison the well. You don't go: oh, maybe we can add something to the water in this well to get some fluids out of it, because there's no better option, that's a swamp over there. You go: no, the well has been trying to kill us for years, we need to drink from the swamp, right? Which leads to problems. And maybe the well is a better option than the swamp, because the swamp, oh, I don't know, it's a swamp. So that's what we're getting at. There are a lot of people who trend towards cynicism. And cynicism here is not just being kind of a negative person; it's not that colloquial term, the same way "theory" has a colloquial version. It really does come down to a distrust of people, and it's an emotional response. So you can identify when you are being cynical versus when you are being skeptical, because skepticism comes from logically assessing something, like, oh, B doesn't follow A, right? Instead of going, oh, I don't like that. Cynicism typically feels like defensiveness in an argument. You're loading your barrels before you fully consider what they say. You don't want what they're saying to be true, rather than trying to discern truth. So this is something that takes a little bit of self-awareness to identify. And it's something you can come to knee-jerk with the whataboutism. You know, "what about" is different from going: okay, so if that were true, what else would be true? And how could this not be true? What you are trained to do as a scientist, and ideally what you want to do in the real world, is question the beliefs you hold and figure out how you could find out you're wrong, not search for the evidence that proves you're right. Because you will be able to find the evidence that proves you're right. That's what we're built for after millions of years of evolution. You can maintain that belief; you can have a selective interpretation, and even a selective acknowledgement of the existence, of the evidence available to you. So instead, the only way, truly the only way, to get around that bias is to look for ways in which you could be wrong.
And skepticism gets you closer to that; cynicism pushes you further away. And it's an opportunity for your ego, your existential desire to hold on to your identity or your understanding of the world, which feels safe, to go: right, I don't have to change or threaten that. I don't have to be open to a threat, which, you know, we're not a big fan of as survival machines. I can shut that down. If I'm cynical of that source of information, like the entire apparatus of science, cool, then I can just go with "I think the world is flat" and I've closed myself off to that avenue. So I think we have to start there. We don't have to dive down that rabbit hole, but it's important.

Philip Pape:

No, it does. A friend of mine and I were having a conversation just recently about the God gene, The God Delusion I think it's called, by Richard Dawkins, the theory that tribalism and religion could have been a genetic advantage in the past, where, I mean, with tribalism, we know that the "other" is kept out and we protect our group. And I was trying to go back and forth with him about the conspiracy-theory mind today. Is that the same thing? Is it the same kind of thought, or is it different? You're talking about cynicism here, and I'm not gonna ask you to suggest whether it's genetic or not, but is it the same, or are there differences? Because I feel like one is about a long-held belief as a group, and the other is a mindset of being open to these crazy ideas and then holding on to them falsely. I don't know. Does that make sense?

Dr. Eric Helms:

Yeah, I think they operate in the same constellation. There are going to be certain people, whether it's genetic, epigenetic, or, you know, the way they're raised culturally, who are more predisposed to this and more likely to pass it on. And then you get kind of a groupthink, sociocultural thing that expands it, and it can impact the whole society. But you do have to ask the question: why do some people who are raised in a cult take it hook, line, and sinker, they're down, they're all about it, and they will never question that belief, while other people go the complete opposite direction? You know, you can meet just as many atheists who were raised fundamentalist-insert-religion as people who were raised atheists. And there are atheists who have never questioned their atheism, and there are atheists who have then become religious. So, you know, that's why we have to kind of just go with empiricism, because both of those can't be true. And if you can start with the truth and move further away from it, then we can't trust our human senses fully without a system. Which links back to: okay, so what systems do we have in place? Well, within empiricism, we have the hierarchy of evidence, which is a useful heuristic and model. But like all heuristics and models, it is a shortcut; that's the definition of a heuristic, and all models are wrong, but some are useful. In most cases, the hierarchy of evidence is useful. And that is the simple understanding that empirical designs which allow us to infer causation, X causes Y, are at the top of the pyramid. And then, because there is human variation, and all the issues with sampling, and the need to get a precise estimate for a population from a smaller group, which is the statistics we talked about earlier, getting a whole bunch of these causal research designs together and reviewing them in a systematic manner, and maybe even statistically, a systematic review or a meta-analysis, respectively, sits at the very top of the hierarchy. So I can find a randomized controlled trial, which in our fields is typically too small, that has done all the things we've alluded to, like having a control group and having a placebo group, so we know that the influence of expectation, that effect, is smaller than the actual change from the intervention, and also that the normal variation in a group doing nothing, that change, is also smaller than the intervention. But if we get an RCT with only 10 people in each of those groups, just by random chance there's a decent possibility that it's either wholly misrepresenting the directionality of the effect of the intervention, or it's showing no effect, or too strong an effect, or it's overestimating the magnitude. But if I can take ten of these 10-person-per-group RCTs and combine them together appropriately, I could either, kind of vote-counting style, do a systematic review and say: hey, in eight cases there were null effects, in one it was positive, in one it was negative, so there's probably nothing going on here. Or: hey, in seven cases there was a null and in three cases there was a positive; this is probably a legit effect that's only showing up sometimes.
So we can expect a small to trivial true positive effect from this intervention. Or, a more robust way of doing it, where we account for things like the variability within studies, the hierarchical nested nature of the data, and double counting if you're getting two samples from the same study: we do a properly done meta-analysis, and we can actually get a standardized mean difference, weighted for the individual strength of each study, and you see those forest plots where we get the little diamond. This is why a well-done meta-analysis matters, and I see the echoes of not understanding these things all the time on social media. Just today, I posted, in collaboration with the Sports Nutrition Association, part of a lecture I gave on a meta-regression that we published on protein. And someone said, oh, great, a meta-regression on a bunch of poorly designed studies; isn't that just amplifying the poorly designed studies? Basically, you throw garbage in and it makes it more garbagey. And that's a fundamental misunderstanding of an effectively done meta-regression. Because the reality is that you can look at 90% of studies on moderate versus high protein intakes and you will see a null; it doesn't seem to make a difference for the outcomes of interest, fat-free mass change, fat mass change. There are almost no studies favoring the other direction — actually, there is one study I'm aware of, out of quite literally hundreds and hundreds, maybe a thousand now, that has found lower protein to provide some type of statistically significant advantage for body composition change. And maybe about eight to ten percent of studies find a positive significant effect for higher protein. If you do a proper analysis of all those studies combined, and you essentially give yourself the power of a really high-end RCT, it tells you: hey, there's a legitimate but very small positive effect from a high-protein diet, one that you need 1,000 to 2,500 participants across 60 or 70 studies to detect. So should we be spending a lot of time quibbling over 1.4 versus 1.8 grams per kilogram of protein? It depends on the context, but generally no, if we're talking to Tom at Thanksgiving. If we're talking to a young Ronnie Coleman trying to turn pro, maybe a different answer. Or if I'm deciding what I'm gonna do two weeks out from getting on stage in Taiwan, potentially a different answer. But that's the context we're talking about. The fundamental misunderstanding there is a rejection of the evidence hierarchy that comes from not understanding how the evidence within the hierarchy operates — not understanding that a meta-regression doesn't inflate error; it reduces it, enhances precision, and deals with uncertainty, if it's done right. But not all meta-regressions are done right.
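
To make the pooling Eric describes concrete, here is a minimal Python sketch (all numbers are invented for illustration, not data from any study): it simulates many small RCTs of a true-but-small effect, counts how many reach significance on their own, then pools them with fixed-effect inverse-variance weighting.

```python
import numpy as np

rng = np.random.default_rng(42)

TRUE_EFFECT = 0.2   # small true effect in SD units (hypothetical)
N_PER_GROUP = 20    # small groups, like the underpowered RCTs discussed
N_STUDIES = 60      # roughly the scale Eric mentions for protein

effects, variances = [], []
for _ in range(N_STUDIES):
    treatment = rng.normal(TRUE_EFFECT, 1.0, N_PER_GROUP)
    control = rng.normal(0.0, 1.0, N_PER_GROUP)
    effects.append(treatment.mean() - control.mean())
    variances.append(2.0 / N_PER_GROUP)  # variance of a difference in means (unit SD)

effects, variances = np.array(effects), np.array(variances)

# "Vote counting": how many studies are individually significant (z > 1.96)?
z = effects / np.sqrt(variances)
print(f"individually significant: {np.sum(z > 1.96)} of {N_STUDIES}")

# Fixed-effect inverse-variance pooling: one precise estimate from many noisy ones.
w = 1.0 / variances
pooled = np.sum(w * effects) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))
print(f"pooled effect: {pooled:.2f} "
      f"(95% CI {pooled - 1.96*se:.2f} to {pooled + 1.96*se:.2f})")
```

With these settings, only a handful of the 60 simulated trials cross the significance threshold on their own, yet the pooled estimate recovers the small true effect with a tight interval — which is the point of the hierarchy's top tier.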

Philip Pape:

I always think of it as a filter or a sieve. So when people think of these studies — you just said the 1.4 versus 1.8 may not make a difference, though it may to someone who's optimizing — that raises a question a lot of people have, and how do you communicate this as a science communicator? Even if you do see a cause-and-effect relationship in these studies, you're calculating over a sample; you're usually coming up with a curve, a normal curve with a mean. How is that useful to me? Is the usefulness that we're coming up with a constraint box for the effect, knowing that an individual falls somewhere within it? That's what comes up in people's minds. It's like, well, what about just me doing it myself and having the most direct evidence possible? It's anecdotal, but it's me. And where does all that fall in the hierarchy?

Dr. Eric Helms:

That's a great question. So essentially, when you are trying to provide useful information from some of these big-ticket items at the top of the hierarchy — either your well-done RCTs, meta-analyses, and systematic reviews, or, if there's no systematic review or meta-analysis, your own subjective interpretation of the five or six RCTs you've got, or the one RCT you've got — you have to, as a science communicator or a trainer, anticipate the errors people will make when you then give them a new heuristic based upon that. The common one in our space is: yeah, higher-protein diets are better. And then people extend that to higher protein being better for everything, and more always being better. They assume a univariate linear relationship because that's an easy thing to conceptualize. More protein: if I go from 0.4 to 0.8, that's the same as going from 0.8 to 1.2, the same as going from 1.2 to 1.6, the same as going from 1.6 to 2, all the way up to eating 400 grams of protein, and it's better, and it's better for everything. So if I have an issue, the answer is: increase protein. Without thinking about your baseline or where you're at — you tell me you're hungry, you're not getting the gains you want, or you're losing too much muscle on your diet — try adding 50 grams of protein, and that's always the answer. When in reality, there are points along that curve where the line gets less certain, flattens out, and it becomes an increasing barrier and source of friction in real-world application, and just gives you really smelly farts and pisses off the people around you, right? You have to think about the cost, because if I pull one lever, it impacts other things in the real world. And that directly connects us to the benefits and the shortfalls of anecdotal information. When I do a meta-regression, we have done statistics to isolate the effect and the relationship of protein on the other variable, so we can actually quantify the degree of heterogeneity and the influence of other variables. Here's another good example: the effect of volume. There's a great meta-regression out right now by Pelland and colleagues, looking at over a thousand people, and you can say, hey, we see an 8% increase in hypertrophy over a typical study length when you're doing 15 to 25 sets. But that's not saying that volume is causing the entirety of that 8%. If you took that same person doing that amount of volume and knocked out the genes that allow them to turn on mTOR, they're probably not gonna get 8% growth. They may not get any growth; they may get smaller. Or we go, hey, guess what, you're not allowed to sleep. And when I say that out loud, you go, well, of course, I know that. But when you look at that graph and it tells you a very simple thing — this growth for that volume — your monkey brain wants to go: well, if I'm not doing 15 to 25 sets, it's less growth. We don't think about those interacting variables. And that's in the construct of a study where we isolated cause and effect. But cause and effect is not as simple as it is in physics, where if I tap that pen, the pen's gonna move.
In human research, that pen is in a storm: I've tapped it, and the effect explodes into any number of different vectors, impacts, accelerations, and decelerations. So volume is a contributing factor to the overall constellation of things that impact us. When we step away from the research, we're still in that harsh reality, but now we don't have the ability to identify independent effects. Anecdotally, you're going: well, I don't need science to tell me this; when I ate more protein, I looked better, I gained more strength, I lost more weight. But what else happened when you ate more protein? And how often do we have people come to us who have created a false association? Because we're association machines, or we wouldn't be here today. That twig snap? Probably a tiger. Well, it wasn't a tiger, but if you didn't associate twig snapping with danger, the tiger would eat you and you wouldn't get to pass on your genes. So we got better and better at falsely associating twig snapping with tigers, because it was good enough to keep the tribe alive to talk on a podcast today, right? So here's what that means: I had a guy come up to me when I was in London and he goes, hey man, every time I take protein, my elbow starts to hurt; I've been thinking about going off protein powder. And this is a guy who's a high-level calisthenics athlete, looks incredible. He could take his shirt off in any locker room and get asked, okay, what bodybuilding program are you on? And he's like, I don't lift weights. But he starts taking protein at certain times. What motivates the decision to start taking protein? I'm getting serious about my training, I need to get my protein up; I'm cutting right now; I'm going through a higher-volume block and my sleep's a little disrupted. As I talked to him, it became clear protein is being used almost prophylactically, at times when he's pushing the envelope. Now he's associating it with the thing he's paying attention to: I got this new protein; I've heard things about the supplement industry; I operate in a relatively skeptical place, maybe with a bit of a naturalist fallacy; should I be focusing on real food? Man, my elbow's hurting every morning, and every day I'm chucking this powder in. Is that hurting my elbow? And we see entire traditions and ways of life built upon that kind of reasoning, right? I can't think of a mechanism for why protein powder would hurt his elbow, but he hasn't thought through those things. And just as easily, if I presented him with: hey, when's the last time you changed your toothpaste? And he's like, actually, the same time I started taking this protein powder. Well, do you know about all the additives in your toothpaste? No? Look at the back of it. What's on there? Oh my God, what is this? What's fluoride? I don't know, maybe you should Google fluoride. Google fluoride and health risks. Like, oh shit, it's my toothpaste. And it was neither one, right?
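
Before Eric turns back to anecdotes: here is a toy Python sketch of what the meta-regression he keeps referencing mechanically does (the per-study numbers are invented for illustration, not Pelland and colleagues' data). It regresses study effect sizes on a moderator, weighting each study by its precision, with log(sets) standing in for diminishing returns.

```python
import numpy as np

# Invented per-study data: weekly sets, observed growth (%), estimate variance.
sets   = np.array([4, 6, 8, 10, 12, 16, 20, 24, 28, 32], dtype=float)
growth = np.array([2.6, 3.4, 4.1, 5.0, 5.6, 6.7, 7.4, 7.8, 8.1, 8.3])
var    = np.array([0.8, 0.6, 0.5, 0.5, 0.4, 0.6, 0.7, 0.9, 1.1, 1.3])

# Meta-regression = precision-weighted regression of effect size on a moderator.
# Using log(sets) gives the fitted curve a diminishing-returns shape.
X = np.column_stack([np.ones_like(sets), np.log(sets)])
W = np.diag(1.0 / var)  # inverse-variance weights: precise studies count more

# Weighted least squares: beta = (X'WX)^(-1) X'W y
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ growth)

for dose in (5, 10, 15, 20, 25):
    print(f"{dose:>2} sets/week -> predicted growth {beta[0] + beta[1]*np.log(dose):.1f}%")
```

The prediction is a curve through averages: it flattens at higher doses, and nothing in it accounts for sleep, proximity to failure, or the rest of the "storm" Eric describes.
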
So that is the issue with anecdotes, even well-done anecdotes — which is essentially everything on the hierarchy of evidence below the randomized controlled trial: prospective cohort studies, epidemiological research. Now, we don't throw those out; that's the cynic sneaking in. We acknowledge that really well-controlled, well-observed anecdotes are essentially most of science. We can't always do an RCT on something, and we definitely can't always meta-analyze it. There's a classic meta-analysis on the use of parachutes, which shows you how even the heuristic of evidence-based practice can be taken to absurdity. It states that the authors could find no randomized controlled trials on the use of parachutes, and therefore the recommendation to use parachutes when jumping out of planes is not evidence-based and we should question their utility in skydiving. It was literally published to prove this point. And we see this all the time in our field now. You want cholesterol to not be associated with negative heart health, you want saturated fat to not be associated with it? Give me the RCT where you experimentally gave people high-cholesterol, high-fat diets that they followed in a metabolic chamber until they died, and then let's see who died first. And you're like, oh, wait, you want the 30-year, thousand-person metabolic-ward RCT? That's not a thing. The best thing I can give you is what happened to cholesterol and blood pressure over an 8-to-12-week, maybe six-month, study. And then what happens if I do a large-scale epidemiological prospective cohort study, where I sample people, give them food frequency questionnaires, follow up with them 20 or 30 years later, and look at the odds ratios of dying; then I do some statistical controls for smoking, obesity, socioeconomic status, environmental factors, everything I can think of, and see if both of those together give an inference of what's in the middle. But the thing about this heuristic is, I can say that and you go, oh, that makes sense. But then later in this conversation I'm going to say: hey, if someone tells you cortisol is a stress hormone and therefore you shouldn't train fasted as a menopausal woman because of X, Y, and Z, and they have this long mechanistic explanation that talks you out of certain ways of training or eating, I'd say you should just focus on outcome studies. If they're telling you that fasted training is going to harm body composition, then you want to look at studies with the outcome of body composition. And that's true. But in the context of public health nutrition, it's not true, because we cannot have that evidence. So even an accurate heuristic, a useful model that has been well established and accepted since the '90s with Sackett, where we took evidence-based medicine and ported it over to evidence-based practice in strength and conditioning and nutrition, is not always true. A big part of the evidence-based hierarchy is that you have to use the best available evidence. And then you have to scale your confidence, your precision, and your specific claims to the best available evidence.
So many times we have to operate and make decisions in the absence of an RCT, in the absence of a systematic review, meta-regression, or meta-analysis, and even in the absence of anything except short-term, non-outcome-based RCTs and observational epidemiological research — which is basically the whole of public health. But if you don't accept that, or if you manipulate it, or if you're not fully aware of those nuances, you can end up operating like: hey, Helms told me I need to focus on outcomes, therefore I'm not buying this whole cholesterol-saturated-fat link to heart health; show me the RCT, show me the outcome data, show me the study where you killed more people by feeding them a certain diet over 30 years. So it's complex.
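
For the "statistical controls" in observational research Eric just described, here is a small Python sketch with invented numbers: a crude odds ratio can be inflated by a confounder (say, smoking), and stratifying with the classic Mantel-Haenszel estimator removes that distortion.

```python
# Invented 2x2 tables: (exposed cases, exposed non-cases,
# unexposed cases, unexposed non-cases), stratified by a confounder.
strata = [
    (80, 120, 20, 30),   # smokers: mostly exposed, higher baseline risk
    (5, 45, 20, 180),    # non-smokers: mostly unexposed, lower baseline risk
]

def crude_odds_ratio(tables):
    # Collapse the strata first, ignoring the confounder entirely.
    a, b, c, d = (sum(t[i] for t in tables) for i in range(4))
    return (a * d) / (b * c)

def mantel_haenszel_odds_ratio(tables):
    # OR_MH = sum(a*d/n) / sum(b*c/n): compares exposed vs. unexposed only
    # within each stratum, so the confounder can't masquerade as an effect.
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

print(f"crude OR:    {crude_odds_ratio(strata):.2f}")            # ~2.7, spurious
print(f"adjusted OR: {mantel_haenszel_odds_ratio(strata):.2f}")  # 1.0, no real effect
```

In this made-up example the exposure does nothing within either stratum, yet the collapsed table suggests a near-tripling of odds — exactly the kind of distortion those statistical controls exist to catch.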

Carol:

Before I started working with Philip, I had been trying to lose weight and was really struggling with consistency. But from the very beginning, Philip took the time to listen to me and understand my goals. He taught me the importance of fueling my body with the right foods to optimize my training in the gym. And I lost 20 pounds. More importantly, I gained self-confidence. What sets Philip apart is the personal connection. He supported and encouraged me every step of the way. So if you're looking for a coach who cares about your journey as much as you do, I highly recommend Philip Pape.

Philip Pape:

Yeah, it makes my head hurt, because it's what I see every day on social media: the cherry-picking, the use of studies almost as ammunition, and then the cynicism we talked about as well. So I have two follow-ups on anecdotes. One is: where does the n-of-1, the individual, come in? Not the example you gave of finding causation where it doesn't exist, but incorporating what is understood to be the best available evidence and then pivoting from there into refining it for yourself. That's where I'm coming from. And then a totally different question: do we use anecdote predominantly to generate hypotheses today, or are a lot of the hypotheses we come up with in nutrition science built upon well-established science, if that makes sense?

Dr. Eric Helms:

Good question. Uh, good questions. And I'm gonna tackle the first one.

Philip Pape:

Very different. Yeah.

Dr. Eric Helms:

Yeah, no, but they're related. So the first one is basically: how do we use this on an individual basis? The trainer's going, okay, this has been great, guys, I hung with you, but I'm gonna go train my clients Monday through Friday; what do I do? And that is basically the question of: what is the best available evidence? The best available evidence depends upon your application. If I'm writing a position stand in the Journal of the International Society of Sports Nutrition on, broadly, what bodybuilders should do — and by the way, that's on the way, so check that out, coming to a peer-reviewed publication near you — the best available evidence is gonna be RCTs, meta-regressions, and meta-analyses. But for a trainer who is going to be training Tom, who wants to do his first show — apparently your uncle really took your advice seriously at Thanksgiving last year; good job, Tom — the best available evidence on what to do when he hits a fat-loss stall is not a meta-regression, meta-analysis, or RCT. It might be informed by those general principles, but it is the best anecdotal data you can get from him. And I think that's where a lot of people slip up, because it can easily become "hey, that worked for me." Which is okay, so long as it's done in a responsible, well-controlled manner: you're not changing six things at a time, and it's not coming from a heavily motivated position where, say, I believe only low-carb diets can get you shredded, so I've "tried everything" — except, of course, the one thing I've decided could not possibly work, which is having carbohydrates. But if you are operating with the right philosophical stance, and you don't have a tremendous number of strongly held beliefs that are not informed by good data, then well-reasoned anecdotes, informed by the principles and relationships we see in the data, are the background to the decisions we make. So, for example, instead of going "15 to 25 sets, based upon the Pelland meta-regression, should produce 8% hypertrophy in my client": my client's stalled. I've looked at a lot of their nutritional factors, their sleep. They want to make more progress by next year. What can I do to turn up the rate of progress? Well, they're only doing 10 sets per muscle group per week and they have the time to do more. I know from the Pelland meta-regression that there are diminishing returns: you lose efficiency the higher the volume, but you do get something back. So long as they can recover from it, they like it, they're not getting joint pain, and they can maintain the same proximity to failure and exercise selection, like they do in those studies in a controlled lab environment, I can increase the dose and maybe get marginally faster gains. Let's try that. Not: I expect to go from 6% to 8%, and all of my clients should be on 15 to 25 sets. That is the type of thing where you understand the general relationship of volume to hypertrophy, with all of the context, and then you apply it to the situation. But then here's the critical piece: you have to accept that it can work or not work, better or worse than would be predicted by that meta-regression, and both of those things can be true in the same reality. And you don't have to figure out why. And you can't.
And that's a tough thing for people who like science, because that's not what we want. We want concrete answers; we want the math to work. But probability is a type of math that even people who really like math can be uncomfortable with — it's like talking about quantum states and the fuzzy math we don't fully understand to someone who just really likes things to make sense, and they go, that's not fucking math. Pardon my French. So I think that's where people struggle. They go: well, when I reduced volume, I got better gains, so Pelland's meta-regression is nonsense. And you go, okay, hold on. You saw in yourself, in an uncontrolled environment, something that did not match up with 35 studies, a thousand individuals, multiple labs, a properly done meta-regression, and you're rejecting that but accepting your own reality. As much as you think you're being rational, you're assuming that your reality is fundamentally different or more valid. It's more relevant to you, and it should be what you go with, but you don't reject the meta-regression. You don't assume those thousand people aren't real, because that's actually what you're doing. And it's easier for me to avoid that, because I've run these studies. I've seen the 15 people come in, put an ultrasound on them, and watched what happens. I've made them go to failure. I've prescribed different training protocols to different people. So my monkey brain is connected to it, and it's easier for me to go: well, for whatever reason — I don't know, and I'm not even gonna venture a guess, though if I were your coach we'd try to figure out the bottleneck — volume didn't go the way we wanted. Okay, for whatever reason, your cap for a benefit is 10 sets; 15 sets didn't go well; back to the drawing board. Not: let me throw out drawing boards because science is bullshit, and I'm gonna follow a Mike Mentzer approach or something like that. So you need to be able to hold two realities: there is a relationship out there, but it is describing an average, a mean. And it's not that we're all special snowflakes; it's that, like I said, maybe 80% of the variation in hypertrophy is not related to volume itself — it's related to all the other things that volume sits within. That's what you have to hold on to in order to hold those two things together. So to summarize it really simply, Philip: the best available evidence, once you've understood these principles and relationships, is the anecdotal realities and the changes on the ground. But that is fundamentally flawed as well, and you need to stay up to date with both. Because the research tells you what levers you can pull and what they should do. But it doesn't matter what should happen if something else happens in the real world, because that's the only data that matters once it comes down to coaching people.
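
One way to see how "both realities" can be true at once — a toy Python simulation, not data from any study: give everyone a genuine average benefit, add individual variation and measurement noise, and a large share of individuals will still appear to respond worse.

```python
import numpy as np

rng = np.random.default_rng(7)

N = 10_000
MEAN_BENEFIT = 0.8        # true average extra growth (%, hypothetical)
BETWEEN_PERSON_SD = 1.0   # genuine individual differences in response
MEASUREMENT_SD = 1.5      # ultrasound error, hydration, day-to-day noise

true_response = rng.normal(MEAN_BENEFIT, BETWEEN_PERSON_SD, N)
observed = true_response + rng.normal(0.0, MEASUREMENT_SD, N)

print(f"mean observed benefit: {observed.mean():.2f}%")       # ~0.8, the real average
print(f"appeared to get WORSE: {np.mean(observed < 0):.0%}")  # roughly a third
```

The average effect is real and the stalled client is real; neither observation refutes the other.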

Philip Pape:

Yeah, I think that's actually very helpful, and it goes back to objectivism: if you do something and get a result that is vastly different from what you now accept, as an empiricist, as the best available evidence, that deviation is a really good data point to put you on a different path with your detective skills, or with a coach helping you out. That's what I get out of that. And like you said, the special snowflake thing is important too, because I think people confound being different with being uniquely special, completely separate from any human that ever existed, which are two different concepts. So I like that. And I was also kind of laughing in my head, because when someone asks me what they should do, or what some number should be, more and more my answer is: try this and get back to me, because I just don't know. Let's experiment. Let's eat that banana before you work out and see what happens.

Dr. Eric Helms:

Can I give you an example of how I think one should deal with this when presented with things that don't seem to make sense? I remember being asked by Ian McCarthy, way back in 2012, why I thought diet breaks and refeeds seemed to work in our clients. I was on a podcast with him — for those who don't know, this is old-school evidence-based stuff from the early YouTube and Facebook era. Ian McCarthy was someone who was promoting evidence-based practice, and so was I. It was early days for me in my journey, but late enough that I understood some of the things we've been talking about, maybe less explicitly but implicitly ingrained. And the data on refeeds and diet breaks was highly mechanistic, highly speculative. The narrative around it was: you get a metabolic boost, and you can eat more but lose weight faster. That wasn't quite what we were seeing, but generally, when we used refeeds and diet breaks, we'd get better outcomes, more consistent fat loss, and people looking better on stage. The cost was that it takes a little longer; you have to start a little earlier. But the net benefit was pretty large. And the experiential effect — if I could predict what would happen if we did a whole bunch of studies on physique competitors using diet breaks and refeeds, which we still don't have, by the way (there are maybe four studies in athletic populations, a few more now, but really not what we would need to confirm our anecdotal observations) — was: man, this thing has a moderate to large effect on multiple outcomes of interest, primarily body composition change, and it seems to make the process easier as well. Whoa, that's a huge win. And Ian asked me, well, how? And my response was: I don't know. Looking back, I'm proud of myself for that, because I could have said something hand-wavy about the impact of leptin and downstream regulation of metabolic rate, or our inability to measure the true energy expenditure change because we're just measuring BMR or resting energy expenditure and not the effect of NEAT, and maybe you do get this energy-boosting advantage — which I would say now, based upon the data we have, is probably not true. I would have been wrong. I would have been hand-waving, selling you a sexy story, leaning into my desire to have an explanation for why the thing I'm doing works, and constructing a logically consistent, potentially true story that people would have latched onto because I'm Eric Helms, and I've got the anecdotal proof, and we're good coaches, and I've seen it work on myself. And I would have created a greater level of bullshit that has to be disentangled once the data comes out, more placebo effect and tribalism around the approach, and people who are then harder to teach the kind of thing we're talking about on this podcast. But instead — again, I'll give myself a little pat on the back — I said: I don't know. This seems to work, and I'm okay with that. I'm not claiming it's some metabolic effect or advantage, because it's probably not that, but something else is happening. Just because we haven't measured it yet doesn't mean it doesn't exist. And that's the tendency, the logical fallacy, that empiricists can lean towards: a desire to have answers.
And I think you have to lean into a willingness to accept that science creates more questions. We do get answers along the way, so don't throw your hands up. When you read discussion sections, there's a lot of speculation, and that's fine; it's great, because it leads to your second question: how often does research stem from anecdotal practice versus other research? It's a good mix of both. It depends upon where that study came from and the line of research that precedes it, and ideally it should be informed by both. For a lot of the PhD students I work with here at AUT, the study designs we use look to investigate what's being done in the field — this is specific to the sports science of physique sport and bodybuilding — as well as comb the literature. So, for example, I have a student, shout out to Takahiro Itagaki from Japan, who is looking into proxies for hypertrophy, because it's really hard to study every single muscle and every single combination of exercises to figure out, say, the best exercise for hamstring hypertrophy in the context of a bodybuilder on a full five-day program, yada, yada. We've got some head-to-heads of a seated leg curl versus a lying leg curl, an RDL versus something else, but imagine how many variables we'd have to layer to empirically derive, from first principles, the best exercise selection. Give me a hundred years and a million dollars a year, and we'd be 10% there. Not going to happen. So what do we do? Well, we can comb the literature for principles related to resistance profiles and muscle lengths, from which we can get some generalizable principles. But we should also probably sit down with top bodybuilding coaches who have some epistemic commitment to science — maybe they have a bachelor's degree in exercise science, a proven track record, and experience in the field — and do something like a Delphi survey, which is where you take experts on a given topic and try to find consensus to a certain threshold. That can give you some hypothesis generation. Or you can do a focus group: qualitative interviews about people's subjective thoughts on something. Or you can even get quantitative survey data: hey, what exercises do bodybuilders use? Are there differences between the pro and amateur levels, or between, say, wellness and bikini? Because they have different categorical requirements in the judging criteria, needing to grow different parts of their legs: glutes versus quads and glutes. Is there a natural selection, a kind of raw, if you will, capitalistic nature of performance where the cream rises to the top? Are there some "success leaves clues" types of anecdotes we can get from that? Are all top bikini competitors doing specific glute exercises? Are all top wellness competitors doing specific quadriceps exercises? Let's put that in the mix. So let's form a hypothesis based upon a scoping review of the literature, where we identify the gaps but also what could inform exercise selection, and also ask what the top coaches and top athletes are doing. If the Venn diagram crosses in those two places, that's a strong hypothesis. And if they don't converge at all? Interesting. This is understudied, or maybe practice is based upon traditionalism and people are ignoring the science, or there's a lack of access to the science.
So even that process can give you a hint about where you should go next. And that is literally what we do. Takahiro is doing an e-Delphi study, and he's also done a scoping review, to inform which proxies we should look at to infer hypertrophy long-term. How are coaches telling, before the hypertrophy has actually occurred, whether or not an exercise is effective? Is it based on the pump, on subjective soreness, on what? Do they have access to technologies that have become cheap enough and miniaturized enough that they're confident they're getting something useful — like Moxy data or velocity or what have you — versus what we can do in the lab that might be useful in an applied setting but maybe not for practitioners? And then, what do we look at?

Philip Pape:

Well, thanks for taking us behind the scenes, because that was really enlightening as to how this all closes into a loop. I think a lot of people assume the studies are done in a vacuum or the hypotheses come out of thin air, when there's actually a lot of thought behind this. And you just touched on all the principles we addressed today — not only the epistemology, but combining the qualitative, the human, and the anecdotal elements in a very systematic way, and even using proxies to make it practical and get a faster path to something this year as opposed to a century from now. All good stuff. What I want to close with, because we're low on time: of all the resources available out there, podcasts and publications like MASS, is there one thing the listener can go to next that would help them start developing or training their epistemic thinking in a very accessible way? Anything come to mind?

Dr. Eric Helms:

Yeah, selfishly — and this is where, because I sell information, you can go back to what I was saying before: well, can I trust Eric that he is the best person to teach me how to learn? I think I am. I may not be, but I certainly am motivated to tell you to check out MASS Research Review, because we don't just review the research on a monthly basis. That's the principal thing you'll get in the PDF or the online issue: what's pretty sick from the last month, and how do you apply it? But we also have a guide to interpreting research and a guide to interpreting statistics, which are specifically built to teach you this stuff. And I include a healthy dose of not just "here's what you should do, bottom-line it for me," or the whack-a-mole I have to do; I always include how to think better. Just as an example, a few issues back I wrote a whole piece on whether the eccentric is a waste for hypertrophy — just a fatigue-generating thing that, ideally, if we could remove it from exercise, you'd be better off. I could have made it a four-sentence article: hey, here's the most recent meta-analysis and twenty other studies showing that including the eccentric is either neutral or positive; this idea is wrong; kill this baby in the crib. Instead, I went down a rabbit hole and talked about Karl Popper and empiricism and how we know what we know, and when you should question a model, and what a model is, and what a theory is — and then I talked about the meta-analysis. So we try to upskill our readers in not only the access to data that they have, but also their ability to interpret it over time, so they become less and less reliant on us. Instead of going, "well, I have no idea, but what does MASS think?", they go, "you know, I think MASS would probably think this," and they're able to extend beyond the walls of what we can cover, because we're only covering eight to nine topics in each issue, and we're combing through hundreds of pieces of research each month, which is only about 10% of all the research that comes out in our field. So there's a lot of information out there. That's the first thing I would say. The second thing, more generally, if you want something less biased: the most accessible, least-likely-to-be-wrong pieces of information you can approach are systematic reviews in high-impact journals — what we call Q1 or Q2 journals — or position stands from large organizations that are done in the same manner as a systematic review, where they pool the experts. So you have experts identifying other experts and using a systematic approach to give you a qualitative interpretation, rather than a meta-analysis, which could be done wrong — and whether it's done right or wrong, you'd have no idea how to tell unless you have a degree in stats. These sit at the top of the evidence hierarchy. And use those few precepts I mentioned: go into Google Scholar — has this thing been cited once, or a hundred times? Google the journal — what's its impact factor? In our field, if it's a number over two or three, great; that's kind of the mid-range of an impact factor. Is it PubMed-indexed? That's a good sign.
So if you get a PubMed-indexed, well-cited piece — obviously if it was published last week it won't be cited a lot yet — in a high-impact journal: systematic reviews, scoping reviews, umbrella reviews, those types of things, rather than a narrative review or a position stand that wasn't done systematically. Take, for example, the Journal of the International Society of Sports Nutrition, the American College of Sports Medicine, the National Strength and Conditioning Association. These are all evidence-based organizations that have peer-reviewed journals attached to them, with researchers, academics, and practitioners — typically people who are both — working in concert, trying to give practical guidance to people working directly with humans in sport, fitness, and health who are trying to get the best outcome. Those are the least likely things to be wrong, and even when they are wrong, it's a matter of degree, or of being too conservative. If you were to follow the 2009 ACSM guidelines for resistance training, you would definitely get bigger, faster, stronger. Are they slightly off compared to what we would do in 2025? Yes, but not to the point where they don't work. So for utilitarians, it's a great way to go, and like I said, you don't need a statistics degree. Those are my two parallel recommendations for where people can go to get good info.
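
If you want to script part of that checklist, NCBI's public E-utilities endpoint can confirm whether something is PubMed-indexed and surface systematic reviews on a topic. A minimal Python sketch — the endpoint and the systematic[sb] subset filter are real PubMed features; the search term is just an example:

```python
import json
import urllib.parse
import urllib.request

# Search PubMed's systematic-review subset for a topic of interest.
term = '"protein intake" AND "body composition" AND systematic[sb]'
url = ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
       + urllib.parse.urlencode({"db": "pubmed", "term": term,
                                 "retmax": 5, "retmode": "json"}))

with urllib.request.urlopen(url) as resp:
    result = json.load(resp)["esearchresult"]

print("total hits:", result["count"])
print("sample PMIDs:", result["idlist"])  # look these up on pubmed.ncbi.nlm.nih.gov
```

Citation counts and impact factors still take Google Scholar and the journal's own page, but this at least automates the "is it PubMed-indexed?" step.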

Philip Pape:

Cool. All right. On the second recommendation: after we edit the podcast, I'm gonna extract that as a set of instructions so people can see it in the show notes. Perfect. And then for MASS, go subscribe. I've been a subscriber for a while; I actually became a subscriber when I was briefly a student at UPenn, taking a positive psychology certification, and I was able to get the student rate, so I'm locked in. But you can get the full rate or the student rate at MASS. It's one of the first things I read every time, and you're gonna get a lot of the flavor of what you saw here. This was so much fun, man. I like nerding out no matter what, even if there's only one person left listening, and I know there are a lot of guys and ladies still listening. So we're gonna throw in those links. Is there anything else you want to send people to, or are we good there?

Dr. Eric Helms:

I think we're good there. I just want to say thank you for hosting what could be a very confusing or dry conversation; hopefully we delivered it in a way that is engaging enough for the deep thinkers, and they can go out and spread the good word, maybe in simpler terms than we did.

Philip Pape:

That's the goal. And everybody who listens to this is a deep thinker, Eric, right? Everybody listening is a deep thinker. All right, man. Well, thanks so much again for coming on Wits and Weights for the third time, and have a great day.

Dr. Eric Helms:

My pleasure. Back at you.
