
Season 1 Episode 6: Training Research Methodologists with Dr. Lehana Thabane

Updated: May 30


Lehana Thabane

But we now know that a single study is not good enough. We have to try and replicate the same study in different settings. It is the collective wisdom from all these replicated studies that really helps us to advance science, and systematic reviews are really the best methodology for appraising the collective wisdom from all those studies and seeing where the deficiencies are, if we are to learn from them.


Matt Miller

Hello, everyone, and welcome to another episode of Methodology Matters. I’m here with the one, the only Dr. Bradley Johnston. Brad, how are you doing?


Bradley Johnston

I’m doing great, Matt. Good to see you. Good to be with you.


Matt Miller

Yeah, yeah. Good to be here talking about a really interesting interview with another one and only, Dr. Lehana Thabane, whom I was just fascinated to listen to.


Bradley Johnston

Yeah. I’m excited to share this interview with our listeners. Lehana has great perspective, which I think everyone will really enjoy. In particular, we’re going to talk a lot about methodology, which is obviously a link back to the name of the podcast. Why is methodology or applied research methodology unique to this day in terms of advancing the field of nutrition?


Matt Miller

Yeah, absolutely. It’s almost as if Methodology Matters. So let’s contextualize a couple of things here. We’ve mentioned McMaster University quite a bit, and Dr. Thabane is currently at McMaster University, and has been for a number of years. Why are we talking about McMaster?


Bradley Johnston

Yeah. Great question. Our listeners might recall that we did two episodes with Dr. Gordon Guyatt, who’s also at McMaster. And now again, we’re back to McMaster for another interview, and yet we’re in the US. So what is it about McMaster? Well, I guess I should admit that I did training at McMaster, so I do have a history there. But what’s really special about McMaster is that it’s actually thought of as the home of evidence-based medicine. And what’s interesting about evidence-based medicine, or the principles of evidence-based medicine, is that there’s a real overlap with something called clinical epidemiology. McMaster had the first Department of Clinical Epidemiology in the world. The department is now called Health Research Methodology, the Department of Health Research Methodology. But you can kind of think of being an evidence-based medicine practitioner and being a clinical epidemiologist as synonymous.


Matt Miller

Interesting. Okay.


Bradley Johnston

Yeah. So clinical epidemiology is kind of the middle path between epidemiology, so capital-E, classic epidemiology, and the more basic or mechanistic sciences, which are laboratory-based, often mouse models and so forth, so pre-clinical data. Clin-Epi comes down the middle. And McMaster, being the home of evidence-based medicine and the first place where they trained people to be clinical epidemiologists, is, I believe, still a special place to this day, because there’s a real emphasis on teaching, often to clinicians, a lot about study design and research methodology. And really, I would say, challenging the status quo: in terms of what we’re doing clinically or from a public health perspective, we can always probably do a bit better. McMaster really trains people to understand methods, question the evidence base, and look for areas of improvement.


Matt Miller

Yeah, that’s really interesting. On the one hand, you have classic capital-E epidemiology. On the other hand, you have the basic sciences. And in the middle, you’ve got this evidence-based methodology that a lot of folks like Dr. Guyatt, Dr. Thabane, and yourself are really trying to follow, a kind of middle path: the best approach for looking at the evidence, but also the values and preferences of the public or the patient, and trying to apply the evidence the best that we can.


Bradley Johnston

Yeah, to improve decision making. And it’s really, I think, kind of a humanitarian-type approach. Where this idea of bringing values and preferences into decision-making is actually still new to the field of nutrition.


Matt Miller

Oh, yeah. Sure.


Bradley Johnston

So, does it make sense in nutrition? And if it does, how do we best do that?


Matt Miller

Sure, alright. So McMaster: super important, not just for medicine in general, but specifically for evidence-based medicine. So then why are we talking with Dr. Thabane?


Bradley Johnston

Yeah, sure. So, Dr. Thabane, he really, I would say, embodies the McMaster ethos of being an evidence-based practitioner or a clinical epidemiologist. He’s a biostatistician by training. The other parts of the ethos that I would say he really embodies are that he’s very collaborative in his nature, very respectful. I think we all like to collaborate in general, it’s part of doing better, but he has a special way of doing it that I think is very respectful of collaborators from different backgrounds and different disciplines, so that there’s a kind of collective wisdom, so that we get better research design and better answers to our questions. And he’s obviously got a focus on methodology in particular. He’s a biostatistician, so, as he says, he plays in everyone’s backyard, but he really has a focus on randomized controlled trials and systematic reviews of randomized trials or observational studies. And I guess the other thing that he really embodies is the idea of continual improvement: there’s always room to challenge and improve the current evidence base so that we can do better. And he’s been at McMaster for, I think, well over ten years. It could even be over 15 years. He’s been the Chair of the Department of Health Research Methodology, and he really knows his history of McMaster, so he’s a great guy to get some perspective from.


Matt Miller

Yeah, I think that’s great. Also, and this is not scientifically relevant, but he is, like, one of the nicest people I’ve ever met.


Bradley Johnston

Yeah. That doesn’t hurt either.


Matt Miller

No, it doesn’t.


Bradley Johnston

He totally has a way of letting people be themselves.


Matt Miller

Yeah. Which is great. It’s clear that his ego does not get in the way of his improvement or the improvement of his work. And that is a rarity, I think, in any field. I want to mention a couple of quick things before we get into the actual interview. We hear a little bit about the background of McMaster. We talk about John Evans and Dave Sackett and how it was founded. We also talk about the principles on which McMaster was founded, and on which they continue to operate. A lot of these principles that you mentioned, Dr. Thabane embodies: collaboration, challenging the status quo, examining the evidence, letting the current body of research guide the research question, all really great stuff. And I love that you talk about this idea of continual improvement. You really get a sense that Dr. Thabane and his colleagues at McMaster are living in a world where all they’re trying to do is continually improve the current body of evidence and make decisions based on that evidence.


Bradley Johnston

And he also talks, towards the end, about the distinction between a biostatistician and a methodologist, an applied research methodologist, which is really, I think, something born out of McMaster: training people to actually understand the methodology of all the different study designs and to have a discerning mind in terms of what’s a good cohort study versus a bad one, and what’s in between, the areas of gray, for example.


Matt Miller

Great. That’s great. Well, it was fascinating to sit and listen to the two of you talk. So without further ado, let’s let the audience get to part one of our interview with Dr. Lehana Thabane.


Bradley Johnston

Dr. Lehana Thabane, welcome.


Lehana Thabane

Thank you. Thank you, Brad.


Bradley Johnston

Wonderful to have you with us today. Just to re-emphasize Dr. Thabane’s background: he’s a professor of Biostatistics at McMaster University. He’s the former Associate Chair of the Department of Health Research Methodology, Evidence and Impact at McMaster, formerly called Clinical Epidemiology and Biostatistics. Dr. Thabane has collaborated on over a hundred randomized trials in his career thus far, including trials published in top journals such as the New England Journal of Medicine, JAMA, the Lancet, et cetera. And really, Lehana, if I can call you Lehana, I think we wanted to speak to you today, Matt and I, because in my mind you really do exemplify the leadership skills of the founders of McMaster University’s Department of Clinical Epidemiology and Biostatistics, now Health Research Methodology, Evidence and Impact, a long acronym or a long term. You really espouse those leadership skills. You’ve mentored over 200 graduate students and early career scientists from around the world. I’ve worked with you now for maybe about five years or so, and I only have very positive impressions and experiences. And I know you know a lot about the history of McMaster University in terms of some of the founders, Dr. John Evans and Dave Sackett, the founder of the Clin-Epi and Biostatistics program. And so we wanted to talk to you about why it’s unique, and whether it’s still unique. So the first question, if I may, is: why is McMaster’s Health Research Methodology graduate program unique to this day, worldwide?


Lehana Thabane

Thank you, Brad, for your kind introduction. One of the things that really continues to make McMaster attractive, particularly the HRM program, if you allow me to use the acronym HRM for the Health Research Methodology Program, is the uniqueness of the program itself. Everything we do centers around applied research methods. And it’s all about really letting the research question guide the process. It guides the methods you choose to answer the question. It guides the collaborators you want to work with, and really how the whole journey of trying to seek answers to the research question will take place. Another thing, in addition to letting the research question guide the process, including the choice of your collaborators, is the experiential nature of learning within the program. Many of our students are embedded in different research groups, which tend to focus on different content areas, and this experiential learning provides another opportunity for our students to really understand how research applies in health. The other thing is that, as we see the importance of working together with others to solve problems, collaboration has become a central piece of how we train students. They can see how the professors within the program actually collaborate with other people and with each other, and they embrace all those principles. That really makes us still a very unique program in terms of how we instill these fundamental principles of working together.


Bradley Johnston

Yeah. It’s really interesting. I think everyone in the research world would agree that collaboration is really important, and if you don’t have it, you’re basically hooped. But in my experience at McMaster University, there’s something unique about the collaborative spirit there. Can you comment on that? Why is it different at McMaster than maybe it is at the University of Toronto, or at a university in the U.S.?


Lehana Thabane

True, yeah. I think the best way to really appreciate it is to tell you my story as a biostatistician. I’m a biostatistician by training. And quite often, when I’m brought into a collaboration, people wouldn’t come to me saying, I just need the sample size, or, I just need you to tell me what statistical methods to use. Rather, they would bring me into the effort earlier on, as they’re thinking about how to solve the technical problems they encounter in practice. So all our collaborators are brought in at the time when we say, here is a clinical issue, and we all huddle together to figure out how to translate a clinical issue into a researchable question. As a biostatistician, I’m not just invested in trying to determine the best methods, but in understanding the clinical issues and how those clinical issues can be turned into a researchable question, for which we can then figure out the best methods to use. So there’s mutual respect for all collaborators, regardless of their background discipline, or regardless of their background culture, for that matter. That’s one of our unique qualities: we don’t brand people as consultants, but rather as true collaborators, with mutual respect across all disciplines.


Bradley Johnston

That’s great. So I wanted to continue on the theme of what is unique to this day about McMaster’s Health Research Methodology graduate program. I’ve heard you say to me, and truthfully in the pre-interview for this podcast, that the McMaster ethos is really about continual improvement around the current body of evidence: trying to find, create, and develop more evidence so that we can make more confident decisions. Can you talk a little bit more about that ethos?


Lehana Thabane

Yeah. Thank you. If we think about the history of what Dave Sackett and his colleagues were trying to do when they first started the Department, this was in the Sixties, when the medical school was just established. What Dave and his colleagues realized was that, in most cases, doctors were making decisions to provide care, but a lot of that was not really based on sound evidence or sound evidence principles. Rather, decisions were based on whatever had always been done before, with doctors’ mentors telling them what to do because they had also been told what to do. So they instilled this culture where they encouraged doctors to rely more on evidence to make clinical decisions to support care. And that culture has always been about challenging the status quo in terms of how people approach the generation of evidence, the synthesis of that evidence, and the application of that synthesized evidence in guidelines that guide practice. This continues to be the culture in everything that is done within the program. It’s all about looking at whether there are things we can do better in how we generate evidence. If there are things we can improve on, that’s what everyone will always go for, including once we have generated the evidence: is the way we synthesize it optimal enough? Is the way we translate it into best-practice guidelines good enough? And then, is the way we take those guidelines and help doctors make better decisions for care good enough? So everything is about how to continue to make improvements across the spectrum, from the generation of evidence to the application of that evidence at the bedside.


Bradley Johnston

It’s really interesting. It continues to be interesting. Maybe I could paint a bit of an example for our listeners that’s nutrition-oriented. So, Lehana, you and I recently published a paper in the British Medical Journal on low-carbohydrate diets for diabetes reversal, or I should say, remission. Low-carb diets are obviously controversial. I think a lot of people who have been talking about this evidence going back 5, 10, 15 years have been looked at by maybe the conventional nutrition community as a bit wacky. But as the evidence mounts, there are now 23 randomized controlled trials, I think we’re starting to learn that there’s some potential promise here. We did a systematic review, and we summarized the best estimates of effect and the certainty of the evidence for each of the estimates. In doing so, it was a wonderful opportunity to look at the state of the evidence and understand how well clinical trials are doing. And we identified a lot of things that need improvement. So I’m just going to share a few examples, and you can reflect on them if you wish.


Lehana Thabane

Sure.


Bradley Johnston

So we found, for example, that people are not very clear on their definitions of what low carbohydrate means. To some people it’s less than 26%; to other people, it’s less than 40% carbohydrates. And if you are, for example, considering ketogenic diets, it’s less than 10%. So there’s a lot of variability in how trials implement so-called low-carbohydrate diets. There’s not a lot of clarity in terms of the proportion of protein and/or fat that is matched or implemented alongside the low-carbohydrate diet. So very poor reporting, very unclear. Of all of the 23 randomized trials that we looked at, there were very, very scant data on the quality of the calories that people consumed. So if one was to recommend that a patient follow a low-carbohydrate diet, we can’t really tell them specifically, based on the evidence anyway, what types of foods they should be eating, because it’s not really reported in the trials. So that’s a big problem. Future trials need to do a much better job of providing information to readers and to clinical decision-makers about what the diets actually entailed in terms of the typical foods. And two other points that I’ll quickly hit on: all of the trials to date are short-term; there’s nothing really longer than two years, so future trials need to follow these patients for longer. And there are, again, very scant data on quality of life. If you make a fairly radical change to your diet, but we don’t have any evidence on how it impacts patients’ quality of life or their dietary satisfaction, that’s not very useful. So there’s lots of room for improvement. I just wanted to share this as an example, because there’s so much utility in doing systematic reviews to really understand the nature of the data that currently exists, and, with the McMaster ethos, in finding opportunities for new studies, new primary studies, for example.


Lehana Thabane

It is true. Yeah. See, one of the fundamental things that McMaster has really tried to instill with the HRM program is the importance of reproducibility of findings. So it is not enough that we rely on evidence from randomized controlled trials as the best evidence; we also have to look at the quality of the methods of those randomized controlled trials. A lot of the things you’ve mentioned relate not only to synthesizing the evidence from different trials, but to looking at the consistency and the quality of the evidence from those randomized controlled trials. So we would never, at McMaster, do any new study unless we had looked at what the evidence is today and how good that evidence is, where the issues with it are in terms of deficiencies in methods and so on. Part of it is so that when we go on to do the next trial, we do it in ways that try to close the gap on all those deficiencies. And the idea is that for others who then try to replicate it, things will be a lot clearer than perhaps for those who tried to do the same thing before us. So training in systematic reviews, in the methodology of systematic reviews, and in how to appraise the methods of trials as part of the entire systematic review is an essential component of what the program tries to instill.


Bradley Johnston

So that being said, at the same time, systematic reviews have kind of a bad rap. People like to say that they’re easy to do, and if some are already done, why do another one?


Lehana Thabane

Well, it’s always easy to criticize something if you don’t really have the background information as to why it’s done, and what the nuances about it are that perhaps may not be well understood by others. Systematic reviews are not just about abstracting what others have done, putting it together, and pooling. It’s really about going a lot deeper: understanding whether things have been done across all trials in ways that are reproducible. Are the methods defensible? Is there something about the methods, something about the time of follow-up for outcomes, that can be standardized across trials in ways that can actually produce evidence that is useful in practice? Bear in mind that the key is really trying to see whether the evidence would be useful in practice. So systematic reviews provide us a great opportunity not just to look at the evidence, but to appraise the evidence, and appraisal of the evidence is really about methods. One of the things that Dave and his colleagues really noticed was that for us to do a better job in science, and science is all about replication, we have to understand the methods by which people generated findings, because it’s only when we are clear on how the methods were applied that we can actually replicate what others have done. Science is predicated on nothing else but the replication of findings. You see, the way we do it is we take one study, we do it in one patient, and then we replicate the same thing in several patients, and then we observe the outcomes. But we now know that a single study is not good enough; we have to try and replicate the same study in different settings. It is the collective wisdom from all these replicated studies that really helps us to advance science. And systematic reviews are really the best methodology for appraising the collective wisdom from all those studies and seeing where the deficiencies are, if we are to learn from them.


Bradley Johnston

Not only replicating, but improving upon, I think it’s fair to say.


Lehana Thabane

Absolutely.


Bradley Johnston

So one example being that future trials of low-carbohydrate diets should systematically assess health-related quality of life. Why those data are not in the trials is a bit of a mystery, but that would help with decision-making, for sure, and public health decision-making as well. Okay. So moving on, maybe a little bit of history on the McMaster program. How did it all start at McMaster? Can you tell us a little bit about your background knowledge in terms of Dr. John Evans and Dr. Dave Sackett?


Lehana Thabane

So, the medical school started at McMaster probably around 1966 or ’67, somewhere around there. And when the medical school started, Dave was recruited, he was in the U.S. at the time, to lead the research arm of the medical school. So when Dave was brought in, he became the first Chair of the Clinical Epidemiology Department, which was part of the medical school. And the goal was to lead the research aspect of the medical school, with the aim of helping doctors use evidence to make decisions. So it was a very small department. Dave wanted to recruit a statistician, so he advertised a job for a statistician, and two people interviewed for the job: Charlie Goldsmith and Mike Gent. They interviewed both of them, apparently on the same day, and the two of them actually attended each other’s talks. And they went at it during the talks. And then, after the interviews, Dave decided, I’m not hiring one; I’m actually hiring both of them.


Lehana Thabane

So, they ended up hiring two statisticians. That was not part of the plan, but they were so good that Dave said, if we are to do anything in advancing how we look at evidence in health, we need statisticians. He understood the importance of statistics in generating evidence using trials and thought the best effort would really be if they had statisticians as part of that team.


Bradley Johnston

Okay. So on that topic, there is a real distinction between a biostatistician as a collaborative team member and a methodologist. In many papers that I’ve worked on with people from McMaster, we often refer to ourselves as methodologists. I’m assuming that kind of came out of McMaster. Can you talk a little bit about the difference between the two?


Lehana Thabane

Yes. So when the department began, they decided to create a training program, and the training program was called Design, Measurement and Evaluation. The goal of the program was simple: it was not to turn people into statisticians, but rather to train clinicians to understand the methods used, or necessary, to generate good evidence. And those who were non-clinicians were taught methods to be able to work well with clinicians to solve clinical problems. So their goal was really not to graduate people with expertise in statistics or biostatistics or in any clinical discipline; it was all about research methods. The whole program evolved into what it is today because what was coming out of that DME program were people who really understood the methods of how to translate a clinical problem into a researchable question, and the methods of how to design studies that will generate the best evidence, with the least bias and the highest precision of the estimates of whatever we’re looking for out of those studies. So it was the first Clinical Epidemiology program, and it was intended to train doctors about evidence: how to appraise the evidence, how to generate the evidence, and how to apply it at the bedside to make clinical decisions. But many of the people who came through the program ended up being a lot better than a general or typical clinician in terms of understanding the methods.


Bradley Johnston

Yeah, it’s really interesting. And, you know, there are really two wonderful examples, at least in my mind, of how we can measure or see what McMaster folks have been up to. One is the Users’ Guides to the Medical Literature, the series that Dr. Gordon Guyatt and Dave Sackett and many others, Dr. Brian Haynes, Sharon Straus, et cetera, contributed to, which really helped people learn how to better read, appraise, and apply the evidence. And that eventually, and we talked to Dr. Guyatt about this, evolved into the GRADE working group. And now the GRADE working group is really more methods-centered. They’ve now published at least 30 official GRADE papers giving the world guidance on how to conduct practice guidelines, whether they be public health or clinical, and on how to conduct and synthesize and look at the certainty of evidence based on systematic reviews and meta-analyses. And I think ultimately, GRADE has probably published over 100 papers over the last, let’s say, 20 years as part of the GRADE working groups. So, it’s not only happening within the program; it’s happening through publications and through guidance for the world.


Lehana Thabane

Yeah. It’s interesting because the JAMA series was really about trying to bring evidence-based medicine principles to life. And if you look at a lot of the things that came out of McMaster, or at least out of the CE&B Department, which is what it was called at the time, Clin-Epi and Biostatistics, you have examples such as the large, simple trials that Salim Yusuf brought to the world: designing very large trials that use simple processes and procedures to run and collect data, with broad populations answering several questions. And then we had the example of the minimum clinically important difference, something that came out of the methods of trying to use patient-reported outcomes and understanding how to study their properties of responsiveness and interpretability. The minimum clinically important difference came about as a method of trying to figure out how to interpret evidence from such tools. And then you have the Cochrane Risk of Bias tool. Many of these things about assessing bias in trials started at McMaster. I don’t know if people remember the Jadad Scale. It was one of the first tools to come out of McMaster for assessing bias when people do systematic reviews of trials. Things have evolved, and now we have the new Cochrane Risk of Bias tool, but it started with the Jadad Scale.


Bradley Johnston

Yeah, so just a little bit of background on that. Alex Jadad, who’s currently at the University of Toronto, had done his master’s program at Oxford and developed that instrument, the Jadad Scale, to look at the quality of randomized controlled trials, I think. Am I right in saying his first faculty position after he graduated from Oxford was at McMaster? Is that the link?


Lehana Thabane

That’s correct, yeah.


Bradley Johnston

Okay, and then, of course, since the Jadad Scale, the instrumentation for looking at the quality of randomized trials has evolved a lot, but that was the bee’s knees, if you will, for probably ten years in the field.


Matt Miller

Thanks for listening. If you’d like to hear more episodes of Methodology Matters, please head over to methodologymatters.podbean.com, or you can find us on Spotify, Apple Podcasts, and Google Podcasts.


Bradley Johnston

And if you’d like to learn more about Dr. Thabane and his work, you can find links to his faculty profile and a number of his published articles, including those on pilot and feasibility studies, in the show notes for this episode.


Matt Miller

Thanks for tuning in. We’ll see you on the next episode of Methodology Matters, a podcast on evidence-based nutrition.
