Today we have Dr. Xia from the University of Nebraska–Lincoln, and he'll be sharing some of his experiences with quantitative research in educational leadership. He's been teaching quantitative research for many, many years. In our conversation, he shared many experiences that will benefit our students and new researchers in our field, so today I invited him to share his experience with us. First, Jiangang, do you mind briefly introducing yourself? And can you share your journey into quantitative research in the educational leadership field? Certainly. First of all, thank you for the invitation and for the opportunity to share my expertise and my experience. My name is Jiangang Xia. I'm an associate professor at the University of Nebraska–Lincoln. My research areas are mainly about school
leadership, leadership development, leadership process, leadership distribution, and their effects. My research approach is mainly
quantitative, using large-scale national or international data. As for why I chose to conduct quantitative research, I think one major reason is I don't feel confident in qualitative research. The other reason is I received a lot of training in quantitative methods. Reflecting on my own doctoral coursework, I believe I had over 50, maybe 51, credit hours in quantitative methods. After I started my current position, I also went to seminars, workshops, and trainings almost every year. Recently I summarized all of the workshops I took over the years. I think I took maybe 16 or 18 quantitative workshops. That's about two workshops every year. Because of the pandemic, I paused going to in-person workshops, but I still participate in some online trainings as well. So on one side, I'm not confident with qualitative research. On the other side, I received a lot
of training in quantitative methods. A lot of researchers in our educational leadership field, particularly nowadays, are qualitative researchers. From your perspective as a quantitative researcher, what are your suggestions for removing some of the barriers for them to take quantitative research courses? You said that you took 50 or 51 credit hours in quantitative research methods as a doctoral student. What are your suggestions for students before they even start quantitative courses? In a lot of leadership programs, students are required to take maybe only two or three quantitative methods courses. If my advisee wants to be a future researcher, a professor, I usually ask them to go to other departments to take additional quantitative methods courses, because it's very likely my own program only requires two quantitative courses, which is not enough. A few of my advisees went to the departments of psychology, sociology, or educational psychology to take additional quantitative courses. And I also encourage them to participate in workshops and seminars offered by researchers from other universities. Thank you. Jiangang, you've been an associate
editor of Educational Administration Quarterly, which is a top tier journal
in the field of educational leadership. Can you share some of your experience working as an associate editor at EAQ? I know that EAQ is planning a special issue on quantitative research. Could you give us more details about that special issue? Certainly, yeah, that has been a very rewarding experience. I started my role as an associate editor for EAQ in January 2022, so it has been more than two years. My job was mainly to find quantitative reviewers for quantitative manuscripts. Very often I noticed one issue with those quantitative manuscripts: typically they rely heavily on reporting and interpreting p values and making dichotomous decisions. That's the issue I noticed. It's time consuming. Every time, as an associate editor, I have to make additional comments asking the authors: why not follow the most recent recommendations and report effect sizes and confidence intervals? I try to help them further improve their manuscripts, so that is kind of time consuming. So I talked to the editor, and we had a great discussion during last year's UCEA convention. We both agreed it might be a good idea to have a special issue to call attention to, and we even call it a paradigm shift toward, a new way of thinking, which is called estimation thinking. So the special issue will have a theme focused on estimation thinking. Jiangang, do you mind sharing or giving us a more detailed explanation of what estimation thinking is? Certainly. Here I have a few slides, and on this slide we can see estimation thinking in quantitative research. It represents a shift away from dichotomous hypothesis testing approaches toward a more nuanced understanding of the data through estimation and uncertainty quantification. The traditional way of doing quantitative
research relies mainly on reporting and interpreting null hypothesis significance testing, or NHST. The new thinking, which is called estimation thinking, prioritizes the magnitude of the effect. Usually we ask a question like: to what extent, or how large, is the effect? So we want people to report effect sizes. At the same time, we want to emphasize the precision and uncertainty of the estimated effect. To that end, we want people to report confidence intervals. Further, we also want people to emphasize the practical significance of the findings, not just statistical significance. So that is the thinking, or the concept, of estimation. Could you give us an example of the
magnitude, precision, uncertainty, and how they can be interpreted from the
perspective of practical significance? Certainly. Consider the traditional way of reporting quantitative results. Most researchers already emphasize the first part of the recommendation, which is the magnitude of the effect. Many quantitative studies already report effect sizes. For example, when they compare two groups, reporting a t test or ANOVA, or even just a general regression, they have no issue reporting an effect size. They even label that effect size as small, medium, or large. However, the majority of quantitative studies forget about the second half of the recommendation, which is about the precision or uncertainty of the estimated effects. Only very rarely do we see studies report it. The precision is very, very important because the estimated effects are based on a sample, and we want to use the sample data to draw a conclusion, or to generalize to the population. Using the sample to estimate the population naturally involves uncertainty. We don't know the population at all; we only know the sample. To communicate how accurate our estimates are, we want to tell people that the estimated effect actually has a range: a minimum and a maximum. That range is exactly what we call the confidence interval. But that is something missing from the majority of the published journal articles. Hope that makes sense. Yes, yes. Is this missing part in a lot of
quantitative published research in our field one of the reasons that quantitative research findings are not practical in school systems? Is it one of the reasons? Yes, that's correct. It's one of the reasons. Without acknowledging the uncertainty, many findings don't really carry practical significance. That is a big problem. Let me use a grant application as an example. Very often we see researchers reporting only the point estimates. For example, a beta coefficient equal to some value. It's a point estimate, and maybe it's labeled as a large effect. However, we don't know the precision or uncertainty of that point estimate. The true effect could be very small or very large. When the range is big, the level of precision is low. That means the grant funder may not be able to trust the program and fund it. So precision is a very, very important thing for funders when they consider how likely that effect is to actually happen in reality. When the precision level is high, the range is narrow, the confidence interval is narrow, and the funders may feel confident: okay, the estimated effect is very likely to fall in that range. That's why it's very important. So a funding agency can say that if we promote or scale up this practice, and if we have evidence of precision and low uncertainty, then we know that our effort and our investment in promoting this practice will make a difference in the outcome variable. Exactly. Okay. Does that mean that for this special issue in EAQ, as an associate editor, you are looking for researchers specifically reporting results on the magnitude of the effect, and on its precision and uncertainty? Yes. If potential contributors want to submit their manuscripts to this special issue, they can certainly think about applications of quantitative methods in their own field. We are specifically looking for applications of methodology that align with the two criteria: we want them to report the magnitude and, at the same time, the precision of the estimation. Well, that sounds great. Could you tell us about the
timeline of this special issue? When will readers be able to read the articles published in that special issue? Certainly. The April issue of EAQ will publish the call for papers. In that call for papers, we anticipate the submission window will be open between April and September. However, the review process will start early. We don't want to wait until the submission window closes, so the review will start in July. As soon as we see submissions coming in, we can start assigning reviewers in July. We anticipate finishing the review and revision maybe by December. If we receive enough good-quality submissions early, it's still possible to publish the special issue in the last issue of EAQ in 2024. Otherwise, it may be early 2025. All right, that sounds great. Maybe after all the articles are published, we can get the authors to share their experiences as they went through their research processes, and their quantitative research experience as they learned to move away from null hypothesis significance testing to the new way of estimation thinking. My next question is: what are the
common mistakes researchers make in conducting quantitative research? Could you share some examples with our audience? Certainly. First of all, I want to borrow a slide from a famous researcher, Rex Kline. In his presentation, he shared this slide: the textbooks are wrong, teaching is wrong, seminars are wrong, journals are also wrong. This slide carries a really powerful message. We don't have to fully trust what we have: our teaching, our professors, textbooks, even journals. They all could be wrong. So back to your question: what are the common mistakes? One typical mistake is the misinterpretation of significance. It's very, very common, and we talked about it earlier, but let's take a look at this one. The traditional way of dichotomous thinking leads to a simplistic categorization of results as either black or white, significant or non-significant, by looking at the p value and comparing it to the 0.05 cutoff threshold. This kind of practice overlooks the fact that the p value is actually a continuum, and the difference between a p value of 0.049 and one of 0.051 is very small. It doesn't mean much of a difference, but people tend to treat them as different. That's one major issue. The second major issue is the oversimplification of complex phenomena. As we all know, educational leadership involves very complex, multifaceted phenomena. However, by relying on null hypothesis significance testing and p values, decision making can be reduced to binary outcomes. So that is the other problem with dichotomous thinking. The third issue is the inflation of type I and type II errors.
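To make this inflation concrete, here is a minimal simulation sketch in Python. The scenario and all numbers are hypothetical, not from any study discussed in this conversation: under a true null effect, a researcher who peeks at the accumulating data after every batch and stops as soon as p < 0.05 ends up with a false positive rate far above the nominal 5%.

```python
import math
import random

random.seed(0)

def two_sample_p(x, y):
    """Approximate two-sided p value for a difference in means (z test)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    vx = sum((v - mx) ** 2 for v in x) / (len(x) - 1)
    vy = sum((v - my) ** 2 for v in y) / (len(y) - 1)
    z = (mx - my) / math.sqrt(vx / len(x) + vy / len(y))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def peeks_until_significant(max_looks=10, batch=20):
    """Simulate one study under a TRUE null: both groups come from the
    same distribution, so any 'significant' result is a false positive.
    The researcher peeks after every batch and stops once p < 0.05."""
    x, y = [], []
    for _ in range(max_looks):
        x += [random.gauss(0, 1) for _ in range(batch)]
        y += [random.gauss(0, 1) for _ in range(batch)]
        if two_sample_p(x, y) < 0.05:
            return True  # declared 'significant' despite no real effect
    return False

trials = 1000
false_positive_rate = sum(peeks_until_significant() for _ in range(trials)) / trials
# A single fixed-n test would reject about 5% of the time;
# repeated peeking pushes the rate far higher.
print(f"false positive rate with peeking: {false_positive_rate:.1%}")
```

The exact rate depends on the assumed batch size and number of peeks, but the qualitative point holds: repeated testing without correction inflates the type I error rate well beyond 5%.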
This inflation is very problematic. Many journals tend to accept manuscripts that report significant p values, so some researchers may manipulate their own data to meet the criteria. They could delete some data. It's called cherry picking, right? They cherry-pick some favorable data and use those data to reach the significance level. This is also called p hacking. The next problem is the reduction in replicability and reliability. It comes from the same practice: when we focus on p values smaller than 0.05 as a marker, that causes a lot of problems, and it fed the replication crisis in social science. I will not go into too much detail, but that's a very significant issue here. The last one, which is also the one I want to highlight, is the neglect of effect sizes and confidence intervals. As we discussed, dichotomous thinking tends to overlook two really important aspects of the effects: the magnitude, or effect size, and the precision, or uncertainty, of the estimated effect. Those are really important things to report, but current practice tends to overlook them, or at least not report the confidence interval for many, many reported effects. For your last point, I think this is
very critical in our field of educational leadership, because it's very contextual. Whether a practice, a leadership behavior, or a program is effective or not is influenced by a lot of contextual factors. For example, when you look at a school principal's performance, the principal's relationship with their supervisor could influence how the supervisor sees the principal's performance. Those contextual factors actually highlight the importance of reporting the effect size and the confidence interval. That's what just came to my mind. Yes, exactly. A lot of contexts do make a difference, and we know it's important to take those contextual factors into account. Thank you for mentioning that. That is basically a summary. We realize those are the major issues with the traditional dichotomous thinking, and we want to advocate for the, well, actually it's not new thinking, but we want to advocate for estimation thinking. That is really the right way to conduct quantitative research. If this missing reporting is so prevalent in our field's quantitative research, do you think we should update the curriculum in our field when it comes to quantitative research? Or should we have more seminars and workshops to bring to people's attention that this is something we need to pay attention to when we report our scientific findings? Certainly. I think we should have more
workshops or trainings. We certainly need to update our curriculum, particularly in our quantitative methods courses. I personally have already removed all required textbooks. In my quantitative courses, I no longer adopt any current textbook, because I couldn't find a good one that is error free or mistake free. They still ask students to report and interpret p values. Some books also mention effect sizes and confidence intervals, but I just couldn't find a really, really good one. I tend to use my own materials now. I totally agree with you. We need more seminars and trainings to let people know why it is wrong to use dichotomous thinking and why it is important to adopt estimation thinking.
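As a concrete sketch of this estimation-style reporting, here is a short Python example. The data are simulated and the group labels are hypothetical (they are not from any study mentioned in this conversation); the point is the shape of the report, which gives the magnitude (a standardized effect size) and the precision (a confidence interval) rather than a binary significant/non-significant verdict.

```python
import math
import random
import statistics

random.seed(42)

# Hypothetical simulated outcome scores for two groups of schools,
# e.g. with and without a leadership program (illustrative only).
program = [random.gauss(75, 10) for _ in range(100)]
comparison = [random.gauss(70, 10) for _ in range(100)]

def cohens_d(x, y):
    """Magnitude: standardized mean difference (Cohen's d)."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * statistics.variance(x)
                  + (ny - 1) * statistics.variance(y)) / (nx + ny - 2)
    return (statistics.mean(x) - statistics.mean(y)) / math.sqrt(pooled_var)

def mean_diff_with_ci(x, y, z=1.96):
    """Precision: raw mean difference with an approximate 95% CI."""
    diff = statistics.mean(x) - statistics.mean(y)
    se = math.sqrt(statistics.variance(x) / len(x)
                   + statistics.variance(y) / len(y))
    return diff, diff - z * se, diff + z * se

d = cohens_d(program, comparison)
diff, lo, hi = mean_diff_with_ci(program, comparison)

# An estimation-style report: magnitude plus precision, no binary verdict.
print(f"d = {d:.2f}; mean difference = {diff:.1f}, 95% CI [{lo:.1f}, {hi:.1f}]")
```

A reader of such a report can judge both how large the effect is and how precisely it has been estimated, which is exactly the information a dichotomous p value decision throws away.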
Maybe the quantitative researchers in our field can get together and figure out a plan: how can we improve quantitative research education in our leadership research? That can serve as a springboard for our next steps in advancing quantitative research in our field. You can write a textbook. Yes, I still remember we had a symposium during one year's UCEA, right? I remember. Joonkil organized that session. Yes, we should continue it at this year's UCEA. We can organize a similar symposium to gather professors and researchers in the field who are interested in or teaching quantitative methods. We can come together to discuss and share some ideas, you know? Yeah, I think for quantitative research
or practitioners, it's less about numbers and more about your logic, your thinking. A lot of students, at least my students, when they decide to use a qualitative method for their dissertation, they think they're not good at numbers, so they chose a qualitative research method. But quantitative methods are not about numbers. They're more about your logic and your thinking. Yes. Maybe we should also think about how we can engage qualitative researchers so they can also share some of their insights, and how we can attract more researchers who are interested in quantitative research or mixed methods. Exactly. We should do that. Next question: in your opinion,
what are the biggest challenges and opportunities facing quantitative
research in educational leadership today? We've already touched on some of the points. Could you share some of the things we haven't covered? Yes. Here is a slide where I mention the two things. For the challenge, I think it's really mainly the mindsets. It's a little bit challenging to change established researchers' mindsets. The reason is they were taught by traditional professors and researchers using the traditional dichotomous thinking to conduct quantitative research. When I communicated with potential contributors for the special issue, many of them were not aware of estimation thinking. They still use the traditional way of reporting and interpreting p values, and they don't even think that is wrong. So that is the biggest challenge of
moving toward estimation thinking. When I was trained as a doctoral student, it was: report your p value; that's the most important thing in your results. Yes, my professors mentioned effect sizes, but they were never emphasized in our assignments or in our reporting. Back then, when I was a doctoral student, I didn't know that was not the correct way to do it. It seems like it's actually a very prevalent issue in our doctoral and graduate education. Exactly. The same thing happened to me when I was a doctoral student. I don't remember any professors mentioning confidence intervals. Maybe I am wrong, but the professors told us to report a p value and interpret it as significant or non-significant. That lasted until 2019, the year I rediscovered estimation thinking through reading. It was so shocking at that time. I'll always remember the year 2019, when I for the first time read about the problem of null hypothesis significance testing, why it is wrong, and what is the right way to do quantitative research. It was so shocking. Once I realized it, I just decided to change
my own practice, including my teaching, my research, and also my editorial service. All have changed since 2019. Thanks for bringing it to our attention. I'll absolutely pay more attention when I read quantitative research articles in the future. You also mentioned opportunities. Could you give us more details about teaching the new generation of educational researchers estimation thinking? Certainly. I want to introduce a little bit more of my own practice. I teach two quantitative courses in our program. We teach our doctoral students foundational statistics and also advanced quantitative methods. In my two courses, I adopted
the estimation thinking. My students learned why it is wrong to interpret p values as binary, and they understand why it is important to report effect sizes and also confidence intervals. Students in our program will become the new generation of educational researchers. That is what I am doing, and I strongly recommend and encourage quantitative instructors at other institutions to teach their own students the new way of estimation thinking, so all of our students will become a new generation of educational researchers. Thank you for doing that. As an associate editor of EAQ, what qualities are you looking for in a quantitative research paper for it to be considered for publication? In addition to the estimation thinking, what other things are you looking for in this special issue? Certainly. To me, the current problem is
really either data-driven or methodology-driven studies. That's the problem. What I am looking for are studies that are problem driven: they have identified a really important problem, and they want to solve it, or at least they try to solve it. That is the starting point of a good study. Based on the problem, the researcher also develops a really good conceptual framework that supports the study and helps to frame it. That is really what I'm looking for as the criteria of a so-called good study. I strongly recommend that researchers not use the results to drive the study.
So that is my recommendation. This reminds me of an article published in Educational Researcher about questionable research practices in education research. One of the issues they point out is that researchers collect more data for a study after first inspecting whether the results are statistically significant. This is more like manipulating your results: you suspect your result could become statistically significant, so without reporting the magnitude of the effect and its precision, you begin to collect more data. Basically, you are selectively reporting your data. I have a question. You talked about problem-driven inquiry. Can you give us an example, like one of the articles that's a problem-driven inquiry? I can't remember a specific study exactly. But say researchers want to study, for example, school leadership or principal leadership effects on something. Uh huh. They just go ahead and study it. They review the literature, they frame the study, they collect the data, and they report the results as they are. When I say as they are, I mean no matter whether the results are significant or not, they should serve the question, because the question is to what extent principal leadership has an effect, for example, on teacher self-efficacy. Let's use that example, right? The real question is really to understand or to know the magnitude of the effect and the precision of the effect. It doesn't really matter whether it is significant or not. Based on the data we collected, you just report the results as they are. No matter whether the effect is large, small, or medium, and no matter how precise the estimated effects are, we should be very honest with the results. We report them, and readers will actually respect that, because that's the reality. Even if the results do not meet the 0.05 significance threshold, we should still report them as they are. That's my recommendation. Could you elaborate on the second point? You don't want to see the
methodology-driven study. What does a methodology-driven study look like? And what should we do to avoid conducting that kind of study? The methodology certainly is really important, but very often we see researchers even mention the method in their article title, which to me is kind of problematic. I did that. I made the same mistake early on as a new researcher. A good study really emphasizes the topic, not the methodology. After all, the methodology serves the topic. And even for the same topic, different researchers could apply different methodologies, right? No matter what kind of methodology we
adopt, the purpose is really to study the topic and report the findings. So the methodology should
not be the real focus. The topic should be the real focus. Relatedly, when I say not methodology driven: sometimes, when people look at the results, they tend to decide to keep or exclude some variables based on those results. It should not be that way. If, from the very beginning, when you designed the study, you already had a variable in your research design, you should keep it all the way through your study, not say, oh, it's not significant, so I decided to remove it from my study. That is certainly results driven, and that's problematic. That's another questionable practice that the Educational Researcher article also mentioned: you selectively choose your variables or covariates because you want significant results. In the broad field of science, many
journals are encouraging authors and researchers to pre-register their studies. If those are the research questions you are asking and those are the variables you want to test, you pre-register them. It can hold you accountable, because if you delete a variable, you will have to explain why you deleted it. Is it because you are driven to get the result you want? Or is it based on the requirements of the specific test you are running? I think maybe in our field we should encourage that pre-registered study approach, too. That is totally true. We have our studies open to the field, so people know what you are working on, and later on we expect to see exactly what you mentioned early on, right? What you said you were going to present. Yes, whether you are committed to your original intent, or you changed your data and variables along the way because you wanted to see the result you expected. That is not rigorous science. Yeah. You made me think about the
upside of open science. Usually we say research is a black box, right? We don't know how the researchers conducted their research. It's a black box. But now we call for open science. Right. Yeah. And how you made all the decisions along the way while conducting your research is actually very important. It's also very good material that we can use as we teach the next generation of researchers, right? How did past researchers make their specific decisions? Why did they choose the parameters that way? Why did they select those variables? You have to understand why they do research that way, instead of: I want to ask people's perceptions on XYZ, so I'm going to ask these survey questions. That's not the rigorous way to do
research, at least in our social sciences. Exactly. My next question is: what advice or suggestions would you like to give to our doctoral students and new researchers, such as assistant professors, who aspire to conduct quantitative research in our field of educational leadership? That's a good question. I do have some advice, and I prepared some resources. Let me talk about a few researchers and scholars. I want to briefly introduce them to you, and maybe the new generation of doctoral students and researchers can go read those authors' publications. That could be very helpful. The first scholar is Rex Kline, a professor of psychology at Concordia University in Canada. I happened to use his textbook when I was a doctoral student, but at that time I certainly didn't pay attention to anything about estimation thinking until quite late. In 2014, he came to my university and delivered two presentations. One is called "Hello Statistics Reform." I want to recommend this presentation to all of us who are doing or learning quantitative methods. He really emphasized the problems of traditional null hypothesis significance testing, and he encouraged researchers to consider adopting estimation thinking. In particular, he suggested using effect size estimation and confidence intervals rather than relying on null hypothesis testing. That is one great scholar. And here is a book also
published by Rex Kline. The title is Becoming a Behavioral Science Researcher: A Guide to Producing Research That Matters. My rediscovery of estimation thinking started with this book. It was published in 2019, and that is also the year we discovered it and started thinking about why it is wrong to interpret p values that way, and what the alternative is. So I highly recommend this book to all researchers and graduate students. The next person is Geoff Cumming, also a professor of psychology, from Australia. Here are two textbooks and one journal article. On his website, he has a sentence saying, "I hope my statistics textbooks will change the world." At the very beginning, I thought that was too ambitious, but now I fully understand: it is possible. That is also why I am taking a new approach to my teaching, research, and even editorial service practices, adopting this new thinking. The newer edition of his book is called Introduction to the New Statistics. I already have this book, and I found it did such a great job introducing estimation thinking, including open science, which we just talked about, and beyond. That is a very, very good book too. The third scholar I want to
highlight is Jacob Cohen. I believe many of us, if not everyone, may have heard his name. He was such a great scholar, and I believe he earned fellow status from three associations, including APA, ESA, and AERA, maybe. He was so great. I particularly want to recommend two journal articles. The first one is called "Things I Have Learned (So Far)." That article was based on his presentation delivered at an annual conference in 1990, the APA conference. In this article, he shared six points. In summer 2023, I went to visit China, and I was invited to talk about my understanding of quantitative methods. I introduced this article four times during my trip, at four universities. Why? Because this article is so good, so important. And when we look at the year, it was so long ago, but not many people follow his advice. Here are the six points he mentioned. A few years later, he published the other article, which is very funny to see: "The Earth Is Round (p < .05)." He mentioned that after four decades of severe criticism, the ritual of null hypothesis significance testing still persists. I just can't believe it. How many years ago was that? 30 years ago. Yeah, it was written in 1994, 30 years ago. So 30 years plus four decades. That's 70 years past, right? Oh, right. Okay, yes. So think about it: now another three decades have passed, seven decades in total, and we are still facing the challenges. Let me show you an article published
by Nature, which we all believe is a very top-tier journal, right? In 2019, Nature published an article called "Scientists Rise Up Against Statistical Significance." In that article, the authors described what is going on right now: researchers and scientists believe it's time to stop using the term "statistically significant" entirely. We should not even use any variants of the term, including "statistically significantly different" and "non-significant." All of these should be abandoned. That is something I want to call researchers' attention to. Also in 2019, The American Statistician published a special issue focused on moving to a world beyond p < 0.05.
That is a very, very important special issue that I always go back to read. In that special issue, they mentioned a few suggestions, and I think we can use them to guide our new generation and junior scholars. Here are some bullet points that I highly recommend: we should accept uncertainty, and we should be thoughtful, open, and modest. Further, we should make some changes to editorial practices and even our teaching and research practices. At the same time, I admit it is going to take work, and it is going to take time. I think we should make the change now, not later. So starting now, we should stop using the phrase statistically significant. Exactly. Yeah. Do not use it. Don't say significant or non-significant, and don't use 0.05. We should really abandon null hypothesis significance testing and dichotomous thinking. We should adopt estimation thinking, asking questions like: to what extent, or how large, is the effect? Not "is there an effect," but "how large is the effect," and report the effect size along with the confidence interval. I also want to mention, you
know, meta-analysis is well aligned with estimation thinking. So meta-analysis is the
right research approach. The other thing I didn't mention
here is Bayesian methods. Bayesian methods are also well aligned
with estimation thinking because they emphasize uncertainty. And also we talked about
open science, right? A responsible researcher should really
be honest and
follow the initial research design throughout the research process
to the end, not cherry-picking, p-hacking, or manipulating the data. That's something we should avoid. I'm just thinking from a
researcher's perspective. It seems like in the past, quantitative
researchers were rewarded for simply reporting significant results,
because it got them attention, and they got a lot of publications out of it. There was a book, Science Fictions, that talked about a lot of
problems in conducting science.
Journal editors, actually, are
not incentivized to publish non-significant results, because what would
be the point for readers if your result is not statistically significant? Yeah, that's a good question. But also, to me, I wish to be
honest with what we found. If the data has already been collected
and if the results are not significant, it is still very important to just
report what the research found. The results don't have
to be significant to me. They don't have to. Well,
you are a good researcher. That's for sure. Anything else you'd like to add before
we conclude today's conversation? Yes. I think I still have a few slides
that are recommended reading, and I particularly highlighted the years. The reason is I want to show
people that so-called estimation thinking is not a new thing. Those are really outstanding works; their authors have been advocating for
estimation thinking a long time ago, even back in 1942 and 1960. Many current researchers, the new generation,
graduate students, may not realize we have these important publications. So I want to highlight these
publications, and here are a few more recent ones. So these are guidelines,
important readings for them to take away and to learn, to read. It just reminds me that in our
education field, we need more practitioners, leaders, and policymakers
to make evidence-based decisions. This just adds one more layer to what
piece of evidence you are looking at. Are you just looking at a research
study that simply reports a statistically significant result, or are you looking
at a research study that also reports effect sizes and confidence intervals? Practitioners also need
to be able to judge the quality of the evidence they are
looking at when they make a decision for policy or for the school system. Particularly when you
look at a meta-analysis. I remember in medical research,
they have a pyramid of evidence. Toward the top of the
pyramid, the evidence is more rigorous, of higher quality. At the top is meta-analysis, because
it's not just one single study. At the bottom of the
pyramid are case studies. You look at five cases, right? And then you look at 10 cases. It's still evidence, but there
is a difference between the quality of different research studies. I think what you just talked about
today just adds another layer to our conversation about evidence-based decisions
for school leaders and policymakers. That's good to know.
I hadn't read about that pyramid of evidence quality. I'm thinking about meta-analysis. If a meta-analysis is based on a lot
of original empirical studies, what if those empirical studies are problematic? I agree. Yeah. I remember we had a meta-analysis
plus network analysis. I can't say half, but there was a large
percentage of studies that had a lot of missing data, or empirical studies
that we could not include in our meta-analysis simply because we didn't have
enough data to conduct our analysis. I
think that's also a big problem, as
you mentioned: what if the original empirical studies are problematic? So that will influence the
quality of the meta-analysis, too. Exactly. To me, the best time for
meta-analysis is not here yet. I think it is really important
to understand meta-analysis. But again, I hope the new generation,
the next generation of researchers, will adopt estimation thinking and produce
quality studies; then we will be able to conduct meta-analyses based on
good-quality original empirical studies. Right. Yeah, we have to have a sufficient pool
of high-quality empirical research to conduct a meta-analysis. Otherwise, there is no "meta" there. That's exactly true. Yes. All right. Well, thank you so much. Thank you for your time. Maybe in the future, we can have a session
just to talk about estimation thinking, and we'll bring a lot of examples to
help our audience to further understand how to apply it in their practices. I also want to thank you for the
o
pportunity to share my thoughts, my learning, my understanding. Thank you.
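[Editorial footnote: the inverse-variance pooling at the core of the meta-analysis discussed above can be sketched in a few lines. The study effect sizes and standard errors below are hypothetical, invented purely for illustration; the function name is ours, not from the conversation.]

```python
import math

def fixed_effect_meta(effects, ses):
    """Inverse-variance pooled effect and its approximate 95% CI.

    Each study contributes weight 1/SE^2, so more precise
    studies dominate the pooled estimate.
    """
    weights = [1.0 / se**2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

# Hypothetical effect sizes and standard errors from three studies
effects = [0.30, 0.45, 0.25]
ses = [0.10, 0.15, 0.08]
est, (lo, hi) = fixed_effect_meta(effects, ses)
print(f"pooled d = {est:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

[Note how the pooled interval is narrower than any single study's: this is why a meta-analysis built on good-quality primary studies is such strong evidence, and why problematic primary studies contaminate it.]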