[DONALD MASTRONARDE:] Good afternoon. I'm Donald Mastronarde, Emeritus Professor of Ancient Greek and Roman Studies and president of the Executive Council of the campus chapter of Phi Beta Kappa, Alpha of California. On behalf of the chapter and the College of Letters and Science, I'm delighted to welcome you to the second in a series of four public lectures we have arranged to celebrate the 125th anniversary of the founding of the chapter. In this series, in accordance with Phi Beta Kappa's dedication to liberal arts education, we aim to highlight the vast range of intellectual curiosity and research excellence for which Berkeley is famous. Let me begin by saying that our Phi Beta Kappa chapter and the College of Letters and Science acknowledge that UC Berkeley sits on the territory of Huichin, the ancestral and unceded land of the Chochenyo-speaking Ohlone people, the successors of the sovereign Verona Band of Alameda County. We recognize that every member of the Berkeley community has benefited and continues to benefit from the use and occupation of this land since the institution's founding in 1868. Consistent with our values of community, inclusion, and diversity, we have a responsibility to acknowledge and make visible the university's relationship to Native peoples. As members of the Berkeley community, it is vitally important that we not only recognize the history of the land on which we stand, but also recognize that the Muwekma Ohlone people are alive and flourishing members of the Berkeley and broader Bay Area communities today. Before beginning the program, I want to thank the Executive Dean of Letters and Science, Jennifer Johnson-Hanks, who is herself a member of Phi Beta Kappa, inducted at Berkeley, for co-sponsoring these lectures. I also want to thank the staff who have assisted us with publicity, particularly Michelle Phillips and Lauren Miller of Letters and Science and Jane Fink of the Graduate Division. I'm also grateful to our chapter's program administrator, Chris Carlile, for his manifold assistance in getting this series organized.
I'm delighted that Chancellor Carol Christ has found time in her busy schedule to attend this event and say a few words to us. She is also a member of Phi Beta Kappa. I want to acknowledge here that throughout her career, and particularly in the unusually challenging years of her chancellorship, she has embodied the Phi Beta Kappa ideal, combining scholarship with public engagement and public service. I know we all wish her well for the remainder of her term and for whatever lies after that. [Applause]

[CAROL CHRIST:] Thank you. I'm really glad to add my welcome to Professor Mastronarde's to this Phi Beta Kappa lecture. Phi Beta Kappa was founded in 1776, which is of course the same year as the Declaration of Independence. It was founded at the College of William and Mary in Williamsburg, which was at that time the capital of Virginia and really a cradle of the American Revolution. Both George Washington and Thomas Jefferson lived there. It was founded as what we would now term a student organization. It was a group of young men who were fed up with the dissipation and drinking and partying of the student clubs of the time and decided they wanted something more serious. So they founded a secret society—it was originally a secret society—that was a debating society around philosophical issues. Perhaps the most famous Phi Beta Kappa lecture of all time is Ralph Waldo Emerson's "The American Scholar," which was delivered at Harvard in 1837, and it was a kind of American Declaration of Independence from Europe. In it Emerson argues that there are three elements that go into the education of a scholar: books, nature, and action. He says books are the least important, nature is far more important—of which we'll be hearing something today—and most important of all is action. So I am so delighted to join you today in this wonderful tradition of the Phi Beta Kappa lecture. Thank you. [Applause]
[DONALD MASTRONARDE:] Thank you. In the spring of 1898, at the urging of Martin Kellogg, who was then president of the University of California, several members of the Berkeley faculty who had been initiated into Phi Beta Kappa elsewhere petitioned the National Council of the Phi Beta Kappa Society for the establishment of a chapter at the University. In response, a charter was granted on September 7th of 1898, and Alpha of California was organized at Berkeley on the 14th of December of that year. So this lecture is the closest in date to that actual anniversary. The name Alpha of California designates that this is the first chapter established in California, and it happens to be the first created west of the Rockies. The chapter is managed by a small committee of Phi Beta Kappa members on the faculty and staff. Each year we select for invitation to join Phi Beta Kappa several hundred high-achieving graduating seniors, and a few juniors, who have completed a broad program of studies with distinction. We also raise money for graduate fellowships to be awarded to Phi Beta Kappa members who are completing a PhD at Berkeley. We also use our funds, supplemented by a contribution from the College of Letters and Science, for induction fee waivers for students with financial need. More details about the history of Phi Beta Kappa and of the chapter can be found in the 100th Anniversary booklet compiled in 1998, which can be downloaded at our website. The history there was compiled by Emeritus Professor Basil Guy. During our anniversary year, we are also in the process of compiling an Honor Roll of Phi Beta Kappa members who are currently part of our campus community, whether faculty or staff or graduate students. If you were inducted into Phi Beta Kappa at your undergraduate institution, we invite you to self-identify by visiting pbk.berkeley.edu and using the link near the top of the homepage to access the brief form to add your name. Now the second speaker in our series
of anniversary lectures is Professor Saul Perlmutter of Physics. Professor Perlmutter graduated from Harvard and received his PhD from Berkeley. He's an experimental astrophysicist and shared the 2011 Nobel Prize in Physics for the discovery of the accelerating expansion of the University—Universe—well, sometimes our expansion has been going that way, but mostly it's our deficit that has been expanding. Among many other leadership roles, he is now the leader of the international Supernova Cosmology Project. I'm delighted to announce that as of today he has been inducted as an honorary member of Phi Beta Kappa, Alpha of California. I could go on much longer about his achievements and his research publications, but it is better to proceed to the lecture so that we'll have plenty of time to enjoy the question and answer period and the reception that follows in the lobby. Professor Perlmutter has had a long-standing interest in teaching scientific-style critical thinking for scientists and non-scientists alike, and that is the inspiration of his lecture today. [Applause]

[SAUL PERLMUTTER:] Well, good afternoon. Hearing the history
of Phi Beta Kappa, I realized that the talk I was going to give today is actually very appropriate, because I was hoping to begin a conversation with the audience here, and I guess with those who are on the web, on the question of what we should be teaching nowadays when we teach science in schools and in universities. And I want to suggest that there are several motivations for a sort of new direction that we should be thinking about nowadays with respect to science education. The first motivation is something that I think we're all very familiar with, which is that science is advancing so rapidly nowadays, and each of these new fields and subtopics has the potential of actually making a very big difference in the lives of citizens, whatever fields they end up going into themselves. But there are way too many topics that could be just as important for students to be learning about as physics, chemistry, and biology; just relatively recently, obviously: AI, CRISPR gene editing, quantum cryptography. These are all things that could end up becoming as important to teach as the traditional subjects that are taught as a part of standard education in science.
Now, the good news is that nowadays there are many ways to learn these topics. I watch my daughter going on the web and taking a whole course in any topic that she cares about, and so in some sense you could argue that, well, people can find access to all these topics, and they can find all the facts and figures that go with them as well, so what's the problem? Well, the second motivation here is that although we have these riches of what you can find on the web nowadays, it's very difficult for people to figure out what's important, what they should actually be believing. And we do realize that we live in a world where people become very vulnerable to all sorts of fads, at best, and conspiracy theories and just plain lies, at worst. And this has, at least partly, led to this degree of polarization in our society that just gridlocks almost any advance on almost any topic that we care about.
And at times, when I've been giving talks about this in other parts of the world, I've felt a little bit embarrassed to be showing the dirty laundry of the US. But of course what people say to me is that they're seeing the same things in other parts of the world as well, so this isn't just a local problem. Now, we really had this highlighted for us during the years when we were going through the pandemic, and we were watching a period in which, while we had just discovered ways to deal with the pandemic, people were uncomfortable getting vaccinated, and a lot of that was due to information that they were apparently finding on the web. Even more surprising is that people then started believing these bizarre connections between COVID and, I don't know, cell phones and 5G, taking it seriously enough that there were attacks on 5G cell phone masts in several countries around the world. You just wouldn't have thought that that would be the world we would have arrived at when science had reached the level that we were able to produce a vaccine in that record time. And so you can see that in some sense things like the pandemic are not just a crisis of scientific knowledge; they're clearly just as much, if not more so, a crisis of public decision-making in this kind of highly uncertain, polarized world. And of course it's not just pandemics
that we worry about. There are all sorts of major issues in the world that we want to be able to deal with, and each of them is something that we struggle over if we don't have a way of making rational decisions together as a society. I've sometimes been pointing out that, for me, all these very scary existential threats, I don't think they would be scary to me if I felt that we were even reasonably on the same page. I think that we actually have the capability, perhaps for the first time ever, of being able to take on these kinds of challenges, and I think they're the kind of challenges that are just about at the size that we're pretty good at, if we're on the same page together and if we have ways to work together. So in the end I was describing this as, I think, maybe the greatest challenge of our generation: if we can figure out some way to get past this period, and get to the point where we can have some kind of collective conversation, I think that would make me feel like we have solved the major problems, or that we will solve them. All right. That brings me to the third motive,
then, for reconsidering science education, which is that I actually believe that we have the possibility today to work on this puzzle using the intellectual methodologies of science that have allowed us to make all these great scientific advances. I think they could now also be applied to this problem of larger collective decision-making and collective thinking, and if that is the case, then this has got to be the most underutilized part of what our science offers in an education. And so that's the, I know, perhaps provocative premise that I'm going to try out on you, and I'd like to hear what people think. All right. So typically I give a talk about the expansion of the universe,
and as you know, I've been interested in the question of why the Universe is apparently now speeding up when it used to be slowing down, and the University apparently too. [laughter] So I won't be talking about that today, but I'd be very glad to answer questions about it afterwards if you're curious, or to come back someday and give a talk on the other one. But I just want to point out that this has in some sense made me very aware of how, as a scientist, you do spend a lot of time thinking about what is the reality of what you're trying to study, and what it means that you think you understand this about the world. And I've come to think that in some ways the difficulties that we have in trying to figure out what is credible on the web, on the internet, have interesting parallels with trying to figure out what is credible in our picture of scientific reality, and some of the parallels are sort of obvious. First, in both cases we have a tendency to look for famous authorities, and to see whether we can trust them. We also tend to look for indications of popularity: what's high up in the rankings of the web pages. And it's not that those are bad ways, in general, to try to learn something about what to trust. It's just that you don't want to stop there, and you certainly would never have wanted us to have stopped with Aristotle's view of physics as where we've finished, right? That was a beginning point, and you don't want your latest fashionable rumor to be the basis for choosing your medical treatment.
Clearly those are things that we've learned: there are better ways to do the job, and science has dealt with this problem over, well, centuries, probably back to the ancient Greeks, by developing different ways to start asking more methodically how we approach the problem, but not with a single method. I think we all, or at least a generation of us, were taught in high school or earlier that there is a scientific method, and that's not really what I'm talking about here. It's more a whole collection of methodologies and tricks of the trade that together make up what science has taken on as a way to approach this problem of what to believe about the world. I loosely hold them together with some themes. First, we are amazingly good at fooling ourselves, and much of what we've been doing with these techniques of science is developing approaches to finding and recognizing when a situation leads us to fool ourselves in a particular way, and then developing tools and tricks that will keep us from falling into that particular way of fooling ourselves this time. And then many of them have this characteristic of recognizing where we have our strengths and weaknesses, physically and with our senses, and where and how to shore up the weaknesses and build on the strengths. And so the techniques, I think, have a lot in common in this sense. So
if you ask what these tools look like: this was a question I found myself asking, now over 10 years ago, when I was at one point aware of watching our society try to make a decision about something very practical, like what the appropriate debt ceiling should be. You'd be shocked to hear that at that time people were having a hard time deciding what the right level for the debt ceiling was. And looking at the form of the conversation and the discussion about it, I was struck by the fact that the debate sounded nothing like the conversations I was hearing over issues like this at a lunch-table discussion with a bunch of scientists at the lab. I realized that the terminologies and the ideas of how people approached the problem just felt very different, and I was wondering, well, where did these scientists learn these approaches and this methodology? And I realized it was not in any science course that I had ever taken, either in high school or in college. It was really being taught as a sort of osmosis process while people went through graduate school, postdocing, and maybe their time as young faculty members: an acculturation that you picked up from the people around you and from your advisers. Little by little you learned a lot of this, almost a terminology as much as anything else, that people would then use to approach a problem and ask questions about any topic. It could be a recent Supreme Court decision, it could be a medical issue. But you would just assume that people would all have these ideas in their mind. And it seemed like a real shame that, first of all,
I didn't learn any of them until I started going through graduate school. I was not getting any of this as an undergraduate in courses in the sciences. And for that matter, why shouldn't you learn it much younger than that? And if you didn't go on in the sciences, it would be more hit or miss whether you would ever happen to pick up some of these particular ideas. So I thought, well, shouldn't we find a way to collect them, see whether we could articulate them, and ask whether they could possibly be taught at a much younger age? I then realized that it's not good enough just as a physicist to try to come up with that collection of ideas; after all, a physics faculty meeting can go as irrationally as any other faculty meeting. So it seemed to me that it would be good to bring others into the game, and so I found other faculty in, well, social psychology, because that seemed very relevant, philosophy, and public policy as well, and also put up a sign for others, asking: are you embarrassed watching our society make these decisions? Come help invent a course; come help save the world. And about 30 people, mostly graduate students, some undergraduates, some postdocs, started showing up. And we started meeting at the end of Fridays, I don't know why; we would begin at about four o'clock in the afternoon and go on until people got starving and went home for dinner. We would meet every week for like nine months, as this group just tried to figure out what a minimal set of ideas would be. They weren't necessarily going to be comprehensive; obviously, we weren't going to get everything. But what would make a nice, interesting starting point? And are there ways to teach it? So that led to the Big Ideas course here at
Berkeley, Sense and Sensibility and Science, which made a lot of sense as a name back when students were reading Jane Austen. I've had a harder time trying to explain it lately. And over the years we've now taught it, what, nine times in ten years or something like that. And we've always had one natural scientist, one social scientist, and one humanities faculty member in the room at any given moment, so that we would model conversations that we thought were important for the students to have a chance to see: us discussing things and debating things with each other. And over the years we've swapped out different social scientists and different philosophers, and so it's now a product of a number of Berkeley faculty over this period, and of cohorts of students who've become very involved in it, many of whom stay and then help teach and help develop the course for the following years. So in that sense it's been a real labor of love for a lot of people at Berkeley over
this time. And what we ended up identifying, at the time that that group was beginning to meet, was almost two dozen topics, with every class teaching one of them. We worked on trying to develop ways to make each of the classes experiential, in the sense that you wouldn't just lecture about a topic, but you would actually do activities, games, discussions, where ideally you might fall into one of these mental traps and then try to figure out how to get out of it. The goal was that you would have enough of a visceral understanding of the problems that when you saw one in your day-to-day life, or when you heard about something in the news, you would recognize "Ah, that's that issue coming up again," so you would transfer it to these other situations. You can roughly group these into topic areas, but let me try to rearrange this a little bit to capture the structure of what we were doing. So first, there on the upper left are sort of the cautionary aspects of science,
the things that you have to watch out for, and that's balanced by some of the "can-do" aspects of science. I'll give maybe one or two examples from each of these different categories, just to give a feel for the kinds of things we'd be teaching, beginning with the—oops, going the wrong way, wait here a second—beginning with the underpinnings. These are typically the sort of philosophical underpinnings of what we're doing as scientists. One that seemed very important was to get across the idea that even though we often think of the world as "well, you have your beliefs, I have my beliefs, and we just have to follow what we believe," scientists tend not to have worked that way. The way that we've tended to make progress is to think that there is some reality out there, some common reality, and there is a point in trying to argue it out together until we figure out what that reality is. It's not good enough just to go off to my corner and say, "well, I guess you believe your thing, I believe my thing." That isn't how we've actually made our advances, and so that is something that, surprisingly, varies over the years, as to whether the undergraduates find it surprising or find that "oh yes, of course." And I think that it isn't a given, but is something that has to be spoken about and thought through. We've also been using a model of knowledge:
it's not really so much a pyramid of knowledge that science is built up on, where you begin with some base elements that you can trust and then build on top of them, layer by layer. It's more the model of a raft of knowledge, where there are many woven pieces that together make a coherent whole, but where any one piece could be pulled out and tested, asking "well, maybe this one's wrong," and you might have to put something else in instead. At any given moment you're still floating on your raft, with most of the other pieces in place, but you do take very seriously the possibility that something as fundamental as, say, gravity has to be reconsidered. And it's not that you then ground all the airplanes while you wait to figure out why Newtonian gravity is or isn't right, and whether Einsteinian general relativity might be the answer. You have enough of the pieces in place that you can trust that you'll float, meanwhile, while you're reconsidering any individual element. So these are the kinds of elements that we thought it's just helpful for people to understand are going into the background of what we're doing when we're doing science. Causal reasoning, of course: everybody
knows that you shouldn't take correlation to mean causation. But we had to discuss what it means when you can't do a fully randomized controlled trial, and that there are other ways to establish causation. And so we teach a little about that (obviously, my field of cosmology wouldn't exist if you could only do randomized controlled trials, so you do need other mechanisms, right?). So that was one of the other elements that we thought was important to get across. A huge part of what scientific thinking involves is probabilistic thinking: we don't think of things as binary, true/false, yes/no, for the most part.
We're typically using probabilistic styles in all sorts of ways. In parts of the course we're teaching the ways in which we've learned that we fool ourselves when we're looking for signal in noise. What do we mean by signal? What do we mean by noise? In what ways will you think you see signal when what you're really seeing is fluctuations in noise? And that ties into the fact that we see patterns in random noise much more easily than you might expect. So a lot of what scientists have often developed is a bit of a sense for how often you will actually see a whole run of heads in a row, and that it's not as rare as you might imagine, and therefore you might need to use statistics. In a course like this we don't actually teach much statistics at all, but we're teaching what it is that you're watching out for that makes you go and learn some statistics, or what situations you don't want to be trusting your gut instinct in, thinking "I know what's going on here" when you actually need to do something a little more rigorous than that. Now, maybe
even more important from probabilistic thinking is just the idea that almost any proposition you hold, you hold with some probabilistic confidence, so we assign credence levels to it. And that's very important, because it could be that you're 99.99999% sure and you'll bet your life on it: you'll get on that airplane and be pretty sure that it's going to fly. But just the idea that you take all these points as having some degree of confidence or not makes it much more possible to be open to being wrong and to possibly changing your mind about something. And that seems like a very key part of what it means to be doing this style of scientific thinking: you have to be able to consider the possibility of being wrong.
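The earlier point about runs of heads, that long streaks in fair coin flips are more common than intuition suggests, can be checked with a short simulation. This is only an illustrative sketch: the 100-flip sequences, the 10,000 trials, and the helper `longest_run` are my own choices for the demo, not material from the course.

```python
import random

def longest_run(flips):
    """Return the length of the longest run of identical outcomes in a row."""
    best = current = 1
    for prev, nxt in zip(flips, flips[1:]):
        current = current + 1 if nxt == prev else 1
        best = max(best, current)
    return best

random.seed(0)  # reproducible demo
trials = 10_000
runs = [longest_run([random.choice("HT") for _ in range(100)])
        for _ in range(trials)]

frac_5 = sum(r >= 5 for r in runs) / trials  # sequences containing a run of 5+
frac_7 = sum(r >= 7 for r in runs) / trials  # sequences containing a run of 7+
print(f"fraction with a run of 5 or more: {frac_5:.2f}")
print(f"fraction with a run of 7 or more: {frac_7:.2f}")
```

Nearly every 100-flip sequence contains a run of five or more identical outcomes, and a large fraction contain a run of seven or more; a streak that "looks like signal" can be exactly what fair randomness produces, which is the cue to reach for statistics rather than gut instinct.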
Now, this is an example of where the kinds of games and activities that we did in the class were aimed at giving a bit of an experiential feel for this. Here we came up with a game where we organized little small-group discussions in the class, where they would take a proposition like "has the increased use of standardized testing in US elementary schools improved the quality of education or made it worse?" But during the conversation, for any statement you make that could have a truth value, you give a number at the end of your sentence. So: "Most teachers have now begun to teach what they think will be on the test rather than what they think is the most important material to teach, 82," indicating your degree of confidence on a scale from 0 to 100. It's a silly game to start with, but what we find is that when the students play the game and work through it, afterwards they find themselves realizing that they were not as confident about things as they would ordinarily have argued with great conviction; that they only had, say, 82% confidence in something, and therefore they might have to consider the other 18% possibility that the world is different from what they were originally stating. In which case, there might be a different way of thinking about the problem. So the students found that this is actually a very interesting way to make them aware of where they stand in these discussions, and that it could be an interesting way to have real conversations with people, if you can get anybody to stick around long enough to do it with you. All right, so these
were some of the sort of cautionary parts. Now, you can't drive a car with just the brakes of cautionary elements; you do need some of the accelerator pedal of the "can-do" aspects of science. And so we thought it was important to teach some things that I've never seen expressed and taught specifically. One is the thing we were calling scientific optimism, which we introduced by asking: what is the second-longest time you've ever spent trying to solve a puzzle? Second-longest, because we didn't want the one time you got obsessed, maybe; we wanted something more typical. So the question is (you can all think about this as well): most of the students were answering, well, sometimes they spent as much as several hours, maybe even more than a day, on it. But it's very rare that people find themselves thinking, "yeah, I guess I remember spending several months or years on some problem." And
the idea here is to highlight the fact that a lot of the way science works is getting comfortable with the idea that most interesting problems don't solve themselves very easily, and yet if you stick with them long enough you actually can get somewhere, if you have that confidence that you will be able to solve problems. And more than once I've found, when you talk to science faculty, that a lot of what you're doing as a mentor for grad students (and I think maybe in other fields as well) is getting them past the point where they say "I tried it, I gave up, it didn't work," and getting them to be comfortable with the idea that it won't work for a while, but that you stick with things much longer than it feels like you should. It's not that you never give up on something; you don't want to hit your head against a wall when something's not doable. It's just that most of us aren't aware of how much a problem requires to actually make some progress on it, and that you have to be fooling yourself into thinking you can solve it long enough to actually solve it. So that was the element in play here. Other aspects of the "can-do" side of science we were teaching had to do with techniques for estimation that allow you to get a grip on a problem rather early, without having really solved the problem. And so there are a few of these things in play. Now in the end, we want to bring together these "can-do" aspects and these
cautionary aspects and do something with them. And so we come down here to where you want to actually make some decisions, and then of course you run into all the issues of what happens when humans think through problems: the problems of human cognition. Now many of you will recognize the availability, representativeness, and anchoring heuristics and biases from the kind of "Thinking, Fast and Slow" discussions that Danny Kahneman raised, and we do discuss some of that in the course. Here I was just going to focus on one of them, confirmation bias, since that's become a very visible one, of course, in our world today. It's
very easy to look through the web until you find something that you agree with and then think you
found some evidence for your point. In particular, science has come across a rather interesting
way of fooling yourself with confirmation bias, which is that, if you are doing any complex
analysis, you will be trying to figure out where you're making mistakes. And I don't think
most people have thought about it this way, but I'd say probably 90, whatever, 98% of the
time as a scientist, what you're doing is trying to figure out where you're making a mistake
and where you're going wrong. And so you're often trying to figure out
"is this data point, you know, good?", "is this analysis right?" There's a confirmation
bias danger, which is that, with anything that's at all complex, you will hunt for all the things that
are wrong when you have a result that just looks weird. And, you know, you tell your grad student,
"well, you can't go to the conference and show that plot. It's going to just be embarrassing
unless, you know, you really check for this bug in the computer program, check for this thing that
may be wrong with our scientific setup." But if you see a plot that looks like what you expected, and
you need something to show at the next conference, and perhaps it even might back up some new theory
that you have, there's a temptation to say "well, I guess we've found all the bugs. We're done.
Look, it looks great." And there is clear evidence, when you go back through the scientific
literature, that that's going on, and it's done by great scientists. I mean, so
you see excellent work done by excellent people, but you go back and you analyze the data
afterwards in retrospect, once you know what we now know, let's say 25 years
later, and you can see all sorts of evidence of people having stopped looking for bugs at
the point at which they got the answer that they thought they might get, or they're
interested in getting. And I've now gone and analyzed, you know, my competing group's work
and seen this thing, and I felt very smug, and then I went back and
looked at our own work, and I found errors that clearly look like the
same kind of error. So I've come to the point now of using a new technique, something that
started in particle physics first, called blind analysis, which just in the last 15 or 20 years
has been becoming more and more the accepted norm in certain areas of particle physics, the idea
being that you have to figure out ways to check all of your analyses without getting to see the
answer. And it's tricky; there are clever ways to do it. But I've now come to believe that if I
don't do it, I don't really trust my own results, let alone anybody else's results. And so we've
introduced it into cosmology, and it's starting to be taken up by many of the cosmology teams as
well. It hasn't yet spread to the rest of the sciences necessarily. Just to give you a little
more feel for this, I'll give you one of the stories from my own field of cosmology, which was
the measurement that was done over the
last 100 years of the current expansion rate of the Universe. So what is its current speed of
expansion? And so, this isn't the whole thing I was describing earlier, of accelerating and all
that, but this is just: what is it doing today? And you can see the numbers, you know, were way
off and then started moving around, since the 1920s when they first tried to measure this. And
then in the more recent period, let me just blow up this region over here, and you'll see
something rather interesting going on between, like, the 70s and the 90s. And that is: if you
look at the papers and the values they were getting, there's this unusual trend, which is that
one team's work always got numbers around 100 and another team's work always got numbers around 50.
And these actually were both excellent teams, and if you look at the papers, they've been correctly
identifying errors in the other team's results. But, you know, the unfortunate thing was that
during that period you knew who wrote the paper based on what value it got.
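The "hidden offset" style of blinding that comes up again in the Q&A can be sketched in a few lines of Python. This is only a minimal illustration, not the procedure any of these teams actually used; the measurement values, variable names, and offset range are invented:

```python
import random
import statistics

# Hypothetical measurements of some quantity; the values are
# invented purely for illustration.
measurements = [71.2, 68.9, 74.5, 70.1, 69.8, 73.0]

# Blinding step: shift every value by a hidden random offset. You can
# still inspect the scatter and debug the analysis, but you can't tell
# whether the result favors one camp's preferred value or the other's.
hidden_offset = random.uniform(-20.0, 20.0)
blinded = [m + hidden_offset for m in measurements]

# All debugging and quality checks happen on the blinded values...
blinded_mean = statistics.mean(blinded)

# ...and only once the analysis is frozen do you subtract the offset.
unblinded_mean = blinded_mean - hidden_offset
```

Because the same offset shifts every point, the scatter, outliers, and any pipeline bugs look exactly the same blinded or not; the only thing hidden is whether the answer is drifting toward the value you were hoping for.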
And clearly there's something going wrong with this science, even though when you look at
each thing they're doing, it all sounds right individually. And of course I think what's missing
in this case was clearly the blind analysis. So as I said, I have now come to think that you have
to be using it. When I was talking about this in this class at Berkeley, the social psychologist,
Rob MacCoun at the time, said "you know, we're having this replication crisis in our field right
now, and we're not able to replicate, you know, experiments. We should really be using this
blind analysis technique. Could you help write an article?" So we wrote this article together for
Nature, trying to discuss, you know, how you could try to bring it into the other field. And, you
know, the other one that's being used for similar purposes is pre-registration, for those people
who've been following the story. And just a week or so ago, I was very happy to see that there was
a paper out in Science magazine where they showed that during the replication crisis they were
getting like 30% to 50% replicability, that you could actually believe a result when somebody
else did it, that they'd get the same result that another group had found. When you use techniques
like this blind analysis or pre-registration, they were now getting like 97%, or something
like that, replicability; you actually were apparently now beginning to get results that you
might be able to trust. So that was nice to have seen. All right, I'm getting a little
sidetracked—so that was individual human cognition as well. There's the next question about what happens
when you need to make a decision as a group, and here one thing I thought was, you
know, there are all these interesting discussions about wisdom of crowds versus herd
thinking, optimal ways for groups to make decisions. But what I really want
to bring up here is that at the end of the day, if you've taught all this other rationality, it makes
no difference whatsoever if, when people actually get into a decision-making mode with others, it
all gets overtaken by everybody's fears and goals and ambitions. And of course fears, goals,
ambitions, and values are what brought people into a room to make a decision. So those
aren't going to go away. The thing that you could lose is the rationality, and so unless you have
principled ways to weave the values and the fears and the goals together with the rational elements
that underpin a decision, the thing you're going to lose is the rationality. So although at
one point I thought it wasn't the job of, you know, teaching scientific techniques
to get into decision-making approaches, in the end I came to the conclusion that, you know,
if we don't take that seriously and don't think about how we do that in a principled way, then
we're not really doing the job of teaching, you know, rational thinking in any way,
or critical thinking. So we've introduced into the course a lot of the different best
practices, different experiments that people have been trying. There are some very optimistic
ones, like a technique that's called deliberative polling—I think it's a very interesting
one to look at if you ever get a chance to look it up—ways that bring in expertise and bring
in a statistically representative body to make a decision using those two together.
So you get the values and you get the facts, you know, of the story in the decision at the same
time. And then finally—I won't spend extra time on it today—but I'll just say that we
do spend a little bit of time on this topic of when science is suspect, because, you know,
by this point in the course, you might start to think that it's basically science
boosterism: if it has the name science on it, it must be good. And so we want to at least have
that moment where we remember and look at the ways in which science goes wrong, including
some of the terrible ways in history where, you know, studies of human subgroups
have been a nice tool for oppression of one sort or another, and we thought it's just important that
people have their antennae up, and recognize that there have been very serious
failure modes of science, if we're going to be describing, you know, this whole thing as
a product of science. So this is the feel of the course. Now, you know, there are many
more, you know, topics underneath all this, and so, you know, if you're curious, I'll
point you later to the web page, and you can go and see, you know, what all these
different topics are that we ended up covering in the course. The question I want to ask now,
though, is: with this vocabulary of science, if you have it in hand, how would it make
a next generation of citizens act differently? What could this do for us? And I, first, you know,
would like to imagine, and I've sometimes talked about it in a grandiose way to the students in the
course, that, you know, someday they will all be in Congress, and they'll be making decisions,
and you'd like the congressmen to all have this vocabulary because they've all taken these
courses, and they will be using these ideas as they try to decide what's the
right, you know, course of action. We want them to be interested in building a realistic
shared picture of the world together. We want them to be using probabilistic thinking, not just,
you know, this black-and-white "I'm certain I know the answer" thinking—to be aware that they could be
mistaken, and that the way to figure out if you're mistaken is to actually find other people to help
tell you where you're going wrong, not to find other people to yell at until they believe
that you're right. And just, you know, those as a starting point, I think, would already be
salutary. We would like citizens to know what they're looking for in an expert:
when they go for expert advice, you presumably want the ones who recognize all the ideas
we just talked about in developing this course, so they should understand
probabilistic thinking themselves,
they should be open to others' contributions, they should perhaps have this "can-do" spirit
that might allow you to consider enlarging the pie when there's a problem that might be
otherwise a zero-sum game. So there's something that you want to see in the people
that you're going to trust to give advice. And you would like these to be shared skills, so
that people can use them together constructively. So that was, you know, somewhat the picture of
what kind of society you might be trying to build if you were successfully educating with these
approaches. All right, so with that motivation, you could ask: well, where and how can we get
these ideas out into the world, so that someday, 20 years from now, we all feel safe when we look
to see what our Congress is, you know, voting on today? We have taken the material
of this course, and just in the last few weeks finished proofreading the galleys of a book. So
you should see this sometime in bookstores, in the spring, I guess, or something like that. So
that might be one way that people will get it, for those people who still read books, I guess.
Actually, we're also auditioning the people who will do the audio, so that maybe, you know, some
people will listen to the book. So we're hoping that will be out, and that will be a route.
But we've been, obviously, working very hard on this question of can we teach this in an
educational context, in courses and schools. So there's this political problem: with trying
to teach anything in, or take anything into, a high school in our country, and probably most
parts of the world, it's very difficult to see, you know, how you get something into the
curriculum. And one route that we realized does get through to schools all over the world,
without going through the political problems, is the Nobel Prize websites: every year,
teachers download their material and just use it. So we went and spoke to the Nobel Prize
Foundation, asked them, would they be interested in helping to develop this, and they've joined us
in starting out to work on a high school version of this course, and developing modules for high
schools that could be going into existing courses, or brought all together to teach a new course
in critical thinking. And so, that work has been going on with actually the help of Lawrence Hall
of Science, so it's another Berkeley product. The very first unit just came out a few months ago.
Out of six units, two of them are in field tests and one is being written for the next round.
And so we're hoping that within a year or so we'll have this full set. If anybody
wants to go look for it, nobelprize.org/scientific-thinking-for-all/
is the current name of this website. And there's a big teacher's manual with, like, you know,
200 pages, to help the teachers, because they have never taken a course like this, so it
takes a little more to explain what it is you're doing if you're trying to teach this material
in a course. You've heard that we've developed the university course at Berkeley, but it's now
starting to spread to other universities, and so it's now been taught for a few years at Irvine
and at Harvard, and Humboldt State has tried it. So far, we know that the University
of Chicago is picking it up for this coming year. And so we're
hoping that there might be some life that will start coming through the
universities to spread these ideas, and we've been working very hard to make all
this available for any faculty who want to teach it anywhere with the website that has
all the teaching material on it, including, with appropriate password, all the quiz
problems and the exam problems. And then we've been starting to look at ways to approach
online education, you know, maybe animations, maybe YouTube videos, working with some of
these usual YouTube influencers, you know, the Veritasium and others, to see whether
we can start spreading it through more informal routes. And we've also even had a chance to work
a little with the Exploratorium, to see whether there could be science-museum, you know, galleries
of this kind. I still like the idea of having an escape room where, after you've learned a number
of these techniques, the only way you get out of the room is by using some of them, you know, in
a problem, so that'd be kind of fun for a science museum. You don't know whether you've taught any
of this unless you come up with ways to test it, and so we put a lot of effort in—we got a
grant from the Moore Foundation to develop assessment tools for this, and we've ended up
coming up with five different categories of questions. And so we've started doing the
pre-course/post-course comparisons every year, and we're seeing what looks like surprisingly
good, you know, motion of this kind, where we're getting, you know, a fairly broad distribution
at the beginning sharpening up to where you want people to get to in understanding these ideas.
So at least in principle, it looks like we can tell whether people are learning this. We have not
yet been able to do what I would love to do now, which is to start doing the longitudinal studies
to see whether any of this stuff sticks. Do students continue to use it? We get, you know,
anecdotal things like postcards from former students—one student was saying that
he was now running for the State Legislature, and he just wants us to know that he's,
you know, remembering the course. But this, I think, would be
very interesting to do. The other group that became interested—I don't know if you have been
following recently, but the latest PISA results came out; these are the international
comparisons of educational attainment in countries around the world—they were very interested
in this as well, and they're planning for the 2024 edition to start to test these kinds of
items, and we hope that that might drive interest in teaching this kind of critical thinking,
if it's starting to be tested and compared between different societies. And then
of course, we really see this as just a first attempt at
doing this. Now if we get the example out there, we're hoping that others come up with other ways
to teach this, and other approaches. You know, you often will find people saying that
you should be teaching something like this, but we felt that it was usually
very vague; it didn't have any of the specificity that you have in a curriculum
for chemistry or biology. And we thought that if you have a list of these very specific things
that you can teach and these things you can test, it would make a big difference. And so
that's what we're hoping to exemplify here, and that it might stimulate others to try these
ideas, try to develop other curricula of this kind. So let me stop pretty much there, and just
say: can you imagine what it would be like if basic education equipped everybody with this?
And in some sense, maybe, is that what we should now be thinking about when we talk about
science education, either as much as or maybe even more than "do people really understand
these concepts of physics, chemistry, biology?", much as I love the concepts of physics
and chemistry and biology. I will leave it there, with maybe the list of topics, and
just see whether there are any questions, if we have time.
[Applause] [AUDIENCE MEMBER ASKING QUESTION:] These
are wonderful criteria, tools to really apply critical thinking to any thought. But what
do you do when you are confronted by reality, where in academia several students
are distorting reality in real time, and there's no chance being given to
synthesize or to resolve the actual truth, and the falsehood is presented as truth?
How do you [UNINTELLIGIBLE] this, please? [SAUL PERLMUTTER:] So I assume I should repeat
for the camera. Okay, so the question, as I'm hearing it: this all sounds very well if people
have the time and the thought and are learning it that way, but what do you do in a world or
even a university where people are presenting, you know, positions without pausing to ask, you
know, is this right, is this wrong? And how do you work with that? And people are presenting
realities that are clearly distorted, and they're presenting them as if they're truths. So, a
couple of thoughts in response to that. I mean, clearly this whole educational idea is something
that you want
to be doing all along the way up to this point, so that by the time you get to the point where people
are going to then start an argument, they at least all share an understanding
that they have a reasonable chance of being wrong and that that is part of what we learn about
the world. And then if somebody says to them, "well, you know, can we start
discussing where your ideas come from, and have you considered the alternatives, and with
what confidence do you state these different elements of your story?", at least there'll
be a common vocabulary to have that conversation. Now, it doesn't
help, right?, when you jump in while somebody's, you know, loudly declaring a falsehood, and you
say, "well, let's stop and, you know, discuss 23 different concepts."
Now's not the time; that's probably a little
bit too late in that situation. On the
other hand, you know, we've been teaching this course in classes that have students with very
strong points of view. In fact, at moments, you know, at the very beginning of the course,
somebody will come up to us and they will describe the thing that they care
about most, which is a very strongly held political conviction or something, and they're
really hoping that that comes out in the course, and it made me a little bit nervous.
I found myself thinking, well, how's this all going to work, you know,
as we work our way through this course, are they going to just keep wanting to bring that
up and hammer on that point over and over again, and once or twice there was a little bit of that
that I saw in a question that would come from the students, but in general that wasn't the effect.
In fact, I found at the end of the semester, to my surprise, that those same students who I
thought would have been kind of annoyed at the course would come up afterwards and say,
"thank you, that was one of the best courses, you know, I've had," and they were recommending it
to their departments, because they were starting to hear it as "these are the things I
wish my opponents understood," you know, to some extent. And that's what I want, right?
You want people to feel like, you know, if only everybody else understood this,
then the world would be better, because then they might find there's at least the beginning of a
place for somebody to then say to them, "okay, but, you know, have you thought about this for
yourself on this particular one?" And so I was surprised to discover that, so far, the
students at Berkeley at least, where there are some passionate, you know, viewpoints, seem— well,
I also said they're a self-selected group, right?, so this is the 300 students who chose to
take this course. But nonetheless, there were students who would come up and declare
their prior convictions about, you know, a strongly held political belief ahead of time, and
yet they seemed to be responding after seeing their way through a course like
this. But it's a good question, you know. [AUDIENCE MEMBER ASKING QUESTION:] Thanks
very much, Saul, for this great lecture. I was really struck by your examples of something
like confirmation bias within modern scientific practice, and intrigued by what you said
about the prospects for blind analysis, and I just wanted to follow up about
that. Could you tell us a little bit more about how blind analysis works in
the scientific context, and then whether there's something analogous to it that might be
more broadly applicable outside of science? [SAUL PERLMUTTER:] Absolutely. Just for
those who couldn't hear: the question is, can I say a little bit more about how the
blind analysis story works, both in the science case and also in the more general way.
So in the science case, a lot of what you need to be able to do when you're doing these scientific
analyses is to see the data in a form where you can recognize where mistakes might be
in your data collection. And so what we often do is, then, intentionally, let's say, add some
random constant that we don't know to all the data, so you can see the distribution of the data,
but you don't know whether it's going to favor one result or the other. That's sort of an example of
the kind of thing. Different problems of course have different needs of that kind. Many of the
social psychology experiments that we were talking about in the other paper I was describing—
there, what you might need to do is swap the labels randomly of the different groups that you're
comparing, so you don't know which group is the one that had this effect and which group had that
effect, and then afterwards you get to reveal it. But you still need to show yourself something,
so you can debug your experiment and figure out what's wrong. But I started to think of this
as a little bit like a fancy version of the thing we're all taught as young children, which I
always thought was so clever: when you're trying to divide a piece of cake between two people,
one person, you know, chooses, and the other person cuts—the idea being that you blind yourself;
you don't get to know ahead of time which one is going to be your piece of cake. And so in some
sense we've all been doing things like a blind analysis for most of our lives; we just didn't
know it. And I think it's something that, for example, if you're trying to choose, you know,
what medical advice to take from different web pages, you might first try to get somebody to
help read the pages for you and tell you how the pages are approaching the problem, before you
decide which one you're going to take seriously, because otherwise you'll tend to read the page
whose answer you like, you know, and take
that one more seriously. And so there are a number of situations in which you may intentionally do
the equivalent of, you know, basically a wine tasting, where you hide from yourself which one
it was you were hoping the answer would be, until you decide: well, which one's methodology do I
trust more? [AUDIENCE MEMBER ASKING QUESTION——not intelligible on the recording] [SAUL PERLMUTTER:] The situations that worry me, you
know, probably most are situations where people have gone beyond rational discussion, right?,
where it's, you know, often reached violence of some sort. Because,
you know, there it feels to me like you're now no longer even in the game of a
conversation, and so, you know, that's part of the reason I see this as a big educational
project, as a starting point rather than as an immediate conflict-resolution, you know,
approach. It could be that it would help with conflict resolution.
I don't know, but the goal is to
start building a background, a society of citizens and people who can recognize these
ideas, before you get to the point where people are in the heat of irrational, you know,
fear and anger. And as much as possible, I think there might be ways in which these could
also be used in mediation and in situations where it's a hotter situation, but for today's
conversation, I'm thinking much more from the point of view of: let's start
getting ahead for the next 20 years and try to get ourselves into a better position, just
because, you know, you have to start somewhere, and that's, I think, where the
starting point is here. [AUDIENCE MEMBER ASKING QUESTION——not intelligible in recording to transcribe sensibly] [SAUL PERLMUTTER:] Every year when
we teach this, we have at some point a moment where we have to decide, you know, how
we want to handle that discussion of how does
this relate to people's religious convictions.
And it's interesting. I mean, I'd say that, in general, for most of the students, by the time
we've discussed all this, even if they came in with a strong religious conviction, they feel
like almost all of this is still valuable for them in understanding the world, in the places
where they don't already know answers from their religious convictions; and in some situations
it makes them feel like, well, not every religion is one that has to
take a position on all the ways in which we understand reality. And so it's, I think,
only in certain corner areas of religion that this ends up becoming more of a potential
threat to the thinking. One of the things I usually bring up in a course like this is
the fact that religion is providing for many people something that has a slightly
different goal than the goal of, you know, how well are we using reality to make decisions. It
has a goal of a certain kind of comfort with the world, and our job here isn't
to take away people's comfort, right? That's not the reason for doing all this stuff.
And so, you know, I say to people that they really have to decide, if they want to ask a
question: is this, say, a place where it's going to really upset them if the
answer comes out to be something different than what they expected? Or is this a place
where it's appropriate for them to, you know, challenge their thinking?
Because, you know, it's too easy to feel like you know the answer, and of course the whole
point of this is not to come at people as if you know answers; it's to come at people as if
you are open to questioning the world. And so if, you know, somebody has strong religious
convictions, I have to be open to questioning the world just as much as I would like them
to be. [AUDIENCE MEMBER ASKING QUESTION:] Hi, so your blind analysis is
[unintelligible] part of the things that you adopted, essentially in the areas of science that have very large [unintelligible]
at work, thousands. I'm just wondering if you think that you will face more resistance in adopting these techniques
where science is conducted on a smaller scale, where people are personally involved in the analysis
and their funding might depend on how those [unintelligible] go. [SAUL PERLMUTTER:] Yeah, yeah, I mean, we have discussions about
this. Of course, some of the work that we do is also done in a small group, so we've
had a little bit of experience there as well. But what I've been pointing out to them is that,
in these situations, in the end, as an individual, as a scientist, it doesn't really
help you to fool yourself either, because you're going to follow up your own work, and if
the work turns out to be wrong, you're going to waste another three years tracking down
something which is not based on anything that's actually really out there; it's
based on some, you know, misunderstanding of the analysis. So you probably have
as much motive as anybody else in getting it right. Now, maybe not that week, you know; that
week, maybe it would be better for you to be able to publish something and put it on your,
you know, application for a faculty position or whatever it would be. But
it's hard for it not to hurt you in the end, if you're only learning
things that you've convinced yourself are true rather than things that
actually have to do with what the world out there is telling you. And of course most people
who are spending all that time getting involved in doing science do sort of want to know
what the answer is, what's really out there. So I think once you convince
people that this really protects them from coming up with conclusions that will not
hold up, you know, the next time around—with, apparently, you know, that non-replicability
crisis of, you know, what, 30% being able to be replicated—I think people start to say: okay, if
they can do this in some reasonable way, they should. It's not always easy, and so we've been, you
know, trying to help demonstrate how you can build it into the systems that they currently use, so
they don't have to invent it each time themselves. [DONALD MASTRONARDE:] We should try to have some time to enjoy the
reception before the building closes at 6. I want to thank you for this very engaging and
informative talk, and also for your handling of these very good questions from the audience.
So thank you to the audience too for those good questions. [APPLAUSE]