
Beyond Good or Bad: AI in the Context of the Liberal Arts

A panel moderated by Sarah and James Bowdoin Professor of Digital and Computational Studies Eric Chown and featuring eminent scholars and leaders in the field of AI: Rumman Chowdhury, Daniel Lee, Carlos Montemayor, and Peter Norvig.

Bowdoin College


(no audio) - This is a big moment for Bowdoin College and I'm honored to be part of it. The subject for today's discussion also reflects a big moment for all of us. Artificial intelligence is technology that is genuinely revolutionary. It will change, in fact, it has already begun to change in fundamental ways, the ways we work, the ways we live, the ways we learn. Many of us have been reading and thinking about how this wave of technological change will break, in our world, higher education. Th
ere are many unknowns, but what is clear is that generative AI is here and here to stay, and that the rate of change and advancement is exponential. We must equip ourselves and our students with tools and with context to engage with this sweeping technology. To ask the kinds of questions that lead to imaginative and humane ways of reaping benefits, exploring and containing problems and advancing just futures. We have four panelists with us today who are experts in this field, as you can see from
the descriptions of their work and backgrounds in your program. So I know this will be an excellent and enlightening discussion. We have Rumman Chowdhury, a data scientist and social scientist who specializes in artificial intelligence and its responsible development and use. Daniel Lee, a world leader in robotics and autonomous systems, whose work centers on building AI systems that can approach the performance level of biological systems. Carlos Montemayor, a lawyer turned philosopher of mind
and psychology who has just published a book about the prospect of a humanitarian artificial intelligence. Peter Norvig, a pioneer in the field, known for his work at Google and at NASA, and for writing the definitive and leading textbook on artificial intelligence. And our moderator from our own digital and computational studies program, Professor Eric Chown, whose research spans artificial intelligence, cognitive science and robotics. So thank you all for being here and for collectively becom
ing part of what I'm sure will be a fascinating conversation about this transformative technology. So please join me in welcoming our panelists to Bowdoin. (audience applauding) Thank you. (audience continues applauding) - Good afternoon, everyone. Before we begin, I'd like to address the importance of maintaining a respectful and courteous atmosphere. We value open discussion and diverse perspectives, but it's essential that we engage in a manner that promotes a positive learning environment. I
personally am super excited to hear from these four people, so let's get started. One of the things I'm excited about is you all come from four different corners of the artificial intelligence world. So I'd like you each to introduce yourself, talk a little bit about what got you interested in that corner of the world and what you're excited about today doing in that world. We'll start with Rumman. - Sure. Hi, everyone. My name is Dr. Rumman Chowdhury. I'm a quantitative social scientist by bac
kground, so my PhD is actually in political science. My interest in artificial intelligence, even though political science sounds very different, it's actually in a very similar vein. I love using data to understand patterns of human behavior, and ways in which we can improve people's lives through these insights and understanding. It is deeply, deeply fascinating to me. And I think what's maybe exciting me right now about artificial intelligence is the increased accessibility of the kinds of mo
dels that have been generally housed in Silicon Valley and you know, in kind of exclusive tech environments. So last November ChatGPT coming out was, yes, it was a revolution in transformer technology, et cetera, but really what it was a revolution in was the interface, the accessibility, the way it looks like an easy to use chat feature, so that everyone from my hairdresser, you know, to, like, my mom, to my dad and me, were all actually interacting with the same advanced technology in the same
way. - Hi, my name's Dan Lee. I'm currently a professor of electrical and computer engineering at Cornell Tech. My particular interest in artificial intelligence is how can we get AI systems to really work in the real world and by the real world, I mean, you know, all the grungy physics, the messiness of what we live with in our daily lives. So how can we interface, you know, this kind of new intelligence that we're building so that it can actually improve people's lives, you know, with their n
ew devices or the way they interact with things in the physical world. And you know, how can we get these devices to actually better understand what you want them to do and not to screw up your lives, and you know, so things I've worked on in the past are autonomous driving. Eric and I have done robotic soccer for a long time. You know, various types of robotic manipulation. And at Samsung I also worked on how we can embed AI into, you know, cell phones and smart TVs and those types of devices.
So that's what I'm very interested in: how can we actually get this kind of disembodied cloud intelligence that we have into our actual daily lives. - Hello, everyone. My name is Carlos Montemayor. I teach philosophy and the ethics of AI at San Francisco State University. I recently created a program there. My background is in law. I worked originally in human rights and then I changed to the field of philosophy. And my interest in AI is the use of psychological language in developing arti
ficial intelligence since the very beginning. Terms like attention, consciousness, how they're reduced. And then just the phenomenon of artificial intelligence as a social change with respect to knowledge production and the status of agency, with different forms of agency within society and how artificial intelligence may be a promise but also a risk. So that's my main focus. Thank you. - Hi, I'm Peter Norvig. I'm a hipster. I was doing AI before it was cool. (audience laughing) Going back to th
e 1980s. In the mid '90s, my colleague, Stuart Russell, and I had the opportunity to write the textbook, which has remained the most popular textbook. 'Cause I think we saw a change in the field from kind of the good old-fashioned AI, where it was done by painstakingly writing down rules and the blood, sweat and tears of graduate students, which is what produced the small programs of the time, to an approach based on having lots of data and machine learning and producing the big programs that we have
today. And so I've been able to contribute to that in teaching at Stanford and Berkeley and other places, and at companies, small and large, including Google, and in government at NASA. So I've got all the top-level domains and I'm interested in what the applications of AI are across those domains. - All right, so we're here for the inauguration of a new president and Bowdoin is a liberal arts college, and the foundation of the liberal arts is to prepare students to be productive citizens, produ
ctive in the world. So AI is obviously, as Rumman, you just mentioned, it is impacting more and more, it's impacting everyone in lots of different ways. What should a Bowdoin student, when they graduate from Bowdoin, know about AI? - Are we gonna go down the line? - Let's start at the other end and come. - Yeah. (panelists chuckle) - I don't know if the purpose of liberal arts is to be productive. Maybe your purpose is more to produce people that can think and then they can choose to be producti
ve or not. (audience laughing) - Different ways. (audience applauding) - Okay? - I am officially embarrassed as a Bowdoin professor, but that was the original Greek construction of the liberal arts. - So what should they know? So they should know how to use the current tools, right? And so it used to be, you had to know how to use a pen, now you had to use a computer, now there's all these great new tools that you can use on the computer. So they should be aware of that. They should be aware of
what's coming, you know, sort of what's possible and what's science fiction, and don't get taken in. They should be aware of the various threats and promises as well. So, you know, if you're thinking of my medical future, how likely it is that there's gonna be some amazing discovery that's going to cure my problem. If you're thinking of what's gonna happen in politics, I should be worried about disinformation and so on. And just be aware of all those possibilities. You don't have to know all the
technical details of how everything works behind the scenes, but you have to know what the impact is going to be. - I can build on that actually, if you don't mind. I'm glad you brought up this idea of being able to critically think about the output and just this idea of, like, being a thinker. I think a lot of the language of artificial intelligence by design asks us to give up thinking and that's how a lot of these products and tools are often framed, right? And frankly, I think that's what d
rives a lot of the fears people have about the technology, right? That I will no longer have agency, I will no longer be able to do things. But I think you're right, like, the critical thing here is being smart about the outputs you see and the inputs you are giving in. What is the data you are giving up and what is the output that's coming out? What is being put in front of you? And as it becomes more, like you mentioned misinformation, as it becomes more realistic, more hyper-targeted to you,
more, you know, hitting all the things that make you feel emotional, how can you as a thinker take a step back, look at this information and say, "Is this real? Is it driving the kind of change I want to see? Or is this somehow manipulating me?" And that ownership is really what you should have when you graduate. - So I can also add to it, I guess. So Bowdoin students are younger than I think all of us, and everyone in the audience. So I'm sure they're actually using it right now in ways, in bet
ter ways than actually the designers who actually built these tools. I mean, you know, already in classes I think we're seeing students using AI tools for their homework, to help with their writing assignments. So that's actually gonna happen. Now I think I agree with, you know, Peter and Rumman that, you know, they should understand what those tools entail, right? What are the implications of using that? They should also know it's not magic, right? That's also, I think, critical that you have s
ome basic fundamental understanding that this is not just something that fell from the sky, is magical, there was engineering behind it. You know, what it entails to properly design something. What does it entail to think about how it interfaces to society? All these ethical questions that we, you know, that have been brought up. So these are the things that I think liberal arts students should have in the, you know, at least a basic understanding of. - Yeah, I agree with everything that has bee
n said and I will just add that in the context of the liberal arts and the humanities as a whole, the emphasis, I mean, if artificial intelligence becomes all that it's being promised, it's gonna be a really profound change in our society and who we are. From the very beginning, the very idea of artificial intelligence was extremely revolutionary. So we should be careful with it. We should understand it. I completely agree that we need to understand the technology. We don't need to know all the
details as humanists or students in the liberal arts, but we definitely need to understand, precisely because it's not something that fell off the sky, that it's a technology that is designed with certain purposes and is good at some things, limited at other things. And because it's gonna shape knowledge production, a very old topic in the humanities, is there a deep relationship between knowledge and freedom? So it should be asked constantly, how is this going to help everyone? How can it be, h
ow can we keep it democratic? How can we use the whole potential of AI for the benefit of most people, or ideally all people? And in terms of education, to educate as many people as possible. And AI definitely offers all these possibilities in spite of the risks that, of course, exist with all technologies. - Can I add? - Of course. - Add one thing just to respond to what Rumman said, 'cause I thought it was a great point. This idea of giving up control. I think if you're giving up control, it w
as a poorly designed system, and I think we've seen too much of that, right? So the Society of Automotive Engineers has this five-point scale for self-driving cars from full human-driving to full self-driving. And I think that's the wrong way to think about it. It's not, "I get more autonomy and I give up more control," rather we should think of two dimensions to say, "If I want, I should have more autonomy." But that's a separate dimension of saying, "I can also have more control." So there s
hould be some systems where I have both high autonomy and high control, and if I wanna give up control I can, but I shouldn't be forced to make that as a trade-off. - Carlos, you mentioned democratization and Rumman, you mentioned your hairdresser using ChatGPT. - That was a real thing that happened, actually. - I very much believe it. - Well, no, she was asking me how to improve her website and I was like, "You know, there's this thing called ChatGPT and you can just put in your details and it'
ll kind of write some copy for you." And she did it right there. So, like, I wasn't making that up. Yeah. (audience chuckles) - So one of the promises of AI is that it can improve those kinds of things and more people can use it, but the history of technology and inequity is not fantastic, right? And there are lots of people who don't have access to AI and won't in the near future. So how can AI be a driver that improves equity instead of yet another technology that kind of makes more of the h
aves and have-nots? And I'll throw that open. Carlos? - Yeah, I think part of the project of understanding the technology and making it more available is that it's produced by an industry, at least at this moment. I mean, you can say that is not the case around the world exactly, but all technologies can be used for, I mean, they're not good or bad in themselves. Even nuclear energy, which is a technology that is typically, rightly demonized, is a technology that could have been
used in different ways. So one big question is what should be the level of regulation? And what are the risks? And for that we need to understand the technology and its potential. So I think, I agree that there's a, the debate about AI and how to control it has been polarized between people that think that it's a grave danger to humanity and people that think it's a wonderful thing. And I think it's important to understand that some aspects of it can be very wonderful, but we won't be able to a
ssess that in any realistic way until we understand the different kinds of things we can do with AI. So for example, with respect to control, it would be good to know what kinds of risk AI can produce in different domains. Not only the legal domain, for companies producing this technology, but the moral and the epistemic domains, where these are technologies that will be increasingly used to monopolize knowledge production. And we want to keep that democratic, so that's a huge risk. But we need to ke
ep those risks separate. We need to understand that these are different things. And that needs to be a dialogue; the more the public is involved, the more universities are involved in this dialogue, the better, I think, the chances that these technologies are used correctly. And, of course, that's. - I can go next. So just, I suppose thinking about AI as a tool for equity, it's a little bit putting the cart before the horse, right? Because we still have vast swat
hes of the world that don't even have internet connectivity, and we needn't even talk about the rest of the world. Rural broadband connectivity in the US is actually pretty abysmal. So we have entire parts of the country and entire parts of the world that actually aren't even participating in what we would consider the rudimentary digital world, like the basic internet. So I think a good starting point is how do we give internet connectivity to wide parts of the world? There may even be parts of
the world that people don't want that kind of connectivity. I know, for example, on the Solomon Islands, they have no internet connectivity, nor do they want it. So it's interesting to think through what are the, who are the communities that aren't part of the digitized world? Do they want to be part of the digitized world? I think frankly most of them do. And then once we get there, then we can start thinking about artificial intelligence as the technology for equity. But otherwise, if we just
jump to saying how can AI improve equity for folks who aren't even on the internet at the moment, then we would be sort of enforcing our perspective of equity on others. - Yeah, I mean, one thing I think that, as Peter said, you know, all these new modern day AI systems are built upon data and massive amounts of data. And I think the danger, right, is that, you know, if you have a data set that's basically biased towards whoever's in power, or has the most voice in society, then, you know, whate
ver comes out of the AI systems are just gonna amplify those kind of biases. So I mean, there's a lot of current work in trying to think about how we can eliminate those biases, in both the training sets and how you actually do the learning algorithms, and how you actually then vet the systems afterwards, but it's still a very nascent field. No one knows how to do this properly yet. And so I think, you know, as people developing, as well as people using it, we have to understand what are some of
the risks associated with these systems and then to try to be on guard for, you know, what could possibly happen with, especially with bias in these systems. - So I agree with you. You know, a good thing is a rising tide lifts all boats, but the tech tide tends to raise the yachts more than the rowboats. So that's traditionally been a problem. There was a study recently by the economist Erik Brynjolfsson looking at, like, call center workers and people like that and giving them access to t
hese AI tools, and there they found that the less-skilled workers got more benefit than the higher-skilled workers. So I think here maybe there is this potential to lift up more people than in most cases with technology. - All right, there's an important theme in AI that I think in a sense grows every day, and it connects to all of the work that all of you do, which is trust. We, the users of AI, are asked to trust that the AI will do good things for us. We're asked to trust that the companies and
the people deploying the AI have our best interests in mind. And furthermore, especially with generative AI, the things that AI produces are making us trust each other, and the information we get, less. Could you all speak to trust just from your own point of view and where you're coming from? Dan, why don't you start this time? - So I'm not sure if I even trust my students, so I'm not sure if I trust AI. You know, it's, you know, how do you build trust, right? In a societal way, it's t
hat, if you demonstrate that you deliver something for me that's, you know, I agree with or I can validate in some way, then that's when you build trust. And I think, you know, if you build an artificial system, it will also have to go through some sort of protocol like that where humans, expert humans can actually then, can validate that this is doing the right thing at the right time. And it's this kind of more relationship. But I don't know how I would build or design a system that has trustw
orthy features that, you know, everyone in the world would just be like, "Oh, I trust it because, you know, UL whatever Laboratories put this brand on it," right? - But at the same time, you're working in robotics, right? And humans and robots, when they interact, it's very important for the humans to understand the robots are not going to hurt them. - That's very important, yeah. - And so- (Daniel laughs) So what has robotics taught us about that? - Well, as you know, I don't know if I trust th
e robots that much, but at least I know they're stupid, right, still. That's the one thing, is AI hasn't really made robots that much more intelligent, at least the physical robots that we have today. So, but definitely, you know, there is, you know, how can, the interaction part is really hard because we don't know how to build an AI system that has kind of a mental model of what humans want or, you know, understanding of humans. And that's really important. That's something that we haven't fig
ured out, how to kind of train a system to do that yet. I mean, humans don't even have a good mental model of other humans too, so it's really hard to write this down or to train it, or what data set you're gonna use to try to get these things to self emerge. That's, I think, a critical issue. And that's, it's still a huge area of research, is how do we get this kind of human, you know, AI interfaces, robot interfaces to really work well. - So, like, I don't think we should trust the technology
blindly or should trust the companies to automatically do the right thing. I say this having led Twitter's machine learning ethics team, and not that people at Twitter were bad, right? It's more that, you know, the company's incentives are not necessarily your incentives, right? Not necessarily the incentives of what leads to human flourishing or a better society, right? You know, I think a lot of us are familiar, you know, and again, I've worked with a social media company, we are familiar with
the fact that engagement metrics drive revenue for social media companies. Engagement metrics have also led to, like, social media addiction, radicalization, et cetera. So I, and I don't think we blindly trust any technology, right? The things we trust, we trust because they've gone through some sort of assessment and vetting process, right? You know, the brakes on your car will work not because you just automatically trust Ford's gonna go do the right thing, or Toyota's gonna go do the right t
hing, but because there are laws, and if they break the laws these companies are fined, there are recalls, there are methodologies that have been built. And all of these things lead to you trusting the technology. And one of the things I think we were kind of chatting about early on during lunch is that so much of generative AI remains a party trick, right? It's like this cute thing that you kind of do and you have a laugh and then you close it, right? So for this technology to really be int
egrated into the everyday systems of our lives, there is a massive trust barrier to be overcome, which is good, right? It is good that the bar to being trustworthy enough for this technology to be telling you what you should think about things, or making decisions on your behalf, or writing things for you, whatever it is that people are going to build for it to do, the bar is high. But that bar only comes when there are standards to be met, when there are accountability systems. And frankly, the
re's also systems of recourse if something goes wrong. And right now none of that exists in our field. - So I wanna describe a possible future where our trust exposure surface might change. So like Rumman said, now there's a lot of issues in trusting companies, and I've got a phone with a bunch of icons on it, and when I press on one, I've now given over complete control of my phone and all my information to Uber or Domino's or Dunkin', or whoever's button I press, and I have to trust them to do
good for me. And I know they want to keep me as a customer, so they have some incentive to be on my side, but mostly they're on their side and not on my side. What if we replaced all those buttons with one button that says, "I'm your agent and I'm on your side." Now you only have to trust this one agent program, and then it's gonna go out to Uber or to the competitors and make a car arrive, or make a pizza, or a coffee arrive. And now the amount of trust I have to divide is less. We
still have the problem of they have to earn that trust, but I've made it easier, and I have to make fewer decisions. Because right now it's really hard to make all those decisions. Every time you install a new app, a screen comes up and it asks you 20 questions. How many here ever look at those questions? (audience chuckles) Me neither. And I should know better, right? But we don't. So this possible future world, where it's more agents on my side, not on the company's side, I think that's really
interesting. - Yeah, so trust is a very important, interesting topic because it's deeply related to responsibility and agency. And it's something that in a typical human interaction is always very difficult. To give an example, the kind of trust that we give to companies is mediated by regulations and laws that make them responsible if they violate those standards. That kind of legal framework, which doesn't exist yet for AI, is different from other kinds of trusting. For example, we wer
e talking about mathematics, trusting the proof of an AI, suppose that there's a proof or some scientific discovery, trusting that discovery is very different from trusting a company, right? And we do that through communities, through scientific communities, through universities, through, and that's another level of trust the AIs could get at. I mean, that would mean they're getting closer to a kind of agency and responsibility, similar to what we have, but we're kind of far away from that. But
then even if they got there, there are situations of trust, for example, caring for a patient or caring for a loved one, where the kind of trust is not epistemic; it's not about what the system knows, whether the system understands the needs of this person. And so we may want to layer the kinds of trust that we give AIs in roughly those ways, right? So sometimes the kind of trust is too demanding for any kind of mechanical device to be given full control. And in some other cases it won't be. But I mean,
again, as Daniel was saying, we're kind of far away from these scenarios, but we may need someday to make those decisions and then think whether there are some areas of trust, of human trust, where we just need humans. Not for anthropocentric reasons, but just because the kind of value that is at stake is just much higher than commercial or some other kind of worth. - All right, I wanna return to the hypothetical new Bowdoin graduate. Rumman, you mentioned flourishing, which is a word that I
like, and the problem of alignment between what companies are producing AIs for and maybe what would help people flourish. Peter, when you were here a few years ago, you talked about the difference between what people want and what they need. So how will AIs in the future help those Bowdoin graduates, the people in the audience, flourish? Maybe emphasizing more what they need than show me another cat video that's entertaining for the next minute and a half. - Was that a question for you? - (chuc
kles) Right. Well, I mean, one area where I think we think AI has big potential, you know, as Peter said, is tools for improving our lives. So, you know, all these AI assistants that are coming in. Right now, these first-generation AI assistants are really limited, right? They've been very scripted, they only do certain things. I mean, now what you're seeing is a much more general-purpose kind of agent or tool that's emerging, that you can ask, you know, to help with schedules, to b
e able to, you know, answer questions, to be able to remind you of things in a much more intelligent way. So definitely, I think, you're gonna see productivity, you're gonna see kind of, especially with the younger generation, their kind of ease of use of integrating with technology will increase and this will improve, I think, their lives. But again, the question is, you know, we wanna make sure that it doesn't, you know, distract them into even seeing more cat videos, you know, every day. So,
I mean, I think it's interesting where we're gonna go. We don't know yet, and it's gonna be driven by the market, right? I mean, it's gonna be what the kids wanna do with these things. - But I mean, isn't the market cat videos, right? How do we bridge that? - Well, actually the market really is, you know, going back to the initial question of productivity, and actually, I know you were joking, Peter, but I appreciated what you've said. So over a hundred years ago, we actually had very similar co
nversations to these, about the last industrial revolution. And you know, the name has sort of fallen out of fashion, but you know, a few years ago people were calling this the fourth industrial revolution, right? And I think that framing was very fascinating, because again, over a hundred years ago, we actually, people dressed in old time-y clothing, but no women or people of color sat and had conversations about where we will take all this product and where it will all go, how it will improve
our lives. And there were essays at that time where people talked about only working three days a week, and you could spend four days, you know, learning to play an instrument, writing novels, taking care of your families, and like literally zero of that happened, right? Because when market forces drive things, we drive people to more productivity. We already saw this happen with things like email, the ability to have digital telephones and accessibility, didn't make any of our lives faster or e
asier. Did it just make us work more? It makes us work more. I was literally taking a call with people in the green room and like, this is the life we've made for ourselves because we let the market drive where productivity should be. So I think in order to have this sort of flourishing and actually productive life, first is like defining productivity as something more than just contributing to the GDP, you know? And second is being very intentional and, you know, this is like, this is the probl
em, this is a collective action problem. We'll have to collectively agree that with this freed-up time and this, you know, ease of getting things and doing things, what we will not do is actually just spend more time doing more work and we will spend that time doing something else. And it is so very, very hard because so many of us, and like I'm talking to myself when I say this, have been hyper-trained to like reach for that golden ring, and then the next golden ring is just a little bit higher
up, and all the digital devices in the world help us get closer to the golden ring, but then we forget everything that we have left behind. - Yeah, I would say that for flourishing what Rumman said is exactly correct. I mean, that's why I think technology as potentially disruptive as AI, and definitely the last industrial revolution, the recent industrial revolutions, they didn't turn out to be great things for all humanity. They created their own problems. And I think with this technology, we
at least have the opportunity to think how it could help some of the problems that the previous revolutions didn't change. Of course, there have been political revolutions too, and there has to be a political discussion about how to use this technology. There's enormous potential for automation that this technology makes possible. Most of the instances that we see of AI right now have to do with how to make more, how to optimize and make things easier for everyone, and that can help liberate people fr
om hours of the tedious work that we have designed our societies around as an essential component of who we are. But it has more potential. I mean, at least in the general description of what the revolution is supposed to be, it has more potential than just helping workers and releasing us from constantly chasing the next opportunity. But we need to think exactly what that means, right? We need to see, I mean, these are questions that were never asked with respect to the previous revolutions. And that was partly b
ecause they were the result of a very different kind of political and social force. I mean, we are in a different situation now. And I think that's what is interesting about universities, right? These are places where we should think about this and then see how, what of the tools we have, and of the understanding we have across disciplines, how that can help this larger political and social dialogue that we need to have. - Peter? - I guess one way to look at it, is we want our systems to be alig
ned with what we really want, and I see a change over time, in how we build our software, right? So originally it was, "Well, software is all about algorithms and so we're really gonna concentrate on that." Then we had this era of big data. We said, "Well, maybe you should concentrate more on the data and you don't need to optimize the algorithms as much." Now I think we've seen a third shift to when we say, "We've got great algorithms to optimize things, we've got great data. What we don't have
is great ways of specifying what it is we're trying to optimize, what it is that we want, what's fair across all of society and so on." And it's just like, we didn't have the mechanism to do that because every engineer assumed somebody's gonna assign that to me so I don't have to worry about it. It comes from outside. Now it's saying, "Well, maybe we should pay more attention to that, both at the societal level, at the governance level of saying what should be legal and not legal, and we've tal
ked about that. At the engineering level of, "I wanna build a product, how can I make good decisions for that product?" And at the individual level of saying, "Do I really wanna spend my time watching the cat videos or going out and getting drunk, or maybe I should be doing something else with my life and can I have some support systems, whether they're software or friends or therapists and so on, in which we can talk through what it is I want and help me get there." - I would be remiss if I did
n't point out that a lot of the things you're talking about sound like the purview of the humanities. Right? Talking about values and so forth. So how are the humanities connected to these projects of determining the alignment of AI systems and so forth? There's a kind of, probably bad, stereotype, but with some reality to it, that AI is about white men in Silicon Valley trying to be billionaires, making decisions that impact the rest of us. How do, and how can, the humanities play a bigger role
in all of this? - This is like one of my favorite things to talk about. So, you know, absolutely. So first of all, all of the white men leading the companies doing this are already billionaires. So this is just like their funsies side project. So the billionaire status has already been achieved by many of them. I think one of the poorest ones is Sam Altman at a paltry 120 million a year. Sadly. - Poor guy. - We can all cry for Mr. Altman. But, you know, I think though, you know, so then why would a
billionaire who could be doing literally anything spend their time on artificial intelligence? I think there's something very compelling about the technology. And it's not just about, like, making a fancy technology. It is a way to have a legacy, I think, for a lot of these individuals. I think it is also a way to, like, create the world that they want to see. And to your point, that the world they want to see may not be aligned with the world that others want to see, but they have the wealth an
d the power and the connections to shape this world in their image. And in some way, you know, it is, that is actually kind of frightening. But also it's interesting to see how humanities get integrated into this kind of work. So, as I said, I'm a quantitative social scientist, so I live on both sides of the fence. So I've taken, one of my subfields was in political philosophy. My dissertation was on the concept of social capital, but I also was an engineering director in my last job. You know,
I build technology systems, so it's very strange to be sort of one foot in both worlds. One common thing I see is actually the quest for what Peter described, sort of this universal optimization function, I suppose that's one way to put it. And the humanities will tell you that there is none, there is no universal optimization function. One thing that engineers really struggle with is the fact that there is no answer to some things. So like, you know, anybody who's a philosophy person in this room
will find it hilarious that engineers genuinely think that there's an answer to the trolley problem. (audience chuckling) I appreciate the laughter because you do not understand how many people have told me they know how to solve the trolley problem. I'm like, "Oh, (laughs) that was not the purpose." So I think one of the things that was the most important in my career, 'cause I was a quant who came into philosophy later, as I mentioned one of my subfields, and philosophy just roundly beat me u
p. It was so hard for me to understand how to do philosophy until I realized that the purpose of philosophy was to teach you how to think, not to teach you what to think. And I think that is sometimes lost in Silicon Valley, which is a world of hyper scaling, hyper optimization, arriving at an answer, having the right solution. You know, like, mind you, you have to understand, like, and I'm one of these kids too, right? You're really good at test-taking, you know, you like, there's a goal, you g
et there, you get there, you beat it, you know? And it's very, very hard to tell these people that the journey is the purpose. So I think one of the most interesting things about AI that humanities can bring to it is reconciling the fact that, number one, not everything has an answer. And then number two, it's the thought process. It's the unpacking of how you answer the question that educates you about your values and the systems that you are building. I'll give you a very specific example that
is coming up all the time now. So there's this, a huge movement in existential risk within the artificial intelligence community, right? And it's a whole school of thought that talks about, you know, AI will annihilate us in the future, so our time, money, and energy should be spent stopping this future rogue artificial intelligence. And the alignment problem, which is a problem people are trying to actively solve, is if we build a sufficiently competent artificial intelligence, will we be able
to control it, right? And so this is a question that hundreds of millions of dollars, if not more, have been poured into people trying to solve. My take on it, again, like, I'll put on my humanities hat for a second, my philosophy hat for a second, and say: the purpose is not to answer the alignment problem; the alignment problem actually is the trolley problem of today. There are three things one must ask when addressing the alignment problem. What is competence? What is cont
rol? And what is consciousness, right? So how do we define competence in a world in which we have AI systems that can pass the bar exam, that can pass medical licensing exams? Are we scared of ChatGPT taking over the world and killing us? No, but ChatGPT can pass the bar exam. So how are we defining competence? 'Cause that's how we've defined competence for somebody who has, you know, a license to practice law. So we've not defined that. Can we define consciousness? Because if we've achieved a level o
f competence that I'm now scared of this thing, is this thing conscious? Because then, who are we to say we can control it? And it's not that there's an answer to this question; it's that answering those three questions will tell us about ourselves and maybe tell us more about the values we're trying to drive into these AI systems. - Yeah, I mean that, I completely agree with that. - We're done. (laughs) - That's it for today. (panelists laughing) - The idea comes up. But I would also
add that this is really important, these things Rumman just said, because that's kind of the definition of the humanities, as you said, right? So it is: how do we align our values just in general, and what does that mean about our dignity, and our freedom, right? That was, that's a very romantic Enlightenment way to put it, right? But the idea is that that's where our dignity resides, you know, these values and how we protect them and like what they are, and who has this dignity? An
d, of course, since the Enlightenment and the contractualists and all those guys, but also since ancient Greece, since that's the origin of the idea, and throughout any spiritual tradition, Buddhism, et cetera, the answer is everyone. Oppression and inequity have been a constant in human history, but so has the idea that we have dignity, right? So however we end up using this technology, it would be up to us to connect it to the larger project of the humanities. We need to think very deeply about
how we see it: not only which goals we're trying to reach in an optimal way, but why we should care about certain problems and not others. And what issues are gonna be more central than others in connecting, well, in this romantic way, the project of enhancing freedom and dignity with the enhancement of our technologies and our knowledge. - So I'm at the institute at Stanford called Human-Centered AI. So human, not quite humanities, but close. And what we mean by that is learning ho
w to build systems that are of the people, for the people and by the people. So bringing in from the start people with different points of view, people from different demographics, from different fields of background, ethnographers and sociologists and so on. And then building a system that, in traditional software you build systems to serve the user, and yes, we wanna do a good job for that, and we wanna give them the right degree of automation and control and build a system that satisfies them
, but then look more broadly and say, "Who are the other stakeholders who are not users, but are affected by this system? And what's the effect on society as a whole? Is this gonna cause mass unemployment or whatever?" So looking at that across that broader spectrum, I think is an important approach. And it means involving people who are not just engineers, but have this broader background and broader outlooks. - So I was actually a hardcore science and engineering major, but you know, I'm also
a big fan of history, and I think that's very important as we look to the future: to understand the past. So for example, right, we talked about the industrial revolution. You know, if you know what happened in previous industrial revolutions, right, you know, the robber barons of the railroad industries. I mean, that's what we have today, right? High tech has become the modern-day robber barons. I mean, you know, if you understand history, you know, that
that's probably what would've happened. You know, also, understanding what the societal drivers of technology development are. If you look at history, right, the number one application of a new technology is military, wartime, right? That's dynamite, that's nuclear weapons, that's, you know, every new technology. And I'm afraid, you know, we haven't overcome that; some of these new technologies are going to be used in war, right? And we haven't, you
know, as we know from current events, we haven't overcome that, right? So these are things that are happening, and understanding the humanities, understanding the human condition, the sociological drivers, the historical antecedents, matters, because all these things will just repeat themselves with a new technology, I think. - So related to all of this, what has AI taught us about humanity, about what it is? If you think of AI as a, usually we talk about AI learning from humans. What have we learned from AI about
ourselves? - Can I start with that? I mean, I think an interesting way to look at this question is the concept of AI versus the technology that we have right now. The concept of AI is incredibly interesting, right? It is like a mirror image of who we are, but without the anthropocentric drive that we must be the only intelligent creatures in the universe. The idea that, at some point, not machines that are produced by us in a simplistic way, the way we produce coffee, but very complicated
machines could reach a level of intelligence that is human, or superior to human. Of course, with all these incredibly big questions about value alignment and what gives humans their dignity, which I don't think is necessarily high degrees of intelligence. So that concept is gonna teach us a lot as we move on because we have to grapple with how our machines approximate that concept. But the technology has already taught us very different things, right? So the technology has polarized the debate
between people who say that this is a grave threat, and people who say, "No, it's not a grave threat, it's actually a very good thing." We already saw the reaction of the public to ChatGPT. We're learning that this is a technology that is going to rapidly increase its power. And that is rapidly changing the way we interact with each other. It has been changing the way we interact with each other for quite a while, but it's in recent years that it has completely changed. So we need discussion,
we need discussions at all levels. We need, I mean, and again, as Rumman says, we need discussions at the philosophical, theoretical, abstract level of what this means for humanity. We also need good sociology: how the technology's already changing the military, how it's changing commercial relations. Are our diplomatic relations getting better or worse because of it? I mean, there are a lot of topics that need to be addressed and they don't necessitate the real abstract idea of artifici
al intelligence. They just need more sociological empirical work. - So to me one amazing development that tells us about ourselves is with these large language models that have been so popular over the last year or so, and everybody agrees there's an amazing technology there. But to me it's not so much that the deep learning networks are amazing, or that the data centers full of GPUs are amazing. The most amazing thing is we wrote all this stuff down and there's so much there, right? So our abil
ity to create language is doing so much more than I ever thought, right? So yeah, we know we're better at language than any other animal by a long shot, but I don't think anybody really realized how much is there. You know, we thought language was just a transport mechanism and you really need a person at either end to make sense of it. But now a computer that doesn't have any direct connection to the world can only look at language and take so much out of it. So that tells me that we did someth
ing amazing in inventing language. - I'm actually still amazed by how great our human brains are. I mean, if you think about kind of, you know, what we had to do to create something that could play chess, or to be able to write, you know, an essay. I mean, these are huge machines sucking, you know, megawatts of power, you know, having the world's kind of silicon basically being invested in building all these transistors, and you have in your head, you know, a two-pound biological set o
f neurons that are operating on a timescale that's, you know, millions of times slower than what these computers are doing. And you're still able to outdo machines in a lot of activities. So that to me is, you know, we still don't understand ourselves, right? And that's, I think, a great question for research in the future: you know, what's special about our brains that we still haven't been able to replicate in AI? - I think building on that note, one of the most
interesting things we're learning from AI is what are the things that it's good at, what are the things that it's bad at? And the things that it is good at are the things that we will end up basically automating away. And what's fascinating is all of the things that we give monetary value to are the things that AI is very good at, right? So even when we think about all the articles and the headlines that come out and say, you know, AI can do X faster than profession Y or, you know, this profess
ion's gonna be out of a job. To me that doesn't mean that we should be worried about our lives, or AI will take us over. It actually makes me think that it may be time to reprioritize what we consider to be of value, right? So there's this really great book about the creation of the GDP, right? Gross domestic product was not something that naturally occurred. Some guy sat down in a room, created it, right? And it is specifically about the things that were decidedly not included in a measurement
of value. And it's interesting to turn all of that on its head today: as I mentioned, all of the things that we consider to be drivers of productivity and GDP are the things that AI can do, and all the other things that we literally gave no value to are the things AI cannot do well. It cannot care for a baby. It cannot calm a crying child, you know, it cannot, you know, create the most amazing songs someone's ever heard. And it's deeply fascinating to me; maybe it'
s teaching us that there's a moment that we can actually have today for recalibration. Like, why not revisit that optimism of over a hundred years ago and say, "Maybe all of us only really need to, quote unquote, 'work' three days a week. But those four days don't mean you're wasting time. That is what it means to be human. That is what it means to provide value." - All right, those are great answers. And I just wanna say, I'll interject myself a little bit here. One reason I w
anted to have a roboticist is that if you've ever tried to make a robot play soccer or drive a car or something, the notion that we were gonna hit AGI any day now is ridiculous, because the hardest things we do are recognizing each other, are walking across this stage, and I think the biggest thing AI has taught us is really how incredible the things we take for granted are as achievements of intelligence. All right, I would- - I should also say, I don't know how many of you know that Eri
c's team actually won the World Championship, the Robot Soccer Championship, here at Bowdoin, so he knows what he's talking about when he says there's still a long way to go. (Daniel laughs) (audience applauding) - It was a miracle. (all laughing) All right, so I would like to have a little time for audience questions, but I would also like to save some time at the end to give our panelists a chance to have some concluding remarks. But before we begin the question and answer,
once again, I'd like to address the importance of maintaining a respectful and courteous atmosphere and share a few ground rules. First, if you could identify yourself when you ask a question. I would like to prioritize student questions. We are a college after all, and my own kids told me years ago that students ask the best questions by far. So we would like to prioritize student questions and please keep your questions and your comments relatively concise and relevant to the topic. And becau
se I'm a professor and I can't help myself, I might restate your question a little bit for clarity. So. - [Sydney] Hi there, my name's Sydney and I'm a senior in Professor Chown's AI Ethics course. I wrote this down, so hopefully it's concise enough. (panelists laugh) - You taught your students well. - [Sydney] In the public sphere of consciousness, AI is seen as being detached from the physical realm. However, AI is in fact very material and energy intensive. And as the world races to adopt thi
s technology, many in my generation worry about techno-optimism blinding society to its real-world impacts. Like Dr. Chowdhury said, there are many people building this tech for the purpose of ego rather than for improving people's lives. Like, for instance, those in underdeveloped nations being exploited for their natural resources who never actually get to reap the benefits of this technology. How do you personally grapple with this and how can we potentially keep AI from exacerbati
ng inequalities globally? - I can start, I mean that's pretty much, I mean, I'm sort of cheating, right? (giggles) That's pretty much what I do. And I think to add to your list, one of the things I'm also increasingly concerned about, especially with generative AI models and these general-purpose models that require massive amounts of compute, is the climate impact. So what I'm working on now with my nonprofit, Humane Intelligence, is this concept of red teaming, this concept of improving AI mod
els by giving regular people access to them with the purposes of testing and improving them. But one of the things we are doing at an upcoming summit in the UK is designing a few demos for people to interact with. And one of them is by my friend and colleague, Dr. Kristian Lum, who's very well respected in the responsible AI world. She's done a program that generates cover letters based on resumes. So are there embedded biases that are picked up when, you know, if the model, the large language
model picks up that it's a woman or it's a person of color, or they have a different kind of a major, or background or went to a different kind of school? Another one that she did was on how AI generates stories for children, which I thought was a very fascinating one. So she's trying to create them in context. So to your point about like AI seems context-less, she's designing these evaluations based on like what are the ways in which people might use this technology, and how might harms manife
st? But to the point about climate, Dr. Sasha Luccioni, who's at Hugging Face, is very passionate about the climate impact of AI models. And she's actually assessed the energy usage of over a hundred different AI models. So we're making an interactive visualization for technologists to grapple with. Like, you know, you think about these systems as living in a cloud. The cloud is actually a physical thing; you are borrowing compute off of somebod
y else's machines. But now those machines don't have to contribute to creating energy loss or using water where you live, they can go be put somewhere else and make somebody else's water more expensive and harder to access. So, you know, hopefully that will help educate technologists about the impact of the tech that they're building. And in particular now, like I said, just helping Sasha's work about climate impact be more visible, 'cause that goes under-discussed in tech. But thank you for you
r question. - I think a lot of the research is on these climate issues and how to make the systems more efficient, how to mitigate it, how to place them in the right place, how to shift the share of the load over time, as weather and other occurrences occur. Last time we looked at it at Google, we tried to compare the energy costs of using these computerized systems versus a substitute. And that's a little bit hard to do because like if the substitute is driving to the library, you're less likel
y to do that at all. So it's hard to calculate an exact substitute. But we came up with a number that it's six times more energy-efficient to use these computerized tools than to do without them. - Yeah, I would just like to add to something that Rumman said that is very important, which is that, in a way, your question is about how technologies are affecting global patterns and climate change and all that. And we're just bad at that. And so we need to, hopefully the te
- Yeah, I would just like to add to something that Rumman said that is very important. In a way, your question is about how technologies are affecting global patterns and climate change and all that, and we're just bad at that. We can't really blame a specific set of technologies, but it's true that we just haven't developed the political framework to deal with it. So in a way, that's why I think it's important to separate the issues about the technology itself from the idealized notion that the AI lives in the cloud, because the technology has to be a very down-to-Earth, financially competitive thing. But one thing that I really want to emphasize is that this should invite us to reevaluate our values, right? Reevaluate how GDP is measured and why we care about those things. Even what metrics are we using to assess the impact? I think that's a huge question, and we need anyone who has something to contribute to this question to participate. And, yeah, it's a tough question. - Sajel. - [Sajel] I'm Sajel. I'm a junior in Professor Chown's class. I'm also a CS major, so I'm reading your textbook right now actually. Something I was wondering, and I was really inspired by your quest
ion, Professor, or rather your answer, Professor Norvig, was: how do we teach software developers to think about what they want to optimize? Because in my CS classes we're taught about algorithms and optimization and all those things, but we're never asked why we want to optimize those things. So how do you see that playing out in the classroom? - Yeah, I think it's a question of training, and looking at examples and saying, you know, here's a case study, here's how it was built, here's what went wrong, and here's how you could have corrected it by doing a better job up front of surveying what everybody wanted and asking, am I serving everybody fairly? And I think some of these questions are hard, and some of them should be outside the hands of the software engineer; they really belong to society as a whole, right? So there was this big controversy over software for parole decisions, to recommend to judges whether someone should get parole or not. And the critics came in and said, "Look, by this measure you're hurting black defendants twice as much as white defendants." And the producers of the software said, "Yeah, but we have this other measure on which the two groups are equal." And so the question is not, did somebody do a bad job, or was somebody intentionally trying to be biased? The question is, which of these two metrics is the one that society cares about, the one that really should matter? Or how do we trade the two off against each other? And we've always had this issue, but before we had computers it was in the hands of the judges, so nobody had to quantify it, right? And there have been attempts to quantify it. There was some, I've forgotten the name now, some old-time English jurist who said, "Better to let 10 guilty men go free than one innocent man be jailed." So he was setting a numerical value of 10.0 on those parameters. I don't think he really meant it literally; I think he meant it more metaphorically. But with software today, we have to set that parameter to a precise number. And, you know, your training as an engineer should make you aware of that, but it's not gonna tell you what the right number is. That's gonna be done by decisions at the societal level.
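(A minimal sketch, with invented counts, of how the two competing measures described here can be computed from the same predictions and still tell different stories; none of these numbers come from the COMPAS data.)

# Hypothetical confusion-matrix counts for a risk tool applied to two groups.
# tp = flagged high risk and did reoffend, fp = flagged but did not,
# fn = not flagged but did reoffend, tn = not flagged and did not.
groups = {
    "group_a": {"tp": 40, "fp": 40, "fn": 10, "tn": 120},
    "group_b": {"tp": 20, "fp": 20, "fn": 30, "tn": 140},
}

for name, c in groups.items():
    # Metric 1: false positive rate -- of people who did NOT reoffend,
    # what fraction were wrongly flagged as high risk?
    fpr = c["fp"] / (c["fp"] + c["tn"])
    # Metric 2: precision -- of people flagged high risk,
    # what fraction actually reoffended?
    precision = c["tp"] / (c["tp"] + c["fp"])
    print(f"{name}: false positive rate = {fpr:.3f}, precision = {precision:.3f}")

# With these invented counts, precision is identical for both groups (0.500),
# yet group_a's false positive rate (0.250) is twice group_b's (0.125).
# Which gap matters more is exactly the societal question raised above.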
- I really appreciate your point. So Sajel, I highly recommend you look up what Peter's talking about: that's the ProPublica articles around the COMPAS algorithm. And to your point, I appreciate that you bring up that it takes a wide range of people. ProPublica's a media outlet, and they hire data scientists to be data journalists. So this was uncovered not by computer scientists or engineers; it was uncovered by people working in media who were incredibly data literate and trained as data scientists. So ProPublica publishes the first article, and Northpointe, which is the company that made the COMPAS algorithm, publishes their rebuttal, and they're going back and forth, and as like a stats person, I'm nerding out over like, "Oh, but our, you know, efficiency rate is blah." But here's the thing, the very fundamental end of that story. Julia Angwin, who led that investigation at ProPublica and went on to found The Markup, who, I tell her she was the founder of data journalism, she is way too humble, but that profession would not exist without this woman. There was a court case in Wisconsin where a white defendant actually challenged the use of the algorithm, saying that he was unfairly impacted by it, which led to the questionnaire behind the metric being released to the public. I highly recommend everyone go look at that questionnaire, 'cause at the end of the day, it wasn't even about the numbers and the debating back and forth. That is genuinely the worst questionnaire I've ever seen in my life. So imagine this scenario. This person's gonna be up for parole. They're sitting across from some sort of court officer, and they're asking them things like, (clicks tongue) "Did you grow u
p in a broken home? Did your parents have to feed you with food stamps? Have any of your friends ever been in gangs? Have your friends been arrested?" Why should any of that matter to an individual who is up for parole? Why should that? And yet all of that was in the questionnaire. So to your point, Peter, the fundamental problem was the concept of the questionnaire, which is a human decision. Human beings sat and they wrote these questions, how we quantified it and stuck it behind a system. And
, so Reva Schwartz, she's an amazing scientist at the National Institute of Standards and Technology. She says we wrap things in data to homogenize them and to make them seem more objective. And that is like the perfect example. The starting point was this incredibly wrong questionnaire with all the wrong questions, but then when we wrap it in data, we stick it behind a computer screen and we throw a bunch of math behind it, and again, like, I am a stats person, it seems as if we
're arguing about numbers, but actually what we should be arguing about is why that questionnaire existed in the first place. - And I think that's exactly right, that's a great point. But the other problem was, you know, they're trying to predict recidivism: are you going to commit a crime or not? And you could only do that if you have some ground truth, but they don't have any ground truth. - Correct. - Right. There is no record of who commits a crime and who doesn't. All you have is who was arrested and who was convicted. - Correct. - And those two things can be very biased. And if they're biased, then the predictions are gonna be biased. - I will grant you student status, Evan, even though you graduated recently. And I would remind you that we're asking for concise questions. - [Evan] Yes, so Professor Chown already knows I'm pretty techno-pessimistic, but I'd like to put a caveat there: I'm actually, for the first time in a long while, optimi
stic about the transformer model, because it can de-silo the way different modes of information interact with one another. But we were talking about looking back in history, and if we look back at, you know, Web 2.0, the invention of centralized search brings up the advent of surveillance, and suddenly we're in a race to have the biggest surveillance machines in the world. Then we look at the tech sector trying to convince us all to live in a metaverse where everything is commodified, and the only response is everyone and their uncle tries to buy the next coin that will replace the global reserve currency, so they themselves can be the rich kings this time. We talk about a collective action problem. How do we reckon with the fact that, if the transformer model is not as transformative as it promises right now, we're looking at a huge AI winter, which means economic impacts on the tech sector, which already seems to be in turmoil? How do we reckon with the collective action problem that time and time again, we don't take collective action to stand up against these tech firms, and then we all pursue our own individual gold rings? But this time we're looking at platforms that have access to deepfakes, disinformation machines, surveillance, and targeting. And then we also have a race for individuals to try and utilize these to the maximum so that they can themselves get ahead. How do we reckon with collective action? Because if we look back at the industrial revolution, we had great social progress, but that social progress was very flawed in many ways and was very absent of certain groups. And now we're on a globalized scale and an even shorter timeframe. So how do we think of collective action in that sense? - Is this one of your students? - Can you take your prerogative to summarize? - That was like, that was like a great, like thank you, literally, thank you for that question. So like I'm increasingly thinking that honest
ly we're not the people that are gonna figure it out, because we're in the midst of, you know, one of the most important things happening. One of the ways in which AI is clashing with society right now is the writers' strike and the actors' strike, right? So I like somehow fell into a crew with some people who were trying to do good stuff there. And so I've been learning all about how writers have been mobilizing, and actually one interesting trend we're seein
g is actually the resurgence of the union, right? And the rise of collective action movements. So what happened with the writers' strike is, you know, it's like the perfect small encapsulation of exactly what you're talking about. These studios in Hollywood have all the wealth, they have all the power. And traditionally, you know, actors and writers needed to create guilds because there was no way they could individually push for better pay, non-exploitation, et cetera. And it reached a point, with many things but AI being one of them, where these people, who, you know, don't build technology, right? Technology is not what they do, and frankly, many of them never interact with AI. They realized, this thing is going to come for us. And it's not that the technology's gonna come for us; the studios will use this to extract everything they can out of us and throw us out in the street, right? So how do we fight back against it? And whether or
not, like, you know, like I said, I don't, there's a lot of drama, a lot of politics as there is with any sort of very power-imbalanced world, but I think there's something very beautiful in what they were able to pull off, right? They actually kind of won, question mark, right? Like, they were successful in negotiating for protections. And what was the most interesting thing about it is, you know, they're not tech people. So I think what happens with tech people in some of the rooms I'm in, we
like spin forever on like, maybe we should limit the number of parameters, and like, you know, we actually, our brains are now wired that we become tech focused. But these people are not tech focused. They actually are saying, "Okay, well, this is a world where we think AI is going to be used, but I want to imagine what my place in that world is. So how do I design protections so that the things I want to do are the things that are protected and everything else AI can do. I don't really get what
AI can do, but I know what I want to do." So what's really important about how they negotiated their rights is that they just designed it around what they thought was important for their profession. And it didn't matter what the technology was. So, you know, maybe we're not the ones to solve it, because we're too in our own heads and we think too much about the technology. And frankly, I was joking with somebody that the resolution of the writers' strike is the New Hampshire of, you know, AI governance, right? Like New Hampshire is the first state to hold a primary, and sort of the vibe New Hampshire sets influences the rest of the country. It is good that it ended up with the writers winning, in a sense, and getting concessions, because I was very, very worried that if they didn't, it would bode very, very poorly. So I guess, you know, unions, go unions, I guess. (audience chuckles) - Carlos. - Yeah, I
completely agree. And it's a very good question, because I don't think the crisis, the political crisis that we have now, is something that can be solved by the technology. It's not a problem of imposing or using or optimizing a technology. You know, union organization is as old as the industrial revolution; it was a response to the industrial revolution. But solving a political crisis, reorienting our political compass, is something way more complicated than really understanding the technology and its impact, although that's already very complicated. I'm actually much more worried about that kind of problem, how to reorient ourselves politically. I think that is gonna be very important to really get a grip on all these other questions concerning the technology. - (giggles) This poor guy has been waiting for like forever. - [Steve] I'm not a mature student. (all laughing) Just because I have, it's age bi
as. (panelists laughing) - Go for it. - Thank you. My name's Steve Pollock, I'm not a mature, I am a mature student of sorts, class of '79, when the typewriter was still king. Thank you. You know, very interesting discussion, but also disturbing in many ways. I've worked in healthcare for about 30 years. AI has a lot of, we'll call it pattern recognition, whatever you wanna call it. A lot of great value, potentially. AI, and I hate using the term AI because I don't know what that means. There's
machine-learning and other things, but we use this term AI just as we use this notion of surfing the internet. And I've never gotten wet doing that. But I have a couple of immediate concerns. First of all, to your point, all technology is in fact a social construct. You can't divorce it from the context in which it's emerged. It may be applied in another context, have a different effect, but it is a social construct. And the notion of responsibility is critically, critically important. Probably
more so with so-called AI and computer technology than anything in the history of humankind, because it permeates everything. Provided there's electricity. You don't have electricity, we have some problems. Okay? We have to have power. My question really has two parts. Number one is this issue of trust. Not trust in the algorithm, that's another kettle of fish, or trust in the data, but trust in how I interact with the world today. And that's very germane to Bowdoin and to the B
owdoin education. I grew up with a typewriter. I will swear that I wrote and thought much more clearly having gone through that process than with the computer system. Cut and paste has been a really bad thing for the way I write; my writing is much worse. But in any case, we now have email, which is becoming more and more clever, with gobs of it written by AI. We have photographs that are manipulated by AI. I've just read that Google has a new feature built into its phone where people who are frowning can suddenly be made to smile, right? What do I trust anymore? I don't trust my email. In fact, many of my colleagues don't use email anymore; they use direct chat. They stopped using email because they get overloaded and they can't trust it. If I send an email to somebody who doesn't know me, they absolutely don't trust it, and it gets trashed. Now I look at a photograph that somebody sends to me, I don't trust it. Somebody writes a paper at Bowdoin. Well, I think we're here
, at least when I was here, we're here to learn to think, and that means using your brain, not a computerized brain. If we don't use it, we lose it, or we don't develop it. And I've seen this in medicine; it's a critically, critically important thing. So the question I have for you is, we live in this world unfortunately full of lots of bad actors and manipulation. What do we do about this trust factor? And then the other question, which is related to that: who's liable? It was the algorithm's f
ault. Well, somebody made that algorithm, right? So we have to really look at that issue of liability. Okay, so really trust and liability. - I can't answer, I'm not gonna answer all the questions. - Yeah, I can answer that one quickly. I think we need, for example, a point of comparison for the AI industry. How do we wanna treat the industry? I mean, it does so many things, right? They're into autonomous cars and things like email. But there are standards, legal standards for liability that could be used; it's just very hard to implement them. And this is a big problem for lawyers: how to protect privacy, how to protect consumers. We can't treat the AI industry in terms of strict liability, because the processes that are performed by algorithms look very different from, say, selling food to the public, or something like that. So it's a very tough legal question, but I think the wider question that you're raising is, again, a social question of how we want to interact
with these technologies. You mentioned you may, for example, not use them at all. I think that's gonna be very difficult because of the way we communicate, and we better, it would be better if we have some social way of developing trust of this deeper kind, trusting the system because it's not going to manipulate us or constantly give us wrong information or distort our messages, for commercial purposes or any other purposes, but. - So specifically in the issues of fakes, fake photographs or fak
e news, you have two options. One, you could fight this detection war where the bad guys make better fakes and you make better fake detectors, and you go back and forth. I don't wanna play that game. I would rather say let's switch to basing trust on provenance rather than on content. And so when you were in college, there were three TV networks and one or two newspapers you had access to, and they might have been a little bit biased one way or another, but you kind of trusted that they weren't gonna completely make stuff up. Now instead of having that small number of information providers, we have millions and millions, and many of them are making stuff up. So my hope is we go back to a world where there's a smaller number that you trust, and then it's their job to make sure that the stuff is real. And then we can cut out a large portion of that junk and just restrict your diet to things that are more trusted. And so, in some sense, maybe this introduction of more fakes is a good thing,
because it will force us to confront that issue. And I think right now, too many people are just way too gullible. And maybe if it gets worse, then it can start getting better. - Let's have... - Just one quick comment. I do agree, though, that the legal framework hasn't caught up yet to the advances in technology. So things like autonomous cars, right? Where does the liability lie: with an OEM supplier of an AI algorithm, versus, you know, the sensor maker, versus the car manufacturer? I mean, who's gonna be liable for a system like that? No one really understands that fully yet. Or take copyright law. Those laws were developed, you know, when we were doing cassette tape recordings, right, in terms of copyright. Now you can have an AI that can listen to music, or train on an author's complete set of works, and regenerate it slightly differently. Is that a violation of copyright? Right now it's not, right? So these are all things that haven't been resolved yet. - I'll just put in a commercial for the
digital and computational studies program at Bowdoin, which is exactly designed around these kinds of issues and where we talk about accountability and agency and all of these things with regard to tech. We have time just for one more question. I wanna have just a couple minutes for the panelists to kind of wrap up. Sorry about that. You've been waiting patiently, so. - Hi, I'm Ruby. I'm a sophomore. I'm also in Professor Chown's AI class. - You guys are amazing. - [Ruby] We have a lot of good q
uestions. I will keep this really quick. So Bowdoin's a college that's dedicated to serving the common good, and oftentimes regulations do benefit that common good and add to it. Have any of you thought of regulations that the government should be working on? Or have you looked into what the European Union's doing with its AI Act and thought that that's the path the US should follow? Or is there another idea you have about types of regulation? - So I will go first and then probably you have, I mean, one thing that I think a lot about is, there are- - Glad to have you. - Regulations that are international. AI is not gonna be a US technology, it's gonna be an international technology. It's already being used by everyone. So I wonder if the framework that actually needs to emerge should eventually be an international framework, just the way there's an international framework for regulating nuclear weapons. I'm not saying that that's a good framework for regulating AI, I'm just saying there'
s reasons for thinking that an international framework would be a good thing. That's it. - And let me say that in addition to government regulation, there are other forms, right? So there's self-regulation: all the companies now have sets of AI principles, and they have new techniques for training and release internally. I get a lot of complaints from old-timers at Google saying it's 10 times harder to launch a product now than it used to be, 'cause there's all this new self-regulation involved. So that's one. We could go more toward professional regulation. So I can call myself a software engineer, but I never got certified for that, and anybody can use that title. But I can't call myself a civil engineer and say, "Hey, I'm gonna go build a bridge." That's just not allowed, right? I need to be certified for that. So I don't know if that kind of regulation would be good for the software industry or not, but it's at least a possibility. And then there's third-party regulation.
I think Dan mentioned Underwriters Laboratory. I think that's really interesting, because the last time there was a technology that the public thought was gonna kill everybody, it was electricity, and Underwriters Laboratory stepped in and said, "We're gonna put a little sticker on your toaster that says it's probably not gonna kill you." And that was great. And so the consumers trusted it and that meant that the brands trusted it and were willing to undergo certification. And as it turns out,
Underwriters Laboratory now has a new program in AI safety and I joined their board. So I thought it was a good story. I don't know where it's gonna go, whether it's gonna be an important part of the future, but I'm willing to give it a shot. - Dan, do you wanna? - No. - Yeah, yeah? All right, I would just like to give each of you just a minute or so just to wrap up. Maybe some folks have expressed a little concern about the future with AI. Maybe say something to allay those concerns or. (all la
ughing) One of the problems we're having in our course on AI and ethics is trying to end the course in a way that's a little bit positive, so everyone doesn't walk out of it all depressed. Just any concluding thoughts that you all have. Let's start at Peter and work our way back down. - Huh, I'm still thinking, I don't know what the concluding- - Or someone else can jump in. - I just need a quick minute. You know, we're here for the, you know, inauguration of Safa, so I think she's a cognitive s
cientist by training. We haven't mentioned much about cognition and I think that, you know, it's an interesting area, it's like, you know, thinking about whether AI, how do we understand it from a kind of cognitive systems level. You know, these are all kind of things that I think maybe you're teaching in your program here to the students. And I think that it's a bright future for people, who have some technical background but have these kind of broader perspectives, and I think that's hopefully
what Bowdoin is trying to establish. And I think that'd be great. - Thank you for the commercial. - Maybe I wanna touch on like this idea of optimism, right? And especially 'cause I work in the field of responsible AI, and I think sometimes we focus a lot on harm reduction and assume that good comes from harm reduction, but actually it doesn't. So doing good and not doing harm are actually two different things, right? So if we're, and it's not to say we shouldn't focus on harm reduction, we sho
uld, but I think sometimes what we lack in responsible and ethical AI isn't what's called an affirmative vision, right? What is our goal for the future? What is the thing we want? What is the change we want to see? So for example, like some of the CEOs of some of these big AI companies will say things like, "AI will cure poverty." And it sounds like a joke, right? Because it's as if it's like a positive externality of creating this sentient cognition machine that's just gonna go, "Cure this, lik
e, deeply entrenched societal cultural economic issue that has plagued humanity since the dawn of humanity," right? So no, right? But you know, in a sense, could AI cure poverty? Maybe if someone decided I want to spend a lot of money, time, and effort towards using artificial intelligence to, you know, tackle some of the issues that lead to inequity and poverty. And it's not to say, you know, and I think sometimes those sentences sound so simplistic to those of us who live in the responsible AI
world, 'cause we know that these issues are socio-technical. But I think what we do need to have and what we can shape, maybe my challenge to the students here, is what is our affirmative vision, right? So we know what is wrong. Now let's figure out what is good. - So let me give a positive note that's a little bit smaller than solving poverty. So last week I was talking to a friend who's a conservation biologist. He gets to do cool stuff like go to the Antarctica and study the penguins and the
ir life there. And he said, "Well, I'm a little bit of a data scientist, like I know how to read in some data into a spreadsheet and make some statistics and some small plots and so on," he said, "but I couldn't do anything more than that." He said, "But now I'm using this copilot system and I wanted to make a program to make an interactive map where I could click on it and see some things and zoom out and something. And I asked all these questions and it showed me the code. And look, here it is
and it works. And I never could have done that without that kind of help." So I think that's the kind of future that I'm looking for: empowering people to do much more than they ever would have tried to do before.
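(As an illustration of the kind of code an assistant might hand a non-programmer in this scenario, here is a minimal sketch using the folium mapping library; the colony names, coordinates, and counts are invented for the example.)

# A tiny interactive map of the sort described above: click a marker to see
# a popup, zoom and pan freely, and open the result in any web browser.
# Requires the folium package (pip install folium); all data here is made up.
import folium

# Hypothetical penguin colony observations: (name, latitude, longitude, count)
colonies = [
    ("Colony A", -64.77, -64.05, 1200),
    ("Colony B", -65.25, -64.27, 850),
    ("Colony C", -66.00, -65.40, 430),
]

# Center the map roughly on the Antarctic Peninsula.
m = folium.Map(location=[-65.0, -64.5], zoom_start=6)

for name, lat, lon, count in colonies:
    folium.Marker(
        location=[lat, lon],
        popup=f"{name}: {count} breeding pairs (made-up number)",
    ).add_to(m)

m.save("penguin_colonies.html")  # open this file in a browser to explore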
- Yeah, so I think that positive note, I mean, being positive about a technology as potentially disruptive as AI, but as helpful too, is gonna need a reevaluation of our goals as a society. And that's a really good thing if you think about it, because we need that anyway. But also a kind of historical reexamination of other technologies and how we dealt with them in the past, so that bad things don't occur again. And definitely the potential for good is there, if we're careful with these other aspects. If we are clear about which problems emerge from the technology as such, and which problems emerged because of the societies we've created and the injustices that already exist in those societies, I think we can be much more positive about what we can do with this technology. But we need clarity, and also a big debate about what it is that we value and want from this technology. - All right, first of all, I'd like to thank the students from my class, who asked better questions than I did. (panelists applauding) (audience applauding) But more so I would like to thank our panelists, who have been just fascinating, and I thank them for coming and joining us and sharing some of their insight and wisdom. So thank you very much. (audience applauding) (no audio)
