Hello and welcome to tonight's
Commonwealth Club program. My name is Deepa Seetharaman. I'm a tech reporter
covering AI for the Wall Street Journal. Before we get started,
a couple of reminders. Tonight's program is being recorded. So please silence your phones
for the duration of the program. Also, the Commonwealth Club
would like to welcome any JCCSF guests who join tonight. If you have any questions
for our speakers, please fill them out on the question cards
that were on your seats. And
if you are joining us online
through the YouTube chat. Now, it is my pleasure to introduce tonight's
guests. Dr. Joy Buolamwini is the founder
of the Algorithmic Justice
League and author of Unmasking AI: My Mission to Protect
What Is Human in a World of Machines. Dr. Buolamwini has been at the forefront of AI research
with her lifelong passion for computer science, engineering and art,
influencing her groundbreaking work. And as the poet of code, Dr. Joy, as she likes to be called,
creates art
to illuminate the impact of AI in society
and advises world leaders on AI harms and biases. We also have Sam Altman,
the CEO of OpenAI, the AI research company
behind ChatGPT and DALL·E. Maybe you've heard of OpenAI. It was founded in 2015
with the mission to build safe and beneficial artificial general
intelligence. We'll get into what that means to benefit all of humanity. More than 100 million
people use ChatGPT every week and more than 2 million software
developers are using its AI models.
Sam has flown all over the world this year
discussing AI and safety with world leaders, and he's proposed at one point
the formation of a global or U.S. agency to license the most powerful
AI systems and enforce safety standards. All context for the conversation
we're about to get into. So let me start off
just by asking you, Dr. Joy. In the book's intro,
you write about the coded gaze. What is that? So I don't know if anyone here
has heard of the male gaze, the white gaze, the postcolonial gaze. Okay,
so if we look at the notion of the male gaze, this came from media studies
and they were looking at film and the portrayal of women
through a men's eye and what that rendered
and what that left out. And so it was a discourse
in terms of who's positioned as worthy and who has the power to decide
what's worthy. And so when I borrow that term
and I'm thinking about tech with the idea of the coded gaze, it's
looking at power again. Who has the power to shape the priorities,
the preferences of tech, and also whose prejudices get embedded. And so that's the notion of the coded gaze
which I came across when I was literally coding
in a white mask at M.I.T. as a grad student
working on a project around Halloween, which is why I had a mask
that was supposed to track my face. And it wasn't getting my face that well. So at first
I actually drew a face on my hand, and when it detected the face of my hand,
I was like, okay, now anything is up for grabs. So that's why I grabbed the mask. And
when it detected the mask,
I was just as shocked because I was looking at something
that's less than human being detected,
whereas my dark skin human face wasn't. And so for me, that was my first
recognition of facing the coded gaze. Do you think, you know, we've had this broader conversation
about existential risk in the field. I mean, what kind of attention
do you think the industry and the field are paying
to those kinds of harms, those ongoing harms
that are happening to people today? It really depends on the groups
you're speaking to. So when I talk to social justice groups
or tech justice groups, these are the issues
they're thinking about. People like Porsha Woodruff,
who was arrested while eight months pregnant due to an
AI-powered facial misidentification. She reported having contractions
while in the holding cell and had to be rushed
to the emergency room afterward. And this was three years after Robert Williams was also falsely arrested, due to a facial recognition misidentification,
in front of his two young daughters, by the same Detroit Police Department using faulty facial recognition. And so those in the communities,
I mean, those are the types of conversations
that I'm embedded in. Now, when it comes to the media
conversation there, you certainly see a lot more
focus on risk. How about in the industry, Sam? I mean, the circles you run
in, are they primarily talking about existential
risk, long term harms, potential catastrophes, or is it risks
like what Dr. Joy is talking
about? Most of the conversation
is about the risks we face today and the impacts these systems
are having today and in the near future. But there's definitely some of where is this going to go? And as these systems progress
over the coming years and decades and we get to a level of intelligence that can
surpass human intelligence in some ways, how do we make sure that humans are still at the center of the story
and that we don't have some sort of catastrophic risk? So I'd say both are part of it. And I actually think that's healthy,
because it is this one continuous curve of the technology, and we have to make it work
for people all the way along. But it's certainly more of the short term. Are those ideas competing, the long-term risks versus the short-term? Can you pay equal attention
to both problems? We, and I think the field in general, pay
more attention to the short-term things. And I think that's as it should be,
because you can do more and understand more about the things that are in front of you
that you can actually observe. And the theoretical stuff is further out. But I think it is important to think about all of the risks
and potential harms and benefits of A.I. along a continuum. And not like we try not to have people
at least now, who are like, I only think about the long term. I don't think about the short term,
but say, like, we are trying to build and deploy continuously, iteratively, into the world something that's
going to get more and more powerful. And we need to make that work
all the way along the way. Dr. Joy? Now, so the way I see it is, with the doomers, right, it's the sky is falling, put a pause on it, it's, you know, it's a higher risk
than climate change, etc. There is certainly a group of people,
not everybody, feeding into fear. And that fear does drive dollars. So in the book, I talk about AI safety,
you know, getting hundreds of millions of dollars of investment with the framing of X risk
versus harms research,
maybe getting millions right. And so I do think the narrative shapes
the flow of resources. And I do agree with Sam in terms of there
being a continuum of risk, but where I see the most investment is
where there is the most hysteria. Interesting. I actually think there's been almost very little productive investment
that's gone into existential risk. In fact, if any of you know
a productive way to invest, we'll do it. That would be great. It's been a lot of, like, think pieces
that I think have had somewhat limited impact so far. That said, I think although it's very important
and it is our focus, I can't speak
for other people in the field, but I think it's a lot of their focus too, those who are deploying products
which are now pretty widely used, to minimize the harms and maximize
the benefits of these products. The thing that I think is so hard to at least wrap my head around is we are building a technology
that if it keeps going, if it continues on this curve, I think can either be the greatest
or the worst thing humanity has yet built. And this is not our original observation. You know, I think the first time I ever
heard this was from Stephen Hawking, but way before that, I heard about it
in a lot of different ways. And I think
a lot of people can intuit this. And we have got to hold space for thinking about the future
while we think about the present too. Like, I think it would be a mistake
to only focus on one or the other. And to that point with X risk,
I talk
about lethal autonomous weapons. In fact, when I was first getting into computer science and tech,
I got into tech to avoid people, you know, and humans
and the messiness of social situations. I wanted to build robots. That's how I ended up at the Media Lab. So I was working on an art project
when the white mask thing happened in the first place. So for a while
I really didn't even want to engage. And as I was looking
at some of the conversation at that time, there had been another type of letter,
and this one was around lethal autonomous weapons. And my thought was, wow, this isn't
something I should get involved in. But then, as I saw integrations of drones, guns, and facial recognition technology, I had an opportunity
to serve on the EU global tech panel and so you had EU defense ministers
thinking through the future of peace, another way
of talking about the future of war, pick your, you know, angle at going at it. And so there I was thinking through
what does it look like when we're applying
AI to military applications? And so then got familiar with the campaign
to stop killer robots. So we don't put the kill decision
in the hands of an automated kind of system. But then I also started thinking about
the ways in which AI can kill slowly. So I was thinking about this notion of structural violence,
and so we know the gun, the bullet, etc., in terms of acute
immediate violence. But what's the violence of not having adequate health care,
not having adequate opportunity, and how that also can lessen a life
and the quality of a life. So that's the other X risk
I think about: the risk of being excoded, excluded,
exploited, condemned or convicted. I think we've had this conversation about how we are moving toward
artificial general intelligence. And Sam, you've talked
about making progress towards AGI, but it being conditioned
on being both safe and beneficial. Could you talk a little bit about, I
guess, what that means, how you determine what is safe,
what is beneficial,
and, I guess, what is AGI? Easy questions. Maybe AGI first. I think AGI used to be something
that we could kind of hand wave at and we would just sort of say, it's
going to be like when AI gets really smart. And people had all these,
like more precise definitions, but they were all sort of sloppy. And now as we start to get a little bit closer,
I think the definitions really matter. And so there are some people who say,
well, it'll be AGI when it can, you know,
do some jobs somewhat autonomously. And there are some people who say it won't be AGI
until it can, like, solve all of physics, and a lot of things in between,
and a lot of things closer or further. So I don't know what it means anymore,
because I think people use it in such different ways. But let's just say
it means very smart AI, like a GPT-10 kind of level thing, if it stays on the current trajectory. I was thinking when you were saying
that you got into tech because you wanted to avoid people,
that I both had sympathy for
that. And I got into tech
because I wanted to connect with people. I had a really hard time with it. I was, like, very, extremely socially awkward. I mean, I guess I still am,
but as a kid, I was even more socially awkward,
if you can believe it. And finding the Internet was a way
that I was able to connect with people. And for me, that was this example of how incredibly
beneficial technology can be. I had gone from someone
with, like, not a lot of friends and not super connected, and feeling
very alone, to something very different. And I'm always grateful for that. And it is a reminder to me
of what we should aspire for technology to do. So when we talk about benefits, you were talking about healthcare earlier. I totally agree. I think we should find a way to make sure everybody on Earth gets
great health care and great education. And more than that, we should just like cure diseases
so that people need less health care. And I think this is possible.
I think the two major trends of this
decade will be that the abundance
and capability of intelligence increases to a degree that we probably still can't quite wrap our heads around,
even though we get a little preview with GPT-4 and we'll get more
of a preview with GPT-5 and 6. But, like, humanity deserves much more than we have
now. Like, we are all deserving of more and better and a higher quality of life,
and better health care, better education, better happiness, better entertainment,
better connections with other people. And intelligence is fundamental
to a lot of that. There are a lot of people right now
because the price of cognitive labor is quite high. There are a lot of people
who just can't get all they'd like. And as we make that more available
to people, we can do much more. I think the other big trend of this decade
will be energy. As the price of energy falls and falls
and falls and the abundance increases, ideas and the ability to make things happen,
that's a huge increase in quality of life. And doing that in a way where people
are in control and get what they want. And of course, you know, we have to live in a society, so
there's got to be some agreement on that. But that's, I think, like, some of the benefits we'll have.
Safety is a great question. You know, safe for who?
and if I think something is safe and you don't or if I want to be able to use the system
in a certain way and you don't think
I should get to use a system in that way, who gets to decide? A general principle
that I think has always been right is the people that are going
to be most impacted by technology deserve the most say in
how it's going to get used and deployed and what the rules are going to be. But this question of safety,
we can agree on a lot of things like don't let someone make a bioweapon
that they're going to use to create a global pandemic. That one's easy. Don't let someone sort of figure out
how to, like, launch nuclear weapons. That one's easy. But there's a lot of other stuff where people can say, okay, we want safety, and then have complete disagreement
on every other specific case. So that's going to need to be a global and a complicated conversation. We do have a new tool
and we don't know exactly how this is going to go, yeah,
but the fact that AI can learn an individual
or a set of individuals' value preferences means we can do something that we've never been able to do
with any technology before, in terms of giving people input over what the rules
and the value system are in a way that the technology can learn. But AI can make a lot of mistakes,
and I'm not just talking about hallucinations or confabulation
or whatever you want to call them. I mean, Dr. Joy started this whole talk
with the example of the coded gaze and the mask. That's also a mistake. So what I mean is, to
what extent can you trust GPT-4 to police itself and to think about this?
Not very much. Yeah, I don't think... I mean, we've certainly never said
you should do that. Yeah, I think that would be a mistake. But can AI adequately check or police other AIs? I mean, is that the kind of
thing you're talking about? I mean,
I think humans should set the rules. Yeah. What do you think of all this? When I think of AI safety,
or even the term AI alignment, these terms have been evolving
since I started the work. So I tend to ground the work
of the Algorithmic Justice League with ideas of AI harm
or AI discrimination. And then we've done quite a bit of work,
you know, with the harms taxonomy and so that it's concrete. And so I think it is worthwhile
thinking about the potential of AI, the ability to maybe cure this,
or maybe do that. And I think we should put energy
towards it. Absolutely. I think the breakthrough with AlphaFold
and what that can do for medicine is huge. But I get worried
if we're not able to specify not just what potential risk could be, but what the landscape of harms look like
and
what the emerging harms look like across an entire AI life cycle. And so what
I tend to see is conversations, if you look at the lifecycle from design,
development, to deployment, where many people are at what happens
when you're deploying the system, right? So to your point,
don't create the bioweapon. Versus looking at whose ideas animate
what's made in the first place, in design, and what kind of prejudices
might be embedded, right? Even the type of AI system
you create in the first place. Safe from whom? Surveillance is often saying safe
from the other. I tend to be among the other,
you know. So when I hear the safety frame,
I question it, with safety and surveillance, and who is actually being looked at
and under what terms. When I think about development,
this is so much of what I grappled with in my research, because I was very
complicit in the processes I critique. I was scraping data online to build
my data sets to then test
other facial analysis systems. And so this notion of data provenance,
and also the data sources: do we have a sense of what
we're actually putting into these systems? Are we in a mystery meat
sort of situation, which can be the case? Then we finally get to the point
where we're like, we hope it works,
and there's some feedback. Yeah. And in that whole conversation,
I put that under people
thinking through preventing AI harms. What I almost never hear discussed
is the idea of redress. So what happens if something goes wrong? I mean, we tried our best,
but somehow
someone got harmed, right? What happens to that person,
not just in the future, but also people who have already been harmed, right? Artists, creatives
who are talking about the ways in which their data
has been used to power very powerful AI systems and saying, what about us? So I think about that entire lifecycle. Sam just mentioned that the people
who are most affected by AI systems should have the most say in how they're
being built, how they're being deployed, but that isn't happening right now,
and it hasn't been happening, right? I think there are always ways to make
more of it happen and we need to do that. I think there are a lot of important ways
in which it is happening, and I think it's always great to hold companies accountable
for not doing more. So I don't want to push back on that.
Strong plus one, do everything you said. But I think as we learn
what people want from these systems, the users, artists, data providers,
I think it is our responsibility, and we
have to listen more and more
and continually make changes. So with DALL·E 1, there were
a lot of things people were excited about. They said it was not going to be this,
not going to do that. That's fine too; then it did things
that we didn't think it was going to do. Now we have different feelings. Now we want you to change how you do it.
We want these new features, we want to be able to opt in or opt out,
we want this data. Now, you know,
we want a new economic model. We say, great, we'll get that in
for the next version. For DALL·E 3,
we addressed a lot of problems with data. Now we say, okay, you know, you have all these things, now
I want a new thing. And that contact
with reality and society and continuing to listen to people to figure out
how to adapt it, to figure out new models for everybody involved in the ecosystem
that makes all this work. That's I think,
the only way to get to the right answer. It's got to be a dialogue. You can't talk about it in an ivory tower. You've got to put things in people's
hands, figure out where there are benefits,
where they want things done differently. It doesn't mean
you just, like, build something and ship it. When we finished training GPT-4, it took us eight months of safety
testing, of conversations with people who were going to use it in different ways,
or whose data people wanted in or not in, or where there was, like, bias in the system
that we had to do research on, you know.
Even some of our biggest critics, we
appreciate this, with the move
from GPT-3 to GPT-4, would say, like, man, OpenAI really listened on these biases
in the system, and how they addressed it in the situations where we told them it was breaking or not
working for one group or another. So I think that's the way that we all figure out how to build systems. It's, like, not just the technology
but the whole system, technology, society, people, everything, economic flows
that sort of work for all of us. So I have a question
in terms of DALL·E 3, right? Because one of the options,
as I understand it, and of course, you're closer to it than I am,
so let me know if I'm off, is that now
artists have the option to opt out. Is that the case? Well, an artist can, like, put a block
and say, never generate in my style. I actually don't think we'll generate
in any living artist's style, but I'm not totally sure; that's
what I believe is the case, though. Got it. So how do you respond to artists
and creators who are saying, given that some
of the training data
came from their work, they feel they are owed something or that it should be deleted? Well, again,
we're trying not to, and, like, it won't; there is a block,
because in some cases there's, you know, something
that, like, gets into the training set, or, like, information
was improperly tagged or whatever. But we're not out there,
like, trying to train on artists' work. Like, there's other systems
that do do that. But if you use ours, there's, like, a reason
it won't generate in the
style of an artist. The reason I ask is as a new author
and also someone who recorded the audio book,
I'm now a member of the Authors Guild and the National Association
for Voice Actors. So I went to this FTC convening
and they were talking about the impact of generative
AI on the creative economy and different representatives
were talking about the four Cs of consent, compensation, control and credit, right? When it comes to not just OpenAI but
the entire AI ecosystem. And I was curious
how that might be operationalized. So you mentioned voice. I think that's a great example. We built a system
some time ago that can generate audio, has all these great
benefits, and it helps with accessibility. It does a lot of other things too. One of the things it could do
was take a 15-second or 30-second sample of anyone's voice
and make a super convincing AI voice
that sounds just like you. You know, there's obviously
huge challenges with that. So we thought about it
and decided not to release it, because we knew that putting it out into
the world would have a bunch of misuse. So we just said,
You know what, we're not going to do that. Not every technology you build
is something you should release. And as you said,
you've got experience with this. And, like, voice is a very personal thing. Other people have since released things
at different levels of quality there. But I
think the world is going to have to adjust. I mean, there are going to be open
source tools that are great at voice cloning,
I think
pretty soon. But it doesn't mean
we're going to do it, of course. And I think what's going to happen
more and more is companies like us and others will just build some things
and say we shouldn't release this at all, or shouldn't release this
until we can figure out safeguards for it. But that's kind of how I think it goes. Can you talk at all about how you see
AI evolving over the next 12 months? We're 12 months away from a presidential
race in this country, and, you know,
the last several elections have been contentious. I mean,
there has been a lot of fake news, a lot of kind of arguing back and forth. Polarization has only accelerated. Do you think AI plays a role
in amplifying those problems, and/or fixing those problems? Like,
what's AI's role next year? Well, even this conversation
about voice clones: in the introduction, I talk about a woman who hears her
daughter's voice: "Help, Mom, help, Mom. These bad men have me."
And it's a voice clone. And even if it's not the best voice clone,
algorithms of exploitation are exploiting more
than just the voice, right? They're also exploiting
your emotions as well. And I'm also thinking with disinformation
and elections coming up, this notion of algorithms of distortion, you know. And I'm thinking of an evaluation
that Bloomberg News did of Stable Diffusion, so a text-to-image generator. And what they decided to do
was give prompts for high-paying jobs, CEO, writer, architect, and low-paying jobs, fast food worker and so forth. And so, probably not surprisingly,
the higher-paid jobs were lighter-skinned men, white men, and the lower-paid jobs,
where there was certainly some diversity in the training set, were people of color,
and then also stereotypes. So whether it was criminal stereotypes like terrorist, inmate, drug dealer, right, you saw the faces of men of color
being generated. So here you might say, okay,
here is a mirror, AI reflecting society. But this is where what I'm seeing is
more of a
kaleidoscope of distortion. So, for example, with architects:
I believe in the U.S. women make up about 34% of architects. This particular model
represented women as architects around 3%, less than 3%, of the time. So what I'm also thinking
about is the next set of generative image making, or it could be audio,
it can be multimodal, right? What does that look like
when the technologies that are meant to take us into
the future are regressing on even the slow gains we made? We weren't at parity, right.
And so that is something I continue to be worried
about with these algorithms of distortion. What's the line between
having these systems represent
what is true today versus aspiration? I don't know what percentage
of American CEOs are women. It's not a lot. Do you want the response
for CEO to look the way it really is or do you want it to look a little bit
more diverse and to reflect an aspiration? I definitely know I don't want to see the regression,
which is that
kaleidoscope of distortion. So I do. But I also think we should be doing
better than the status quo, and that's the struggle
with general-purpose systems. I tend to look at how do you have more diverse, smaller, bespoke systems. So it depends on what you're trying to do. Is this a historic analysis? Right. Even in that historic analysis,
our own understanding of the contributions of many people, if you think of Hidden
Figures and the space race, and all of that, isn't even accurate
in the first place. But I do think we should aspire for better and certainly push back against worse. Sam, what do you think of that? I mean, general-purpose models,
can they address these issues, or is it really just, this is why
you need smaller, more specific models? Because, I mean, do you have to fix
general purpose, right? Yeah, I think there's
so much benefit from General purpose models that we need to find a way
to make them usable and useful. And I would say I would go further
than saying
I don't want a regression. I would just say, like,
let's have this be aspirational. Now, there's definitely some complexity
there. Yeah, we made the decision
to make some of our products aspirational, and then we would
get into things like where, you know, one example I remember is with
kindergarten teachers. When we were displaying
those as more gender balanced, women, who are disproportionately kindergarten
teachers, would say, this is like you're gender-balancing us
out of the picture here, and making it look more like 50/50,
that that's erasing us. So it's not as easy as it sounds
in every case. But I think
we should be aspirationally better. And I think technology can be a force
for that. I also think that, you know, you can read a bunch of different research papers out there,
but there's, like, a lot of research
that says the GPT-4 model that is out there is, like, less biased
on an implicit bias test, for whatever, than almost any human,
which is no huge surprise. But this technology, I think, can help.
We do need to be vigilant about this technology
making problems of bias worse. But I think we can see lots of reasons
why it can make things better, too. Can I just briefly touch on the election
topic, though, because I think that's super important. I am definitely worried about the impact
that AI is going to have on the election. But the main worry I have,
I think, is not the one that gets the most
airtime.
What everybody talks about is deepfakes. And deepfakes, I think, will be a problem politically, in society,
in a lot of other ways. But in some sense,
that's fighting the last war. We know about deepfakes. People
have some degree of antibodies to them. We have some societal vigilance. People know if they see an image or
a video or whatever, it might not be real. And, you know, everybody thinks that
they're not susceptible to it. It's just somebody else that is. We're all probably somewhat susceptible,
but it's at least a known issue. The thing I'm really worried about
about this upcoming election and future ones
and more broadly is the sort of customized one-on-one persuasion
ability of these new models. And that's a new thing
that we don't have antibodies for. And I think we'll all
be more susceptible to it than we realize. What does that look like? Like ChatGPT
trying to convince you who to vote for, or? Well, probably not ChatGPT. I mean, if you're asking,
like, ChatGPT, who
should I vote for in the election, I have some other questions for you. But it means, like, systems on the internet,
powered by AI to varying degrees, that are just, like, subtly influencing you. So how is OpenAI
trying to address the problem?
Because you guys are uniquely positioned to kind of address
this concern that you have, right? Well, even, I mean, on our own system,
we're doing a lot to address it, but there will be other models
that people are using that are
just out in the world that I don't think we, or really
any other single company, can detect. There's this question of, like, can you use an AI system
to detect the outputs of other AI systems? Can you watermark it? Can you do something even for systems
that aren't watermarked? Can you, like, detect the right patterns? And I frankly think those capabilities
are somewhat overstated. The social media platforms
can maybe do something,
because they can see
across the whole thing and see how, sort of, information
is flowing through the network. But to look at any short piece of text
and say this was AI generated, this wasn't,
I think that's going to be harder. So are you working with the other AI companies
to think about some of these problems? Are you working with
the social media companies? What's the plan? I think the companies are all going to do The AI companies are all like
very focused on this and the companies are mostly
going to do the right thing. But it is like a foreign adversary that has trained their own AI system
that none of us even know about. That's the kind of thing
I'm worried about. I would say I'm extremely worried
about synthetic media and deepfakes with upcoming elections,
even if it is understood or known by some within the tech space. And even what we're seeing
with the current conflict, right, that's going on right now
with what is happening in Gaza, and the proliferation of misinformation
there,
I think we're already getting a preview of what it looks like
when generative AI tools are made more readily available,
whether it's through a company or even now more perniciously
with open source models. So I do commend some of the efforts around
content credentialing, which is to say we're not trying to say
this is right or this is wrong, A.I. versus A.I.. Instead, this is approach of verifying. This is coming
from a trusted media source. I don't think that gets at everything,
but I
certainly do think it's an important starting place. And I think this is going to be
an even deeper problem as we go into next year.

I just have one more question — actually, I have a lot of questions, but I will just ask one more, and then we're going to get to the audience questions. Why is any of this inevitable? You know, we've been talking about this for a year — the AI revolution, AI is going to be in everything: all your products, your grocery store, your fridge. Why? Why do we need it?

So, to me — I don't subscribe to technological determinism, because I always think of human agency, right? And so I don't think AI is inevitable in the way
that any technology is inevitable. But I do think there is momentum, right? And there is inertia. And what I fear is the stories we tell ourselves about AI and its capabilities. For example, I think of the nonprofit NEDA, the National Eating Disorder Association. I think it was around May 25th — I see a headline: the workers had unionized, management said no, so they fired the workers who were staffing the call-in chat for help, and they replaced them with a chatbot. The chatbot was then giving advice that's known to make eating disorders worse. And then the next headline — I don't think it was even a week later — the chatbot was pulled. But what I'm getting at here is that it wasn't the AI's capability; it was the stories we were telling ourselves about the AI's capabilities.

It also makes me think of a notion I've been thinking about: the apprentice gap. What happens when we're using AI for entry-level jobs or entry-level processes? How do you then gain mastery? How do you build your professional calluses? I was in Rolling Stone a little while ago, so I was inspired to go get a guitar. So I got my guitar — I was really excited — my Les Paul, the Slash Appetite Burst. Anyways, for those who know, you know. And as I was playing it, I realized my calluses from wearing other hats, from when I was a researcher — those calluses were still there. And that's when I started thinking about professional calluses, because if we don't replenish the seed, are we then living in the age of the last experts, or the last masters? And so it's not even just 'is AI going to expand or evolve?' — yes, because we are expanding and we are evolving as people — it's the narratives we tell ourselves, and who shapes those narratives, because those beliefs then shape the types of AI systems we create.

Why is AI inevitable? It's not.

It is if people find it useful, and if people
like the new vision of society, the narrative, and the actual results that we deliver, more than the alternative. But I am not a believer in technological determinism either. And I certainly think that if people decide they don't want something, or it's not helping, or it's not a better tool, or it's not giving them a better life, or society is not better off — then, you know, sometimes we can get trapped in a local maximum; that can happen for sure, and then we have to correct. I'd point to some of the ills of social media as an example of that, where we kind of got sucked into this dopamine loop — or some of us did — without wanting it, without realizing it. But I think fundamentally humans are tool builders: with better tools, we do better and better things. But there is a long graveyard of things that we thought were going to be technological revolutions and better tools that either weren't, or people didn't want, or people did want for a while and then stopped. If AI can help people have better lives, it'll happen, because we all kind of want that. And I do believe in the ability of our society to get there successfully over time. And if it doesn't — I will say, I don't want all of AI to happen; certain parts of it I hope won't happen, and people will decide they don't want that, or society will decide we collectively don't want that. I think people are very good, over time, at figuring out what tools are helpful and what tools aren't. And again — I was laughing in my head when you said that, because I remember sitting in a setup very much like this one, and someone said, 'Is cryptocurrency inevitable?' And I've been in other conversations about previous technologies where some people were like, 'This is going to happen; it's the only thing that matters in the future.' And then people said, or our leaders said, 'This, you know, just doesn't work, for this reason and that reason.' But the future happens because people work hard to invent it, and we learn and iterate along the way, and nothing has to happen, or, you know, has
a divine right to happen.

One piece I think about is who gets to decide what happens, and who has the power to shape what is adopted, right? Sitting in a privileged position, it can be easier to say, 'We don't want that.' And then when we think about the excoded — certainly those who are being harmed by AI systems: if you're denied housing because of a tenant screening algorithm, if you're fired or you don't get promoted because your company adopted an AI system — oftentimes you don't even know the coded gaze is at work. The example of coding in a white mask — part of why it was effective was because it was visible in that way. But there are so many ways in which those who have power can adopt AI tools behind closed doors, unless we have measures, laws, and legislation so that, one, the tools are fit for purpose in the first place, and we're not just experimenting and using people as guinea pigs when we're talking about life opportunities. And that took me a while to come to — again, as the kid who wanted to build robots and maybe deal with people sometimes, right, but not all the time — and then understanding the responsibility I had as somebody within the tech space, and later as an academic, because so many people don't have access to make decisions about these tools.

So I think it's important that we have,
I think, a lot of laws about when people can and can't use AI, for these reasons. I think those are super important. And I think as time goes on we'll find that — you know, housing decisions, as an example...

You just start with saying we have a lot of laws.

We do, yeah — on AI, about where you can and can't use AI. I mean, the conversation I was in right before this was about new areas where you couldn't use AI in medicine. So I think this is a good thing.

Well, I would push back, because part
of what we've been pushing for at the Algorithmic Justice League is that we have laws in place. So we did a lot of pushing to get, for example — in Brookline and Cambridge and half a dozen other American cities — laws on the books that say you can't use facial recognition in law enforcement, for specific things. We've had proposed legislation for things like the Algorithmic Accountability Act, right? But that isn't yet enacted.

No disagreement that we don't have all the ones that we should. I just meant the fact that we have some, and that we will have more.

Are you saying laws like anti-discrimination law that could then be applied to AI?

More specifically, I mean things like saying you can't use AI systems that don't pass a certain explainability threshold for lending or loan underwriting, something like that.

I think right now what I'm concerned with is: we have executive orders, which I think are good; we have the NIST AI Risk Management Framework; we have the AI Bill of Rights. But we don't yet — from my understanding, and we're in that conversation — have the mechanisms actually in place that would compel companies, or would compel government agencies, to take specific actions. The Office of Management and Budget just released some guidelines, I think last week. So we're getting there. But, you know —

I think what we can probably agree on is
we don't have enough, or there are still big pieces we're missing.

The point is — you know, we put out ChatGPT as this thing that we thought was going to be a little research preview. We did not expect it to be really a product at all, much less a very successful one. But people started using it for really valuable things. And we hear these stories every day of how people who didn't have access before — and maybe couldn't even think about doing something before — are now able to use AI in their lives in these super positive ways. And then, on the other hand, there are a bunch of ways that people are going to use AI in all these negative ways. And the trick — I'm sorry, because I can see you trying to stop us.

You're making my life easier; you just keep going. I had a question. I had a poem to read. There are a lot of questions, though, so —

Let me wrap it up by saying: I think the trick in front of all of us — the technology developers, society, people — is going to be, now that we have this amazing new thing in the world, how do we get more of the good and less of the bad? And I think that is the story of technology and society.

There are a lot of questions about safety — the safety question — and, I mean, they're all excellent. This one is just very simple: responsible and explainable AI. What is it? What does it look like? What's your take? People use these words a lot; they talk about,
yeah, responsible and explainable AI — but what does it mean? To me, this framing of alignment, and this framing of beneficial AI, and this framing of responsible AI, has to be grounded somewhere. I like the grounding of the AI Bill of Rights released by the White House last year, which talks about safe and effective systems, systems where you have protections from discrimination, systems where there is privacy and consent, and — importantly, though it doesn't get talked about as often — alternatives and fallbacks. So I think, for example, at the Algorithmic Justice League we were talking about the IRS's adoption of ID.me, a facial recognition vendor that then became part of the gateway for some people to access benefits — it could be veterans — and also, with the IRS's use, access to basic tax information. And there is algorithmic bias involved there as well. So I do think it's important to be really concrete about what alternatives look like. For example, with the pushback on biometrics to access government services, now there's an exploration of what it would look like to use post offices, and to give people jobs where part of the job is actually verifying people's identities as humans. So this is an example of an alternative — but we don't hear as much about what those alternatives would be. Instead, we get more of 'AI will happen, and so now we have to adjust,' and that takes away our agency of choice in saying where we can use it, where we can't, whether we have an option.

Yeah, this is an interesting one: Since those who are most impacted
or harmed by AI are not the ones designing the systems, how can we ensure these voices are represented without relying on disadvantaged groups to make these systems accessible? And I guess I would also add: without disadvantaged groups being the ones to raise the alarm, especially if they have no interest in contributing. Do you want to go?

Yeah. I mean, I think for that, the responsibility will be on companies like us to make sure that we are doing everything we can to get truly global input — from different countries, different communities, the entire socioeconomic stratum — and to proactively collect it, and to do it in a fair and just and equitable way. One thing that people have talked about, which I think doesn't quite work but is an interesting framework, is: can ChatGPT chat with all of its users around the world and figure out what they all want out of it? What it should do, what it shouldn't do, what the defaults are, what the broad bounds are, what the cases are where it's not allowed to be used, and where you can use it in other systems too. And then can ChatGPT figure out how to adjust that for the fact that different groups have wildly different levels of interest in, and access to, this technology, and align to that? Now, again, I think for a bunch of reasons — tyranny of the majority and others — that doesn't quite work. But I think it is on us, and companies like us, to figure out systems that can help with this problem.

I think companies have a role to play. This is where I see governments
needing to step in, because their interest should be the public interest. So I do think there should be penalties, right, and disincentives if your system is harmful, because I do think there would be a more cautious approach if it cost you something — for example, to mistranslate somebody who is posting about their faith and then label them as a terrorist, which has happened, right? And in that case, it's a 'my bad, we didn't mean to' — which is true; I don't think they did it intentionally. But if there were penalties for making those sorts of breaches, how might that change the culture of design and deployment?

Yeah, because even the executive order that you just mentioned — it's not binding, right? I mean — well, actually it's binding, but what's the legal ramification for falling short? Do you get sued?

Yeah, you can. Yes. But an executive order, right —

You look like you've got something to say. Am I wrong? I might be wrong.

So I think where the government has teeth on the executive order — and I commend the executive order, because it is a comprehensive, full-court-press approach — is where federal funding is linked. To the extent that they are requiring agencies, when they're adopting systems, to undergo particular checks, they hold the purse strings. So there's something to be done there. What we've seen with the companies are voluntary commitments — you have the G7 agreeing to voluntary commitments, and so on. And as much as I would love to rely on the goodwill of companies, you know, I still want the assurance of legal protection.

Yeah, of course — we have been calling for government regulation here, I think the first
and the loudest of any company. We absolutely need the government to play a role here. It does not excuse the companies from doing what we can. I think voluntary commitments are a good way to start; the legislative process is slow, and the executive order process, we see, can be a little faster. But you should want all of those things. You should want the companies to make voluntary commitments, but you shouldn't trust us on those alone — you should want the government to set the rules here. And I think the government didn't do enough of that in the last tech cycle, has learned a lesson, and will do more now. But that's what's got to happen: no one should just trust the companies here. I think this is really important.

Interesting. We have a lot of questions about the economy, but before we get there, I want to read this one: 'I fear manipulation by AI. Ease my concerns.' Do you have anything that would ease people's concerns here? Because, you said, even beyond synthetic media —

I fear it too, a lot. I think this is a place where — speaking of things where we need the government to set some guidelines — we also need to hold the companies accountable, and also the non-companies that are just going to deploy things here. I think I'd be lying to you if I tried to ease your concerns here. I'm quite nervous about this myself.

It's great to offer some hope, though. Part of why I wrote Unmasking AI — I was between fear and fascination — was to actually think through how we use our collective voices, and also our individual stories, to push. And so in
that case, I think one of the things to do is to share your story of manipulation or exploitation. That's why I share the story of Jennifer hearing her daughter's voice — so that other people know it could happen to you too, so that you are more vigilant. So I do think the storytelling is a start. And then, as we saw with the Brooklyn tenants who said, 'Facial recognition to get into my apartment? No, thanks' — and then they kept pushing back. I think all of these are examples of recognizing the problem, which I don't always take for granted. I remember when I started this work, I would say 'algorithmic bias' and people were like, 'What is an algorithm?' — even before we could get to algorithmic bias. So that we're at a point with an executive order, at a point with a safety summit happening in the UK where this is a priority agenda item — that is immense progress. And I do think it's up to people like us, people like you in the room, to say what your concerns are, right? So if manipulation is what those concerns are, then we start thinking through what we can actually do about it. So I'm very much: let's name the concern so we can get to work.

Actually,
I thought of something positive to say. I think people are really smart and really resilient, and we have a long history of facing challenging technologies — technologies that bring new challenges to the world that we hadn't seen before. We may stumble a little bit, but we find a way to address the challenges and still get the benefits. And even though right now it looks a little daunting, I'm sure we'll figure it out.

For example? Is there a historical example that you reach for?

Yeah — where it was daunting, and then we adapted? There are so many. Reading the stories of what very smart people said after atomic weapons were developed — about the chances of our survival being 0%, 2% — they just didn't see any way through it. We got through that. Recombinant DNA — again, we got through that. Social media, I thought, was kind of going to destroy the fabric of society, and we're still here — weakened, for sure, but still here. Not always in the moment, but over time, we show incredible wisdom and resilience and adaptability, and we can face huge new challenges and get through them.

I've got one last audience question — I'm going to try to combine a few of them, because all of these are about the economy: How soon will humans be replaced by autonomous, independent technology without human presence? And this is another interesting one: 'As a student, I fear that using models like ChatGPT will make me dumber or lazier. Do you think future generations will be able to find the balance with using AI?' So, how does AI transform our economy, and does it make us stupider?

Let's get a little bit at
that apprentice-gap piece I was speaking to. I think the AI systems that are being developed are forcing a question of discipline. When you have what appears to be the easy way out, what structures do you actually put in place to do what might be difficult — like exercising consistently? Now we have a way where we don't have to exercise our brains consistently, if we're outsourcing some of what we would have done in a regular day. So I think, just as you have to set up routines to keep your body sharp, you're going to have to set up routines
to keep your mind sharp.

I have great faith in humanity on this topic in particular. The kids that were a few years older than me in school would tell these stories of how, when Google first came out — I think it was the very, very early 2000s — teachers would make them sign things promising they wouldn't use Google for their homework, because if you could just look up all the information, you never had to memorize anything; you didn't have to learn. And I remember at the time that sounded to me like the stories I had read about teachers banning calculators, because they'd say, 'Well, why have math class if there's a calculator? So we've got to kill these things.' And I think kids born today, who grow up using these tools, will be far smarter and more capable than any of us. And I think that's awesome — that's the way it's supposed to go. You give people better tools, they do more. They use more of their cognitive capacity on new and exciting areas, and that is how we make human progress. Think about what we are all capable of because of the tools that the people who came before us built for us and contributed back to this long unwinding story of human progress. Now we get to build on top of that and make a new one, and people are going to build on top of this and make even far better ones.

And you talk to students who use ChatGPT, and some of them will say: okay, yeah, it can write my paper for me,
that's great. But a lot of them also say: but I can learn these new things in these new ways, or I can do things in these new ways. And the world that they're all going to go out into, and become members of society in, is going to have these tools — so learning how to use them matters. It's true that capability goes up: people can do more. But that means expectations go up too; people are not only able to do more, but our expectations go up and we want them to do more. And, you know, we're going through a little bit of a lurch right now, but we'll evolve, and we'll expect more of each other, and we'll be delighted that we can express our creative abilities and our ideas better. That's all wonderful.

The time when we just automate everything and humans don't do anything — I think that is never. Again, it's so tempting to think of AI as a creature and not a tool, but it's really important to remember that it's a tool and not a creature. This is a thing that we use to do more of whatever we want to do, and humans are really good at knowing what other humans want. And so we'll be able to create for each other — for all of us, for ourselves — better and better things.

Another AI memory from when I was a little kid: when AI first beat humans at chess — the IBM Deep Blue thing — the prevailing wisdom was, 'That's it, chess is over. If AI can win, no human ever wants to play chess again. It's done.' And, you know, fast forward many decades: chess has never been more popular, not only to play but to watch. And I don't know about you all, but I don't know anybody who watches two AIs play each other, right? We really are wired to care about humans and what other humans do. And in fact, if you use AI to cheat, that's really bad. So the jobs will change, sure. We'll be able to do new things; we'll find new ways to be useful to each other, to make each other happy, to make life better and better. But with this new tool we'll just go on to greater heights.

I hope — now that you're
taking more of a positive tone — I'll counter a little bit.

All right, we can keep going.

So I wonder about who gets to enjoy these new tools. And I think about the people who make the tools possible who aren't always visible, right? The content moderators, the data scrapers — we actually have a different type of role at the Algorithmic Justice League, the AI harms analyst. And I think about this notion of the digital divide, which has been around for a while — who has access to computing; there are still billions of people who are not on the Internet. And then I also think about AI in terms of a digital chasm. Even what you were saying was interesting, because I was at MoMA looking at Unsupervised — the beautiful, data-driven moving mural — and there were babies crawling around in front of it. So the poetic part of me is watching them learn to crawl while the whole thing is shifting as well. And so I'm looking at the kid whose parents have brought them to MoMA — this is their environment, right? — and what that trajectory is, versus somebody else whose school maybe decided, 'We're afraid of AI, so we're not going to let students experiment,' while the private school is saying, 'Let's bring in the tools so people are equipped.' So I'm looking at those trajectories and thinking through what that chasm looks like, or could be. And then: how do we close it?

Dr. Joy, you're a poet, and I
just want to give you the final word here. I understand you have something you'd like to read.

Yes. I have a poem I wrote earlier this year, and then someone asked me if a chatbot wrote the book — and I felt some kind of way, I did. So now I sign my poems 'Poet of Code, Certified Human-Made.' But maybe AI did that too. So the poem I'd like to close us out
with is called 'Unstable Desire.'

Unstable desire, prompted to competition. Where be the guardrails now? Threat in sight, will might make great hallucinations taken as prophecy. Destabilized on a middling journey to outpace, to open chase, to claim supremacy, to reign indefinitely. Haste and paced control, altering deletion. Unstable desire remains unabated, the fate of AI still uncompleted. Responding with fear, responsible AI beware: prophets do snare. People still dare to believe our humanity is more than neural nets and transformations of collected means, is more than data mined, rather more than transactional diffusions. Are we not transcendent beings bound in transient forms? Can this power be guided with care, augmenting the light alongside economic destitution? Temporary Band-Aids cannot hold the wind when the task ahead is to transform the atmosphere of innovation. The android dreams entice; the nightmare schemes of ice.

Thank you.

Thank you, Dr. Joy. Thank you, Sam. We encourage you to pick up a copy of Dr. Joy's book here
or at your local bookstore. And if you'd like to support the club's efforts in making this programming possible, please visit their website. Thank you. Take care.