- One of the big issues that I see with artificial intelligence is that we're building these powerful AI systems that are shaping the world. They're influencing the future for everyone who lives in the world today, and for everyone who will live in the future, but they're being created by a tiny minority of the world population. So a very small number of people in a room, you know, building these AI systems. And one of the problems that emerges with that kind of relationship is that sometimes the benefits and the harms of the AI systems that we're building are unevenly distributed. - Welcome to "Speaking of Psychology." I'm Kim Mills. We're doing something a little
bit different this week. In early January, the American
Psychological Association joined the Consumer
Technology Association at CES, the world's largest technology trade show for a series of discussions about how technology is
shaping human behavior and about how psychological science can help the development of more ethical and effective technology. Amid the robots, gaming systems, smart tech and AI-powered
everything on display at CES, we talked about some of the field's biggest questions. How can we harness the power of artificial intelligence ethically? Can digital health interventions help solve the mental health crisis? How should companies
approach online privacy? And can video games promote mental health? After the discussion on AI, we caught up with
panelist Nathanael Fast, director of the Neely Center for Ethical Leadership and Decision Making at the USC Marshall School of Business. Dr. Fast is an expert on technology adoption and studies how power and status influence people's decision making, how it interacts with human behavior, and how AI will shape our future. We talked about why people might
make less ethical decisions when they're acting through AI agents, whether most of us trust AI, and why it's important to make sure that the potential benefits
of AI flow to everyone and not just the most privileged. Here's our discussion. If you want to hear more from Dr. Fast and the other psychologists
who spoke at CES, you can find all the talks at ces.apa.org. So, Dr. Fast, I wanna say thanks for joining us here at CES. We're in Las Vegas at the most incredible
technology show in the world. And you were part of a panel
presentation this morning on Artificial Intelligence and Ethics, and I just want to ask you a
few questions coming from that and from the work that you do. So I'm wondering, as we
talk about AI technologies, they're being developed at an
amazingly fast pace right now. And there are doom and
gloom predictions out there about what AI is going to do to our lives, but there are also people who are saying that our lives will be
changed for the better. Where do you fall on that spectrum? - Thanks. It's great to be here. You know, I'll preface it by saying I'm pretty excited about AI and where technology is going in today's world. It's a very exciting time to be alive, but I definitely get both sides. I understand the people who are very concerned, and I share a lot of the concerns that those you might say are in the doomer camp have. I also share some of the excitement that some who might be
techno optimists have. I think it's pretty naive to think that it's gonna be one way or the other. I mean, I don't think we've ever developed a powerful technology or tool that only had positive uses or negative, you know, harms for society. And so
for me, I think, you know, I see myself as kind
of a measured optimist. I think I'm optimistic because I think as psychologists, we know that we often get
what we are looking for, the self-fulfilling prophecy. And so I think it really makes sense for us to be optimistic
about what we can achieve as humans in response to
these new technologies. But I'm very measured about it because I think there are a lot of tremendous harms and downsides to these technologies if we don't kind of
deploy them
and adopt them and use and govern them effectively. - In your research, you look at how interacting with AI may change human behavior, and the way we relate to each other. For example, in one study you looked at how using an AI assistant might actually make people
behave less ethically, and that's what you were
talking about this morning, the ethics of AI. Can you talk about that study, what you did and what you found? - Well, I mean, we're actually still in the early phases of this, and we described this... I described this in a paper with John Gratch. And John and his team are the ones who did that particular study. But what we're finding overall is that we can often hide behind the AI that we're using to kind of
mediate our relationships. And so when we are
face-to-face as we are now, you know, there's a greater sense of kind of evolutionary pressure that I think is a positive one to kind of treat each
other fairly and so on, and not manipulate each other or do harm. But when our
interactions are
mediated by an AI assistant or an AI model that's
negotiating on our behalf, then we can kind of hide behind that and you can also think about
it in terms of algorithms. Like if you're hailing a
ride from Lyft or Uber, and the price looks a little low, you know, it's a little
easier to just kind of let that algorithm do its thing and it's a little bit harder
for us to kind of like step in and do something about it. And so I think what we were
trying to say in that article was just that there's
some potential for kind of moving in an unethical direction, maybe more so than we would
naturally, the more we adopt AI. Another study that I have with Roshni Raveendhran
and Peter Carnevale, we looked at managers... We put managers in situations where they had to engage in kind of socially unacceptable behaviors, like micromanagement and things like that. They much preferred to manage virtually rather than in person when they were doing those types of things. And so I think that also
speaks to the psychology that we kind of hide
behind these technologies in some cases. - Speaking of hiding behind technologies, you've studied how much people trust AI. For instance, whether, if they're going to be monitored at work, they'd rather be monitored by a human or an AI system. What have you found? - Well, it's interesting. I found kind of opposite results across a couple of different projects, but they actually make a lot of sense. In one project with Roshni, again, we actually found that people
are more willing to be tracked when the tracking is
done by technology only. So if people are at work and they're gonna be
tracked by their computer, or a smartwatch, or something like that, it's kind of monitoring their performance and things like that. They're much more willing to
say yes to those situations than when those same tools
are being used to track them but there's a human who's analyzing the data and looking at it and all of that kind of stuff. And we found that even when we told them what the data were gonna be used for and so on, there's something psychological going on there. What we argued and found in our paper was that people feel less autonomy when they're being watched by another person. That creates kind of a
social pressure to perform and to not be judged negatively. Whereas the technology doesn't judge us, it just kind of measures our
performance. And so people tend to have a
greater preference for that. But in another paper with
David Newman and Derek Harmon, we actually found that... You know, once we have all these data that we're collecting in
the organizational context, we can actually use them to make decisions like HR decisions, hiring and firing, promotions and things like that. And perhaps remove some of this human bias that enters into these decisions. And it's one of the biggest
things that employees, you know, complain about: human bias in these decisions. But what we found is that actually people in that case prefer humans
to make the decisions and so we would give them
a whole set of different, you know, HR-related decisions that were made. And when we told them those same decisions were made by an algorithm or AI, they viewed them as less fair than when they were made by humans. And so we have this kind of juxtaposition: we're more willing to be tracked and provide our data, which might be problematic for privacy. But once we have that data and could actually use it to make good decisions, that's when we want to step in and say we don't trust it, so
it's kind of interesting. - But with some of these HR applications of AI... Because AI is basically scraping what's out there in the world, pulling it together and then, you know, using the data to interpret. I mean, it becomes a tool for us. What I'm trying to get at is that AI can be as biased as we are, the people who create the initial data that goes into the AI. And so we've seen some HR instances where AI makes the same biased decisions that human beings make. How can we counteract that? - So Amazon famously tried
to create a hiring algorithm, and they had to scrap it in the end. They tried to overcome the bias. It kept suggesting that they hire men and not women. And so they took names out, they, you know, tried to strip everything out of the resumes, but the AI is very good at assessing and knowing, you know, whether you played women's sports in college, or whether you even used certain adjectives to describe your performance that men and women differ on. And so they had to scrap it. So in some cases, I think
we actually just need to not rely on AI when we can't
remove that from the system. In other cases, Sendhil Mullainathan and his team of researchers
have this great study
where they found that there was racism embedded in a decision-making algorithm that was used by hospitals in a medical context. And they actually did an
audit of the algorithm, and they were able to find
that it was making decisions based on monetary preferences that ended up being racist decisions, and they were able to go in and fix those. And as Sendhil talks about, you know, once you fix a decision-making algorithm, it doesn't make that mistake anymore. But you can kind of tell people that they're making biased decisions and they continue to make them,
you know, year after year. And so there is... You know, it's not as easy as just saying, "Let's throw it out altogether. Or let's always use it." I think we have to be really smart about how we're doing this. - You know, your panel this
morning was about ethics and AI, and I got to thinking about a story I had read about somebody who really wanted to be on the podcast that Esther Perel does, the psychologist who talks about relationships and works with people on her podcast. And because he couldn't quite get on the show, he created an AI Esther Perel. And I think we're seeing more of these kinds of things happening. I read about an AI Martin Seligman, and there are other psychologists, too, because they have a big body of work out there that can be synthesized. Is that an ethical thing to do? And, you know, if somebody
wanted to make an AI of you, how would you feel about that? - Yeah, well, I definitely would feel concerned. And you know, that's one of the things that I'm concerned about, that a lot of people are concerned about, especially as we head into the election. You know, with 20 to 30 seconds of a person's voice, you can actually create a deepfake that sounds just like that person. We can do that with videos now too. This is new territory for us. We have to, you know,
figure these things out. I certainly can't say that I'm
comfortable with that idea. I think we will find our way as a
society and try to, you know, figure out how to handle those situations but it's probably gonna be messy. And this upcoming election is actually gonna be quite messy as well. - You have also talked about
the need to democratize AI to make sure that the benefits get to everyone in all parts of the world. What does that mean? What do you mean when
you say democratize AI? - A lot of people are talking
about democratizing AI and they mean different things. And so I think that is exactly where we should start with that question. And the reason why it's a
big priority for me is that, you know, we live in a world where right now we're
developing powerful AI systems, and these AI systems are
affecting the entire world. And not only are they
affecting the entire world, but also, you know, all future humans who, you know, are lucky enough to walk the face of the earth are gonna be affected
by these systems too. And they're made by such a tiny minority of the existing population in the world today. So that's a problem, from my perspective. I've studied power for most of my career, and that's a power imbalance
if you've ever seen one. And so we do need to democratize AI, meaning we need to infuse the AI systems that we're building with more
input from around the world. And there are different
things you can democratize. And I wanna make this point here because I think Big Tech often
talks about democratizing AI, and I don't really like the way that they're talking about it.
They mean creating cheap products that lots of people can use. And there's nothing
inherently wrong with products that a lot of people can use but in the case of
something like social media, you know, making sure that
it's like a free product that everybody gets to use, and in many ways humans are kind of the product there. I don't know that democratizing access to that is really the positive force for good that "democratizing" implies. And so when I talk about democratizing AI, I'm really talking
about democratizing the design of the systems, democratizing the use of the systems and democratizing the
governance of the systems. And really finding ways
for more people's voices to be infused into each
of those three areas. - A lot of educators worry that artificial intelligence is going to change teaching and learning for the worse by letting a lot of students
offload their writing and other work to AI chatbots. You're a professor as
well as a researcher. Is this something you worry about?
How do you approach it in your classroom? - Yeah, no. I mean, I'm
not worried about it. Maybe I should be. But I'm not worried about it because as long as we change how we teach, I think we're gonna be okay. And I actually believe we
need to change how we teach. I think we've needed
to change how we teach for a long, long time. And so when we make our
classrooms more experiential, when we make them more
kind of exploratory, team-based, things like that, I think people learn a lot more through working on projects together. And so I think AI actually
lends itself really well to students kind of exploring new tools and new ways of using AI. But it does require that we change, and it requires that we find good ways of using AI. And also, you know, especially when we're trying to teach writing, yeah, we're probably gonna have to have people work on that in the classroom and flip the classroom. If they do it at home, they're going to use ChatGPT in many cases. And so, you know, I think we can handle this. I think we can find good ways to continue to educate people. - Despite these concerns, AI does have the potential
to transform our lives in beneficial ways as well. What are some of your biggest
hopes for AI at this point? - There are a lot of benefits. And another reason why it's important to kind of democratize the use of AI, and I think this comes from educating and getting the word out to populations that might not otherwise hear it, is because there are benefits of AI. And so, like, I was just recently in Kenya and was touring the Kibera slum. My tour guide lives there in Kibera, and I asked him if he had
ever heard of ChatGPT. And you know, I was across the world, so I thought I would take advantage of the opportunity to ask someone. And it turns out he had, and he surprised me. He had heard about it from somebody from the UK who was on a tour. And he actually uses ChatGPT to increase bookings. He takes its advice about, like, how to take pictures, how to arrange his website, and so on. And so I think that
there are a lot of benefits here for people around the world who don't have access to great education, who don't have access to personal tutors the way that the wealthy often do, that I'm really excited about. I'm really excited about people getting access to good tutors, and Khan Academy has created a tool that's pretty exciting as well on that front. I'm also excited about the possibility that AI will bring people together to address these issues. And maybe I'm a little bit
like blindly optimistic about this, but I think
that there's potential here. And so at the Neely Center, at the USC Marshall School
of Business that I direct, we have something called the Neely Indices, where we're tracking user experiences: How are social media platforms affecting users? How are AI models affecting users? How are mixed reality technologies affecting users? One of the things that we found with AI is that both Republicans and Democrats are concerned kind of equally across the board. They're both excited
and concerned about AI in equal amounts. We haven't politicized this issue yet. Of course, we tend to
politicize every big issue. And so that's a concern, but I think it's also an opportunity for us to kinda work together. And so that's one thing
I'm also hopeful about. - One of the things that
struck me that you said on the panel here at CES was that we need to slow
down the development of AI. Why do you think that? And is it even possible to
make business slow down? - Well, I mean, that's another good question. So to clarify, there's a gap, the speed-capacity gap that I mentioned, where we're deploying and developing new AI and new technologies faster than we're able to handle. And so we don't have the capacity to kind of make decisions about these new technologies, and we don't know how they're affecting us. And so it's really hard to govern, to set policy,
to design
technologies more effectively and with greater health benefits when we don't really know
how they're affecting us. And so, of course, you can close that gap by either slowing things down
or speeding up the capacity. And I'm actually not a big
proponent of slowing things down. I do think that one way to
slow things down effectively is to hold companies more accountable for the harms that their
technologies are creating. When we do that, they're
going to slow themselves down by choice because they don't want to put technologies out there too quickly, right? And so I think that kind
of slowing down is good, but I don't like the idea of slowing down simply to slow down. And the reason is because we
learn from each iteration. And so if you think about something like large language models, with each iteration that we deploy, we learn a whole bunch of stuff, and that gets embedded into the next model. And so if we're trying to learn
as much as we possibly can by
say the year 2030 or 2040, the more iterations that we can have between now and then, the more we're gonna learn and be able to create safer models. The caveat to that is there are times where we're deploying the technology so quickly, with, you know, so much speed, that we're not actually able to learn and give adequate feedback in between the iterations. And so that's where
we're really working hard to try to elevate the capacity of society. And I think, for me anyway, as an academic, one of the best ways that we can improve society's capacity to handle the speed is actually to collect data more quickly and share it broadly. And so with the Neely
Indices, for example, we're collecting data about
all the different platforms, not just one, and we're making it public so the companies feel kind of pressure and also incentives; you know, when they do good things, they also get credit for that. And then we're also sharing
that with researchers so that
they can actually get
research out there quicker. So, you know, I'm more
bullish on the idea of like speeding up our capacity than
I am on slowing down the tech. - But when it comes to
punishing a developer that is doing harm, how would that happen? I mean, we have watched Congress try to wrestle with social media, which I think 80% of them
don't understand or use. And then we have organizations
or regulators like the FTC, but I mean, they're also
slow on the draw as well. So does the punishment just come from the marketplace? - I think the marketplace
is the best place for punishment to come from for the companies, you know, by not buying products and abandoning products. And you see that with companies like X or Twitter, you know, you see some of that market
pressure happen as a result of decisions that the companies make. And that's another, you
know, one of the benefits of the indices that
we're trying to work on through the Neely Center is
making those data public.
And so for example, you know, Twitter ranked very high on
our initial sets of indices for people reporting
that there was, you know, content that was bad for the world or bad for them personally, and lower on kind of
connecting with others or learning new things. Whereas LinkedIn, for example, scored very high, or decently high, on learning new things and connecting with others, but really low on the harms. And so that's evidence
or that's data out there that's relevant to users
and they can
make decisions about where to spend their time. I think one of the things
that I do wanna note... And this is really messy, and a lot of people have a lot of arguments about whether, like, the accelerationists are naive, or maybe the people who are saying, "Let's take a pause and slow down," are naive. I actually think it's a messy debate, which is actually what a healthy democracy looks like. And so the Future of Life Institute, for example, had the big pause letter, you know, let's pause for six months, things like that. And you could critique those things, but I actually think that they got policymakers' attention. And when you look at policymakers' lack of understanding of how social media was working, back when they were dealing with that, and you compare that to how
much they understand about AI, there's a big difference. They're actually a lot more
skillful with regard to AI. And you know, they have room to grow, but I think a lot of that is because
of the concern
that's been generated. So I think everything comes
with its pros and cons, but I do want to acknowledge that I think some of those cons had the effect of getting
everybody's attention, and I think that's a good thing. - Is there enough transparency in AI as it's developing today? - No.
(both laughing) That's an easy one. We
need more transparency. And I think, you know, the
companies that are building it, I think, have to have a measure of, like, a kind of mission... Almost like a missional quality to what they're doing. Like, we're building AI, we're doing something that's
never been done before. And so part of our mission
is to be transparent about what we're building,
how we're building it, and what's going into the data, the training data. And maybe weighing in on some of the research findings that are out there. There's just a huge stream of research; it is not fun to try to stay on top of this field. It is, like, unbelievable. And so I think if companies
were more transparent
about what they're doing and
how they're doing it as well as like weighing in on
some of the research and doing research of their own, I think we're gonna be better
off the more that happens. - And what are the next
big questions for you? What are you working on? - You know, I'm putting a lot of effort this year into democratizing AI in the way that we're talking about, getting more input from around the world. I'll be doing a lot of international travel to talk to people who are in different areas. We're expanding our indices
to Poland, to Kenya, to Somalia, to other countries to collect more data from people who don't typically get their
data into the conversations. And then the second thing is really working on purpose-driven technology and trying to shift the paradigm away from kind of maximizing engagement, maximizing profit through engagement, and instead really having, you know, tech designers, but also consumers as well as policymakers, thinking about purpose-driven technology. What is the purpose of using this particular large language model or this VR headset? What am I trying to achieve with it? And is it achieving that purpose? And, you know, measuring that, and what are some of the side effects or the harms that come from it? And we do that with
medicine, with new drugs. And I think we need to do that with a lot of these new technologies because they're getting
to be quite powerful. And so I'll be focusing on how to shift the paradigm to focus more on purpose-driven tech. - And just to illuminate for our listeners, what do you mean by that? Are there some examples of purpose-driven AI technologies right now? - Sure. I mean, you can think
about, like, the Metaverse. In all the conversations about the Metaverse, you could think about virtual reality as kind of an opportunity to create a virtual space that we push people into, or incentivize people into, and they spend a bunch of time in virtual reality. And that's the Metaverse, it's this container. And we make money because we collect data from them, you know, lots of data, while they're in there. It's almost like a glorified social media, and it's unclear what the purpose of that is. So that's like a profit-driven or engagement-driven model. And that's not actually working. People are not rushing to Meta's vision of what the Metaverse could be. Instead, there are so many examples, and we heard many of them
today in the APA sessions and others, where people are using virtual reality to treat pain or to treat Parkinson's, or to, you know, improve learning and improve kind of
optimism about the world and make a difference in the
world and things like that. Those are very purpose-driven experiences that we can create for people. And I think the more we do that, I think the better off we'll be. - Well, Dr. Fast, I wanna
thank you for joining me today. I wanna thank you for being here at CES participating in the
panels that APA did today. - Well, thank you. It
was my pleasure. - You can find previous episodes of "Speaking of Psychology" on our website at www.speakingofpsychology.org. Or on Apple, Spotify, YouTube, or wherever you get your podcasts. And if you like what you've heard, please subscribe and leave us a review. If you have comments or
ideas for future podcasts, you can email us at
speakingofpsychology@apa.org. "Speaking of Psychology"
is produced by Lee Weinman. Our sound editor is Chris Condayan. Thank you for listening. For the American
Psychological
Association, I'm Kim Mills. (upbeat techno music)