My guest today is a dear colleague of
mine sitting with me on the Microsoft Senior Leadership team. He is someone who thinks
deeply about how technology will affect people, communities and society overall. He is Executive
Vice President and CTO of Microsoft, Kevin Scott. Kevin plays a key role in shaping Microsoft’s
strategy. He is deeply, deeply passionate about creating technology that benefits everyone. It's
his job, actually, to ask where we need to fill the gaps or build something that
does not exist
yet. Kevin grew up in the rural town of Gladys, Virginia, a place that's been and will continue
to be changed by emerging technologies. And I know from working closely with him for the last
couple of years and talking to him over the years, that he thinks deeply about the potential
impact of AI on the people and workplaces in towns like Gladys and many others. So he's also
the host of the brilliant Behind the Tech podcast, which invites listeners to geek out with an
amazing line-up of tech heroes, inventors and innovators. He's also the author of a great
book, Reprogramming the American Dream, which I highly recommend, by the way. So, Kevin, a very
warm welcome to the Positive Leadership Podcast. Thank you for having me and for the
very kind and generous introduction. So, Kevin, one of the big themes running through
your book that I was mentioning is the power of storytelling. It's foundational to our existence,
and that's something that I like to pick up in our conversations. In the Positive Leadership Podcast, one of the first things that we get people to do, actually, is to write down or share the story of their life a little bit, at least, to reflect on the moments that have shaped them into the person and leader that they are today. So I'd like to start at the
very beginning with your story back in Gladys, Virginia. What was it like for you as a
kid growing up? Who or what was the largest influence on you and your core beliefs, Kevin?
The biggest influence on my beliefs and – maybe the better way to say it is just sort of my
ethos and values – were my family. We certainly weren't well off. We had, at times, a lot of
financial difficulty and at best my family was always paycheck to paycheck. But I don't think we
really paid much attention to that. Or at least that's not the thing that we were always talking
about. My dad was a construction worker, and my mom hustled with a bunch of little side jobs that
she did. She was a
tax preparer. She ran a nursery school for the kids at church for a brief period.
She did a whole bunch of work, but her primary job was taking care of the family. I just had the
luxury of being around a bunch of people who were hardworking, very curious, who always worked
with their hands, who were always tinkering with something and who felt like they had an obligation
to their community to do something valuable, that our job was to take care of each other.
People who cared for their community and who were serving others in their day-to-day lives as
well, all the time. So something that's important to you, Kevin, and is also referenced in the
title of your book is The American Dream. This idea that anyone, regardless of where they were
born and what class they were born into, can attain their own version of success. And it's an
idea that has served as a powerful motivation for many people in the US for many years and around
the world as well, inspiring people to strive again for a better life. In your book, you paint
a vivid picture again of your parents and you just discussed some of that. So I'd like to understand
did the American Dream seem like something that was within their grasp, or yours as their child?
I've thought a lot about this idea of the American dream over the past handful of years. And I
think one of the things that a lot of people may be getting confused about is confusing the idea of a dream with a promise. The thing that both my parents and
I believed when I was
growing up – and I still believe this and my family back in Virginia still believes
this – is that there is this motivating idea that if you go figure out some way to
be valuable to your fellow human beings, you try to understand what they want and need,
and then you go do work and work as hard as you possibly can to try to meet those needs of your
fellow human beings, that good things will happen to you. I tell this to my kids all the time…
CTO of Microsoft or Head of Global Business for Microsoft or whatever these big titles are like,
there's so much luck involved in that. That's a crazy thing to try to fixate on and plan for. But
I would have been okay no matter what my outcome was by just applying this set of ideas that I can
work hard, try to discover what's valuable to do, and then going to do that thing, and it really
isn't a promise. And it was kind of implicit in the way that my parents talked to me about this
and in how they acted. It's like,
you just have to go do the work and you’ve got to go grind
through it. And, you know, sometimes it works and sometimes it doesn't. And it's maybe even
more valuable to do something and fail and learn something from the failure and then get up and
go again. Sometimes success teaches you nothing. Yeah, absolutely. So in a way, very much
a hard work ethic and also almost like an entrepreneurship mindset the way
you described it actually, Kevin. Pretty much. I didn't even know the word entrepreneur when I was growing up. But I think that's what I
was surrounded by, a bunch of entrepreneurs. So early on as a child, at one point you'd been
saving up a little money and you bought your first computer, a Radio Shack Color Computer 2, if
I'm not mistaken. And I think it was a huge deal, a big deal for you personally, because as
basic as it was, you were able to create your first programs, I think, including one for the
fantasy game Dungeons and Dragons, which caught the attention, I think, of your teachers and then
the rest, I would say, is almost history. So I'd love to go back to that, because when I listen
to some of your, again, life stories, you talked about the fact that actually as an undergrad you
studied English and kept it as your minor in college, with computer science as your major.
You actually almost went – I think people don't know that – for an English PhD. So what happened?
What made you decide between mastering programming versus mastering your native language, English?
Part of it is pragmatic. I was constantly trying to figure out how I was going to support
myself and support my family, because I got a scholarship that partially covered the cost of
going to school, but didn't completely, so I had student loans. I was on my own financially, and
part of that was my decision and part of it was my parents’ financial circumstances. I just had to
take care of myself and eventually I had to take care of my entire family. And so part of it was
a pragmatic thing. I enjoyed both equally and I think that's a tremendous luxury because sometimes
it's hard to even find the one thing that you really enjoy. And so I picked computer science.
And the story that I told myself about picking computer science is that there's a way to do
computer science that is artful, that is grounded in the humanities, and that I wasn't really
forsaking anything by making this choice. And, truth be told, I'm interested in a lot of things.
I drive my wife nuts sometimes with the amount of stuff that I'm interested in. So I think it's fine
having a vocation that you go and focus on and try to make yourself as good at it as you possibly
can and also like it's great for you, the person, and maybe even for the vocation to have these
other things that you're interested in as well. We'll discuss that later as well, Kevin, as I ask you to reflect, of course, on AI in a couple of questions, to think about
the way those historical roots between English as a language and technology and AI are kind of
coming together now with the so-called LLMs and other language models. So more to come, because
I think it'd be great to understand, actually, in your mind, in your brain, what prepared you for
that new era. So I think you wrote your first line of code back in 1983. Interestingly enough that
was the year I joined a French software start-up company. I was doing modest – because I was not
good at that – basic programming on an Apple II. It was more than 40 years ago. So today, now
shifting really to today actually… What would be your advice to a young 18-year-old Kevin, who’d
love to learn how to develop a piece of software without being able to go through a computer
science degree? Which studies, if any, should he consider? How would you go about that today?
It sort of depends on what stage of your career and your life you're in. I don't know whether I personally would have done anything different, because I like to understand
things as deeply as humanly possible. I've always been fascinated with tools. Young Kevin would
have looked at these AI systems and would have wanted to understand what's behind the scenes
of them and how they worked. But I think the advice that I would give people more generally is
like, find whatever it is that you really, really, really have a passion about, that is valuable to
other people and try to use these tools that you now have available
to you to go do that thing.
I have a 13-year-old and a 15-year-old who are thinking about what it is that they want
to be. My 15-year-old daughter has wanted to be a cardiothoracic surgeon for the past year. It’s
sort of a weird thing that she got fixated on when she was really young, because she watched too
much Grey's Anatomy, and a couple of her friends' parents at school were doctors. She had decided
that she's not a tech person, but her mind has been changed over the past
few years. The way that
she's using tools like ChatGPT and GitHub Copilot to help her solve these medical problems and
healthcare problems that she's really motivated by is quite interesting. I think that that's
the mindset. It's like all of these problems we need to go solve, all of these things that matter
and that are important, and we have increasingly powerful tools to go do them. You don't have
to be a programmer anymore to get the computer to do unbelievably powerful things for you in service of what it is you're trying to accomplish. I think it's a great story, and it reminds me of
one of the episodes I had with Sal Khan. Of course, he's been developing Khanmigo, an AI agent that is increasingly becoming – when you ask it questions – kind of your personal tutor, beyond just the course you are taking, and he even portrays a day when it could become a lifelong learning agent for all of us, so we could all keep learning a bunch of things differently.
We all learn differently, and we all learn at different rates. I was reflecting on this
actually this morning… I'm learning how to do ceramics right now. And the way that I learn is
like, I can go watch a bunch of videos and read a bunch of books and I can absorb it all, but
when it comes to the practice of the thing that I'm trying to learn, I have to actually go do
it, because I don't know how to prioritize all of this information I've stuffed in my head. And I think the way that I learn is, like how other people do it, idiosyncratic; there are other ways that people learn that are more effective for them than how I learn. And so this
idea that you could have an infinitely patient tutor that could adapt to your particular
learning needs very individually, I think it’s just a really interesting, powerful idea.
I think it's a wonderful analogy. Again, another very recent podcast that I had with a French
chef called Thierry Marx, who is well known in France and beyond. He's doing some amazing work socially as well. He is someone who loves talking about this philosophy of basically learning by doing, putting his hands into what he calls the katas
of cooking, the fundamentals of cooking. And this is the way he grew up as a kid, actually, in
the kitchens of big chefs. Anyway, so a very nice analogy, which I love when it comes to AI.
Let's go back to the roots of AI with Alan Turing, a pioneering figure in computer science and
AI. In his seminal 1950 paper, Computing Machinery and Intelligence, he asked the following questions: Can machines think? Can machines do what we, as thinking entities, can do? And he said it is possible to make a machine which will learn by experience. And obviously we know that his work
laid the foundational concept that continues to guide our research and development today. A
lot has happened since Alan Turing obviously, and
even his first test, over the past 70 years
– I can't believe it. What did you personally experience as the most stunning breakthrough
over the past years, and what has been the most defining moment in a way, if I can ask a question
in your own tech life, when you realized that we are really entering into a very different new era?
Maybe I could just pull back for a second and talk about the history of AI. We’ve had an
extraordinarily difficult time across the entire history of the discipline
of AI. So when
Turing wrote that paper, the term artificial intelligence hadn't been coined yet. It was five
years later, at a workshop at Dartmouth, when a bunch of these mathematicians and computer scientists
and information theorists sat down and said we want to build a machine that in many ways can
replicate what a human brain does and we’re going to call this study, this new field, artificial
intelligence. And we have had a really hard time over the entire history of the discipline
even
defining what artificial intelligence is. We’ve got this track record of we think that we
understand what an intelligence is like, what our own intelligence is, and then we go build machines
to replicate some aspect of it. And as soon as we accomplish the thing, then we redefine what the
measure of intelligence is. We used to think that the apotheosis of human cognition was these very challenging games like chess and Go. In 1997, IBM built a system called Deep Blue that beat
Garry Kasparov,
who was the world champion chess player at the time. And it was stunning, nobody
thought it was possible. As soon as it happened, we figured out that this really isn't the
breakthrough that is going to tip us over into this idea of artificial general intelligence; it solves a very narrow thing. Funnily enough, I think chess is now a more popular human
pastime and hobby. You could even think about it as a sport, given how competitive it is and
how much joy people derive from competing
and watching the competition, even though since 1997
there has never been a human chess player as good as a computer at playing chess. We just don't
care anymore. I think that's a really important thing to keep in mind. And to answer your exact
question, the most stunning breakthrough in my career I think probably happened recently. It was
when GPT-4 finished training, and this idea that we had – that following a particular path of developing artificial intelligence was going to result in an AI system very generally useful for doing a bunch of cognitive tasks – was borne out. That was a big a-ha moment.
I had a strong sense that it was coming, but I didn't know it was coming this fast. And
the thing doesn't think. Going back to your Turing quote, he said two distinct things. One: he's talking about thinking. The other: can we write a piece of software that emulates aspects of what our brains do? Absolutely, we can do the latter. But the former is like
almost
a philosophical thing. When I say, JP, what are you thinking about? How do you think?
That’s a very human question. It’s very different from what these software systems are doing.
Back just a couple of years ago – you just alluded to that – you were really at the inception point of the partnership between Microsoft and OpenAI. Because back in the spring of 2018,
I think when you first went down to meet them, obviously OpenAI already had a relationship
with Microsoft using Azure, and you met the team, including Sam Altman and also Ilya Sutskever.
What were your first impressions meeting those guys and the work they were doing at the time
before GPT-4, 4.5, Turbo and more coming up now? Well, I will tell you, I was sceptical going into the meeting. And my scepticism in general is just a personality disorder, not anything specific to the team. I'm always slightly sceptical of things that I haven't
fully understood. And sometimes, like, I will go fully understand them and I'm still sceptical
afterwards. But this was one of those rare instances where I went in and… I'd been building
machine learning systems at that point since 2004, like very large-scale things, doing very valuable
commercial applications of machine learning, and I was not prepared, going into that meeting, to have my mind changed about what was possible in the next handful of years. And I walked out of
it, like, absolutely convinced that we were on the cusp of something very interesting. We were
still years away, but they had a framework and a way of thinking about the problem that they
were tackling that was very scientific. There was a methodology like, here's the experiment
we're going to run. This is what we will learn from it and this is what it will tell us about
the experiment we will go run after that. I thought we had the foundations for a partnership
where we could do some valuable things for them to support this very disciplined development
of a very useful AI system.
And the way that it was going to be useful and interesting for
us was that it wasn't trying to solve a narrow problem. A lot of the AI systems before then were
built to solve a narrow problem, like play chess, play go, predict which ads someone is going
to click on. But this was: we think we're going to be able to build this system, and that, as a function of scale, it's going to become more broadly useful, and you're going to be able to build dozens and hundreds and thousands and millions of applications on top of it. As
a platform company, that's exactly the mission Microsoft has been on for almost five decades now.
Great clarity about that trigger in the history of AI and what you experienced yourself. I'd
like to come back to your book and the role you have at Microsoft, because I think
you've always been quite sceptical, yes, but actually quite optimistic as well. Someone
who's driving the positive side of innovation. I would say that's also my bias, I must admit, as
you know. But you also experienced early on in your childhood, as you said, poverty and loss of
industry in the region where you grew up. And so you understand well the need to invest in rural
infrastructure. So let's imagine that today, Kevin, you are now in charge of shaping the social
and economic development of Virginia that you know so well. What would you do to transform the
lives of the people in those rural communities, to reshape the traditional sectors like
construction, textile,
furniture, agriculture, and more? In other words, how would you reprogram
the American Dream in your home city? What would you do if you were in charge with AI?
I think the thing that you should do – and this is a set of things that we ought
but in the developing world and in all parts of the industrialized world as well. I think job one
is you actually have to have the infrastructure in place to support people using AI tools to solve
the problems that they want to solve. So part of it is like, do you have a platform available? Is
it open? Is it getting cheaper and more powerful over time? Can you operate it freely across global
boundaries? It means some very basic things, like one of the struggles that folks in my
community have is the internet just doesn't work well. My mom and brother live near a
local telephone building, like the exchange for Gladys, and they have very good internet.
And my uncle, who lives three miles away, has 300 kilobit per second internet, which was
great in 1990, and is pretty horrendous now. So you can't leverage AI if you can't connect to AI.
I think a bunch of the stuff you need to go do is like really educate people to be entrepreneurs
in a sense. So how do you take young people and expose them to this full palette of tools
that they have to use, and help them think critically about what interesting problems there are in
the world that they could put those tools to use solving. It’s a very different
educational paradigm than the one that we've had since the beginning of the Industrial Revolution,
where it was like: you, human being, need to learn these basic skills. You need to be literate. You
need to know a little bit of math. You need to have structure in your life where you can get
up in the morning and go do something for this number of hours and work in teams and understand
hierarchies and all of this stuff. That was a very Industrial Revolution sort of learning.
And that's not really what we need right now. Not anymore, no. So it's both infrastructure,
it's skilling, it's actually lowering the bar of access as well – not just free access, but also, I think, the level of confidence too. Because there's a lot of fear in many countries. I travelled the
world as well for a company, and you can still see a lot of confusion, fear and anxiety in just
getting access to AI. So I think there's a lot
to be done there as well in terms of education.
Not to interrupt, but part of my job – and I'd do this even if it weren't my job – is thinking about long time horizons. And so it's really hard for us to think past the month,
the quarter, the year, the next election cycle, whatever it is. We’re very short time-horizon
focused. But if you look over decade time-horizons, we've got some very challenging
things happening in the world. We have a warming climate which is going to impose a
whole bunch
of structural changes on the world. We basically have designed a world for one temperature regime,
and we're about to enter another one. We have lots of work to go do to adapt to that and to engineer
an energy economy that won't make the temperature regime worse than it's going to be. Maybe the
one that we don't talk a lot about is demographic change. Almost everywhere in the industrialized
world, we are either in population decline or the population growth is decelerating, and you can sort of see a point in the near future where you're going to tip into decline. I think the United
States and France are demographically two of the countries that are in the best situation. The last
data I looked at, France was decelerating, but still growing through the 2030s.
It’s actually getting almost to decline. Last year we had this pretty bad signal
in France. Still ahead of most European countries but, just like the US, I think we are almost the last ones still growing. Italy is in decline. Germany is in decline.
China's in decline. Japan and Italy in particular are in pretty serious decline. In
our lifetime – this isn’t like an imaginary time-horizon where I'm worrying on the behalf of
my children, which I do – but like we can also worry on behalf of ourselves. These demographic
changes in the world will mean that you just don't have a big enough workforce to go do the work of
the world, unless you have major breakthroughs in productivity – and major breakthroughs in productivity mean technology. Whether it's AI or something else, something has to happen or
we will have a profoundly different world than the one that we occupy right now. AI is currently the
best shot on goal for productivity that we have. I fully agree with you. It's fascinating to
see all that happening hopefully in our lives, as you say, Kevin. So I'd like to shift gears
a little bit. And you actually alluded to the times we are going through right now in 2024. This
is
actually the biggest election year in history, as you know, with countries representing
more than half of the world's population, like 4 billion people are going to send their
citizens to the polls. I was rereading a book I got from Bill Gates, of course, our founder and
first CEO of the company. He wrote this book in 1995 called The Road Ahead. He said, we don't have
the option of turning away from the future. No one gets to vote on whether technology is going to
change our lives. So, Kevin, do you agree that no one gets to vote for using AI in their lives?
They totally get to vote. I agree with Bill on a great many things, but on this particular point, I don't. The problem, I think, that you have in a democratic society is if the choices that people have to make are very brittle, then the votes can be really stark. The way that
I talk about this and I wrote about it in the book and I talk about it all the time and it's one
of the things that I love about Satya so much is, like, I think all the time about permission.
We get to do what we do because society has given us permission. We don't have a right to
do what we do. We currently have permission to do what we do and like the permission
can be revoked if we do it irresponsibly, if we are not listening very carefully to
what people are telling us. If you have that mindset you basically have more fluid voting
because, before you ever get to extreme votes, people are voting every day by telling you what they do and don't like, and if you're listening and adapting what you're doing, hopefully you get
into an equilibrium that people are happy with, that they feel confident about and hopeful
about. But you can't just jam stuff into society. In the limit, it really doesn't work that way.
Yeah. I fully agree with you, Kevin. And again, in my roles over many years at Microsoft, I have
been traveling to many governments and countries around the world so many times, and it was really
interesting to listen to the last State of the Union speech by your president, Joe Biden. He
said we must address the growing concern about AI-generated voice impersonations: I propose a ban on the misuse of this technology to create deceptive voice recordings, and sensible policies to manage the risks associated with AI, especially in areas like society, the economy and national security. So how should governments in the first place, and, as we just started talking
about, tech companies like Microsoft and our peers embrace
what we've called responsible AI?
What does it mean, actually, in a practical way? Responsible AI is sort of like responsible
governance or generally accepted accounting practices. It is about saying this is what
we believe responsible AI means. It is being transparent, like publishing what your guidelines are, and having a set of processes and controls inside of your company
that make sure that you adhere to the standard, and then being willing to engage with stakeholders
about evolving the standard and like having some degree of auditability of your processes for
adhering to it. I think every company that is developing and deploying this technology needs
a framework like this. The same way that you’ve got board audit committees and outside
auditors that are looking at your books. When the thing gets important enough, you need
some kind of framework, so that collectively, everybody can believe that you're doing the thing.
Some of the stuff that I think the
president called for is perfectly reasonable. One of the
things that we've been very, very hesitant about: we've had technology that can do interesting things with voices for a while, and for a while we have chosen not to deploy it, because the risk of people using it in fraudulent ways, in ways that far outweigh the benefits, seems like something we need to think carefully about. And maybe at some point you will do it, because
there are also positive uses for the technology. Like, one of the things that we've been doing in
my team is there’s this neurodegenerative disease called ALS that will ultimately cause, among
other things, people to literally lose their voice. And so we have been using this technology
to archive people's voices. So before they lose the ability to speak, we get enough of a
set of samples from their actual speech so that you could have an AI system give them their
voice back. That is an unbelievably powerful and beneficial thing. And so, yeah, with all of this
stuff, it's about the balances. It's like, what good does it enable? Like what bad does it enable?
How do you make sure that the good far outweighs the bad and that like when the bad is really
bad that you can prevent that from happening and very quickly detect and mitigate misuse.
I'm with you. And I think obviously not just as a company, but with the industry and others,
I think I've never seen, in my 40-plus years in tech, as much urgency, I would say, coming from governments themselves to regulate. It's happening in Europe, it's happening in
the US, it's happening in Asia. And I think we are all around the table to make sure
we agree on those guiding principles and then implement some tools and processes. And you
know that well, Kevin, because you are also the head of the Responsible AI Office, with some of your peers at Microsoft, to make sure that in our own ways of building AI tools, we do it in a responsible way. I don't know if you want to
touch a bit on that, on the way our customers
can benefit from those practices as well. Yeah. I mean, obviously we're a platform
company. And so wherever possible, when we're building tools and infrastructure to help
ourselves build products and to launch them and operate them safely and efficiently and all the
other attributes you want of products, we try to make that infrastructure available to customers.
We’re on the second version of our responsible AI standard that we have published. The new US NIST
AI framework that they have published is based in part on our responsible AI framework. There are a
bunch of companies and partners who are using our AI framework as a jumping off point for their
own. We have an increasingly sophisticated set of infrastructure that we're offering in Azure to
help people build and operate AI powered products responsibly. It’s a brand new toolchain. It's a
way of testing things. It's a way of monitoring things. And increasingly, the tools that
you use for RAI are powered by AI themselves, which is like one of the really interesting
things. The more powerful AI gets, the more powerful your RAI infrastructure gets. One thing
that I will say, like, since we're talking about policy and regulation, is that there's also… The
same way that when you're developing a technology, there's this balance between benefits
and harms that you're trying to weigh, there’s also positive and negative for regulation.
If you regulate too aggressively and too early, you may curtail the development of technology
that could be unbelievably beneficial. The thing that I… I think, again, this is all about
societal equilibrium here, like we have to do what we're doing in a way where we have enough societal
trust, where we can develop the technology and let people lay their hands on it and evaluate whether
it's good or bad themselves, so that we can inform the regulation that actually has to happen. If
you don't do that, you're kind of regulating into a vacuum: you're sort of imagining what a future might be, and you don't even know that it exists. You sort of risk two things, like regulating the
wrong things. You think very thoughtfully about regulation, you pass something and it doesn't
produce the effect you intended. Or you inhibit something that could be very, very valuable.
I think it was really great that you could share that. I would say, as usual with any innovation,
but really with this one in particular, it's really a balancing act between policies
at the highest level, but also then technology innovation, processes and people, people, people.
We’re going to dig into that, but obviously people are critical to the way you develop here.
Yeah. People, people, people is the right thing. That’s the most important thing. It's more important than the regulation. It's more important than the implementation details. If
you are not doing something that is valuable, that is serving the public interest, that is making everyone's lives net better, then what are you doing? You're honestly wasting
your time. You don't need regulation. You need to wake up and stop doing what you're doing.
Well, that's a foundational principle. I'd like to shift gears a little bit and use one of the favourite quotes of our manager, Satya. Satya likes poems, as you
know. And there's one quote from this Austrian poet called Rainer Maria Rilke. He once wrote that
the future enters into us in order to transform itself in us long before it happens. So, Kevin,
is AI entering into us? Is AI entering into humans for a better world or for a dystopian world?
Well, I think technology in a sense – I mean, there are many people who will disagree with me –
but I do think that technology is almost neutral. Obviously when a technology emerges, it absolutely
changes us. It changes us in pretty deep ways. If you just imagine, you and I for instance, how
we process the world and see the world is deeply influenced by a whole bunch of technologies
that have emerged over the past 40 years. My great grandmother, who passed away many years ago,
lived to be a hundred years old. She was born in the 19th century and lived through the first
stages of the first internet bubble. She went from a world that had no electricity, no cars,
no airplanes… She didn't have indoor plumbing. She was in a barely industrialized, kind of
Victorian world when she was born, and by the end, we had airplanes, space travel, satellites,
internet, mobile phones. So technology has an extraordinary impact on not just how we live
our lives, but how we perceive reality. Whether that's good or bad is up to us. It’s choices that
we make about how we use the technology, like what we decide to put it to. If you look over the
long arc of history, it has been, by and large, positive. And in a sense, it has to be, otherwise
you have no progress over very long periods of time. If you choose the wrong path every time, you
will collapse. We occasionally choose wrong things and then we course correct. We’ve been building,
building, building to a world that is so much more prosperous than the world that you or I were born
into, or certainly that my great grandmother was born into, where there’s less violence, less
disease… All of these things, violence, disease, suffering, still exist. It's just that there's much less of them now than there was decades ago. And the question we have to ask ourselves as people who build technology is, are we doing the best job that we possibly can to
build technology in a way where 40 years from now, our children and our grandchildren will look back
and say, wow, this was beneficial, this mattered, this made all of these things that we care about, like justice and the human condition, better, not worse. We'd better get it right.
Absolutely. That's the big question. There's no amount of financing that could be attached to this question. I'd love us to continue – because we're almost coming to an end – with more
positive scenarios in mind. You mentioned already a couple of great examples like ALS. I
could refer as well to eyesight. I could refer to a wonderful visit to India 2 or 3 weeks ago, and this wonderful multilingual platform they built, which Microsoft Research has actually been partnering with them on. They actually have 270 languages, 22 of them official, and it enables a farmer in a remote village to speak his own language, to be not just translated into English and Hindi, but to connect, using a Copilot-like ChatGPT interface, and get access for the first time ever in his life to the public subsidies he was actually entitled to. And it can apply to farming, education.
What are you most excited about with regard to applications of gen AI when it comes to the most positive breakthroughs for society? Yeah, I think there's so much stuff there. One that you mentioned in particular – I think the pattern is really interesting. So one of the things that
modern generative AI ought to be very good at is helping everyone navigate the complexity of the
world. The example that you gave is a good one. We’ve been doing some work with an organization
in the United States called Share Our Strength, whose mission is to feed kids. It’s just sort
of amazing how much childhood hunger there is and what the knock on effects of childhood hunger
are. A tremendous amount of what gets diagnosed as ADHD is actually just kids being hungry. When
you're hungry, you can't focus, you can't learn. It just sort of becomes a snowball effect in
people's lives. And so the thing that we're doing with Share Our Strength is we're trying to
figure out how to use generative AI to help people navigate the entitlement programs that already
exist, that already are funded, the money has been appropriated and it's sitting there unspent
because people either aren't aware of it or can't navigate the bureaucracy to sign themselves up
for it. And this
ought to be a great thing for generative AI to help with. I think that that
as a pattern is really, really extraordinarily powerful. I think educational equity is another
thing. My daughter goes to a very good school here in Silicon Valley with excellent teachers.
My school did the best that it could do back in the 70s and 80s in rural central Virginia. But if
I look at my school versus my daughter's school, there’s not even a comparison. The thing
that I think AI could do is close that gap
to try to make a higher quality of education and
learning and enablement more equally available, not just across the United States, but across the
world. I think there's some truly exciting stuff happening right now, where the fundamental pattern driving progress in AI, this notion of self-supervised learning, that you can transform compute and data, and increasingly just compute, into AI systems that can solve very complicated problems, is applicable to more than just language. And so we're doing some really
fascinating work at Microsoft Research on how you can apply these techniques to physics and
chemistry and biology to help with building the next electrolyte for energy storage or designing
a therapeutic molecule that will cure a disease. This is, I think, underappreciated, because it's more complicated for a layperson to perceive the progress there versus what's happening with language agents, linguistics-based agents. The progress is extraordinary.
These are not little steps we're making. In some cases they are just unbelievable, the biggest jumps forward in progress that we've ever seen. It's an exciting, exciting, exciting time.
I share the excitement with you. Of course, I'm lucky enough being part of Microsoft to see
some amazing, mind-blowing developments going on. Let me finish with a last couple of questions, Kevin, with even more positive stories, because that's the core, the spirit, of my podcast, as you know.
I know that with your wife, Shannon Hunt-Scott, you started the Scott Foundation back in 2014 with a desire to give back to the Silicon Valley community, where you work and raised your family. I think the initial focus of the foundation was on supporting leading-edge organizations addressing critical needs such as childhood hunger, early childhood education, and women and girls in technology. So what is it that you and your wife are the most proud of?
And do you see yourself more and more involved in such foundations, like our friend Bill has done for the last few decades of his life now? I think what Bill has done with the Gates
Foundation is really extraordinary. The thing I think that makes their accomplishment
extraordinary is they've just picked a handful of things to focus on, and they have driven very intensely to try to solve them. The work that they've done in public health is just extraordinary.
They have chosen these acute problems that are not getting solved fast enough, that don't have a natural mechanism to get solved. If something doesn't change, they're going to be just as bad or worse 50 years from now. So the thing that we focus on at the Scott Foundation is trying to identify and relieve structural poverty cycles, things that lock people into intergenerational
poverty. And the reason that that's our focus is because both my wife and I grew up not terribly
privileged. We had some privilege, because even though my dad went bankrupt a couple of times
and we had a lot of financial hardship to go deal with, we're still more privileged than someone who
was born in equatorial Africa, for instance, in the 1970s. I think you always have to appreciate
what you have, but I think both my wife and I, in a sense, look at ourselves and we're like, wow.
There are a handful of things that happened to us that, if they hadn't happened, we would have
had very, very, very different lives. The point of the foundation is: what can you do to go engineer a helping hand, to go engineer some of the good luck that my wife and I had, so that you don't have to leave so much to chance in getting people to snap out of a poverty cycle? We try to
invest in organizations that are entrepreneurial, where they’re thinking about how you can
take an investment and, with leverage, with tools like AI, for instance, go tackle a problem and get a big benefit. Yeah. That's wonderful. My very last questions
– two, if I may – one very quickly. In the positive leadership philosophy, we learn how to be self-aware and how to build our self-confidence, but also to build our own positivity, which
I think is super important, by the way, in the way we show up in a way we connect with
people. So what are your daily routines, Kevin, or maybe habits you have from time to
time to grow your positive leadership with your colleagues, customers, and all
the people you connect with in your life? So there's a bunch of stuff, actually. Since we don't have a lot of time, maybe I'll just sort of say the most important ones. I think
it is really important for your own positivity and the positivity that you project in the world to
be grateful. No matter how crappy your day is, to ground yourself in “what's one thing that I can
be grateful for today”? The reason I think it's so important is because once you start feeling
gratitude, there's almost a snowball effect to it. Once you've felt the first thing you're grateful
for, it's easy to see all of
the other things that you should be grateful for, and it helps you be humbler. I think it's one of the foundations of having compassion in your life, of being able to put yourself in someone else's shoes – not feeling what they're feeling, but just trying to understand what their point of view is and why they might be doing what they're doing, even if your knee-jerk reaction is, this is irritating. There's always a reason for everything, so just a little bit of gratitude can go an awfully long way.
My very last question, I promise you, Kevin, because, I mean, you are still a young
man, at least relative to me. Everything is relative in life, I think. It's a bit
early to be thinking about your legacy. But perhaps what story would you like people
to tell about you, Kevin, in the future? What kind of mark, what impression, would you want to leave behind? I don't know. Honestly, I'm deeply uncomfortable with anybody paying any attention at all to what I'm doing. I would like the opportunity to work
with people who care about what it is that they're doing. The thing I'm often proudest about is this: I just got a note this morning from a friend – someone that I hired 19 years ago, actually – they sent me a note on the 19th anniversary of their start date at this thing that we were both doing back then and thanked me. I should be thanking them, because the thing that makes me feel the best
about what I've done is having some tiny little impact, a positive impact, on someone's career.
Just seeing the things that those people go do after we are no longer working closely together
just fills me with joy. I kind of don't care what everybody thinks of me, but having those people that I've worked with feel like it wasn't a waste of time working with Kevin Scott – that's what I would like. And obviously I care a lot about my family. I want
to support them in their ambitions and dreams and have my wife be successful and my children
be successful and do good things in the world and believe that they had a husband and a father
who supported them being their best selves. That's a wonderful way to close the podcast,
Kevin, it's been wonderful to feel your gratitude, to feel your joy as well, to feel the vibes of
AI innovation opening to wonderful things in the world while being very conscious about our
responsibility for sure. So thank you so much, Kevin. I have tremendously enjoyed, of course, our partnership in the company, but also, as a friend on this podcast, you have been incredible. Thank you. Thank you very much for having me on.
And thanks for putting positive energy out into the world. Thank you, Kevin.