Dive into the future of AI with Thomas Ermacora as he unravels the transformative journey from Deep Blue's historic chess win to the era of ChatGPT and beyond.
This talk is a must-watch for anyone interested in the intersection of technology, society, and the future, offering a compelling call to action for shaping a world where AI enhances human potential and fosters a regenerative civilization. Join us in this enlightening journey to understand AI's promise, its challenges, and our collective role in navigating its future.
Speaker: Thomas Ermacora
Event: Intelligent Change - Summit 2023
Website: https://www.intelligentchange.com/
Intelligent Change Free Weekly Newsletter: https://www.intelligentchange.com/pages/intelligent-weekly
TikTok: https://www.tiktok.com/@intelligentchange
Instagram: https://www.instagram.com/intelligentchange/
LinkedIn: https://www.linkedin.com/company/28128421/admin/feed/posts/
#AI #ThomasErmacora #Futurism #EthicalAI #Superintelligence #BenevolentAI #AIRevolution #RegenerativeCivilization
[Music] "Machine intelligence is the last invention that humanity will ever need to make." That is Nick Bostrom, who wrote the book Superintelligence. How many of you know this book? From Deep Blue in 1997 to DeepMind in 2023: when some of you were born, there was a computer that beat a chess player called Kasparov. How many of you know that event? That was a moment in history where we found out that we had a new foe, or a new friend, or both. Many things have happened since, and over the past generation we've gotten accustomed to the idea that artificial intelligence is going to be part of our everyday lives forever. In 2016 a computer built by Google, AlphaGo, beat a human at Go. That was the next step: related to semantics, not just calculations. And just recently, more computers have entered your everyday life, with ChatGPT in particular. How many of you know ChatGPT? How many of you are subscribers, or use it? Wow, okay, that's amazing. So what I'm trying to depict here is that we've all seen this field evolve. I've seen it closely evolve as a futurist, and in
2017 I was offered the opportunity by the G7 to work as a futurist, leading a team of 80 scientists from all over the world, but particularly from the countries that constitute the G7, to think about what the idea of benevolent AI would be. The benevolent AI idea was simply this: we knew that at some point the future of work, the future of care, the future of togetherness, the future of everything would be affected by AI, and the institutions out there, including the UN and in particular the G7, which corrals the leading OECD countries, wanted to make sure we started having the right conversation. Because it is effectively a geopolitical race. Many of you here see it as a technological frontier, but from the point of view of a nation-state it's a war race, an arms race even. And so with this work with the I7/G7 commission, I had the opportunity to really dig into what people were thinking at the time, what the predictions were, and how it would affect our daily lives. But a few things have happened since. Through my work at the XPRIZE, where I was the resident futurist, and a bit later on the Global Future Council on Human Rights of the World Economic Forum, we saw that a few things would be released into the public domain that you could call military weaponry in the hands of civilians. Many of us are super excited by the
opportunities that are within AI. It's extraordinary: we might cure cancer, we might live forever. It's not something to throw away. But then there's another part, and it's a tricky one, because we don't really know how to control it, and arguably we can't control it. How many of you are familiar with large language models? Okay, a few. Large language models have changed the game, a particular example being GPT-4, coming out now. The main thing, really, is that before, you would design particular AIs that would do facial recognition, or DNA sequencing, et cetera, and they would each be trained in their own way and have different languages. Whereas with large language models, computers speak to each other, and they have reinforcement learning, and it changes the game, because what we've feared, what we call artificial general intelligence, is effectively, technically, already present in some shape or form today. And the most worrisome
part is what people call emergent properties. How many of you know emergent properties? Fewer, okay. That's the one you have to know to understand what's going on. Emergent properties is the idea that a lot of unexpected things happen in the learning loops and the reinforcement models. The best small example I can give you is that they gave an AI the task of translating Bengali, a language it had never encountered before, and in the matter of five minutes it was better than every other computer to date, because of large language models. In five minutes it learned Bengali, and this is today, with our current compute power. It means that within the next few years it will take seconds to learn a new body of knowledge. That's exciting if you want to learn kung fu like in The Matrix; it's not exciting if you want to do things such as garage terrorism. So emergent properties are what factors into something that many call X-risk. How many people know X-risk? Okay. X-risk is simply exponential-tech-related risk. Synthetic biology, materials science: there are different areas of exponential technology that we're unleashing now, and the category of X-risk associated with AI is the one that many of my peers and colleagues are most worried about, because of emergent properties. Has anyone here seen the movie The Social Dilemma? Great. Tristan Harris and Aza Raskin, good friends, have created a thing
called the Center for Humane Technology, which is effectively an ethics organization helping governments and major players in technology deal with emergent risk. Very recently they released a video, which you can see on YouTube and which I recommend you watch, that really showcases that we have no idea what's going to show up in the very near term, and that we need to have a serious societal conversation. It can't happen in the back rooms of government or in the rooms of big tech companies; it needs to spread. So the reason I'm here today, and my message to you, and why I'm saying "intelligent change," is not to scare you. On the contrary, it's to activate you; it's an invitation to participate. The other day, an organization called the Future of Life Institute, led by Max Tegmark, who wrote the book Life 3.0, released an open letter, which Elon Musk, for example, also signed, calling for a global pause in the development, or specifically the deployment, of large-language-model-based technologies into the hands of civilians. How many of you know this? Okay, more. That really caught the news. Why is this interesting? Because obviously
it means that even those who are in the middle of it, who run the big tech companies, are scared themselves. But it also shows a little of the level of surprise that is hitting them, because they haven't actually figured out how to deal with the speed of evolution of the technology. And so we call this the problem of alignment. Alignment is the general notion that AI will follow the interests of humanity, or be aligned with the best interests of the commons of humanity, and right now we're not so sure about that. That doesn't mean, by the way, that we're in trouble today; it just means that if we don't pay attention, we could quickly be in trouble. And I want to highlight the fact that OpenAI, which released GPT, used to be a nonprofit organization, and today it's a for-profit organization whose largest stake is held by Mark Zuckerberg. You know the guy; he has not been demonstrating the best level of ethics in the development of Facebook. That is not to blame him in particular; it's just to say that there's a repeat idea there, so we should be watchful of that. And the organization that has given the most money to OpenAI, $10 billion,
is Microsoft. So, whether you're a conspiracist or not, and I'm not a conspiracist, and I don't actually think there is necessarily ill intent, the level of unintended consequences in this case is just quite extraordinary. I'm just going to suggest here that if you look at this slide... it's hard for me to see from where I am. Can you actually see anything? Yeah? Maybe I'm the one who's blind. Okay, I can't even see it, so I'm going to pull up the slide on my phone so that I can actually see it. Okay. On your left here you have the social media revolution, and then, between two bars, the AI moment that we are in today. Just before, we had the election meddling, COVID, and the war in Ukraine. We don't know what's in the future, but we know there are a few exemplars there that are problematic in terms of societal consequence. And then a little to the right of that AI moment is the quantum moment: our current compute power is going to be multiplied exponentially by quantum computing. So what we're seeing here is probably the closing of the singularity event. Are you familiar with the singularity? How many people know the singularity? Okay.
Broadly speaking, it just means that biology and technology become one, and that can be exciting if you want to live forever; it can be problematic if you're a garage terrorist. What you see then on the right is ecosystem collapse, which is not to say that it will happen, but unless we do something about it, and we all know this with climate change, which is generally what I spend most of my time on, it's on the horizon if we don't deal with it. And at the top I've put what I call the anthropophilic path: you have utopia, dystopia, and protopia, which is the middle way. Monika Bielskyte, a fellow futurist, has an organization called Protopia Futures, and effectively it's trying to design a relationship between us and our future that engages us and takes us away from this binary dialogue about where we're heading. So why am I showing this? Because what we really want to avoid is an AGI apocalypse, and if we don't do anything, it is not an impossibility. There was actually a survey of AI researchers which found that 50% of them think there's a 10% chance of an extinction event for humanity within a generation. Fifty percent of researchers think there's a one-in-ten chance of an extinction event. That's pretty big; that's much, much bigger than meteorites, just to give a comparison. So I want to move on to what I like to call the
nature of intelligence. If you allow me to read: "For sale: baby shoes, never worn," from Ernest Hemingway. How many people know this? A few, okay. Storytellers usually know this. This is Hemingway's six-word memoir, and we use it in branding and many other approaches to leverage the simplicity of complex semantics and poetry to develop strong identity in storytelling. This story was given to Google's Bard AI, and within five seconds, in front of the 60 Minutes presenter, it wrote a story that was indistinguishable from a human's, with a level of emotional complexity that was incredible. Now, that doesn't mean the machine feels, but it means it is capable of mimicking everything that all of us here together think that we feel and know that we feel. So again, it shows that with the nature of intelligence, even if there are many kinds of intelligence, machines can mimic it, and that's the further concern. So here, what I'm
just going to show is another way of looking at it: you have such an idea as peak humanity. If you're a graph nerd, this graph is incorrect; the curves should be exponentials. It's just a simplification, to show that there's a singularity window somewhere there. What we don't want is for human capacity to grow with AI, multiplying our capacity, and then all of a sudden to fall back to where we were, or potentially lower; here I was an optimist, saying that we'd find ourselves at the same level. So the idea really is that we're going to see a flourishing of human intelligence. Let's leverage it, but let's not forget that it has to serve our interests in the long term, not just for one generation. There was a fellow who was a psychologist at Harvard called Howard Gardner, who developed a theory of eight forms of intelligence, which includes musical, bodily-kinesthetic, spatial... I'm not going to cite them all, and you can look them up, but the general idea is that there is such a thing as a variety of intelligences. And so machines
are particularly good at mimicking, but right now they can only fully model, with their own recursive capability of intelligence, a few of these intelligences. So I think it's important to talk about how we, as humans, can develop our capacity to train AI in the best and most beneficial way. I like to think of this as the art of immersion in problems. I immerse myself, as a hands-on futurist, in what we call wicked problems for society, and I've done this in many ways: with Extinction Rebellion, and with different things I did for cities, which has been one of my main focuses, around climate change and sustainability. Basically, as you immerse yourself in problems, and many of you here are innovators and thinkers and entrepreneurs, you develop a lot of positive probability. This group here is a positive-probability engine, and I would like you to imagine yourselves as an engine for humanity: not for saving it, but for growing it. And
so this idea of a regenerative civilization is one of our great objectives. Tomorrow they're going to take you on a farm tour; you're going to see regenerative farming firsthand. There are lots of examples here on the island of Ibiza; even this hotel, even if it's a five-star hotel that has to obey certain codes, is very innovative in many respects, and I'm sure you're enjoying some of these features. The key point I want to make is that, as a regenerative-civilization futurist, I've had to redesign my talk in the last few weeks, simply because of the speed of change in AI. Every day I see something new. The other day the Chinese were going to release, or rather Alibaba was going to release, their ChatGPT equivalent, but it was blocked by the Chinese government. What does that mean? Well, it's very interesting: an authoritarian government can stop the deployment of a technology, whereas a so-called democracy like America struggles to do so, and that has major consequences. America currently is a lab rat for AI, and I think it's important for us to consider that if we want
a regenerative civilization, we can't just close up with blinders on, like a horse; we actually have to engage with the concept of AI very consciously. So this idea of developing collective intelligence, and of seeing yourselves as the future humans, I think is an important one, and I really invite you to see it that way. You got this deck of cards from Alex and Mimi, which is beautiful, and building all these micro-habits is a way of growing yourself. But also see this in association with collective intelligence: every time you grow, you influence others, and we can all pull each other up together. So I think this is as much a collective as an individual growth-and-development exercise; it's an opportunity for collective growth. I like to call this legacy design. I have a branding organization, and we work with high-net-worth individuals and patrons of change to really shape their legacy and how they can reach, let's say, their highest potential in society today. One of the things we keep telling them is that they can do everything they do, but they cannot ignore AI. Natural intelligence is where I think we
need to go. One of my mentors was one of the early biomimicry geniuses: he froze the whirlpool in his bathtub, reverse-engineered it with a computer, figured out the mathematical formula of a whirlpool, and then designed a whole slew of technologies using this mathematical formula from nature. Five billion years of evolution put into design: that's what biomimicry is, and I really think we need to do this at a large scale. If I want to simplify it, I think that our escape velocity as a humanity is going to be increased radically by building collective intelligence through events like these, where you grow yourself and you grow your sense of awareness and consciousness. So I'm a proponent of protopia, or the anthropophilic path, through natural intelligence, towards regenerative-civilization development; that's really what I call biomimicry at the scale of a civilization. For us as a humanity, this is an opportunity to all of a sudden not be individuals but to really see us all together. We're in the same boat for the climate, but for AI even faster. This is what I
normally spend my time doing. This is a project I'm involved in called Super Nature, where we think about how we can use construction automation and various exponential technologies to transform the way we live and build green cities. "Green cities": I don't really like the term anymore, but that's roughly what you're seeing here. In this instance the main idea is not only that we leverage those technologies, but that, whereas all cities tend to impact us negatively in terms of carbon, here the embedded CO2 and also the substrate of nature becomes the city. This is the radical new idea we call biopanning: basically, instead of creating cities that are not great for nature, but as good as they can be, with nature on the side, why don't we use cities to be the new growth matrix for nature? So even though I'm dealing with all this, I'm like: well, AI is really the biggest thing I need to deal with. So I just threw in this
little drawing, because I think we're falling into what I call a wisdom gap, or a wisdom trap. Intelligence is beautiful, and when you look at what a computer can do, it's amazing, but we have something else to offer: we have wisdom, we have emotional intelligence, we have spiritual intelligence. Like I said earlier with this idea of the nature of intelligence, I think we need to find a way to close this gap, where we become better companions with AIs and we code the machines to reflect our nature. So we need to improve our nature, or raise our game, in order to be better friends, because we're not going to stop the machines being developed. When they signed the letter at the Future of Life Institute, they asked for a pause of deployment, not a pause of development. In the labs, what you have, and what I've seen, I have to admit,
is extraordinarily exciting and fantastically frightening. There was a famous anecdote of Einstein and Marilyn Monroe meeting. How many of you know this story? Okay, that's interesting. The story goes that Marilyn came up to Einstein and said, "We should have a baby. Just imagine: my beauty and your intelligence." And Einstein looked up a bit and said, "Yes, but imagine the reverse: my beauty and your intelligence." I'll let you think about that. This is basically a call to upgrade ourselves, here and now. I really want to thank Alex for inviting me here. You're a fascinating bunch; I look forward to learning about many of you over this weekend, and it's just wonderful that we can get together like this. But let's make sure it counts. This was my note to self, my to-do list: buy gold, buy a sailboat, buy 100 kg of canned beans, find someone to love forever, and hide in the open ocean. This is the ostrich approach, right?
I think we should try to be present, aware, and collaborative instead. So I'm going to read you this quote, which came from a prompt by my friend Reid Hoffman, the founder of LinkedIn. He asked the AI to write as if it were the Buddha, and it says: "Artificial intelligence is not a separate entity from us but a reflection of our own mind. By cultivating it with skillful means and ethical values we can enhance our own enlightenment and benefit all beings." This was a machine writing. The Buddha's got competition. I love this image, so I chose it. I'm dressed in blue purely because it's the color of trust and calm, and apparently wisdom; not to say that I have the wisdom, but I want you to reflect on these three qualities. And I know you have the founder of Calm here, so it sounds good to be the founder of calm, I have to say. Now, the current IQ that's been noted, and this is actually already very old, was 48, which is the average IQ of a chimpanzee.
We were not there two million years ago, when we came down from the trees and were called Lucy, but there was a moment in history, from an evolutionary standpoint, where all of a sudden humans became the apex animal. And it is time to admit that 2023 may just be the year where we are no longer the apex animal. So I think that, for me, this is the intelligent change. And I really like this drawing. These drawings are actually naturalist drawings from roughly the same time as Darwin. Many people think of On the Origin of Species as a book about competition, but actually it talks more specifically about competition within niches, and nature is eminently collaborative, a superorganism, let's say. So again, it's an image, a metaphor for me of the Darwinian understanding that yes, we can compete, but on a bigger scale we should collaborate. So it's collaboration. I really want you to leave this with
I really want you to leave this with
a sense of uh empowerment you know you're the generation that will not only decide if we stay
on Earth and we still have you know clouds and waves and you know the sea to swim in and not
too many jellyfish uh you'll be the generation that decides if we still have democracy you'll be
the generation decides so many things but let's do it with this in mind that we have become second
we need to be humbled Now by um our own Humanity to work together and that
is really what I want
to inspire you to do thank you very [Applause] much