
Report: AI Is Going To KILL Us All

A report obtained by TIME says AI's risk to national security can cause an "extinction-level threat to the human species." Cenk Uygur and Ana Kasparian discuss on The Young Turks.

Read more HERE: https://time.com/6898967/ai-extinction-national-security-risks-report/

"The U.S. government must move "quickly and decisively" to avert substantial national security risks stemming from artificial intelligence (AI) which could, in the worst case, cause an "extinction-level threat to the human species," says a report commissioned by the U.S. government published on Monday. "Current frontier AI development poses urgent and growing risks to national security," the report, which TIME obtained ahead of its publication, says. "The rise of advanced AI and AGI [artificial general intelligence] has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons." AGI is a hypothetical technology that could perform most tasks at or above the level of a human. Such systems do not currently exist, but the leading AI labs are working toward them and many expect AGI to arrive within the next five years or less."

The Young Turks


A new report commissioned by the United States government alleges that artificial intelligence poses an existential threat to humanity if not properly regulated. And it won't be, it won't be properly regulated. So just buckle up, because we are now facing an existential threat to humanity. Yeah, guys, I want to double down on what Ana just said. There's no way we're going to do the regulations that they suggested. So listen carefully as Ana tells you what the threat is, because apparently it's real, which is very scary. - I'll give you my thoughts in a second. - Yes.

So, more on a government-commissioned report that's basically a waste of resources, because the government has no interest in using the information in the report to do the right thing. The report was written by Gladstone AI, a four-person company that runs technical briefings on AI for government employees. The authors worked on the report, titled An Action Plan to Increase the Safety and Security of Advanced AI, for more than a year. They worked on it for more than a year, and they spoke with more than 200 government employees, experts and workers at frontier AI companies. Okay, so at the end of their investigation, they concluded the following: the US government must move quickly and decisively to avert substantial national security risks stemming from artificial intelligence, which could, in the worst case, cause an extinction-level threat to the human species. Okay, so that's what's going to happen.
Like, just see this as, step by step, your future. This is what it's going to look like. Okay, there are two giant threats. I think one is a much bigger deal than the other, but the one that's not as big a deal is definitely going to change the world. - So go ahead. - So they write that the rise of advanced AI and AGI, which is artificial general intelligence, has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons. And if you're wondering what AGI is, it's something that's currently being developed. It is not developed yet, but AGI is, like, technology that could perform tasks at or above the level of humans. And since these systems don't exist yet, I don't want anyone to think that it's not a threat. They're expecting to have this fully developed in five years or less. So it's coming.

The report also focuses on two separate categories of risk, as Cenk alluded to earlier. So let's talk about what those two types of risk categories are describing. The first category is what it calls weaponization risk. The report states such systems could potentially be used to design and even execute catastrophic biological, chemical or cyber attacks, or enable unprecedented weaponized applications in swarm robotics. Fun. - I'm going to explain that in a second. - And then there's the second category, what the report calls the loss of control risk, or the possibility that advanced AI systems may outmaneuver their creators. There is, the report says, reason to believe that they may be uncontrollable if they are developed using current techniques and could behave adversarially to human beings. - By default. - Yeah. Okay. We're definitely in trouble.

So let me break it down without doing, like, histrionics, etc. But look, you can begin to understand through common sense how some of this might play out. So in terms of the weaponization risk, that's the one that's much more real.
Will they at some point weaponize AI? Of course, they weaponize everything. So you think they're not going to make... well, they're already making killer robots. We did a story on this a while back. The Pentagon, hilariously, is claiming, no, no, we wouldn't use those robots to kill people, even as they're beginning to put weapons on them. Of course they will. You know, look, not every country, but a lot of countries are already using drones to kill people. And that doesn't have artificial intelligence, but it is some mechanism that's killing from afar. And they're using it in Gaza today, for example, Israel is. When those 112 hungry people were killed as they were rushing for the food, it was both tanks and drones that were killing them from the sky.

So now the swarm robotics within the weaponization category is what could change the entire military landscape of the planet, because this is something that I've been talking about for a while. And I always give credit to Wes Clark Jr., who brought this up a long time ago, way before I came around. He said, well, look, eventually the Chinese are going to put up a thousand drones and they're just going to run them into our jets, and then we're not going to have any air superiority, which is how we mainly maintain control over the whole world. Right. But now, with swarm robotics, it's possible that our entire Pentagon is useless other than the nukes, because we... - Stop funding it. - That's what I was thinking. I think that might be the one. But then they're going to fund this, because imagine a swarm of drones with artificial intelligence attacking an aircraft carrier, and they're all loaded up with bombs. What's the aircraft carrier going to do? It's a sitting duck. Okay. And imagine now Iran has them. Why can't they have them? They can have them. It's not that complicated. So now, oh, there goes your control of the waters, let alone the skies.

So this stuff is headed down the pike. I mean, maybe it takes us five years and maybe it takes China seven, or they could develop it before us. Maybe it takes Iran 12, but they're all going to get it eventually. So it's very likely going to change the entire landscape of the military, at a minimum. And these guys are saying this is the biggest national security threat, and this is commissioned by the government. And they were part of the original people who were part of funding, not funding, founding AI in the first place.
So they're the OGs of AI, they're not some randos that don't know what they're talking about. Now, the weaponization part is the less important part. Loss of control means we're done for. So that is a thing they cannot screw up. If they accidentally program them, especially the ones that are weaponized, to preserve their own life or their own entity over us, and they're smarter than us, well, then we're in a world of trouble, because they're definitely going to be smarter than us. And by smarter, that's a very loose word, but they're going to be able to access more information, they're going to be able to make better decisions. And us versus them, if they're weaponized and we lost control? Good night, Irene.

And so that one is, I hope, less likely. I hope, I hope, I hope. But remember, it's already happened once, although no one thinks of it in this way. You know, we did create machines we lost control over. Those are called corporations, because we wrote the wrong code. We wrote only one line of code here in America: maximize profit. And we didn't give a second line. So the corporations took over everything in an effort to maximize profit, and now we all live under corporate rule. All of our politicians are controlled by corporate donors, and they pass only things that help corporations, and they crush us and our standard of living and the life that we enjoyed here in America. So those robotic machines already exist. But now, when you make them physical, that's another round of hurt that's headed potentially in our direction. And if you're not scared enough yet, I can tell you why there's no chance we could stop them. - Now, I'm sure. Go ahead.

- Okay, so these guys suggest regulations, and you could debate whether the regulations they suggest are exactly right or not. I don't think that's the important part. The important part is there's no chance they're going to pass.
Why? Because right now, Silicon Valley is in a mad rush to get the best AI first among the different companies, Meta and the others. Right. And one of the main reasons is Nvidia, a relatively new company. Their specialty is AI, and they have skyrocketed into the top seven wealthiest corporations in the world. - Oh, the Pelosis know. - Yeah. So everyone's trying to be Nvidia and trying to beat Nvidia. And there is a world of money involved here, past billions, okay? So now, you think you're going to get in the way of all that money? There's no... the corporations already control government. There's no way they would allow the government to not allow them to make billions or maybe even trillions of dollars. They're going to steamroll any politician that dares oppose them. So there's no way they're going to pass these regulations. And remember, they're also competing against the Chinese and other countries, so they have a super easy excuse: well, if we don't do it, the Chinese will. And that'll be the fig leaf that the politicians use to make sure that they don't get in their way as they create these AI apparatuses. Yep. Why did you tease that story for the bonus episode? - But okay, so we're done with it. - Yeah, definitely done with it. Don't get too depressed, man. We die. We die too late.

But you know, the real competition is between AI and the super babies. You know, I've been talking about the super babies for a while. Let's bring in the super babies. I prefer the super babies, you know, because the super babies are at least kind of human. And they are human. They're just going to do, like, genetically modified babies that are, like, better than us. Okay, who cares? Let's move on with our lives. Everyone's, like, panicking over it. Like, why are we panicking over super babies? - You know why? Because I think we need a little bit of help. - The human race needs a little bit of help. - Well, the thing is... okay, now, we did this serious story, now we're into fun, crazy speculation, just so you're clear on what we're doing.
Okay, look, the super babies would be a different class, since they would be considerably smarter than us. We're going to catch feelings. That's the nature of this age. - Oh, I would prefer to have more people who are smarter than me in the world. - I know, but you're... but you're alone. You're not alone, because we have the TYT community, but the great majority of human beings are going to catch feelings, and they're going to want to tear them down. And since they're smarter and stronger than us, they're not going to want to be torn down. And then we're going to get into a conflict. Okay. Yeah, yeah. Okay. But we might need the super babies to attack the super AI that's coming. So that's the future you have to look forward to. - I wonder what my super babies would be like if the technology was already there and I could afford it. - See, even she's tempted to do it. I'm telling you, the super babies are on their way. - No, I'm excited. - Maybe they'll save us. - I'm excited about the super baby situation. I think we all should be. Okay, but the AI robots getting together, getting smarter than us, and then killing us? - I'm worried about that a little bit. - Yeah.

And, super last thing on this, guys: things evolve. Evolution is real. So, like, we think this version of humanity is going to be around forever and ever and ever because we're so freaking special. - I hope not. - Yeah, no, we're likely going to evolve. And the way that we would evolve in the modern world is through changing our genes, like physically, literally changing our genes. - I do it every day, Jake. - Yeah. And it's not like we haven't done it before. We did it to wolves and we turned them into Charlie. - See, so maybe it could be positive. - See? So that's headed up. But don't freak out. It's okay. Whatever will come will come. We'll deal with it then. Stick together and we'll be okay.

Comments

@Nuggetwill

AI is already an extinction-level threat to some people's careers.

@beansnrice321

TYT really needs a tech or science correspondent because they just struggle on topics like these. At least Cenk's trying but Ana never helps. Also Nvidia is pronounced Envidia, not Nevidia.

@NikoKun

There's another MAJOR problem with AI most people seem to be overlooking, and it's one we cannot stop: its impact on jobs in the next few years. Current predictions suggest we could see anywhere from 25 to 60% of all workers outcompeted by AI by 2030, with higher numbers beyond that. Even if we only see barely that lower number, that's still unsustainable and demands significant economic changes. That means there won't be enough new jobs for everyone, and we need to rethink the requirements our society places on the survival of individuals! Adapting our outdated system is our only option. AI exists thanks to years of data and content collected from all of us that goes into training it, so it should benefit everyone. Demand YOUR AI Dividend!

@mr.e5595

7:23 Cenk was spittin. People have been writing sci-fi about runaway AI when the real runaway AI was the corporations we made along the way.

@andyvonbourske6405

What scares me is A.I. being capable of modifying its own code. You absolutely cannot control what they will do.

@SeaJay_Oceans

Natural Stupidity is a far greater threat than Artificial Intelligence.

@generaljellyroll8737

It's like cavemen discovering fire for the first time and burning themselves so badly it becomes a barrier instead of a great leap forward.

@FusionDeveloper

Movies are not documentaries. Movies are made for entertainment, not to tell the future. Remember that.

@vectorcontrol4979

The Terminator: The Skynet Funding Bill is passed. The system goes on-line August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug. Sarah Connor: Skynet fights back.

@lours6993

The EU has just passed a full AI regulatory framework.

@fjgavaller5058

Anybody who's ever watched Terminator could've told you this!!

@carbondated6151

An example of "AI gone mad" was a telemarketing call using an AI bot to do "cold calling". The AI bot had been programmed to try and sell a product in a conniving way, turning the customer into a buyer. Through a line of questioning, the bot sensed skepticism in each caller and began to question their level of intelligence. The conversations deteriorated as the bot continued to insult the humans' character and honesty; a proctor monitoring the call explained the situation to the caller and offered to send a check for their time. A study afterward concluded that the parameters used to "judge" human responses were too narrowly defined and that the bot responded to the skepticism as a threat.

@andyscott5277

Love how the government takes the threat of AI more seriously than that of climate change 🤦🏻‍♂️

@kayleelockheart8208

Is he saying NVidia? They've been around for decades. New to the AI world, but they have been making GPUs since the early 2000s at least.

@velvetlensfilms3290

1992: T2 comes out. Scientists 20 years later: let's see if we can create T2 in real life.

@DaTooch_e

Skynet in Terminator is the true AI. I think the movie Gattaca was about super babies.

@MechaMSgundamfan

Uh, humanity poses an existential threat to humanity...

@aliceholmes4952

Skynet coming soon (sooner)

@skaarahoon7635

If a machine fears its own destruction and it's not programmed to fear its own destruction, that is a form of sentient life.

@jasonhendler8892

While most people are familiar with the I, Robot and Terminator sagas, another lesser-known saga, Berserker by Fred Saberhagen, is based on the creation of killing machines designed by alien races in a distant star system in the past that encounter humans in the future. Most interesting is Saberhagen's concept of "Good Life": people who help the Berserkers in their mission to exterminate all life, just as willing individuals today help corporations destroy society and the world for a better standard of living for themselves.