
SOTN2024-08 Intelligent Threats: Understanding AI's Impact on Cybersecurity Policy

State of the Net Conference - February 12, 2024 - Washington, DC

SPEAKERS
Moderator: Heather West - Senior Director, Cybersecurity and Privacy Services, Venable LLP
Austin Carson - Founder & President, SeedAI
Alissa Starzak - Vice President and Global Head of Public Policy, Cloudflare
Grace Abuhamad - Chief of Staff, National Telecommunications and Information Administration
Charley Snyder - Head of Security Policy, Google

REAL TIME TEXT: https://otter.ai/u/TiXKT8Pl-IKMnw0FXTsiSkfH-nI
DOWNLOADS: https://archive.org/details/sotn2024
YOUTUBE PLAYLIST: http://bit.ly/sotn2024vids

The Nation's Premier Internet Policy Conference Series: Annually attracting over 600 attendees, the State of the Net Conference provides unparalleled opportunities to network and engage on key policy issues. It is also the only Internet policy conference with over 50 percent of Congressional staff and government policymakers in attendance, making it the perfect setting to explore important, emerging trends. The State of the Net Conference Series is hosted by the Internet Education Foundation, a 501(c)(3) non-profit organization dedicated to educating the public and policymakers about the potential of a decentralized global Internet to promote communications, commerce and democracy. IEF works closely with leaders on Capitol Hill and in the private sector to host the most important debates in Internet policy. IEF's board of directors is comprised of public interest groups, corporations, and associations representative of the diversity of the Internet community.

https://www.stateofthenet.org/
#SOTN #SOTN2024 #TechPolicy #NetGov


All right, I think we will get started. I'm not sure, given that we're running over, exactly when we're supposed to get started, but we're just going to run with it. So thank you all for coming. My name is Heather West. I am at Venable, and I work with the Alliance for Trust in AI. And I am thrilled to be here today to talk about Intelligent Threats: Understanding AI's Impact on Cybersecurity Policy. I think, you know, our goal is to talk about how AI is impacting cybersecurity policy and practice,
but also what we're seeing out in the world. So there's been a lot of discussion about AI and cybersecurity since the last state of the net. There has been discussion this morning. There's everyone's talking about AI in the hallway. So you know, it is it is where we all need to be and what we're talking about. So today we'll talk about you know, are AI tools, super powering cyber attacks, or AI tools, making it easier to protect people? Can we protect AI from itself? Can we protect it from other
attackers? Can we just replace our security team with AI armies? See, that sounds fun. So we'll dive in with our fantastic panel. I'm really excited to be here today. These are my favorite panels where it's just me and a bunch of friends chatting. And a big thanks to the State of the Net team for putting this together. So today we have Charley Snyder who's the head of security policy at Google. Alissa Starzak is the Vice President and Global Head of Public Policy at Cloudflare. Austin Carson, t
he founder and president of SeedAI, and Grace Abuhamad, the chief of staff at the National Telecommunications and Information Administration. So let's dive in. There's been... that is the page I just looked at. Um, so we've been talking breathlessly about AI for a year. We all kind of know that AI isn't just chatbots. But apparently, it's working against me. And, and we should just dig in. So Alissa, can you start us off by talking a bit about how AI is used in cybersecurity, big question. Thank
s, Heather. And now I can't complain. Thanks, Heather, and thanks for having me today. By the way, I think the fact that Heather's microphone dropped when she was talking about AI armies may tell you something about what's going on in the world. But no. So, you know, I think the reality of where we are with AI is that AI has been around for a long time. The thing that is relatively new, and that came out in the past year, is the sort of democratization of AI with things like large language models. So when you think about that, you know, practically, it's just changed the world a little bit, not because we haven't had concepts of AI and cybersecurity, but because of who has access and what that then means. So, you know, at Cloudflare, if you don't know us, we run a very large global network; we sit in front of something like 20% of the world's websites. That is a ton of traffic that runs through our network on a daily basis, and the only way we can
actually think about managing that is with AI. There's too much data, there are too many sort of mechanisms that have to happen in real time. To do things in a purely automated way, you have to think about applying either machine learning, initially, or now AI to it. So what does that mean in practice? It means you have to have systems that sort of anticipate patterns and can then adapt in real time. And that's really kind of what AI gets used for in cybersecurity on sort of a traditional basis.
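As a rough illustration of the pattern-anticipation idea described here, the sketch below scores incoming request rates against a rolling baseline and flags outliers in real time. It is a deliberately simple statistical stand-in for the machine-learning models a large network would actually use; the class name, window size, and threshold are all invented for the example.

```python
# Toy stand-in for real-time traffic anomaly detection: learn a rolling baseline
# of per-client request rates and flag observations that deviate sharply from it.
# All names and thresholds here are illustrative, not any vendor's actual system.
from collections import deque
import statistics

class RateAnomalyScorer:
    def __init__(self, window: int = 1000, z_threshold: float = 4.0):
        self.history = deque(maxlen=window)   # rolling window of recent rates
        self.z_threshold = z_threshold

    def observe(self, requests_per_minute: float) -> bool:
        """Return True if this observation looks anomalous against the baseline."""
        anomalous = False
        if len(self.history) >= 30:           # wait for a minimal baseline first
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1.0
            anomalous = (requests_per_minute - mean) / stdev > self.z_threshold
        self.history.append(requests_per_minute)
        return anomalous

scorer = RateAnomalyScorer()
for rate in [40, 42, 38, 41, 39] * 10 + [900]:   # steady traffic, then a spike
    if scorer.observe(rate):
        print(f"flagging anomalous rate: {rate} req/min")
```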
Again, the thing that has changed over the course of the past year, since the release of ChatGPT, is this idea that people can use AI themselves. So think about what that means for an enterprise. All of a sudden, right now, every enterprise, and I'm sure we'll talk about this, wants to use AI tools. Well, now you have new supply chain attacks, if you're thinking about it from a cybersecurity angle; you have to worry about the cybersecurity of those tools. You have to worry about your employees who think, hey, this ChatGPT thing is super cool, I can write all my emails with it, and who are all of a sudden putting sort of sensitive information into external applications where they're not actually thinking about where that information goes or where it could end up. So the world has changed, not because AI is necessarily a new thing, but because of who has access to it and what that means from a cybersecurity risk standpoint. Thanks. So, to build on that,
Charley, can you talk a little bit more about what's changing for cybersecurity? Yeah, sure. So, you know, I agree with everything Alissa said. I would say, to build on it a little bit, what we see as new is, you know, these generally capable systems that can accept and spit out natural language; that's one thing that's very new. And putting that very close to the consumer, or the enterprise, through a UI is rather new. And then, increasingly, we're seeing how users start to use that, how users start to drive the technology in various directions. So think of using it almost like an operating system. This is something people are starting to talk about, where a chatbot has plugins to do various things with other systems, maybe over the Internet, maybe within your network. Using it as a development platform, so starting to integrate these capabilities to help write code, help review code, things like that. Obviously, you know,
Alissa touched on some of the risks that come with that, and I have a feeling we're going to dig in deeper there. I would say, for security, on the positive side, at Google, and in the way we think about it for, you know, our customers and the Internet more broadly, you know, we're looking at how we can use these capabilities, these things I mentioned, you know, the natural language interface, integrating it with other systems, in ways to completely transform our approach to secur
ity. And so you can think of security as sort of a lifecycle, there's, there's, you know, detecting threats and vulnerabilities, and there's, there's, you know, discovering vulnerabilities which are, which can be exploitable, then there's writing safer code, there's patching the code that you might that an AI system might find that's vulnerable, then there's kind of, you know, post breach, there's incident response can help responders interact with AI systems to, to help them respond more speedi
ly. You could think of it as a huge lifecycle, and I think we're pretty busy seeing all the different places that these systems can fit in and provide value. So what I'm hearing from the industry side is AI is everywhere; we're just starting to see it more, for the rest of us, a little bit. And I do think that that is fairly transformative. And so, Austin, I know that you're taking a bit of a different tack; your red teaming work has really broadened what we're thinking about as potential risks and security risks for AI. Can you tell us a little bit about that? So I'm gonna just take a hard left on that, for fun. So, I mean, in listening to everybody talk about the application of AI to cybersecurity, and in thinking about the transition from, like, artificial intelligence as, like, deep learning, you know, from, like, the 2012 big deep learning explosion, to generative AI, to ChatGPT, I think you make a very good point that it is closer and closer to the consumer. It's about touching
people. But most importantly, in my view, it's about taking kind of the fuzzy math that humans have retained an advantage on until very recently, and moving that into computation. Right? So we've taken we've taken everything that was just like one plus one equals two. And then we've computed all that we can and now we're in the part where we're computing like probabilities, right? If you look at what cybersecurity is, it's the part of computing that already is in the probabilities, it already is
in fuzziness, it already is in, like, unexplained and inexplicable breakages throughout the entire system. And so if you think about what it means to add that extra probability layer: it's much more effective at stopping things, much more effective at doing things. And the failure mode for most cybersecurity issues is human in nature. It's about the spear phishing email. It's about doing something stupid and downloading the wrong app, right. So that's going to be a huge vulnerability. And that's part
of why we have gone towards this like red teaming, big picture approach. Because the only way you're going to capture this, like, massive level of failures that are implicit across any system now that you have this fuzzy math and human interaction with AI is having huge participation. Yeah. And so from our perspective, driving that is like one of the number one things you can do. Yeah, and we'll talk a little bit more about red teaming later because I think it's so interesting, but But bringing
so many people together to see, oh, what can I break? What doesn't work quite right? You know, that's a traditional approach in cybersecurity, but applied a little bit differently. So, Grace, as the relationship between security and AI evolves, the US government and NTIA are doing a ton of work on this. I know, like, from my perspective, the AI EO is like full employment, and it's a lot of really, really interesting stuff. Can you talk about your work on accountability, and then some of the work on risks and benefits of open, dual-use foundation models, lots of buzzwords that are kind of terrible government buzzwords? Yeah. So there's two big initiatives that we're working on in NTIA. One of them is EO related; one of them existed, or we started it, before the EO. I'll talk about the AI accountability work, because that preceded the EO and is relevant to this conversation in the sense that, when Assistant Secretary Davidson came in, one of the big questions we asked is sort of, like, how can we help move the field? There's a little bit of a link, because we're in the middle of a big broadband deployment across the United States; you might have heard about it. And part of that involves, you know, a large grant program, but we also have to be building toward the moment when, once we build the infrastructure, we're going to have all these people online, and we're thinking about the online space differently as it's evolving over time. And as part of that, we were also thinking, where can we apply that sort of Internet policy work to other areas that are not just broadband infrastructure? AI policy is one of them. Today's the 10-year anniversary of the release of the NIST Cybersecurity Framework; it's actually perfect timing in some ways. But there's been a whole decade or more, you can say longer, probably 15-20 years, of work to build a cybersecurity field and ecosystem, businesses that are created specifically to work on cybersecurity, etc. And you can see that evolving with the AI space as well. So part of what we tried to do when we launched our AI accountability work was think about, what do we want the AI accountability ecosystem to look like? And that includes, what are we going to need in terms of workforce? What are we going to need in terms of audits across sectors? What are we going to need in terms of access to data
or, or not, etc. So that's the report that we started working on a couple of years ago, it's coming out soon. And that sort of the work that we we started doing before the executive order came out. In the executive order there was we were given one specific assignment. And that assignment is to look at the risks and benefits of dual use foundation models with widely accessible model weights. That's, that's a very specific task in the sense that the definition of the dual use foundation model, wh
ich is in the executive order, really applies to very large models. And right now, there's only like a handful of them that we really we can think about in that case. And then the question of model weights, again, focuses really specifically on on a type of risk that or it's or a component that we're looking at evaluating, so we have a request for comment coming out soon on that as well. happily take, you know, comments on all aspects, but the task itself is really, really quite narrow. And we c
an come back to that in a little bit more detail as to why that's the case. But, you know, when we're thinking about the cybersecurity risk to AI, there's a lot more that we can be thinking about in the immediate term than some of the larger scale risks that the EO is trying to address; it's trying to do both the large scale, potential risks and then the sort of immediate ones, and different parts of the government are addressing different pieces. So we have this one task that's quite specific and a little bit more focused on the larger scale risk. Great, thank you. So let's talk about that a little bit more, some of the cyber risks that come with this explosion of AI, and particularly the explosion of widely accessible, really interesting AI. Alissa, Cloudflare, you were saying, sees so much of the Internet, and all of the interactions that are happening. Can you talk about any trends that you can trace to the
use of AI? You know, I think the funny thing is, here we are talking about AI and this sort of new world of AI, and honestly, from a cybersecurity standpoint, the biggest risks I think we all face are the sort of pre-existing risks, the fact that people don't do sort of basic cybersecurity. The thing that AI does is actually power that: it makes vulnerabilities easier to find. It makes something like a phishing email a lot better, so now you don't see the errors in it. I think the biggest thing that we're seeing right now is not the really sophisticated, you know, change in model weights that might sort of influence something long term. It's very much the sort of basics, right; we have to fix the basics. And AI sort of supercharges everything that we end up worrying about, because it makes it easier to find the problems. Got it. Charley, similar question. I'm interested in what you're actually seeing the risk being. I think, Alissa, it makes sense to me that all of the things that were vulnerabilities before are still vulnerabilities. What is Google seeing in terms of malicious AI and attacking AI? I think, at a high level, what we've seen so far is, you know, it has the potential to lower the barrier to entry for less skilled adversaries, people who want to do harms online. I think we're very interested in seeing how it can be adapted in
the future for more skilled adversaries as well. But at a high level, you know, in information operations we are seeing organizations start to make use of LLMs to generate, essentially, you know, inauthentic content and propaganda and the like. I think for the most part it has been quite ineffective; we don't see it having a huge impact. But that's obviously one area where LLMs are well suited: generating content in a specific language that may be more persuasive than, you know, someone who's not a native speaker of that language would otherwise be. In terms of hacking and intrusion operations, it's really quite limited. We see it used in kind of the first stage of an operation, so generally the social engineering, which again relies on kind of natural language for persuasion, things like fueling phishing attacks. I think, you know, Alissa made my point for me. I think that at Google we consider, you know, password phishing, for instance, largely a solved pro
blem. For organizations that are serious about it, it's, it is not a problem organizations should be having in 2024, let alone 2023, or 2022. And for consumer services, you know, multi factor authentication is largely available for free as well. And so I think it's important for us to be like really crisp on the details of are these presenting kind of new threats, or just exploiting old things that we still haven't fixed? I think for the most part, it's in that second category, what we're really
interested in is, you know, kind of an open call to the community, to governments, to other companies: the degree to which adversaries are using it for other parts of their operations, you know, malware development, general reconnaissance, things like that. You know, the social engineering part is the most visible, and again, I don't think we've seen it being, you know, all that much of a game changer. Yeah, if I can make one suggestion: what I find in talking, especially to, like, security researchers, and just kind of observing, is that the reason that it's like that is because we have such an insane breadth of old problems, right, like a crazy amount of old problems that we just didn't touch. It's kind of like content on the Internet. It's just a burning pile of trash. And we're slapping LLMs on top of it in hopes that it makes it less trash. And then, like, not training on the Internet after 2021, because everything after GPT-3 happened is trash, you know? And so we're trying to, like, repair it; I mean, have you checked the Internet since 2021? I think at a fundamental level, what scares me is that you see, like, really normal spear phishing things over and over again. And then you see a website with, like, four zero days strung together that remotely takes over everyone's iPhone, that's associated with the Uyghurs, you know. And I really am concerned that, pretty much across the board in all of these areas of software, and maybe like soci
ety too, you know, software that we haven't addressed immediately, we have, like, lurking massive vulnerabilities that we're all just kind of chillin' on. So we're waiting for everybody to do multifactor authentication, you know. So I don't know exactly what I'm recommending in terms of a policy prescription, but it is kind of why I ended up with this, like, mass public participation, because we need, like, a randomizer. Honestly, like, if you don't care about people, care about the randomizer. And we don't have enough compute for simulating it out right now, right. But in all seriousness, I kind of encourage everybody that talks about this to at least point out the fact that we ultimately do have this, like, lurking super capability for hacking, even just at the most basic levels, even at the simplest level, you know. Grace, I know NTIA isn't operating Internet infrastructure, but you're kind of the tech policy hub for the US government. I'm kind of interested what you're hearing, especially as you get ready to release this RFC, and talking about accountability, which feels more bread and butter, if we're going to just double down on the pieces that we already know how to do and that we already know are issues. What kind of concerns are people bringing up for you? The concerns aren't that different than the ones that we've heard about over many years, right? People are still
concerned about privacy. And whether or not they're going to have some sort of consumer empowerment over their data or protections around their data. And its use. We're still hearing a lot about, of course about competition and access and what that means in this space, what it means to I mean, at the bigger, bigger philosophical level, like what democratization really means, and for whom we Focus on the US primarily. But we do have a lot of international cooperation that we have to build out and
think about in this context. And I was at Silicon Flatirons, a few weeks ago, or listen, last week, I can't remember. And one of the panelists talked about, you know, how these systems are being deployed in Southeast Asia and African, there isn't really enough of an ecosystem there to even think about sort of one, whether people will be able to really use these tools for all the wonderful benefits that we talked about here, but then also have any sense of control or protection over the informat
ion? So big questions like that, right. But then, practically, even just for us, you know, what does it mean for our ecosystem in the US, for competition with companies in the United States, for how we're going to develop sort of a strong research community and attract researchers from around the world to work on AI systems here? And then, more recently than I think in the past, more than on other NTIA issues that we've worked on, we've heard a lot more about harassment, a
nd questions about gender harassment, apparent, you know, I read a report that something like 75% of women in the world now consider, consider how much online hate they're gonna get before running for office, or considering any kind of public facing role. And so as much as we're trying to encourage people to be more active in public life, or more active in all kinds of things, we also don't really realize some of the impacts that people are facing online. Same thing with, you know, in the past c
ouple of months, Arab American hatred that's sort of ramping up, and rising antisemitism, again, huge problems. So big questions like that. We have an initiative right now that we're working on with kids' online health and safety, where we're co-leading a task force with HHS. And that's, you know, a whole different stream of work, but again, the AI impacts are there too: a lot of young children, teenagers using systems, and, as they're developing, not really sort of conscious yet of how much some of these algorithms are affecting their mental health and well-being and development. So some of these issues are not new or not unique to AI, but they're very much at the center of some of the work that we're doing on tech policy at NTIA. I think that's a resounding round of: it's the same issues that we've been talking about here for years, but faster, bigger, more accessible, and it also writes poems about my dog. Does it make you miss the Nigerian prince? No, no, I would actually say that this is like the hyper version of the Nigerian prince, because the Nigerian prince is a self-selection device. It's written really crappy because it's supposed to identify people dumb enough to fall for it. Now you have eight tiers of Nigerian prince, tuned towards, like, everything from the bottom tier up to Prince Charming. That's what I'm saying. You're like a very realistic, like, Facebook avatar and Instagr
ams dreams and all this stuff. Like, we're falling into hyper version of what we were already doing before. And I think to this idea of like, the kids don't know how much they're being manipulated, y'all we know way less than the kids. We were raised on this, like, absolutely bullshit idea that we were immaculate and like and manipulatable, like, individually responsible entities, right. And so we sit around on the Internet all day as like adults acting like we know how to filter information. Bu
t that is a hilarious lie. And so I think if anybody actually needs to be like, humbled about what we know, it's kind of us. You know what I mean? Like the kids are alright, guys, we're kind of fucked. Yeah. There's some truth to that, I think. I think it's actually I don't I don't know who else watched the Super Bowl last night. But the number of times, the group of folks in my living room pointed at the TV and said, is that AI? Like it was non trivial. There was at least 10 times and you're li
ke, there SpongeBob sitting up there. And the commentator, booth and someone's like, is that AI? And they're like, No, SpongeBob is really the what? So, so we're talking about, like, kind of the history of some of these discussions before AI was the buzzword. We're talking about how this this kind of harkens back to all of the problems that we've thought about deeply. One of the things that this makes me think of is, is the discussions about open source, and how democratization of tools and secu
rity really has been a part of this discussion for decades. And I'm a big open source fan. But the AI piece of this puzzle kind of changes the discussion a little bit. How are you all thinking about defending and something that's so widely available and powerful and turbocharged, open question. Start My RFC for me. Yeah, I mean, my main thing is I like how Grace downplay that she has like the most important tasks like we have this like little thing about like dual use foundation models that have
publicly available model weights. I was like, Oh, you mean, whether or not the foundational technology we have is available for people to play with? Yeah, it's a small, but I mean, like whether or not the model weights are available is pretty critical from every level, right? Like there is no aspect of this technology that is not fairly equivalently dual use. And what that means is it cuts exactly as positively as it does negatively. It's exactly as helpful as it could be harmful, right. And so
like, I think encryption is kind of the best analog that most of us know and deal with, right? Once NTIA and everybody else was done playing with encryption, people, like, lived and died over encryption effectively, like Bruce Schneier was like, I can solve this, I can ship this book with printed-out encryption overseas. So there's kind of this hilarious thing where we think we're going to stop open source distribution of, like, model weights for, like, research, en masse, without there being li
ke a really good widely agreed upon idea. But I don't know, Bruce and I are gonna print those out and mail them across the ocean. So it's like, we still have to cope with the fact that software is inherently free, right. And that means we have to create like ecosystems and kind of movements and networks and platforms that are designed to support like a more positive overall movement of the ecosystem. But also, in general, like increasing kind of like the validation of the value proposition of op
en source, you know. The value prop of open source is that it makes things safer and more secure, and the whole community can work on it, and you can fork it off into, like, a better version that can become standard. So I think, like, if having model weights on the Internet makes that happen, we'll probably not really have as much of a problem with model weights being available, because we will continuously demonstrate that it's not, like, cyber Pearl Harbor; we'll always talk about how the model weights being there will destroy the world, but they'll never destroy the world, right? If somehow we screw that up in a way I don't understand or don't anticipate, then we'll have cyber Pearl Harbor, like, oh, damn, we knew it was going to happen eventually. So I think Grace has, like, one of the hardest tasks, like, do we take this technology that half the people in the world pretend is God, and, like, put it on the Internet so that you don't have to guess what it does? Or do we not put it there, so we're just guessing w
hat it does from now on? I think it might be worth before we start talking about these, the likelihood, the probability of the cyber Pearl Harbor, which we can ask AI about will have its own version, but it's worth sort of stepping back and thinking what AI is and does and what the different pieces are of AI because you have, you know, a lot of the emphasis, the public discussion has been around training models, right? So how much compute power? Do you need to train a model? Um, you know, there
aren't that many entities with that much computing power; what does that look like, a small number of, you know, a very small number of companies that actually have that kind of computing power. But if you think about sort of what the stages are of AI, you have the training of the model, but then you also have the questions of deployment and then building on it. So, and the open source piece, the weights of the model, right, what goes out there. We don't train at Cloudflare, we
don't train our own models for things that are sort of generally available. But we do what's called AI inference. So the, the notion that you can put a model on our edge and do something that's close to an end user, which also requires a lot of actually networking power, right? Because you need if you want to do an AI decision that's close to a user, you need a big network to do that, typically, or you need to do it on a device. And so there are all these questions that come up about how you wo
uld actually go about democratizing AI, what it means for software to be open, right? It's not, they're not just it's not just a given the way software might be, it can't run on everything, it has to train on something bigger, it has to run on a bigger network. And so we've been trying to do a lot of thinking about what that means. Because there are all sorts of different stages that you could potentially regulate. Or that you could potentially sort of think about risks on. Do you think about th
e risks on the training? Do you think about the risks on the deployment? When you get to a model, do you think about the model weights? And so really kind of being thoughtful about what that looks like, and thinking about deployment, I think matters and will matter in the ultimate outcome, which hopefully is not cyber Pearl Harbor. Let me clarify something: the cyber Pearl Harbor thing is mostly a riff on, like, a lot of people say that open source model weights being online is dangerous, because, like, bioterrorism and all these things. It's like, you can logically extrapolate out how that could be the case; we can also logically extrapolate out how Russia should have turned off our power grid, like, a decade ago. You know what I mean? And I think that's where we're coming from. But just to clarify, like, open source model weights are available on the Internet right now. GPT-4 and ChatGPT live behind an API wall. There's a bunch of magic that happens behind there that you don't see, and who knows exact
ly what it is. But it seems like you're putting a query into one model and it spits it out. But there are, like, the open source versions of this, which are like, you may have heard somebody talk about, like, Mistral, or you may have heard of course of Llama 2, which is Facebook's model, which, like, proliferated out into the universe and made open source language models effectively exist. I'm sorry if you're here. And, like, the ability to see them means that you can calculate the math, you can understand much more of this black box, this explainability; you can at least do the math version of it. If you can't, like, reverse engineer what the math means into words, you can at least math it, right. So, like, one key experiment that shows you why this matters, right, is that, I think it was Stanford, probably, God knows, but they were using, like, Llama 2, and they were using OpenAI, and they did like 10,000 or 20,000 conversations. So like, asked a thing and got a response. And, you know, the study was like, well, we could look at how the input went through the model weights, how it, like, lights up, the little thing, plink, goes in here; we want to visualize it. And we could see that its internal state was actually what it said it was doing; it wasn't lying to us or making stuff up. With GPT-4, I mean, we kind of have to guess; seems like it's not lying to us. So there's this thing where, like, if you're concerned about what AI is actually doing, on one hand, you
want to be able to see the model weights for yourself, so you can kind of know what it's doing. On the other hand, if you can see how the engine runs, you can, like, make a more powerful engine, so people are trying to, like, kind of hide how the engine works; you can decide if that's good or bad. But the one point I'd make on open source is there's really just the need for a holistic approach to governance here. Policy approaches to closed source are directly linked to what we're going to see in open source; policy approaches to regulating models above a certain compute threshold are going to have direct impact on what folks are doing to develop models below that threshold. And so, you know, for me, it's very important, as we're, you know, I think the horse has kind of left the barn a little sooner on, you know, we think compute above this threshold requires X, Y, and Z; I think that has probably directly led to folks investing more in developing models
below that threshold. And I think ditto with access to open source. If, for instance, let's just take the positive view, if organizations cannot easily adopt, you know, closed, proprietary models for any particular reason, they're going to invest more in open source, and that's where we're going to see experimentation. So when you look at attackers doing that, you have to look at, you know, attacker access to proprietary models, and so that, you know, gets into things like know-your-customer rules and things like that; that's going to directly drive the market for experimentation with open source models. And I think it's just important that we keep that in mind and have that kind of balanced approach. And just to add a few points to what was already said: I mean, we joke about it here, but we can't take the risk of a cyber Pearl Harbor lightly in government, right. But the other piece here is that you mentioned Mistral, for
example, right? The executive order defines, in that definition of dual use foundation model, sort of three key pieces. It's a longer definition, but there are three key pieces. One of them is that the model has to have more than tens of billions of parameters, so they're very large in that definition, right? They have to be applicable across a wide range of contexts, so they're not, like, you know, a facial recognition system wouldn't be applicable here, right? Mistral wouldn't be applicable, because Mistral is only 7 billion parameters. And then the third piece is that the models would be posing a serious risk to national security or public health, some combination of sort of a large risk that could be anticipated, where there's information to indicate that there could be.
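Stated compactly, the three definitional pieces she lists can be read as a simple conjunctive filter, as in the sketch below. The field names and the exact parameter cutoff are illustrative shorthand, not the EO's legal text.

```python
# Illustrative restatement of the three definitional pieces of a "dual-use
# foundation model" under the EO; field names and cutoff are shorthand only.
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    parameters: int
    broadly_applicable: bool       # usable across a wide range of contexts
    serious_risk_indicated: bool   # e.g., national security / public health risk

def in_scope(m: Model, min_params: int = 10_000_000_000) -> bool:
    """All three pieces must hold for the model to fall under the definition."""
    return m.parameters >= min_params and m.broadly_applicable and m.serious_risk_indicated

# Her example: a 7-billion-parameter model like Mistral fails the size test alone.
print(in_scope(Model("mistral-7b", 7_000_000_000, True, False)))   # False
```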
So in this case, right, there's a lot of sort of nervousness, and understandably so, about what it means to have access or not to model weights. But in this particular task, we're really looking at a very specific set of types of models, right, a very specific type of model. And that does create the incentives that you were talking about, Charley, where now you see there's a ton of movement in the space to build smaller models, models that use less compute, because of access-to-compute issues, right, and because of the sort of indication, or the theory, behind how anyone would read or interpret
the actions in the EO. You think, in part, I don't want to give NTIA too much credit, but maybe the reason why NTIA was assigned this particular task is because, you know, within the US government, we have, across sort of tech policy history here, a reputation of really thinking hard about what it means to make sure that there's open access to systems, open source software, etc. We released a report last year on mobile app ecosystem competition. That wasn't a very popular report
with two particular companies, but it was an important question to ask, and I think a lot of people hadn't really thought about that question, about whether or not we needed to have more than two app stores for mobile app technology. So, you know, all the options are on the table; we're looking at everything. We're hoping that this RFC will yield lots of plentiful comments, and we're taking the report seriously; it's due in July. So, you know, the sooner we get the RFC out
and the comments in, the faster we can get that report out. I will say it's been very impressive to watch people actually hit the deadlines; the EO is moving fast. Oh, yeah. If anyone saw, they came out with a 90-day update last week, I think it was January 26. One of the tasks in there was for the nine critical infrastructure agencies to do risk assessments of the risks posed by AI systems, and those were all complete.
So that's really sort of the beginning of understanding, within the federal government, what we think the different risks of these systems are and how we're going to be thinking about them. Back to you. Thank you. So, to shift a little bit here, Charley, I know Google spends a lot of time thinking about how to secure Google, but you're also spending a lot of time thinking about developers and customers that are building their own products, services, infrastructure. Can
AI tools help the ecosystem? Very nice question. Thank you for that. I mean, it's mildly leading. We're long term optimist about this technology. And, you know, I think when we look at, you know, kind of adversarial dynamics online, I think we hope, AI will, at worst, have a neutral impact, and at best to have a positive impact for people. And I think there's kind of two main kind of drivers where we think that could be the case. You know, one, I think, to a point that Austin made earlier, so ma
ny of the breaches, we see, in fact, I would argue probably every single breach that we we've ever seen, comes down to humans inability to deal with complexity, I think, the online ecosystem, both, you know, the amount of network services and products, as well as the complexity of software itself has just gotten too complex to handle. So whether that's for a developer or sysadmin, or you know, a user just trying to manage their exploding inbox, basically, every breach in the world comes down to
that, and AI's ability to, you know, reason and learn rapidly and at scale offers a lot of potential to address that kind of root cause of so many breaches. And then the other, related to that, is, you know, we've started talking a lot, and I think we need more research to play this out, about AI being the great equalizer, and, you know, kind of emerging research that LLMs can help, you know, skilled professionals a little bit, but they help unskilled professionals a lot. And
when I look at like attackers versus defenders online, you know, I think attackers by their nature maintain at least a tiny bit of capability and intent, even if we would call them very low skilled, there's plenty of organizations that don't have a single IT professional don't have a single cybersecurity professional. And I think if we can incorporate this technology in a smart way, which is not to say like everyone should buy, like a new, you know, our new fangled AI product, but embed it in k
ind of the ecosystem through the widely used platforms and services and systems everybody uses, I think it could have this great leveling ability and take care of this kind of low hanging fruit. So, you know, of course, you know, Google, we have both, you know, consumer business as well as enterprises. And we're absolutely, you know, starting to see how AI can help these organizations both make making it into the consumer platforms to help them and that's kind of something we've been doing for f
or a very long time and in places like Gmail, but then also offering kind of technology to organizations, whether that's to, you know, help them manage threats, better write more secure code, and what have you. And the last thing I would say on you know, a reason I'm an optimist here is because I think attackers have a lot of advantages online right now. One advantage I do think the defensive community has is data. I think they've always had it. So you know, the cybersecurity companies, big tech
companies, they've always had kind of access to better data sets than attackers; the problem has been dealing with and managing that data. And now that we have this ability to turn data into models, into software, to help organizations, as long as we can keep developing that technology to aid organizations for defensive purposes, I think that can be really beneficial for the ecosystem. Alissa, how are you thinking about protecting your own AI systems? Yeah, you know, I think this gets into a little bit of the open source piece. You know, we use some of our own systems for AI for protection of others. So if you think about what we do, we offer essentially a set of cybersecurity services that sit on top of a lot of entities' infrastructure. So it might be you going into your internal networks, it might be going to your website, a lot of things that sort of sit on top. But the things that we can use AI for: we can actually start looking for patterns of exploits, essentially. So imagine, now you have a vulnerability, everybody knows about it, fine. Everyone goes at the vulnerability the same way; that's really easy. That's not AI, right, that's just a basic rule; you know what you're doing. Now think about that same set of mechanisms of going in, and you start seeing it in an area where there's an unknown vulnerability. That has a way of teaching: if a system can identify those patterns, it can actually find a new vulnerability, potentially, that
you might not have even known could be exploited. And that's a world where AI has a huge potential long-term benefit, where you can actually block something that is a vulnerability before you even knew it was a vulnerability. And that's huge, right? Or think about, you know, phishing emails, business email compromise, right, that sort of concept. You can do the same thing: we're talking about improvements to a phishing email, but there are the AI systems going the other way, where you're not just looking for, you know, sort of very well-known phishing kits, but you're looking for the probability that this actually might be malicious. And, you know, going to Austin's point about what AI ultimately is, and saying, hey, if it's more than a certain probability, there's going to be additional checks on it. Those are huge potential developments that have both the consumer benefits and the enterprise benefits in the long run.
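A minimal sketch of the "above a certain probability, add additional checks" idea: the scoring function below is a crude hand-written stand-in for whatever trained classifier a real mail pipeline would use, and every name, weight, and threshold is invented for illustration.

```python
# Toy illustration of probability-threshold routing for suspicious email.
# score_email() is a hand-rolled stand-in for a trained classifier.
from dataclasses import dataclass

@dataclass
class Email:
    sender_domain: str
    urls: list
    body: str

SUSPICIOUS_WORDS = {"urgent", "wire transfer", "gift cards", "password"}

def score_email(email: Email) -> float:
    """Return a pseudo-probability (0..1) that the message is malicious."""
    score = 0.0
    if any(w in email.body.lower() for w in SUSPICIOUS_WORDS):
        score += 0.4
    if any(not u.startswith("https://") for u in email.urls):
        score += 0.3
    if email.sender_domain.endswith((".zip", ".top")):
        score += 0.3
    return min(score, 1.0)

def handle(email: Email, threshold: float = 0.5) -> str:
    # Above the threshold we don't block outright; we route to extra checks.
    return "quarantine-for-review" if score_email(email) >= threshold else "deliver"

print(handle(Email("promo.zip", ["http://pay-now.example"], "URGENT wire transfer needed")))
```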
And to Charley's point, I think they are democratizing, right? They're easily available to everyone, potentially. You could have a small business that doesn't have an IT staff, or somebody who runs their own IT themselves, that can actually get access to them. So there are some huge long-term cybersecurity benefits, I think, for AI, but we have to figure out how we harness them well. The other thing I would add, just on the sort of long-term benefits: you know, we often talk about sort of insecure code and vulnerabilities. The reality is, having an AI system help you write more secure code doesn't mean it's writing it and you trust it wholesale, but the ability of it to do the first cut at it is huge. You know, we were talking to sort of very senior cybersecurity researchers; they're like, we're not going to have this problem, we're not going to have insecure code in a few years, because AI is going to identify and help us cure those vulnerabilities. And again, those are just long-term potential
benefits, I think, that we see. That's not quite the same as protecting our own, but, no, yeah, yeah. So there's two things that I think will be really interesting: like, the classic cat and mouse game, you know, like, they find a vulnerability, we fix a vulnerability, we anticipate one and we fix it, is going to change into kind of like a different two-sided thing of, like, who can think of the wackier thing to have AI do, and who can think of the way for AI to, like, fuzz through
all your existing code and fix everything. I know Google just released a report about going through and repairing, like, 17% of a part of your codebase using a fuzzer. Like those black swan events we were kind of talking about, or, like, what if people finally exploit all of our vulnerabilities at scale? This is our opportunity to jump in and use these tools to do what, to your point, humans could never, ever do, right?
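A deliberately tiny flavor of the fuzzing idea being referenced: throw lots of random inputs at a function and keep the ones that crash it. Real efforts are coverage-guided and, in the report he mentions, paired with suggested patches; the target function and numbers here are made up.

```python
# Naive random fuzzing sketch: hammer a toy parser with junk input and record
# the inputs that crash it. Real-world fuzzing is coverage-guided; this is
# only meant to show the shape of the idea.
import random
import string

def parse_record(line: str) -> dict:
    """Toy target with lurking bugs: assumes 'key=value' with a numeric value."""
    key, value = line.split("=", 1)    # crashes when there is no '='
    return {key: int(value)}           # crashes on non-numeric values

def fuzz(target, trials: int = 10_000) -> list:
    crashers = []
    for _ in range(trials):
        candidate = "".join(random.choices(string.printable, k=random.randint(0, 20)))
        try:
            target(candidate)
        except Exception:
            crashers.append(candidate)  # save inputs that triggered a bug
    return crashers

print(f"{len(fuzz(parse_record))} crashing inputs out of 10,000 random tries")
```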
And then start having your cybersecurity professionals, who are going through and, like, fixing your spear phishing email thing if you're at a regular, normal corporation, or, like, you know, fixing your massive backend, the most sophisticated ML model ever, for spam filtering at Google, right. And instead, there's also some really creative people who are like, okay, well, the next spear phishing email is going to be something from your mom about your sister's birthday and something about your dog, and they're going to try to extract all the information that's normally used to form your passwords, and
then cobble them together, you know. I mean, there are definitely more creative, interesting people than me out here. Phishing Mad Libs, yeah, you know, like, whatever. It's just like there's kind of, like, a wacky job that should exist in the world now, called, like, the trying-a-bunch-of-stuff guy, you know. Or it's just like, every company has a kind of wacky mad scientist arm, like Google X, but just kind of like, I don't know, what if we just made it do this, and they just try it out. And that is probably going to happen soon. Or maybe we just have you; that's the only one we need like that. Oh no, I just made a terrible... It is going to be super interesting, by the way, as we start to see, like, multimodal attacks, where you're combining, like, text and audio and image and things like that in ways that I think AI systems might expect, but I don't think humans would. That's the fun of being a prompt engineer right now, right? I mean, that's basically
that job in a lot of ways. I mean, you're testing the model for any kind of, you know, see what I can do. Prompting is a small piece of it. That's true, but I think that's one piece of the Rube Goldberg machine. Yeah. All right, I want to do one more question for everybody, and then we'll open the room to Q&A. My suspicion is there's lots of good questions in this room. And this is a big one, and it also might count as doing Grace's homework. What do we not know? What do we know we don't know? What do we need more information about to do this well, and to think about AI and cybersecurity in a helpful, productive way that really helps us protect ourselves, protect the AI, and use AI for all of the purposes that we really are excited about, while we minimize the risk? You know, like everything we've never been able to use computers on before now, all of that. Like, honestly, if you want to ask what we don't know, it's like an absurd, unthinkable,
like, eldritch horror amount, to be honest. It's kind of crazy that we're all just going with that as the fact. Actually, the NIST AI Risk Management Framework original panels still kind of haunt me, because they had this validation panel where they have, like, Kathy Baxter and some other people, and they're just like, oh, we can validate this thing and that thing, and then the moderator said, what about large language models, or things like those? And all of them laughed, and were just like, of course not, that would be crazy to say that we can validate one of these. And then everybody laughs and we all move on, and then, like, replace computers with these. You know, it's not even that; it's that we replace computers with these operating what used to be, you know, a whole system that we don't understand at all. So I think there's something where, like, we have really unlocked, like, very exciting technology in a very powerful new age of computation and,
like, human extension of the world, but at the same time, we have opened infinite, unexplored space, right. And there's, like, a very deep need and opportunity in that, right; like, it's not just a scary thing. If anything, it's kind of like the grand new age of exploration or something, you know, if we can actually view it that way. Otherwise, it's horrific, we don't know. But it's like, this is now a reason for everybody to have a job figuring out what happens when we open up fuzzy math
in the world for computers. Fuzzy math is fun. So I'll answer one that, like, I'm not sure exactly answers your question, but something that I think has potential, though, like, we don't know if we could ever get there, which is: could AI systems help drive towards formal methods for software? Providing mathematical proofs that a piece of software is secure would just be, like, in a millisecond, completely transformative to software security. Right now, I don't think LLMs are
well suited to that. But could future, other forms of AI technology get us there? I think it would be really exciting, really cool. That'd be amazing.
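For a flavor of what "mathematical proofs about software" can look like today, the sketch below uses an off-the-shelf SMT solver to prove a small property, that a bounds check rules out integer wraparound, for every possible 32-bit input rather than testing a sample of them. The property and code are invented for illustration and assume the z3-solver Python package, not any particular vendor's tooling.

```python
# Tiny formal-methods flavor: use the Z3 SMT solver to prove a property of a
# bounds check for *all* 32-bit inputs, instead of testing some of them.
# Assumes the `z3-solver` package (pip install z3-solver).
from z3 import BitVec, ULT, Implies, prove

x = BitVec("x", 32)   # an arbitrary 32-bit unsigned value
LIMIT = 1000

# Claim: if the check `x < LIMIT` passes, then computing `x + 1` cannot wrap
# around to zero. prove() reports "proved" or gives a counterexample.
prove(Implies(ULT(x, LIMIT), ULT(x + 1, LIMIT + 1)))
```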
I think I'm gonna go a different direction. I think Grace sort of touched on this: I think there's a personnel issue that we have to think about a little bit, too. You know, engineers think in terms of math, right? We're now talking in terms of probabilities, you know, non-deterministic, which means you don't know what the outcome is. That's not how an engineer normally thinks. Thinking about the question of risks actually requires an entirely different way of thinking when you get into AI models, and I'm not sure we've quite figured out how to actually do the education around that, or how to sort of teach people how to work with AI models. So I think, practically, there's a lot of work to be done in making sure people actually understand what they're talking about and understand how to work with an AI model. Great.
I was just gonna say thank you. I mean, I spent most of my life doing other people's homework, so I just did it. This is great. Thank you. Well, and you have a very formal mechanism to tell us what you want to know. So, all right, I know that we have a microphone. Anyone have any questions for the panel? I see one. Thanks, and thanks for a very interesting panel. So the question I have is around the 2023 Biden National Cybersecurity Strategy and the emphasis that is very important there on cybersecurity software liability, and I'm just curious for the panel's thoughts, in whichever direction you want to take it, on where that's going, how it's going to intersect with, you know, sort of the problems we've been talking about and the potential to patch at scale, and particularly how it could provide a floor for thinking about guardrails in the open source space particularly. I'm happy to jump in. I mean, I think we've been kind of dancing around this for a while. I think something th
at's been a little frustrating for me to watch is someone who's been in the security community a long time in the AI community less so is we've gotten to this point in kind of general software in general security, that there is this recognition that, you know, the vendors of widely use software and products need to be responsible, and it's not, you know, a fait accompli, that breaches need to happen, like there's very well known, you know, software engineering practices that can can actually sto
p most breaches. And, you know, we need to move to a place where, when that's not happening, these vendors should be called out. And what's been hard to watch is, during the AI explosion, we've kind of, you know... not that the kind of longer-tail, existential-risk, far-out stuff, not that we don't need to worry about that, but I think the balance has just, you know, swung so hard in that direction that we've kind of forgotten about the basi
cs that, you know, models seem really cool, but it's really just software and models is one part of a very big stack, usually. And if attackers want to do harm, they can target any parts of that stack, and are going to choose, you know, the shortest path to reach their objective. And so, as we're building out this, you know, AI industry, and there's such a kind of an explosion there, I think it is really important that we're not just perpetuating the current dynamics into the future. But like, w
e've learned how to do this, we've gotten before AI, we actually have gotten to this kind of consensus that like, we need to do better. And here's how we can do better. Just because we're going to market doesn't mean we should forget those things like that should be table stakes, that everyone's doing that before we put products to market. You know, in terms of how that, you know, liability debate, the national cyber strategy is gonna evolve, you know, I think, who's to say, but I do hope that t
hat kind of central insight that, you know, when there's breaches, we shouldn't always just immediately blame the end user; we should look at actually the upstream software. You know, I hope that that perpetuates into the AI era. And I'll tag onto that with one thing. When we did the DEF CON generative AI red team, it became incredibly apparent to me how important and useful it is for the hacker community and the security research community, and to a lesser extent the normal cybersecurity community, but definitely the first two, to team up with AI research and AI implementation, because the mindset of this, like, kind of inevitable, unknowable failure, and then, like, a real process designed to approach and understand and defend against that, is going to be so important. Because people are going to be stuck in this mindset of, like, well, how do we stop all the bad stuff from happening? It's like, bad news: it's all the Internet, it's software, obviously you can't. Well, I mean, we've been talking
about how democratized it is, and the non deterministic nature and all of these, all of these aspects make this even more challenging than some of the other discussions around liability. And I do think it's, it's an ongoing discussion, I am getting the signal that we are going to wrap up. So thank you all. I suspect that we will all be around and happy to chat. And I hope you have a lovely rest of your day. Thank you.
