
Securing your charity in the age of AI

Much has been said about the power of Artificial Intelligence (AI). In the charity sector, the ability to automate tasks, forecast the future, and generate content can revolutionise how we operate, allowing us to dedicate more time to delivering impact to the communities we serve. But, as with any emerging technology, AI poses risks as well as rewards. It has raised concerns over the privacy and security of the data fed into its systems, and with new tools emerging every day, it can be difficult to fully assess the risks and manage them accordingly.

In this panel discussion, we bring together experts from across the charity and cyber security sectors to address some key challenges facing charities today, exploring what exactly AI is and what it means for charities. What do charities need to understand in order to utilise or make decisions about AI effectively? What is the impact on cyber security with the rise of AI, and how can we address these challenges? Our panellists will answer these questions and more.

Attendees can expect to learn:

- What exactly is AI?
- Factors their charity needs to consider around AI
- How their organisation can remain cyber secure with the rise of AI

Charity Digital


Hi everybody, and welcome to today's webinar on securing your charity in the age of AI. My name is Lisa Shet, I'm the Head of Partnerships at Charity Digital, and it's my pleasure to welcome you all to today's discussion. While we wait for others to join, feel free to introduce yourselves in the chat box and let us know where you're joining us from. In the meantime, I'll give a quick introduction to who we are, and a few house rules, before welcoming our guests.

For those of you who may be joining us for the first time: we at Charity Digital are ourselves a UK charity, and our mission is to help other UK-based nonprofits increase their impact by being more digital. We do that through two main activities. The first is providing access to the software and hardware charities need at heavily discounted rates, including essential antivirus software. The second is publishing content, whether that's articles, podcasts or videos, as well as running webinars and events to educate charity professionals and volunteers on how to make better use of digital tools and technology.

In this session we will welcome our partners from the National Cyber Security Centre (NCSC), as well as guest speakers from Nominet UK and the National Trust. Before we get started, I just want to share a few house rules and pointers for today's session. The session is being recorded and we will upload it to watch on demand within a week; the slides and other resources will also be made available to you by the end of the week, so don't worry if you miss something. Closed captions are available for today's session: to enable these, go to the bottom of the screen where the little icons are and click the Show Captions button. During the webinar, please ask loads of questions in the Q&A section of Zoom for myself and our guests to address at the end of the presentation, and feel free to share any other comments, experiences and top tips
in the chat section, because we love to hear from you, especially on this very important topic: we would really like to hear what the sector is doing and trialling in terms of AI. If there's a particular question in the Q&A section that you'd like answered, you can upvote it, but in any case we'll do our very best to answer all questions, and if not we will take them offline. Finally, if you encounter any sound or image issues, please let us know in the chat; we've got a lovely colleague in the chat ready to help, and we'll do our very best to get the session back to normal as soon as we can.

Now, without further ado, let's get started. I am joined by a very esteemed panel of guests today. I'm joined by David Seep, who is the NCSC's Technical Director for Platform Research. David will keep his camera off, but he will be very glad to answer loads of questions and be part of today's discussion. Hi David.

Hi, thank you for having me.

Lovely, welcome. I'm also joined by Steve Forbes, who is Cyber Security and Product Leader at Nominet, and Henrik K, who is Head of IT Operations, Cyber Security and Infrastructure at the National Trust. Hi all, and welcome.

Good afternoon.

Without further ado, let's get started. Today we'll look at how the sector can address some of the key challenges facing it, exploring what exactly AI is and what it means for charities. Obviously, the first question is: what is AI? And I believe David can help us get started with that question.

Hi, yeah, thank you, and thanks again for having me, all. The NCSC regards AI as basically computers doing anything that a human usually would. You might see a joke online that AI can just be a series of if statements and, arguably, yes, that can meet the definition of AI. We also think that things like machine learning are a subset of AI: one thing
a human does is learn, and so when a machine learns, that's machine learning, and we take it to be a subset of AI. Something we think is really interesting, though, is that if you look back even only 18 months, AI was really only being developed and created by small teams: you might have a data science team or an analysis team creating AI models, and those models would then just get used as part of analyses or products for the rest of the organisation. So maybe five people in an organisation might be doing AI. What's changed now with generative AI, which we think is sometimes worth calling out separately, is that suddenly perhaps your entire organisation is asking different things of AI, trying to get it to create images or to solve their problems. So one of the interesting challenges for us, from a socio-technical point of view, is that generative AI means you've gone from the people who were probably the main experts creating AI models for other people to use, to maybe your entire organisation, or a good chunk of it, doing its own things with it, getting it to solve problems and so on.

Absolutely, well, thank you very much, David, for that background. Steve, from a Nominet perspective, would you like to add to what David shared about the definition of AI, and what you are seeing in your world?

I'm certainly
not going to argue with the National Cyber Security Centre's definition of AI, but, as David said in that latter part, the most interesting thing is the proliferation of AI: almost any tool that you use now is claiming to use AI, even if it is just a set of if statements. Any new software that you buy will claim to have AI in there, and that may be something very simple, like a chatbot you talk to, or it may be truly enhanced by the use of AI. So we are seeing it across everything. One of the difficult things to understand, as a consumer of the software, is: is it really AI, what is the AI actually doing, and what do I need to consider in addition to the things I would normally consider? If it has got AI, what are the additional things I need to think about, both in terms of security and its use, and perhaps even where its sources are, where it's getting its data from? There are all sorts of things you now have to take into account just because something claims to have AI. I think that's an interesting thing, but also quite a scary thing when you think about it.

Absolutely. And obviously, as we are joined mainly by nonprofit and charity representatives, what would be the opportunities? Henrik, being from the sector, could you walk us through specific examples of how the National Trust has used AI?

Well, I would much rather talk generically. I'm certainly not going to argue with the NCSC about the definitions; I think that was absolutely bang on, probably the best capsule description I've heard. In terms of the democratisation, or the spread, of AI which Steve referred to: it's universal, and one of the problems that we have in the sector generally is that, with the exception of the larger charities that have sufficient budgets, most of the sector probably don't have even a dedicated IT team, never mind a dedicated security team. As a result, there is a major opportunity in AI, in that it allows for the generation of content very quickly. There is also a major vulnerability and a major threat in AI, in that it allows the generation of attacks very quickly, and it opens a certain exposure to AI-driven attacks, and this is not going to go away. This is already a feature of the standard attack profiles that we see out in the world, and the NCSC could speak to this a lot more than I could, but certainly we're seeing attacks now mounted by criminals and activists that two years ago would have been the preserve of nation states. The tools are spreading, and there's been a corresponding spread in the market in terms of the uptake of AI. It's incredibly attractive for a small charity to be able to use AI to generate content or to answer queries: potentially it's very resource-efficient, potentially it's very cost-efficient. The difficulty, of course, is that you still have to have a human in the loop at some point; somebody actually has to look at this bloody content and say, is this right or is it not? If you use AI to extract information or intelligence from large arrays of data, there are all sorts of considerations. The first is: where is the intelligence coming from? Is the information that's being returned true and accurate? Because just because something's on the web doesn't mean that it's right. So again, it's not a universal panacea, and there's every risk that a well-meaning use of AI tools can lead you into some fairly dubious territory. To that end, one of the things that we're putting a lot of time and effort into is producing an ethical framework for the use of AI, because there are ethical, moral and legal issues as well as security issues, and as well as all the practical techie stuff.

Yeah, absolutely. We ourselves have loads of content around the ethical implications of using AI, and we also did a webinar last year about using AI to generate content, which we
can put some links to in the chat. Thanks for bringing that up, Henrik. From your perspective, Steve, what would be the primary applications of AI in the charity sector?

Yeah. As Henrik said, there are so many opportunities for it to help with productivity, in terms of generating content and those kinds of things, and I see it as a similar evolution to the one we saw when software moved to software as a service. You used to have to get the disc if you wanted a piece of software on your computer, and someone would have to install it; then, all of a sudden, anyone in your organisation could go to a website, use any piece of software, put their credentials in, put in details about your organisation, and all of a sudden you'd lost a bit of control over where your data was, what software you were using, and how it was being used. That kind of shadow IT thing started to happen with the proliferation of software as a service, and now that's going even further with AI, in that you're not just using a piece of software, you're using something that might use the content you're putting into it, and you don't necessarily have control over where that content is stored or how it will be used in the future. So the policy side becomes really, really important, so that people understand what the potential risks are and how your organisation is going to manage them, putting guard rails in place so that you don't reduce the opportunities and the productivity gains, but you do minimise the risk to the organisation of these tools being misused, or of facing legal action because of something that was put into a chatbot
perspective, and obviously working a lot with charities and other sectors, what would you say are the main shortcomings and threats of AI?

I'd probably echo what the other two panellists have said, in that a big risk is using it and putting something out there that's just not quite right, is inappropriate, or might have been unethical: not enough care given. On top of all of the guard rails and other important areas that Steve just called out, I think something which has to become bulletproof is the quality control process that people use. A really dangerous element of large language models in particular is that the things they create look accurate and fine at face value, and you can often only spot where they've gone wrong if you really, really review them, possibly with an almost adversarial mindset, where you think: it's tried to sneak something through that's wrong, how do I find it? You need a really deep quality control process. A great example I have of that was early last year. I did a team strategy day, and, as an ex-consultant, I think in PowerPoint, so I wrote up the findings from the team day in PowerPoint. But I know some of my colleagues don't think that way, so I used a large language model to turn the PowerPoint into a document, and it looked right; all of the bits that we'd said
were there, but it had very subtly got things in the wrong place and slightly changed the meaning, and one of the team, after reading it, got really upset, because they felt that their comments hadn't been taken into account and that I'd misunderstood them when I rephrased the strategy day back to everyone. For me that was a real lesson: I'd looked at it, I'd read it, and it looked fine, but the level of quality control you need to do is much higher, because these things sound accurate. So I think there's something in there about: is it ethical to use it, and is the thing it's creating actually accurate, or does it just look accurate? And then, if you start looking towards using products or automations and things like that, the security challenge gets higher, as you need to make sure that the company or service you're using isn't just hoovering up all your data and doing something bad with it, and you need to be sure that you're not designing a system that can itself be abused, and so on; it suddenly gets a lot harder. But I think the main risk for most folk is just the straightforward one: I've used it for something, I haven't really checked its output, and I've put it on my website or mailed it to all of my customers and donors.

Yeah, absolutely, and that's a very good example. Staying alongside the threats: distinguishing credible information from disinformation is sometimes really hard, unless you have some human eyes looking at it and critically reviewing it, so that's an interesting point. Henrik, can you give us some specific examples of how the National Trust is managing these threats around AI?

Well, yeah, I'm loath to open the kimono too much on what we actually do specifically, for obvious reasons, but it's something we're very keenly aware of, and the threat has a number of components. There's the straightforward external threat: the bad actors who want to do bad stuff to you. Our conventional security architecture is up to the task of managing and containing that threat as it stands today; I make no guarantees for tomorrow. The internal threat, I think, is subtler, and I think David has the right of it when he was talking about the integrity of information: the reliability and trustworthiness of information, and the conclusions that the algorithm will draw from the information it hoovers up. Those of a similar vintage to myself will remember the Knowledge Management boom of the early noughties, and
one of the things that came out of that was the need for information hygiene. You need to be absolutely sure that any information you hold is accurate, and that has a number of implications in terms of how you manage your data and the metadata associated with each individual data object: it has a sell-by date, it has a validity indicator, it has a source indicator, so that you can actually state your degree of confidence that, yes, this information is accurate. Any AI-driven extrapolation from information will go straight off the rails if it's based on duff or dubious information. That's one side of things. The other side, of course, is that AI allows you to use the entire resource of the web, and we all know that, to quote Sturgeon's law, 99% of what's on the web is garbage. It's inherently unreliable. I don't mean by that that it's all wrong, but it is unreliable: you cannot verify its accuracy, you cannot verify its reliability, you cannot verify its authenticity. And because AI delivers a very polished product, exactly as David says, it gives the impression of absolute accuracy and absolute respectability, so, exactly as he says, the need for that deep dive, that very deep validation process, and not just a human in the loop but many humans in the loop at various points, becomes more and more apparent. And the net
result is you end up not actually saving that much time, because of the extra effort you have to put into validation and checking.

Absolutely, yeah, you're very right. That's a big element of risk there: you need to balance the need for efficiency against the need for accuracy and providing the right information. Steve, I don't know if you wanted to add more to that?

Yeah. As Henrik said, there's that validation of the data source, and it's not always clear, when you're using these tools, what their data sources are. You know it's coming from the web, but you don't know which parts of the web, and...

Oh, sorry, you muted yourself, Steve.

Sorry, I pressed the button there. Yeah, the majority of the web is wrong, is garbage, and particularly if you start to get your AI models looking at forums and social media and those kinds of things, then it turns from garbage into just pure garbage: there's very, very little of genuine quality in there that you might be able to say is reliable. And these tools give you that content back with such confidence as well. As David said earlier, it really takes an analytical mind to say: is that quite right? Quite often you can ask it to give you quotes, if you're doing presentations, and then you find out that the quote it states was never said by anyone; in fact, the time it says the quote was made was before that person was even born, a different age entirely. If I'd stood up on stage and said that, people would be looking at me and thinking this person's a fool. So your personal credibility is at risk, not just from a security perspective: if you start to become reliant on these tools and you lose that analytical mindset, that's a real risk to your personal credibility. And I think one of the challenges, as we've got people entering employment who will start to use these tools more and rely on them more, is how we can educate them to have that analytical mind and not just trust everything. As anyone that's got children will know, if something's on social media then it's true, and it's very difficult to convince them otherwise, just because the majority of their friends have said something and it becomes true. That then leaks through to a similar behaviour and mindset with AI: if the chatbot says it, then it must be true, and that becomes the truth in their mind. So I think that's a challenge we've got for the generation that are coming into work
now, who will start to use these tools in their everyday work: how can we educate them to have that analytical mindset? It's a behaviour thing as much as a technical thing.

Yeah, as so often with security problems, it's far less to do with the technology and far more to do with the people and the process.

Absolutely. One of the things that alarms me slightly with machine learning is the ability to skew the algorithm, to mis-train it. We've seen this happen before with early chatbots, where, inside 24 hours, chatbots that were put up in good faith by large companies (no names, no pack drill, but they're based in Seattle) very rapidly turned into spouters of Nazi propaganda, simply because of how people trained the chatbot, trained the algorithm. That's alarming, because it's a black box: nobody knows what the hell's going on in there, and it's very much a leap in the dark, particularly when you think about it in the context of democracy and elections and things like that. We've already seen those being influenced even without these tools, so you add these tools into the mix and that risk becomes really quite high.

It does. It's particularly relevant in the third sector, for charities, because the reputational impact is very significant. Reputational risk is a huge risk that we all carry, simply because we stand or fall on our revenues, and our revenues are raised through reputation.

Yeah, absolutely. And talking about the context of elections: because 2024 is one of the biggest election years throughout the world, there are even more risks of misinformation and misuse of AI. I think we have a couple of articles around that which pertain to the UK context, but for international charities that may be attending this panel discussion it is also very important to keep that in consideration, because, like you said, Henrik, the reputational risk from a charity perspective is even greater.

Oh, it is, but the other point to bear in mind, of course (not so much relevant to the likes of us at the National Trust) is that the larger NGOs actually have geopolitical significance in themselves; they are actual actors, and I'm sure David could talk in a lot more detail about that, but it occurs to me that the major NGOs, and some of the NGO-adjacent organisations in the UN, are incredibly open to attacks, or to influence, delivered through these means. Exactly as you say, it's an election year, and there are nation states whose declared national security doctrine includes the use of disinformation in support of their political objectives. So it's an alarming time; scary days.

It is, it is. But moving on, you
know we talked a lot about the risks, so how can attendees here, and obviously larger organisations, mitigate those risks, and how can they ensure that they're operating effectively? I know we mentioned policy frameworks, for instance, but if you have any other, more practical steps that they should take, that would be good. David, perhaps you'd like to start on that, given the NCSC's guidance?

Yeah. I think one of our initial positions would be that it's worth organisations thinking through the risks from AI and LLMs themselves, but they don't necessarily need to run off and create policy for every part of their business around AI and LLMs just yet. One thing we would say is that if organisations are doing governance well, then their current processes are probably good places to start, and might only require small tweaks. For instance, if an organisation is asking the question "should we use this generative AI-based thing to help our website developers?", you probably don't need a new process for that; you probably just want to look at your existing process for how you adopt any software package or any provider to help your developers. Then you'll be looking at things like the terms and conditions, the privacy, the security, and how you're planning to use it, and
so you might find that you need to add small bits to account for some AI-specific things, but generally, looking at your existing processes, and using test cases where people in your organisation want to try to do something using AI (helping them through the existing processes and making sure those processes stand up) will probably get you pretty far. As for specific things organisations might want to do: staff awareness is probably our number one, helping people be aware that these things can go wrong, and that they're not the panaceas and oracles of truth that you might accidentally mistake them for. Get the message out there that, yes, it's great if we can find ways to be more efficient as charities, but there are some rough edges to these things, so quality control, knowing what you're using them for, and having an ethical framework do become important. So I'd say our light-touch approach would be: staff awareness front and centre, that these new things can go wrong; then really doubling down on existing processes and making sure they're fit for purpose against some of the new threats coming in with AI; and then new policies where necessary.

Thank you, David, those are some very good points. Henrik, would you also like to elaborate on what you're doing?

I completely agree with what David says, and he's absolutely right not to big this up. When you come down to it, AI is just another thing that computers do, and we've all pretty much got mature policy and procedural harnesses within which IT is operated; the bigger organisations function effectively as IT enterprises, so they've pretty much got that nailed. In our case, we treat AI as just another thing computers do, and if somebody wants to bring an AI product on stream, then it goes through exactly the same process of software evaluation, exactly as David says: the testing regime for functional testing, the compliance with non-functional requirements, the information security review, the data protection review, because obviously, as with all other charities, we have a huge GDPR exposure, so we take that stuff very seriously. And we do try our hardest to
regulate the introduction of new software onto our network. There is a balance, though: we are not a top-down, heavily hierarchical organisation, and we're not locked down solidly, because we're a charity. We have staff and volunteers who want to do good things for other good people, so there's a limit to the number of sticks we can apply. We try to use carrots as well: we try to make the process as easy and as inclusive as possible, and so far, touch wood, we've been quite successful. What's keeping me awake at night is the increasing integration of AI into the tools that are commonly used. Take, for example, Copilot in the Microsoft 365 suite. Copilot is pretty much unaffordable for people at the moment, at $30 per user per month, but that's not going to be the case forever, and AI is increasingly being integrated into things like Webex and Salesforce, so Salesforce, which a lot of people use, will have a very strong AI component going forwards. That's fine if you know it, if you understand it, if you can test it, evaluate it and train people. My concern is that it may well end up happening by stealth: some well-meaning companies may well introduce an AI component into whatever it is they're serving up,
and that's specifically an issue where you're buying their services in as a service, where you have no particular control over what they're doing on the other side of the fence. That's quite an alarming prospect.

Yes, we agree, and obviously some of our partners are the ones that you mentioned. We also want to make sure, from a third sector perspective, that the voice of the third sector is heard: the AI proposition needs to be shaped by the sector, for the sector, and not only by technology providers. On that, we have a couple of workshops around that proposition on the 21st of March; we'll put more details in the chat. But yes, completely agreed: a lot of AI is going to be added to the tools we use every day, and we need to fully understand the extent to which it is going to contribute to the results we're getting, productivity and so on. And from your perspective, Steve, what would be the mitigation of the risk, or how do you operate effectively using AI?

Yes, so part of the answer here is education and awareness. As Henrik said, these things are going to be used; there's no getting away from it, it's becoming part of all the software we use. So make sure people understand what it is and what it does, and encourage them to use it in constructive ways that meet the policies and frameworks you've given them. If you just say, "that's it, we're banning AI, you can't use AI in this organisation", you're just going to create more shadow IT: people using things without telling you, which you then can't see or control. So it's about positive reinforcement of the policies and frameworks you give people.

The other thing, to build on something Henrik said that really resonates with me, is about the tools we're using now starting to have AI come in as a service, where you get very little foresight or oversight of what those things are. That becomes a real risk when you're using, say, platforms like this one — video conferencing platforms and productivity suites. We've largely talked about LLMs so far, but what about when a suite starts to recognise facial expressions, or the activity you're doing on your computer, how fast you're typing, what else you're doing whilst you're on a Teams call? From a productivity perspective someone might say that's great, because I know when someone's working, how long they're working for, how many rests they have, whether they look tired, whether they get enough sleep. But from a privacy perspective, do I want my employer knowing all of that? There's a real line between understanding how you can help someone be more productive and actually invading their privacy,
and when it starts to read your emotions and your face and all those kinds of things. If a tool starts saying, "well, they were saying this, but they didn't actually mean it, because I could tell from their facial expressions", that becomes a real challenge for us, socially but also from an employer and employee perspective. I've seen this in some of the tools we're seeing now: they start to tell you how many connections you've had, how long you've been in meetings or not in meetings, all that kind of stuff. There's an awful lot of data there, and whilst that's not AI, just collection of data, when you start to add AI on top of it — and we've already talked about the fact that it can make inferences that just aren't true — employers may start reading that information and taking it as fact when actually it may just be inferred. So there are all sorts of things we need to think about for the future. It's not something that's here right now, but as we go forwards we need to start thinking about that kind of control over those tools: when new features are introduced, reviewing them, understanding whether it's something we want to use, and feeding back to the software suppliers. If you can't turn it off, well, either I want to be able to turn it off or I'm going to switch to a different suite. It's both a fascinating and a quite scary world that we're heading into.

Yes, absolutely, and I like the fact that you mentioned asking whether, if an AI functionality or feature is being implemented, there is a possibility to opt out of it. Sometimes we don't know; I don't know the answer for each individual piece of software. It would be worthwhile, when reviewing your IT and software portfolio, to be able to ask that question and have the answer. From your perspective, David, what would be the future possibilities of AI's impact on society and on the third sector?

Yes, as was just said, it's a truly exciting time. In my career in tech so far, this is probably the most excited I've been about technology; there's so much happening, and it's striking that even the creators of these artificial intelligence models don't really know how they're doing what they're doing, or where it will go, so it's hard to predict. We can probably make some guesses over the next year or two. I think AI is going to go through a bit of a process as people learn that it's hard to get some of the benefits everyone is hoping for. Whenever we've done research inside the NCSC on trying to turn a nice idea into an actual production automation — something that just does something repeatedly — it's really hard. LLMs are incredibly good at giving you a one-off demo of something to show the potential, and they're incredibly bad at doing something repeatedly, every time, in a way you can rely on. So I think over the next
year, AI may go through a bit of a winter, or at least a chilling, where people start to realise it's maybe a little harder to get quite what they were hoping for from it. Then I'm fairly hopeful we'll come out the other side: more people will be able to code than could code before, so organisations will be able to become increasingly digital. I also think the societal value of knowing an actual person was involved will get bigger. People may become inundated with AI-generated things and suddenly want more in-person connections, or want to meet the charities they're supporting, or to know they're talking to people in the real world rather than just to a chatbot. We might see that. And
in terms of where we go after that: if you listen to the vendors, they want every AI to be a fully intelligent, PhD-level person just there to help you with any of your tasks. So there's a dream state where we spend less time doing the jobs we don't want to do and more time doing the things that are valuable for us to do. Whether we'll get there I'm not entirely sure, but there are definitely going to be some hurdles getting there, as we hit both the winter of whether these tools quite live up to the promise, and people discovering that they sometimes bring new security risks that haven't been managed as well. So it's going to be an interesting few years, but it should be fun.

Absolutely, thank you, David. And what about you, Henrik, in the National Trust world: how are you thinking about the possibilities of AI's impact?

Well, we're keeping a very close eye on it, because it's extremely attractive from many perspectives, in that it allows us to generate attractive and useful content. And we're all about the content: as the curators and preservers of the national heritage, and of a big chunk of natural England, that's very important to us. We let people know what we're doing; we attract people to come and look at the beautiful things we look after, and to come and walk in the beautiful countryside we care for. So that in itself is incredibly attractive, as is the fact that, inevitably, we — like the rest of the sector — are going to be moving far more of our operations into the digital domain rather than the physical one. The old four domains of land, sea, air and space are now joined by the virtual, and it's real now. People have been talking about it off and on for years, certainly for as long as I've been in technology, which is pretty much since great reptiles roamed the Earth, but the day has now dawned: people need, deserve and expect a completely fulfilling digital existence, and we can do a lot with AI to guarantee that. We can also do a lot with AI to ruin that for people, so we have to be incredibly careful. I think Steve's point about obtrusive use of AI for intrusive reasons is very well taken, and it brings up the other point that the ethical framework needs to work in both directions: it determines the ethics of the use of AI by those who are responsible for operating it as well as by those who are actually using it. That was a shrewd point, very well made.

I'd agree with David that there's going to be a chilling period, but that's inevitable: after every period of tech hype there's always a chilling period, before everybody puts their toys back in the box, gets on with their lives and gets ready for the next big thing. I think AI has the potential to be revolutionary in its impact on the information industry, and the market will probably sort it out. In three or four years' time, when we have a much more mature market, and much greater awareness of what it can and can't do, what it will and won't do, and what you should and shouldn't do with it, the impact will be more easily contained. I think we're in a particularly sensitive and dangerous period for the next year to eighteen months, as people are desperately trying to find benefit from this, and this is where the potential for upsets is at its highest.

I would strongly echo that. Good. And, obviously, we mentioned before: how can the charity sector upskill its staff? Because people are a core element of the use of AI, and of the ethical use of AI. So, David, do you want to walk us through some of the NCSC advice available?

Yes. Our advice generally is a good way to upskill on these things: formal and informal training and so on. There are free courses out there, and a lot of the main vendors do trainings on LLMs and on understanding and using them. I would say
that, for us, an important thing to do is provide a trodden path for people to play with these models and start getting their heads around them. Find a model, or a service, that you're happy for people to use, and point them at that, because that minimises the risk that they'll go and find their own way and put data somewhere you're not happy with. I would probably rather an organisation finds a model it is happy for people to use with all of their organisational data — perhaps because it's a paid-for model, or because they've reviewed the terms and conditions carefully — than tells people not to use it or to make their own choice, because we find the latter leads to people putting data somewhere you're more worried about. So it's often better to provide a trodden path to one you're happy with, and let them get their heads around it that way. We have found that playing with these models, and getting your own feel for where they're good and where they're bad, is really important, in a way you can't fully learn from a presentation or a talk: you do need to talk to them a bit.

Absolutely, thank you. And obviously the NCSC has great resources available as well. How about you, Steve: anything you'd like to add to David's point?

I think David was spot on. I would just add that there are training materials available from a lot of the major providers, but it may also help organisations to do some of their own training on how those tools might be used by the organisation in a constructive way, and how they might not be. As Henrik was pointing out earlier about some of the materials being created, the ethical concerns around them and how valid they are: take people through that process so they can see the pros and the cons, but also point towards some of the things where AI can be really powerful, such as summarising large amounts of content. Then get people to think about how it could help them and what the risks are. So do workshops and those kinds of things, just as you
would do for any kind of security training, such as training around phishing: take people through it in the context of the job they do, rather than as a theoretical thing, because you always learn better when it's in the context of something you understand and do on an everyday basis.

Absolutely agree. And how about you, Henrik: any further thoughts or comments around the upskilling of staff?

Yes. There's something you can say about the UK third sector, which is that the bulk of it is a huge constellation of relatively small organisations with limited resources. A lot of them don't even have a full-time IT person, never mind a full-time security person, and that can be an issue. People like you and the NCSC and various others are doing their bit to try to weld this together, to provide communities of interest and to allow for the pooling and sharing of information and knowledge. One of the issues the sector as a whole faces is that most people involved in it are so busy doing what they're doing for a charitable purpose that they haven't really thought very much about the IT side of things, never mind the security side, and certainly never mind the AI side. So the more we can do in terms of educating the sector as a sector, the more we can do in terms of outreach, the better; and it probably falls to some of the larger players in the sector to do their bit. We work quite closely with the NCSC, we support the NCSC and their initiatives, and we very much appreciate the work they do and the resources they make available, but we do have to find some way of reaching out further into the wider third sector, because the wider third sector is where the major exposure is. I don't want to be smug about this, because we're in no way perfect, but we are better protected and better prepared simply because we're bigger: we have economies of scale, fewer budgetary constraints, and the luxury of full-time dedicated resources working on this stuff. That's not the case for most of the sector, and I think that's
my main worry for the sector.

So: supporting staff with trusted self-learning opportunities provided by the NCSC — obviously we can amplify the work of the NCSC — and also having plenty of conversations within the sector about what training is available and affordable from trusted providers.

Yes, even just sparking the conversation, because I'd guarantee that probably 70% of the trustees out in the big wide world of the third sector have never thought about this. They have a number of legal obligations, and a number of moral and ethical obligations, to the charities they oversee. One doesn't want to terrify them, of course, but one does want to make them aware that there are some issues here that need to be considered.

Absolutely. I think the NCSC has been thinking about this; David's probably got some thoughts. David?

Yes, sorry, I was just putting links to some of the resources we do have, and where we would advise folk to look, into the chat. I think there's probably a bit of a gap at the moment, which we're looking at internally, around advice for how organisations choose a chatbot, and how they embed those tools into an organisation. We've got some good material for people who are building systems that might use these things, but one thing we're working on — and I'll take this back — is that we probably need some good guidance for people wanting to adopt these things more safely, beyond a statement saying "make sure your existing processes are fine". I'll drop some links to existing content in the chat.

Thank you very much to the three of you for sharing your insight and expertise; it's very much appreciated. I'll go to the questions from our audience, because there have already been quite a few. For all of you attending, please do share your questions and we'll go through them. The first one is from David, who was referring to the copilots and ChatGPT. He says he has had incorrect information from
ChatGPT before, and asks whether he can use AI-generated images on his website instead of stock images. What would the guidance be, Steve?

I think that tends to come down to where you're generating them from, and the terms and conditions: whether it's free to use, and for what stated purposes. Quite often these things have tiers: there's a free version, which might put a watermark on the image and can't be used for commercial purposes, and then there are paid-for versions. So really it's down to the terms and conditions of each of the tools you're using.

Yes, and a really good point to reinforce: when we talk about the use of these things, I might put my organisation at legal risk by using something whose terms and conditions prohibit me from doing that.
Absolutely, especially as there are some court cases at the moment, with artists suing AI companies for using their artwork without the rights. So it's certainly an interesting one; we can share some articles we've written on the matter, but it's best to check the terms and conditions before using those images. And a similar question from Sue: is
there a way to ask AI to give references to validate its information?

Well, there is, but it's not reliable, because with a large language model, even if only 1% of the references it cites are incorrect, that invalidates the entire thing. So I would be very wary, and I would check back any references cited; that's back to this idea of the human in the loop. If there's a quote, for example, attributed to somebody, I would go
back and look for it, and make sure it's for real.

Absolutely agree. David, you also wanted to come in?

Yes, I was just going to say: they're really dangerous here, because if you ask them for references they'll give you an incredibly good-looking thing that looks like a reference. I was doing academic research and asked, "where have you got that fact from?", and it gave a perfectly formatted academic reference. I looked, and the journal existed and the author existed, but they worked in a different field and had never published on that topic at all. So you definitely can't just ask them.

On the Copilot point that was mentioned in the chat: an approach you can take — and this is something individual charities wouldn't do themselves, but the people who sell these models, the vendors and the service providers, are working on — is something called
grounding, which is where you take the answer from the LLM and then separately look up the reference yourself, with automation. One thing Copilot does is say, "here was the data that went into the answer", so those references are not just Copilot pinky-swearing that's where it got the information from: they are actually validated. But that's a property of the service you might procure; they have to do it, really.

Thank you very much, both. A question from Helen, who asks: we talked about minimising legal action if something comes out of a chatbot; do charities need to put something on their website saying they do not have control over what an AI tool says about the organisation? Is that a way to minimise legal action? I don't know who would like to take that. David?

I think again it comes
down to the real specifics. If you're doing anything that you're going to be putting on your website, then you definitely want to be looking at the NCSC guidelines for secure AI system development, but you also want to be doing full legal and ethics reviews: is it an appropriate thing to be doing, what harm could be caused if it goes awry, and have you properly made sure it won't go awry, with the right guard rails? So if you're putting something publicly on your website that is able to create content using AI, you want to do a full legal and ethical review, and all those other reviews, because it's hard.

Yes, and talk to your legal department to get an appropriate form of words, and keep them involved: the legal department and the communications department need to be absolutely involved in
all parts of this.

Absolutely, you're very right. And obviously, if you're developing a chatbot that uses AI, it's very important to ask these questions of the providers, and to be extremely thorough in your due diligence review of them. Moving on to the next question: somebody says they are at the start of their AI journey, working on creating an AI strategy, policies and a framework, and asking if there are any examples of
AI policies that could be shared, or any pointers.

Well, there's a fair amount of resource available: a simple web search will produce a representative handful, including some from respectable organisations who have seen fit to share their policies as an example to others. The approach I would suggest is downloading as many of these as you can get hold of, having a good long hard look at them, and seeing whether you can synthesise from those various inputs something that's fit for purpose for your organisation, that fits with your existing policy infrastructure, and that will do what you want it to do. As with all policy development, there are no shortcuts. This is certainly something Steve and David will have views on.

You could always ask an LLM to create one for you.

That's an idea — and then you could ask another one to summarise it. That's right.
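Jokes aside, if you do synthesise a draft policy from several published examples, a few lines of code can sanity-check that the draft at least touches the key themes. This is purely an illustrative sketch: the theme list and keywords below are assumptions for the example, not an official or complete checklist.

```python
# Illustrative sketch: check that a draft AI policy mentions a set of
# expected themes. The themes and keywords here are assumptions for the
# example, not an official template.

EXPECTED_THEMES = {
    "approved tools": ["approved tool", "permitted service"],
    "data handling": ["personal data", "confidential"],
    "human review": ["human review", "human in the loop"],
}

def missing_themes(policy_text: str) -> list[str]:
    """Return the themes for which no keyword appears in the draft."""
    text = policy_text.lower()
    return [
        theme
        for theme, keywords in EXPECTED_THEMES.items()
        if not any(keyword in text for keyword in keywords)
    ]

draft = (
    "Staff may only use approved tools listed in Appendix A. "
    "Personal data must never be entered into public chatbots."
)
gaps = missing_themes(draft)  # the draft never mentions human review
```

The real work, of course, is in the substance of each section; the point is only that a synthesised policy can be checked mechanically against whatever framework your organisation settles on.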
Yes. But as Henrik said, have a look around. I remember when GDPR and the reincarnation of the Data Protection Act came out: lots of people were scrambling around for data protection and GDPR policies, and what we saw was an influx of people happy to share things. I imagine that as AI becomes more democratised, organisations will evolve — perhaps the ones already working on data protection — where people will have an interest in this, and we'll start to see these things come more to the forefront in terms of people wanting to help organisations with their policy. But again, check your sources and make sure it's come from a reputable organisation.

Thank you, all of you. A couple of last questions; please keep them coming if you have any. A question from David, who understands that there are statistical methods for determining whether any content is likely to have been generated by AI, whether that's words, images or video. Do we think we'll get to the point where all content everywhere might be legally required to have an AI probability score at broadcast? In terms of where we think this might go — David, you've got your hand up.

So the hard thing here is that, at the moment, the research
doesn't suggest you can reliably tell whether something has been created by AI. I don't know if you've seen, but there are websites out there which claim to, and they get it very, very wrong. My favourite example was when the US Constitution was put into one of them and it said it was 100% AI-written — which is interesting philosophically if true, but is probably not true. So one risk your comms team might want to think through is: what if someone says something you've put out there is AI-generated when it wasn't? People on Twitter will take something, put it through one of these tools and then say, "oh, this was AI-generated", and there's a good chance it wasn't; the tool just flagged it.

Now, a bunch of the vendors creating images and videos are trying to find ways to almost hide evidence in the videos and pictures that would prove they created them, so that their services can't be abused for disinformation and so on. But the problem is that those solutions are only as good as people's inability to bypass them. For instance, if you take an image that's been generated and mirror it left to right, does that break the marking? If so, people will still be able to get around it. So in terms of how we know what's true online, I'm instead hoping for something like increased AI-driven automation that helps us quickly verify the factual accuracy of a statement. If something is put out there, some kind of LLM-based agent system could say, "yes, I can actually evidence the things they're saying on these websites", or "actually, there's no evidence for these things", or "there is partial evidence, but it's being misinterpreted on this website". So it's almost more about automating the fact-checking than automating the detection of AI content.

Absolutely, that's a very good point. Steve and Henrik, do you want to add to that?

Really just to agree with David, because AI is moving faster than the ability to identify AI artefacts. His solution — cut straight to the chase, examine the content that's delivered for validity, and don't really care how it came about — is right, and you could apply exactly the same tool to human-generated content, to check the validity of the statements, the facts and the citations in it. I think that's the shape of the future, and a fit use for AI, actually. Similar usage is already in place for things like plagiarism detection in academic papers. It's an exciting field.

Absolutely. Steve, you wanted to add?

I was going to say a similar thing, really. The more advanced AI becomes — it's artificial intelligence, that's what it's designed to do — the more you can imagine it becomes harder to spot that something is artificial, because it's learning, and perhaps one of the things it tries to do is become more natural and less obviously artificial. So it becomes harder and harder to detect. And to pick up on that point about plagiarism, Henrik: there are lots of tools that universities and schools use now to detect plagiarism, and lots of educational establishments have banned their students from using LLMs for their essays and those kinds of things. I imagine right now it's very difficult for them to detect whether something has come from an LLM, and is that a problem that's ever going to be solved? Perhaps people need to think about whether it should be solved. As we said, perhaps this is just a way of life in the future, so we have to think not about whether something was created by an LLM, but about how it was analysed and constructed, and think about the marking of those essays in a
completely different way than we do today — which I think is a very encouraging thing for education in the future, as well as a challenge.

Yes, because what becomes important is the artefact rather than the process that was used to generate it.

Absolutely, and I tend to agree. I'm pretty relaxed about something whether it's been produced by AI or by my brother-in-law: if it's a good piece of work, and if it stands up to rigorous
analysis, then it's fine; I don't care.

Great — well, that's all we have time for today. Thank you very much to the three of you for joining, and thank you to everybody for attending this webinar. We hope you found it helpful and that you now have other resources to go and deep-dive into the topic. Just before we leave: we have an event on the 21st of March in London, at Resource for London, and some of the workshops will be hosted by the NCSC, helping provide cyber support for senior leadership teams. We also have an artificial intelligence stream with two workshops, one from CAST and one from another partner, which are essentially about what we need to learn from each other in the realm of AI, and about AI in action: making it work for you. There are more details about these sessions on our website, and my colleagues will share them in the chat. If you're interested in networking with peers, getting some expert guidance, and hopefully making a little more sense of what this all means for your organisation, please do join us on the 21st of March for these workshops. In the meantime, we really hope you have a great rest of the week. Thank you all for joining, and thank you to Steve, Henrik and David.

Thanks for having us.
