
Artificial Intelligence in Philanthropy: Trends and Responsible Use

Join us for a thought-provoking webinar on the potential of artificial intelligence (AI) and its significance in shaping digital infrastructure and enhancing equitable outcomes for the social sector. In this session, speakers from the philanthropic sector will discuss how AI and generative AI systems are transforming how social sector practitioners and foundations do their work and achieve their missions. This session is geared toward philanthropic leaders in decision-making roles involved in high-level strategy, organizational effectiveness, grantmaking, or operations. If you're looking to more effectively drive the future of philanthropy through innovation, ethical practices, and compassion, this webinar is for you.

Council on Foundations


Welcome to AI in Philanthropy: Trends and Responsible Use, hosted by the Council on Foundations, the Technology Association of Grantmakers, and Project Evident. We're so glad that you're here today to learn about AI in philanthropy. We're glad to welcome people from all over the country, so go ahead and drop in the chat where you're joining us from today. Hi from Washington, DC! We'll have the chat open today because there's a great variety of wisdom in the room, and we'd love to hear your perspectives and ideas. The chat is a place to learn from each other and share how this topic connects to your work.

Before we get to the content of this call, I want to share a few tips about using the webinar technology. If you have questions for the panelists, use the Q&A tool in Zoom. We love to see your questions in the chat, but using the Q&A tool helps us organize the questions and inquiries that we want to return to later in the call. You can also upvote questions posed by others that you're interested in hearing us discuss later. We are recording the video, audio, and chat for this call. You'll receive access to the recording and all the links and resources that we share in the chat in an email later this week; we'll also post the recording to our website. If you'd like to view auto-generated subtitles for the webinar, click Live Transcript at the bottom of your screen and then Show Subtitles or View Full Transcript. If you need any support during the call, please send a direct message to me at Council on Foundations, or you can email webinars@cof.org. With that, I will turn it over to Wendy Torrance, director of strategic initiatives and partnerships at the Council on Foundations, to introduce today's speakers. You can learn more about her and our speakers today by checking out links to their bios in the chat.

Thank you very much, Caroline, and welcome, everyone. The Technology Association of Grantmakers provides our sector with important insights related to the strategic, innovative, and equitable use of technology. The Council is grateful to TAG for its leadership in advancing the sector. Today, TAG is joined by Project Evident, a nonprofit focused on supporting data- and evidence-based decisions for social impact practitioners and funders. I'm honored to introduce our speakers today. Chantal Forster leads the Technology Association of Grantmakers as its executive director. For over 20 years, Chantal has worked across private, public, and social sectors to advance collaboration and impact. Appointed to her role at TAG in 2018, Chantal has become a voice for philanthropic investment in digital infrastructure and an advocate for centering our work on the grantee experience. Sarah Di Troia is a senior strategic advisor for product innovation at Project Evident. She brings experience at Health Leads and New Profit to her role at Project Evident, where her experience as an investor, advisor, and leader helps her integrate market insights with the internal change management necessary to realize new opportunities. They join us today to reflect on the challenges and opportunities that AI brings to the philanthropy sector and to share with all of you how AI and generative AI systems are transforming how we are all doing our work and achieving our missions. I invite you to offer a warm virtual welcome to Chantal and Sarah. Chantal, the floor is yours.

Wendy, thank you. I'm excited to go ahead and get started. I'll share my screen.
All right, are you able to see my screen just fine? Great. So welcome, everyone. I'd like you to raise your hand if you've played this game, whether in a meeting or at a happy hour or a dinner party. The game goes like this: how long can we go before someone mentions AI? I think it's like 18 minutes, maybe 5 minutes if it's an internal tech team meeting. So we're all talking about AI right now. But the truth is, it's not new, actually. Case in point: I don't know if anyone's ever used Duolingo. Duolingo has 16 million active daily users, and Duolingo has been using AI for 10 years. You've used AI yourself in everything from Siri, Alexa, and Google Translate; if you've seen autocomplete in your favorite email tool, that's AI. Google Search, fraud detection by your bank, and hundreds, thousands more applications. On a personal note, I remember my father working on a master's in AI in the late eighties, and then 20 years ago I worked on a product dev team for a predictive analytics platform that's now part of the IBM Watson Studio, and we leveraged a wide variety of AI tools then. It's that old.

So why are we all talking about it now? I think there are 2 reasons. One is that there have been some improvements to neural networks; they're one of the underlying forms of AI that power a lot of what I just described, like language translators, for example. In 2017, Google, alongside researchers from the University of Toronto, developed the transformer neural network. It's a new kind of neural network that essentially does this: it has the ability to sense the context in which a word is found. In the past, you would feed multiple words in sequence into an algorithm, and it would spit back other words in sequence, but it didn't actually know the context of a word within the sentence or the paragraph. I'm oversimplifying grossly, but the transformer neural network enabled that self-assessment of words within a sentence, of language within its context.
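For the technically curious, here is a minimal sketch of the self-attention step at the heart of the transformer, in Python with NumPy. The toy vectors are invented, and real models learn separate query, key, and value projections from data rather than reusing raw embeddings as below; this is only meant to show what "sensing context" looks like mechanically.

    import numpy as np

    def self_attention(X):
        """Toy scaled dot-product self-attention.
        X holds one embedding row per word; each word's output becomes a
        context-weighted blend of every word in the sentence."""
        d = X.shape[1]
        # In a real transformer, Q, K, V come from learned projections.
        Q, K, V = X, X, X
        scores = Q @ K.T / np.sqrt(d)                  # how much each word attends to the others
        weights = np.exp(scores)
        weights /= weights.sum(axis=1, keepdims=True)  # softmax over the sentence
        return weights @ V                             # each word, re-expressed in context

    # Three "words" as random 4-dimensional embeddings (illustrative only).
    sentence = np.random.rand(3, 4)
    print(self_attention(sentence))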
And then the second development was just the wider availability of big data, so it was easier for researchers to train these algorithms on larger and larger data sets that were now available. Those 2 things have really changed the dynamic. And so, enter ChatGPT in late 2022, version 3.5 of the large language model chatbot that we call ChatGPT. It made a major splash because it seemed human, right? It seemed to understand what we were asking it. It could generate not just sentences but poetry and press releases and agendas, and extract meaning. The marketing campaign was very well coordinated and very effective, and so now we're all talking about it, and as a result there's a great deal of confusion about something that you've actually been using for quite a long time. So that's why TAG and Project Evident are here with you today, with the Council, to go a little bit deeper and chart a path forward for philanthropy.

A bit of an overview of what we'll be doing: I'm going to spend just a few minutes talking about a recent pulse-check survey on AI that TAG conducted with our members. I'll share an example of an AI usage policy, and I'll talk a little bit about an AI adoption framework that Sarah from Project Evident and I are collaborating on. Then I'll shift gears, and Sarah will go much more deeply into a nuanced understanding of AI, with some great case studies from the social sector about how people are leveraging these tools.

So with that, I'll dive right into the survey. Last month, TAG decided to level-set with our members in philanthropy and ask them: what are you hearing, what are you being asked, and how can we help? Everyone's talking about how AI will change our lives, and philanthropy is no different. So I want to share a little bit about what we're finding in that survey. This is the first of the 2 questions we asked, and as you can see, most foundations right now are wrestling with 4 key questions. Number 1: what are the ethical risks of AI? There are of course other risks, but this is the one most people are being asked about at their organization. Number 2: how can we use AI to streamline our grantmaking processes and enable us to be more efficient and effective operationally? Number 3: how can our nonprofit partners and grantees be enabled by AI; how can AI make their lives easier in grant applications and reporting, for example? And the fourth bucket is actually my favorite: how can we invest in AI to extend the mission of our nonprofit partners? How can they do more with their dollars through the use of AI?

And then the second question we asked was: what do you need? What would be helpful for you right now? You can see that people would really like some basic education, some transparency on what AI is and what these algorithms are doing. They'd like guidance on how to navigate the ethical considerations to ensure that AI's usage improves equity. They're also asking for practical information; they need case studies on how to use AI responsibly and to understand its practical applications. They'd also like more strategic conversation. We're going to get there in a minute, but they'd like place and space for more strategic conversations about the role of AI in our sector: who should we partner with, how do we partner, and what does it look like to invest in AI on the longer horizon? And lastly, they'd really like a cookbook for assessing risk: looking at a horizon of opportunities, understanding the risks associated with each, and determining how to proceed.

Now, note that this survey was conducted before yesterday's somewhat historic bipartisan meeting on AI with Elon Musk, Mark Zuckerberg, Bill Gates, Sam Altman, and others, led by a bipartisan group of senators. If it's helpful, I actually shared a recap of that meeting on LinkedIn yesterday; I'll post that in the chat. It was pretty historic, and you'll see that the government is exploring its role very deeply in minimizing risk and maximizing benefit for our societies. So while the government wrestles with these questions, philanthropy is also really wrestling, and we're wrestling in this tactical, reactive space. But we're also starting to grapple, and I hope we'll go deeper, with philanthropy's role in workforce education, in minimizing future bias and harm, and in investing in what Bill Gates called transformational innovation with AI, the AI-for-good space. So yes, this may be new for philanthropy, as much as it is for government, but Senator Schumer said yesterday that we have to try to act, as difficult as that may be. And so we're all doing our part. I invite you and your organizations to help address some of the needs that foundations are asking for related to AI.
Here's some of what my small organization is doing. You can find the complete survey findings at this URL, along with the recording of an AI solution showcase we did last month, and some plans for an AI adoption framework, which I'll talk about in a minute.

I'm going to spend just a couple more minutes sharing some examples of these resources, and I'll delve into a couple of them. Many of your organizations are wrestling with AI usage within your orgs, by staff, by grantees. Some of you are creating full-blown usage policies; I'll share one in a minute. And some are just naming concerns. As you see on this particular slide, this is a foundation that has not gone so far as to define usage, but is simply codifying within the org an awareness of the concerns that exist around the use of AI.

And so much of philanthropy is talking about bias. I'm sure you've seen the horror stories. Bloomberg did a bit of research using the Stable Diffusion image generator, a generative AI image generator, to create thousands of images related to job titles and crime, and the results are as bad as you might guess, with skin tone and gender being very strong predictors of how a particular job gets portrayed. Doctor, lawyer, housekeeper, for example: highly gendered, highly skewed by skin tone, in the results from that generative AI tool. Sarah from Project Evident is going to go a little bit deeper on ethical issues, so we'll simply post that study in the chat if you're curious.

In addition to bias, privacy and ownership are also important. What happens, for example, to the privacy or intellectual property of content that's discussed during a meeting if there's an AI notetaker in attendance?
TAG has members who have restricted the use of AI notetakers at all meetings for the organization. So there's a wide array of reactions: I've shared one that's very hands-off, simply naming concerns, and just mentioned one that's very hands-on, restricting any AI notetakers at their meetings.

And then here's a third way. This is a large foundation in TAG's membership that has created an AI usage policy for staff and board. They've categorized the data that the foundation holds into 3 buckets: public, non-sensitive, and sensitive. And they've essentially created a policy that governs the use of AI tools based on those 3 buckets of sensitivity. I'll share an example of each. Public data, according to this foundation's policy, is anything that you could already see on the internet, and they are completely fine with staff, board, and vendors using generative AI tools like ChatGPT, or Bard, another AI tool, with any of that text; a staff member can paste any of this information into ChatGPT and ask for feedback, for example. Non-sensitive data may not be publicly available, but it doesn't reveal any proprietary information or any confidential information about a grantee or a beneficiary or a staff member, for example. And so there are some guidelines related to what you can do with this bucket of information in AI and generative AI tools. This last bucket, sensitive data, could be tricky. Before we all started talking about this, you might have accidentally pasted a grantee application into ChatGPT and said, hey, can you summarize this for me? Or you might have pasted something else, an NDA or a contract, and asked it to highlight issues for you. This would be considered sensitive data that is restricted from use in these tools. If you go to OpenAI's website and explore how they store your data, they fully admit that there is what they call an exposure risk to the data that you input into ChatGPT. The data is stored by OpenAI on servers in the United States for 30 days. They also use the data that you input to analyze trends; they say they do not use that data to train their AI. But again, this bucket of very sensitive data is a risk not worth taking, in my opinion, as the sector and the platforms continue to evolve.
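To make the three-bucket idea concrete, here is a minimal sketch of how an IT team might encode such a usage policy as a simple pre-flight check. The bucket names and tool lists below are illustrative assumptions, not taken from the actual foundation's policy.

    # Hypothetical encoding of a three-bucket AI usage policy.
    ALLOWED_TOOLS = {
        "public": {"chatgpt", "bard", "internal_llm"},   # anything already on the internet
        "non_sensitive": {"internal_llm"},               # guidelines apply; no public tools
        "sensitive": set(),                              # grantee/PII data: no AI tools at all
    }

    def may_use(tool: str, classification: str) -> bool:
        """Return True if the policy allows pasting data of this
        classification into the given AI tool."""
        return tool.lower() in ALLOWED_TOOLS.get(classification, set())

    assert may_use("ChatGPT", "public")
    assert not may_use("ChatGPT", "sensitive")   # e.g. a grantee application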
So, shifting gears a little bit. We've shared on the previous few slides a couple of reactive steps that foundations can take right now, things that you can use to address questions you're getting from your CEO, like: why are we using AI, or are we already using AI, or should we be using AI? Some of what we've previously shared might help you begin to address those questions. At the same time, it's important that we consider the long horizon. So TAG has recently announced a partnership with Project Evident to provide a more strategic lens on AI adoption, much in the same way that you heard at yesterday's Senate hearing, answering long-term questions for our societies: how do we minimize risk, and how do we invest to maximize benefit?

So I'm going to talk a little bit about this AI adoption framework for the next minute or two, so that you can understand what we're working on and maybe help us weigh in on and refine it. What you see here on the screen, we'll be working on this concept over the next couple of months, but this is the loose concept. This is not a final framework; please don't post this anywhere, we're still working on it. The horizontal swim lanes show a way of condensing all the factors into opportunities, risks, and then considerations for ethics and deployment. Those horizontal swim lanes are the considerations you need to weigh for each of the vertical elements. The first vertical element of the framework shows individual use by staff, board, etc.; in the middle, operational effectiveness, how you can use AI to be more efficient and effective in your operations; and at the far right, how AI can be used to achieve the mission of philanthropy and the missions of our nonprofit partners.

So we'll be working on this over the next several months. We have 2 different committees guiding the development of this framework, including stakeholders and researchers from throughout the sector, and we've established the set of principles that you see here. This November, Sarah and I will be testing the framework at an in-person workshop at the TAG conference in Nashville, and you'll see a version-one working framework released publicly in December 2023. Again, we're just trying to equip the sector to assess opportunity, risk, ethics, and deployment in a variety of spaces, so that we can move forward without either overly embracing or overly shunning the use of AI in our sector.
Our hope is that the framework helps philanthropy make day-to-day decisions, like should we allow AI notetakers to attend our meetings, as well as more strategic ones, like how can we invest in AI to scale our operations so that the dollars we give go further. And then, my sincere hope is that we also use the opportunity to pause and ask: what is our responsibility in philanthropy? If we can rally to spin up 200 million dollars in pooled funds for clean energy, or 500 million for local news innovation, what would it look like if we established a pooled fund for AI innovation grants for nonprofits, or deepened our investment in orgs like AI Now or the Partnership on AI? Inside Philanthropy had a recent article on the history of AI funding; it's not that we've never funded research and ethics in this space before. Orgs like McGovern, Schmidt Futures, MacArthur, Omidyar, and more have been doing that for quite a while. We'll post that in the chat so you can see in that article a really great history of the work that's been done to date. You are welcome to download all of these resources and track these issues at this webpage on the TAG website.

In the meantime, I'm going to turn it over to Sarah, one of the most thoughtful people I've encountered in TAG's work on AI in the past year. Sarah is a fellow traveler on the road of the third way, navigating between irrational fear and unfounded exuberance regarding AI, and also a fellow believer in the promise of ethical AI to unlock potential and bridge the digital divide. So, Sarah, I'm going to stop sharing and turn it over to you.

Thank you, Chantal. Okay, I'm going to move to my screen. So first of all, I really want to celebrate everybody who's joined, because we wrote a blog post with the Center for Effective Philanthropy talking about philanthropic engagement with AI, and the first step was: educate yourself about AI.
And so I really appreciate that folks are here in the room educating themselves about AI and wanting to learn more. At Project Evident, we exist because we believe data can be used not just in a thumbs-up, thumbs-down capacity to understand whether or not we're achieving our missions, but in an ongoing R&D capacity, to improve and to get better results for communities. And we believe that's true for nonprofit practitioners as well as for foundations. So you can imagine how excited we were as AI became more commonly understood and awareness was building in the nonprofit sector, and particularly how excited we were to be in partnership with TAG, because we really see this as a way forward for people to be able to use data in a way that is always on for learning inside their organizations and improving equitable outcomes.

So, a quick overview of AI and how it can be used. I want to share some case studies with you across that framework Chantal was talking about, from individual use, can I use ChatGPT as an individual in my work, all the way to strategic use for mission attainment. I want to give you some examples of mission attainment, because I think those are the examples that are most exciting for where our sector will be going in the next 5 to 10 years, if not sooner.

First, just to clean up some terms that sometimes get confused around AI. AI is the capability of a computer to mimic human cognitive functions like problem solving. You've probably heard that past performance is not a predictor of future behavior; actually, that's exactly how AI works. It uses past data to predict what might happen in the future. That is different from generative AI: DALL-E, an image generator; ChatGPT and Bard, narrative generators; there are also sound and video generators. Think of AI as: I love books about mushrooms, so what can other people who liked books about mushrooms recommend to me, because I'm part of that subgroup? Generative AI would be writing me a new book about mushrooms that's focused on the mushrooms of South America. It's based on prior information it's been trained on, and its skills use prediction, but essentially it's generating something new and creative, as opposed to analyzing past data to predict future outcomes.

So I'm going to give you 4 examples of different organizations, one of whom is a funder, that are using a machine learning application within AI, or generative AI, to enhance their outcomes.
First, predictive analytics. Chantal, I loved your examples. As a consumer in our society, you have not been engaged with on a one-size-fits-most basis for more than a decade; you have been experiencing customized engagement. If you have a credit card with an offline retailer, they have data on you in terms of how you purchase and your habits. If you purchase with any online retailers, they have a bevy of information about you. That data can be analyzed, and you can be placed in a subgroup so that they can cater to your potential needs.

Target has a very famous case example of this. They obviously have hundreds of millions of data points across their shoppers and what they buy. They were interested in looking at women who were suddenly beginning to buy diapers, and began to think about what the purchasing habits looked like in the months before that: could they actually predict if somebody was pregnant? And in fact, they could. They could see that if you were buying a combination of, I think, 27 products, including Q-tips and unscented hand lotion, you were more likely to be buying diapers 9 months from now. And that was incredibly important to Target, because they could begin offering you customized opportunities in terms of coupons or other ways they might want to engage with you, so they could capture more of your wallet. That's a bold example of what's happening every day when you're shopping online.
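For a sense of the mechanics, here is a minimal sketch of that kind of purchase prediction using scikit-learn on made-up data: each row is a shopper, each column a product count, and the label is whether they later bought diapers. Target's actual system is of course far more elaborate; this only shows the shape of the technique.

    from sklearn.linear_model import LogisticRegression

    # Toy purchase histories: [qtips, unscented_lotion, vitamins] counts (invented).
    X = [[3, 2, 1], [0, 0, 0], [4, 3, 2], [1, 0, 0], [2, 2, 2], [0, 1, 0]]
    y = [1, 0, 1, 0, 1, 0]   # 1 = bought diapers months later

    model = LogisticRegression().fit(X, y)

    # A new shopper's basket; the model outputs a "likely expecting" probability.
    print(model.predict_proba([[3, 2, 0]])[0][1])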
So, an organization called Gemma Services, which works with about a hundred young girls every year and provides long-term psychiatric care outside of their homes, was interested in whether they could move toward something that looked more like Target. Were there subgroups that were not possible to see with the naked eye, so that their counselors could break the girls they were working with into subgroups and provide a more customized experience within their program model? And in fact, by looking at historical data and using a version of predictive analytics called precision analytics, they were able to uncover new subgroups. That meant the program that I would experience was different from the program that Chantal would experience, and different from the program that Wendy would experience. They still had a program model with over a dozen elements, but the reality is some of those elements were more important to me and my likelihood of achieving success than they were to Chantal.

So what happened in 2022, after implementing this model for a year? The acuity score, the risk score of girls entering the program, was almost cut in half, which meant they were less likely to show up in an emergency room and less likely to show up back in psychiatric care. It also created an almost 90-day reduction in length of stay, so young girls were back with their families almost 3 months earlier. Huge outcomes, not by changing the program model, but by tailoring and customizing it. I think one of the things we really lose when we run a one-size-fits-most programmatic model is the ability to tailor to the ends of the bell curve, and often those folks at the ends of the bell curve are the folks who are most in need of support. Interestingly, Gemma has taken their algorithm and is now looking to sell it to other organizations with a similar program model so they don't have to build their own algorithms; that will actually become a new revenue source for Gemma.
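A minimal sketch of the subgrouping idea: cluster participants on historical program data, then let counselors look at which program elements matter most per cluster. The features here are invented, and Gemma's actual precision-analytics work is considerably richer than a plain k-means pass.

    from sklearn.cluster import KMeans
    import numpy as np

    # Invented historical features per participant:
    # [age, intake_acuity_score, prior_er_visits]
    history = np.array([
        [12, 80, 3], [13, 75, 2], [16, 40, 0],
        [15, 45, 1], [12, 85, 4], [16, 35, 0],
    ])

    # Uncover subgroups that aren't visible to the naked eye.
    clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(history)
    print(clusters)   # counselors could then tailor program elements per subgroup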
Another machine learning application is natural language processing. You're really familiar with it: as Chantal was saying, "Hey Google," "Hey Siri," "Hey Alexa," those are all ways in which we engage with natural language processing. So, there is a very large funder that sits on thousands of grantee reports, and they were frustrated that the information in those grantee reports was essentially locked in the PDFs, and that knowledge wasn't available to their program officers to inform their work with current grantees or make decisions about future grantees. They ended up using an algorithm with natural language processing to read and access all of those documents. What happened is that staff could quickly query historical grantee reports to inform their decision making. What do we know about hunger and early childhood development? What do we know about post-secondary support for Black men aged 24 to 30? In a way, they were able to tap into the knowledge that the foundation really already had; it just wasn't accessible to the folks who needed it. They're also looking at a potential opportunity: what would it mean if they opened the ability to query their database to their practitioners? (Oops, that's my Siri going off on me.)

So, when we think about the What Works clearinghouses that are static, and maybe have not delivered the type of insights and impact I think we were all hoping for when they were developed, the reality is that each foundation, within its grantee reports, is sitting on its own What Works Clearinghouse. They're sitting on an extraordinary amount of information that could really help enhance equitable outcomes, and this is a way you could possibly make that available to those in your community. And finally, it was going to help program officers ensure that new grants they were making weren't reinventing the wheel. We have program officer turnover; things change over a number of years. Sometimes an idea sounds really exciting, but it isn't in fact new, and there may be a way the idea could evolve into something that builds on prior work. So that's the power of natural language processing.
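Here is a minimal sketch of querying a pile of report text, using TF-IDF and cosine similarity from scikit-learn rather than whatever the funder actually built; the report snippets are invented. A production system would add PDF extraction and likely a stronger language model, but the "query your own documents" shape is the same.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Stand-ins for text pulled out of grantee-report PDFs.
    reports = [
        "Early childhood nutrition program reduced hunger in rural counties.",
        "Post-secondary mentoring outcomes for young Black men, ages 24-30.",
        "Arts education grants increased school attendance.",
    ]

    vectorizer = TfidfVectorizer(stop_words="english")
    doc_vectors = vectorizer.fit_transform(reports)

    query = "what do we know about hunger and early childhood development"
    scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
    print(reports[scores.argmax()])   # best-matching historical report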
Next, recommendation engines, something we are all experiencing and using. My Netflix homepage looks entirely different from Chantal's, and entirely different from yours. Netflix has over 2,000 individual taste profiles for movies, on which they build recommendation pages, and on top of that, it's learning based on your own actions. That's how you get recommendations. I was quite surprised when Korean-language TV started showing up in my recommendations. Boy, they were watching closely; they knew me.
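A minimal sketch of the underlying idea: an item-to-item recommender on an invented viewer-by-title matrix. Netflix's production system layers thousands of taste profiles and behavioral signals on top of this basic move, so treat this purely as an illustration.

    import numpy as np
    from sklearn.metrics.pairwise import cosine_similarity

    # Rows = viewers, columns = titles; 1 means "watched and liked" (invented).
    ratings = np.array([
        [1, 1, 0, 0],   # me
        [1, 1, 1, 0],   # someone with similar taste... plus a K-drama
        [0, 0, 1, 1],
    ])
    titles = ["Documentary", "Mushroom Show", "K-Drama", "Reality TV"]

    # Two titles are similar if the same viewers like them.
    sim = cosine_similarity(ratings.T)

    # Recommend whatever is most similar to something I liked but haven't seen.
    liked, unseen = 1, [2, 3]
    best = max(unseen, key=lambda t: sim[liked, t])
    print(titles[best])   # how Korean-language TV sneaks onto your homepage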
So, Equal Opportunity Schools: they help school districts find students who should be enrolled in AP or IB classes but are being overlooked for some reason. They actually look at a pretty tremendous amount of data. They collect their own data through surveying students and teachers. They marry that to standard data that the schools are already collecting, from disciplinary actions to attendance to grades. They also look at data from outside the school that's available, from Census tract data about the community in which the school sits and from which it's pulling. And then an individual is assigned to that school to help identify these students. Equal Opportunity Schools said: we could quickly build something to look at all of this information and make a recommendation to our staff members about who's being overlooked. Not be the determinant of who's being overlooked, but make a recommendation to our staff, so there's still a human in the loop.

Now, this is something they're still in the midst of planning, but the goal is that it would allow them to reconcile a concern in their organization, which uses a lot of data but feels a bit of a values conflict between being data-driven and being oriented to equity. They really saw this as a project that could marry those two things and help their organization move forward. They believe it will also increase the accuracy and efficiency of the student identification process, and when they gain efficiency there, the individual who was spending a significant amount of time finding the students has more time to support the students once they're in the AP and IB classes: shifting the workday to a more impactful place than doing something that can be aided by the use of a recommendation engine. And ultimately, this is going to enable them to scale the program while preserving impact.
I want to give you one more example, because none of those are about generative AI, and that's where the hype cycle sits right now. Generative AI is based on large language models; that's the basis for a language AI like ChatGPT. You might be aware that ChatGPT version 3 was trained on all the information that was on the internet up to 2021; that's the data set it was trained on. Crisis Text Line is a triage service for people who are in crisis, using text instead of phone lines. They actually use AI in a couple of different ways, but the way they're using generative AI is around training volunteers. The problem they were trying to solve: they train about 10,000 volunteers a year. That's a significant amount, and they all go through a 30-hour training program. But part of what they were learning is that once volunteers went through the training program, not everybody felt comfortable going immediately onto the platform and serving folks who were at higher risk. I may be comfortable chatting with somebody who had a breakup, but not comfortable chatting with somebody who might want to harm themselves. So they used their past experience with text conversations with past clients, and then wrote fictional scripts. They didn't upload real client data; they uploaded fictional scripts around low-risk and high-risk scenarios and created a chatbot that is entirely about training: a Crisis Text Line training bot. It is not available on ChatGPT; it's based on fictional scripts. But it will let a volunteer engage with that chatbot as if they're in a real texting conversation with somebody who's at low risk or at high risk, thereby increasing my confidence, increasing my skills, and making me perform better once I'm live on the platform. So again: more effectively training close to 10,000 volunteers, building volunteer confidence, and, they hope, reducing churn of folks who go through the training and then say, I'm not comfortable. How do I build that comfort?
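A minimal sketch of how such a role-play bot might be framed, shown only as prompt construction, since the notable design choice is grounding the bot in fictional scripts rather than real client data. The scenario text and function here are hypothetical; Crisis Text Line's actual bot is obviously far more carefully engineered and safeguarded.

    # Hypothetical prompt scaffolding for a volunteer-training role-play bot.
    FICTIONAL_SCRIPTS = {
        "low_risk": "Texter is upset after a breakup; no thoughts of self-harm.",
        "high_risk": "Texter expresses hopelessness and hints at self-harm.",
    }

    def build_training_prompt(scenario: str) -> str:
        """Frame an LLM as a fictional texter so a trainee can practice safely."""
        return (
            "You are role-playing a texter in a crisis-line TRAINING exercise. "
            "Stay in character based on this fictional script; never use real "
            f"client data.\nScript: {FICTIONAL_SCRIPTS[scenario]}\n"
            "Respond as the texter would, one message at a time."
        )

    print(build_training_prompt("high_risk"))
    # The returned string would be sent to an LLM as the system prompt,
    # with the volunteer's replies appended turn by turn.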
So I hope that's exciting to you. These are the types of examples that light Chantal and me up: folks really using AI to go toward mission attainment, beyond just efficiencies. But of course, the thing that is incredibly important is thinking about AI and bias. When you look at the tools and how the technology was built, it's not that people didn't care about ethics, but the commercial sector is compensated for caring about scale and profitability, and so that was the core criterion used in developing these tools. First, I just want to remind people that all data and all models have bias associated with them, because humans are involved; it's not just AI models. So I love that we have the emphasis now on really looking at where bias is possibly happening in our organizations and around AI; I would just say, open the aperture, because it's around all data and all models.

When you think about AI, bias can really be introduced at 2 levels. One is the data set that you're using to train a model. Chantal gave a wonderful example from Bloomberg around whose image comes up when we talk about certain jobs; that's because the data set the tool was trained on, which might have been the internet, has its own bias around what pictures, and what skin tones, of people get shown in those jobs. So say you want to make sure you're hiring more women into your workforce, and you want to find women who are likely to be successful, so you create an algorithm to go through your resumes. Yet the preponderance of folks in your organization are male, and if you use your male training data to try to find women applicants, you're going to find your women applicants are actually getting weeded out, because you're using a data set that's not fully encompassing of the breadth of humanity.

The second way you'll see bias is when it's actually hard-coded into the system: it's not the data set the model was trained on, it's the values that were weighted in the system. An interesting case comes from Women's World Banking. As they were looking at microlending, and why microlending was going to more men than women even though the women had equal credit risk, they realized that the algorithm that had been written underweighted work that was done in the home or in the retail sector versus work that was done outside the home and in other commercial sectors. Well, there's a pretty big gender split in terms of where you might find women gaining their work experience. So while it wasn't intentional, it was hard-coded into that algorithm that women, even though they had equal credit scores, were going to be deemed a higher risk for microlending. So that's a way you could actually see an algorithm coded with bias: it wasn't that the algorithm said we don't want to lend to women, it was something else that was correlated with women that was coded into it.
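Here is a minimal sketch of that second kind of bias, with invented weights: two applicants with identical years of experience score differently solely because of where the experience was earned, which is the pattern described in the Women's World Banking case. Nothing below is taken from any real scoring model.

    # Invented feature weights; note home/retail experience counts for less.
    WEIGHTS = {"commercial_years": 1.0, "home_or_retail_years": 0.4}

    def credit_score(applicant: dict) -> float:
        """Hard-coded weights, not training data, produce the skew here."""
        return sum(WEIGHTS[k] * v for k, v in applicant.items())

    # Equal experience, different sectors -- where women more often work.
    applicant_a = {"commercial_years": 10, "home_or_retail_years": 0}
    applicant_b = {"commercial_years": 0, "home_or_retail_years": 10}

    print(credit_score(applicant_a))   # 10.0 -> deemed lower risk
    print(credit_score(applicant_b))   # 4.0  -> deemed higher risk, same experience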
So I think the challenge beyond just data and models for AI is the fact that models are highly scalable once we get into things that are technology-backed. If you have an individual in your organization who's not values-aligned, that could be a challenge; but if you have an AI chatbot or virtual assistant that is interacting with your community and it's not values-aligned, the potential for damage is just much greater because of the scale. Also, machine learning, which is behind a number of these applications I've been discussing, is not stagnant. With each new piece of data that comes through them, the models get smarter and smarter and smarter, and if you're not careful about the data that is being fed to those algorithms, they will learn biases without any explicit direction from you.

So, we still have a little bit of time. I want to talk just a bit about the tools that you might find available to you, and a little bit about the cost, the ability to test, and the transparency trade-offs. Chantal, do you want to jump in?

You know, Sarah, we will have time, but you do have a question related to the example of the funder using NLP on the grant reports, so let's allocate a little bit of time for that one.

Great, let me look in the Q&A and see what that question is.

I'm happy to share. They're just wondering if there's a case study, what the funder was looking for in their grants database, essentially whether there were any lessons learned. Was there any public share-out?
No, there's a reason why it's called Funder X; there is no public share-out yet. Unfortunately, we're at the beginning with the folks who are doing this. What was interesting for that funder is that they had hired a third-party evaluation firm to go through and pull out the learnings from a set of grants, and they basically wanted to look at the cost and time investment of having a third party do that, and how much of it could be done with an algorithm. And what they learned is that it could absolutely be done, and that while the initial setup was somewhat similar in cost to paying a third party to go through it once, once they had the algorithm, it was going to be an always-on opportunity. Now, that's not to say that there isn't any additional investment that needs to be made once you bring an algorithm into your organization, because algorithms can drift, so you do have to continue to tune them. But essentially, they realized that this was going to be a way to bring this type of knowledge into their organization in a much more efficient manner.

Thank you, Sarah. I'll track the other questions; we can answer them after you're done.

Sounds good.
So, when we've talked to nonprofits, and I think this shows up for foundations as well, some of the real concerns about accessing AI tools and knowledge are cost and talent. One of the things the for-profit sector is quite good at is figuring out how to bundle things into less expensive, less complicated tools, and you are seeing that with Amazon Web Services, Salesforce, and others creating tool packages where you don't have to have data scientists on staff, you don't have to have coders on staff: you can access a tool, upload your data into that tool, and keep it as sort of a private, bounded instance for your organization. But there's a challenge with those tools. They bring down the cost and they bring down the talent requirements, but those private no-code tools cannot be tested by users for bias, because they are the intellectual property of Amazon or Salesforce or Microsoft. So understanding how those tools were created, how they were tested for bias, and what data was used to train those algorithms: that is the type of buyer-beware vendor screening that our sector needs to bring to buying these tools. We have tremendous purchasing power, over 400 billion dollars of purchasing power in our sector, and that purchasing power is largely on the sideline right now. Without our purchasing power there, people aren't asking those questions of the tools that are being developed, or they're not asking those questions with the same level of urgency that our sector would ask them. So it's really important to do that.

The challenge with large language models and things like ChatGPT that are free is that they have become so complicated you really can't test them for bias. When you talk to data scientists around OpenAI, the answer is, we don't actually understand how this is working anymore; the workings of large language models are that complex. So again, that brings in a challenge around bias. And if you think about the fact that ChatGPT was trained on everything that was on the internet up until 2021, think about what's on the internet. Think about how it represents the best, and I would say the worst, of our societies. Think about how much of that information is about the Global North versus the Global South. The biases are in the data these models have been trained on, and it's very, very hard to think about how you go after that in a large language model.

I think the good news about open-source or private custom tools is that they're transparent; they can be tested. The bad news about private custom tools is that they are more costly, and the challenge with open source is that you do have to have some expertise on your staff to be able to modify those open tools and use them with your data. But when you hear that term "black box," that's what people are talking about: with private no-code tools (no code means you don't have to know code to use them) or with large language models, you cannot test them for bias. And that's a real challenge in terms of our accessing those tools. I'm going to stop my share there.
Sarah, there are a couple of questions we can get to in depth, but I'd love to open up the conversation with you around the power inherent in the ability to adopt these tools. One of the questions is about using AI for fundraising; there's another question in the Q&A about whether we should enable grantees to use AI to write an application. And I think something you and I have talked about is that, left alone, the orgs that will adopt AI are already the orgs with power and money.

That's right.

And so the smaller nonprofits, the less equipped nonprofits, those who could actually desperately use AI to write a grant because they don't have a grant writer on staff, those are the people who will be left behind in their ability to adopt. And so I've wrestled with what philanthropy's responsibility is to enable nonprofits to adopt AI so that they don't fall further behind on the smaller, community-based end of the scale. And I think about that in both of the questions we see in the Q&A: the ability to shortcut writing a grant application or report, and the ability to use AI for fundraising. I don't know if you have thoughts there as well.
Well, first of all, I entirely agree, Chantal; we've had this conversation, and I think we only have to look to the past, to how the internet was originally adopted by nonprofits, or how customer relationship management systems were adopted in the world of nonprofits. You just see that it's what you would consider elite organizations that have access to that technology first, and then the use cases for that technology get defined by those organizations. And you are an elite organization if you have relationships with top-100 funders, or pro bono relationships with large Fortune 500 corporations. And then the suite of those products again gets developed for the largest and the most technologically sophisticated of the nonprofits, as opposed to community-based ones. So I think we can look to the past; we have a pretty bad history of that being the way technology, or change writ large, cascades into the sector.

I look at the opportunity with AI, in some ways, as similar to the calamity of COVID, where there was a huge shift that required funders to think differently and to think comprehensively, not just about the largest organizations, but also about the community organizations. And I'm really aching for funders to step forward in a coalition that is thinking broadly about the infrastructure that is required for AI to be adopted broadly by the nonprofit sector.

The last thing I'd like to leave on that topic is the irony that we have concerns about nonprofits using Grantable or ChatGPT to write a grant, while we also have grant management software that is excited about using AI to read, to help grantmakers make sense of the flow of grant applications that they're getting. In a world in which we have AI on one side to help write the grant, and on the other side to help read and process the grant, I think the question is: maybe we should change the grant. Maybe this isn't a technology problem. Maybe this is a human problem.
Sarah, I could not agree more. And it's interesting: in my role as ED of TAG, I often have product providers in our space ask me to provide a sounding board for ideas that they have, and early in the AI hype cycle we had a product provider come to me and say, hey, we really want to implement ChatGPT for grantmakers. And because, of course, the purchasers of their software are the foundations, the first use case was reading the grant: simplifying the reading and processing of grant applications. One of the things I urged them to do was not go to market with just something to benefit the funders, but to go to market with something that also benefits the nonprofits. And so when you look at this particular platform, they have ChatGPT, or OpenAI, enabled on both sides of the equation: a nonprofit can use OpenAI to generate an application based on a set of inputs, and then the funder can also use it to process the application on the other side. But it begs the question of what the point of the application is altogether. Now, the flip side is that if you then leverage AI to matchmake, as it were, you go back to the bias problem.

Yeah, that's right.

So, you spoke a little bit about AI fundraising tools. I would love to throw a wild-card question out there. We haven't spoken about this, but in yesterday's Senate hearing, you heard in the debrief senators talking about whether releasing these AI tools open source is a good thing or a bad thing. Right? It increases transparency, but at the same time these very powerful tools are then in the hands of what we might call bad actors, to use an oversimplification. I have mixed feelings about this. I haven't really landed on my own position on whether open-source release of AI is net good or bad; I think there's a great deal of governance and regulation that needs to be put in place, but ultimately I usually fall on the side of transparency. I don't know if you have any emerging thoughts.
This is such a great question, and it certainly merits much deeper thinking on my part; I look forward to continuing this conversation with you, Chantal. I think there are 2 things that I have been thinking a lot about. One is human in the loop: what does it mean to insist that there's always a human in the loop where an algorithm is concerned? I feel like, increasingly, without having a human in the loop, we are really putting ourselves at risk. The other piece is the transparency-versus-IP question, and I think I'm still landing on the side of transparency, because in the absence of transparency, I don't think the good actors will have tools that they can be confident in and that people will be able to trust across the board. This is a tremendous technology, but it has the possibility of completely eroding trust in our population. I think a lot about social media, and what the curve on social media has been: from fun, to pretty destructive, to a lot of burnt bridges in terms of who wants to be a part of social media any longer and how people are using those tools. I would hate to see AI ride a similar curve because of it engendering a complete lack of trust in populations. So I think transparency is going to be one of the only ways that we can really ensure that algorithms are not racist, that the data is inclusive, and potentially equip a much broader set of people to use good algorithms. That's where I am today, but check back.

Yeah. The cautionary note in that metaphor around social media gives rise to one of my concerns: the privatization of a public sphere, of online discourse. In what sense do we want another element of technological advancement in our society to be totally privatized?

Okay. So, our friends at the Council, we've gone way off into a session 2.0 here, but I don't see any other outstanding questions. I'm checking the chat. Is there training available on how to use AI for grant writing? That's a question from Cheryl.
You know, Candid actually has a really interesting session on using grant-writing AI for fundraising, and I did post a tool, Fundwriter.ai, in the chat, Cheryl, to answer your question about grant writing. And I'm seeing if there's anything else here. I think we're all caught up. So with that...

Well, I'm going to plug something. Those case studies that I had the pleasure of sharing with you, we had to find by sorting through the sector and trying to figure out what was going on. There is currently an active survey, which we'll link in the chat, from the Stanford Institute for Human-Centered AI and Project Evident, where we're surveying funders and nonprofits about their AI use, and even if they're not using it, we'd like to understand that as well. We're hoping that we will get a robust response to this survey so we can come back to you and say: this is how foundations are thinking about AI technology; this is how they're thinking about funding it, whether it's within existing program areas or actually as its own technology area; this is what's happening inside of nonprofits; and this is how different roles inside a nonprofit are using AI or not using it, and what gaps or barriers they have around AI use. This would be a tremendous asset, I think, for all of us as a sector, to get our arms around what's going on. So I really invite you to take the survey, invite your colleagues to take it, and send it to your portfolios, because we do want to have folks from across organizations represented; we don't just want one individual from an organization or a foundation. We'd love to have a broad number of people filling it out. So I hope you'll join us in learning.

Wonderful. Thank you very much, Chantal and Sarah, for joining us today. We are eager to have feedback from those attending about what other questions you might have as we continue the conversation about AI in the sector. And with that, thank you very much to our speakers, thank you for attending, and have a wonderful rest of your day, everyone.

Great to be here. Thank you.
