The following is a conversation between James Hodson, Co-founder and CEO of AI for Good, and Denver Frederick, the Host of The Business of Giving.
Denver: The AI for Good Foundation brings together the best minds and technologies to solve the world’s most urgent problems. It does this through the lens of the global sustainable development agenda, a set of goals adopted by 193 countries to end poverty, protect the planet, and ensure prosperity for all. And here to discuss that and more, it’s a pleasure to have with us, James Hodson, the co-founder and CEO of AI for Good.
Welcome to The Business of Giving, James!
James: Thank you, Denver. Very happy to be here.
Denver: The organization was founded in 2015. Tell us how it got started and a little bit more about your mission.
James: Absolutely. So I’m an AI researcher by trade. I’ve run fairly large industrial AI research labs. I’ve been involved on the academic side, on the research side. And it got to the point where I realized that myself, along with many other people in our community, had a lack – a hole in the things that we were doing.
So we realized that a lot of the outlets that we may want to work on – some bigger challenges, some social challenges, some of the big problems facing the planet, some of the big problems facing us as a society – we weren’t getting that opportunity with the AI research funding landscape that existed, with the industrial pressures that existed, or with defense pressures that exist also in terms of where the money is flowing.
And we started to talk more and more back in 2013, 2014 about: How do we make it possible for researchers, make it possible for the scientific community in general, to think about how their work impacts…not just recommending the best movie on Netflix– not that there’s anything wrong with Netflix, and I’m sure we’re all very appreciative for its existence over the last year. But how do we move some of our work over to actually impacting those big challenges?
And it culminated in a series of workshops that we ran at Stanford University in 2014, where we invited around 20 to 25 people, some of the bigger names in artificial intelligence but not an exhaustive list, to have an intimate conversation about: Where do we go as a community? And how do we build the necessary support infrastructure, the access, the knowledge in order to drive forward these priorities? And the AI for Good Foundation came out of those sets of meetings.
One of the big things that we decided back then is that the Sustainable Development Goals from the United Nations, which were at that time being adopted by 193 nations, seemed like a very good scaffold for us to build upon. And that’s how we got started.
We don’t necessarily see artificial intelligence as the solution to all of these problems. It’s an additional paradigm that can be used at an appropriate moment when it makes sense.
Denver: There you go. And would it be right to say you’re kind of a bridge then between these researchers and these academics, and the people who are on the ground trying to solve these problems?
James: That’s exactly how we think of ourselves. So, it’s important to note, we don’t necessarily see artificial intelligence as the solution to all of these problems. It’s an additional paradigm that can be used at an appropriate moment when it makes sense. But we’re definitely creating the necessary infrastructure so that when AI is an appropriate element in the solution space, it’s easy to see how it can actually be applied and we have the right network to make it happen.
Denver: Let me get back to the SDGs. There’s 17 of them. These are the Sustainable Development Goals. What has been the impact, James, of the pandemic on some of these goals? I would imagine some have been adversely impacted and maybe, ironically, some of them have been positively impacted. What would your assessment be of that?
James: So it definitely depends on the SDG. What you’ll see quite frequently in the news is that, due to industrial shutdowns and decreased commuting and travel in general, emissions have decreased significantly in some areas over the last year and a half. And certainly, that has a beneficial impact on our journey towards net-zero emissions, or at least towards the climate compacts that are being signed.
On the other hand, there are social dynamics that came out of the pandemic that I think have impacted our ability to make progress on some of the other goals. On the positive side, we’re not in workplaces physically anymore, which has, in some sense, provided for less discrimination within workplaces and more equality of opportunity. Everybody’s on the same Zoom platform, or similar.
Denver: And we’re all the same size, too. The boss doesn’t have a bigger box.
James: Exactly. And I think that’s had some equalizing effects, especially on perspectives of people in the workplace.
On the other hand, when you look at, more broadly, economic opportunities, when you look at the challenges posed by actually building the infrastructure that we need for a cleaner economy and actually moving ahead on most of these goals, not being able to do those physically has been a barrier.
And certainly, several of the SDGs have been impacted negatively by that, the primary one, of course, being decent work and economic growth, where obviously you can argue it many ways – Is it because of the pandemic? Is it because of the solutions that we came up with for the pandemic at short notice? But we have made an enormous wreck of some very important economic areas over the past 18 months, which will take a long time to patch.
Denver: Probably more than 18 months. That’s for sure.
I also noted that you went into a partnership with Predli recently to help assure a more equitable and sustainable future. Tell us a little bit about that partnership and what you hope to be able to do together.
James: Absolutely. I’m happy to. So, Predli is a consultancy, a great little company out of Sweden that’s been growing massively over the past couple of years. Started by researchers out of UC Berkeley and a network of institutions across Europe.
What we’re doing is, on our diversity, equity, and inclusion program side, which covers three or four of the Sustainable Development Goals, we’re now going into the area of workplace equality. This means looking at how individual companies and industries have adopted technology, how they’re changing their structures to ensure a more equitable opportunity set for each of their employees, and how they talk about that in the media. So basically, relating what they say they do to what they’re actually doing, and helping bring more transparency around that information.
So, Predli has been helping us with how to get that to market. We want to eventually get to the point where, for millions of companies around the world, we can tell you what they’re doing about gender equality, around ethnic diversity and other diversity metrics within their workplace, and bring transparency for future employees, investors in those companies, and their consuming public at large.
Imagine you go to the Target website, and you go and you buy whatever it is that you’re buying… a tricycle or something, and it tells you in your browser: Target is an A-plus company for diversity. Feel good about the purchase that you’re making. Or you go to a different website and you buy a different product, and it tells you: Actually, this company, they want to do it, but it’s not looking like they’ve made much progress in the last five years.
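To make the idea concrete, here is a minimal sketch of how such a consumer-facing grade might be computed. Every metric name, weight, and grade band below is a hypothetical illustration, not the foundation’s actual methodology; a real system would derive its inputs from audited data and analysis of company disclosures:

```python
# Toy sketch: turn workplace-equality metrics (each scaled to 0..1)
# into a consumer-facing letter grade. All names, weights, and grade
# bands here are invented for illustration.

GRADE_BANDS = [(0.9, "A+"), (0.8, "A"), (0.65, "B"), (0.5, "C"), (0.0, "D")]

WEIGHTS = {
    "gender_pay_parity": 0.4,     # 1.0 = equal median pay across genders
    "leadership_diversity": 0.3,  # representation in leadership, scaled 0..1
    "stated_vs_actual": 0.3,      # agreement between claims and practice
}

def diversity_grade(metrics: dict[str, float]) -> str:
    """Weighted average of the metrics, mapped to a letter grade."""
    score = sum(WEIGHTS[k] * metrics.get(k, 0.0) for k in WEIGHTS)
    for threshold, grade in GRADE_BANDS:
        if score >= threshold:
            return grade
    return "D"

print(diversity_grade({"gender_pay_parity": 0.95,
                       "leadership_diversity": 0.9,
                       "stated_vs_actual": 0.9}))  # prints A+
```

A company scoring well on pay parity alone would still land at the bottom band, which is the point of weighting several dimensions rather than one headline number.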
Denver: Their press releases are great, but that’s about it. So it’s a little bit, it sounds to me like what JUST Capital is doing in terms of trying to evaluate and grade these companies. But bringing AI is just going to be at a much, much higher level. And you’re right, James. That’s what’s been missing in this space – is accountability. And if this can bring that kind of accountability, I think it’s going to change a lot of things.
Let’s talk about a couple of the SDGs. Poverty, extreme poverty. The World Bank has defined that as living on less than $1.90 a day. It has really dropped dramatically over the past couple of decades… a lot of it having to do with China, I’m sure. But for the first time in human history, it was down below 10%. We all know that it could be reversed with this pandemic and those economic things you were talking about before, and it may now be back on the way up. How is AI being used to help address that challenge?
James: That’s a very important question. It’s also probably one that we could spend hours, if not days and weeks, talking about. So my first point whenever we talk about poverty is to go back to the definition of poverty. The measures that we use, the KPIs, if you will, around poverty are based on these relative benchmarks.
It’s a bit like a consumption basket for measuring inflation. It’s very difficult to relate those particular numbers to the quality of life of the individuals who live just above that band versus just below. Look at the distribution of these people: some are struggling massively; others are freeholder farmers who are actually doing very well. They just don’t need to go out and buy goods with dollars. So they don’t have cash, and therefore you don’t see income, and you miss the fact that this is not actually where you might want to put your resources.
The UN is talking more and more about multidimensional poverty. That means looking holistically at human beings – what do they need? – and, along each of these dimensions, trying to understand where people have more, where people have less, and how we can attack the individual components. The approach over the last 50 years, on average, has been: if somebody is living on less than a dollar a day, let’s give them a dollar a day and see what happens. And in the vast majority of cases, it hasn’t turned out that well. Some people argue that South Korea was a great success for just this kind of aid and entrepreneurial-growth intervention. There are arguments on both sides around that.
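The counting approach behind multidimensional poverty measures can be sketched briefly. This is an illustrative toy, loosely modeled on the Alkire-Foster counting method underlying the UN’s Multidimensional Poverty Index; the indicators, weights, and cutoff below are assumptions for the example, not the official ones:

```python
# Sketch of a counting-based multidimensional poverty measure.
# Each person is marked deprived (True) or not on several indicators;
# a person is "multidimensionally poor" if their weighted deprivation
# score reaches a chosen cutoff. Indicators and weights are illustrative.

WEIGHTS = {
    "nutrition": 1 / 3,
    "schooling": 1 / 3,
    "drinking_water": 1 / 6,
    "electricity": 1 / 6,
}
POVERTY_CUTOFF = 1 / 3  # poor if deprived in a third of the weighted indicators

def deprivation_score(person: dict) -> float:
    """Sum the weights of every indicator the person is deprived in."""
    return sum(w for ind, w in WEIGHTS.items() if person.get(ind, False))

def is_multidimensionally_poor(person: dict) -> bool:
    return deprivation_score(person) >= POVERTY_CUTOFF

def headcount_ratio(people: list[dict]) -> float:
    """Share of the population that is multidimensionally poor."""
    return sum(is_multidimensionally_poor(p) for p in people) / len(people)

people = [
    {"nutrition": True, "drinking_water": True},  # score 1/2 -> poor
    {"electricity": True},                        # score 1/6 -> not poor
    {"schooling": True},                          # score 1/3 -> poor
]
print(headcount_ratio(people))
```

The design choice is exactly the one James describes: instead of a single income line, each dimension can be attacked separately, and the headcount changes depending on which deprivations a program actually relieves.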
What I would say is that the main thing we’re doing in this area is working with developing nations and helping them to understand the role of infrastructure and technology in the development plans they have over the next 20 years or so. We don’t have many programs that impact the individual directly. We don’t have fieldwork where we go and talk to people, dig a well, or install a toilet; the things we do are much broader.
We’re working with countries like Ethiopia to develop their national strategies for artificial intelligence, and that’s working on the economic opportunity side, healthcare, education, judicial systems, the whole spread of different areas. Each of these impacts the country’s ability to generate value and an individual’s ability to get on board with that, build new businesses, employ people, and actually build a cycle, which is the best way that we know to get people out of either these unbanked, unmeasured kinds of poverty, or actual poverty where they lack access to the basic services they need to survive.
Denver: So, it’s not direct support per se, as you say; it’s really highly leveraged activities that, if implemented by these governments, are going to have a multiplier effect, which is going to be profound.
James: Yes. And that’s not the only solution. You also need people on the ground doing the individual things. We’re a part of a big ecosystem.
Denver: And to your point about the $1.90 a day, which is the way the UN has always defined it: How far does $1.90 a day go in Fargo, North Dakota, compared to $1.90 a day where you live in the Bay Area? It’s completely different depending on the part of the world, and one goes a lot farther than the other.
Well, let’s talk about another one of these issues, and that’s hunger. We had about 800 million people who were undernourished, with children under five being about 22% of that total. I had the head of World Food Program USA on the program just recently, and obviously, with what we’re looking at now between the pandemic, climate change, and the drought, those numbers could go way, way up. So along the same lines as poverty, what are you doing in the field of hunger?
James: So hunger… or basically food security was one of our first programs as an organization. Of course, it’s not like you solve food security by having a program around it, but it is an area where we were very active.
We had a very interesting collaboration with IBM and Syngenta as the main kind of anchor partners back in 2015 where we were looking at the rate of yield improvements that could be achieved for farmers across the US. And this was, basically, if you go back to the ’90s and you look at how the US food system was expanding, a lot of it was based on the promise of genetically modified foods.
So the idea was that you don’t need to crossbreed two plants anymore. You can design the plant that is going to be less susceptible to droughts, less susceptible to certain types of pests, and have the yield at maturity that you want for your particular climate zone. Obviously, for a broad set of reasons, that’s no longer a tenable approach to food in the United States in particular, although the rest of the world has moved significantly in the same direction.
And as a result, farmers, the corporations developing seeds, and the whole ecosystem were sent back to a five- or six-year, sometimes longer, cycle to develop the seeds they needed, with disease resistance appropriate for their climate zone, and without losing huge crops to drought. And as we have higher variability in climate going forward, you need plant varieties that are highly optimized for the specific fields they’re being planted in. Now, five or six years to get what was on average a 1% yield improvement: not good. Not good given the population projections that we’re seeing.
So, what we did was look at: Can we use artificial intelligence algorithms within the experimental cycle to propose the experiments that have a higher likelihood of being successful? That’s the basic idea. You have a hundred fields available. What are the 100 experiments that you should be running in order to increase yields faster? That led to a doubling of the yield improvement in the experiments that were run back then. So a 2% yield improvement over five years, which is already a much better place to be than where we started.
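The experiment-selection idea can be illustrated with a simple Thompson-sampling heuristic: sample a plausible yield for each candidate seed variant from your current beliefs, then run the top-k candidates. The variant names and yield figures below are invented, and the actual IBM/Syngenta work would have used far richer models; this is only a sketch of the underlying principle:

```python
import random
import statistics

# Pick which k field experiments to run next, favoring seed variants
# whose yield estimates are promising, uncertain, or both.
# Variant names and yields are illustrative, not real data.

def thompson_select(history: dict[str, list[float]], k: int,
                    prior_mean: float = 100.0, prior_sd: float = 10.0) -> list[str]:
    """Sample a plausible yield for each variant; return the top-k variants."""
    draws = {}
    for variant, yields in history.items():
        if yields:
            mean = statistics.fmean(yields)
            sd = statistics.stdev(yields) if len(yields) > 1 else prior_sd
            sd /= len(yields) ** 0.5  # more observations -> tighter belief
        else:
            mean, sd = prior_mean, prior_sd  # untested variant: wide prior
        draws[variant] = random.gauss(mean, sd)
    return sorted(draws, key=draws.get, reverse=True)[:k]

random.seed(0)
history = {
    "variant_a": [101.0, 103.5, 99.8],
    "variant_b": [97.2, 98.0],
    "variant_c": [],  # never planted: explored via the wide prior
}
print(thompson_select(history, k=2))
```

Because untested variants keep a wide prior, the selection naturally balances exploiting known good seeds against exploring new ones, which is what lets the experimental cycle converge faster than fixed trial schedules.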
Now, of course, getting that out to freeholder farmers in Nigeria – that actually requires a whole infrastructure that is different from just an algorithm. It’s actually a set of education programs. It’s a set of processes around the food ecosystem that you need to build in order to make it a viable improvement across all farmland that we have. That was a start.
To state some facts about this: Ethiopia was one of the largest food-producing nations in Africa up until about 30 or 40 years ago, when a lot of the farming policies in the country were changed, and the government basically bought back or acquired a lot of land from freeholder farmers in order to communalize that output.
Now, Ethiopia is a net importer of food to feed its population. And it’s not because they don’t have the land or the fertile soils to be able to grow the food; it’s because of the geopolitical realities that have played out in the region over the last decades. And that is a very important aspect of food security that goes far beyond just: Do you have the right infrastructure in place and the right trucks carrying grains between cities? In the language of economics, it really comes down to a mechanism design problem.
AI is still in its infancy as a technology deployed at scale in society. And we know that any of the things that we do, in order for them to actually penetrate and have value, it will take time. It’s not just a case of more people going and digging. It’s really about setting up basic infrastructure in a lot of these countries in order to have an impact.
Denver: Yes, and leadership. But it sounds like a lot of what you’re talking about is that AI is allowing you to tighten feedback loops and have a lot more of those loops than were there before in order to make improvements.
And I also know from what you say, James, is that you look at a longer-term perspective than I think a lot of other organizations do, which are looking at, like: What are we going to do next year? And how many more people are we going to help? You’re beginning to look at these things in a much more longitudinal way and say: How can we deploy AI to really try to help build societies over the course of years or decades?
James: Well, I guess the truth is, Denver, that AI is still in its infancy as a technology deployed at scale in society. And we know that any of the things that we do in order for them to actually penetrate and have value, it will take time. It’s not just a case of more people going and digging. It’s really about setting up basic infrastructure in a lot of these countries in order to have an impact. But yes, we do see ourselves as a very strategically-oriented organization relative to others in the field of sustainable development goals.
Denver: Let me ask you about another initiative, and that would be intelligent cities. And I ask that because I believe you’ve just launched the Urban Assessment Tool, correct?
James: Yes. So over the past couple of years, in partnership with several cities, we’ve been developing a counterpart, for local municipalities, to our national AI strategy initiatives with countries. The idea is that a lot of the change that needs to happen should actually be at the municipal, urban-unit level rather than at the national level. A lot of the problems that we see are city-specific rather than federally organized.
And for that reason, obviously, it’s not the same as doing a national AI strategy. We’re not going to be tackling universal health care within the bounds of the city budgets. But there are many, many, many things that you can do with better technology within city limits that there is budget for, and that people should be thinking about as plausible alternatives to how they allocate money today.
Denver: Let me ask you a few general questions about artificial intelligence, if I can. And I’m a pretty casual observer of this, to say the least. I’m not that up to speed, but I do read, and I tell you: I’ve been very troubled by the language around artificial intelligence, and that is one of a competition between the US and China. And I see it all the time – Who’s winning the AI arms race? Who’s going to control the future? Who’s going to win the 21st century?
Tell us what you and your colleagues in the field think about that kind of language, and maybe a way we can hopefully approach it in a much more unified fashion.
James: We echo those sentiments exactly. We don’t think it’s useful at a geopolitical level to really be posing technology development as a kind of arms race. But at the same time, this has happened before. Artificial intelligence is not the first tool to be weaponized in some way between countries, and it probably will not be the last.
Now, one of the reasons we started the AI for Good Foundation was actually that, as a community, we all felt very strongly that we had completely lost control of the social narrative around the technologies that we were developing. I can’t say I speak for everybody within the artificial intelligence community; I think that would be unfair. But certainly the people that I was working with and our close partners felt this. Politicians, rich businesspeople, and the film industry were doing a far, far better job of characterizing the work that we were doing than we were ourselves.
Now, it’s not to say that artificial intelligence researchers are qualified to fully understand the impact of the work that they can do on society because actually, I would argue that almost nobody is really qualified to predict the future in that way. And it’s a difficult problem. However, we strongly believe that there are enormous benefits that can come from the appropriate and judicious application of AI in the right verticals. In the right sectors, there is a lot of impact that AI can have.
It is certainly also true that artificial intelligence can be used for a broad set of negative outcomes. Negative outcomes might include things in the defense industry that go wrong. It can also mean certain bad actors who decide to design a system with artificial intelligence components that help them achieve bad ends. It’s not the AI itself, I think, that is necessarily the problem. But we are, as a global community, building technologies that are far, far more scalable. The cost of production and prototyping is tiny relative to where it would have been 20 or 30 years ago. As a result, the teams that you need, the budgets that you need to do something good or bad, have fallen. And it’s not about AI. Any technology can be designed for good or bad, and now you can design it for bad much more cheaply than you could before.
Is it fair for us to say that as a result, AI is in urgent need of strong regulations globally and strong ethical frameworks? Well, maybe in general, all of our disciplines that involve creating new technologies should be subject to certain limits and constraints. We have nuclear pacts for this very reason. I don’t think singling out artificial intelligence as a special case is necessarily useful. I don’t think that it’s necessarily posing a larger threat to humankind than other threats that we’ve faced in the past.
On the economic side, we collaborate with Chinese institutions. We collaborate with Indian institutions. We collaborate with scientists in Russia. We collaborate with people in Australia and Germany and France and Brazil. As an organization, we are global. We were started in the United States. But there are not that many highly qualified AI researchers around the world. The media might make it sound like there are hundreds of thousands or millions of AI specialists. The people who are actually having a big impact number in the hundreds or low thousands.
As a planet, we don’t have the capacity to be building AI everywhere right now. The biggest thing that people are doing is training AI scientists to do the work. So a lot of it is empty discussion to some extent, because when you think about China and the US competing against each other, you’re really talking about 10 to 15 labs on both sides, each of those labs having 20 people inside them. Yes, these are really smart people, but it’s hardly the race to the moon quite yet.
Right now and for the foreseeable future, it’s the human’s responsibility to actually think about those problems, and it’s society’s responsibility to design the paradigms that allow for certain problems to be solved or not solved.
People often assume that algorithms are biased, and we’re using algorithms just to scale a human process that was unbiased. Well, most of the time, what we’re trying to do is fix a human process that was prone to much worse biases than a consistent machine will be prone to.
Denver: I find strange things with AI. For instance, algorithms can really determine whether you get a job or a mortgage or things of that sort. When one of those algorithms isn’t quite right, we blame the algorithm as opposed to the people who created the algorithm, whereas we wouldn’t blame our toaster if something was wrong with the toaster. There just seems to be a different set of standards in the way we look at AI and things related to it, and I don’t know if that’s just me, but I always find that to be very peculiar.
James: I think people characterize artificial intelligence differently from other inanimate objects that they have come into contact with in their lives. And to some extent, that’s the right thing to do, because as humans, our brain has only a limited amount of working memory that it can use, and it can only characterize a certain depth of representation. So either something is acting logically and reasoning about its environment, and therefore has some amount of unpredictability, or it’s not.
If it’s a toaster, we have the representation of a toaster in our head. It’s easy. You put a piece of bread in. You set the timer. It pops up. But if it’s not doing that, or if it did something weird, like refusing to give you back your bread until you paid it or something, then you would start to get more suspicious about your toaster. You view it as a reasoning object. And although humans don’t have that much experience with AI, they’ve been told that AI is a reasoning object that’s going to be more of the latter kind than of the former kind. So, people are then wary of the kinds of decision-making that that modified toaster would make.
Again, it is definitely true that you can design algorithms to be less susceptible to bias in training data. You can collect data that is more appropriate for a particular problem that you’re trying to solve. But ultimately, right now and for the foreseeable future, it’s the human’s responsibility to actually think about those problems, and it’s society’s responsibility to design the paradigms that allow for certain problems to be solved or not solved.
So, whether we have hiring algorithms that are more biased or less biased has as much to do with what we mean by bias in society and what we’re actually aiming for as a culture as it does with what data was collected, what algorithm was used, and who was implementing it. And I think right now, we trick ourselves that we have very clear ideas of what we mean by discrimination in the workplace, or what we mean by access to healthcare. I think we’re quite a ways away from having defined them well enough for the human process to be devoid of bias.
People often assume that algorithms are biased, and we’re using algorithms just to scale a human process that was unbiased. Well, most of the time, what we’re trying to do is fix a human process that was prone to much worse biases than a consistent machine will be prone to. There are instances where that’s not the case where a consistent machine actually ends up biasing you more. But in general, you have a bad situation that you’re starting with, and you might end up with a bad situation that you’ve implemented algorithmically. But it’s unfair to hold it to different standards than the original problem.
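One way to hold the human and algorithmic processes to the same standard is to compute the same bias statistic for each. A minimal sketch, using the demographic-parity gap (the difference in positive-outcome rates between groups); the group labels and decisions below are made-up examples:

```python
# Compare a single bias statistic -- the demographic-parity gap --
# for a human hiring process and an algorithmic one, so both are
# judged by the same yardstick. Data below is invented for illustration.

def selection_rate(decisions: list[tuple[str, bool]], group: str) -> float:
    """Fraction of positive decisions within one group."""
    outcomes = [hired for g, hired in decisions if g == group]
    return sum(outcomes) / len(outcomes)

def parity_gap(decisions: list[tuple[str, bool]]) -> float:
    """Largest difference in selection rates across groups (0 = parity)."""
    groups = {g for g, _ in decisions}
    rates = [selection_rate(decisions, g) for g in groups]
    return max(rates) - min(rates)

human = [("a", True), ("a", True), ("a", False),
         ("b", True), ("b", False), ("b", False)]
algo  = [("a", True), ("a", False), ("a", False),
         ("b", True), ("b", False), ("b", False)]

print(parity_gap(human))  # 2/3 vs 1/3 selection rates: gap of about 0.33
print(parity_gap(algo))   # 1/3 vs 1/3 selection rates: gap of 0.0
```

Measured this way, the algorithmic process in the example is less biased than the human baseline it replaced, which is the comparison James argues should be made before judging either one.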
Denver: Well, we do that all the time though, James, because I think if an algorithm makes a mistake, it’s unforgivable. If you make a mistake, it’s forgivable. For instance, if we get into a car crash, well, that’s one thing. But my goodness, if an autonomous vehicle gets into one car crash, that’s it. Done. No more autonomous vehicles. And you’re saying, well, it’s like 1/1000th of what people do, but we accept people. We do not accept machines.
James: All of this ultimately comes down to legal frameworks and insurance.
Denver: Yes. You’re right. That’s what it gets to.
James: It takes everybody some time to get used to new paradigms in society. And I think we’re living in the time where change has accelerated to the point where every year we’re getting used to living slightly differently. And we assume that everybody is on board with this. But I think there are lots of people who find it difficult to adapt and lots of industries that find it difficult to adapt. And therefore, on a political level, you have lobby groups that, you know, don’t understand what their role is as things are changing.
So rather than everybody innovating and saying, “Great, there is change; it’s a new opportunity,” you’ll of course have a lot of conservative mindsets that want to avoid having to rethink how they do business or how they live their lives. And a lot of the pushback will come from that side. But it’s inevitable that in the coming decades we will have fully autonomous vehicles on roads that don’t require human drivers to sit in the driver’s seat.
Even without artificial intelligence, we would still do it. We would just have roads that are connected to the vehicles, and you would enter your destination at the beginning, and the car would take you there. You could have done this without any kind of AI just by connecting things appropriately with circuits. Now, we’re doing it with AI, which allows it to be more scalable, allows you to have many different vehicle manufacturers, many different road systems, more unpredictability. We’ll get there. So better to start embracing it now than to fight against it.
Denver: I’ve always found when all these things happen, the first question people ask of themselves is: What’s going to happen to my job? And that’s where they stop. And you know what? Some other job is going to come that’s even going to be better.
So let me see if I can take this down to the individual level of a nonprofit organization. Because I have a lot of people who’ve been on the show over the last year or two, and they tell me that with the magnitude of the problems we have out there– and they put that against what their organization is doing– and they say, “Oh my goodness! We’re not making a dent! We’re helping maybe 10,000 people. We’re going to go to 12,000 this year; 40 million people have this problem. How can I scale? How can I scale to have greater social impact?” And then artificial intelligence always comes up in those conversations.
What would you suggest to an organization that wants to have a greater impact? Is there a way they can begin to start thinking about AI that would be useful?
James: The short answer is yes. The long answer is that it really depends on the problem that’s being solved. There are some problems where technological solutions can be scaled faster than human solutions because they’re cheaper and faster to ship. And we’re fairly good at scaling training: we can train a lot of people fairly quickly to do a procedural task.
And so, if what you’re doing is just a case of reaching more people, then probably there are ways to use technology to scale that solution faster. And if a lot of it involves decision-making up to a certain level of social complexity, then maybe AI can help you make those decisions more efficiently and scale them faster. Now, as an organization, we only build AI solutions. So by that argument, we should be infinitely scalable and have solved all problems. However, it turns out that the people in the institutions that we work with are the biggest friction to making the changes that we want to make.
We have a plan, for various developing countries, for how to get very efficient telemedicine and resource management for healthcare out to the most rural areas in those countries. It’s not a difficult solution from a technological perspective. If you have a thousand doctors in your country, but they’re all in an urban area, and you want to scale them so that they can provide critical care to people in rural communities on demand, and thereby solve some of the scale issues that the healthcare system has in those countries, it’s not that difficult to design the networking infrastructure and some of the interactive decision-making algorithms needed to get people to use that kind of infrastructure. But it is still nearly impossible to do this in a short amount of time due to the frictions that exist within existing institutional structures.
So, from our perspective, we have scaled quite successfully some programs that were more in the private domain, which didn’t have to necessarily deal with government infrastructure and government systems. But a lot of the biggest challenges that we’re facing– government sits at the center of those infrastructures. And you cannot move them without having the right decision-makers in place and having it trickle down all the way through the government infrastructure.
Denver: They are at the levers of change. There’s no doubt about it. I’ve talked to a lot of social entrepreneurs. They don’t want to deal with government because it moves too slowly, and then they realize three years later that they have to if they want to really have any kind of impact. It’s just inevitable.
What kind of funding sources are available for artificial intelligence that seeks to help create positive social impact? And what has been the impact of the pandemic on those funding sources?
James: So a lot of our funding sources are institutional. So from federal sources, European Union sources, kind of large national programs that deal with this kind of technology-driven change. And so, certainly, we’ve been more on that side. We have been developing a lot of our corporate sponsorship programs, membership programs, however you want to term that side. And certainly, there are many corporations that now see a benefit to partnering with an organization that is technologically advanced and trying to make progress on the SDGs. That’s because a lot of corporations are now seeing how they are placed with respect to the SDGs. That might be a marketing thing. It might be because they are actually core to one of these supply chains.
Take Ikea as an example. When it comes to sustainable forestry, they are a key stakeholder in that at a global level. It’s such an enormous operation, enormous supply chains. They care enormously about sustainability initiatives. And so, it’s not going to be a surprise that they might also care about it being technology-driven, scalable and what can you do in those areas.
Well, just to finish the thought off, individual donations have been practically impossible for us to make scalable enough for our organization. We’ve tried on multiple occasions. We’ve tried crowdfunding campaigns. We don’t have stranded kittens that people can kiss, and we don’t have soup kitchens where you have the individual contact. And we found that people just don’t– they don’t give on an individual basis to us. Other organizations have advantages in that area, I’m sure, and maybe we’re not very good at that. But it hasn’t worked for us as an organization, which was a pity for me personally, because when we started, I was adamant that I wanted us to be individually funded because I wanted us to remain independent of those pressures.
But it turned out that that just wasn’t going to be.
Your second question about the impact of the pandemic over the last year – it’s been fairly severe. For us, it’s been very severe. We were lucky that we are very lean as an organization, and we had been very careful financially up until when things hit… and we’ve also been lucky with some of our core supporters. But I’d say around half of our institutional funding dried up completely and almost immediately because it was redirected to specific COVID relief initiatives, which is understandable; societally, we have to make decisions about where funding goes. And emergencies happen. And if ever an emergency happened, it was definitely in the past year and a half.
But for us as an organization, because we were not specifically focused on things that are really relevant to getting vaccines out to people faster… or preventing COVID spread in the community, that’s just not how we operated as an organization. And I don’t think pivoting would have made much sense for us. As a result, we saw a lot of our funding sources dry up mechanically over that period.
AI is a vector to being able to open conversations in society that previously were very difficult to start.
Denver: I’m sure you’ve tried this, but when you’re talking about individual donations, I think of the cryptocurrency crowd because they seem more likely to support your kind of organization than a lot of others. I also think of the gaming crowd. Those are two demographics with views and perspectives of the world that are pretty untapped, to a certain degree, and you just wonder if they might give the kind of work that you’re doing a little more careful a look.
But let me close with this, James. You have said that AI allows us to have the conversations that we could previously not have had. What kind of conversations would those be?
James: This is one of my principles that we use within our organization: That AI is a vector to being able to open conversations in society that previously were very difficult to start.
Now, for example, take questions around diversity, equity, and inclusion – the idea that we talk openly about gender disparities in the workplace. If you go back 10-, 15 years, these were almost taboo subjects. There are cultural norms. You don’t really talk about it. It’s exploded over the past decade. You’re now having the media talk about these issues 24/7. We’ve got programs from all levels of government. We’ve got many nonprofits that have been set up to tackle these issues – everything from gender equality, to recognizing the range of different personalities that people can take on, both within the workplace and at home, and allowing for cultural diversity, allowing for ethnic diversity, and really promoting the positive from all of these things.
We found that using AI to start those conversations has allowed us to not necessarily put an AI solution into the space, but to actually get people talking openly about: What are the real problems there? What is it that we want? What do we want from our culture in terms of encouraging women to stay in the workforce? How does it work with children? How do you want your healthcare system to work?
It used to be that in the US, questioning the fact that we had an insurance-based healthcare system was very difficult. Now, it’s easy. Now everybody is questioning whether it’s the right thing to do. But a lot of those conversations from our perspective have been facilitated by just an acceleration of new technologies that can change the paradigms.
So again, we don’t like to put AI into every space. In fact, we have economists. We have sociologists. We have people who are trained in all different disciplines and a lot of the things that we recommend are not necessarily AI first. But we are in those conversations, and I feel like having the technology spine actually means people listen; people are willing to talk about whether it should be different, and we found that very useful.
Denver: Yes. It becomes a catalyst. It becomes a lubricant. It becomes something to get a conversation started, and that is profound.
For listeners who want to learn more about the AI for Good Foundation, or maybe even financially give you a couple bucks, tell us about your website and what they can expect to find there.
James: So you can go and visit ai4good.org – the number four. You’ll find our different programs that fit under education, policy, and research. Those are the three verticals that everything we do falls under.
There are ways to get involved with us on a volunteer basis. There are ways to get involved, and that’s whether you’re a technical person or a non-technical person. We look for expertise across a broad range of areas.
There are ways to get involved with us building our programs out directly, if there are people with particular skills that they want to apply or areas where they’re interested. If you’re a university, you can take our SDG and AI launchpad curriculum, and you can actually implement it free of charge within your university, along with all of the resources needed to do that.
And really, if you do want to support us as well, then there are obviously lots of ways to get involved from a corporate perspective, from an individual perspective. And they’re all there on the website. And if you want to give us some cryptocurrencies, there are ways for that to happen as well.
Denver: You can do that too. Well, I’ve been to your website quite a few times, and it is indeed rich. Thanks, James, for being here today. It was a real pleasure to have you on the program.
James: Thank you, Denver. It was a real pleasure to talk to you.
Listen to more The Business of Giving episodes for free here. Subscribe to our podcast channel on Spotify to get notified of new episodes. You can also follow us on Twitter, Instagram, and on Facebook.