Can foundations use artificial intelligence to improve their grant management? Can AI support the internal capacity of organizations?
This special edition on Philanthropy & AI delves into the role of grant-making philanthropy, and of wealthy patronage, in the development of cybernetics and AI. Learn how philanthropy can contribute to AI development by preventing misuse and promoting a democratic culture and critical thought in times of social and ecological transition. Below is the opening editorial by Jean-Marc Fontan, PhiLab Co-Director. You can explore the full special issue here.
“Whether or not we’re ready for it, AI is here. AI brings with it the opportunity to do tremendous good in the philanthropic sector. We cannot let what we don’t know prevent us from harnessing its full potential. Instead, let’s take the long view and explore how we can use AI in a sustainable and equitable way to best serve our communities.”
Andrew Chunilall, Chief Executive Officer, Community Foundations of Canada
Editorial: Philanthropy and Artificial Intelligence
By Jean-Marc Fontan, PhiLab Network Co-director
History is filled with twists and turns. The Luddite movement began on March 26, 1811. The Luddites were workers and artisans in English factories who rebelled against mechanization in the textile industry, ransacking factories and smashing looms with hammers.
At the turn of the 19th century, mechanized factories degraded working conditions and deepened poverty. From being masters of their trade, workers, artisans and craftspeople, some as young as eight years old, became assistants at the beck and call of high-performance, noisy and dangerous machines. Alongside the Luddite movement, workers in England and continental Europe mobilized to demand labour code reforms from lawmakers.
[Translation] The mechanical looms led to degrading living conditions for the old hand weavers, whose income dropped from 21 shillings in 1802 to 14 shillings in 1809. In 1807, over 130,000 workers signed a petition to set a minimum wage. (Hérodote)
On March 29, 2023, 212 years after Ned Ludlam’s followers rose up, major figures in the world of new technology issued an appeal, in an open letter, on the dangers and potential abuses of artificial intelligence (AI).
[Translation] AI systems that could compete with human intelligence can pose a great risk to humanity, claim the 1,000 or so people who signed the open letter published by the Future of Life Institute, an organization which seeks to limit the risks associated with the development of new technologies. (Radio-Canada)
Their appeal called on governments to act to keep AI development under control. It is no longer a question of doing away with looms, as in the days of the Luddites, but of dealing with AI and harnessing it safely for the future of humanity.

The debate between AI’s defenders and opponents has spilled into the public sphere. In Quebec, Yoshua Bengio, one of the signatories of the open letter, is also a key player in AI development. For his part, Yann LeCun, another AI luminary, played down the dangers associated with AI development in a recent interview with Radio-Canada; he opposes any attempt to hinder its development.
[Translation] “We can imagine a future where everyone will access the digital world through a smart assistant that will help us in our day-to-day lives,” he said. “It’s just like having a human assistant.” (Radio-Canada)
Being at the service of a mechanized loom in the 19th century meant a loss of skill, a transfer of part of human expertise to a machine. According to Yann LeCun, embracing AI would not imply the same loss of skill, as other trades would emerge. Human labour will adapt and continue to support the development of increasingly intelligent, self-learning and self-generating machines.
[Translation] Certainly, as with any technology or technological transformation, trades will shift. Trades will disappear, and others will appear. […] The best way to build an intelligent machine is to let it build itself through learning. (Ibid.)
The terms of the debate are clear. On one side, Yann LeCun believes that growing reliance on AI-based machines will be beneficial as long as our democratic institutions remain healthy. On the other, Yoshua Bengio thinks we need quick and firm control over the progress of this technology to prevent AI from getting out of hand: from being used by governments to monitor and control populations, or from enabling the production of increasingly sophisticated and destructive weapons.
Behind these two positions lies a third option, the root of another debate. Should we support or prevent the development of a Big Tech AI civilization that would be smarter, more rational and more reasonable than humans? An AI civilization that would not need us to exist and develop, or that could even turn us into slaves.
The Matrix trilogy (written and directed by the Wachowskis, beginning in 1999) and the Blade Runner saga (two films and a TV series in production), based on Philip K. Dick’s novel Do Androids Dream of Electric Sheep? (1968), frame this third option: a world where an advanced AI, ethically and aesthetically superhuman, struggles for survival against humans seeking its extinction for fear of being surpassed and downgraded.
The issue raised by these two cinematic works is the question of who is more human: a cyborg or a human? An AI that represents an evolutionary leap beyond us, Homo sapiens, or an AI that subjugates humanity and turns it into an energy supply, the master now reduced to a slave.
Between those two scenarios lies a concrete and equally troubling option: the one offered by longtermism and transhumanism.

It is clear that, despite calls for caution, history will take its course. Hence the importance of precaution (Yoshua Bengio) and of maintaining a democratic culture (Yann LeCun). In this regard, the media, universities and philanthropic foundations have a critical role to play in shaping what will become history.
The dossier we have prepared explores the place and role of grant-making philanthropy, and of wealthy patronage, in the development of cybernetics and AI, examining both the possibilities this universe offers and the fears it raises.
Our dossier looks at how foundations can use AI to better manage their grants and the funding requests made to granting organizations. Specifically, it examines the role of AI in supporting organizations’ internal capacity by asking: how can AI tools be used to support administrative tasks?
The dossier also covers key political topics. Articles and podcasts address the current challenges, answering questions about AI governance and the instruments being used to regulate its development, with particular attention to issues of definition and policy.
Lastly, we explore philanthropy’s role in the development of AI: preventing its misuse and ensuring the maintenance of a democratic culture and the development of critical thinking in times of great complexity and of social and ecological transition.