However, we must ensure as quickly as possible that this technology evolves into a tool for the emancipation of our democratic societies. This can be achieved through the development, deployment, and adoption of AI literacy across all structures, including civil society organizations (CSOs).
What is AI literacy?
The concept of literacy, which has its roots in the ability to read and write, regularly appears in the public debate. At the heart of this approach is the desire to equip every citizen with the knowledge and skills necessary to navigate autonomously in a constantly changing technological environment. We therefore consider AI literacy to be the ability to understand, use, and critically evaluate AI systems. This democratic goal can be broken down into five operational objectives:
Understanding the basic principles of AI, which determine its capabilities and limitations, without necessarily mastering the more complex technical aspects. The aim is to demystify these technologies in order to gain a realistic understanding of what AI systems can and cannot do, and to distinguish between likely developments and speculative promises. For instance, a CSO working on human rights understands that a generative AI tool can help summarise large policy documents, but also knows that it sometimes does not “understand” context or values, and therefore cannot replace expert legal or ethical judgement when analysing sensitive issues.
Using these systems effectively, appropriately, and safely, developing practical skills that make it possible to seize opportunities and limit risks. A CSO can use AI tools to draft grant proposals or translate campaign materials more efficiently, while ensuring internal guidelines on data protection are respected.
Critically evaluating AI outputs and their societal effects, understanding first and foremost that AI responses can be false (hallucinations), biased, or limited, and can contribute to the misinformation and manipulation of audiences and users. For instance, a media watchdog organisation double-checks AI-generated summaries of online narratives before publishing them, aware that the tool may reproduce dominant biases.
Making informed decisions about the use of AI. In practical terms, this means adopting responsible AI practices, both in how AI is used and how it is deployed. Today, much attention is given to the size of models, particularly those with hundreds of billions of parameters. However, the so-called “sweet spot” typically lies around 40–50 billion parameters, and models of this size are sufficient for most targeted use cases. Much like software, there is little value in paying for an entire suite if you only need a fraction of its features.
Contributing to democratic debates, if only by describing and explaining AI's uses, and being able to express needs or expectations in light of the above points. This dimension should be approached from a civic perspective: the objective is to ensure that everyone is well-informed and able to think critically in order to form their own opinions. For CSO leaders, this also provides a means of better understanding the strategies of economic actors and the public policies that may affect their work directly.
Legal Guidelines on AI Literacy
The term “literacy” has long been present in French and European public debate. In fact, as early as 2004, Stuart A. Selber proposed a framework for understanding computer literacy. Since then, several developments have taken place in the field of artificial intelligence, notably with the addition of the ethical dimension (UQAM, 2024).
These theoretical concepts have also inspired guidelines such as those recently produced by the OECD and the European Commission in the report “Empowering Learners for the Age of AI”, which even foreshadows an international standardization of AI literacy. These guidelines can also be compared with, or complemented by, those developed by UNESCO specifically for the era of generative AI, which integrate algorithmic literacy into a comprehensive approach to media and information literacy. Although these developments are welcome, they fall short of providing a training and acculturation program that truly matches the challenges posed by artificial intelligence models.
Nevertheless, these reflections and guidelines have been echoed in the legislative sphere. The adoption in June 2024 of the European Regulation on Artificial Intelligence (AI Act) marks a major step towards providing a structured framework for the use of AI in the European Union. Beyond classifying systems by risk level (unacceptable, high, limited, minimal), Article 4 introduces, for the first time, an obligation of “AI literacy” for all actors involved in the life cycle of a system. This applies to suppliers, integrators, deployers, and end users, although the details remain unclear. This article also specifies the responsibility of AI system suppliers and deployers, who must “take measures to ensure, to their best extent, a sufficient level of AI literacy” among their staff. Additionally, the European Commission presented a comprehensive digital simplification plan (digital omnibus), while proposing to water down the AI literacy requirement included in the AI Act, reducing it to a simple encouragement left to the goodwill of companies.
AI literacy and Economic, Social and Democratic Development of CSOs
At the same time, in France, the digital divide remains very much present and could add to (or even amplify) the lack of AI literacy within organizations. Indeed, more than 16 million people currently remain digitally excluded, posing a major risk: that AI becomes a factor of inequality rather than a lever for inclusion. The ability to embrace generative AI thus becomes a matter of social and democratic justice. But technology only transforms society if it is truly adopted, which requires lifelong support for individuals in the face of diverse uses and resources.
The adoption of true artificial intelligence literacy is a major strategic lever for the development of your CSO, grounded in three complementary dimensions:
Economic: The history of technology shows that productivity gains benefit those who are able to widely disseminate the skills necessary for their use. Investing in AI training therefore offers a higher return than Research and Development (R&D) support alone. A partial reorientation of the massive investments expected towards upskilling employees and users would reduce inequalities in access to the benefits of AI innovation. For CSOs, this challenge remains valid for the development of their activities, regardless of their size and the budget that can be allocated to training.
Social: AI literacy makes it possible to reconcile rapid adoption of technologies (not “missing the boat”) with the development of a critical eye. It addresses the tension between the economic imperatives of innovation and the need for inclusive appropriation, which is essential to avoid exclusion and strengthen social justice.
Democratic: Educating citizens to understand and question AI systems is essential to preserving democratic action in the face of the rise of algorithmic decision-making. AI literacy thus becomes a democratic imperative, ensuring that AI serves to empower rather than to reinforce inequalities.
In conclusion, AI literacy is emerging as a key issue, not only in terms of training but also in terms of democracy and citizenship. It is a question of supporting employees and managers in adopting (or rejecting) technologies that are now ubiquitous, but above all of enabling informed decisions to be made about their use. This increase in skills will determine the possibility of free, responsible, and critical choice, both individually and collectively.
Within your organization, don't risk AI becoming an opaque technology. Investing in AI literacy means choosing to use technology to promote autonomy, social justice, and democratic vitality.
___________________________________________________________________________________________________________________________________________________________________
The article was written by Jean-François LUCAS, General Manager at Renaissance Numérique and Martin LEPINETTE, Research and Project Manager at Renaissance Numérique.
Disclaimers
This resource was created as part of the AI for Social Change project within TechSoup's Digital Activism Program, with support from Google.org.
AI tools are evolving rapidly, and while we do our best to ensure the validity of the content we provide, sometimes some elements may no longer be up to date. If you notice that a piece of information is outdated, please let us know at content@techsoup.org.
"AI literacy and CSOs: legal guidelines and the main challenges", by Jean-François LUCAS and Martin LEPINETTE, 2026, for Hive Mind is licensed under CC BY 4.0.