In the previous article (AI Literacy and CSOs: Legal Guidelines and the Main Challenges), we established that AI literacy is not merely an addition of technical skills, but both a democratic imperative and a strategic lever for your organization. While the European legislative framework, through the original version of the AI Act, highlighted the obligation for organizations to ensure a certain level of control and competence over these systems, the risk remains that AI could deepen the digital divide and reinforce social inequalities. As we have seen, literacy involves understanding, using, and critically evaluating these systems to ensure that decisions remain human-driven and well-informed. But how can CSOs embed this cultural shift concretely?
To turn generative AI into a driver of empowerment rather than a source of dependency, we propose exploring six key solutions, ranging from semantic rigor to intellectual autonomy and from the individual level to the collective one, including social dialogue and structured competency frameworks.
At the individual level, the challenge is to equip each member of the organization (employees, volunteers, and even beneficiaries) with the intellectual tools needed to engage critically with AI systems in their daily activities.
1. Avoid lumping technologies together
It is important to consistently specify which type of AI is being discussed and to favor the terms “AI system” or “AI framework” over “AI tool.” This distinction underscores that generative AI systems integrate training data, computing infrastructure, economic models, social representations, and technical standards into a complex socio-technical whole. Overly simplistic language obscures their systemic nature and the power dynamics they embody.
For example, a CSO working on social inclusion might organize internal workshops where staff distinguish between a chatbot interface and the broader model behind it, helping teams better understand issues such as data bias or dependence on large tech providers. This simple clarification can already shift how these systems are perceived and used.
2. Adopt rigorous evaluation standards
The evaluation of AI systems currently suffers from what journalist Kevin Roose described as a “veritable mess”, with inconsistent and opaque methodologies. CSOs must therefore develop a culture of critical assessment, avoiding ad hoc comparisons and demanding transparent and shared evaluation criteria.
Concretely, a humanitarian CSO using AI to prioritize aid distribution could implement an internal review protocol: before adopting a solution, teams would systematically examine the source of performance claims, the datasets used, and the limits of the system. This not only improves decision-making but also strengthens organizational accountability toward beneficiaries and donors.
3. Distinguish between actual uses and promised uses
While the discourse around generative AI is often enthusiastic, real-world adoption remains uneven and sometimes limited. Understanding this gap is essential to avoid misguided investments or unrealistic expectations.
For instance, a CSO in the education sector might hear that AI can revolutionize personalized learning, yet find that teachers use it only sporadically for lack of training or trust. By conducting internal feedback sessions, the organization can identify what is truly being used versus what is merely being promoted, and tailor its training strategy accordingly.
AI literacy must therefore be seen as a learning continuum that encompasses the ability to understand, use, and critically evaluate AI systems. This approach also offers a unique opportunity to address broader issues of digital literacy, including data awareness, the attention economy, and platform governance. These dimensions should be adapted to each organization’s mission and audience.
Moving from individual practices to organizational transformation, the collective level plays a decisive role in structuring, sustaining, and scaling these efforts across the CSO ecosystem. Here are three more pillars to consider:
4. Share a harmonized competency framework
To structure skills development, CSOs should rely on national or European frameworks such as the Cadre de référence des compétences numériques (CRCN) in France or DigComp 3.0 at the European level. Other guides, such as "The Responsible AI Roadmap for CSOs" by Ayşegül Güzel on Hive Mind, are ready to use. This helps move beyond the current “Wild West” of training offers and ensures a common baseline of knowledge.
For example, a federation of CSOs could adopt DigComp 3.0 as a shared reference and map the skills of its staff and volunteers against it, allowing for more coherent training pathways and facilitating collaboration between organizations.
5. Amplify professional training through redistributive mechanisms
The history of general-purpose technologies (steam engine, electricity, digital technology) shows that strengthening human capabilities yields greater returns than focusing solely on technological tools.
For CSOs, this can mean allocating part of their budgets specifically to training end users and leveraging mechanisms such as pooled funding models. A small association, for instance, might partner with others in its network to co-finance AI literacy workshops, making high-quality training accessible despite limited resources.
6. Rely on local digital mediation
CSOs should build on existing community ecosystems (for instance, community centers, or local digital hubs) to deliver accessible AI literacy activities (such as workshops, debates, or one-on-one mentoring).
For example, a CSO working with vulnerable populations could collaborate with a local library to organize introductory sessions on AI, helping beneficiaries understand and experiment with these technologies in a supportive, human-centered environment.
Conclusion
After exploring why AI literacy must be embedded in CSOs’ strategic vision, this second article has focused on how to make it a reality. Embedding AI literacy is not a one-off initiative but a strategic transformation that requires both individual engagement and collective structuring.
By combining semantic clarity, critical evaluation, grounded usage analysis, shared competency frameworks, investment in human capital, and local mediation, CSOs can turn AI into a lever for empowerment rather than exclusion. Implementing these six pillars will not only advance AI literacy in your CSO; more importantly, it will reaffirm the core mission of civil society organizations: equipping individuals and communities with the knowledge and agency they need to thrive in an increasingly complex world.
Your Feedback Matters
What did you think of this text? Take 30 seconds to share your feedback and help us create meaningful content for civil society!
The article was written by Martin LEPINETTE, Research and Project Manager, and Jean-François LUCAS, General Manager, both at Renaissance Numérique.
Disclaimers
This resource was created as part of the AI for Social Change project within TechSoup's Digital Activism Program, with support from Google.org.
AI tools are evolving rapidly, and while we do our best to ensure the validity of the content we provide, some elements may no longer be up to date. If you notice that a piece of information is outdated, please let us know at content@techsoup.org.
