This observation can also be applied to civil society organizations (CSOs) that, over the last decades, have contributed to generating the "third sector" in Italy and other European countries, alongside public institutions and market companies. The size of this sector (Italy counts over 170 thousand third sector entities) and its variety (associations, foundations, social enterprises, etc.) make it particularly relevant to ask with what level of maturity these actors are facing the new frontier of technological innovation.

The question is not so much whether they should integrate artificial intelligence tools into their processes, but with what level of awareness and with what organizational resources.

From this point of view, AI is reintroducing a tension between competing logics in third sector organizations: the logic of operational efficiency and that of cultural transformation. This tension has historically accompanied the development of the sector, but it is now compounded by pressure for technological innovation. The result is a set of dynamics of organizational change that will be analyzed here, starting from the lessons of open innovation programs dedicated to social enterprises and other third sector actors.

Two Perspectives Around the Same Tension

The Governance of Adoption: Asking the Right Questions

Looking at the governance dimension, what is striking is not so much resistance to AI as its opposite. The urgency to adopt often translates into decision-making processes that lack a clear vision of the reasons: what concrete organizational need are we trying to address?

Answering this question is the premise of any integration path that has a chance of taking root. Organizations that manage this transition are characterized by their ability to translate external pressure to innovate into an internal questioning of their foundational processes.

Time dedicated to reflecting on the deep reasons for adoption is, in most cases, not clearly set aside. The need to stay up to date so as not to fall behind leads to prioritizing speed over depth. The result is that many organizations end up with AI tools that are introduced but never take root, and with experiments that are started and then abandoned. This creates additional effort that overlaps with existing workloads, without producing the expected effects.

There are also questions concerning the quality of adoption decisions: AI produces results that depend, to a large extent, on the quality of the data and information on which it operates. An organization with disordered document processes, fragmented databases, and poorly structured information flows cannot expect an AI tool to fix these fragilities: it will rather tend to amplify them. Data quality is therefore an organizational capability that precedes the use of AI tools.

Space for Learning and Mistakes

Looking from the side of those who provide services, manage projects and coordinate work teams within third sector entities, the context of AI adoption presents a complexity that cannot be reduced to a mere resistance to change.

Third Sector organizations, especially the more evolved ones, find themselves at a stage of their life cycle characterized by organizational and managerial complexity, a condition accentuated in recent years by the combined effects of the crisis of public welfare systems, the increasing complexity of needs, and shrinking resources. In this context, asking operational staff to devote time and energy to experimenting with new tools clashes with a compressed reality: the space available for learning is minimal.

The paradox is structural: you would like to use AI to reduce operational overhead, but operational overhead prevents you from investing in AI adoption deeply enough to produce real results. Organizations that push on the adoption accelerator, without simultaneously easing operational burdens, risk adding effort to effort.

Furthermore, there is a cultural challenge concerning the relationship with failure. Experimenting with AI tools means dealing with unsuccessful attempts, adjustments, and trial and error. CSOs are often structured to guarantee continuity and quality of service to users, and are often bound by reporting logics that reward short-term results rather than transformative effects and impacts in the medium and long term. They therefore struggle to build spaces in which failure is recognized as a legitimate part of learning. This is not for lack of will or ability, but because institutional mechanisms and external expectations tend to penalize error rather than value it as a resource.

Adoption, Mediation, and Narrative

If the tension between governance and operations represents the deep structure of the problem, the question that arises is under what organizational conditions it can be effectively addressed.

The first element concerns the nature of the adoption process. The integration of AI into organizational processes is not a linear event. It is a recursive process, made up of iterative cycles in which the lessons of one phase feed the choices of the next. Throughout, people with different roles and perspectives bring evaluations that do not always converge, yet must find a viable synthesis. Recognizing this recursive nature is not just a descriptive observation. It is an organizational skill that must be cultivated, because it runs counter to the natural tendency of organizations to seek stability and predictability. For example, when using AI, work teams should consider the boundary between tacit knowledge and respect for privacy when sharing sensitive information about users of social, welfare, or educational services. From this point of view, an unexpected ally can be found in internal compliance. In recent years, third sector organizations have shown an ability to use norms, protocols, and standards in a way that is not strictly regulatory but also enabling and learning-oriented. This very expertise could be brought into play in the internal governance of AI.

The second element concerns the construction of internal mediation figures, a sort of "AI ambassador". These are people who do not necessarily have advanced technical skills, but who have curiosity, a willingness to engage in dialogue, and the ability to translate between different languages and organizational cultures. They are responsible for accompanying their colleagues in experimentation, collecting difficulties, reporting progress, and keeping alive the space of collective attention to AI-driven change. Such figures can revive the changemaking roles that these organizations periodically try to cultivate, especially older organizations grappling with generational transitions.

The key is for CSOs to view gradualness as a conscious strategy rather than a fallback. In organizations where the integration of AI has produced more solid results, and not only in the third sector, there is almost always an initial choice to focus on small, circumscribed problems with rapid verification cycles. Starting from a concrete, solvable problem builds trust, tangibly demonstrates the usefulness of the tools, and generates motivation to use AI, which is, overall, the most valuable resource in processes of organizational change.

Finally, there is the question of the narrative that Third Sector organizations build around artificial intelligence. The most widespread risk is not overestimating the potential of the tools, but unconsciously adopting a framework that members of the organization interpret in different ways. AI as a tool for efficiency and optimization is a partial representation for organizations whose raison d'être is to produce relational value, support life paths, and build community ties. The competence required is not only to use AI tools well, but to remain active in defining the criteria by which these tools are evaluated, criteria that include, but are not limited to, operational efficiency. This awareness opens a fresh perspective on the narratives of AI and other "for good" technologies, which often appear to be redistributive gestures on the part of resource holders (the so-called "big tech"). In this case, however, it is the management skills of social actors that indicate the best ways for these technologies to be made compatible with purposes of general interest and the creation of shared value.

A Work in Progress

Thinking about skills for the integration of AI in the Third Sector means, in short, thinking about what type of organization you want to be while going through this transition. It is a practical question that guides concrete choices about where to invest, what to experiment with, and what alliances to build.

The skills that these organizations need most are not only of a technical and training nature. Rather, it is a question of skills for governing change in conditions of uncertainty: the ability to ask questions before adopting solutions, to build organizational spaces in which failure is a resource and not a stigma, to mediate between different internal perspectives, to keep alive a collective reflexivity on the processes in progress.

These are, in many ways, the same skills that the most mature CSOs have developed over decades of work at the frontier between public welfare, market, and community. The open question is whether these skills, matured in contexts of co-production of social value, are transferable to processes of technological adoption, and whether the speed with which this transformation is taking place risks making them momentarily invisible, precisely at the moment when they would be most needed.

Your Feedback Matters

What did you think of this text? Take 30 seconds to share your feedback and help us create meaningful content for civil society!


About the Authors

Riccardo Naidi is Digital Transformation Manager at Consorzio Nazionale CGM and co-founder of Factory2030 ETS, a youth-driven civic-tech organization. He works at the intersection of artificial intelligence, digital transition, and the social economy, designing and delivering AI literacy programs for social cooperatives and Third Sector organizations. His practice combines hands-on experimentation with organizational change processes, with a particular focus on making AI adoption meaningful and sustainable for civil society actors.


Flaviano Zandonai is a sociologist and Innovation Manager of the Consorzio Nazionale CGM, one of Italy’s major networks of social enterprises. For over twenty years he has worked in social innovation, social enterprise, and Third Sector transformation, operating across research, education, and project development. He is considered a reference point for the integration of community-based welfare and technological innovation.

Disclaimers

This resource has been created as part of the AI for Social Change project within TechSoup's Digital Activism Program, with support from Google.org.

AI tools are evolving rapidly, and while we do our best to ensure the validity of the content we provide, sometimes some elements may no longer be up to date. If you notice that a piece of information is outdated, please let us know at content@techsoup.org.

This content was created with AI assistance and has been reviewed and edited by Riccardo Naidi.