Between January and February 2026, I facilitated three AI Integration Labs for the AI for Social Change initiative, a Google.org-supported project run by TechSoup and Digital Activism Program Partners that works to equip 10,000 civil society professionals with practical, responsible AI skills across 10 countries in Europe and Africa.

Each workshop examined the same underlying question through a different lens: how is AI fundamentally reshaping the operating environment for civil society organizations?

  • Workshop 1: AI & Information Integrity

  • Workshop 2: AI & Effective Communication

  • Workshop 3: AI & Digital Safety and Security

Roughly 38 participants joined across the three sessions: CSO representatives from Poland, Spain, Italy, the UK, Nigeria, Ghana, South Africa, Kenya, Slovakia, Lithuania, and beyond, alongside subject-matter experts in each domain.

The workshops weren't designed to teach AI. They were designed to map what has already changed.

Before the sessions, the TechSoup team had conducted extensive field research: 10 Country Mapping reports, key informant case studies, and a global online survey. I synthesized this research into a set of "shifts": specific, evidence-based statements about how AI has changed the landscape, presented in an "Old Reality → New Reality → Strategic Insight" format.

Then we asked: does this mirror your reality? What did we miss?

What emerged was striking. These aren't three separate problems. They are three manifestations of one systemic transformation. Disinformation exploits the communication channels CSOs depend on, and both exploit the security vulnerabilities CSOs are struggling to close.

Here are the 17 shifts we identified.

I. The Information Integrity Delta: 4 Shifts

The disinformation landscape hasn't just gotten worse; it has fundamentally changed shape. AI didn't invent disinformation (it's as old as politics), but it has removed every friction point that used to slow it down.

1. The Volume Shift: Troll Farms → Infinite Content

Old reality: Disinformation was expensive and slow. Troll farms required human labor, language skills, and coordination.

New reality: AI generates infinite, unique content for virtually nothing. Bad actors can test multiple campaigns against multiple targets simultaneously.

The insight: CSOs are now direct targets. When it costs nothing to launch a smear campaign, every organization that challenges power becomes a potential target. In Germany, the term "civil society" itself has been framed as an "enemy figure."

2. The Quality Shift: Obvious Fakes → Hyper-Reality

Old reality: We could spot typos and bad Photoshop jobs. Disinformation had a telltale cheapness.

New reality: Deepfake audio and video are becoming indistinguishable from reality. AI-generated text carries no grammatical fingerprints.

The insight: The verification gap is widening. Bad actors now have better tools than the average CSO program officer. And perhaps more dangerously, the existence of deepfakes creates doubt about all information, even genuine, credible content. Researchers call this the "Liar's Dividend": you don't need to fake something if you can make people doubt everything.

3. The Reach Shift: Major Languages → Hyper-Localized

Old reality: Disinformation was mostly in English, French, or other major languages, limiting its reach.

New reality: AI instantly translates hate speech into local dialects. Threats move to "dark social": WhatsApp groups and Telegram channels where moderation tools don't reach.

The insight: The threat has moved to closed networks. Disinformation campaigns are now globalized in production but hyper-localized in content, outsourced to one part of the world but targeting micro-communities in another.

4. The Defense Shift: Manual Checking → AI Shields

Old reality: Human fact-checkers manually debunked claims one by one.

New reality: The flood is too high for humans alone. The volume of AI-generated content exceeds any team's capacity to review.

The insight: AI must become a shield, not just a threat. Manual checking is causing burnout. CSOs need automated early detection tools, not to replace human judgment, but to protect human capacity.

II. The Communication Delta: 6 Shifts

If the disinformation story is about threats, the communication story is about opportunity, with traps. AI hasn't just changed how fast CSOs can communicate. It has changed what communication means.

5. The Production Shift: Creating Content → Curating Authenticity

Old reality: Creating content was the bottleneck. CSOs invested significant resources in producing professional-looking materials.

New reality: AI generates content instantly. When everyone can produce polished content, generic means invisible. The value of generic content has dropped to zero.

The insight: When everyone can generate, only authenticity stands out. CSOs must shift from trying to look "professional" to proving they are "real."

6. The Reach Shift: Broadcasting → Narrowcasting

Old reality: CSOs communicated in major languages to reach donors and international audiences, often leaving local beneficiaries behind.

New reality: AI enables instant, zero-cost translation into local dialects and audio formats. Voice messages and local-language content are now within reach.

The insight: Radical inclusion is finally possible. CSOs can talk to rural communities in their own language via WhatsApp. But this requires human oversight: AI translation can flatten cultural nuance if left unchecked.

7. The Donor Shift: Mass Marketing → Hyper-Personalization

Old reality: Fundraising relied on generic appeals to mass email lists. Personalization was reserved for major donors.

New reality: AI allows hyper-personalization at scale. Every donor can receive a tailored thank-you note, impact report, or appeal.

The insight: The Trust Paradox. The tool that enables personalization may destroy the trust it was meant to build. Donors can sense when something feels "off." Funders have flagged AI-generated grant applications as immediate red flags.

8. The Ethics Shift: Capturing Reality → Generating Reality

Old reality: A photo of a beneficiary was proof of impact. Authentic images required field visits, consent processes, and professional photographers.

New reality: AI can generate synthetic images of suffering, impact, and communities. The line between documentation and fabrication is blurring.

The insight: Ethical guardrails are the new brand safety. One exposed fake image could destroy years of credibility. The question every CSO should ask: "Would I be comfortable explaining how this was created to my donors, beneficiaries, and the press?"

9. The Verification Shift: Writing the Draft → Checking the Draft

Old reality: Quality control meant having a colleague review your work. The bottleneck was writing the first draft.

New reality: AI generates first drafts instantly, but they require careful verification. The bottleneck has shifted from creation to quality control.

The insight: Speed without verification is dangerous. CSOs need new workflows that pair AI generation with human quality control, not just for accuracy, but for voice, tone, and values alignment.

10. The Workflow Shift: Siloed Channels → Integrated Content Systems

Old reality: Communication happened in silos. The social media person handled one channel, the program team created reports. Each operated independently with duplicated effort.

New reality: AI enables integrated content ecosystems: one piece of content automatically adapted for multiple channels, audiences, and formats.

The insight: "Write once, adapt everywhere." The demand isn't just for faster content, it's for smarter content systems. But this requires a mindset shift: thinking about content as modular building blocks, not one-off productions.

III. The Security Delta: 7 Shifts

This was the workshop that hit different. While disinformation was urgent but analytical, and communication was optimistic, the security session was deeply personal. Participants are living through these threats daily. The post-workshop conversation revealed the physical and emotional toll: blackouts, surveillance, stress manifesting in the body.

And unlike the other two workshops, where certain shifts resonated more than others, here the votes spread nearly evenly across all seven shifts. Digital safety isn't experiencing a single crisis. It's experiencing a compound, simultaneous crisis where everything is urgent at once.

11. The Attack Surface Shift: Hacking Systems → Hacking Humans

Old reality: Attacks targeted systems, breaking firewalls and guessing passwords.

New reality: Attacks target humans. AI generates flawless, personalized phishing emails and deepfake voice clones at scale.

The insight: Staff are the new firewall. The primary defense is no longer software; it's behavior and verification protocols. As one participant put it: "We need more psychologists, not more IT guys."

12. The Data Shift: Big Data (Asset) → Toxic Data (Liability)

Old reality: "Big Data" was the goal. Collect as much as possible to demonstrate impact.

New reality: "Toxic Data" is the risk. AI makes it easier to re-identify anonymized data or scrape sensitive information from leaks.

The insight: Radical data minimization is the best defense. The safest data is the data you don't collect. This is particularly urgent for CSOs working with vulnerable populations: activists, refugees, at-risk communities.

13. The Accessibility Shift: One-Time Purchase → The Poverty Penalty

Old reality: Security was a product. You bought an antivirus license once a year.

New reality: Security is a subscription. Constant paid updates, cloud security fees, and managed services.

The insight: Small CSOs are being priced out of safety. Enterprise-level security licenses can cost 8x standard prices. The most vulnerable organizations have the weakest shields, creating a dangerous equity gap.

14. The Trust Shift: Digital = Progress → Paper = Safety

Old reality: Moving from paper to digital was the definition of progress.

New reality: Digital threats feel overwhelming and uncontrollable. Some organizations are abandoning digital tools entirely.

The insight: Fear is driving CSOs backward. In sensitive sectors like child protection and anti-trafficking, organizations have reverted to paper-based systems out of sheer mistrust. Training must restore confidence, or digital transformation will fail.

15. The Response Void: Call Your IT Person → No One to Call

Old reality: If hacked, call your IT person or tech-savvy volunteer.

New reality: No one knows who to call. No recovery plan, no contacts, no playbook. Systems go dark for weeks.

The insight: CSOs need "emergency kits" before an attack happens: incident response contacts, backup protocols, recovery playbooks. Prevention is important, but preparation for failure is essential.

16. The Supply Chain Shift: Tools You Own → Tools That Own Your Data

Old reality: Software ran on your computer. You controlled where your data lived.

New reality: Every tool you integrate could be a data leak vector. Cloud services, AI plugins, and third-party add-ons all access your sensitive data.

The insight: Data sovereignty matters. Cybersecurity experts shared evidence of finding organizational documents, complete with participant lists and event locations, in public AI outputs. Staff had uploaded sensitive documents to AI tools without anonymizing them first.

17. The Convergence Shift: Protecting Data → Protecting People

Old reality: Digital security and physical security were separate concerns.

New reality: A leaked location, intercepted communication, or deepfake impersonation could endanger staff lives. Digital failures have physical consequences.

The insight: For CSOs in hostile environments, AI-powered surveillance by state or non-state actors is a direct threat to human safety. This was the highest-voted shift in the workshop, reflecting the lived reality that for many participants, digital security is literally a matter of personal safety.

What Cuts Across All 17 Shifts

After synthesizing the findings from all three workshops, six themes emerged independently in every session, without prompting, across different countries, different topics, and different expert validators.

The human is the weak point. Not the technology. Every workshop, whether discussing disinformation, communication, or security, converged on the same truth: AI literacy is a mindset shift, not a technical skill.

Ethics is the gateway, not the afterthought. All three workshops placed ethics among their top priorities. CSOs will not adopt AI if it threatens their integrity. Ethics isn't a module to add at the end; it's the foundation everything else rests on.

The foundational understanding gap is real. Participants don't want to become AI engineers. But they want to understand who built these tools, who benefits, what happens to their data, and what the consequences of engagement are. This critical understanding is the prerequisite for everything else.

The poverty penalty is systemic. Debunking lies costs more than creating them. Enterprise security costs 8x standard pricing. Every AI tool has a subscription fee. Small CSOs are being priced out of both safety and opportunity.

Community, not just courses. This was perhaps the most consistent finding. In all three workshops, independently, unprompted, participants said the same thing: don't just give us self-paced training. Give us a space to figure this out together, in real time, with peers who understand our reality.

The environmental elephant in the room. Environmental costs of AI were raised in every session. Participants consistently noted uncertainty about the real impact and suspicion that AI companies are not being transparent. For environmental CSOs, this creates a direct ethical conflict in adopting the tools.

What Comes Next

TechSoup's team will now take these findings into the BUILD phase, developing training modules, community infrastructure, education content, and a Training-of-Trainers program. The detailed curriculum recommendations are in the hands of their education and community teams.

For me, these workshops reinforced something I keep encountering in my work on responsible AI governance: the organizations closest to the communities most affected by AI are often the ones with the fewest resources to navigate it. And when you give them space to design what they actually need, they don't ask for another course. They ask for each other.

The 17 shifts are not predictions. They are descriptions of what is already happening. The question isn't whether civil society will be affected by AI. The question is whether civil society will have the tools, the community, and the governance frameworks to respond on its own terms.

Your Feedback Matters

What did you think of this text? Take 30 seconds to share your feedback and help us create meaningful content for civil society!


About the Author

Ayşegül Güzel is a Responsible AI Governance Architect who helps mission-driven organizations turn AI anxiety into trustworthy systems. Her career bridges executive social leadership, including founding Zumbara, the world's largest time bank network, with technical AI practice as a certified AI Auditor and former Data Scientist. She guides organizations through complete AI governance transformations and conducts technical AI audits. She teaches at ELISAVA and speaks internationally on human-centered approaches to technology. Learn more at https://aysegulguzel.info or subscribe to her newsletter AI of Your Choice at https://aysegulguzel.substack.com.


The article "AI Integration Labs: 17 Shifts Reshaping Civil Society" summarizes work done as part of the "AI Integration Labs" workshops, which were co-developed by Ayşegül Güzel and are part of the AI for Social Change project within TechSoup's Digital Activism Program, with support from Google.org.