Our recent multi-country research, including surveys across nine nations and extensive case studies, revealed a stark reality: only 8% of CSOs currently have an AI policy, and nearly 40% struggle to ensure data privacy when using these tools. As one representative from Ghana shared with us, access to clear policy and guidance isn't just a luxury; it’s vital for survival and responsible growth.

This roadmap is our response—a step-by-step process for CSOs ready to move from uncertainty to intentional, values-driven action.

"The Responsible AI Roadmap for CSOs" guide is for civil society organizations ready to move from AI anxiety to intentional action.

Whether you're a small advocacy nonprofit, a large international NGO, or a community foundation, the process of defining your Responsible AI principles is fundamentally the same: it begins with honest conversation and ends with shared commitment.

This guide draws on a methodology developed and tested by the author through hands-on work with a range of organizations, whose cases appear throughout as practical examples.

What you'll have by the end of this process:

  • A clear understanding of your organization's current relationship with AI: the tools you're using, the tensions you're feeling, the values at stake

  • A prioritized set of 3-7 Responsible AI principles tailored to your mission and context

  • Definitions that make those principles meaningful and actionable for your team

  • Momentum toward governance structures and policies (which we'll cover in subsequent guides in this series)

The approach: Look → Create → Build

This guide follows a three-phase methodology:

  • LOOK: Opening space to see what is - mapping your current AI landscape, surfacing tensions, gathering existing wisdom (regulations, benchmarks, your organization's existing values). This phase can largely be done as preparation before your team comes together.

  • CREATE: The messy, collaborative space - where you synthesize insights into principles, define governance structures, and document policies. This is where the team works together.

  • BUILD: Implementation and operationalization - where principles meet practice through risk assessment, mitigation strategies, and ongoing governance controls. This is where the governance structures you've defined take ownership, with designated individuals or teams responsible for ongoing monitoring and accountability.

A note on pace: This work cannot be rushed. Defining principles in a single one-hour meeting produces documents that gather dust. Plan for preparation time (LOOK) plus at least one 90-minute session for collaborative creation, with follow-up time for synthesis and refinement. The investment is worth it. These principles will guide every AI decision your organization makes.

Over the coming weeks, return to Hive Mind to read the LOOK, CREATE, and BUILD parts of the guide, which will help you and your CSO work out your Responsible AI principles.

Roadmap Publication Schedule

We are releasing this guide in weekly parts to allow your team time to digest and implement each stage.

Phase 1: LOOK

Phase 2: CREATE

Phase 3: BUILD

  • Part 6: Introduction to Risk Assessment – Understanding the safety landscape.

    • Part 6.a: Guide to Risk Assessment – Deep-dive tools for ongoing monitoring and accountability.

Your Feedback Matters

What did you think of this text? Take 30 seconds to share your feedback and help us create meaningful content for civil society!


This article introduces a series, "The Responsible AI Roadmap for CSOs," developed as part of the AI for Social Change project within TechSoup’s Digital Activism Program, with support from Google.org.

About the Author

Ayşegül Güzel is a Responsible AI Governance Architect who helps mission-driven organizations turn AI anxiety into trustworthy systems. Her career bridges executive social leadership, including founding Zumbara, the world's largest time bank network, with technical AI practice as a certified AI Auditor and former Data Scientist. She guides organizations through complete AI governance transformations and conducts technical AI audits. She teaches at ELISAVA and speaks internationally on human-centered approaches to technology. Learn more at https://aysegulguzel.info or subscribe to her newsletter AI of Your Choice at https://aysegulguzel.substack.com.