Whether you're a small advocacy nonprofit, a large international NGO, or a community foundation, the process of defining your Responsible AI principles is fundamentally the same: it begins with honest conversation and ends with shared commitment.

This guide draws on a methodology developed and tested by the author through hands-on work with a range of organizations, whose cases appear throughout as practical examples.

What you'll have by the end of this process:

  • A clear understanding of your organization's current relationship with AI: the tools you're using, the tensions you're feeling, the values at stake

  • A prioritized set of 3-7 Responsible AI principles tailored to your mission and context

  • Definitions that make those principles meaningful and actionable for your team

  • Momentum toward governance structures and policies (which we'll cover in subsequent guides in this series)

The approach: Look → Create → Build

This guide follows a three-phase methodology:

  • LOOK: Opening space to see what is - mapping your current AI landscape, surfacing tensions, gathering existing wisdom (regulations, benchmarks, your organization's existing values). This phase can largely be done as preparation before your team comes together.

  • CREATE: The messy, collaborative space - where you synthesize insights into principles, define governance structures, and document policies. This is where the team works together.

  • BUILD: Implementation and operationalization - where principles meet practice through risk assessment, mitigation strategies, and ongoing governance controls. This is where the governance structures you've defined take ownership, with designated individuals or teams responsible for ongoing monitoring and accountability.

A note on pace: This work cannot be rushed. Defining principles in a single one-hour meeting produces documents that gather dust. Plan for preparation time (LOOK) plus at least one 90-minute session for collaborative creation, with follow-up time for synthesis and refinement. The investment is worth it. These principles will guide every AI decision your organization makes.

Over the coming weeks, return to Hive Mind to read the LOOK, CREATE, and BUILD parts of the guide, which will help you and your CSO work out your Responsible AI principles.

Have a look at Part 1: LOOK - Preparing the Ground now!

______________________________________________________________________________________________________________________________________________

This article is the first in a four-part series, "The Responsible AI Roadmap for CSOs," developed as part of the AI for Social Change project within TechSoup’s Digital Activism Program, with support from Google.org.

About the Author

Ayşegül Güzel is a Responsible AI Governance Architect who helps mission-driven organizations turn AI anxiety into trustworthy systems. Her career bridges executive social leadership, including founding Zumbara, the world's largest time bank network, with technical AI practice as a certified AI Auditor and former Data Scientist. She guides organizations through complete AI governance transformations and conducts technical AI audits. She teaches at ELISAVA and speaks internationally on human-centered approaches to technology. Learn more at https://aysegulguzel.info or subscribe to her newsletter AI of Your Choice at https://aysegulguzel.substack.com.