Before you dive into this part, make sure you've read through the article on How CSOs can lead on Responsible AI, the Introduction to this Practical Guide, Part 1: LOOK - Preparing the Ground, Part 2: CREATE - Identifying and Defining Your Principles, and Part 3: Putting It Into Practice.
A program officer at a mid-sized environmental nonprofit asked her colleague: "Is it okay if I use ChatGPT to draft our beneficiary communications?"
The colleague shrugged. "I think so? Maybe ask IT?"
IT said, "That's not really our call. Try leadership?"
Leadership said, "We should probably have a policy on that. Can you look into it?"
The program officer went back to her desk. She had work to do. She used ChatGPT anyway.
This is where AI governance dies, not from bad principles, but from unclear ownership.
If you've followed the earlier articles in this series, you've done meaningful work. You've gathered your team, surfaced tensions, and identified the principles that will guide your organization's AI journey. Perhaps they're even documented, shared, celebrated.
Now what? You've completed the LOOK phase, and now you're entering CREATE, the collaborative space where principles find their structures.
Why Ownership Matters, and Why Now
AI adoption in civil society is accelerating faster than governance structures can keep pace. Your team members are already using AI tools, some you know about, many you don't. Every day without clear ownership is a day of inconsistent decisions, mounting risk, and principles left without guardians.
Here's the uncomfortable truth: "Everyone is responsible" means no one is responsible.
Your governance is already at risk if:
Your principles document has no named stewards
Teams use the phrase "I assume someone approved this"
No one knows who to ask when they hit a grey-area scenario
Different programs are making contradictory AI decisions
For CSOs with limited resources, this ambiguity is especially costly. You can't afford duplicated effort, inconsistent messaging to communities, or the reputational damage of an AI mishap. Clarity isn't bureaucracy; it's survival.
Governance and Principles: A Parallel Conversation
Here's something I've learned from guiding organizations through this work: don't wait until your principles are "finished" to identify who will steward them.
The governance conversation should happen in parallel with defining your principles, not after. Why? Because the people who will hold responsibility for your AI governance should be part of shaping the principles themselves. They need to understand the tensions, the trade-offs, the reasons behind each commitment.
Organizations can spend months crafting beautiful principles, only to struggle when it comes time to write policy. Without named stewards, there's no one to drive the process forward.
Name your governance body early, even in draft form, and it can begin practicing responsibility immediately: facilitating the principles conversation, gathering input, and building the muscle of collaborative decision-making. By the time the policy is ready to be written, the team has already developed the trust and working rhythms needed to guide it.
What Ownership Looks Like: The Tree That Holds the Conversation
Picture a tree in a village square. People gather beneath it. Conversations happen in its shade. It doesn't speak, but it holds the space where important things are discussed.
This is the model for AI governance ownership: The Tree—the small group of people who holds space for the ongoing conversation about AI in an organization. They don't do everything related to AI. They don't make every decision alone. They are the container, the hub, the grounded presence that ensures the conversation continues. The Tree's role is stewardship, not control.
In practice, “The Tree”:
Serves as the first place people go when they have AI questions or concerns, a clear "front door" that didn't exist before
Gathers insights from across the organization about what's working, what's confusing, and what's risky
Ensures that the agreements made in your principles are actually followed, not just at launch, but continuously
Creates space for the ongoing conversation, so AI doesn't become a topic people avoid or handle alone
Connects the dots between different teams' AI use, noticing patterns that no single team would see
Holds institutional memory about past decisions and their reasoning, so no one starts from scratch each time
What does this look like in different organizations? Those that succeed with AI governance don't invent foreign processes; they integrate ownership into how they already work.
Khan Academy demonstrates how governance becomes embedded in existing rhythms. A Responsible AI Steering Group, with leadership representing Product, Data, and User Research, facilitates strategic alignment. A Responsible AI Extended Working Group evaluates upcoming capabilities and monitors launched features. They integrate assessment into product design, evaluating features through demos and feedback loops, and maintain continuous communication with stakeholders. AI governance feels like their way of working, simply applied to a new domain.
ChangemakerXchange (CXC), a global network of social entrepreneurs, offers a model scaled for smaller organizations. Their AI Stewardship Circle, a small dedicated team, acts as keeper of their Mindful AI Manifesto and Policy. Day-to-day, Circle members are the first point of contact when anyone on the team has an AI question or concern. They're where insights get collected: what's working, what's confusing, where people are uncertain. They ensure the team stays aligned with their principles, not just in theory but in daily practice.
More formally, the Circle maintains twice-yearly document updates, guides impact assessments of new tools and workflows, and escalates major or contentious decisions to the entire team. Notably, their Circle can include a community member or external AI expert—bringing outside perspective into governance.
Both examples point to a crucial insight: governance should be a graft, not a transplant. Your existing decision-making pathways, communication styles, and team structures are the foundation. Before designing anything new, ask: How do we already make important decisions here?
A fatal mistake is imposing a governance structure alien to your culture. It will be rejected like a foreign organ. Instead, find where AI ownership naturally fits within what already works.
Different Ownership Models
There's no single "right" structure. The best model depends on your organization's size, complexity, AI maturity, and crucially, your culture.
The Duo: A partnership of two people, ideally one with technical comfort and one grounded in your mission and values. Together, they navigate both "Can we do this?" and "Should we do this?" The duo ensures no one carries this alone. Best for small organizations and early AI adoption.
The Circle: A cross-functional group of 3-7 people representing programs, operations, communications, and leadership. Like CXC's Stewardship Circle, they convene regularly and when significant decisions arise. Consider including community representation. Best for mid-sized organizations with multiple programs using AI.
Hub and Spoke: A central coordination point provides guidance and standards, while implementation responsibility lives with individual teams. The hub equips teams to make good decisions themselves. Best for larger organizations with federated structures.
Start with the simplest model that honors how you actually work. You can always add complexity later.
Collaboration and the Conditions It Needs
If there's one insight that has emerged clearly from responsible AI practice, it's this: collaboration is the essential operating model for responsible innovation.
AI touches everything: data, programs, communications, relationships with communities. No single person holds all the expertise needed to navigate it well. One person carrying AI governance alone will either burn out, become a bottleneck, or make decisions without the diverse input that responsible AI demands.
A simple tool, the RACI matrix, makes collaboration concrete for tasks that genuinely need multiple perspectives, like approving a new tool or developing policy. Clarify who is Responsible (does the work), Accountable (makes the final decision), Consulted (provides essential input), and Informed (kept updated).
Collaboration means multiple people are Responsible (doing the work) and Consulted (providing essential input). But Accountable, the person who makes the final call, must be one person. When accountability is shared, it disappears. No one wants to be the one to decide, so no one does. Having a single accountable person isn't about hierarchy; it's about ensuring decisions actually get made.
For example, when approving a new AI tool for community outreach, the roles might be distributed as follows:
Accountable: Your Stewardship Circle lead
Responsible: The program team requesting the tool, a technical team member
Consulted: Community representatives, legal/compliance if relevant
Informed: Leadership, broader staff
Notice how community consultation is built in, not as an afterthought, but as essential input.
However, collaboration only works within the right culture. To extend the metaphor, your Tree needs the right soil to grow in:
Safe spaces and psychological safety. People at all levels must feel genuinely safe to speak up about AI risks, potential biases, or ethical concerns, without fear of retribution. The concerns that go unspoken are often the ones that matter most.
Deep listening and inclusive dialogue. Governance bodies can't operate in a vacuum. They must actively seek out and value diverse perspectives, technical and non-technical, internal and external. This includes the communities you serve, whose lived experience is irreplaceable insight.
Constructive conflict resolution. Discussions around AI ethics inherently involve differing viewpoints and genuine trade-offs. The ability to navigate disagreements constructively, seeking common ground or making principled, transparent decisions, is critical.
Fostering change at every level. Effective AI governance isn't just top-down policy; it's a catalyst for cultural evolution, encouraging critical thinking about technology and better collaboration across teams.
Making It Real: The Charter
A charter is not red tape. It's the single document that gives your Tree roots, defining its purpose, power, and process.
Your charter should answer five questions:
Why do we exist? (Your mandate)
What will we do? (Key responsibilities, and what you won't do)
Who are we? (Membership, selection, terms)
How do we decide? (Process for decisions and escalation)
How do people work with us? (The "front door" for questions)
The accompanying template provides a charter framework ready to adapt. But remember: the template is inspiration, not prescription. The real goal is to answer these questions in a way that fits your organization's unique culture.
An Invitation to Ground Your Principles
Your organization's AI principles represent real work: honest conversations, surfaced tensions, shared commitments. They deserve more than a document. They deserve a home.
Identify the Tree. It might be a duo willing to hold this conversation together. It might be a small circle of trusted colleagues. Name them now, even before your principles feel "finished." Let them grow into the role by practicing governance through the policy-writing process ahead.
The principles are waiting to become practical. Give them a Tree to grow beneath, and people to tend it together.
Next in This Series:
AI Policy - From principles to practical rules
Risk Assessment - Protecting what matters through risk identification and mitigation
Companion resource: Making It Real: A Charter Template to Guide Your Thinking
Your Feedback Matters
What did you think of this text? Take 30 seconds to share your feedback and help us create meaningful content for civil society!
This guide is part of "The Responsible AI Roadmap for CSOs," developed as part of the AI for Social Change project within TechSoup’s Digital Activism Program, with support from Google.org. All resources are published under Creative Commons Attribution 4.0 International license.
About the Author
Ayşegül Güzel is a Responsible AI Governance Architect who helps mission-driven organizations turn AI anxiety into trustworthy systems. Her career bridges executive social leadership, including founding Zumbara, the world's largest time bank network, with technical AI practice as a certified AI Auditor and former Data Scientist. She guides organizations through complete AI governance transformations and conducts technical AI audits. She teaches at ELISAVA and speaks internationally on human-centered approaches to technology. Learn more at https://aysegulguzel.info or subscribe to her newsletter AI of Your Choice at https://aysegulguzel.substack.com.
