Before you dive into this part, make sure you've read through the article on How CSOs can lead on Responsible AI, the Introduction to this Practical Guide, Part 1: LOOK - Preparing the Ground, Part 2: CREATE - Identifying and Defining Your Principles, Part 3: Putting It Into Practice, and Part 4: CREATE - Defining AI Governance Ownership for CSOs.

In some organizations, the AI policy starts with a traffic light check any team member can run in under 60 seconds. In others, the policy includes a principle called "Not Led by the Machine," a phrase that came directly from a team member during a workshop. People reference it constantly, not because they were told to, but because they recognize themselves in it.

Compare this with the more common experience: someone is asked to "write the policy," and three months later a 20-page document sits in a shared drive that nobody opens.

The difference? The first two policies were written with people. The third was written about them.

This article, the third in The Responsible AI Roadmap for CSOs series, explores how civil society organizations (CSOs) can create AI policies that teams actually use. Not compliance documents that gather dust, but living records of shared agreements.

What an AI Policy Actually Is (And What It Isn't)

An AI policy is not a legal document imposed from above. It is the written version of collaborative agreements, the ones an organization has been building through the earlier stages of this roadmap.

The progression is natural: In Article 1, tensions became principles, the values that anchor every AI decision. In Article 2, those principles found a home in the governance body, the Tree, the people who steward the ongoing conversation. Now the Tree's first major task is to document the agreements that guide daily AI use. The policy is simply writing those agreements down in language the whole team can act on.

And if an organization hasn't yet gone through those earlier stages? Starting with a policy conversation is still valid. In practice, it naturally surfaces the principles and governance questions that need to be addressed, just in a different order.

In the LOOK → CREATE → BUILD methodology from Article 1, policy development is the final act of the CREATE phase: writing down the agreements before moving into BUILD, where risk assessment makes them operational.

A useful distinction:

  • Principles say what we believe.

  • Governance says who holds the conversation.

  • Policy says what we've agreed to do.

This framing matters because it changes the emotional weight of "writing a policy." It's not about inventing rules. It's about documenting decisions the team has already made together.

One important note on scope: Most organizations begin their AI policy focused on generative AI use, with practical guidelines for tools like ChatGPT, translation services, or image generators. That's a valid and sensible starting point. As an organization matures and begins conducting risk assessments (covered in Article 4), those findings naturally flow back into the policy, expanding it into a broader governance document. Starting with GenAI use is not "doing it wrong"; it's doing it in the right order.

Why Does It Matter Now?

Two realities make this task urgent for civil society organizations.

  • The shadow AI problem. Without clear, accessible agreements, people make decisions in isolation, exactly like the scenario that opened Article 2 of this series. In organizations that have gone through a diagnosis process, one of the most common discoveries is AI tools in use across the team that have never been formally discussed. This isn't a failure of individuals; it's a failure of clarity. Policy makes the invisible visible, not to punish, but to create shared ground.

    The risks are concrete: a volunteer pastes beneficiary details into a free AI tool without realizing the data may be used for training; a grant report goes out with AI-hallucinated statistics that no one verified; an AI-translated legal document gets a critical detail wrong. These are not hypothetical; they are scenarios CSOs are already encountering.

  • The trust problem. CSOs hold uniquely sensitive information: beneficiary stories, case details, donor relationships, community trust built over years. A policy that the whole team understands and owns is what protects that trust. Not a static document filed away, but a living reference people consult when they hit a grey area.


Who Creates It (And How)?

The governance body (the Tree) leads the process, but the policy should carry the fingerprints of the people who will live with it every day. In a small team, the whole organization might be in the room for the policy conversation, and that's actually an advantage. Fewer layers mean faster alignment and stronger ownership.

In practice, two approaches tend to work well.

1) The manifesto-to-policy approach works when an organization has already articulated its principles in a public-facing document. The governance body translates those principles into practical daily guidance (a traffic light system, rules of the road, incident reporting channels), and the full team reviews and refines the draft. ChangemakerXchange's Mindful AI Policy is a strong published example of this approach.

2) The facilitated co-creation approach works when the policy needs to be built from the ground up. The process typically includes a diagnosis session to map existing AI use and the regulatory landscape, a principles workshop where a small group representing different roles surfaces hopes, fears, and non-negotiables, and a co-creation workshop to confirm the draft together.

Both approaches share a critical design decision: use participants' own language in the final document. When someone's exact phrase becomes the name of a principle or a rule, the policy stops being "management's document" and becomes something the team owns. People follow agreements they helped shape.

A few committed voices can shape a strong first version, but the process shouldn't stop there. Sharing the draft broadly for feedback, and using training sessions as another opportunity to surface questions and gaps, widens the circle deliberately before the policy is locked in. The more diverse the input, the fewer blind spots in the final document.

The pattern holds regardless of size: Surface what matters. Make agreements together. Write them down in language the whole team recognizes. The governance body (the Tree) leads this process, but the whole team's voice should be in the document.

What Goes Into It? The Essential Components

Rather than listing a dozen sections, the most useful way to think about policy content is through the three questions every team member will ask.

1) "Can I use this?" — The Traffic Light Framework

Organizations that have built effective AI policies consistently arrive at some version of a traffic light system. It works because it replaces the paralysis of uncertainty with a simple, actionable check:

🔴 Red Light (Stop): Non-negotiable boundaries flowing directly from principles.

For example: Confidential data never enters public AI models. AI never makes consequential decisions about people without human review. Sensitive personal testimonies (such as from asylum seekers, survivors, or people in crisis) are never processed through public tools. These are the organization's absolute lines.

🟡 Yellow Light (Pause and Ask): This is where the governance actually happens.

For example: high-stakes decisions, new tools not yet vetted, or sensitive data in any AI context. These trigger consultation with the governance body before proceeding. Yellow is not "no"; it's "let's check together."

🟢 Green Light (Go): Approved uses within established guidelines, giving people confidence to act without seeking permission for routine tasks.

For example: using an approved AI tool to draft a first version of a public newsletter, or generating a list of discussion questions for a community workshop using only non-sensitive, public-facing information. Of course, this will depend on the agreements made internally.

A crucial nuance: The traffic light applies to the combination of tool + use case + data sensitivity, not the tool alone. The same tool can be green for drafting a public blog post and red for summarizing confidential case notes. All these details are organization-specific.
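For teams with a technically inclined member, this combination logic can be pictured as a tiny decision function. The sketch below is purely illustrative: the tool names, data categories, and conditions are hypothetical placeholders, and the real agreements belong in your policy document, not in code.

```python
# Illustrative sketch only: every tool name and category below is hypothetical,
# and the real rules live in your own policy document.

APPROVED_TOOLS = {"approved-ai-assistant"}  # tools the governance body has vetted
RED_DATA = {"beneficiary_details", "case_notes", "personal_testimony"}  # never enters public AI tools


def traffic_light(tool: str, use_case: str, data_sensitivity: str) -> str:
    """Return 'red', 'yellow', or 'green' for a tool + use case + data combination."""
    # Red: absolute lines, regardless of which tool or task is involved.
    if data_sensitivity in RED_DATA:
        return "red"
    # Yellow: pause and ask the governance body (the Tree) before proceeding.
    if tool not in APPROVED_TOOLS or use_case == "consequential_decision":
        return "yellow"
    # Green: approved tool, routine task, non-sensitive data.
    return "green"


# The same tool lands on different lights depending on use case and data:
print(traffic_light("approved-ai-assistant", "draft_newsletter", "public"))      # green
print(traffic_light("approved-ai-assistant", "summarize_notes", "case_notes"))   # red
print(traffic_light("new-translation-tool", "translate_report", "public"))       # yellow
```

The point of the sketch is simply that the light is a property of the whole situation, not of the tool; most organizations will capture the same logic in a one-page table or quick reference card rather than code.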

2) "What are the rules of the road?" — Daily Use Guidelines

Beyond the traffic light, every policy should include baseline agreements that apply to all AI use, even green light workflows. These are starting points; each organization will develop rules that reflect its own principles and ways of working.

  • Fact-check everything. AI-generated content can be wrong or entirely fabricated. Treat every output as a first draft, not a final source.

  • Protect data. Treat information entered into external AI tools as if it were being published publicly. Anonymize sensitive details.

  • Maintain authenticity. Ensure AI-assisted content reflects the organization's voice; ChangemakerXchange calls this guarding against "AI slop."

  • Be transparent. Disclose AI use when it substantively shaped content, especially in external communications or evaluations.

  • Be proportional. Not every email needs an AI draft. Consider whether the task justifies the tool.

3) "What if something goes wrong?" — The Learning Process

A policy is incomplete without a clear path for when things go wrong, framed as "flag it, don't hide it." That means a named person or channel for reporting, clarity about what information helps, and a commitment to learning rather than punishment. Whatever fits the culture (a chat thread, an email alias, a named lead), the point is that it exists and everyone knows about it.

The Structural Elements

Around these three core questions, a few structural elements hold the policy together: a clear scope (who the policy applies to), the governance connection (who owns the policy), the AI inventory (the living tool list from Article 1), and a review commitment (when and how updates happen).

Additionally, there is one last element that's easy to overlook: training. A policy that no one has been walked through is a policy that won't be followed. This doesn't require elaborate programs: a 30-minute team session explaining the traffic light, walking through real scenarios, and answering questions can be enough. For example, a frontline team member can be chosen to deliver training to peers, someone who understands the daily reality of the work. The format should fit the culture: a team meeting, a short video, a one-page quick reference card, or scenario-based walkthroughs. The goal is that everyone knows the red lines, understands the traffic light, and knows where to go with questions.

A Policy That Grows: From The First Version to A Living System

The most important thing about version one is that it exists.

Organizations that treat their AI policies as living documents make sure to build in specific evolution mechanisms: scheduled reviews every six months, triggers when a new tool is adopted or a significant incident occurs, and feedback loops where yellow light consultations and training conversations continuously surface what needs to be clearer.

The growth path is natural. Version one focuses on generative AI use. As the organization begins conducting risk assessments (Article 4), those findings feed back: yellow light scenarios get examined through impact assessment, mitigation strategies emerge, and new guardrails are added. The document grows because the team's understanding has deepened, not because someone decided to make it longer.

An Invitation to Begin

The principles are defined. The Tree is named. The policy is simply writing down what has already been agreed, and giving the whole team clarity to act with confidence.

Start with the red lines. Build the traffic light around them. Version one doesn't need to be comprehensive; it needs to be honest, collaborative, and useful. The rest will grow.

The companion resource, "A Practical Guide to Creating Your AI Policy," provides the full template: guiding questions for each section, example text to adapt, and a process for working through it as a team.

Next in this series: The policy's yellow light scenarios are exactly the starting point for risk assessment. The next article will explore how to examine those cases more deeply, assess potential harms, and build the guardrails that flow back into future policy versions.

Your Feedback Matters

What did you think of this text? Take 30 seconds to share your feedback and help us create meaningful content for civil society!


This article is part of "The Responsible AI Roadmap for CSOs" developed as part of the AI for Social Change project within TechSoup’s Digital Activism Program, with support from Google.org. All resources are published under Creative Commons Attribution 4.0 International license.

About the Author

Ayşegül Güzel is a Responsible AI Governance Architect who helps mission-driven organizations turn AI anxiety into trustworthy systems. Her career bridges executive social leadership, including founding Zumbara, the world's largest time bank network, with technical AI practice as a certified AI Auditor and former Data Scientist. She guides organizations through complete AI governance transformations and conducts technical AI audits. She teaches at ELISAVA and speaks internationally on human-centered approaches to technology. Learn more at https://aysegulguzel.info or subscribe to her newsletter AI of Your Choice at https://aysegulguzel.substack.com.