This article is not about hallucinations. It is about a quieter mistake: the text may look perfectly fine, but the data handling behind it may be risky. For civil society organisations (CSOs), that matters a lot. We work with information that has real consequences: beneficiaries, safeguarding, finances, donors, staff issues, internal conflicts, and security details. In other words: exactly the sort of material that should not casually become “chat context” in a tool you do not fully control.

The goal is not to scare you away from AI. The goal is to make its use predictable—so your organisation gets the benefits of speed, clarity, and better drafts without turning “efficiency” into a data incident with excellent grammar.

Begin with a boring habit that prevents exciting problems

When someone asks, “Can we use AI for this?”, start with three very unglamorous questions:

  1. 1. If this leaks, who gets hurt?
    Think beneficiaries, staff, partners, donors, and your organization's credibility.

  1. 2. If the answer is wrong, who gets hurt?
    A weak social media caption is annoying. A wrong decision about services, eligibility, safeguarding, or money is a governance problem.

  1. 3. Can we verify it cheaply?
    If checking accuracy takes expert time, source documents, or legal review, the task is higher-risk than it first appears.

These questions work because they force you to stop thinking only about the task and start thinking about consequences. That shift alone prevents a surprising number of avoidable mistakes.

Sometimes the safest oversight is “don’t”

AI is excellent at drafting, rewriting, simplifying language, and generating options. It is not excellent at carrying out responsibility. That part still belongs to you.

Many organisations try to solve this with one sentence: “Human review is required.” Sensible, yes. Sufficient, no. If the input contains sensitive personal data, or if the result may influence decisions about people, review is not always enough. Sometimes the safest control is to not put the data into the tool at all. Or to redesign the workflow, so the AI only sees sanitized, low-risk material.

“Safe tasks” are often not safe in real NGO life

Some tasks sound harmless: summarising, translating, rewriting emails, cleaning up notes. In practice, these are often exactly the tasks where sensitive data sneaks in.

A meeting summary may contain names, conflicts, salaries, health information, or safeguarding concerns. A translated email may include locations, identifiers, or contextual details that still make a person identifiable. A rewritten donor note may reveal more about a beneficiary than you intended.

So, the real question is not: “Is this task okay for AI?” The real question is: What is in the input, and what happens to it once it leaves your systems?

AI data risk has three layers

When organisations discuss AI risk, they often get stuck on one question: “Does the vendor use our data for training?” That matters, but it is only one of the three layers.

Layer 1: Training use. Will your prompts and outputs be used to improve the model? Business and enterprise offerings from major vendors often say customer content is not used for training by default, unless the customer opts in. OpenAI states this for ChatGPT Business, Enterprise, Edu, Healthcare, Teachers, and the API. Anthropic states the same for commercial products such as Claude for Work and the API. Google says Workspace with Gemini content is not used for model training outside your domain without permission. Microsoft says prompts, responses, and Microsoft Graph data in Microsoft 365 Copilot are not used to train foundation LLMs.

Layer 2: Retention and logs. Even if training is off, content may still be stored for some time for abuse prevention, support, security, or legal reasons. “Training off” does not mean “stored nowhere.” OpenAI, for example, separates training use from retention and admin controls in its business privacy materials.

Layer 3: Access paths. Who can see the data because of features and configuration? Sharing links, admin tools, audit logs, connected apps, plugins, and agents all create extra routes for exposure. Microsoft explicitly warns that Copilot only surfaces content users already have permission to access, which sounds reassuring until you remember how messy permissions often are in real organisations.

Free, paid personal, and enterprise: what actually changes?

This distinction belongs in every serious AI policy, because people often assume “I pay for it” means “it is safe for work.” It does not.

Free or consumer plans are usually fine only for low-risk, non-sensitive, redacted work. They may offer weaker admin controls, looser sharing boundaries, and less organisational oversight. Some consumer products also allow training on user content unless the user opts out. Anthropic, for example, separates consumer and commercial data handling. OpenAI likewise distinguishes personal plans from business offerings.

Paid personal plans are better thought of as “consumer plus.” You may get more features, faster models, or fewer limits. You do not automatically get enterprise privacy, contractual safeguards, admin controls, or policy enforcement. That is an important difference.

Business, team, or enterprise plans are where serious organisational use begins. These products typically offer no-training-by-default policies, stronger admin controls, better access management, and clearer contractual support for compliance work. But even here, the service does not make you compliant by magic. You still need governance, minimisation, and sane internal rules.

For example, even in an enterprise workspace, a staff member can paste sensitive case notes into a private chat, and months later that content may still exist in account history, be reused in the wrong context, or be copied into an output shared more widely than the original notes ever should have been.

The “Can I paste this into AI?” flow, in plain language

Here is the shortest useful decision flow.

If the input does not contain personal data, ask one more question: is it still confidential? That includes strategy, finance, security details, or sensitive partner discussions. If yes, use an approved organisational tool and keep the input minimal. If not, you are generally in a lower-risk zone: AI can help, and your main task is to review the output.

If the input does contain personal data, only consider using AI if your organisation has an approved tool and setup for processing personal data — typically a business or enterprise service with appropriate contracts, controls, and internal rules. If that is not in place, personal data should stay out of AI tools.

If the data includes special-category data, minors, safeguarding, health, or social cases, the bar is higher still: use AI only in a secure, explicitly approved environment, and only when there is a clear reason to do so. Under GDPR, compliance depends on the legal basis, the contract, the setup, and the purpose of the processing — not on a vendor slogan or the fact that someone pays for a subscription.

If it is “regular” personal data, that still does not mean “safe by default.” Use only approved tools, keep the content minimal, redact where possible, and make sure the organisation understands the contractual and technical setup.

Which LLM tools can be used in a GDPR-aligned way

GDPR compliance is not a badge you buy with a subscription. It depends on the tool tier, the contract, the settings, the workflow, and your organisation’s own governance.

Still, some tools can support GDPR-aligned use more credibly than consumer chatbots:

  • OpenAI – ChatGPT Business / Enterprise / Edu / API
    Training: no by default for business products
    GDPR-aligned use: yes, if used on business tiers with appropriate admin, retention, and contractual controls

  • Microsoft 365 Copilot
    Training: no for prompts, responses, and Microsoft Graph data
    GDPR-aligned use: yes, inside a properly managed Microsoft 365 tenant with permissions and compliance controls in place

  • Google Workspace with Gemini
    Training: no outside your domain without permission; Workspace protections apply
    GDPR-aligned use: yes, in a properly managed Workspace environment with admin controls and internal governance

  • Anthropic – Claude for Work / API
    Training:
    no by default for commercial products
    GDPR-aligned use: yes, on commercial tiers with clear organisational rules around retention, access, and feedback sharing

The trapdoor: connectors, plugins, and agents

The moment an AI tool can access your Drive, email, Teams, SharePoint, or project systems, the risk profile changes. You are no longer managing only what someone pasted into a prompt. You are managing what the tool can access, retrieve, surface, and potentially expose.

This is also where permission mistakes become expensive. Microsoft and Google both stress that enterprise protections depend on existing permissions and admin controls being properly set up. If your file hygiene is messy, AI will not heal it through the power of optimism.

There is also prompt injection: hidden instructions inside a document or webpage can influence how the AI behaves. In plain English, the tool can be tricked by the material it is reading.

Your organisation needs an AI Codex

If you want safe AI use to scale beyond a few careful individuals, you need an internal AI Codex. Not a fifty-page monument to bureaucracy. A practical document people will actually use.

It should define which tools are approved, which are not, what kinds of data may or may not be used with AI, how redaction works, what human review is required, how connectors and agents are handled, what to do after an incident, and when the rules are reviewed.

This is boring governance. Good. Boring governance is what stops the same preventable mistake from happening twelve times in twelve slightly different ways.

Safe AI is mostly predictable habits

AI can genuinely help NGOs but it does not carry the consequences of misuse. Your organisation does.

That is why safe AI use is rarely about clever prompts or magical settings. It is mostly about habits: minimise what you share, choose the right tool tier for the risk, control access, check the GDPR compliance of the AI you’re using, verify what matters, and write the rules down so they survive staff turnover.

And when in doubt, keep one principle in reach: make the input less sensitive, not the tool more magical.


Your Feedback Matters

What did you think of this text? Take 30 seconds to share your feedback and help us create meaningful content for civil society!


Disclaimers

This piece of resources has been created as part of the AI for Social Change project within TechSoup's Digital Activism Program, with support from Google.org.

AI tools are evolving rapidly, and while we do our best to ensure the validity of the content we provide, sometimes some elements may no longer be up to date. If you notice that a piece of information is outdated, please let us know at content@techsoup.org.

"Before You Paste: A Practical Guide to Data Security When Using AI", by Radka Bystřická 2026, for Hive Mind is licensed under CC BY 4.0.