NGOs work for the common good, often on behalf of vulnerable groups or handling sensitive issues. That’s why it’s crucial to use AI thoughtfully — respecting privacy, being transparent, and staying aware of the risks. Below, we share nine best practices to help you use AI wisely and safely.
Key Principles:
Always protect personal data.
Treat organizational information like a secret, even from AI.
Be open about using AI.
Humans make the final decisions.
Double-check before sharing.
Use inclusive language.
Plan and assess AI’s impact.
Think about the planet — AI leaves a footprint, too.
Unsure? Ask!
1. Always Protect Personal Data
Let’s start with the basics: protecting personal information. NGOs often have access to sensitive data — names, addresses, personal IDs, health details, life stories of those they support. This data must never be fed into AI without first anonymizing it.
Before entering any data into an AI tool, remove anything that could identify a person. Name? Replace it with "Person 1." Address? Leave it out. Personal ID? Absolutely not. AI doesn't need these details to generate content or analysis, and you avoid the risk of serious privacy breaches. Why is this so important? Because trust is built on people feeling safe. Let's not put that at risk.
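For teams that prepare data in bulk, the replace-the-name, strip-the-ID step above can even be scripted. Below is a minimal, hypothetical Python sketch: the `anonymize` helper, the list of known names, and the 11-digit pattern (matching Polish PESEL numbers) are illustrative assumptions, not a complete anonymization solution, and a human should always review the result before it goes anywhere near an AI tool.

```python
import re

def anonymize(text, known_names):
    """Replace known names with numbered placeholders and mask
    11-digit personal IDs (e.g. Polish PESEL numbers).
    A minimal sketch, not a complete anonymization tool."""
    # Swap each known name for a neutral placeholder like "Person 1".
    for i, name in enumerate(known_names, start=1):
        text = text.replace(name, f"Person {i}")
    # Mask any standalone 11-digit sequence that looks like a personal ID.
    text = re.sub(r"\b\d{11}\b", "[ID removed]", text)
    return text

note = "Anna Kowalska (PESEL 90010112345) visited the shelter."
print(anonymize(note, ["Anna Kowalska"]))
# Person 1 (PESEL [ID removed]) visited the shelter.
```

Note that a simple script like this only catches what you tell it to catch; indirect identifiers (a rare job title, a small village name) still need a careful human read.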
2. Treat Organizational Information Like a Secret, Even from AI
AI can help create promotional materials or analyze projects, but that doesn’t mean you can share everything freely. Details about partners, budgets, internal processes, or contracts should stay confidential.
For example, if you want AI to polish a project description, don’t include exact details like “Project for homeless people at Shelter Aid in Łódź with a 100,000 PLN budget.” A more general description, like “Social support project in a large city”, works just as well and lowers the risk of leaking sensitive information.
Many AI tools learn from the data you input, especially the free versions. We don’t always know where or how this data is stored or used. So be cautious and protect your organization’s confidentiality.
3. Be Open About Using AI
Using AI? Don’t be shy about it! Transparency builds trust — with your team, partners, and the people you serve. If a report, presentation, or graphic involves AI, mention it clearly — even briefly, like in a footer or at the end.
Transparency means not just saying “we used AI” but explaining how much it helped. Did AI suggest ideas or polish the wording? Or did it create the entire text or graphic? Maybe it was just a spark of inspiration, with the final work being your own?
This shows you’re in control, not handing over responsibility to a machine. It also helps your audience understand what they’re seeing, avoiding misunderstandings and proving you take this seriously.
4. Humans Make the Final Decisions
AI might seem like it “knows best,” but remember: we’re responsible for decisions made with its help. AI can suggest solutions, support data analysis, and provide arguments — but the final call is yours.
We work with and for people, and it’s human empathy, intuition, and experience that decide what’s right. AI doesn’t grasp emotional context or subtle human nuances, nor can it take responsibility. Always think critically about AI’s input and treat it as a helper, not a boss.
5. Double-Check Before Sharing
AI can write beautiful texts, but sometimes… it makes things up. This is called "hallucination" — generating convincing but false information. It might invent sources, mix up dates, or attribute actions to people who never took them.
Before using AI-generated text in your work, fact-check it. Compare it with reliable sources and consult your team. This is crucial in NGOs, where we deal with sensitive topics and real people. Misinformation can harm both your organization and those you support. Better safe than sorry.
6. Use Inclusive Language
AI learns from huge datasets that often contain stereotypes and biases. Even if AI tries to be "neutral," it may unintentionally ignore diverse perspectives or repeat harmful simplifications. NGOs, which stand for equality and against discrimination, must be especially careful.
Language matters — AI-generated text may miss the social or emotional context we work in. Before publishing, read it carefully and ask: Is the language neutral and respectful? Does it avoid simplifying or labeling people in harmful ways?
For example, AI might say, “We help poor people who have nothing.” While it sounds empathetic, it reinforces stereotypes. A better, more inclusive version is: “We support people experiencing poverty or life crises.” A small change shows how important language sensitivity is.
7. Plan and Assess AI’s Impact
Thinking about adopting AI in your NGO? Great! But before jumping in, pause and consider: how might it affect the people you serve, your team, and your mission?
Schedule regular check-ins — review how AI tools are working. Are they helpful and intuitive? Or do they create more hassle? Sometimes what’s meant to speed things up actually slows us down. On the other hand, AI might surprise you with unexpected benefits.
8. Think About the Planet — AI Leaves a Footprint Too
Using AI might feel “invisible” — just typing and clicking — but every AI query consumes significant energy. If your organization cares about environmental responsibility — saving paper, recycling, sustainable transport — remember that technology has its own carbon footprint.
What can you do? Choose lighter, less energy-intensive tools. Use AI only when really needed, not “just in case.” Check if your AI provider uses green data centers or invests in renewable energy. It helps keep your eco-friendly commitments consistent — because technology and ecology can go hand in hand.
9. Unsure? Ask!
You don’t need to be an AI expert to use it. If you’re unsure about what data is safe to enter, whether a text violates privacy, or how to protect your work, just ask.
Maybe someone in your organization already knows the ropes. If not, look outside — many institutions offer free online courses, webinars, and guides on responsible tech use. Consider creating an internal learning space — a simple guide or short training to help your team feel confident. AI evolves fast, but with good preparation, you can use it safely and effectively.
A Few Words of Encouragement
AI isn’t the enemy. Used responsibly, it can genuinely ease NGO work — saving time, aiding analysis, inspiring new ideas. But like any tool, it demands caution, knowledge, and ethical care.
No need to fear AI — but stay alert. Remember, the nonprofit sector serves people, and that comes with responsibility. Let’s use AI smartly, respectfully, and with common sense. The future of technology is our future too — and it’s up to us how we shape it.
DISCLAIMER: This text was prepared with AI assistance during editing and proofreading, but all content was ultimately reviewed and approved by a human.
Author: Ewa Patyk
Background illustration: Andrii Zastrozhnov