This article aims at a more useful conversation, one grounded in an understanding of both the warning version and the promising version of the story. AI tools are already part of how CSO staff work. The question is no longer whether to use them, but how to use them without losing the critical judgement that makes that work meaningful.
The tool you're already using — and may not fully understand
Think about the last time you looked something up. Depending on which browser or search engine you used, the first thing you saw may not have been a list of links but a paragraph, formatted like an answer, generated by an AI model. You may have read it and moved on, considering your search solved. Most people do.
This is one of the ways AI has quietly become an information source, not just a productivity tool. Search engines like Google, Bing, and others now display AI-generated responses before any human-curated result. The paragraph looks authoritative, yet it often arrives without a byline, a date, or a list of sources you can check.
Similarly, using an AI model for preliminary research on a new topic may be more deliberate, but it carries similar risks. The practice is becoming increasingly common because it is quick and efficient. For years, people consulted Wikipedia and similar sources in the same way: not as a definitive source, but as a first orientation when approaching something unfamiliar. There is a fundamental difference, however: Wikipedia shows you its sources, and when a fact is controversial or contested, a disclaimer flags it so you know where the doubts and discrepancies lie. A chatbot gives you a confident narrative with no equivalent transparency.
In an even more casual and carefree situation, users ask an AI model a specific question to resolve a quick doubt: "What does this acronym stand for?", "When was this regulation passed?", or "Who leads this organization?". These are the kinds of questions people used to answer with a fast search or a look at a reliable reference site. Now they ask AI chatbots, and AI chatbots answer fluently, confidently... and sometimes incorrectly.
None of these uses are inherently wrong. The problem is using AI as an information source without understanding what kind of source it actually is.
How these tools actually work — and why that matters
Large language models (the technology behind ChatGPT, Claude, Gemini, and similar tools) do not understand the world or the information they handle. They learn statistical patterns from enormous quantities of text: which words tend to appear together, which phrases follow which other phrases. The model does not grasp the meaning of what it produces. It generates text that resembles what is typically said about a given topic.
This has a direct practical consequence worth remembering whenever we put a question to these tools: the model does not know whether what it says is true; it only knows whether something sounds like the usual, common understanding. For well-documented facts that are widely represented in its training data, this distinction often doesn't matter: the answer will be correct. A good example is the story of a well-established scientific discovery, such as the discovery of radium by Marie and Pierre Curie in 1898. But for recent events, local information, or anything where the available data is sparse, contested, or contradictory, the model can produce a wrong answer with exactly the same fluency and confidence as a correct one. Think, for example, of the contested results of a recent local election in a non-English-speaking country (since these are language models, the language of the training data matters). That is not a bug, but a structural feature of how these systems work.
There is a second structural problem: these models are sensitive to how questions are asked (technically, how they are prompted). Someone who understands how they work can construct a question that leads the model toward validating a false claim. This does not require advanced technical knowledge; a few rounds of trial and error are usually enough. Just as social media algorithms can be understood and turned to someone's advantage, so can language models.
AI and information integrity
The AI-related risks that get the most attention are the most visible ones: deepfakes, synthetic images, AI-generated video. These are real risks. They are also, in some ways, the easier ones to spot and to talk about, because they involve content that was deliberately fabricated.
A subtler risk is what might be called the laundering of misleading narratives. This is content that is not technically false but frames a topic in a way that reinforces a manipulated or distorted understanding. An AI model asked to summarize a contested issue may produce a summary that sounds balanced but systematically omits one side. It can also present a fringe position as mainstream, because that framing was overrepresented in its training data. Nobody fabricated anything, and nobody intended to mislead; the distortion emerged from the model's statistical patterns.
This matters for CSO staff not just as a risk to be aware of in external content, but as a risk in their own outputs. If you use an AI model to research a topic, draft a position, or summarise a report, the biases in the model's training data may surface in what it produces — without any obvious signal that something is off. The output will sound confident and coherent. That is what makes it worth scrutinising.
Where AI genuinely supports information integrity
With those limits clearly in mind, there are many ways AI can support information integrity work, not by replacing human judgement, but by making more efficient use of it.
1. Understanding what's circulating and why
When the volume of content to monitor exceeds what a team can process manually, AI can help with classification and triage. Given a large set of social media posts, articles, or messages, a model can group them by underlying narrative, flag those that match known patterns of concern, or identify which topics are gaining traction. The model proposes the narratives, categories, and groups; the human team validates and decides. This makes the monitoring more efficient and the eventual verification process more strategic.
A related application is narrative extraction and analysis: asking a model to identify the implicit framing of a piece of content before responding to it. What assumptions does this text take for granted? What emotions does it appeal to? What prior belief does it reinforce? Understanding the narrative architecture of misleading content is a very useful complement to fact-checking, because it lets you address many different pieces of content at once when they share the same underlying narrative.
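To make the triage idea concrete, here is a minimal sketch of what it can look like in practice, using the OpenAI Python SDK. The model name, the category list, and the prompt wording are illustrative assumptions rather than recommendations; what matters is the division of labour, where the model proposes labels and a human reviewer validates them.

```python
# A minimal sketch of AI-assisted narrative triage.
# Assumptions: the OpenAI Python SDK is installed, an API key is set in
# the OPENAI_API_KEY environment variable, and the model name below is
# one your organisation has vetted.
from openai import OpenAI

client = OpenAI()

posts = [
    "The new health regulation was secretly written by foreign lobbyists.",
    "Vote counting in district 4 was stopped for three hours last night.",
    "Our volunteers cleaned the riverbank this weekend. Thank you all!",
]

prompt = (
    "For each numbered post below, name the underlying narrative it "
    "advances (a short phrase) and say whether it matches a pattern of "
    "concern (conspiracy framing, election distrust, or none). Do not "
    "judge whether the post is true.\n\n"
    + "\n".join(f"{i + 1}. {p}" for i, p in enumerate(posts))
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[{"role": "user", "content": prompt}],
)

# The model's output is a proposal: a human team reviews the suggested
# narrative labels before any of them inform monitoring decisions.
print(response.choices[0].message.content)
```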
2. Preparing to respond
Before using a document as a source or responding to a claim, it helps to map what is actually checkable in it. Not every assertion in a document is a factual claim, and not every factual claim is specific enough to be verified. A model can quickly identify which statements are verifiable and which are not, so human attention can concentrate where it is most useful. Again: models propose; human judgement reviews, validates, and decides.
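As an illustration, here is a minimal sketch of that claim-mapping step, again using the OpenAI Python SDK. The sample text, the prompt wording, and the model name are assumptions made for the example; the verification itself stays with the human team.

```python
# A minimal sketch of claim mapping: separating checkable statements
# from opinion before a human verifies anything. The document text,
# prompt wording, and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

document = (
    "The ministry's budget grew by 40% in 2023. "
    "This proves the government has abandoned rural communities. "
    "The programme was launched in March and reaches 12 schools."
)

prompt = (
    "List each statement in the text below and label it VERIFIABLE "
    "(a specific factual claim that could be checked against a source) "
    "or NOT VERIFIABLE (opinion, interpretation, or too vague to check). "
    "Do not attempt to verify anything yourself.\n\n" + document
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[{"role": "user", "content": prompt}],
)

# The output is a proposed map of what is checkable; deciding what to
# verify, and the verification itself, remain human tasks.
print(response.choices[0].message.content)
```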
If your organisation is preparing a public communication in response to a misleading narrative, AI can serve as a useful sparring partner. Ask it to find the weak points in your argument, identify unsupported claims, or flag passages that could be misread or taken out of context. This must be done with a clear limitation in mind: this is not outsourcing editorial judgement but stress-testing your work before it reaches the audience.
Finally, when faced with a source or claim you are unsure about, a model can help you structure your critical thinking: what would you need to know to evaluate this properly? What perspectives are absent? What interests might be behind it? The model might not answer those questions reliably, but it can help you ask the right ones.
Habits for using AI without losing critical judgement
The key to using AI without compromising information integrity, then, is not whether to use it, but how. The following habits help:
1. Structured, well-thought-out prompts produce better results. "Summarize this report" and "Identify the three main claims in this report, the evidence cited for each, and any claims that appear unsupported, in no more than 200 words" produce radically different outputs. The more specific you are about what you want, in what format, and with what limitations, the more useful and auditable the result will be (see the sketch after this list).
2. Ask for reasoning, not just conclusions. If you ask a model to assess whether a source is credible, ask it to explain why. A model that reasons out loud produces more accurate results and gives you something to evaluate. If the reasoning doesn't hold up according to your judgement (remember: that’s the one that matters), neither does the conclusion.
3. Do not use AI to verify facts directly. A language model can produce a confident, detailed, entirely fabricated answer to a factual question. Use AI to support and structure your verification process — to identify what needs checking, to generate the questions you should be asking — but never to close it.
4. Be aware that AI outputs carry bias. A model reflects the world as it was represented in its training materials, which means it overrepresents some perspectives and underrepresents others. This can affect the framing of summaries, the examples the model reaches for, and the conclusions it tends toward.
5. Recognize the cases where AI output is likely to be unreliable. AI is particularly inaccurate on very recent events, local or hyperspecific information, and any situation where the key to a possible deception lies in context rather than content. If understanding why something is misleading requires knowing who the audience is, what they already believe, and what moment they are living through, that definitely requires a human check.
6. Document your use of AI. Not as a bureaucratic requirement, but as a discipline. Knowing that you will record how a tool was used changes how carefully you use it. It also builds organizational knowledge about what works and what doesn't, and it is increasingly expected as a basic standard of transparency.
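To illustrate habit 1, here is a minimal sketch contrasting a vague prompt with a structured one, using the OpenAI Python SDK. The file name "report.txt" and the model name are hypothetical stand-ins; the point is the difference in specificity, not the particular API.

```python
# A minimal sketch contrasting a vague prompt with a structured one
# (habit 1 above). "report.txt" and the model name are hypothetical;
# substitute your own document and whichever model you have vetted.
from openai import OpenAI

client = OpenAI()

with open("report.txt", encoding="utf-8") as f:  # hypothetical input file
    report = f.read()

vague = "Summarize this report:\n\n" + report

structured = (
    "Identify the three main claims in this report, the evidence cited "
    "for each, and any claims that appear unsupported. Answer in a "
    "numbered list, in no more than 200 words.\n\n" + report
)

# Run both prompts against the same document and compare the outputs:
# the structured prompt yields a result you can actually audit.
for label, prompt in [("vague", vague), ("structured", structured)]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} prompt ---")
    print(response.choices[0].message.content)
```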
So: can AI help CSOs strengthen information integrity?
Yes — but with caution. AI can help manage volume, surface patterns, structure thinking, and stress-test arguments. But it cannot evaluate context, correct its own biases, or determine whether something is true. The organisations that will use it well are those that understand that distinction clearly enough to know, at any given moment, what they want and require of it.
The tools are useful, but the judgement must always be yours.
Disclaimers
This resource has been created as part of the AI for Social Change project within TechSoup's Digital Activism Program, with support from Google.org.
AI tools are evolving rapidly, and while we do our best to ensure the validity of the content we provide, sometimes some elements may no longer be up to date. If you notice that a piece of information is outdated, please let us know at content@techsoup.org.
"AI and information integrity: a practical guide for CSO staff", by Rocío Benavente Pérez, 2026, for Hive Mind is licensed under CC BY 4.0.
