There is plenty of regular misinformation on the Internet, and to make matters worse, we are more exposed to it than ever before due to the acceleration of AI-generated content online. There is solid evidence that false content now spreads on social media platforms at roughly a 70% higher speed than truthful content.
However, there is no fully consistent way to report mis- and disinformation. Different countries have different laws and interpretations of what misinformation is, and generally only a small fraction of it is considered illegal content anyway. In the same vein, each platform's misinformation policies (their Terms & Conditions, Community Guidelines, etc.) differ widely, and the platforms' enforcement of them can be, and often is, quite inconsistent.
AI-generated content, and deepfakes in particular, are a different matter, however, as they often overlap with illegal content types: non-consensual pornographic content, data protection infringements, financial scams, impersonation, child sexual exploitation material, and others. For this reason, although AI and deepfake laws are for the most part still in their infancy, countries have been more eager to regulate this type of content.
In this context, we might wonder what to do when we find things online that are untrue. Sometimes you might want to reply to a post explaining to the author how they got their facts wrong, or to propose a Community Note (on X) adding the information you think is needed for full context. Other times, you might want to report a particular piece of content to the platform hosting it because it is either illegal in your country or against the platform's own rules on misinformation. You might even want to notify the authorities about a piece of misinformation that is illegal, such as a hate crime or a scam.
Let's do a quick roundup of reporting options for both regular and AI-generated misinformation that civil society organisations (CSOs) can use. These will depend on the platform you're using and the country you're using it in.
Reporting misinformation in the European Union and the United Kingdom
In the EU, all hosting services, including every major platform such as X, YouTube, TikTok, Facebook, and Instagram, have a legal obligation to offer mechanisms that allow their users to flag content they consider illegal. What counts as illegal misinformation varies by country: in France, for example, the trivialization or denial of crimes against humanity can be illegal, as is promoting totalitarianism in Poland, or advertising a product by falsely impersonating a brand in Spain.
Most AI-generated content, including deepfakes, is illegal only when it is used to commit particular crimes such as fraud, harassment, defamation, or electoral manipulation. However, some countries have introduced standalone legislation for AI-generated content. Italy, for instance, created a new offence (Art. 612-quater of the Criminal Code) covering the dissemination of AI-generated or altered images, video, or audio where such material causes unjust harm. France has likewise created an offence relating to sexual deepfakes (Law No. 2024-449 of 21 May 2024). Sexual deepfakes are also covered under Irish law (the Harassment, Harmful Communications and Related Offences Act 2020), which criminalizes sharing non-consensual intimate images, including deepfakes.
Europeans in any EU member state can report content they consider illegal in their country, regardless of where it was posted from. The reporting mechanism must be offered free of charge, and the platform must notify the person who filed the report of its decision and of the available avenues for redress. Generally, if the platform agrees that the content is illegal, it will remove it or restrict it in the relevant country, not least because the platform can later be held legally liable if a judge eventually determines it should have recognized right away that the content was illegal.
Reporting illegal content is also possible through national channels in several EU member states. In France, the Ministry of the Interior operates PHAROS, an online portal for reporting certain kinds of illegal misinformation (such as defamation, scams, or incitement to hatred). Cases can then be assigned to the relevant French national authority, which decides whether the report should lead to judicial action.
In Poland, the Dyżurnet.pl portal within the National Research Institute (NASK) collects reports of illegal online content, a category that includes insulting someone because of their national, ethnic, racial, or religious affiliation. In Italy, the Autorità per le Garanzie nelle Comunicazioni (AGCOM) offers a tool for reporting any legal breach on digital platforms, as well as a specific form for reporting online videos that contain, for example, misinformation illegally inciting racial, sexual, religious, or ethnic hatred. In Spain, you can also report misinformation of a sexual or violent nature, including AI-generated content, through the priority channel for the removal of sexual or violent content run by the data protection agency. Across Europe and in many other jurisdictions, users can turn to the INHOPE hotlines to report AI-generated CSAM, and Stop NCII offers a dedicated mechanism for reporting sexual deepfakes.
Additionally, European Union law obligates the larger platforms to let users file complaints based on their Terms & Conditions, and to reverse any actions that do not align with them. This is useful because certain kinds of misinformation are covered by those rules. TikTok, for example, does not allow AI-generated content “that misleads about a matter of public importance”; Google prohibits ads that use manipulated media “to deceive, defraud, or mislead”; and LinkedIn vows to remove “claims that are demonstrably false or substantially misleading and likely to cause harm”. YouTube goes a step further by providing creators, and now journalists, government officials, and political candidates, with a likeness detection tool that helps them find deepfakes of their face on the platform and request their removal.
Several watchdogs and media organizations have published investigations that resulted in these companies enforcing their own rules against particular accounts: TikTok banned 20 accounts after the BBC highlighted the use of AI-generated Black female influencers to drive users to sites promoting sexually explicit content, and Indicator Media's reports led Meta to take down 200 ads for non-consensual sexualized image generation services.
Those policies are normally global, so you can notify a platform that they are not being followed from any country. In the EU, however, the larger services also have a legal obligation to maintain a publicly available, up-to-date version of their Terms and Conditions and to track changes to them in a publicly accessible repository. Critically, as mentioned before, they are also required to reverse any decision that is incompatible with those terms.
In the United Kingdom, the law also makes it clear that the most popular platforms (those used by at least 10% of the European population for at least six months) must enforce their Terms of Service, especially where those prohibit a particular kind of user-generated regulated content. However, although the independent regulator Ofcom is responsible for enforcing those obligations, it does not investigate individual complaints and directs users to contact the platforms directly. Another resource is reportharmfulcontent.com, run by the charity SWGfL, which advises users on how to report some types of harmful content, including certain kinds of misinformation, such as impersonation.
When it comes to illegal misinformation, there are other resources. The National Police Chiefs' Council of the United Kingdom operates True Vision, a dedicated tool for reporting online hate crimes. Additionally, criminal law covers specific types of harmful online content related to the dissemination of disinformation with the intention to “cause non-trivial psychological or physical harm to a likely audience”. The Online Safety Act also makes it illegal to share, or threaten to share, intimate images of adults, including deepfake images, without consent. Any of these crimes can be reported to the police in person or online, or anonymously through the independent charity Crimestoppers, which can then pass the report to law enforcement. Underage users wishing to remove sexualized images or videos of themselves online can also use Report Remove, a dedicated mechanism for that purpose.
It is also worth noting that deepfakes frequently raise data protection issues, since using someone's image, voice, or name can constitute personal data processing. If you are in the EU, you can file a complaint with your national data protection authority; in France, for instance, this is the CNIL (Commission nationale de l'informatique et des libertés, or National Commission on Informatics and Liberty). In the UK, the equivalent body is the ICO (Information Commissioner's Office), which takes a similar view of deepfakes and data protection.
Reporting misinformation in Nigeria, Ghana, Kenya, and South Africa
In Nigeria, a mandatory Code of Practice for Interactive Computer Service Platforms/Internet Intermediaries requires large platforms to tell their users, in their Terms and Conditions, not to create or share “false or misleading” information. The National Information Technology Development Agency (NITDA) is responsible for monitoring its application. It does not provide a dedicated channel for users to report misinformation, but it does have a general online complaints form. The same code compels platforms to take down any unlawful content “as soon as reasonably practicable” after receiving a report.
Unlawful online content is for the most part governed by the Cybercrimes (Prohibition, Prevention, etc.) Act 2015. The law establishes a framework applicable to malicious uses of technology and criminalizes offences such as unlawful access, identity theft, cyberstalking, offensive messages, and publishing obscene content, all of which can be committed using AI technologies. For instance, the Advertising Regulatory Council of Nigeria (ARCON) issued a warning over the rise in AI-generated ads that use deepfakes of public figures to promote fraudulent goods and services. These crimes and other illegal content can be reported to the Nigeria Police Force or to the National Cybercrime Centre, either on WhatsApp or by calling its helpline.
The law in Ghana criminalizes the “publication of false news with intent to cause fear and alarm” and penalizes anyone who uses the Internet to “knowingly send a communication which is false or misleading and likely to endanger the safety of any person”. The Cybersecurity Act builds on this, more broadly forbidding cyber deception, disinformation, identity theft, and unlawful data manipulation, including the production and distribution of false or misleading digital content such as deepfakes. The country's Cyber Security Authority operates an online incident reporting form through which you can report cyberbullying, fraud, misinformation, online blackmail, online child abuse, online impersonation, and the publication of non-consensual intimate images (NCII).
In Kenya, misinformation that constitutes hate speech can be reported through an online form to the National Cohesion and Integration Commission (NCIC), which can then refer cases to law enforcement. Misinformation at large is regulated through the Computer Misuse and Cybercrimes Act 2018, which targets “false publications” by prohibiting the making and distribution of content intended to “alarm, distress, or cause fear”, and includes a further provision against the unauthorized manipulation of data. According to security experts, Kenyans face high rates of deepfake-enabled scams centred on fraudulent investment schemes. These and many other cyber incidents (abusive content; child, sexual, or gendered violence; harassment) can be reported through the national Computer Incident Response Team's webpage.
In South Africa, the Electoral Commission and the NGO Media Monitoring Africa created the website Real411.org, where it is possible to file reports of digital harms, including mis- and disinformation. The reports are reviewed by digital experts who make a reasoned decision on the appropriate action to follow. The site operates on the basis of a voluntary Code of Conduct that the major digital platforms have not yet committed to, so the concrete effects of reporting remain unclear.
Illegal content online can be reported like any other crime by going to a police station. One can also flag suspicious online content through the Police's Crime Stop tip-off line or by logging a query with the national Cybersecurity Hub. Individuals can also use the reporting form provided by Cybercrime.org.za, an organization committed to fighting the criminal exploitation of technology. Additionally, certain forms of illegal online misinformation, such as slander and defamation, can be reported online to the Film and Publication Board.
Fact-checkers in Africa also play an important role in tackling misinformation. You can submit a claim for fact-checking to Africa Check, an Africa-wide fact-checking organization. In Ghana and Nigeria in particular, you can also submit suspicious information to Dubawa.
Your Feedback Matters
What did you think of this text? Take 30 seconds to share your feedback and help us create meaningful content for civil society!
About The Author
Carlos Hernández-Echevarría is a journalist with 15 years of experience in television as a reporter, correspondent, and program manager. He is a member of the permanent task force of the Code of Practice on Disinformation and the EDMO working group on disinformation. He holds a degree in Journalism from Universidad San Pablo CEU and a Master's in Elections and Campaign Management as a Fulbright Fellow at Fordham University.
Disclaimers
This resource has been created as part of the AI for Social Change project within TechSoup's Digital Activism Program, with support from Google.org.
AI tools are evolving rapidly, and while we do our best to ensure the validity of the content we provide, sometimes some elements may no longer be up to date. If you notice that a piece of information is outdated, please let us know at content@techsoup.org.
This content was created with AI assistance and has been reviewed and edited by Carlos Hernández-Echevarría.
