The two components of disinformation
Disinformation is more than a simple falsehood: as NATO defines it, it is the deliberate creation and dissemination of false or manipulated information with the intent to deceive or mislead.
Understanding disinformation therefore requires recognizing its two key components: the creation of false or misleading content, and the strategic dissemination of that content. It is the interplay between these two elements that enables disinformation to effectively deceive and manipulate public perception.
AI commoditization of content creation
The advent of generative AI has significantly lowered the barriers to creating sophisticated and convincing deceptive content. Generative AI models can produce text, audio, and video content that is increasingly indistinguishable from human-generated content. This poses a significant challenge to maintaining information integrity in the digital age.
Furthermore, the rapid pace of AI advancement means that the quality and realism of generated content will continue to improve, making it ever more difficult for both individuals and automated systems to detect and counteract deceptive AI-generated content.
As these tools become more widespread, the volume of deceptive content is likely to grow, overwhelming traditional fact-checking mechanisms and increasing the risk of widespread misinformation.
Distribution advantage of established networks
When it comes to distributing disinformation, actors with established content distribution networks hold a significant advantage: these networks provide ready-made channels through which false content can be spread rapidly and widely.
Established networks often have large and loyal followings that trust the content they receive from these sources. Once false information is introduced into these networks, it can spread rapidly due to the high level of engagement and interaction among users.
Social media platforms and search engines use complex algorithms to rank and display content. Disinformation actors who understand these algorithms can manipulate them to ensure their content gains visibility. Techniques such as keyword stuffing, the use of trending hashtags, and creating content that elicits strong emotional reactions can help disinformation rise to the top of search results and personalized feeds, increasing its reach and impact.
The use of botnets—networks of automated accounts—allows for the rapid amplification of disinformation. Bots can be programmed to like, share, comment on, and otherwise engage with content, creating the illusion of popularity and credibility.
In contrast, actors without established distribution networks face significant challenges. They must find ways to break through the noise and capture the attention of target audiences, which often involves gaming ranking algorithms, leveraging viral marketing techniques, or infiltrating existing networks to gain visibility.
The distribution advantage of established networks highlights the importance of addressing not just how disinformation is created, but also how it spreads. Effective countermeasures must target the channels and networks used for dissemination as much as the content itself.
From disinformation to behavioral manipulation
The true power of disinformation lies in its ability to manipulate behavior through subtle and often insidious means. This goes beyond the binary debate of what constitutes truth versus lies and becomes a matter of psychological and social manipulation.
The aim is to shift people's perceptions and ways of thinking in order to influence their behavior and produce intended outcomes in the real world. This can range from swaying votes in elections or encouraging protests and civil unrest to manipulating consumer behavior for economic gain. The ultimate goal is to create a tangible impact on society by guiding individuals toward actions that align with the objectives of the disinformation campaign.
By consistently presenting a distorted version of events or skewing facts, these campaigns can create a new narrative that influences public opinion. This reshaping of perception is particularly effective when it taps into existing emotions and biases.
These include cognitive biases such as confirmation bias, where individuals favor information that confirms their preexisting beliefs, and emotional triggers such as fear, anger, and empathy. By exploiting these vulnerabilities, disinformation campaigns can steer individuals toward viewpoints or actions they might not otherwise consider.
Behavioral manipulation for everyone
Just as generative AI has lowered the cost of creating deceptive content, it’s also democratizing the methods and tools needed for sophisticated behavioral manipulation. Large language models (LLMs) and other AI technologies offer powerful capabilities for understanding and influencing human behavior, which can be harnessed for both beneficial and malicious purposes.
When prompted with the right data, LLMs can construct detailed psychometric profiles of individuals and groups, including insights into their personalities, preferences, fears, and motivations.
By leveraging these insights, disinformation actors can tailor their content to resonate deeply with target audiences, making their manipulative efforts more effective. For instance, if a certain demographic shows a high level of anxiety about economic instability, AI can generate and distribute content that amplifies those fears, driving them towards particular behaviors such as voting for a specific candidate or engaging in protest actions.
The scalability of AI tools means that behavioral manipulation can be conducted on a massive scale. Disinformation campaigns can be simultaneously tailored and distributed to millions of individuals, each receiving a personalized version of the deceptive content. This scalability is a significant advantage for disinformation actors, allowing them to influence large populations with relative ease.
AI systems can continuously learn and adapt based on the effectiveness of their manipulative efforts. By analyzing responses and engagement metrics, these systems can refine their strategies in real time, optimizing the impact of their disinformation campaigns. This adaptive capability ensures that the manipulation remains effective even as target audiences become more aware of and resistant to disinformation tactics.
While such targeted manipulation capabilities used to be accessible only to state actors, military contractors, or specialized agencies, generative AI puts them within reach of virtually anyone.
The need for cognitive security
Cognitive security refers to the protection of the decision-making processes of individuals and groups from manipulation and deception. Just as the physical safety of citizens is a core responsibility of the state, so too should the integrity of cognitive processes be safeguarded in an increasingly digital world. Hence, we should recognize a new fundamental human right: the right not to be manipulated.
While AI is clearly a catalyst of risk, it can also play a critical role in improving cognitive security by:
Managing Information Flows: AI agents can evaluate incoming content in real time, filtering out deceptive or manipulative information before it reaches the individual and helping users engage with factual and unbiased content (a minimal sketch of such a filter follows this list).
Supporting Cognitive Alignment: By understanding an individual’s psychological profile, including cognitive vulnerabilities and biases, AI agents can maintain a continuous dialogue that helps users stay aligned with reality.
For example, a recent MIT study has shown that dialogues with AI can durably reduce belief in conspiracy theories by tailoring the conversation to an individual's unique reasoning and evidence.
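To make the first of these roles concrete, here is a minimal sketch, not taken from any deployed system, of how an AI agent might score incoming posts for manipulative framing before they reach the user. The model name, prompt wording, scoring scale, and threshold are illustrative assumptions; any comparable LLM API could fill the same role.

```python
# Minimal sketch of an "information flow" filter: ask an LLM to rate each post
# for manipulative framing and hold back posts above a threshold.
# The model name, prompt, and threshold below are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

PROMPT = (
    "Rate the following post for signs of manipulative framing "
    "(fear appeals, false urgency, unsupported claims stated as fact). "
    "Answer with a single integer from 0 (none) to 10 (highly manipulative).\n\n"
    "Post:\n{post}"
)

def manipulation_score(post: str) -> int:
    """Ask the model for a 0-10 manipulation score; fall back to 0 if unparsable."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model works here
        messages=[{"role": "user", "content": PROMPT.format(post=post)}],
    )
    try:
        return int(response.choices[0].message.content.strip().split()[0])
    except (ValueError, IndexError):
        return 0

def filter_feed(posts: list[str], threshold: int = 7) -> list[str]:
    """Keep only posts scoring below the manipulation threshold."""
    return [p for p in posts if manipulation_score(p) < threshold]
```

A real agent of this kind would also need to explain its judgments to the user rather than silently suppress content; otherwise the filter itself becomes a potential vector of manipulation.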
The commoditization of AI-driven behavioral manipulation tools represents a profound challenge for society. As these technologies become more advanced and accessible, the potential for abuse increases. It is crucial to develop robust ethical guidelines, regulatory frameworks, and technological safeguards to mitigate the risks associated with AI-powered disinformation and behavioral manipulation.
Author: Josef Holý
Background illustration by: adragan
This piece was published in partnership with VIA Association