The category of harmful content has major gaps because there is no agreed definition: it is not limited to illegal content, as one might think. Although the concept can cover conduct prohibited by the laws of different countries, such as harassment, discrimination or threats, it also includes other behaviors that can be problematic in the digital sphere, such as the promotion of eating disorders or dangerous activities.

In the absence of an established, cross-cutting legal definition for a problem that does not respect borders, the platforms' community standards, the policies that set out what can and cannot be said on a social network, serve as a guide to which behaviors are undesirable and which can cause harm in offline life. Although these rules are imposed unilaterally by social media companies, their drafting often involves security and public policy experts as well as members of civil society and academia. Their updates are also shaped by pressure from the press and public opinion during emergencies or political scandals, as happened during the pandemic and the last U.S. presidential election.

Although platforms have developed their policies to cover broad areas of unwanted online behavior, these efforts have left out other types of content that can affect users. In this post, we look at three policies aimed at sanctioning content that could cause harm in offline life, and review some of their limitations and gray areas:

Hate speech

Platforms design their hate speech policies to prohibit offensive content based on a person's inherent characteristics. Although each company drafts its rules according to its own criteria, all of them treat nationality, religion, ethnicity, sexual orientation, gender and serious illness as protected characteristics.

These policies sanction not only explicitly discriminatory comments but also the reproduction of stigmatizing stereotypes, epithets, tropes and degrading expressions, as well as posts that unjustifiably link a population with criminal activity or with criminal or terrorist groups.

Because community standards have a global scope, their design sometimes loses sight of local context, where social or political conditions may make specific groups vulnerable. This is the case, for example, with professions that are exceptionally exposed to risks and threats, such as journalists or human rights defenders in some Latin American countries.

The same is true for another type of discrimination: that based on a person's social class. With the exception of Meta, none of the major social media platforms includes class as a protected characteristic, despite the exclusionary effect it can have on society and the relationship between classism and other forms of discrimination, such as racism.

Harassment

This is probably the type of harmful behavior most familiar to the average user, ranging from mockery or denial of a tragedy to insults and obscene language in general.

It is not always easy to determine to what extent an aggressive comment on social media actually harms the user it is aimed at, or whether it is a joke, a consensual exchange or an angry but harmless reply between the participants in a conversation. For this reason, it is essential to pay attention to the context in which these posts appear, as some community standards themselves provide.

Platforms sometimes include in these policies the protection of victims of sexual or domestic violence, or of violent events such as shootings and massacres, who, as has happened with some conspiracy theories in the United States, are exposed to attacks, mockery or denial of their testimonies.

However, platform rules leave a gray area around other kinds of problematic content, such as posts that blame the victims themselves for what happened. This can occur in cases of gender-based violence, where some online comments go so far as to suggest or state that a person is to blame for her own sexual assault or even her own femicide.

This happened, for example, with Valentina Trespalacios, a DJ who was found dead with signs of torture in Bogota in January of this year. Her case, in which the main suspect is her partner, has been widely discussed on social media, where some users have engaged in this kind of revictimization, for which the platforms have no clear rules.

Incitement to violence

Social networks also prohibit incitement to violence, that is, content that glorifies violent events, makes statements in their favor, wishes harm on others or makes direct threats.

There are, however, some exceptions to these rules. In heated social and political contexts, certain displays of anger or indignation against particular people or situations may be covered by freedom of expression. Last year, for example, when the war in Ukraine began, Meta allowed users of its platforms in that country to wish Vladimir Putin dead. Something similar happened in January of this year, when the same company, on the recommendation of its Oversight Board, allowed users to wish for the death of Ayatollah Khamenei in Iran, in light of the protests that had been taking place in that country for months.

Although the platforms' rules tend to sanction a wide range of problematic content and behavior in the digital sphere, some of their loopholes can leave certain discriminatory or revictimizing content in circulation. In addition, the complexity of some conversations, as well as the political, social and cultural particularities of each country, underscore the importance of taking context into account when applying any content moderation measure that seeks to balance safety and users' freedom of expression.