Generative artificial intelligence can produce text, images, audio, or video from a given prompt, but it is still not capable of fully distinguishing fact from fiction.

Nevertheless, nine years later, in the summer of 2023, the Associated Press took a pioneering step by issuing its Artificial Intelligence Guidelines, which state that the tool may not be used to create text or image content for direct publication on the news service. AP also encourages its editorial staff to familiarize themselves with artificial intelligence technology.

The guidelines have already been incorporated into the Associated Press Stylebook, which states that data generated with artificial intelligence must be thoroughly verified, just like any other journalistic source. AP has also instructed its employees that photo, video, or audio material generated by artificial intelligence is not to be used unless such material is itself the subject of a news item.

In an interview for the Serbian newspaper “Danas”, Maja Sever, president of the European Federation of Journalists, says that artificial intelligence cannot replace journalists in the field. She adds that the development of the technology cannot be stopped, so it is surely wiser to monitor it and use it as a tool to improve both working conditions and the work itself. Sever notes that journalists and media outlets are already using artificial intelligence in some aspects of their work, such as translation, data collection, and the day-to-day preparation of news reports. Among the negative effects, she points to announced layoffs at some media houses, such as the German newspaper Bild, precisely as a result of the opportunities artificial intelligence offers.

The European Federation of Journalists is among the 16 organizations that, alongside “Reporters Without Borders”, produced and published the Paris Charter on AI and Journalism. Work on the document began in July 2023, and it was published on November 10, 2023.

The Charter sets out ten fundamental ethical principles for protecting the integrity of information in the age of AI. Its central tenet is that journalism ethics must guide the technological choices media make, and that human agency must remain central to editorial decision-making. Media must clearly differentiate authentic journalistic content from synthetic content produced by artificial intelligence. The Charter also calls on media to defend journalism’s sustainability when negotiating with technology companies and to participate in the global governance of artificial intelligence.

The ten principles that make up the Paris Charter on Artificial Intelligence and Journalism are:

  • Journalism ethics guide the way media outlets and journalists use technology;

  • Media outlets prioritize human agency;

  • AI systems used in journalism undergo prior independent evaluation;

  • Media outlets are always accountable for the content they publish;

  • Media outlets maintain transparency in their use of AI systems;

  • Media outlets ensure content origin and traceability;

  • Journalism draws a clear line between authentic and synthetic content;

  • AI-driven content personalization and recommendation uphold the diversity and integrity of information;

  • Journalists, media outlets and journalism support groups engage in the governance of AI;

  • Journalism upholds its ethical and economic foundation in engagements with AI organizations.

In April 2023, the Poynter Institute, a US journalism research centre, called on journalism organizations worldwide to create standards for the use of artificial intelligence and invited media outlets to inform their audiences about them.

“This new era requires that newsrooms develop new, clear standards for what is and isn’t allowed for journalists in terms of using AI for reporting, writing and disseminating the news. Newsrooms need to act quickly but deliberatively to create these standards and to make them easily accessible to their audiences. These moves are important for maintaining trust with news consumers and ensuring accountability of the press”, says the Poynter Institute.

Transparency is key, the Poynter Institute adds, underlining that audiences have a right to know when journalists have used artificial intelligence tools in their work.

In its extensive article on writing standards and guidelines for the use of artificial intelligence by journalists and news organizations, the Nieman Foundation for Journalism explains that one step everyone must take is a review of existing codes of ethics and journalism guidelines.

“We suggest news organizations go over such codes of conduct and contrast them — one-by-one — with the potential use of generative AI and the risks it may pose”, states the Nieman Foundation.

Another recommended step for media organizations is to adopt a risk-assessment approach, so that policies mitigating the possible risks of artificial intelligence can be developed efficiently.

Lastly, the Nieman Foundation for Journalism recommends establishing a body of experts from diverse backgrounds to participate in preparing guidelines for the use of artificial intelligence in journalism.

“This will allow organizations to reflect on the risks that might arise in the newsroom, and it might help to determine broader, company-wide risks that transcend the day-to-day workflows of journalists”, the Nieman Foundation recommends.

Unlike the situation abroad, North Macedonia has yet to produce guidelines for the use of artificial intelligence in journalism, a step that will certainly be welcomed, and the sooner the better.