Q1/2020 - OSCE, Roundtable on the impact of artificial intelligence on freedom of expression

Vienna, 10 March 2020

The OSCE has launched a new research project on artificial intelligence and freedom of expression, supervised by the OSCE Representative on Freedom of the Media, Harlem Désir. As a prelude, a roundtable discussion of a draft paper titled “Spotlight on Artificial Intelligence & Freedom of Expression” was held in Vienna on 10 March 2020[1]. The project aims to determine how AI-based algorithms influence media freedom and freedom of expression in the evaluation of information content. This includes an investigation of the role of Internet intermediaries, social media and “information gatekeepers”. It will examine how Internet platforms manage information content (content moderation), including content deletion (content removal); how the use of artificial intelligence in information management is becoming a new threat to freedom of expression (e.g. through the permanent monitoring of individual communication); and what impact this has on pluralism and diversity in the media and opinion landscape. The draft paper includes nine recommendations for further research[2]. An OSCE expert conference is scheduled to take place in Vienna at the beginning of July 2020.

More on this topic
  [1] The OSCE Representative on Freedom of the Media, Harlem Désir, stressed the importance of understanding AI’s potential and its impact on the future of human rights, particularly freedom of expression and freedom of the media. “AI can benefit societies in various positive ways. However, there is also a genuine risk that such technologies have a detrimental impact on fundamental freedoms,” said Désir. “When driven by commercial, political or state interests, the use of AI could seriously jeopardize human rights, in particular the freedom of expression and media pluralism.” In: https://www.osce.org/representative-on-freedom-of-media/448225
  [2] See: Spotlight on Artificial Intelligence & Freedom of Expression; OSCE RFoM Non-paper on the Impact of Artificial Intelligence on Freedom of Expression, Vienna, 10 March 2020: “This non-paper outlines the ways in which non-state and state actors deploy algorithms and AI to address concerns stemming from the online ecosystem that are able to make semi-autonomous decisions on filtering, ranking, removal and blocking of content. Automated measures engage with a wide spectrum of content, from “extremist” and terrorist content to hate speech and potentially harmful, but lawful, content. Through the process of profiling, AI curtails online public forums and decides which information users are able to access online, while exacerbating the existing risks of surveillance. Key challenges to freedom of expression stem from the lack of transparency and explainability of algorithms and AI, from the outsourcing of judicial responsibilities and protection of human rights to private entities, as well as the lack of oversight, accountability and correction mechanisms. It is essential that any measure, technological and regulatory, that seeks to manage or control “public forums” is human rights-based, proportionate, and incorporates checks and balances, in order not to limit freedom of expression, media pluralism, the free flow of information and other fundamental rights. It is therefore crucial, as a first step, to establish and promote a clearer understanding of the policies and practices in place in the use of AI. It is equally important to understand better the impact they have on the future of media and quality information and the realization of human rights online. As a next step, policy recommendations need to be developed to ensure that freedom of expression and media freedom is safeguarded when using machine-learning technologies, such as AI.
Looking forward, it is crucial to: ● promote a better understanding of the algorithmic decision-making and AI policies/practices in place (by both state and non-state actors) and how they impact freedom of expression; ● initiate a multi-stakeholder dialogue (including with industry and states, addressing their legitimate concerns to address security threats and hate speech online); ● develop recommendations to mitigate the negative impacts of automated tools and to prevent the infringement of free speech and media freedom; ● research and assess how automation affects media freedom and how journalism can benefit from algorithms and AI; ● measure the impact of legislation or policies mandating removal of content in short time periods on deployment of algorithms and AI by platforms; ● explore discriminatory effects of content moderation technologies, especially in the context of digital inclusion and marginalized voices; ● conduct studies on the effectiveness of automated measures specifically designed to identify illegal content, as well as to explore alternative measures to combat hate speech, for instance, how interface design impacts users’ behaviors and how algorithms and AI could be used to counter hate speech; ● map out the current use of machine-learning technologies by law enforcement agencies and their potential impact on freedom of expression; and ● organize discussions and workshops about the positive and negative implications of automated measures for identification of illegal content on online platforms specifically targeting law enforcement in selected countries, as well as on how they impact freedom of expression.” In: https://www.osce.org/representative-on-freedom-of-media/448225