Q1/2020 - OSCE, Roundtable "Artificial Intelligence and Freedom of Expression"

Vienna, 10 March 2020

The OSCE has launched a new research project on "Artificial Intelligence and Freedom of Expression". The project is overseen by the OSCE Representative on Freedom of the Media, Harlem Désir. As a kick-off, a roundtable was held in Vienna on 10 March 2020, at which a working paper entitled "Spotlight on Artificial Intelligence & Freedom of Expression" was discussed[1]. The project aims to determine what influence AI-based algorithms for evaluating information content have on media freedom and the right to freedom of expression. Among other things, it looks at the role of internet intermediaries, social media, and "information gatekeepers". It will examine how internet platforms manage information content (content moderation), including the deletion of content (content removal); how the use of artificial intelligence in information management is becoming a new threat to freedom of expression (e.g. through permanent surveillance of individual communication); and what effects this has on the pluralism and diversity of the media and opinion landscape. The working paper offers nine recommendations for further study[2]. An OSCE expert conference is planned for Vienna in early July 2020.

More on this topic
  1. [1] The OSCE Representative on Freedom of the Media, Harlem Désir, stressed the importance of understanding AI's potential and its impact on the future of human rights, particularly freedom of expression and freedom of the media. "AI can benefit societies in various positive ways. However, there is also a genuine risk that such technologies have a detrimental impact on fundamental freedoms," said Désir. "When driven by commercial, political or state interests, the use of AI could seriously jeopardize human rights, in particular the freedom of expression and media pluralism." In: https://www.osce.org/representative-on-freedom-of-media/448225
  2. [2] See: Spotlight on Artificial Intelligence & Freedom of Expression; OSCE RFoM Non-paper on the Impact of Artificial Intelligence on Freedom of Expression, Vienna, 10 March 2020: "This non-paper outlines the ways in which non-state and state actors deploy algorithms and AI to address concerns stemming from the online ecosystem that are able to make semi-autonomous decisions on filtering, ranking, removal and blocking of content. Automated measures engage with a wide spectrum of content, from 'extremist' and terrorist content to hate speech and potentially harmful, but lawful, content. Through the process of profiling, AI curtails online public forums and decides which information users are able to access online, while exacerbating the existing risks of surveillance. Key challenges to freedom of expression stem from the lack of transparency and explainability of algorithms and AI, from the outsourcing of judicial responsibilities and protection of human rights to private entities, as well as the lack of oversight, accountability and correction mechanisms. It is essential that any measure, technological and regulatory, that seeks to manage or control 'public forums' is human rights-based, proportionate, and incorporates checks and balances, in order not to limit freedom of expression, media pluralism, the free flow of information and other fundamental rights. It is therefore crucial, as a first step, to establish and promote a clearer understanding of the policies and practices in place in the use of AI. It is equally important to understand better the impact they have on the future of media and quality information and the realization of human rights online. As a next step, policy recommendations need to be developed to ensure that freedom of expression and media freedom is safeguarded when using machine-learning technologies, such as AI.
     Looking forward, it is crucial to:
     ● promote a better understanding of the algorithmic decision-making and AI policies/practices in place (by both state and non-state actors) and how they impact freedom of expression;
     ● initiate a multi-stakeholder dialogue (including with industry and states, addressing their legitimate concerns to address security threats and hate speech online);
     ● develop recommendations to mitigate the negative impacts of automated tools and to prevent the infringement of free speech and media freedom;
     ● research and assess how automation affects media freedom and how journalism can benefit from algorithms and AI;
     ● measure the impact of legislation or policies mandating removal of content in short time periods on deployment of algorithms and AI by platforms;
     ● explore discriminatory effects of content moderation technologies, especially in the context of digital inclusion and marginalized voices;
     ● conduct studies on the effectiveness of automated measures specifically designed to identify illegal content, as well as to explore alternative measures to combat hate speech, for instance, how interface design impacts users' behaviors and how algorithms and AI could be used to counter hate speech;
     ● map out the current use of machine-learning technologies by law enforcement agencies and their potential impact on freedom of expression; and
     ● organize discussions and workshops about the positive and negative implications of automated measures for identification of illegal content on online platforms specifically targeting law enforcement in selected countries, as well as on how they impact freedom of expression." In: https://www.osce.org/representative-on-freedom-of-media/448225