Q4/2019 - Council of Europe
Strasbourg, 1 October 2019
In the fourth quarter of 2019, the Council of Europe published two studies that are of some importance for future rules and regulations of Internet governance, in particular with regard to communication content spread via the Internet and to the issue of human rights and artificial intelligence:
Study on cross-border legal issues concerning defamation on the Internet
The first study deals with “liability and jurisdictional issues in the application of civil and administrative defamation laws in Council of Europe member states”. The group of experts, chaired by Luukas Ilves (Lisbon Council) and Prof. Wolfgang Schulz from the Leibniz Institute (formerly Hans-Bredow Institute) in Hamburg, found that the number of cases in which the impact of defamation and hate speech extends beyond national jurisdictions is increasing. The group of experts criticises in particular the “forum shopping” practised in such cases [1]. The authors of the study summarise their results in 15 best practices and recommend that the members of the Council of Europe base their future action on this guideline [2].
Study on responsibilities, human rights and artificial intelligence
The second study investigates responsibility and artificial intelligence with a particular focus on the human rights perspective. The group of experts, likewise chaired by Luukas Ilves and Prof. Wolfgang Schulz, examines the human rights enshrined in the applicable UN conventions, checks their relevance to the development and application of new technologies, in particular artificial intelligence, and investigates the resulting consequences for policies and regulations. The group lists potential threats and risks and asks who is responsible for them, so that they can be brought under control. In this context it points to the asymmetry of power in a democratic society and suggests a five-point catalogue of legal and non-legal measures to foster the application of artificial intelligence without abandoning human rights. The group calls, for instance, for more interdisciplinary cooperation between the developers of new technologies, social scientists, lawyers and government. Effective and legitimate “governance mechanisms, instruments and institutions” able to monitor, constrain and oversee this development without blocking technical innovation are, it argues, necessary. Such institutions would need to be equipped with effective means to hold accountable, in a sustainable manner, those who wield digital power but do not meet their responsibilities [3].