Q4/2019 - Council of Europe

Strasbourg, 1 October 2019

In the fourth quarter of 2019, the Council of Europe published two studies of some importance for future rules and regulations of Internet governance, in particular in relation to communication content spread via the Internet and to the issue of human rights and artificial intelligence:

Study on cross-border legal issues concerning defamation on the Internet

The first study deals with “liability and jurisdictional issues in the application of civil and administrative defamation laws in Council of Europe member states”. The group of experts, chaired by Luukas Ilves (Lisbon Council) and Prof. Wolfgang Schulz of the Leibniz Institute (formerly Hans-Bredow Institute) in Hamburg, found that the number of cases in which the impact of defamation and hate speech extends beyond national jurisdictions is increasing. The group of experts criticises in particular the “forum shopping” practised in such cases[1]. The authors of the study summarise their results in 15 so-called good practices and recommend that the member states of the Council of Europe base their future action on these guidelines[2].

Study on responsibilities, human rights and artificial intelligence

The second study investigates responsibility and artificial intelligence with a particular focus on the human rights perspective. The group of experts, also chaired by Luukas Ilves and Prof. Wolfgang Schulz, examines the human rights enshrined in the applicable UN conventions, checks their relevance to the development and application of new technologies, in particular artificial intelligence, and investigates the resulting consequences for policies and regulations. The group lists potential threats and risks and asks who is responsible for them, so that they can be brought under control. In this context it points to the asymmetry of power in a democratic society and suggests a five-point catalogue of legal and non-legal measures to foster the application of artificial intelligence without abandoning human rights. The group calls, for instance, for more interdisciplinary cooperation between the developers of new technologies, social scientists, lawyers and governments. It argues that effective and legitimate “governance mechanisms, instruments and institutions” are necessary to monitor, constrain and oversee this development without blocking technical innovation. Such institutions need to be equipped with effective means to hold accountable, in a sustainable manner, those who wield digital power and do not meet their responsibilities[3].

  1. [1] Study on forms of liability and jurisdictional issues in the application of civil and administrative defamation laws in Council of Europe member states, prepared by the Expert Committee on human rights dimensions of automated data processing and different forms of artificial intelligence (MSI-AUT), Rapporteur: Emeric Prévost, Strasbourg, 1 October 2019: “The term forum shopping describes the practice of choosing the court in which to bring an action based on the prospect of the most favourable outcome, even when there is no or only a tenuous connection between the legal issues and the jurisdiction. Such practice may be observed in various fields and is not limited to defamation cases”, in: https://rm.coe.int/liability-and-jurisdictional-issues-in-online-defamation-cases-en/168097d9c3
  2. [2] Ibid.: “With a view to supporting efforts of member states to limit the negative effects of forum shopping in defamation law cases, the study identifies 15 good practices developed in member states: Good Practice 1: Courts and tribunals have jurisdiction over a case if there is a strong connection between the case and the jurisdiction they belong to. Good Practice 2: Courts and tribunals seek to identify and recognise foreign declaratory judgements that are clearly aimed at preventing or stopping abuse of legal procedure or any other action by the claimant that could be qualified as forum shopping. Good Practice 3: Courts and tribunals generally refuse, on the basis of the public order exception, to recognise or enforce foreign judgments that grant manifestly disproportionate damages awards that were rendered in breach of due process of law or as the result of an abuse of rights. Good Practice 4: Courts consistently apply the res judicata exception when asked to recognise and enforce a foreign judgment that is irreconcilable with another decision from another state’s court on a case involving the same cause of action and between the same parties. Good Practice 5: Specific and reasonably short limitation periods for defamation actions are set out clearly in national law. Good Practice 6: A single publication rule clearly determines in law the starting date of the limitation period for defamation cases. Good Practice 7: Courts and tribunals can lift limitation periods upon request by one of the parties, provided that objective and clearly defined conditions, as set out in relevant legislation, are met. Good Practice 8: Where the burden of proof is on the defendant, available defences should not be of the kind to impede the reversal of the onus of proof on the claimant or to make such reversal unreasonably difficult. Good Practice 9: Courts and tribunals deliver default judgments only when proper servicing of international proceedings is effectively guaranteed. Good Practice 10: The amount of damages granted by court in defamation proceedings is strictly proportionate to the harm suffered by the claimant. Good Practice 11: Punitive damages, where available under the member states’ legal framework, are only allowed if strict and clearly defined in law conditions are met. Good Practice 12: Appeals solely based on the amount of damages are in principle allowed. Good Practice 13: Courts consistently rely on the prohibition of abuse of rights to address the cases of manifest forum shopping. Good Practice 14: Where applicable, courts scrutinise under the forum non conveniens doctrine the relevant factual elements of the case, while identifying the forum best placed to hear it. Good Practice 15: The proximity (strong connections) principle applies in determining the law applicable to a defamation case.”
  3. [3] Responsibility and AI: A study of the implications of advanced digital technologies (including AI systems) for the concept of responsibility within a human rights framework, prepared by the Expert Committee on human rights dimensions of automated data processing and different forms of artificial intelligence (MSI-AUT). Conclusion: “First, it is vital that we have effective and legitimate mechanisms that will prevent and forestall human rights violations, given the speed and scale at which many advanced digital systems operate in ways that pose substantial threats to human rights without necessarily generating substantial risks of tangible harm. A preventative approach is especially important given that such threats could seriously erode the social foundations necessary for moral and democratic orders, which are essential preconditions for the exercise of individual freedom, autonomy and human rights. This may include both a need to develop collective complaints mechanisms to facilitate effective rights protection, and to enhance and reinvigorate our existing conceptions and understandings of human rights. Second, the model of legal responsibility that applies to human rights violations is widely understood as one of ‘strict responsibility’ without the need for proof of fault. In contrast, obligations of repair for tangible harms may be legally allocated and distributed in accordance with a range of responsibility models, each striking a different balance between our interests as agents in freedom of action, and our interest as victims in rights and interests in security of persons and property. Identifying which (if any) of these models is appropriate for preventing the various threats and risks associated with the operation of advanced digital technologies is not self-evident: rather, it will entail a social policy choice.
In constitutional democratic societies committed to protecting and respecting human rights, states bear a critical responsibility for ensuring that these policy choices are made in a transparent, democratic manner and in ways that will ensure that the policy ultimately adopted will effectively safeguard human rights. Third, we should nurture and support technical research concerned with securing prospective and historic responsibility for ensuring due respect for many of the values underpinning human rights protection, which may facilitate the development of effective technical protection mechanisms and meaningful ‘algorithmic auditing’. This research needs to be developed by interdisciplinary engagement between the technical community and those from law, the humanities and the social sciences, in order to identify more fully how human rights protections can be translated and given expression via technical protection mechanisms embedded within AI systems, and to understand how a human rights approach responds to problems of value-conflict. Fourth, the effective protection of human rights in a global and connected digital age requires that we have effective and legitimate governance mechanisms, instruments and institutions to monitor, constrain and oversee the responsible design, development, implementation and operation of our complex socio-technical systems. This requires, at minimum, both democratic participation in the setting of the relevant standards, and the existence of properly resourced, independent authorities equipped with adequate powers systematically to gather information, to investigate non-compliance and to sanction violations, including powers and skills to investigate and verify that these systems do in fact comply with human rights standards and values. 
Finally, the study concludes that if we are serious in our commitment to protect and promote human rights in a global and connected digital age, then we cannot allow the power of our advanced digital technologies and systems, and those who develop and implement them, to be accrued and exercised without responsibility. The fundamental principle of reciprocity applies: those who deploy and reap the benefits of these advanced digital technologies (including AI) in the provision of services (from which they derive profit) must be responsible for their adverse consequences. It is therefore of vital importance that states committed to the protection of human rights uphold a commitment to ensure that those who wield digital power (including the power derived from accumulating masses of digital data) are held responsible for their consequences. It follows from the obligation of states to ensure the protection of human rights that they have a duty to ensure that there are governance arrangements and enforcement mechanisms within national law that will ensure that both prospective and historic responsibility for the adverse risks, harms and wrongs arising from the operation of advanced digital technologies are duly allocated”, in: https://rm.coe.int/responsability-and-ai-en/168097d9c5