Q4/2019 - Council of Europe

Strasbourg, 1 October 2019

In the fourth quarter of 2019, the Council of Europe published two studies of considerable importance for future regulation of internet governance, in particular with regard to communication content distributed via the internet and to the subject of human rights and artificial intelligence.

Study on cross-border legal issues in online defamation cases

The first study deals with questions of cross-border liability and legal practice in handling defamation and insults on the internet (Liability and jurisdictional issues in the application of civil and administrative defamation laws in Council of Europe member states). The expert group, led by Luukas Ilves (Lisbon Council) and Prof. Wolfgang Schulz of the Leibniz Institute for Media Research (formerly Hans-Bredow-Institut) in Hamburg, notes an increase in cases in which defamation and insults have consequences beyond the borders of the respective national jurisdiction. The group is particularly critical of the practice of "forum shopping" in such cases[1]. The authors summarise their findings in 15 "good practices" and recommend that Council of Europe member states use them as guidelines for future action[2].

Study on responsibility, human rights and artificial intelligence

The second study deals with responsibility and artificial intelligence, with particular attention to human rights perspectives. The expert group, likewise led by Luukas Ilves and Prof. Wolfgang Schulz, examines the human rights enshrined in the relevant UN conventions. It assesses their relevance against the background of the development and application of new technologies, in particular artificial intelligence, and the resulting consequences for the development of policies and regulation. The group lists potential threats and risks and explores the question of who bears responsibility for bringing these risks under control. It points to existing imbalances in democratic societies (power asymmetry) and proposes, in five points, legal and non-legal measures for promoting the application of artificial intelligence without letting human rights fall by the wayside. Among other things, it calls for a higher degree of interdisciplinary cooperation between the developers of new technologies, social scientists, lawyers and policymakers. What is needed, the study argues, are effective and legitimate "governance mechanisms, instruments and institutions" capable of monitoring developments and exercising effective oversight without blocking technical innovation. Such institutions would have to be equipped with effective means to impose lasting sanctions on those who wield "digital power" but fail to meet their responsibilities[3].

[1] Study on forms of liability and jurisdictional issues in the application of civil and administrative defamation laws in Council of Europe member states, prepared by the Expert Committee on human rights dimensions of automated data processing and different forms of artificial intelligence (MSI-AUT), Rapporteur: Emeric Prévost, Strasbourg, 1 October 2019: "The term forum shopping describes the practice of choosing the court in which to bring an action based on the prospect of the most favourable outcome, even when there is no or only a tenuous connection between the legal issues and the jurisdiction. Such practice may be observed in various fields and is not limited to defamation cases", in: https://rm.coe.int/liability-and-jurisdictional-issues-in-online-defamation-cases-en/168097d9c3
[2] Ibid.: "With a view to supporting efforts of member states to limit the negative effects of forum shopping in defamation law cases, the study identifies 15 good practices developed in member states: Good Practice 1: Courts and tribunals have jurisdiction over a case if there is a strong connection between the case and the jurisdiction they belong to. Good Practice 2: Courts and tribunals seek to identify and recognise foreign declaratory judgements that are clearly aimed at preventing or stopping abuse of legal procedure or any other action by the claimant that could be qualified as forum shopping. Good Practice 3: Courts and tribunals generally refuse, on the basis of the public order exception, to recognise or enforce foreign judgments that grant manifestly disproportionate damages awards that were rendered in breach of due process of law or as the result of an abuse of rights. Good Practice 4: Courts consistently apply the res judicata exception when asked to recognise and enforce a foreign judgment that is irreconcilable with another decision from another state's court on a case involving the same cause of action and between the same parties. Good Practice 5: Specific and reasonably short limitation periods for defamation actions are set out clearly in national law. Good Practice 6: A single publication rule clearly determines in law the starting date of limitation period for defamation cases. Good Practice 7: Courts and tribunals can lift limitation periods upon request by one of the parties, provided that objective and clearly defined conditions, as set out in relevant legislation, are met. Good Practice 8: Where the burden of proof is on the defendant, available defences should not be of the kind to impede the reversal of the onus of proof on the claimant or to make such reversal unreasonably difficult. Good Practice 9: Courts and tribunals deliver default judgments only when proper servicing of international proceedings is effectively guaranteed. Good Practice 10: The amount of damages granted by court in defamation proceedings is strictly proportionate to the harm suffered by the claimant. Good Practice 11: Punitive damages, where available under the member states' legal framework, are only allowed if strict and clearly defined in law conditions are met. Good Practice 12: Appeals solely based on the amount of damages are in principle allowed. Good Practice 13: Courts consistently rely on the prohibition of abuse of rights to address the cases of manifest forum shopping. Good Practice 14: Where applicable, courts scrutinise under the forum non conveniens doctrine the relevant factual elements of the case, while identifying the forum best placed to hear it. Good Practice 15: The proximity (strong connections) principle applies in determining the law applicable to a defamation case."
[3] Responsibility and AI: A study of the implications of advanced digital technologies (including AI systems) for the concept of responsibility within a human rights framework, prepared by the Expert Committee on human rights dimensions of automated data processing and different forms of artificial intelligence (MSI-AUT). Conclusion: "First, it is vital that we have effective and legitimate mechanisms that will prevent and forestall human rights violations, given the speed and scale at which many advanced digital systems operate in ways that pose substantial threats to human rights without necessarily generating substantial risks of tangible harm. A preventative approach is especially important given that such threats could seriously erode the social foundations necessary for moral and democratic orders, which are essential preconditions for the exercise of individual freedom, autonomy and human rights. This may include both a need to develop collective complaints mechanisms to facilitate effective rights protection, and to enhance and reinvigorate our existing conceptions and understandings of human rights. Second, the model of legal responsibility that applies to human rights violations is widely understood as one of 'strict responsibility' without the need for proof of fault. In contrast, obligations of repair for tangible harms may be legally allocated and distributed in accordance with a range of responsibility models, each striking a different balance between our interests as agents in freedom of action, and our interest as victims in rights and interests in security of persons and property. Identifying which (if any) of these models is appropriate for preventing the various threats and risks associated with the operation of advanced digital technologies is not self-evident: rather, it will entail a social policy choice. In constitutional democratic societies committed to protecting and respecting human rights, states bear a critical responsibility for ensuring that these policy choices are made in a transparent, democratic manner and in ways that will ensure that the policy ultimately adopted will effectively safeguard human rights. Third, we should nurture and support technical research concerned with securing prospective and historic responsibility for ensuring due respect for many of the values underpinning human rights protection, which may facilitate the development of effective technical protection mechanisms and meaningful 'algorithmic auditing'. This research needs to be developed by interdisciplinary engagement between the technical community and those from law, the humanities and the social sciences, in order to identify more fully how human rights protections can be translated and given expression via technical protection mechanisms embedded within AI systems, and to understand how a human rights approach responds to problems of value-conflict. Fourth, the effective protection of human rights in a global and connected digital age requires that we have effective and legitimate governance mechanisms, instruments and institutions to monitor, constrain and oversee the responsible design, development, implementation and operation of our complex socio-technical systems. This requires, at minimum, both democratic participation in the setting of the relevant standards, and the existence of properly resourced, independent authorities equipped with adequate powers systematically to gather information, to investigate non-compliance and to sanction violations, including powers and skills to investigate and verify that these systems do in fact comply with human rights standards and values. Finally, the study concludes that if we are serious in our commitment to protect and promote human rights in a global and connected digital age, then we cannot allow the power of our advanced digital technologies and systems, and those who develop and implement them, to be accrued and exercised without responsibility. The fundamental principle of reciprocity applies: those who deploy and reap the benefits of these advanced digital technologies (including AI) in the provision of services (from which they derive profit) must be responsible for their adverse consequences. It is therefore of vital importance that states committed to the protection of human rights uphold a commitment to ensure that those who wield digital power (including the power derived from accumulating masses of digital data) are held responsible for their consequences. It follows from the obligation of states to ensure the protection of human rights that they have a duty to ensure that there are governance arrangements and enforcement mechanisms within national law that will ensure that both prospective and historic responsibility for the adverse risks, harms and wrongs arising from the operation of advanced digital technologies are duly allocated", in: https://rm.coe.int/responsability-and-ai-en/168097d9c5