Q1/2021 - UN Human Rights Council

46th Session of the UN Human Rights Council, 22 February – 24 March 2021

The 46th session of the UN Human Rights Council, held from 22 February to 24 March 2021, adopted no resolutions on internet-related human rights issues[1].

Report „Artificial Intelligence and Privacy, and Children’s Privacy“, 25 January 2021

On 25 January 2021, the Special Rapporteur on the right to privacy, Joseph Cannataci, presented a new report to the UN Human Rights Council entitled „Artificial intelligence and privacy, and children’s privacy“. The report is based on extensive consultations that Cannataci conducted in the fourth quarter of 2020.

In his report, Cannataci proposes drawing up general guidelines for the use of artificial intelligence within the framework of the United Nations. Care must be taken, he argues, to ensure that such guidelines are consistent with states’ existing obligations to respect and guarantee individual human rights, in particular the right to privacy. Cannataci points to the universally applicable legal instruments, such as the UN Charter and the human rights declarations and conventions. These constitute a sufficient legal foundation on which to build when addressing the specific new challenges. Cannataci sees such new challenges above all in areas like „the intrusiveness and potential impact of data gathering, the risk of surveillance and the increasing use of algorithms using such data sets to automate decisions that affect individuals’ lives.“ Cannataci names eight areas for which such general guidelines would be useful:

  • Jurisdiction;
  • Ethical and lawful basis;
  • Data fundamentals;
  • Responsibility and oversight;
  • Control;
  • Transparency and “explainability”;
  • Rights of the data subject;
  • Safeguards.

Anyone setting out to draft a universal UN instrument on artificial intelligence, he adds, must from the outset also consider the establishment of independent oversight bodies and corresponding control mechanisms[2].

In a second document, Cannataci addresses for the first time the protection of children’s privacy in the digital world. Children today are active in „cyberspace“ from a very early age, where they are particularly vulnerable. Cannataci makes clear that the protection of privacy under international law also extends to children, and he points to states’ specific obligations under international law, including those arising from the UN Convention on the Rights of the Child, which has been ratified by nearly all UN member states. Article 16 of the Convention stipulates: „1. No child shall be subjected to arbitrary or unlawful interference with his or her privacy, family, home or correspondence, nor to unlawful attacks on his or her honour and reputation. 2. The child has the right to the protection of the law against such interference or attacks“. In his report, Cannataci makes a total of 27 recommendations on how children’s privacy can be protected on the internet[3].

More on the topic
  1. [1] 46th session of the Human Rights Council: Resolutions, decisions and President’s statements, 24 March 2021, in: https://www.ohchr.org/EN/HRBodies/HRC/RegularSessions/Session46/Pages/ResDecStat.aspx
  2. [2] Artificial intelligence and privacy, and children’s privacy. Report of the Special Rapporteur on the right to privacy, Joseph A. Cannataci, UN Document A/HRC/46/37, 25 January 2021: „AI solutions, including those procured from a third party, must be under the full control of the relevant manager. From the first design idea to the final switch-off and decommissioning, it must be clear what data are processed in the AI solution, what parameters and data quality metrics provide the basis for the decision-making and how they will be balanced and weighted against each other. The results must be monitored continuously and corrected if necessary. In the area of automated decision-making solutions, no decisions are to be made based on conscious or unconscious bias. Possible bias and discriminatory effects must be checked and corrected before roll-out of a system and at regular intervals throughout its lifetime. In the case of AI for decision support systems, a similar set of controls is required for the decision maker. The manager, in conjunction with processors as necessary, must be able to stop or change the processing at any time. Incorrect results must be documented, as must the corrective measures taken, in order to mitigate any risks for the data subjects. Once their use for identification, corrective or forensic purposes is completed, incorrect results must be deleted without undue delay. During the whole running time of the AI solution, until the final switch-off, the results produced by the AI solution must be monitored against the fundamental requirements defined in the planning phase. … The difficulties of controlling all aspects of the algorithms’ operations and the constant change of algorithms during the running time of an AI solution make it essential to constantly check the results against the initial intended purpose of the solution in another feasible way to provide a point of comparison. If a deviation is suspected or observed, the data feed for the AI solution must be adapted accordingly or the solution itself stopped. To gain the benefits of new creative approaches and widen the horizon of the developer and the manager, input and feedback from privacy, cross-sectoral, cross-industry, civil society and user communities needs to be factored into the development, testing and monitoring of AI solutions. A testing facility must be established for ready-to-run AI solutions, for example, by installing a so-called black box in the Internet where the separated and self-contained solution is open to third parties to input data to ascertain the type of results the AI solution will produce, or the implementation by regulators of sandboxes within organizations involved in introducing AI solutions.“ In: https://www.ohchr.org/EN/Issues/Privacy/SR/Pages/AnnualReports.aspx
  3. [3] Artificial intelligence and privacy, and children’s privacy. Report of the Special Rapporteur on the right to privacy, Joseph A. Cannataci, UN Document A/HRC/46/37, 25 January 2021: „The Special Rapporteur recommends that States: 1. Ensure that the rights and values of the Convention on the Rights of the Child concerning privacy, personality and autonomy underpin government legislation, policies, decisions, record systems and services; 2. Support comprehensive analyses of children’s capacity for autonomous decision-making for accessing online and other services, to enable evidence-based child specific privacy laws, policies and regulations; 3. Adopt age appropriate standards as a regulatory instrument only with the greatest of caution when no better means exist; 4. Promote and require implementation of safety by design, privacy by design and privacy by default guiding principles for products and services for children and ensure that children have effective remedies against privacy infringements; 5. Encourage partnerships with civil society and industry to co-create technological offerings in the best interests of children and young people; (f) Adopt the Special Rapporteur’s recommendations for protecting against gender-based privacy infringements; 6. Develop comprehensive online educational plans of action based on article 29 (1) of the Convention on the Rights of the Child and the Council of Europe guidelines on children’s data protection in an education setting; 7. Ensure appropriate legal frameworks are established and maintained for online education; 8. Create public infrastructure for non-commercial educational and social spaces; 9. Remedy all legislative gaps and procedural exceptions to ensure all children in contact with justice systems have their privacy maintained throughout all proceedings, with lifelong non-publication orders for any criminal justice record; 10. Review legal frameworks to enable voluntary action by companies to lawfully and proportionately detect online child sexual abuse material; 11. Ensure that the personal data of children associated with terrorist or violent extremist groups are classified and shared only where strictly necessary to coordinate individual rehabilitation and reintegration; 12. Prior to the linking of civil and criminal identity databases, undertake human rights impact assessments on the implications for children and their privacy, and conduct consultations to assess the necessity, proportionality and legality of biometric surveillance; 13. Establish practices and laws to ensure that information provided to the media does not violate children’s right to privacy and that reporting by media and other bodies protects the privacy of children whose parents are in conflict with the law; 14. Ensure that children’s privacy is upheld in all contacts with incarcerated parents, including written, electronic and telephone communications, and prison visits; 15. Ensure that biometric data is not collected from children, unless as an exceptional measure only when lawful, necessary, proportionate and fully in line with the rights of the child; 16. Ensure that children’s personal data is processed fairly, accurately, securely, for a specific purpose in accordance with a legitimate legal basis utilizing data protection frameworks representing best practice, such as the General Data Protection Regulation and Convention 108+; 17. Ensure that those who process personal data, including parents or carers and educators, are made aware of children’s right to privacy and data protection; 18. Ensure that information is available to children on exercising their rights on, for example, the websites of data protection authorities, and ensure the provision of counselling, complaint mechanisms and remedies specifically for children, including for cyberbullying; 19. Ensure that anonymity, pseudonymity or the use of encryption technologies by children are not prohibited in law or in practice; 20. Ensure that opportunities are available to children and young people of all backgrounds to participate in decision-making and design of frameworks, policies and programmes aimed at them; 21. Prohibit automated processing of personal data that profiles children for decision-making concerning the child or to analyse or predict personal preferences, behaviour and attitudes, with exemption only in exceptional circumstances in the best interests of the child or an overriding public interest, with appropriate legal safeguards; 22. Ensure that the rights and values of the Convention on the Rights of the Child concerning privacy, personality and autonomy underpin corporate policies, management decisions and services; 23. Implement the Guiding Principles on Business and Human Rights: “Protect, Respect and Remedy Framework” and the gender guidance thereon; 24. Establish remedial and grievance mechanisms, while ensuring that they do not impede access to State-based mechanisms; 25. Provide understandable information on reporting matters of concern, including complaints, and remedial and grievance mechanisms; 26. Take reasonable, proportionate, timely and effective measures to ensure their networks and online services are not misused for criminal or other unlawful purposes that are harmful to children; 27. Engage with law enforcement authorities to support the legal identification and prosecution of perpetrators of crimes against children.“ In: https://www.ohchr.org/EN/Issues/Privacy/SR/Pages/AnnualReports.aspx