Q4/2020 - Freedom Online Coalition (FOC)

Joint Statement on Artificial Intelligence and Human Rights, 5 November 2020

In November 2020, the Freedom Online Coalition (FOC) published two statements, one on the role of human rights in the use of artificial intelligence and one on the spread of disinformation online:

The "FOC Joint Statement on Artificial Intelligence and Human Rights", presented at the vIGF on 5 November 2020, calls on states to ensure that the provisions of international human rights treaties and rule-of-law principles are upheld in the design, development, deployment and use of artificial intelligence. In order to assess the opportunities and risks of Internet-based applications of artificial intelligence and to draw the appropriate conclusions, the statement argues, broader multistakeholder cooperation between governments, the private sector, academia and civil society is needed. The statement contains a total of ten concrete calls to action addressed to governments and companies[1].

Joint Statement on Spread of Disinformation Online, November 2020

The "FOC Joint Statement on Spread of Disinformation Online" expresses concern about the growing spread of disinformation via social networks and other Internet-based channels. The statement calls on governments to refrain from targeted disinformation campaigns. It also appeals to the responsibility of non-state stakeholders to be guided by human rights standards, ethical norms, democracy and the rule of law. The statement was drafted by an FOC multistakeholder working group led by the governments of Finland and the United Kingdom. It contains nine recommendations for governments[2], seven recommendations for media platforms and social networks[3], and four recommendations for civil society and the academic community[4].

In 2021, Finland will take over the chairmanship of the Freedom Online Coalition. An FOC ministerial meeting is scheduled to take place in Helsinki in December 2021.

[1] FOC issues Joint Statement on Artificial Intelligence and Human Rights, 11 November 2020: "Calls to action: To promote respect for human rights, democracy, and the rule of law in the design, development, procurement, and use of AI systems, the FOC calls on states to work towards the following actions in collaboration with the private sector, civil society, academia, and all other relevant stakeholders:
● States should take action to oppose and refrain from the use of AI systems for repressive and authoritarian purposes, including the targeting of or discrimination against persons and communities in vulnerable and marginalized positions and human rights defenders, in violation of international human rights law.
● States should refrain from arbitrary or unlawful interference in the operations of online platforms, including those using AI systems. States have a responsibility to ensure that any measures affecting online platforms, including counter-terrorism and national security legislation, are consistent with international law, including international human rights law. States should refrain from restrictions on the right to freedom of opinion and expression, including in relation to political dissent and the work of journalists, civil society, and human rights defenders, except when such restrictions are in accordance with international law, particularly international human rights law.
● States should promote international multi-stakeholder engagement in the development of relevant norms, rules, and standards for the development, procurement, use, certification, and governance of AI systems that, at a minimum, are consistent with international human rights law. States should welcome input from a broad and geographically representative group of states and stakeholders.
● States need to ensure the design, development and use of AI systems in the public sector is conducted in accordance with their international human rights obligations. States should respect their commitments and ensure that any interference with human rights is consistent with international law.
● States, and any private sector or civil society actors working with them or on their behalf, should protect human rights when procuring, developing and using AI systems in the public sector, through the adoption of processes such as due diligence and impact assessments, that are made transparent wherever possible. These processes should provide an opportunity for all stakeholders, particularly those who face disproportionate negative impacts, to provide input. AI impact assessments should, at a minimum, consider the risks to human rights posed by the use of AI systems, and be continuously evaluated before deployment and throughout the system's lifecycle to account for unintended and/or unforeseen outcomes with respect to human rights. States need to provide an effective remedy against alleged human rights violations.
● States should encourage the private sector to observe principles and practices of responsible business conduct (RBC) in the use of AI systems throughout their operations and supply and value chains, in a consistent manner and across all contexts. By incorporating RBC, companies are better equipped to manage risks, identify and resolve issues proactively, and adapt operations accordingly for long-term success. RBC activities of both states and the private sector should be in line with international frameworks such as the UN Guiding Principles on Business and Human Rights and the OECD Guidelines for Multinational Enterprises.
● States should consider how domestic legislation, regulation and policies can identify, prevent, and mitigate risks to human rights posed by the design, development and use of AI systems, and take action where appropriate. These may include national AI and data strategies, human rights codes, privacy laws, data protection measures, responsible business practices, and other measures that may protect the interests of persons or groups facing multiple and intersecting forms of discrimination. National measures should take into consideration such guidance provided by human rights treaty bodies and international initiatives, such as human-centered values identified in the OECD Recommendation of the Council on Artificial Intelligence, which was also endorsed by the G20 AI Principles. States should promote the meaningful inclusion of persons or groups who can be disproportionately and negatively impacted, as well as civil society and academia, in determining if and how AI systems should be used in different contexts (weighing potential benefits against potential human rights impacts and developing adequate safeguards).
● States should promote, and where appropriate, support efforts by the private sector, civil society, and all other relevant stakeholders to increase transparency and accountability related to the use of AI systems, including through approaches that strongly encourage the sharing of information between stakeholders, on topics such as the following:
○ user privacy, including the use of user data to refine AI systems, the sharing of data collected through AI systems with third parties, and if reasonable, how to opt-out of the collection, sharing, or use of user-generated data
○ the automated moderation of user generated content including, but not limited to, the removal, downranking, flagging, and demonetization of content
○ recourse or appeal mechanisms, when content is removed as the result of an automated decision
○ oversight mechanisms, such as human monitoring for potential human rights impacts
● States, as well as the private sector, should work towards increased transparency, which could include providing access to appropriate data and information for the benefit of civil society and academia, while safeguarding privacy and intellectual property, in order to facilitate collaborative and independent research into AI systems and their potential impacts on human rights, such as identifying, preventing, and mitigating bias in the development and use of AI systems.
● States should foster education about AI systems and possible impacts on human rights among the public and stakeholders, including product developers and policy-makers. States should work to promote access to basic knowledge of AI systems for all.", see: https://freedomonlinecoalition.com/news/foc-issues-joint-statement-on-artificial-intelligence-and-human-rights/
[2] FOC issues Joint Statement on Spread of Disinformation Online, 11 November 2020: "The FOC calls on governments to:
• Abstain from conducting and sponsoring disinformation campaigns, and condemn such acts.
• Address disinformation while ensuring a free, open, interoperable, reliable and secure Internet, and fully respecting human rights.
• Improve coordination and multi-stakeholder cooperation, including with the private sector and civil society, to address disinformation in a manner that respects human rights, democracy and the rule of law.
• Implement any measures, including legislation introduced to address disinformation, in a manner that complies with international human rights law and does not lead to restrictions on freedom of opinion and expression inconsistent with Article 19 of the International Covenant on Civil and Political Rights.
• Respect, protect and fulfill the right to freedom of expression, including freedom to seek, receive and impart information regardless of frontiers, taking into account the important and valuable guidance of human rights treaty bodies.
• Refrain from discrediting criticism of their policies and stifling freedom of opinion and expression under the guise of countering disinformation, including blocking access to the Internet, intimidating journalists and interfering with their ability to operate freely.
• Support initiatives to empower individuals through online media and digital literacy education to think critically about the information they are consuming and sharing, and take steps to keep themselves and others safe online.
• Take active steps to address disinformation targeted at vulnerable groups, acknowledging, in particular the specific targeting of and impact on women and persons belonging to minorities.
• Support international cooperation and partnerships to promote digital inclusion, including universal and affordable access to the Internet for all.", see: https://freedomonlinecoalition.com/news/foc-issues-joint-statement-on-spread-of-disinformation-online/
[3] FOC issues Joint Statement on Spread of Disinformation Online, 11 November 2020: "The FOC urges social media platforms and the private sector to:
• Address disinformation in a manner that is guided by respect for human rights and the UN Guiding Principles on Business and Human Rights.
• Increase transparency into the factors considered by algorithms to curate content feeds and search query results, formulate targeted advertising, and establish policies around political advertising, so that researchers and civil society can identify related implications.
• Increase transparency around measures taken to address the problems algorithms can cause in the context of disinformation, including content take down, account deactivation and other restrictions and algorithmic alterations. This may include building appropriate mechanisms for reporting, designed in a multi-stakeholder process and without compromising effectiveness or trade secrets.
• Promote users' access to meaningful and timely appeal processes to any decisions taken in regard to the removal of accounts or content.
• Respect the rule of law across the societies in which they operate, while ensuring not to contribute to violations or abuses of human rights.
• Use independent and impartial fact-checking services to help identify and highlight disinformation, and take measures to strengthen the provision of independent news sources and content on their platforms.
• Support research by working with governments, civil society and academia and, where appropriate, enabling access to relevant data on reporting, appeal and approval processes, while ensuring respect for international human rights law.", see: https://freedomonlinecoalition.com/news/foc-issues-joint-statement-on-spread-of-disinformation-online/
[4] FOC issues Joint Statement on Spread of Disinformation Online, 11 November 2020: "The FOC urges civil society and academia to:
• Continue research into the nature, scale and impact of online disinformation, as well as strategic level analysis to inform public debate and government action.
• Adequately consider the impact of disinformation on women and marginalized groups who are targeted by disinformation campaigns in this research.
• Engage with the private sector and governments to share findings and collaborate on research, whilst ensuring appropriate privacy protections are in place.
• Actively participate in public debate and in multi-stakeholder initiatives looking to address disinformation and emphasize the necessity of evidence-based discussion.", see: https://freedomonlinecoalition.com/news/foc-issues-joint-statement-on-spread-of-disinformation-online/