Q2/2019 - European Union

Brussels, 8 April/14 June

In the run-up to the European Parliament elections, there had been warnings about manipulation through the misuse of online platforms. The elections were thus a first acid test for the effectiveness of the voluntary "Code of Practice on Disinformation" concluded between the EU and the online platforms Google, Twitter and Facebook in October 2018. In a joint statement of 14 June 2019, EU Commissioners Andrus Ansip (Vice-President), Věra Jourová (Justice), Julian King (Security) and Mariya Gabriel (Digital Economy and Society) as well as EU High Representative for Foreign Affairs Federica Mogherini said that the Code of Practice had essentially passed the test and that efforts to curb the misuse of online platforms to manipulate the elections had largely been successful. The EU officials acknowledged in particular that the online platforms had made progress on transparency in political advertising and on taking down disinformation. But they also made clear that this could only be a first step. Their requests to the online platforms included the demands to improve cooperation with fact checkers, to better enable users to recognise disinformation, and to make more extensive data available to scientific institutions for independent research. They further stated that more information was needed about those behind manipulation and disinformation campaigns. On 22 May 2019, Microsoft joined the Code of Practice on Disinformation. The platforms report monthly to the EU Commission.[1]

In June 2018, the EU Commission appointed the "High-Level Expert Group on Artificial Intelligence" and asked it to develop ethics guidelines for trustworthy artificial intelligence. More than 50 specialists worked in the expert group, which was chaired by Pekka Ala-Pietilä from Finland.[2]

The Expert Group proposes a definition of artificial intelligence. "Artificial intelligence (AI) refers to systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals. AI-based systems can be purely software-based, acting in the virtual world (e.g. voice assistants, image analysis software, search engines, speech and face recognition systems) or AI can be embedded in hardware devices (e.g. advanced robots, autonomous cars, drones or Internet of Things applications)."[3]

The 40-page report analyses the opportunities and risks of the evolution of artificial intelligence and concludes that user trust in artificial intelligence is the key element of its further development. All developments and applications should therefore remain within the framework of existing law and comply with the seven ethical principles established by the expert group:

  • Human agency and oversight: AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights, and thus enable an equitable society. In no way should they reduce, restrict or misdirect the autonomy of human beings.
  • Technical robustness and safety: A trustworthy AI system requires safe, reliable and robust algorithms that are suited to manage defects and inconsistencies in all phases of an AI system’s life cycle.
  • Privacy and data governance: Individuals should maintain full control over their personal data, and the data collected about them should not be used to unlawfully or unfairly discriminate against them.
  • Transparency: Traceability of the AI systems must be ensured.
  • Diversity, non-discrimination and fairness: AI systems should take into account the full range of human capabilities, skills and requirements and ensure e-accessibility for all.
  • Societal and environmental well-being: AI systems should be used to promote positive social and societal change, sustainability and environmental responsibility.
  • Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

As a result of the Expert Group's report, the EU Commission has set up a European Alliance for Artificial Intelligence (AI Alliance). This new multi-stakeholder network is to initiate and accompany pilot projects and participate in the relevant discussions within the framework of the G20 and G7. Plans involve the establishment of networks of AI centers of excellence and digital innovation centers. EU Vice-President Andrus Ansip considers the European focus on the ethical dimension of artificial intelligence a competitive advantage for the region: "The ethical dimension of AI is not a luxury feature or an add-on. It is only with trust that our society can fully benefit from technologies. Ethical AI is a win-win proposition that can become a competitive advantage for Europe: being a leader of human-centric AI that people can trust."[4]

[1] A Europe that protects: EU reports on progress in fighting disinformation ahead of European Council, Brussels, 14 June 2019: "Efforts to ensure the integrity of services have helped close down the scope for attempted manipulation targeting the EU elections but platforms need to explain better how the taking down of bots and fake accounts has limited the spread of disinformation in the EU. Google, Facebook and Twitter reported improvements to the scrutiny of ad placements to limit malicious click-baiting practices and reduce advertising revenues for those spreading disinformation. However, no sufficient progress was made in developing tools to increase the transparency and trustworthiness of websites hosting ads. Despite the achievements, more remains to be done: all online platforms need to provide more detailed information allowing the identification of malign actors and targeted Member States. They should also intensify their cooperation with fact checkers and empower users to better detect disinformation. Finally, platforms should give the research community meaningful access to data, in line with personal data protection rules. In this regard, the recent initiative taken by Twitter to release relevant datasets for research purposes opens an avenue to enable independent research on disinformation operations by malicious actors. Furthermore, the Commission calls on the platforms to apply their political ad transparency policies to upcoming national elections." See: http://europa.eu/rapid/press-release_IP-19-2914_en.htm
[2] Ethics guidelines for trustworthy AI, Final Report of the High-Level Expert Group on AI, Brussels, 8 April 2019, see: https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
[4] Artificial intelligence: Commission takes forward its work on ethics guidelines, Brussels, 8 April 2019, see: https://europa.eu/rapid/press-release_IP-19-1893_en.htm