Q4/2019 - World Economic Forum (WEF)

Davos, October 2019

Studies on Cyber Security and on Artificial Intelligence

In the 4th quarter of 2019, the World Economic Forum Davos published two studies, one on cyber security and one on artificial intelligence. The two studies were supervised by the WEF Centre for Cybersecurity in Geneva and the WEF Centre for the Fourth Industrial Revolution in San Francisco.

The WEF considers cyber attacks one of the “top 10 global risks” mankind will be exposed to in the next decade. According to the WEF, extensive efforts worth several trillion dollars will be required over the next ten years to keep pace with technological change and to ensure security. Above all, however, political and economic leaders must understand that cyber security has to be a top priority of their leadership. The study includes a “Cybersecurity Guide for Leaders in Today’s Digital World”, which contains “10 tenets that describe how cyber resilience in the digital age can be formed through effective leadership and design”[1].

The study on artificial intelligence points out both the opportunities and the dangers that arise with AI and that will fundamentally affect the life of every individual in the coming decade. It urges that the development of AI not be left unguided. According to the study, society needs a comprehensive multistakeholder discussion in order to be able to intervene sensibly in future developments[2].

Speech by Bruce Schneier on the Relationship between Policymakers and Technology

On 12 November 2019, the world-renowned Internet security expert Bruce Schneier gave a presentation to the World Economic Forum. In his speech he denounced the lack of cooperation between technologists and policymakers. In his view, such a disconnect is utterly dangerous in a world of extremely rapid technological change; developments could get “out of control”. Schneier complained that there were too few incentives for talented technologists to go into politics. Technologists were often not aware of the political and societal implications of their research. On the other hand, most politicians did not understand the new technologies on which they might have to make political decisions. He called for qualification programs for both technologists and politicians so that the two stakeholder groups can work hand in hand[3].

More on this topic
[1] The Cybersecurity Guide for Leaders in Today’s Digital World, World Economic Forum, Davos: “Cyberattacks are one of the top 10 global risks of highest concern in the next decade, with an estimated price tag of trillion if cybersecurity efforts do not keep pace with technological change. While there is abundant guidance in the cybersecurity community, the application of prescribed action continues to fall short of what is required to ensure effective defence against cyberattacks. The challenges created by accelerating technological innovation have reached new levels of complexity and scale – today responsibility for cybersecurity in organizations is no longer one Chief Security Officer’s job, it involves everyone. The Cybersecurity Guide for Leaders in Today’s Digital World was developed by the World Economic Forum Centre for Cybersecurity and several of its partners to assist the growing number of C-suite executives responsible for setting and implementing the strategy and governance of cybersecurity and resilience in their organization. The guide bridges the gap between leaders with and without technical backgrounds. Following almost one year of research, it outlines 10 tenets that describe how cyber resilience in the digital age can be formed through effective leadership and design”, see
[2] A Framework for Developing a National Artificial Intelligence Strategy, World Economic Forum, Centre for the Fourth Industrial Revolution, Davos, 16. Conclusion: “The Fourth Industrial Revolution and the artificial intelligence at its core are fundamentally changing the way we live, work and interact as citizens. The complexity of this transformation may look overwhelming and to many threatening. We should remember that all technologies are social constructs shaped by our individual and collective choices. Indeed, AI technologies have no other objectives than the ones that we assigned them. Yet our failure to proactively shape their development may lead to unfortunate outcomes. Therefore, this is the time not for regrets but for decisive action to forge a positive way forward. We must engage in a multistakeholder collaboration to actively guide the ongoing revolution and ensure benefits for the many rather than the few. Careful planning is the most effective way to ensure positive outcomes. From this perspective, we strongly encourage nations around the world to design their own national AI strategies, not to win the global AI race but as an expression of their duty to protect and provide for their citizens in this time of technological change. To this end, we have prepared a short framework for designing a national AI strategy, building on the insights of those already released to help those who have not yet done it. Through the World Economic Forum’s Centre for the Fourth Industrial Revolution, we will support volunteer governments in the design of their strategy. Thus, this framework will be tested on the ground and key learnings will be disseminated publicly.” http://www3.weforum.org/docs/WEF_National_AI_Strategy.pdf
[3] Bruce Schneier, We must bridge the gap between technology and policymaking. Our future depends on it, speech by Bruce Schneier to the World Economic Forum Davos, Geneva, 12 November 2019: “Technologists and policymakers largely inhabit two separate worlds. It's an old problem, one that the British scientist CP Snow identified in a 1959 essay entitled The Two Cultures. He called them sciences and humanities, and pointed to the split as a major hindrance to solving the world’s problems. The essay was influential – but 60 years later, nothing has changed. When Snow was writing, the two cultures theory was largely an interesting societal observation. Today, it’s a crisis. Technology is now deeply intertwined with policy. We're building complex socio-technical systems at all levels of our society. Software constrains behaviour with an efficiency that no law can match. It's all changing fast; technology is literally creating the world we all live in, and policymakers can’t keep up. Getting it wrong has become increasingly catastrophic. Surviving the future depends on bringing technologists and policymakers together. Consider artificial intelligence (AI). This technology has the potential to augment human decision-making, eventually replacing notoriously subjective human processes with something fairer, more consistent, faster and more scalable. But it also has the potential to entrench bias and codify inequity, and to act in ways that are unexplainable and undesirable. It can be hacked in new ways, giving attackers from criminals and nation states new capabilities to disrupt and harm. How do we avoid the pitfalls of AI while benefiting from its promise? Or, more specifically, where and how should government step in and regulate what is largely a market-driven industry? The answer requires a deep understanding of both the policy tools available to modern society and the technologies of AI. In his book Future Politics, Jamie Susskind writes: "Politics in the twentieth century was dominated by a central question: how much of our collective life should be determined by the state, and what should be left to the market and civil society? For the generation now approaching political maturity, the debate will be different: to what extent should our lives be directed and controlled by powerful digital systems – and on what terms?" But AI is just one of many technological areas that needs policy oversight. We also need to tackle the increasingly critical cybersecurity vulnerabilities in our infrastructure. We need to understand both the role of social media platforms in disseminating politically divisive content, and what technology can and cannot do to mitigate its harm. We need policy around the rapidly advancing technologies of bioengineering, such as genome editing and synthetic biology, lest advances cause problems for our species and planet. We're barely keeping up with regulations on food and water safety – let alone energy policy and climate change. Robotics will soon be a common consumer technology, and we are not ready for it at all. Addressing these issues will require policymakers and technologists to work together from the ground up. We need to create an environment where technologists get involved in public policy – where there is a viable career path for what has come to be called “public-interest technologists.” The concept isn’t new, even if the phrase is. There are already professionals who straddle the worlds of technology and policy. They come from the social sciences and from computer science. They work in data science, or tech policy, or public-focused computer science. They worked in Bush and Obama’s White House, or in academia and NGOs. The problem is that there are too few of them; they are all exceptions and they are all exceptional. We need to find them, support them, and scale up whatever the process is that creates them.” See https://www.weforum.org/agenda/2019/11/we-must-bridge-the-gap-between-technology-and-policy-our-future-depends-on-it/