The Internet of Wild Things: Why EU Cybersecurity makes me wannacry

More than 200,000 computers infected, more than 150 countries affected, a total cost of up to 4 billion USD, and countless hospitals, factories, and services shut down – that was the result of the worldwide cyberattack known as WannaCry. Putting its direct impact aside, however, the WannaCry attack painfully laid bare the cyber vulnerabilities of our age.

Cyberattacks are no longer a niche issue or futuristic material for blockbuster movies like Die Hard 4.0. They’re a daily reality. Last year in Europe, almost 70% of large businesses and half of all small businesses are estimated to have been victims of a cyberattack. And that’s no great surprise. The majority of companies don’t even have formal cybersecurity policies in place. Perhaps most worrying is that a large share of cyber intrusions focuses on a crucial people-oriented sector: healthcare. A recent report into cybersecurity trends highlighted that almost 20% of all attacks target that particular industry. The WannaCry attack, for example, affected many National Health Service hospitals in England and Scotland. All kinds of devices were infected, ranging from computers and blood-storage refrigerators to MRI scanners.

The alarm bells have been ringing for some time now, and WannaCry was a wake-up call that, due to its global reach, finally jolted some into action. Germany, for example, has now stated its intention to update its cybersecurity legislation to include the healthcare and financial sectors in its list of critical industries that require minimum cybersecurity standards.

The European Union has also been working on its cybersecurity legislation, having put into place its NIS Directive (Directive on security of network and information systems), which must be implemented by the EU Member States by May 2018. This Directive obliges critical institutions and infrastructures to meet basic minimum cybersecurity standards.

However, in this fast-evolving brave new digital world, catching up just ain’t good enough. The pace and extent of digitalisation are such that legislation needs to adapt accordingly. That means not just reacting, but proactively considering and putting into place cybersecurity safeguards for emerging fields.

Digitalisation is a base innovation, similar to the harnessing of electricity. It is spreading like wildfire into every sector, service and product. Everything is becoming connected. The Internet of Things is a prime example, where regular products from coffee machines to baby dolls are being digitised. But our cybersecurity legislation is not taking this into account. These devices are not required to meet any basic cybersecurity standards. As such, the Internet of Things can, through cyber-manipulation, actually turn into an Internet of Wild Things.

Just last year, a massive cyberattack took control of thousands of internet-connected devices – ranging from cameras and kettles to thermostats and TVs – and then used this “zombie army” of things to take down sites such as Twitter, Spotify and PayPal.

These internet-connected products are often sold as “smart devices”. They’re not. Without basic cybersecurity standards, they’re stupid devices. They open a cyber door into our digital and physical lives. They can be the entry point, allowing someone to cross over into other digital areas, such as your credit card details. They create vulnerabilities, as has also been shown with connected cars that have been hacked into and, so to speak, hijacked.

There’s a large vacuum in this field that needs to be filled. And the longer it takes to fill it, the more vulnerable digital society will become, because any new legislation calling for basic IT standards on connected devices would arguably apply only to new devices sold on the market. But what about all those smart devices already in circulation?

Secondly, there’s another fundamental question to be asked about connected devices. Let’s be realistic: a one-off cybersecurity standard won’t do. Cybersecurity is a non-stop game in which software needs to be continually updated and expanded. That was also one of the reasons why so many computers were infected by WannaCry – they were running out-of-date versions of Windows. How will such a process for updating connected devices be put into place? Is it realistic, and would companies want to take on that responsibility? And what will its impact be? Izabella Kaminska recently asked in an op-ed in the Financial Times what would happen in the case of self-driving cars if one encounters “the spinning wheel of death (i.e. a software update) just when they need to rush to hospital?” Digital systems can have physical effects.

The European Parliament, in a recently adopted Report on Digitising European Industry, has brought attention to this issue of connectivity. The report states that “producers are responsible for ensuring safety and cybersecurity standards as core design parameters” and that “cyber security requirements for the Internet of Things…would strengthen European cyber-resilience”. Hear, hear! The European Union Agency for Network and Information Security (ENISA) has also been promoting this issue, highlighting its damage potential.

Basic IT security parameters need to be put into place to ensure the Internet of Things doesn’t turn into the Internet of Wild Things. European policymakers need to move this issue forward in spite of industry moaning. As a first step, they could adapt public procurement rules in such a way that any connected device would be required to have basic cybersecurity standards.

Debating Ethics in the Digital Disruption

We are in the midst of perhaps the largest societal disruption in history. Digitalisation is changing the way we work, consume, communicate, think, live. And this in an incredibly short timeframe. The Industrial Revolution of the 18th century took about 80 years to unfold. It resulted in mass urbanisation and mass technological, economic, societal, social, environmental, geostrategic and political change.

Digitalisation goes beyond this in many ways. Its change is not only faster, it accelerates life itself. It touches upon every sector and every facet of our lives. It’s revolutionising production, information, mobility, and so on. Autonomous cars, robots, artificial intelligence, automation, social media are all transformative. They will define us; they will define society. This is the dream of the Silicon Valley pioneers: harnessing the transformative powers of digitalisation to change society for the better.

This starry-eyed idealism, however, has some dark downsides. We’re aware of the information cocoons and echo chambers that our social media build, which lead to greater social polarisation and extremism. We know that digitalisation impacts attention, memory, empathy and attitude. We know that it will impact the job market. But we don’t know how it will do so, at what speed and depth, and what its meta-impact will be on our society at large.

Different doomsday scenarios are emerging. In a world of robots and artificial intelligence (AI), Israeli historian Yuval Noah Harari is already talking of an upcoming “useless class” – a class of humans living in a post-work world with no economic purpose. Principal researcher at Microsoft Research, Kate Crawford, has warned that in a world of rising nationalism, AI could be a “fascist’s dream”, allowing authoritarian regimes an unprecedented amount of command and control. And Sir Mark Walport, the UK’s chief scientific adviser, too has cautioned against the uncontrolled use of AI in areas such as medicine and the law, because AI can’t be neutral. AI is based on humans and human data. That means AI can take on the very biases that we have, magnify them, and then act on them.

All kinds of ethical questions are emerging in this digital disruption. Who will be liable when an autonomous car crashes or a robotic surgery goes wrong? What algorithms should autonomous cars use – drive over and kill the pedestrian if it saves your life, or crash and kill yourself, saving the pedestrian? What degree of regulation does social media require to combat fake news and hate speech without impacting freedom of speech? Should robots be banned from certain tasks? Should there be regulation stating that final decisions must still be made by a human rather than by artificial intelligence, akin to the military rule that a “kill decision” always has to be made by a human? How do we distribute wealth created by robots? And what happens to the workers displaced by them?

The problem is, as Reinhold Niebuhr noted in his seminal work Moral Man and Immoral Society, that “the growing intelligence of mankind seems not to be growing rapidly enough to achieve mastery over the social problems, which the advances of technology create”. It is high time for a public debate on ethics and digital technology. Leading AI companies are already moving ahead. Facebook, Amazon, Google DeepMind, IBM, Apple, and Microsoft – this unholy alliance of competitors – have already joined hands in a Partnership on AI designed to initiate more public discussion on artificial intelligence. Why would that be? Because it’s about their business. Loss of public trust in these new technologies could significantly affect their business, burning through the billions of dollars in research budgets put into AI.

The public discussion should not be for business to shape. Public authorities have a leadership responsibility to put this on the political agenda, start this discussion and engage citizens in it. Some progress is taking place. The UK has established a Data Ethics Group at the Alan Turing Institute, and the European Commission also deserves particular praise. Together with media partners from 19 EU countries (such as El País, The Guardian, Frankfurter Allgemeine Zeitung, Gazeta Wyborcza, etc.), it has launched a massive set of internet consultations, engaging citizens in surveys on the impact of the digital world on jobs, privacy, health, democracy, security, and so on.

We need an ethical discussion on digital technologies to ensure that the safeguards we have in place for the analogue world also apply in the digital world. Digitalisation is shaping and defining us. But we need to be the ones who shape and define it – so that we reap its greatest advantages with the fewest disadvantages.