When discussing ‘AI’, it’s tempting to look at future extremes: the potential dangers of killer drones gone rogue, or the job prospects for people when robots possibly outperform them in every conceivable way. But there is no need for speculation. Actual developments are already interesting – and worrying – enough.

Data is being amassed, combined and used for automated decisions that we don’t understand, but that affect us nonetheless. Who knows why Google Search shows what it does and on what personal data that is based? Who can be sure they are seeing the same deals on Booking.com as others, and if not, why not? The lack of transparency creates ample scope for manipulation and discrimination. At scale, systems such as YouTube’s video recommendation algorithm or Facebook’s News Feed also affect important public values, such as democracy and tolerance.

Moreover, public authorities across Europe are collecting and combining data, and increasingly use them to decide who gets access to social services and who will be targeted by police and other authorities. As Professor Virginia Eubanks noted, when such systems are ‘not built to explicitly dismantle structural inequities, their speed and scale intensify them.’ By using existing data, inevitably biased and partial, they tend to exacerbate existing inequalities and project them into the future.

The EU needs to act

Of course, the EU’s General Data Protection Regulation (GDPR) restricts how and when people’s personal data can be used and offers some protection. But this law is not designed to address all the human rights concerns that algorithmic decision-making systems raise, which go well beyond the right to privacy and personal data. They pose security risks, can discriminate, and can curtail access to public services and free speech. This can happen at a scale and speed that simply goes beyond the individual rights approach of the GDPR.

Hence, already in early 2017, the European Parliament asked the European Commission to propose legislation in the area of AI and robotics. In response, the Juncker Commission created a High-Level Expert Group on ‘AI’, which published ethics guidelines and recommendations for policy and investment by June 2019.

Unfortunately, the expert group of 52 members had an overwhelming number of industry representatives and just four ethicists. According to one of them, philosophy professor Thomas Metzinger, the resulting ethics guidelines amounted to ‘ethics washing’. In any case, ethics are no replacement for binding rules. Individual goodwill cannot be relied upon to get transparency from firms, and increasingly from public authorities, about which automated decision-making systems they use, for what purpose and based on what data, nor to restrict certain uses.

Beyond the ethics guidelines, the policy recommendations put forward by the expert group are for the most part not actionable. There are some valuable ideas around access to data and data-sharing between firms, but the recommendations made by the German Data Ethics Commission in October 2019, for instance, provide more practical input for legislation. Not surprisingly, that group did not suffer from industry over-representation.

What’s in store?

More positively, in summer 2019, European Commission President-elect Ursula von der Leyen announced ‘legislation for a coordinated European approach on the human and ethical implications of AI’, within the first 100 days of taking office. While that deadline was optimistic and will not be met, legislative proposals are expected by the end of 2020.

Even that will be a challenge, though. ‘AI’ touches upon existing rules on non-discrimination and data protection, product safety and civil liability; not to mention the internal market and innovation aspects. These areas are spread out among different departments within the Commission, with different political leadership and institutional interests. Already, the European Commissioner in charge of the Internal Market, Thierry Breton, has said that he would not be ‘the voice of regulation on AI’, contradicting von der Leyen’s statements. Instead, he appears more interested in the underlying data ecosystem, as he considers the datasets on which AI systems are trained as a strategic asset. In short, he intends to ensure that the data generated in Europe is more widely shared among European businesses to spur innovation on the continent.

While that aim may be worthwhile, it will have to be cleared with the Commission department in charge of protecting citizens’ personal data — the Directorate-General for Justice and Consumers. This department may be reluctant to condone data-sharing strategies because the dividing line between personal and non-personal data is often blurry. And it’s still struggling to ensure the General Data Protection Regulation (GDPR) is effectively enforced, with a review of the legislation scheduled for May this year. This department is also instrumental for any new rules on civil liability for algorithmic systems — i.e. what are the responsibilities of developers, producers, users and software providers, when something goes wrong?

It’s up to Executive Vice-President Margrethe Vestager to coordinate the work in this area, as she oversees the wider digital strategy of the Commission and directly heads the competition department. The latter will be involved in questions around data-sharing between firms and the problem that a few big firms hoard and control large datasets that are key for AI applications. While Vice-President Vestager is comfortable using competition policy to fine firms and protect consumers, that is different from the pro-active industrial policies on data that Commissioner Breton and others favour. In short, tough internal discussions lie ahead.

So what will happen?

At this stage, it’s clear the Commission will not propose legislation in the coming month. Instead, it will put forward a White Paper on ‘AI’, a digital strategy and a strategy for data, most likely on 19 February 2020. Legislation on AI may follow at the end of the year, as well as rules to facilitate data sharing by online platforms, to fuel ‘AI’ applications. The content of these initiatives is unclear, but a leaked version of the White Paper gives a rough idea.

In the document, the Commission prefers mandatory rules for high-risk ‘AI’ systems, changes to the civil liability framework and a strengthened framework of public oversight and enforcement. This is laudable and necessary, but the devil will be in the details — and they are not included. There’s no clarity on the types of obligations high-risk systems will have to respect in terms of transparency, accountability and safety. As to the data side, the Commission has more questions than answers at this stage, notably on access to data, data-sharing, and biased data.

Therefore, most of 2020 will likely be devoted to consultations on these topics. But it is important that the Commission gets to work quickly on the legal framework. The GDPR took almost 6.5 years from adoption by the Commission until application in practice — and this does not include effective enforcement. While the EU is consulting and making declarations about ethical AI, a surveillance and prediction infrastructure, including facial recognition technology, is being rolled out across Europe.