Regulating Artificial Intelligence (AI) is, without overstating its importance, one of the most important dilemmas of our digital age. The European Union is now in the process of finding its own answers to this dilemma, answers which, it already seems fair to say, will differ from those of other global powers such as the US or China.
In February, the European Commission released a White Paper on AI putting forward its perspective, which drew inspiration from the work of the High-Level Expert Group on AI that the Commission had previously assembled to support its policy work in this area. The publication of the White Paper confirmed what had long been known: that the EU wants to take a leadership position in regulating AI while remaining competitive in this field at the global level.
Yet the way forward is not entirely clear. This can be seen from the approach the Commission took in its consultation, which put forward four options ranging from a soft-law-only approach to regulating every single application of AI.
At The Good Lobby, we welcomed the opportunity to provide our feedback and our answer as to the best way forward, having regard to our values and mission and seeking to protect citizens and their fundamental rights. We believe regulating AI matters for civil society organizations, a topic we have addressed previously, and we are committed to actively participating in this debate while also enabling other organizations to do so.
We strongly believe that the best way forward is not through self-regulation, as the risks that AI poses are too great, and that the EU needs to intervene decisively and comprehensively. Our feedback, which is also available on the Commission’s website, is reproduced in its entirety below.
The Good Lobby welcomes the opportunity to provide feedback on this legislative initiative. Without a doubt, Artificial Intelligence (AI) is already changing the world, and its potential impacts have long been discussed and analysed. Beyond the many benefits of this technology, presented by the Commission both in its White Paper and in its Communication on AI, it also poses numerous risks, some of which have been highlighted in the same documents and in the Ethics Guidelines prepared by the High-Level Expert Group on AI. We therefore find it encouraging and commendable that the European Commission is paying close attention to how AI can be regulated, and we believe this is the right time for it to do so.
Looking at the options provided, The Good Lobby strongly supports a combination of options 2, 3a, and 3b. The risks that AI poses, which are very much present, with plenty of concrete examples of biased algorithms from the past 12 months alone, are too great to be addressed through a soft-law approach. We believe that in order to address these risks appropriately there is a need for a decisive and comprehensive legislative intervention.
While a voluntary labelling scheme – option 2 – is not in and of itself sufficient, we recognise that there is some value to this idea. As suggested in the White Paper, such an approach could be appropriate for low-risk applications of AI. Nevertheless, we are not persuaded that it would properly address the challenges posed by high-risk AI.
In expressing our strong support for a combination of options 3a and 3b, we are taking into account the need to encourage innovation, the risk of fragmentation and uncertainty in the absence of a single common set of rules, and the difficulties inherent in defining different categories of risk.
While option 3c would have the advantage of minimising the risk of fragmentation and uncertainty, it would also greatly stifle innovation and have negative consequences for the development of AI in Europe, discouraging organizations, and in particular SMEs, from engaging with this technology. Should that happen, Europe would lag behind the rest of the world, missing out on the benefits that AI can bring. Without a strong position in development, Europe is also likely to lose credibility as a regulator. Furthermore, we are aware that numerous applications of AI are innocuous, such as GPS applications using AI to predict the quickest route to a destination. Given how much AI applications differ in their nature and potential negative impacts, we believe different approaches for different applications are necessary, despite the risk of fragmentation and uncertainty, which we hope the Commission will bear in mind.
We do not support regulating only certain categories of AI. Such an approach risks being too narrow, too broad, or both, depending on the circumstances. There is a danger that applications of AI posing considerable risks would not be covered because they fall outside a loosely or narrowly defined category, or that low-risk applications within such a category would be over-regulated. Nevertheless, we see value in option 3a, and that is why we recommend pursuing it together with options 3b and 2, as discussed above. The value we see lies in regulating certain particular uses of AI, for example remote biometric identification systems. In such cases the risks are so clear and blatant that comprehensive regulation is, without a doubt, necessary. We believe a serious debate is needed on the use of facial recognition, together with very strong regulation, and we regret that the initial plan for a complete EU ban on this technology seems to have been dropped.
Of the options put forward, the most appropriate, in our view, is option 3b, despite the repeatedly emphasised difficulties of defining different categories of risk accurately and efficiently, and the risks of fragmentation and uncertainty already discussed. While the White Paper does provide two criteria against which risks could be assessed, this is just the beginning of a difficult process of setting out categories of risk, which will have to be kept under constant review and remain flexible enough to allow for modifications where necessary.
Of the two criteria provided, we believe that risks should be assessed on the basis of the impact on rights and safety rather than the sector and specific use. As we speak of a human-centric approach, it seems only fitting that this should be the case.
On enforcement, The Good Lobby believes that to be effective it needs to combine an ex-ante mechanism, which allows for scrutiny and questioning of the design of an AI system, with an ex-post approach, which reviews how the system actually works and the concrete decisions it produces, allowing for the correction of legal errors and the intervention of human oversight – or equity and mercy, as described by Lord Sales.
Finally, as an overarching point, The Good Lobby wants to emphasise the need to ensure that existing EU legislation can adequately protect individuals from the risks posed by AI. While it is true that the EU has a strong non-discrimination framework in place, the Commission must ensure, through this legislative proposal or others, that such legislation is fit for purpose and fit for the current times.
The reality is that a technology such as AI, which has so many applications and can give rise to so many different risks, needs to be approached with a certain degree of flexibility, particularly so as not to hamper innovation. We recognise that having different rules could be difficult and confusing for businesses, civil society organizations and citizens alike, at least initially, but this, we believe, despite its shortcomings, is the best way forward. At the same time, we hope that the Commission will take the necessary steps to address any resulting uncertainty, by engaging in awareness-raising and by supporting other organizations who do so too.
1. See J. Khan, “The Problem with the EU’s AI strategy”, Fortune, February 2020, available at https://fortune.com/2020/02/25/eu-a-i-whitepaper-eye-on-a-i/
2. Lord Sales, “Algorithms, Artificial Intelligence and the Law”, Judicial Review, 2020, Vol. 25, No. 1, pp. 46-66, at p. 53.