A European Approach to Regulate Artificial Intelligence: Possible Global Impact
Krishna Ravi Srinivas, PhD, Consultant & Senior Fellow, RIS
On 21 April 2021, the European Commission (EC) unveiled an ambitious and broad proposal to regulate Artificial Intelligence (AI) in Europe. It is primarily aimed at regulating AI and its applications in the European Union. However, given the EC's role as a regulator and promoter of innovation, and given that the EU is a major market for AI as well as home to institutions and companies doing pioneering work in AI, the proposal has global implications. The proposal ('Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts'), together with the Proposal for a Regulation of the European Parliament and of the Council on machinery products, will now undergo a lengthy review process at the European Parliament and at the Council, which represents the 27 national governments. The final approved version may differ from what the EC has proposed.
News reports and commentators have highlighted the provisions that ban certain applications and the proposal's identification of high-risk applications of AI that would be subjected to supervision and to standards for development. The overall thrust is to strike a balance between promoting innovation, building public trust in AI and its applications, and identifying applications that are contrary to European values and fundamental human rights. Although the regulations have drawn critiques and skeptical views, the proposal is an ambitious attempt to develop a holistic regulatory framework.
What is interesting and noteworthy from a governance perspective is that the EC chose this approach after considering four options, and that it also provides for regulatory sandboxes. The EU is keen to take an early lead in regulating new technologies and enforcing new regulations, and it has tried this in, inter alia, data protection, digital competition, and online content moderation. In all these areas the EU has sought to base its rules and regulations on its core values. Through such initiatives the EU attempts to set a template or model for regulating a given technology or application, which other countries may then adopt or adapt. For example, although adhering to the General Data Protection Regulation (GDPR) is considered cumbersome, the GDPR has emerged as a model for other countries. When it comes to AI, the EU has been looking at it from a holistic perspective, considering its visions and plans for a Digital Europe and the Commission's White Paper on AI published in 2020, which set out the vision for AI in Europe as an ecosystem of excellence and an ecosystem of trust.
The regulation bans some applications outright. For applications considered high risk, providers must make available detailed documentation on how the AI system complies with the rules; in addition, a 'proper level of human oversight' must be evident in system design and application, and quality requirements for the data used to train AI software have to be met. Thus, compliance should not be an issue for applications that are not high risk and are otherwise acceptable. There is a provision for fines of up to 6 percent of yearly global revenue in cases of serious violations. For companies and others who want to serve the European market and provide goods and services to its consumers, there are therefore challenges, including making their AI applications and deployments compatible with the regulations. But as Europe is itself a major innovator in AI, this is a challenge to, inter alia, start-ups in Europe too.
To what extent this regulation will influence the global regulation of AI is not clear at this moment. But Europe has the first-mover advantage, as it has proposed a model that eschews both tight state control over the development and deployment of AI and the use of AI and big data to reward and punish individual and social behavior. Only applications and deployments falling under the high-risk category would be heavily regulated, and this risk-based approach gives flexibility in regulation, as it is based not on the technology itself but on the purpose for which AI is applied or deployed.
In terms of innovation governance, the EU proposal is an example of promoting responsible innovation in AI and of linking trust with excellence in harnessing and advancing science, technology and innovation. Moreover, the provision for regulatory sandboxes indicates that regulation is not a top-down approach: there is scope for innovators and regulators to work together, learn about and understand the issues in regulating emerging technologies and applications, and enhance regulation accordingly. This can result in agility in regulation and governance. In 2018, the EC and the member states adopted the Coordinated Plan on AI to establish policy coordination and develop national strategies; this Coordinated Plan will now be reviewed. The proposal envisages setting up a European Artificial Intelligence Board, consisting of representatives of the relevant regulator from each member state, the EC and the European Data Protection Supervisor. The Board is expected to play a key role in the harmonized implementation of the regulations and in recommending cases that could be deemed 'high risk'.
To sum up, this is an ambitious attempt to regulate AI and will have implications for the development and deployment of AI in Europe and elsewhere. While it is too early to state that this will have larger repercussions for every country that wants to regulate AI, one need not be surprised if the EU/EC uses soft power to promote it as a model solution to the dilemmas of regulating AI.
(These are preliminary observations; a longer article on the proposed regulations on AI will be published soon.)