Artificial Intelligence (hereinafter: “AI”) is undoubtedly the “technology of the future”. Its use and influence in various aspects of life and activity, however, began long ago. In fact, it keeps claiming more and more territory, even in areas that would have seemed impossible a few years ago. (AI in the service of justice, for example, occupied us in a previous article.) The EU rightly, albeit somewhat belatedly, felt the need to begin the process of proposing relevant rules, to the extent that this is feasible.
The European Commission’s involvement with Artificial Intelligence
Placing confidence in a technology that is largely unknown, difficult to understand, or (to most of us) downright incomprehensible is no easy task. More precisely: it seems outright dangerous, all the more so while no attempt has yet been made to regulate its individual parameters.
The European Commission published, on 21 April 2021, its Proposal for a Regulation laying down harmonized rules on artificial intelligence. This is the European approach to trustworthy AI: AI that serves European principles and values and, above all, the human being. It is an effort to mitigate the (very real) risks arising from AI’s gradual expansion.
The Proposal implements the political commitment of the President of the European Commission, Ursula von der Leyen. In her political guidelines for the 2019-2024 Commission (“A Union that strives for more”), she had announced that the Commission would put forward legislation for a coordinated European approach to the human and ethical implications of AI.
The definition of AI
AI is, as already noted, a concept with which most of us are unfamiliar. Even its definition is not easy to grasp: an “artificial intelligence system” is, according to the Proposal, “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with” (Article 3(1)).
The purpose of the (proposed) Regulation
The importance of the proposed Regulation is evident from its stated purpose: to improve the functioning of the internal market by laying down a uniform legal framework, in particular for the development, marketing and use of artificial intelligence, in conformity with the values of the Union.
It follows that the Regulation pursues objectives of overriding public interest, among them a high level of protection of health, safety and fundamental rights.
At the same time, it seeks to guarantee the free cross-border movement of AI-based goods and services. To that end, it prevents Member States from imposing restrictions on the development, marketing and use of AI systems, unless such restrictions are expressly provided for in the Regulation itself.
The Regulation also recognizes the need for uniform rules at EU level, in order to avoid fragmentation of the internal market and to prevent legal uncertainty. Legal certainty, in turn, is considered necessary to facilitate investment in AI and the development of a single market for lawful, safe and trustworthy AI systems.
The Proposal as a whole reflects the effort to develop AI safely and in harmony with the ethics and values of the EU. That is why, after all, its rules are framed in the light of the risks inherent in the use of AI systems.
The risk-based approach to the use of AI
With regard to the uses of AI, the Regulation takes an approach based on the risk those uses create. Specifically, it distinguishes uses of AI into those that create: (a) unacceptable risk, (b) high risk, and (c) low or, in any event, not high risk (a rough illustrative sketch of this tiering follows at the end of this section).
Unacceptable risk
Some AI systems are considered to pose an unacceptable risk.
The use of such systems is considered unacceptable because it is contrary to the values of the Union (e.g. it violates fundamental rights). These are systems with significant potential to manipulate persons through subliminal techniques, beyond their consciousness, as well as systems that exploit the vulnerabilities of specific vulnerable groups in order to materially distort their behavior.
Such uses of AI are prohibited. The use of “real-time” remote biometric identification systems in publicly accessible spaces for law enforcement purposes is also prohibited, subject, however, to certain narrow exceptions.
High risk
Some AI systems are considered to pose a high risk to the health, safety and fundamental rights of individuals.
Special rules are reserved for these systems. In particular, they are permitted on the European market only subject to compliance with certain mandatory requirements and an ex ante conformity assessment. At the same time, the Regulation imposes clear obligations on the providers and users of such systems, aiming at safety and at compliance with the existing legislation protecting fundamental rights throughout the systems’ operation.
These systems include those used in areas such as:
(a) The biometric identification and categorization of natural persons.
(b) The management and operation of critical infrastructure (e.g. road traffic, water supply, gas supply, heating and electricity).
(c) Education and vocational training (e.g. the evaluation of exam participants).
(d) Employment, employee management and access to self-employment.
(e) Access to and enjoyment of essential private services and public services and benefits (e.g. assessment or rating of individuals’ creditworthiness).
(f) Law enforcement (e.g. assessment of the reliability of evidence in the context of the investigation or prosecution of criminal offenses).
(g) Management of migration, asylum and border control (e.g. verification of the authenticity of travel documents and supporting documents of natural persons).
(h) The administration of justice and democratic processes.
Not high risk
Only very limited transparency obligations apply to non-high-risk AI systems.
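For readers who prefer to see the structure in code, the Proposal’s three-tier logic can be loosely sketched as follows. This is a purely illustrative Python sketch, not a legal tool: the tier labels, the set of prohibited practices and the high-risk areas are shortened paraphrases of the categories described above, and the classification function is our own simplification, not the Proposal’s actual legal test.

```python
from enum import Enum


class RiskTier(Enum):
    """Illustrative paraphrase of the Proposal's risk-based tiers."""
    UNACCEPTABLE = "prohibited practice"
    HIGH = "permitted subject to mandatory requirements and conformity assessment"
    NOT_HIGH = "only limited transparency obligations"


# Shortened paraphrases of the prohibited practices discussed above.
PROHIBITED_PRACTICES = {
    "subliminal manipulation",
    "exploitation of vulnerable groups",
    "real-time remote biometric identification for law enforcement",  # narrow exceptions exist
}

# Shortened paraphrases of the high-risk areas listed above.
HIGH_RISK_AREAS = {
    "biometric identification and categorization",
    "critical infrastructure",
    "education and vocational training",
    "employment and worker management",
    "essential private and public services",
    "law enforcement",
    "migration, asylum and border control",
    "administration of justice and democratic processes",
}


def classify(practice: str, area: str) -> RiskTier:
    """Rough, non-authoritative mapping of a described use case to a risk tier."""
    if practice in PROHIBITED_PRACTICES:
        return RiskTier.UNACCEPTABLE
    if area in HIGH_RISK_AREAS:
        return RiskTier.HIGH
    return RiskTier.NOT_HIGH


# Example: a CV-screening tool used in recruitment would fall in the high-risk tier.
print(classify("cv screening", "employment and worker management"))  # RiskTier.HIGH
```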
Legislation on Artificial Intelligence must, in any case, be human-centered. The Proposal appears, to a significant degree, to aim at precisely that ideal.
It is important, in any event, that the Proposal makes EU values its priority. It is also significant that it is (correctly) centered on the risks that may arise from AI systems.
Technological developments are rapid indeed, even for those operating in the field, let alone for the rest of us. It is necessary, however, to protect everyone (those in the field and those outside it) from the risks involved in the use of the tools of Artificial Intelligence. The draft Regulation moves (also) in this direction.
A question that will probably be answered by the next generation:
Is it possible to place Artificial Intelligence in a strictly defined framework outside which it will not be able to expand?
I’m afraid not.
Stavros Koumentakis
Managing Partner
P.S. A brief version of this article has been published in MAKEDONIA Newspaper (June 27, 2021).
Disclaimer: the information provided in this article does not (and is not intended to) constitute legal advice. Legal advice can only be offered by a competent attorney and after the latter has taken into consideration all the data relevant to your case that you will provide. See here for more details.