Although artificial intelligence (AI), IT security and cybersecurity have long been discussed in the context of product regulation, especially with regard to product liability, these regulatory aspects have so far hardly found their way into concrete provisions of product law. The EU has therefore recently launched several initiatives to give the digitalisation of the product world a legal framework. In the following, the main legislative projects that could become reality, or at least be advanced, in 2022 are presented. Specifically, this concerns the aspect of cybersecurity within the framework of the planned EU General Product Safety Regulation (hereinafter referred to as “GPSR-E”) (see A.), the proposal for an AI Regulation (see B.), the proposal for a Regulation on liability for the operation of AI systems (see C.) and the reform of Directive 85/374/EEC (the so-called Product Liability Directive) (see D.).
A. Cybersecurity under the EU General Product Safety Regulation
The GPSR-E presented in yesterday’s overview, which is intended to serve as a general part preceding European product safety law, explicitly takes up the aspect of cybersecurity. According to Art. 7(1)(h) GPSR-E, the existence of required cybersecurity features is to be a criterion in assessing the safety of a product. After all, cybersecurity risks can have an impact on the safety of consumers, which is why they must be covered by the safety concept under product safety law. Art. 7(1)(h) GPSR-E is to be understood as establishing minimum legal requirements for cybersecurity. Regulation (EU) 2019/881, which establishes an EU-wide certification framework for the cybersecurity of ICT products, services and processes, does not specify such a minimum level.
The aspect of cybersecurity, which is explicitly mentioned for the first time in product safety law, will be particularly relevant for digital products (such as IoT products). Art. 7(1)(h) GPSR-E will be highly relevant for product safety law as a whole, because the sectoral EU legal acts do not explicitly address cybersecurity. Only the draft EU Machinery Regulation published in April 2021 addresses IT security and cybersecurity as subjects of the essential health and safety requirements, in sections 1.1.9 and 1.2.1 of Annex III.
Further details on the reform of the General Product Safety Directive can be found in our blog post What’s changing in 2022: Product Safety Law.
B. Proposal for an AI Regulation
The EU Commission presented a proposal for an AI Regulation (hereinafter referred to as “AI-Reg-E”) on 21.04.2021. This legal act is intended to define the legal framework for the development, distribution and use of AI systems. The draft is based on the regulatory concept of product safety law – the so-called New Legislative Framework (NLF) – and forms a cross-cutting matter shaped by IT law and product safety law.
The central regulatory subject of this draft is so-called high-risk AI systems, which, in contrast to AI systems posing an unacceptable risk, are generally permissible. According to Art. 3 No. 1 AI-Reg-E, an AI system “means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”. This very broad legal definition is almost boundless and has therefore attracted strong criticism. Building on this regulatory object, the draft lays down general software-related security requirements for high-risk AI systems (cf. Art. 9–15 AI-Reg-E). These requirements are further specified by harmonised technical standards, compliance with which triggers a presumption of conformity, Art. 40 AI-Reg-E. The primary responsibility for conformity with these requirements lies with the provider, i.e. the “natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge”, Art. 3 No. 2 AI-Reg-E. The provider is subject to the most extensive range of obligations, including, among other things:
- Obligation to carry out a conformity assessment procedure, at the end of which the CE marking is affixed, Art. 16(e) and (i), 43, 49 AI-Reg-E
- Obligation to provide instructions, Art. 13(3) AI-Reg-E
- Obligation to establish a quality management system, Art. 16(b) in conjunction with Art. 17 AI-Reg-E
- Obligation to report to the authorities, Art. 16(h) in conjunction with Art. 21 sentence 1 AI-Reg-E
- Obligation to take corrective measures, Art. 16(g) in conjunction with Art. 21 sentence 1 AI-Reg-E
In addition, importers and distributors are also among the addressees of obligations. They are primarily subject to the classic formal inspection obligations as well as cooperation and notification obligations. From the perspective of product safety law, the inclusion of users is new; they, too, are addressees of obligations. For example, they must comply with the instructions set out in the providers’ instructions for use and monitor the operation of the high-risk AI system, Art. 29(1), (4) sentence 1 AI-Reg-E. Furthermore, the draft also assigns users duties of cooperation and reporting, Art. 29(4) sentences 2 and 3 AI-Reg-E.
With this ambitious project, the EU hopes to achieve a “great success”. The draft aims at nothing less than “to preserve the EU’s technological leadership and to ensure that Europeans can benefit from new technologies developed and functioning according to Union values, fundamental rights and principles”. However, a number of the regulatory proposals are subject to criticism, such as the aforementioned broad legal definition of the term “AI system”. Given the remaining need for discussion and coordination, it remains to be seen whether the planned AI Regulation will be adopted in 2022.
C. Proposal for a Regulation on liability for the operation of AI systems
As early as October 2020, the European Parliament made recommendations to the Commission for a regulation on civil liability for the use of artificial intelligence and drafted a proposal for a Regulation on liability for the operation of AI systems. The European Parliament’s proposal provides for strict liability of the operator of so-called high-risk AI systems, which are explicitly listed in a catalogue, Art. 4 No. 1. The liability addressees are both the frontend and the backend operator, both of whom are subject to an insurance obligation, Art. 4 No. 4. With regard to other AI systems, the European Parliament proposes fault-based liability. The operator’s fault is presumed; however, the operator can exonerate himself by invoking one of the following grounds, Art. 8 No. 2:
- The AI-system was activated without his or her knowledge while all reasonable and necessary measures to avoid such activation outside of the operator’s control were taken.
- Due diligence was observed by performing all the following actions: selecting a suitable AI-system for the right task and skills, putting the AI-system duly into operation, monitoring the activities and maintaining the operational reliability by regularly installing all available updates.
Upon request of the operator, the manufacturer of the AI system is subject to a duty of cooperation in order to enable the determination of liability, Art. 8 No. 4.
It can be assumed that the European Parliament’s proposal will be taken up, although changes are to be expected. Since the EU Commission must also be involved in the course of the legislative procedure, it cannot yet be reliably predicted when such a legal act will enter into force.
D. Reform of the Product Liability Directive
In the context of its report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics, the EU Commission identified a need to adapt the Product Liability Directive and, on that basis, initiated a legislative project to adapt the liability rules to the digital age and to developments in the field of AI. According to the Commission’s report, the challenges arising from the digitalisation of products are the complexity of products, services and the value chain, connectivity and openness, as well as autonomy and opacity. The reform of the Product Liability Directive is to address, among other things, the classification of stand-alone software as a product, the liability implications of product changes through software updates, and the manufacturer’s liability for autonomously evolving AI systems and for reconditioned used products.
Whether and to what extent the reform will become reality is currently still open, as a whole series of regulatory proposals, and in particular the necessity of certain rules, are still under discussion. In any case, implementing the reform will still take some time: the public consultation only recently ended on 10.01.2022, and adoption by the Commission is planned for the third quarter of 2022.
Do you have any questions about this news, or would you like to discuss it with the author? Please contact: Dr. Gerhard Wiebe