A. Cyber Resilience Act (CRA)
On 30.11.2023, the EU Parliament, the Council and the Commission reached a provisional agreement on the proposal for a regulation on horizontal cybersecurity requirements for products with digital elements (Cyber Resilience Act – CRA). The CRA sets out binding cybersecurity requirements for placing hardware and software on the market. It is intended to ensure that networked products such as toys, fridges or televisions fulfil certain cybersecurity requirements when they are placed on the market. There have been some changes compared to the Commission’s first draft (see our news from 29.10.2022).
Following the provisional agreement, the regulation is now expected to be adopted in the near future. Once it enters into force, the companies affected will have 36 months (instead of the previously envisaged 24 months) to implement the requirements. A shorter transitional period of 21 months, however, applies to the obligation to report actively exploited vulnerabilities and security incidents.
I. Material scope of application
The CRA applies to all products that are either directly or indirectly connected to another device or the internet; according to Art. 3 para. 1 CRA, it covers hardware and software equally. However, certain products for which the cybersecurity requirements are already laid down in existing EU legislation, e.g. for medical devices, aviation or vehicles, are excluded from the scope of application.
The impact on open-source software was highly controversial. It was feared that the CRA would hinder the development of open-source software through requirements that would be almost impossible to fulfil, especially for smaller, non-commercial software developers. The compromise addresses these concerns: non-commercial projects, in particular open-source software (insofar as it is not part of a commercial project), are excluded from the scope of application. The activities of non-profit organisations that generate income but reinvest it in the software are also considered non-commercial.
Pure cloud solutions and cloud service models (such as Software-as-a-Service – SaaS) that do not support the functionality of a product with digital elements, or that have been designed or developed outside the manufacturer’s responsibility, are not covered by the CRA. In this area, Directive (EU) 2022/2555 on measures for a high common level of cybersecurity across the Union (NIS 2 Directive) remains decisive.
II. SMEs within the personal scope of application
Additional support measures have been newly agreed for micro, small and medium-sized enterprises that are not excluded from the personal scope of the CRA. The member states can, for example, develop programmes in the areas of training, awareness-raising, information dissemination and conformity assessment procedures. In addition, the member states can define standardised language rules for communication in the event of security incidents and for the relevant documentation (e.g. technical documentation) in order to reduce the administrative burden for manufacturers.
III. Critical products with digital elements
A product covered by the CRA must fulfil the basic cybersecurity requirements set out in Art. 5 CRA in conjunction with Annex I of the CRA. The conformity assessment procedure relevant for compliance with these substantive requirements is generally carried out by the manufacturer itself in accordance with Art. 24 CRA. This is different for so-called critical products with digital elements within the meaning of Art. 6 CRA. A product falls into this category if its core function corresponds to one of the applications listed exhaustively in Annex III of the CRA, which distinguishes between Class I and Class II products. For Class I products, the manufacturer can demonstrate conformity by fully applying harmonised standards within the meaning of Art. 18 CRA; otherwise, it must carry out one of the procedures listed in Art. 24 No. 2 CRA with the involvement of a notified body. Class II products, on the other hand, must always undergo a conformity assessment procedure involving a notified body. Critical Class I products now also include products such as smart home systems, internet-enabled toys and wearables. Operating systems (e.g. for servers, desktops and mobile devices), however, are no longer Class I products.
IV. Time period for security updates
The manufacturer bears primary responsibility for product conformity. One expression of this is the manufacturer’s obligation to provide security updates throughout the entire service life of the product. This support must be provided for a period of at least five years; a shorter period is permissible only for products that are expected to have a shorter service life.
V. Reports of actively exploited vulnerabilities or security incidents
Actively exploited vulnerabilities and security incidents must be reported simultaneously to the competent national authorities (the so-called Computer Security Incident Response Teams – CSIRTs) and to ENISA (the European Union Agency for Cybersecurity). A joint reporting platform is to be set up for this purpose. In exceptional circumstances, however, manufacturers should be able to request that the competent CSIRT refrain, for the time being, from forwarding a report to other CSIRTs or to ENISA.
B. AI Act
After tough negotiations, the EU Parliament and the Council reached a political agreement on the Artificial Intelligence Act (AI Act) on 08.12.2023. It is therefore very likely that the AI Act will be adopted in 2024, even if its requirements will not yet apply this year due to transitional periods. The agreed text of the regulation, which is not yet available, must now be formally adopted by the EU Parliament and the Council. The Parliament’s Internal Market and Civil Liberties Committees will vote on the AI Act at one of their upcoming meetings.
With the AI Act, the EU aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI applications. At the same time, the AI Act is intended to promote innovation and make Europe a pioneer in this field. With its risk-based approach, the EU is thus attempting a “major breakthrough”.
I. Prohibited AI applications
Due to potential threats to civil rights and democracy, the following AI applications should be banned altogether:
- biometric categorisation systems that use sensitive characteristics (e.g. political, religious or philosophical beliefs, sexual orientation, race)
- the untargeted scraping of facial images from the internet or from video surveillance footage to create facial recognition databases
- emotion recognition in the workplace and in educational institutions
- social scoring based on social behaviour or personal characteristics
- AI systems that manipulate human behaviour in order to circumvent free will
- AI that exploits people’s weaknesses (age, disability, social or economic situation)
However, there are to be exceptions to the ban on the use of biometric identification systems in publicly accessible areas for law enforcement purposes.
II. Requirements for high-risk AI systems
Clear requirements are to be established for AI systems categorised as high-risk due to their significant potential for harm to health, safety, fundamental rights, the environment, democracy and the rule of law. Such high-risk AI systems include, for example, systems used to influence election outcomes and voter behaviour; they are therefore a central regulatory object of the AI Act. Among other things, a mandatory fundamental rights impact assessment must be carried out, which also applies to the insurance and banking sectors. In addition, citizens have the right to complain about high-risk AI systems and to receive explanations of decisions based on such systems that affect their rights. Providers of high-risk AI systems must therefore implement an effective complaints management system.
III. Specifications for general-purpose AI systems
General-purpose AI systems (GPAI) and the GPAI models on which they are based must also fulfil certain requirements. Among other things, they must meet transparency obligations, which include drawing up technical documentation, complying with EU copyright law and publishing detailed summaries of the content used to train the AI.
Stricter requirements apply to GPAI models posing a high systemic risk. If certain criteria are met, such models must be evaluated, systemic risks must be assessed and mitigated, the Commission must be informed of serious incidents, cybersecurity must be ensured and energy efficiency must be reported. In addition, until harmonised standards are published, codes of conduct and practical guidelines can be relied on to comply with the AI Act.
IV. Promotion of AI innovations
A central criticism of the AI Act was that overregulation would inhibit innovation. So-called regulatory sandboxes and real-world testing, set up by national authorities to develop and train innovative AI before it is placed on the market, are intended to address this criticism.
V. Sanctions
The AI Act provides for sanctions that the member states must transpose into national law. Depending on the infringement and the size of the company, non-compliance is to be punishable by fines ranging from EUR 7.5 million or 1.5% of global annual turnover up to EUR 35 million or 7% of global annual turnover.
C. Data protection and cybersecurity in radio equipment law
Delegated Regulation (EU) 2022/30, published at the beginning of 2022, supplements Directive 2014/53/EU (RED) and introduces data protection and cybersecurity requirements for certain radio equipment for the first time. Radio equipment that can itself communicate over the internet (whether directly or via other devices) must not harm the network or its operation, nor misuse network resources in a way that would result in an unacceptable degradation of service (Art. 1 para. 1 Regulation (EU) 2022/30). In addition, such radio equipment must have security features ensuring that the personal data and privacy of users and subscribers are protected, insofar as it can process personal data within the meaning of Art. 4 para. 2 Regulation (EU) 2016/679 or traffic data or location data within the meaning of Art. 2 lit. b), c) Directive 2002/58/EC (Art. 1 para. 2 lit. a) Regulation (EU) 2022/30).
Originally, the regulation was to apply from 01.08.2024. The date of application has since been postponed by one year – to 01.08.2025 – because more time is needed to develop the corresponding harmonised standards. Once the CRA enters into force, the requirements introduced by Regulation (EU) 2022/30 are expected to be incorporated into the CRA.
D. New Product Liability Directive and AI Liability Directive
On 11.12.2023, negotiators from the EU Parliament and the Council reached an informal agreement on the new Product Liability Directive. The Product Liability Directive, which will replace the outdated Directive 85/374/EEC, is intended in particular to take account of new technologies, the digitalisation of distribution channels and products, and the associated new risks for consumers. For example, the definition of a product has been expanded so that strict liability can also apply to software that causes damage. However, in order not to hinder innovation, these rules are not to apply to open-source software developed or provided outside a commercial activity.
The new Product Liability Directive extends strict liability overall. The circle of potential liability addressees is enlarged (to include authorised representatives, fulfilment service providers and providers of online platforms), and data used for non-professional purposes will in future also count among the protected legal interests. Furthermore, in exceptional cases in which the symptoms of a health impairment emerge only slowly, the limitation period is to be extended to 25 years; an injured person whose legal proceedings were initiated within this period can then still receive compensation even after it has expired.
In addition to substantive liability rules, the new Product Liability Directive also contains procedural requirements, since it also aims to make it easier for consumers to obtain compensation. For example, the defectiveness of a product can be presumed if the claimant faces excessive difficulties in providing evidence, particularly due to technical or scientific complexity. In addition, claimants can request that the court order the defendant economic operator to disclose the “necessary and appropriate” documents.
The fate of the AI Liability Directive, however, is uncertain. This directive does not affect substantive product liability law, but merely serves to facilitate the assertion of non-contractual, fault-based civil claims for compensation for damage caused by an AI system. Given its close link to the AI Act, the further course of the legislative process remains to be seen; at least officially, the project has not yet been “buried”.
Do you have any questions about this news, or would you like to discuss it with the author? Please contact: Dr. Gerhard Wiebe