News

Article 4 of the AI Act: obligations and opportunities for companies when dealing with AI literacy

16.01.2025

When Article 4 of the AI Regulation (or “AI Act”) begins to apply on 2 February 2025, providers and deployers of AI systems will be faced with a major challenge: ensuring that all the people dealing with the operation and use of “their” AI systems are sufficiently skilled. Article 4 of the AI Act, which aims to build up knowledge, experience and awareness, underscores the growing importance of companies dealing with artificial intelligence in a responsible and informed way. But how can companies comply with this obligation while also taking advantage of the opportunities associated with competent use of AI?

What is “AI literacy”?

Art. 4 of the AI Act, which will apply from 2 February 2025 in accordance with Art. 113 lit. a AI Act, reads as follows:

“Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.”

Article 4 of the AI Act thus not only requires employees to be trained and qualified, but also states that the specific context of the AI systems used by companies and the target groups for whom these systems are used must be considered. The obligation is not limited to high-risk AI systems. However, the article itself does not define what “AI literacy” is.

Instead, AI literacy is defined in Article 3(56) of the AI Act as:

“skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause”.

In summary, “AI literacy” is the ability to use AI in an informed manner and to understand both its potential and its risks. This requirement goes beyond technical knowledge and also encompasses the social, ethical and legal implications of using AI.

What measures make sense?

This means that from 2 February 2025 companies will be obliged to put in place internal structures and measures enabling AI-related skills to be sufficiently developed. To achieve this, the following measures in particular should be implemented:

  • Preparation of internal guidelines and standards: These should clearly define best practices, ethical principles and compliance requirements and provide guidance for all those involved.
  • Regular further education and training: Continuous training programmes will be necessary so that employees gain a basic understanding and can address current developments and ethical challenges. Technical content should therefore be taught alongside ethical issues, encouraging employees to engage in critical thinking. Training should build on employees’ existing experience and knowledge so that individual learning needs can be specifically addressed. The co-determination rights of works councils under section 96 and following of the German Works Constitution Act (Betriebsverfassungsgesetz – BetrVG) and other rights will have to be considered.
  • Practice-oriented learning in interdisciplinary teams: Communication between disciplines such as IT, ethics and law can promote a broader understanding of the opportunities and risks posed by AI systems.
  • The measures designed to build up expertise should also be regarded as part of higher-level AI compliance, which will be structured differently depending on the existing compliance structures, the company’s risk profile and the areas in which AI is used (see our overview on establishing AI compliance). As data privacy plays a significant role in the use of AI, links between data privacy and AI compliance should be established both within the compliance structure and during training. This ensures compliance with legal requirements, enables internal educational initiatives to be coordinated (while respecting relevant co-determination rights) and promotes a consistent AI strategy.

When structuring the measures, it is always necessary to take into account the specific context in which the AI systems are used and the employees’ requirements and previous knowledge.

In any case, basic training in AI skills should be in place by 2 February 2025 in order to avoid risks and take advantage of opportunities.

Risks from non-compliance with Article 4 of the AI Act

Although at first sight Article 4 of the AI Act reads more like an appeal for action than an enforceable requirement, it is by no means meaningless from a legal perspective.

  • In liability proceedings, the failure to take appropriate measures can be considered a breach of a duty of care. Especially in cases involving malfunctions or damage caused by AI systems, courts could examine whether the company implemented appropriate training and qualification measures.
  • In terms of employment law, both the rights and the duties of employees affected by the use of AI have to be considered in order to achieve AI literacy. The provisions may also influence dismissals based on a lack of AI literacy. Works councils will have to be involved in any training seminars implementing the obligations under Article 4 of the AI Act and in other measures, in accordance with section 96 and following of the German Works Constitution Act.

Opportunities for companies

Besides minimising legal risks, companies that set about developing expertise at an early stage can also gain significant competitive advantages. Responsible handling of AI systems boosts the confidence of customers, partners and investors. At the same time, it opens up new possibilities for developing innovative products and services which meet ethical and social standards.

Bottom line: AI literacy as the key to success

Article 4 of the AI Act sets companies the task of systematically developing the skills of their employees. This requires not only technical measures but also a change in corporate culture to ensure responsible and informed use of AI. Companies that take on this challenge proactively position themselves as pioneers in an increasingly digitalised and regulated economy, laying the foundations for their long-term success.

Over the next few weeks, we will provide a step-by-step outline of the aspects under employment and co-determination law which have to be borne in mind. Our next article on this topic will take a look at the basics and limits of a right to further training on AI.