Understanding and complying with the AI Act

How to prepare for the European AI Act: a practical guide for legal directors

In a world where artificial intelligence is rapidly transforming our organizations, regulatory compliance is becoming a major strategic issue. The European AI Act, the new legislative framework that will come fully into force in 2026, represents a decisive turning point for all companies using AI systems. As a legal manager, you find yourself on the front line of this daunting challenge. How do you navigate this new regulatory reality while enabling your organization to innovate?

This practical step-by-step guide will help you prepare for the AI Act, with concrete strategies and effective tools to turn this regulatory constraint into a strategic opportunity.

1. Understanding the fundamentals of the European AI Act

The AI Act represents the world's first comprehensive legislation specifically dedicated to artificial intelligence. Adopted in March 2024, this regulation establishes a harmonized framework for the development, marketing and use of AI systems within the European Union.

The particularity of this regulation lies in its risk-based approach. Obligations vary considerably depending on the category in which your AI system falls:

  • Unacceptable-risk systems: banned outright (cognitive manipulation, social scoring, etc.)
  • High-risk systems: subject to strict requirements (conformity assessment, technical documentation, etc.)
  • Limited-risk systems: subject to transparency obligations
  • Minimal-risk systems: few or no specific constraints
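For teams that keep an internal register of their AI use cases, the four tiers above can be captured in a simple lookup. The sketch below is purely illustrative: the use-case names and the helper are hypothetical, not an official classification.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict conformity requirements
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # few or no specific constraints

# Hypothetical internal register mapping use cases to tiers
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "cv_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def tier_for(use_case: str) -> RiskTier:
    """Look up the risk tier recorded for a registered use case."""
    return USE_CASE_TIERS[use_case]
```

The actual tier of any given system must of course be determined by legal analysis; a register like this only records the outcome of that analysis.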

The penalties provided for are particularly dissuasive, reaching up to 35 million euros or 7% of total worldwide annual turnover, whichever is higher, for the most serious infringements.

"A fine-grained understanding of this categorization is the essential first step in developing your compliance strategy," stresses a digital law expert at a recent AI Act conference.

2. Map your existing and future AI systems

Before you can implement an effective compliance strategy, you need to have a clear and comprehensive view of all AI systems in use or under development within your organization.

This mapping must include:

  • Complete inventory of deployed AI solutions
  • Suppliers and partners involved
  • Data used for training and operation
  • The purposes and use cases of each system
  • Departments and teams involved
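The inventory items above translate naturally into a structured record per system. Here is a minimal sketch; the field names and the example entry are hypothetical, and a real inventory would likely live in a governance tool rather than in code.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an internal AI-system inventory (illustrative fields only)."""
    name: str
    supplier: str
    purpose: str
    training_data_sources: list[str] = field(default_factory=list)
    owning_departments: list[str] = field(default_factory=list)
    in_production: bool = True

# Hypothetical example entry
record = AISystemRecord(
    name="cv_screening_v2",
    supplier="internal",
    purpose="Pre-filter job applications for the HR team",
    training_data_sources=["internal_hr_archive"],
    owning_departments=["HR", "Data Science"],
)
```

Keeping one such record per system makes the later steps (risk classification, documentation, contract review) much easier to drive systematically.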

To carry out this exercise effectively, interdepartmental collaboration is essential. Organize workshops bringing together legal, IT, data science and user business teams to ensure that your mapping is complete.

A centralized governance tool like the one offered by Cleyrop can greatly facilitate this process by providing complete visibility over all your data assets and AI systems in a unified catalog.

3. Assess risk levels according to AI Act classification

Once you've mapped your systems, the next step is to determine which risk category each of your AI systems falls into.

For high-risk systems, which will be subject to the most stringent obligations, pay particular attention to the following areas:

  • Systems used to evaluate job applicants
  • AI solutions involved in decisions on access to essential services
  • Systems used to assess solvency
  • AI applications in health and safety

For each system identified as high-risk, you must implement:

  • A risk management system
  • Comprehensive technical documentation
  • Automatic activity logs
  • Appropriate human supervision
  • High levels of robustness, precision and cyber security
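Of the obligations above, automatic activity logging is the most directly technical. A minimal structured-logging sketch might look like the following; the function name and log fields are illustrative assumptions, not a format prescribed by the regulation.

```python
import json
import logging
from datetime import datetime, timezone
from typing import Optional

logger = logging.getLogger("ai_audit")
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler())

def log_decision(system: str, input_summary: str, output: str,
                 reviewer: Optional[str] = None) -> dict:
    """Emit one structured audit-log entry for an AI-assisted decision."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "input_summary": input_summary,
        "output": output,
        "human_reviewer": reviewer,  # who exercised human oversight, if anyone
    }
    logger.info(json.dumps(entry))
    return entry
```

In practice such entries would be shipped to tamper-evident storage with a retention policy, so that they can be produced in the event of an audit.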

"Risk assessment is not a one-off exercise, but an ongoing process that needs to be integrated into the lifecycle of your AI systems," reminds a compliance manager at a major French company that has already begun the process of achieving compliance.

4. Implement governance adapted to the requirements of the AI Act

Compliance with the AI Act requires the establishment of robust and appropriate governance. This governance must effectively oversee the development, deployment and use of AI systems within your organization.

Here are the key elements to put in place:

  • An AI ethics committee with representatives from various functions (legal, IT, business, CSR)
  • Validation procedures for new AI projects
  • An AI-specific risk assessment framework
  • Standardized documentation processes
  • Quality control mechanisms for training data

The appointment of an AI compliance officer, reporting to the legal department but working closely with the technical teams, can be particularly relevant for organizations using many high-risk systems.

A platform like Cleyrop's, which integrates data governance and AI model traceability features, is a valuable asset to support this governance and demonstrate your compliance in the event of an audit.

5. Document your AI systems in compliance with regulatory requirements

The AI Act imposes particularly stringent documentation requirements, especially for high-risk systems. This documentation must be sufficiently detailed to demonstrate your systems' compliance with the requirements of the regulation.

Essential elements to be documented include:

  • System design and technical specifications
  • The training data used and its provenance
  • Validation and testing methods
  • Risk management measures implemented
  • Human oversight procedures
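One pragmatic way to keep this documentation auditable is a per-system manifest pointing at each required artifact, plus a check for gaps. The structure, keys and paths below are hypothetical illustrations, not a format defined by the AI Act.

```python
# Hypothetical documentation manifest for one high-risk system
documentation_manifest = {
    "system": "cv_screening_v2",
    "design_and_specifications": "docs/cv_screening/architecture.md",
    "training_data": {
        "sources": ["internal_hr_archive"],
        "provenance_notes": "docs/cv_screening/data_provenance.md",
    },
    "validation_and_testing": "docs/cv_screening/test_reports/",
    "risk_management_measures": "docs/cv_screening/risk_register.md",
    "human_oversight_procedures": "docs/cv_screening/oversight.md",
}

REQUIRED_SECTIONS = [
    "design_and_specifications",
    "training_data",
    "validation_and_testing",
    "risk_management_measures",
    "human_oversight_procedures",
]

def missing_sections(manifest: dict) -> list[str]:
    """Return the required section keys that are absent or empty."""
    return [key for key in REQUIRED_SECTIONS if not manifest.get(key)]
```

Running such a check in a pipeline gives an early warning when a system's documentation falls out of date.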

To facilitate this documentation exercise, consider adopting specialized tools to centralize and standardize your technical documentation. Data lakehouse solutions like the one offered by Cleyrop provide cataloguing and traceability functionalities that considerably simplify this task.

"Documentation is not just a legal obligation, it's also a strategic tool that enables you to better understand and control your AI systems," explains a legal director at a CAC 40 company that anticipated compliance with the AI Act.

6. Train your teams in the challenges of the AI Act

Compliance with the AI Act cannot rest solely on the shoulders of the legal department. It requires awareness-raising and training for all employees involved in the lifecycle of AI systems.

Your training plan should focus on:

  • Development teams, to integrate compliance requirements right from the design stage (compliance by design)
  • Product managers, to assess risks upstream of projects
  • Sales teams, to communicate system capabilities and limitations accurately
  • End users, to ensure appropriate human oversight

Various formats can be offered: webinars, practical workshops, in-house documentation or e-learning modules. The key is to adapt the content to the technical level and responsibilities of each audience.

Several organizations now offer certified AI Act training courses, which can be a wise investment for key members of your legal team and your data scientists.

7. Adapt your contracts and relations with suppliers

The AI Act will have a significant impact on your contractual relationships, particularly with your AI solution providers. A thorough review of your existing and future contracts is in order.

Major points of attention include:

  • Compliance clauses specific to the AI Act
  • Distribution of responsibilities for technical documentation
  • Transparency obligations on training data
  • Guarantees of human oversight
  • Audit and control mechanisms

For new contracts, draw up standard clauses adapted to the various risk categories. For existing contracts, draw up an action plan for their gradual updating, prioritizing those linked to high-risk systems.

"In this evolving regulatory context, give preference to partners who demonstrate a thorough understanding of AI Act issues and who have already integrated these requirements into their solutions," recommends an expert in technology contract law.

This is precisely the approach taken by Cleyrop, whose solutions have been designed from the outset with particular attention to regulatory compliance and data sovereignty.

8. Implement a progressive, pragmatic compliance strategy

Given the scale of the changes required by the AI Act, a gradual, pragmatic approach is essential. It would be unrealistic to aim for total and immediate compliance across all your AI systems.

Here's a three-phase roadmap you could adopt:

Phase 1 (immediate):

  • Finalize the mapping of your AI systems
  • Identify priority high-risk systems
  • Train key teams in AI Act requirements

Phase 2 (6-12 months):

  • Bring your high-risk systems into compliance
  • Review your contracts with strategic suppliers
  • Deploy your AI governance framework

Phase 3 (12-18 months):

  • Extend compliance to all your systems
  • Automate documentation and control processes
  • Set up regular audits

This sequential approach will enable you to focus your resources on the most critical areas while gradually building your AI compliance maturity.

"The important thing is not to be perfectly compliant from day one, but to demonstrate a serious commitment and steady progress towards compliance," stresses a European regulator at a recent conference on the AI Act.

9. Turning regulatory constraints into competitive advantage

Beyond mere compliance, the AI Act can be seen as an opportunity for differentiation and responsible innovation. Organizations that know how to integrate these regulatory requirements into their overall strategy will derive a definite competitive advantage.

Here's how to turn this constraint into an opportunity:

  • Promote your compliance to your customers and partners as a guarantee of reliability and ethics
  • Integrate the principles of trusted AI into your value proposition
  • Develop in-house expertise that can be leveraged in future developments
  • Take an active part in industry discussions on the interpretation and application of the AI Act

Solutions like those offered by Cleyrop, which natively integrate the principles of data sovereignty and algorithm transparency, enable you to reconcile innovation and compliance without compromise.

"Companies that see the AI Act as a mere regulatory constraint will be missing out on a major opportunity to strengthen the trust of their stakeholders," says an innovation director at a major French group.

10. Keep abreast of regulatory changes and clarifications

The AI Act is a living regulation that will continue to evolve as guidelines, enforcement decisions and case law are issued. Keeping abreast of these developments is essential to maintaining your compliance over the long term.

To do this:

  • Subscribe to digital and AI law newsletters
  • Participate in sectoral working groups on the interpretation of the AI Act
  • Follow the publications of the European AI Office, which will oversee the application of the regulation
  • Exchange regularly with your counterparts in other organizations

Consider also relying on technology partners such as Cleyrop, who integrate a regulatory watch into their offering and ensure that their solutions evolve in line with new requirements.

The AI Act represents a major challenge, but also a tremendous opportunity to structure your approach to artificial intelligence. By adopting a methodical, step-by-step approach, you can not only ensure your organization's compliance, but also boost the confidence of your stakeholders and consolidate your market position.

Ready to turn this regulatory constraint into a strategic advantage? Our experts are at your disposal to guide you through this process and show you how our solutions can help you comply with the European AI Act. Contact us today for a personalized assessment of your needs.

France, future leader in AI?

It's been a busy week for AI from the public authorities' point of view: after the birth of the European AI Act, the AI Commission's report was published on March 13, 2024 and submitted to the French President.

A report that takes stock of the current situation and makes 25 recommendations for the country to take advantage of AI opportunities while controlling the risks.

We already knew: France is lagging behind in the adoption of AI... and that's not good news.

The report therefore aims first and foremost to "de-demonize AI without idealizing it". It stresses that the benefits of AI will not be automatic, but will depend on political choices and collective commitment.

Let's start with the facts:

Significant growth potential

According to the report, AI could have a major economic impact. It could double France's annual growth through the automation of certain tasks. After 10 years, GDP could increase by 250 to 420 billion euros, roughly the weight of today's entire industrial sector.

Beyond this transitory automation effect, AI also seems to accelerate innovation in a more lasting way. By facilitating the emergence of new products, services and business models, it could bring about a permanent increase in the growth rate.

However, these gains are not guaranteed. Recent history shows that France has benefited little from the digital revolution, unlike the United States. To take advantage of AI, appropriate public policies will be needed, in terms of innovation, industry, competition, training, etc.

French companies lag behind

To date, France and Europe are clearly lagging behind in AI. Investment is 3 to 4 times lower than in the United States, on a comparable wealth basis. Only a handful of European companies are positioned in the AI value chain, and none of them are world leaders.

This delay poses a risk of economic downgrading. On the one hand, France could miss out on the AI economy and see its value captured by other countries. On the other hand, existing companies could lose competitiveness to new players.

To close this gap, the report recommends massively redirecting savings towards innovation, with the creation of a €10 billion "France IA" fund. It also recommends facilitating access to data, particularly personal data, making France a major hub for computing power, and supporting an open ecosystem of AI developers.

Contrasting effects on employment

As far as employment is concerned, the report estimates that AI will have an overall positive effect in France, despite uncertainties. On the one hand, the automation enabled by AI will eliminate some jobs, particularly those consisting of routine tasks. But on the other hand, AI should also create jobs in new professions as well as in existing ones.

An empirical study conducted on French companies shows that those who adopt AI see their total employment increase more than others. This positive effect is explained by the fact that AI replaces tasks, not jobs in their entirety. Only 5% of jobs can be directly replaced by AI.

However, this effect is not uniform. Certain administrative and commercial professions seem more exposed to job cuts. And self-employed workers performing easily automatable tasks could face increased competition from AI.

Beyond the effect on employment volume, AI could also widen inequalities. Companies that adopt AI tend to hire more highly skilled and technical profiles, which are better paid. But conversely, AI also seems to benefit the least skilled or productive workers initially.

To support these transformations, the report stresses the importance of initial and continuing training. It recommends investing in observation and research into the impact of AI on employment. Social dialogue is also seen as essential to building AI uses in a partnership-based way.

Impact on daily life

A technology that's already very present

Beyond the economic sphere, AI is increasingly present in our daily lives. According to a survey, 55% of French people say they are familiar with ChatGPT one year after its launch. But AI applications go far beyond that: facial recognition, translation, content recommendation, voice assistants and more.

This omnipresence arouses both fascination and fear in public opinion. 77% of French people see AI as a real revolution, but 68% are in favor of a pause in its development. This ambivalence is nothing new. In the past, many innovations (trains, electricity, etc.) have aroused fears, sometimes unfounded, sometimes justified.

To promote the acceptability of AI, the report calls for educational work and public debate. It recommends launching a vast plan to raise awareness and train the nation, drawing in particular on education and research.

Increasingly present personal assistants

Among the consumer applications of AI, voice assistants like Siri or Alexa are having a growing impact on our daily lives. They enable many tasks to be carried out without human intervention: listening to music, obtaining information, controlling connected objects, etc.

In the field of customer service, conversational agents are also developing rapidly. They are capable of answering basic questions in a fluid, natural way. Their deployment enables companies to reduce costs and improve service availability.

In the future, personal assistants are set to become increasingly intelligent and autonomous. They could become true everyday companions, capable of learning our preferences and anticipating our needs. Their mode of interaction should also evolve towards more natural, integrated interfaces.

Impacts on mobility and health

Two areas where AI could have a major impact are mobility and healthcare. The development of autonomous vehicles promises to radically transform the way we travel. It could reduce accidents, smooth traffic flow, facilitate parking and even reorganize urban space.

In healthcare, AI is opening up new perspectives in diagnostics, personalized medicine, epidemiology and prevention. Medical decision-support tools are being developed, capable, for example, of detecting cancers on the basis of imagery. Ultimately, AI could enable continuous, personalized monitoring of each patient.

However, these innovations also raise ethical and liability issues. They will require us to adapt our legal and insurance frameworks. The protection of highly sensitive health data will be a major challenge. The report calls for a societal debate on these issues.

Energy-hungry technology

Another challenge for AI is its environmental impact. Training large-scale AI models consumes large amounts of energy. According to one estimate, AI could consume between 85 and 134 TWh of electricity worldwide by 2027, equivalent to the consumption of Sweden.

This consumption is linked to the computing power required, which relies on energy-hungry processors. Their production also has an environmental impact, due to the extraction of rare materials. However, processors dedicated to AI represent only a tiny fraction of global production.

Faced with this challenge, the report calls for France to become a pioneer in sustainable AI. It recommends greater transparency on the environmental impact of models, directing research towards more sober solutions, and mobilizing AI itself to accelerate the ecological transition.

The 25 recommendations, 7 of which the report flags as priorities:

  1. Launch an AI awareness and training plan for the nation to create the conditions for collective ownership of the issues.
  2. Invest massively in digital companies and business transformation, notably via the creation of a €10 billion "France & AI" fund, to support the French AI ecosystem.
  3. Make France and Europe a major center of computing power, in the short and medium term.
  4. Transform the approach to personal data so as to continue protecting it while facilitating innovation.
  5. Promote French culture by providing access to cultural content while respecting intellectual property rights.
  6. Assert the principle of an "AI exception" in public research to boost its attractiveness.
  7. Structure a coherent diplomatic initiative aimed at founding global AI governance.
  8. Generalize AI deployment across all higher education courses and familiarize secondary school students with it.
  9. Invest in continuing vocational training and AI-focused training schemes for the workforce.
  10. Make social dialogue and co-construction the cornerstone of AI use.
  11. Equip public servants to transform administration with AI.
  12. Deliver better care with AI by giving more time back to caregiving.
  13. Deliver better education with AI through individualized student support.
  14. Build sovereign computing capabilities.
  15. Secure access to quality data.
  16. Attract talent to build the technologies and uses of tomorrow.
  17. Deploy AI massively across the economy.
  18. Build the international AI governance that is lacking today.
  19. Develop a capability to evaluate AI systems in France.
  20. Avoid dominant competitive positions.
  21. Facilitate the training of AI models while respecting intellectual property rights.
  22. Demand greater transparency on the environmental impact of AI models.
  23. Direct research towards more sober AI solutions.
  24. Mobilize AI itself to accelerate the ecological transition.
  25. Create a "1% AI" solidarity mechanism for developing countries.

What do we think?

While the report and the definition of the issues surrounding the AI revolution already underway are accurate, the 25 (!) recommendations have an incantatory air that can leave one dreaming...

And unsurprisingly, the temptation to regulate what doesn't yet exist is always present in the subtext, with, let's say... uncertain economic impacts, as already seen in the recent past (hello, GDPR).

Nevertheless, let's remain optimistic: as the report points out, France has a number of strengths, but must act quickly and decisively if it is not to fall behind.

A certain band of gorillas 🦍 is already at work 😁.

And what do you think? We'd love your opinion.