
Understanding and complying with the AI Act
How to prepare for the European AI Act: a practical guide for legal directors
In a world where artificial intelligence is rapidly transforming our organizations, regulatory compliance is becoming a major strategic issue. The European AI Act, the new legislative framework that becomes fully applicable in 2026, represents a decisive turning point for all companies using AI systems. As a legal manager, you find yourself on the front line of this daunting challenge. How do you navigate this new regulatory reality while enabling your organization to innovate?
This practical step-by-step guide will help you prepare for the AI Act, with concrete strategies and effective tools to turn this regulatory constraint into a strategic opportunity.
1. Understanding the fundamentals of the European AI Act
The AI Act represents the world's first comprehensive legislation specifically dedicated to artificial intelligence. Adopted in March 2024, this regulation establishes a harmonized framework for the development, marketing and use of AI systems within the European Union.
The particularity of this regulation lies in its risk-based approach. Obligations vary considerably depending on the category in which your AI system falls:
- Unacceptable-risk systems: prohibited outright (cognitive manipulation, social scoring, etc.)
- High-risk systems: subject to strict requirements (conformity assessment, technical documentation, etc.)
- Limited-risk systems: subject to transparency obligations
- Minimal-risk systems: few or no specific constraints
The penalties provided for are particularly dissuasive, reaching up to 35 million euros or 7% of annual worldwide turnover, whichever is higher, for the most serious infringements.
"A fine-grained understanding of this categorization is the essential first step in developing your compliance strategy," stresses a digital law expert at a recent AI Act conference.
2. Map your existing and future AI systems
Before you can implement an effective compliance strategy, you need to have a clear and comprehensive view of all AI systems in use or under development within your organization.
This mapping must include:
- Complete inventory of deployed AI solutions
- Suppliers and partners involved
- Data used for training and operation
- The purposes and use cases of each system
- Departments and teams involved
To carry out this exercise effectively, interdepartmental collaboration is essential. Organize workshops bringing together legal, IT, data science and user business teams to ensure that your mapping is complete.
A centralized governance tool like the one offered by Cleyrop can greatly facilitate this process by providing complete visibility over all your data assets and AI systems in a unified catalog.
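To make the exercise concrete, the inventory fields listed above can be captured as a simple structured record that every department fills in the same way. The sketch below is purely illustrative; the field names and the example entry are hypothetical, not prescribed by the regulation or by any particular tool:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in the organization-wide AI system inventory (illustrative)."""
    name: str                     # internal name of the deployed AI solution
    supplier: str                 # vendor or partner providing the system
    purpose: str                  # intended use case of the system
    training_data: list[str] = field(default_factory=list)  # data sources used
    departments: list[str] = field(default_factory=list)    # teams involved
    risk_category: str = "unclassified"  # set later, during the AI Act assessment

# Hypothetical example entry: a recruitment-screening tool.
inventory = [
    AISystemRecord(
        name="CV screening assistant",
        supplier="ExampleVendor",
        purpose="Pre-filter job applications",
        training_data=["historical hiring records"],
        departments=["HR", "Legal"],
    ),
]
```

Keeping the risk category empty at this stage reflects the order of the steps in this guide: inventory first, classification second.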
3. Assess risk levels according to AI Act classification
Once you've mapped your systems, the next step is to determine which risk category each of your AI systems falls into.
For high-risk systems, which will be subject to the most stringent obligations, pay particular attention to the following areas:
- Systems used to evaluate job applicants
- AI solutions involved in decisions on access to essential services
- Systems used to assess creditworthiness
- AI applications in health and safety
For each system identified as high-risk, you must implement:
- A risk management system
- Comprehensive technical documentation
- Automatic activity logs
- Appropriate human supervision
- High levels of accuracy, robustness and cybersecurity
"Risk assessment is not a one-off exercise, but an ongoing process that needs to be integrated into the lifecycle of your AI systems," reminds a compliance manager at a major French company that has already begun the process of achieving compliance.
4. Implement governance adapted to the requirements of the AI Act
Compliance with the AI Act requires the establishment of robust and appropriate governance. This governance must effectively oversee the development, deployment and use of AI systems within your organization.
Here are the key elements to put in place:
- An AI ethics committee with representatives from various functions (legal, IT, business, CSR)
- Validation procedures for new AI projects
- An AI-specific risk assessment framework
- Standardized documentation processes
- Quality control mechanisms for training data
The appointment of an AI compliance officer, reporting to the legal department but working closely with the technical teams, can be particularly relevant for organizations using many high-risk systems.
A platform like Cleyrop's, which integrates data governance and AI model traceability features, is a valuable asset to support this governance and demonstrate your compliance in the event of an audit.
5. Document your AI systems in compliance with regulatory requirements
The AI Act imposes particularly stringent documentation requirements, especially for high-risk systems. This documentation must be sufficiently detailed to demonstrate your systems' compliance with the requirements of the regulation.
Essential elements to be documented include:
- System design and technical specifications
- Training data used and its provenance
- Validation and testing methods
- Risk management measures implemented
- Human oversight procedures
To facilitate this documentation exercise, consider adopting specialized tools to centralize and standardize your technical documentation. Data lakehouse solutions like the one offered by Cleyrop provide cataloguing and traceability functionalities that considerably simplify this task.
"Documentation is not just a legal obligation, it's also a strategic tool that enables you to better understand and control your AI systems," explains a legal director at a CAC 40 company that anticipated compliance with the AI Act.
6. Train your teams in the challenges of the AI Act
Compliance with the AI Act cannot rest solely on the shoulders of the legal department. It requires awareness-raising and training for all employees involved in the lifecycle of AI systems.
Your training plan should focus on:
- Development teams, to integrate compliance requirements right from the design stage (compliance by design)
- Product managers, to assess risks upstream of projects
- Sales teams, to communicate system capabilities and limitations accurately
- End users, to ensure appropriate human oversight
Various formats can be offered: webinars, practical workshops, in-house documentation or e-learning modules. The key is to adapt the content to the technical level and responsibilities of each audience.
Several organizations now offer certified AI Act training courses, which can be a wise investment for key members of your legal team and your data scientists.
7. Adapt your contracts and relations with suppliers
The AI Act will have a significant impact on your contractual relationships, particularly with your AI solution providers. A thorough review of your existing and future contracts is in order.
Major points of attention include:
- Compliance clauses specific to the AI Act
- Distribution of responsibilities for technical documentation
- Transparency obligations on training data
- Guarantees of human oversight
- Audit and control mechanisms
For new contracts, draw up standard clauses adapted to the various risk categories. For existing contracts, draw up an action plan for their gradual updating, prioritizing those linked to high-risk systems.
"In this evolving regulatory context, give preference to partners who demonstrate a thorough understanding of AI Act issues and who have already integrated these requirements into their solutions," recommends an expert in technology contract law.
This is precisely the approach taken by Cleyrop, whose solutions have been designed from the outset with particular attention to regulatory compliance and data sovereignty.
8. Implement a progressive, pragmatic compliance strategy
Given the scale of the changes required by the AI Act, a gradual, pragmatic approach is essential. It would be illusory to aim for total and immediate compliance for all your AI systems.
Here's a three-phase roadmap you could adopt:
Phase 1 (immediate):
- Finalize the mapping of your AI systems
- Identify priority high-risk systems
- Train key teams in AI Act requirements
Phase 2 (6-12 months):
- Bring your high-risk systems into compliance
- Review your contracts with strategic suppliers
- Deploy your AI governance framework
Phase 3 (12-18 months):
- Extend compliance to all your systems
- Automate documentation and control processes
- Set up regular audits
This sequential approach will enable you to focus your resources on the most critical areas while gradually building your AI compliance maturity.
"The important thing is not to be perfectly compliant from day one, but to demonstrate a serious commitment and steady progress towards compliance," stresses a European regulator at a recent conference on the AI Act.
9. Turning regulatory constraints into competitive advantage
Beyond mere compliance, the AI Act can be seen as an opportunity for differentiation and responsible innovation. Organizations that know how to integrate these regulatory requirements into their overall strategy will derive a definite competitive advantage.
Here's how to turn this constraint into an opportunity:
- Promote your compliance to your customers and partners as a guarantee of reliability and ethics.
- Integrate the principles of trusted AI into your value proposition
- Develop in-house expertise that can be leveraged in future developments
- Take an active part in industry discussions on the interpretation and application of the AI Act
Solutions like those offered by Cleyrop, which natively integrate the principles of data sovereignty and algorithm transparency, enable you to reconcile innovation and compliance without compromise.
"Companies that see the AI Act as a mere regulatory constraint will be missing out on a major opportunity to strengthen the trust of their stakeholders," says an innovation director at a major French group.
10. Keep abreast of regulatory changes and clarifications
The AI Act is a living regulation that will continue to evolve as guidelines, enforcement decisions and case law are issued. Keeping abreast of these developments is essential to maintaining your compliance over the long term.
To do this:
- Subscribe to digital and AI law newsletters
- Participate in sectoral working groups on the interpretation of the AI Act
- Follow the publications of the European AI Office, which will oversee the application of the regulation
- Exchange regularly with your counterparts in other organizations
Consider also relying on technology partners such as Cleyrop, who integrate a regulatory watch into their offering and ensure that their solutions evolve in line with new requirements.
The AI Act represents a major challenge, but also a tremendous opportunity to structure your approach to artificial intelligence. By adopting a methodical, step-by-step approach, you can not only ensure your organization's compliance, but also boost the confidence of your stakeholders and consolidate your market position.
Ready to turn this regulatory constraint into a strategic advantage? Our experts are at your disposal to guide you through this process and show you how our solutions can help you comply with the European AI Act. Contact us today for a personalized assessment of your needs.