AI Act Compliance
Your AI systems that respect EU law and values
Artificial intelligence presents many opportunities for companies, but this technology also poses risks for individuals. This is why the European Union is establishing a regulatory framework to govern the development and use of AI systems: the AI Act.
If you are using such systems, or are considering doing so, it is essential that you prepare now for compliance with the AI Act. Luxgap informs and supports you in this task.
Principle
What is the AI Act?
The AI Act is a European regulatory framework whose development began in 2021, in response to the risks that artificial intelligence systems may pose to individuals.
It lays down rules for the development, marketing and use of AI, following a risk-based approach.
Now what?
Questions to ask yourself when using AI:
- Does my AI system respect the fundamental rights and values of the European Union?
- How can I guarantee that my AI is safe and trustworthy for users?
- Is my AI system considered high-risk according to EU regulations?
- How can I ensure adequate transparency and accountability in the decisions made by my AI system?
How to
Become compliant?
How can you make your company comply with the requirements of the AI Act today? Luxgap supports you in implementing the various measures to be considered:
- Identify and classify the AI systems you use or envisage according to their level of risk under the AI Act (unacceptable, high, limited or minimal risk); see the sketch after this list.
- For high-risk AI systems, perform a thorough assessment of the associated risks.
- Maintain complete documentation for each AI system, describing its operation, its objectives, its training data and any other relevant information. This documentation must be easily accessible and regularly updated.
- Clearly inform users when they interact with an AI system, providing them with information about its functioning, capabilities and limitations. Ensure transparency in automated decision-making.
- Regularly train staff on the ethical and regulatory implications of AI.
- Establish human supervision for high-risk AI systems, in order to ensure human intervention when necessary.
- Conduct regular audits to ensure that AI systems comply with regulatory requirements.
- Collaborate with independent third parties to carry out external audits if necessary.
- Designate an AI compliance manager within the company.
- Establish clear procedures for the implementation, updating and monitoring of AI systems.
- Ensure that AI systems comply with data protection regulations, such as the GDPR.
- Implement measures to protect users' privacy and ensure data security.
- Establish clear mechanisms to deal with complaints and concerns related to AI.
- Ensure clear responsibility in the event of damage caused by an AI system.
- Continuously monitor regulatory developments concerning AI at EU level and adapt company practices accordingly.
- Work closely with national and European authorities to ensure compliance and obtain guidance.
- Raise awareness throughout the company of the implications of AI and the importance of regulatory compliance.
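Much of this work rests on keeping a reliable, up-to-date inventory of your AI systems. As a purely illustrative starting point, here is a minimal sketch in Python of what one entry in such an internal register might look like; the class names, fields and the example system are hypothetical and are not prescribed by the AI Act itself.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import List

class RiskTier(Enum):
    """Risk tiers commonly associated with the AI Act's risk-based approach."""
    UNACCEPTABLE = "unacceptable"   # prohibited practices
    HIGH = "high"                   # strict obligations (documentation, oversight, audits)
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # no specific obligations

@dataclass
class AISystemRecord:
    """One entry in an internal AI-system inventory (illustrative only)."""
    name: str
    purpose: str
    risk_tier: RiskTier
    training_data_summary: str
    documentation_url: str
    human_oversight: bool           # can a human intervene when necessary?
    last_reviewed: date
    open_issues: List[str] = field(default_factory=list)

    def needs_conformity_assessment(self) -> bool:
        # High-risk systems carry the heaviest obligations under the AI Act.
        return self.risk_tier is RiskTier.HIGH

# Example entry for a hypothetical CV-screening tool (recruitment is a
# commonly cited high-risk use case).
record = AISystemRecord(
    name="cv-screening-assistant",
    purpose="Pre-rank incoming job applications",
    risk_tier=RiskTier.HIGH,
    training_data_summary="Anonymised historical applications, 2019-2023",
    documentation_url="https://intranet.example.com/ai/cv-screening",
    human_oversight=True,
    last_reviewed=date(2024, 1, 15),
)

if record.needs_conformity_assessment():
    print(f"{record.name}: schedule a risk assessment and an external audit")
```

A register of this kind gives you a single place to record each system's risk classification, documentation and oversight arrangements, and to flag the high-risk systems that require deeper assessments and audits.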
These actions are numerous, but they are essential to ensure that your company uses AI systems ethically and in compliance with EU regulations. It is also recommended that you regularly consult legal and ethical experts for advice specific to your business context. See how Luxgap can support you in the deployment and monitoring of your AI systems.