AI Act – Annex III: classifying as high-risk without getting it wrong

High-risk AI systems: how to decide if Annex III applies and build a compliant file (risk management, Annex IV, CE marking) in Luxembourg, as of May 2026.

Summary: “High-risk” AI systems (Annex III) trigger heavy obligations: risk management, Annex IV technical documentation (Art. 11), logging (Art. 12), human oversight (Art. 14), CE marking (Art. 48). Here is how to decide if Annex III applies and what to include in your file, in Luxembourg, as of May 2026.

The general rule

Regulation (EU) 2024/1689 “AI Act” designates certain systems as “high-risk” under Article 6 and Annex III (e.g., recruitment, credit scoring, access to essential services, education/exams, biometrics, justice and law enforcement, management of critical infrastructure). This classification triggers specific technical and organizational requirements: risk management system (Art. 9), data governance (Art. 10), logging (Art. 12), transparency and instructions for use (Art. 13), human oversight (Art. 14), accuracy/robustness/cybersecurity (Art. 15), quality management system (Art. 17), Annex IV‑compliant technical documentation (Art. 11), conformity assessment (Art. 43 ff.), EU declaration of conformity (Art. 47) and CE marking (Art. 48). Official text: Regulation (EU) 2024/1689, OJ 12.07.2024, CELEX 32024R1689 (Publications Office). See also the Commission’s article pages: Arts. 9, 10, 12, 14, 17, 47, 48 on the AI Act Service Desk.

Timeline as of 9 May 2026: the AI Act entered into force on 1 August 2024. The Commission has set out phased application, with specific dates for high-risk systems (Annex III) and a possible adjustment via the “Digital Omnibus” linked to the availability of harmonised standards. The Commission’s “Standardisation” page and its FAQ detail these milestones and Article 113 (entry into force and application). A conditional deferral of the Annex III high-risk rules towards end‑2027 is under discussion; the EDPB and the EDPS commented on it in Joint Opinion 1/2026. In practice, plan for structured compliance starting in 2026 while monitoring official announcements.

What the regulators say

  • European Commission (binding text): Annex III “referenced in Art. 6(2)” lists the high-risk areas; Art. 11 requires detailed technical documentation (Annex IV) before placing on the market/putting into service; Art. 47 requires an EU declaration of conformity; Art. 48 mandates CE marking for compliant high‑risk systems. References: Regulation (EU) 2024/1689 (Publications Office); Arts. 11/Annex IV, 47, 48 (AI Act Service Desk).
  • Commission (FAQ/standardisation): classification is based on the intended purpose and maps to requirements “risk management, data quality, documentation & traceability, transparency, human oversight, accuracy/robustness/cybersecurity.” Reference: Navigating the AI Act (Commission).
  • CNPD (Luxembourg): the CNPD published thematic AI materials and an opinion on Draft Law No. 8476 organizing national aspects of the AI Act. It recalls that deployments involving personal data remain subject to the GDPR and can be investigated even beyond the AI Act’s specific roles. References: CNPD – AI thematic file; “Prohibited AI systems”; CNPD Opinion on Draft Law 8476.
  • EDPB/EDPS: in Joint Opinion 1/2026 (Digital Omnibus), the authorities stress the risks of broadly deferring “high‑risk” obligations and recall the link with fundamental rights and data protection. Reference: EDPB/EDPS Joint Opinion 1/2026.

How to apply it in practice

Step 1 – Map your use cases and decide “Annex III or not”

  • Identify whether your system falls under any Annex III cluster (e.g., recruitment/career management, credit/insurance scoring, hospital triage, exam proctoring, critical infrastructure management, biometrics). Base this on intended purpose, affected population, and legal or equivalent effects.
  • Apply the Art. 6(3) exemption test: even if listed in Annex III, a system is not “high-risk” if it does not pose a significant risk to health, safety or fundamental rights; note, however, that a system performing profiling of natural persons is always considered high-risk. Document this analysis.

Ref.: Regulation (EU) 2024/1689; Commission FAQ.
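For teams that maintain an AI inventory, the decision flow above can be sketched as a small triage helper. This is illustrative only: the cluster names paraphrase Annex III, the field names are our assumptions, and the real Art. 6 analysis must remain a written, reasoned document.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative paraphrase of the Annex III clusters -- not the legal text.
ANNEX_III_CLUSTERS = {
    "biometrics", "critical_infrastructure", "education_exams",
    "employment_recruitment", "essential_services_credit",
    "law_enforcement", "migration_border", "justice_democracy",
}

@dataclass
class UseCase:
    name: str
    cluster: Optional[str]     # Annex III cluster, if any
    performs_profiling: bool   # profiling of natural persons
    significant_risk: bool     # outcome of the documented Art. 6(3) assessment

def classify(uc: UseCase) -> str:
    """Return a provisional classification; the written Art. 6 analysis
    must always accompany (and can override) this outcome."""
    if uc.cluster not in ANNEX_III_CLUSTERS:
        return "not Annex III"
    # Art. 6(3): a listed system may escape "high-risk" only if it poses
    # no significant risk -- and never if it performs profiling.
    if uc.performs_profiling or uc.significant_risk:
        return "high-risk"
    return "Annex III but exempt under Art. 6(3) (document it)"

ats = UseCase("CV scoring ATS", "employment_recruitment",
              performs_profiling=True, significant_risk=True)
print(classify(ats))  # high-risk
```

The point of encoding the test is not automation of a legal judgment, but forcing each inventory entry to carry an explicit, reviewable answer to each question.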

Example: a Luxembourg ATS that “scores” CVs and filters candidates for banking jobs is prima facie Annex III (employment). Even if it never sends automated rejections and no decision is taken without meaningful human review, it will generally remain “high‑risk”: its outputs influence selection, and CV scoring involves profiling of natural persons. Document the Art. 6 assessment and your Art. 14 oversight safeguards.

Refs.: Navigating the AI Act; Art. 14 (AI Act Service Desk).

Step 2 – Build the conformity file (before placing on the market/putting into service)

  • Risk management system (Art. 9): iterative cycle covering design → testing → deployment → post‑market monitoring; reasonably foreseeable use/misuse scenarios, bias/error assessment and mitigation measures.
  • Data governance (Art. 10): quality, relevance and representativeness of training/validation/test sets; cleaning/annotation procedures; bias monitoring; source traceability.
  • Logging (Art. 12): logging capabilities tailored to the purpose; e.g., for an ATS: run identifiers, model version, relevant inputs, scores/explanations, human actions, timestamps.
  • Human oversight (Art. 14): define control points, triggers for human review, ability to override or correct decisions, operator training.
  • QMS (Art. 17): AI quality policy, roles, model validation/evolution procedures, change control, serious incident management.
  • Annex IV technical documentation (Art. 11): system description, intended purpose, architecture, data, performance/robustness metrics, risk management, test results, instructions for use, known limitations.
  • Conformity assessment (Art. 43 ff.) and EU declaration of conformity (Art. 47), then CE marking (Art. 48).

Refs.: AI Act Service Desk – Art. 9, 10, 12, 14, 17, 11/Annex IV, 47, 48.
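To make the logging bullet concrete, here is one possible shape for an Art. 12‑style event record for an ATS. The field names and the JSON‑lines format are our assumptions, not mandated by the regulation; the point is that every run is reconstructible after the fact.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical log schema for an ATS run; fields are illustrative.
@dataclass
class InferenceLog:
    run_id: str
    model_version: str
    inputs_ref: str      # reference to the relevant inputs, not raw CVs
    score: float
    explanation: str
    human_action: str    # e.g. "reviewed", "overridden", "confirmed"
    timestamp: str       # UTC, ISO 8601

def log_event(record: InferenceLog) -> str:
    # One JSON object per line: append-only logs simplify later audits.
    return json.dumps(asdict(record))

entry = InferenceLog(
    run_id="run-0001",
    model_version="ats-scoring-2.3.1",
    inputs_ref="hr-store/applications/0001",  # illustrative reference
    score=0.82,
    explanation="keyword match + seniority features",
    human_action="reviewed",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(log_event(entry))
```

Storing a reference to inputs rather than the inputs themselves keeps the log GDPR‑friendly while preserving traceability.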

Step 3 – During use (deployer)

  • Use the system according to the provider’s instructions; if you supply input data (e.g., HR or banking data), ensure quality/representativeness (as recalled in the Commission FAQ).
  • Implement post‑market monitoring, serious incident handling, and continuous record‑keeping.

Ref.: Navigating the AI Act (Commission).
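On the deployer side, input‑data monitoring can start with something as simple as comparing live input distributions against the provider’s documented reference profile. The thresholds and group labels below are illustrative assumptions, not regulatory values.

```python
from collections import Counter

def share_by_group(rows: list, key: str) -> dict:
    # Proportion of inputs per group (e.g. a demographic or product segment).
    counts = Counter(r[key] for r in rows)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items()}

def drift_alert(live: dict, reference: dict, tolerance: float = 0.10) -> list:
    """Flag groups whose live share deviates from the documented reference
    distribution by more than `tolerance` (absolute difference)."""
    return [g for g, ref in reference.items()
            if abs(live.get(g, 0.0) - ref) > tolerance]

reference = {"A": 0.5, "B": 0.5}  # provider's documented data profile
live_rows = [{"group": "A"}] * 70 + [{"group": "B"}] * 30
alerts = drift_alert(share_by_group(live_rows, "group"), reference)
print(alerts)  # ['A', 'B']
```

An alert here does not prove non‑compliance; it is a trigger for the human review and record‑keeping the step above describes.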

Step 4 – After placing on the market (lifecycle and audits)

  • Keep the EU declaration for 10 years (Art. 47) and maintain up‑to‑date documentation (Art. 11 + Annex IV). Any “significant” model change may require reassessment.
  • Prepare for checks by the national competent authority and/or market surveillance (in LU, the CNPD steps in wherever personal data and national law apply; other sectoral authorities may be designated by the implementing law).

Refs.: Art. 47 (AI Act Service Desk); CNPD – AI materials and opinion on Draft Law 8476.
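A minimal change‑control gate can encode the “significant change” trigger. Approximating significance by a major model‑version bump is our assumption for illustration, not a legal test; your QMS should define the actual criteria.

```python
# Hypothetical change-control gate: treat major version bumps as
# "significant" changes that trigger a conformity reassessment review.
def needs_reassessment(old_version: str, new_version: str) -> bool:
    old_major = int(old_version.split(".")[0])
    new_major = int(new_version.split(".")[0])
    return new_major != old_major

print(needs_reassessment("2.3.1", "3.0.0"))  # True
print(needs_reassessment("2.3.1", "2.4.0"))  # False
```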

Common pitfalls

  1. Stopping at a binary “Annex III yes/no” without applying Art. 6(3). Many projects try to “avoid” Annex III by narrowing scope; you need a written, structured, reasoned analysis of significant risk. Ref.: Regulation (EU) 2024/1689; Commission FAQ.
  2. Incomplete Annex IV file. Authorities expect end‑to‑end traceability (data, training, versioning, tests, known limitations, oversight measures). A marketing whitepaper does not replace Annex IV. Ref.: Art. 11 and Annex IV (AI Act Service Desk).
  3. Mixing up “provider/deployer.” Obligations differ. In Luxembourg, many groups act as both in‑house developers (providers) and users (deployers). Map your roles per system. Ref.: Navigating the AI Act (Commission).
  4. Forgetting input data governance on the deployer side. If you feed a high‑risk system (e.g., credit scores), input data quality/relevance/representativeness is also your operational responsibility. Ref.: Navigating the AI Act (Commission); Art. 10 (AI Act Service Desk).
  5. Underestimating CE marking and the EU declaration. CE marking (Art. 48) is not a graphic formality: it attests compliance with high‑risk requirements. Without it, placing on the market risks blockage/withdrawal. Ref.: Art. 47 and 48 (AI Act Service Desk).
  6. Ignoring the GDPR interface. The AI Act does not displace the GDPR (legal bases Art. 6, Art. 9 for sensitive data; rights Arts. 15‑22; automated decision‑making Art. 22). The CNPD reminds that deployments involving personal data remain subject to GDPR oversight. Refs.: CNPD – AI file; EDPB/EDPS 1/2026.

Official sources

Timeline note (as of 9 May 2026): initial application of Annex III high‑risk obligations around 2 August 2026 is subject to a proposed legislative adjustment (Digital Omnibus), notably conditional on the availability of harmonised standards; the EDPB/EDPS formally commented on potential deferrals. Follow official updates from the Commission and the CNPD for your internal milestones.

Luxgap regulatory expertise article. For personalised guidance on this topic, contact us or configure your online quote.
