This document is a template provided as a starting point for your compliance documentation. It does not constitute legal advice and should be reviewed by a qualified legal professional before use. Viktoria Compliance accepts no liability for the use of this template.
AI Act Risk Classification Decision — Template
Version 1.0.0 — Last updated 2026-04-25
1. Purpose and Scope
This Risk Classification Decision determines the regulatory category of [aiSystemName] (version [aiSystemVersion]) under Regulation (EU) 2024/1689 (the 'AI Act'). The decision is required before [companyName] places the system on the market or puts it into service in the European Union, and is renewed at each material modification of the system. Submissions to the competent national authority [supervisoryAuthority] reference this document. The classification is approved by [approverName] on behalf of the management body and is effective as of 2026-04-26; it shall be reviewed on or before [reviewDate] or upon any material change.
2. System Description
Provider/deployer: [companyName] of [companyAddress]. AI system: [aiSystemName] version [aiSystemVersion]. Intended purpose: [aiSystemPurpose]. The description includes the techniques and approaches used (rule-based, machine learning, large language models, computer vision, etc.), the inputs accepted by the system, the outputs produced, the operating environment (cloud, on-premises, embedded), and the foreseeable users and affected persons. Pre-trained or fine-tuned base models, including any GPAI models incorporated, are listed with their providers and version identifiers.
3. Prohibited Practices Check (Article 5)
We assess whether the system implements any of the prohibited practices listed in Article 5 of the AI Act, applicable since 2 February 2025: subliminal techniques materially distorting behaviour to cause significant harm; exploitation of vulnerabilities of specific groups; social scoring of natural persons or groups leading to detrimental or unfavourable treatment in unrelated social contexts, or treatment that is unjustified or disproportionate (by public or private actors alike); risk-assessment of natural persons solely on the basis of profiling for criminal-offence prediction; untargeted scraping of facial images for face-recognition databases; emotion recognition in workplaces or educational institutions outside medical or safety purposes; biometric categorisation inferring race, political opinions, trade-union membership, religious beliefs, sex life or sexual orientation; real-time remote biometric identification in publicly accessible spaces for law-enforcement purposes outside the narrow exceptions of Article 5(1)(h). If any prohibited practice is implemented or could be implemented in normal use, the system cannot be placed on the market in the EU and the project must be redesigned or terminated.
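Where [companyName] tracks this screening in internal tooling, it reduces to a checklist in which any affirmative answer is dispositive. A minimal sketch in Python follows; the field names are our own shorthand for the practices in Article 5(1)(a) to (h), not the Regulation's wording.

```python
# Minimal sketch of the Article 5 screening as a boolean checklist.
# Field names are illustrative shorthand, not the Regulation's wording.
from dataclasses import dataclass, fields

@dataclass
class Article5Screening:
    subliminal_manipulation: bool = False                # Art. 5(1)(a)
    exploits_vulnerable_groups: bool = False             # Art. 5(1)(b)
    social_scoring: bool = False                         # Art. 5(1)(c)
    profiling_based_crime_prediction: bool = False       # Art. 5(1)(d)
    untargeted_facial_image_scraping: bool = False       # Art. 5(1)(e)
    emotion_recognition_work_or_education: bool = False  # Art. 5(1)(f)
    sensitive_biometric_categorisation: bool = False     # Art. 5(1)(g)
    realtime_remote_biometric_id: bool = False           # Art. 5(1)(h)

    def is_prohibited(self) -> bool:
        # A single affirmative answer blocks placement on the EU market.
        return any(getattr(self, f.name) for f in fields(self))

assert not Article5Screening().is_prohibited()
```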
4. High-Risk Classification Check (Article 6 and Annex III)
We assess whether the system is high-risk under (a) Article 6(1) — the system is intended to be used as a safety component of a product covered by Union harmonisation legislation listed in Annex I and the product is required to undergo third-party conformity assessment; or (b) Article 6(2) — the system falls within one of the use-cases enumerated in Annex III: biometrics (remote biometric identification, biometric categorisation and emotion recognition); management and operation of critical infrastructure; education and vocational training; employment, workers management and access to self-employment; access to and enjoyment of essential private services and public services and benefits; law enforcement; migration, asylum and border control; administration of justice and democratic processes. The Article 6(3) exception applies where the system performs a narrow procedural task, improves the result of a previously completed human activity, detects decision-making patterns or deviations without replacing or influencing the human assessment, or performs a preparatory task for an Annex III assessment; it never applies where the system performs profiling of natural persons. Where we rely on the exception, we document the assessment before placing the system on the market, register the system in the EU database under Article 49(2), and provide the documented assessment to the national competent authority on request.
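The determination itself is short decision logic: an Annex I safety component is high-risk outright; an Annex III use-case is high-risk unless the Article 6(3) exception applies; and profiling of natural persons defeats the exception. A minimal sketch under the same illustrative naming:

```python
# Minimal sketch of the Article 6 determination. Parameter names are ours.
def is_high_risk(
    annex_i_safety_component: bool,   # Art. 6(1): safety component of an Annex I
                                      # product subject to third-party assessment
    annex_iii_use_case: bool,         # Art. 6(2): enumerated Annex III use-case
    article_6_3_exception: bool,      # narrow procedural/preparatory task etc.
    profiles_natural_persons: bool,   # profiling always defeats the exception
) -> bool:
    if annex_i_safety_component:
        return True
    if annex_iii_use_case:
        return profiles_natural_persons or not article_6_3_exception
    return False
```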
5. GPAI Classification Check (Articles 51-55)
We assess whether the system is, or incorporates, a general-purpose AI model within the meaning of Article 3(63) — that is, a model which displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way it is placed on the market and that can be integrated into a variety of downstream systems or applications. Where the model is presumed to have high-impact capabilities because the cumulative compute used for its training exceeds 10^25 floating-point operations (Article 51(2)), it is classified as a GPAI model with systemic risk and additional obligations apply (Article 55). [companyName] documents whether it is the GPAI provider, downstream provider, or deployer, and which obligations follow: Article 53 (technical documentation, copyright policy, training-data summary), Article 54 (authorised representative for non-EU providers), Article 55 (systemic-risk obligations including model evaluation, adversarial testing, serious-incident reporting and cybersecurity).
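The Article 51(2) presumption turns on a single figure, the cumulative training compute. Where exact accounting is not yet available, a common first-order estimate for dense transformer training is 6 x parameters x training tokens; that heuristic comes from the scaling-law literature and is an assumption of this sketch, not part of the Regulation, whose criterion is the actual cumulative compute.

```python
# Rough sketch of the Article 51(2) threshold check. The 6 * N * D rule of
# thumb for dense-transformer training compute is a scaling-law heuristic,
# not a legal definition; the Regulation counts actual cumulative compute.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Art. 51(2)

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    return 6.0 * parameters * training_tokens

# Illustrative numbers only: a 70B-parameter model trained on 15T tokens.
flops = estimated_training_flops(70e9, 15e12)     # ~6.3e24 FLOPs
presumed = flops > SYSTEMIC_RISK_THRESHOLD_FLOPS  # False in this example
print(f"{flops:.2e} FLOPs; Article 51(2) presumption triggered: {presumed}")
```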
6. Transparency Obligations Check (Article 50)
Independently of risk-tier classification, we assess whether the system triggers transparency obligations under Article 50, applicable from 2 August 2026: (a) AI systems intended to interact directly with natural persons must inform those persons that they are interacting with an AI system, unless this is obvious from context; (b) providers of generative AI systems producing synthetic audio, image, video or text content must mark the outputs as artificially generated or manipulated in a machine-readable format; (c) deployers of emotion-recognition systems or biometric-categorisation systems must inform the natural persons exposed to them; (d) deployers of AI systems generating or manipulating image, audio or video content constituting a deep fake must disclose that the content has been artificially generated or manipulated, with limited exceptions for evidently artistic, creative, satirical, fictional or analogous works.
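The four triggers are independent of one another, so several duties can attach at once. A minimal sketch mapping shorthand flags of our own to the resulting duties:

```python
# Minimal sketch of the Article 50 trigger-to-duty mapping; flag names are
# our shorthand for the four situations described above.
def article_50_duties(
    interacts_with_natural_persons: bool,
    generates_synthetic_content: bool,
    emotion_recognition_or_biometric_categorisation: bool,
    produces_deep_fakes: bool,
) -> list[str]:
    duties = []
    if interacts_with_natural_persons:
        duties.append("inform persons they are interacting with an AI system")
    if generates_synthetic_content:
        duties.append("mark outputs as artificial in a machine-readable format")
    if emotion_recognition_or_biometric_categorisation:
        duties.append("inform the natural persons exposed to the system")
    if produces_deep_fakes:
        duties.append("disclose artificial generation or manipulation")
    return duties
```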
7. Classification Result and Consequences
Based on the assessments in Sections 3-6, [aiSystemName] is classified as one of: PROHIBITED — the system cannot be placed on the market or put into service; HIGH-RISK — the Chapter III requirements and obligations apply (risk management system, data governance, technical documentation, record-keeping, transparency, human oversight and accuracy/robustness/cybersecurity per Articles 8-15, plus registration in the EU database before deployment, conformity assessment, EU declaration of conformity and CE marking, post-market monitoring and serious-incident reporting); GPAI — Articles 53-55 apply per provider/deployer role; LIMITED-RISK — Article 50 transparency obligations apply; MINIMAL/NO-RISK — voluntary codes of conduct only. Article 50 obligations attach independently of the tier and cumulate with it. The applicable obligations and the timeline for compliance are recorded in Annex A to this decision.
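The precedence among the tiers, and the fact that Article 50 duties travel with the system regardless of tier, can be stated compactly. A minimal sketch mirroring this template's own labels:

```python
# Minimal sketch of the Section 7 precedence logic. The tier labels mirror
# this template; Article 50 duties attach independently of the tier.
def classify(prohibited: bool, high_risk: bool, gpai: bool,
             article_50_triggered: bool) -> tuple[str, bool]:
    if prohibited:
        tier = "PROHIBITED"
    elif high_risk:
        tier = "HIGH-RISK"
    elif gpai:
        tier = "GPAI"
    elif article_50_triggered:
        tier = "LIMITED-RISK"
    else:
        tier = "MINIMAL/NO-RISK"
    return tier, article_50_triggered

# A high-risk system built on a GPAI model keeps its Article 50 duties.
assert classify(False, True, True, True) == ("HIGH-RISK", True)
```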
8. Review and Approval
This decision is reviewed by the AI Officer [aiOfficer], the CISO [cisoName] and, where personal data is processed, the Data Protection Officer [dpoName]. It is approved by [approverName] on behalf of the management body of [companyName] in accordance with the corporate AI governance policy. A summary of this decision is registered in the AI System Inventory and is made available to the supervisory authority [supervisoryAuthority] on request. The decision is re-issued upon any material change to the system, the underlying GPAI model, the intended purpose, or the regulatory environment, and at the latest on the review date set out in Section 1.