This document is a template provided as a starting point for your compliance documentation. It does not constitute legal advice and should be reviewed by a qualified legal professional before use. Viktoria Compliance accepts no liability for the use of this template.
AI Act Human Oversight Procedure — Template
Version 1.0.0 — Last updated 2026-04-25
1. Purpose and Scope
This procedure governs the design and exercise of human oversight in respect of [aiSystemName], a high-risk AI system within the meaning of Article 6 of Regulation (EU) 2024/1689 (the 'AI Act'). Intended purpose: [aiSystemPurpose]. The procedure implements Article 14 (provider's design obligation) and Article 26(2) (deployer's implementation obligation). It is owned by the AI Officer [aiOfficer] and the Human Oversight Lead [oversightLead]. It enters into force on 2026-04-26 and is reviewed on or before [reviewDate] or upon any material modification of the system.
2. Oversight Design (Article 14)
Human oversight measures are designed to enable natural persons to whom oversight is assigned to:
(a) properly understand the relevant capacities and limitations of the high-risk AI system and duly monitor its operation, including detecting and addressing anomalies, dysfunctions and unexpected performance;
(b) remain aware of the possible tendency to automatically rely or over-rely on the output produced by the system ('automation bias'), in particular for high-risk AI systems used to provide information or recommendations to inform decisions to be taken by natural persons;
(c) correctly interpret the system's output, taking into account the available interpretation tools and methods;
(d) decide, in any particular situation, not to use the system or otherwise disregard, override or reverse its output;
(e) intervene in the operation of the system or interrupt the system through a 'stop' button or similar procedure that allows the system to come to a halt in a safe state.
3. Operator Selection
Operators assigned to oversee [aiSystemName] are selected based on technical knowledge, experience, education and training appropriate to the context in which the system is used, in accordance with Article 26(2). Each selection record documents: the operator's name and role; the date of assignment; the technical knowledge baseline against the system's intended purpose and operational context; any conflicts of interest; and backup operators for continuity. Operators are designated by name in the post-market monitoring file referenced from the AI System Inventory.
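The selection record described above can be sketched as a simple structured record. This is an illustrative shape only: the field names and the `OperatorSelectionRecord` type are assumptions for this template, not prescribed by the AI Act, and should be adapted to your organisation's inventory format.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class OperatorSelectionRecord:
    # Illustrative Section 3 record; field names are this template's
    # assumptions, not terms mandated by Article 26(2).
    operator_name: str
    role: str
    date_of_assignment: date
    knowledge_baseline: str                      # assessed against intended purpose and context
    conflicts_of_interest: list[str] = field(default_factory=list)
    backup_operators: list[str] = field(default_factory=list)

# Example entry for the post-market monitoring file (sample data).
record = OperatorSelectionRecord(
    operator_name="J. Doe",
    role="Oversight operator",
    date_of_assignment=date(2026, 5, 1),
    knowledge_baseline="System familiarisation completed; 3 years domain experience",
    backup_operators=["A. Smith"],
)
```

Keeping the record as structured data (rather than free text) makes the quarterly effectiveness review in Section 7 easier to automate.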
4. Operator Training
Each designated operator completes training before being assigned to oversight duties and at recurring intervals thereafter. The training covers: the intended purpose of the AI system; capacities and known limitations; performance metrics on representative datasets; failure modes and observed misuse patterns; the user interface and the meaning of any uncertainty or confidence indicators; the procedure for intervention, override and rollback; the channel and timeline for reporting anomalies, near-misses and incidents to the AI Officer; data protection, fundamental rights and ethical considerations specific to the system. Training completion is recorded; failure to complete training within the prescribed window suspends the operator's oversight authorisation.
5. Intervention Authority
Designated operators have the express authority to intervene in the operation of [aiSystemName], to disregard, override or reverse its output, and to interrupt the system through a 'stop' procedure that brings the system to a halt in a safe state, in accordance with Article 14(4)(d) and (e). Intervention authority is granted to a clearly identified person in real time and cannot be conditional on managerial approval where immediate safety, fundamental-rights or operational-integrity considerations apply. Operators are protected from negative employment or contractual consequences when they exercise intervention authority in good faith.
6. Logging and Traceability
All oversight actions — system interventions, overrides, rollbacks and stops — are logged in the system's technical logs in accordance with Article 12 of the AI Act. Each log entry records the date and time, the operator, the system state before and after the action, the trigger and rationale, and any downstream consequences. Logs are retained for the period necessary to meet the post-market monitoring and incident-reporting obligations of Articles 72 and 73, and in any case for at least six (6) months (Articles 19(1) and 26(6)). Logs are protected against tampering and made available to the supervisory authority [supervisoryAuthority] on request.
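The log-entry fields listed above can be sketched as follows. Both the field names and the hash-chain tamper-evidence mechanism are this template's assumptions, not requirements of Article 12; a production system would typically use an append-only or write-once log store.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_log_entry(operator: str, action: str, state_before: str,
                   state_after: str, trigger: str, rationale: str,
                   prev_hash: str = "0" * 64) -> dict:
    """Illustrative Section 6 log entry (field names are assumptions).

    Each entry embeds the hash of the previous entry, so altering any
    historical record breaks the chain and is detectable.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "operator": operator,
        "action": action,              # intervention | override | rollback | stop
        "state_before": state_before,
        "state_after": state_after,
        "trigger": trigger,
        "rationale": rationale,
        "prev_hash": prev_hash,        # chains entries for tamper evidence
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    return entry

# Sample entry (invented data).
e = make_log_entry("J. Doe", "override", "auto-approve", "manual review",
                   "confidence below threshold", "borderline case escalated")
```

Verifying an entry means re-hashing its body (without `entry_hash`) and comparing; any edit to an earlier entry invalidates every later `prev_hash` link.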
7. Effectiveness Review
The Human Oversight Lead [oversightLead] conducts a quarterly review of the effectiveness of the oversight measures. The review covers: number and category of interventions, overrides and stops; root-cause patterns; near-misses; operator workload; observed automation bias; corrective actions arising from post-market monitoring; whether oversight design changes are needed (e.g., more salient confidence indicators, more aggressive default-off behaviour, narrower deployment context). Findings are reported to the AI Officer and incorporated into updates of this procedure, of the system's technical documentation and of operator training. Material findings are notified to the supervisory authority [supervisoryAuthority] under the post-market monitoring obligation.
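The first review input above, the number and category of interventions, overrides and stops, can be derived directly from the Section 6 logs. A minimal sketch, assuming each log entry carries an `action` field as in this template's illustrative log format:

```python
from collections import Counter

# Invented sample of one quarter's oversight actions.
quarter_log = [
    {"action": "override"},
    {"action": "stop"},
    {"action": "override"},
    {"action": "intervention"},
]

# Tally actions by category for the quarterly effectiveness review.
by_category = Counter(entry["action"] for entry in quarter_log)
print(dict(by_category))  # e.g. {'override': 2, 'stop': 1, 'intervention': 1}
```

Trending these counts quarter over quarter is one concrete way to surface the root-cause patterns and automation-bias signals the review is required to examine.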