This document is a template provided as a starting point for your compliance documentation. It does not constitute legal advice and should be reviewed by a qualified legal professional before use. Viktoria Compliance accepts no liability for the use of this template.
AI Governance Policy — Template
Version 1.0.0 — Last updated 2026-04-25
1. Purpose and Scope
This AI Governance Policy (the 'Policy') sets out how [companyName] complies with Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 (the 'AI Act') across the full lifecycle of every AI system it provides or deploys, and how it is held accountable to the competent national authority, [supervisoryAuthority]. The Policy applies to all business units; to all AI systems on the market, in service, in pilot or in development; and to all employees, contractors and third parties involved in their design, training, deployment or operation. The Policy is approved by [approverName] on behalf of the management body, takes effect on 2026-04-26 and is reviewed on or before [reviewDate].
2. Governance Structure
The management body of [companyName] holds ultimate accountability for AI Act compliance. Day-to-day execution is delegated as follows:
- AI Officer ([aiOfficer]): owner of this Policy, the AI System Inventory, risk-classification decisions, and all communications with [supervisoryAuthority].
- Chief Information Security Officer ([cisoName]): owner of cybersecurity, robustness and adversarial-testing requirements (Article 15; Article 55(1), points (a) and (d)).
- Data Protection Officer ([dpoName]): coordinates with the AI Officer where AI systems process personal data, in line with the GDPR and the special-categories exception of Article 10(5).
- AI Ethics Review Board (chaired by [ethicsBoardChair]): multi-disciplinary review of high-risk and high-impact AI systems before launch and on material modification.
3. AI Literacy Programme (Article 4)
Article 4 of the AI Act requires providers and deployers to take measures to ensure, to their best extent, a sufficient level of AI literacy among their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training, the context in which the AI systems are intended to be used, and the persons or groups of persons on whom the AI systems are to be used. [companyName] delivers an AI literacy programme covering: foundational understanding of AI systems and their lifecycle; the AI Act risk classification and the obligations attached to each tier; the corporate AI governance framework; bias, fairness and human-oversight expectations; and incident recognition and reporting. The programme is rolled out by role and is renewed at least annually.
4. AI Risk Management
Every AI system, regardless of risk tier, is subject to the corporate AI risk-management process, with depth proportionate to risk. For high-risk systems, the risk-management process is the iterative process required by Article 9 of the AI Act and is integrated with the corporate cybersecurity, data-protection and operational-risk frameworks. The process identifies and evaluates known and foreseeable risks to health, safety and fundamental rights; estimates and evaluates risks emerging from intended use and reasonably foreseeable misuse; evaluates risks identified by the post-market monitoring system; adopts targeted risk-management measures. Residual risks are evaluated to confirm acceptability and are communicated to deployers in the instructions for use.
5. AI Ethics Review
Before placing on the market or putting into service any AI system that is high-risk under Article 6 or that triggers transparency obligations under Article 50, the AI Ethics Review Board chaired by [ethicsBoardChair] reviews the project. The review covers proportionality of the AI use, fairness across protected groups, fundamental-rights impact (Article 27 fundamental-rights impact assessment for deployers of certain high-risk systems in public-service or essential-private-service contexts), human oversight design, transparency and explainability, environmental and societal impact, and exit and rollback strategies. The Board's opinion is recorded and binding on the project; disputes are escalated to the management body.
6. Serious Incident Reporting (Article 73)
Providers of high-risk AI systems placed on the market in the Union notify serious incidents to the market-surveillance authority [supervisoryAuthority] of the Member State where the incident occurred. A 'serious incident' under Article 3(49) covers the death of a person or serious harm to a person's health, serious and irreversible disruption of the management or operation of critical infrastructure, infringement of obligations under Union law intended to protect fundamental rights, and serious harm to property or the environment. Notification is made immediately after the provider has established a causal link between the AI system and the incident, or the reasonable likelihood of such a link, and in any event no later than fifteen (15) days after awareness. For incidents resulting in the death of a person, the deadline is shortened to ten (10) days; for widespread infringements or serious and irreversible disruption of critical infrastructure, to two (2) days.
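The tiered deadlines above reduce to a simple precedence rule: the shortest applicable window governs. A minimal sketch, for illustration only (the function and parameter names are assumptions, not terms of the Act, and the Act separately requires notification immediately once a causal link or its reasonable likelihood is established):

```python
from datetime import timedelta

def notification_deadline(death: bool = False,
                          critical_infrastructure: bool = False,
                          widespread_infringement: bool = False) -> timedelta:
    """Maximum Article 73 notification window after awareness (sketch)."""
    # Widespread infringement, or serious and irreversible disruption of
    # critical infrastructure: no later than two (2) days.
    if widespread_infringement or critical_infrastructure:
        return timedelta(days=2)
    # Death of a person: no later than ten (10) days.
    if death:
        return timedelta(days=10)
    # All other serious incidents: no later than fifteen (15) days.
    return timedelta(days=15)
```

Because the two-day tier is checked first, an incident meeting several criteria is always reported within the shortest applicable deadline.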
7. Fundamental Rights Impact Assessment (Article 27)
Where [companyName] deploys a high-risk AI system listed in point 5(b) or 5(c) of Annex III (evaluation of creditworthiness and credit scoring, or risk assessment and pricing in life and health insurance), or deploys another Annex III high-risk system as a body governed by public law or a private entity providing public services, or where required by other Union or national law, a fundamental-rights impact assessment under Article 27 is conducted before the system is deployed for the first time. The assessment describes the deployer's processes in which the AI system will be used, the period and frequency of intended use, the categories of natural persons and groups likely to be affected, the specific risks of harm to those groups, the human-oversight measures, and the measures to be taken if those risks materialise. The market-surveillance authority [supervisoryAuthority] is notified of the results in accordance with Article 27(3).
8. Post-Market Monitoring (Article 72)
For each high-risk AI system, [companyName] operates a post-market monitoring system that actively and systematically collects, documents and analyses relevant data on the performance of the system over its lifetime. The system is based on a post-market monitoring plan and may rely on logs generated by the system in accordance with Article 12. Where deviation from declared performance, fairness or robustness is detected, the AI Officer triggers corrective action, which may include re-training, restriction of intended purpose, additional human oversight, or withdrawal of the system from the market. Article 79 corrective actions are implemented and notified within the deadlines set by the supervisory authority.
9. Audit, Monitoring and Continuous Improvement
Internal audit reviews implementation of this Policy and the AI System Inventory at least annually, on a risk-prioritised basis. The audit covers the existence and currency of risk-classification decisions, conformity assessments, transparency notices, human oversight procedures, data-quality procedures, and the post-market monitoring records. External audit, certification or independent assurance is obtained where required by the supervisory authority [supervisoryAuthority] or by sector-specific obligations. Findings are reported to the management body, tracked through to closure, and used to update the Policy and the underlying procedures.
10. Management Approval and Document Control
This Policy is approved by [approverName] on behalf of the management body of [companyName]. The AI Officer [aiOfficer] owns the Policy and is responsible for its distribution, version control, recording of approvals, scheduling of reviews, and maintenance of evidence of compliance for inspection by [supervisoryAuthority]. Material amendments require re-approval by the management body. Non-material editorial changes may be made by the AI Officer and notified at the next management review.