GDPR, NIS2, and the AI Act: How to Build One Compliance Programme That Covers All Three in 2026
TL;DR
Most EU SMEs treat GDPR, NIS2, and the AI Act as three separate compliance programmes — three separate policies, three separate audit trails, three separate owners, three separate training schedules. It's expensive, error-prone, and unnecessary. These three regimes share more structural DNA than most compliance teams realise: all three require documented risk management, incident or breach notification within compressed timelines, management-body accountability, and documented technical and organisational measures. This article maps the overlaps, identifies the genuine divergences, and provides a concrete blueprint for collapsing three programmes into one integrated framework without losing coverage or audit defensibility.
Why the "three separate programmes" trap is so common
The pattern repeats across every organisation we audit. GDPR arrived first (2018), and a DPO was appointed, a privacy policy was drafted, and a data-processing register was built. Then NIS2 arrived (transposition 2024-2025), and a separate cybersecurity programme was created — often reporting to the CISO, with its own risk register, its own incident reporting playbook, its own supplier questionnaire. Now the AI Act is ramping up (staggered effective dates 2025-2027), and a third parallel structure is emerging — an AI governance board, an AI risk classification process, a model registry.
Three programmes. Three policies. Three owners. Three board reports. Three training curricula. Three supplier questionnaires.
The direct costs are obvious: duplicated headcount, overlapping audit fees, redundant tooling spend. The hidden costs are worse: conflicting answers when a supplier asks "what are your security requirements?", drifting policies that slowly diverge until they contradict each other, and — most dangerously — blind spots where nobody owns the intersection. When a GDPR data breach involves an AI-processed dataset on NIS2-critical infrastructure, who leads? In the three-programme model, the answer is "all three owners argue about it while the 24-hour NIS2 clock runs out."
What the three regimes actually share
Before we can merge, we have to see the structural overlap. Here are the specific provisions across GDPR, NIS2, and the AI Act that address effectively the same operational need.
Shared obligation 1: Documented risk assessment
- GDPR Article 35 (DPIA) requires a documented data protection impact assessment for high-risk processing.
- NIS2 Article 21(2)(a) requires policies on risk analysis and information system security.
- AI Act Article 9 requires a risk management system for high-risk AI systems covering the entire lifecycle.
All three require the same core artefact: a documented, reviewed, periodically updated risk assessment covering identified threats, likelihoods, impacts, and mitigations. The scope differs (personal data / information systems / AI-driven decisions), but the methodology is effectively identical. One risk register structured with three analysis lenses is sufficient, audit-defensible, and far easier to maintain than three separate registers.
Shared obligation 2: Incident / breach notification
- GDPR Article 33: notify supervisory authority within 72 hours of a personal data breach.
- NIS2 Article 23: early warning within 24 hours, full notification within 72 hours, final report within one month.
- AI Act Article 73: report serious incidents involving high-risk AI systems to the market surveillance authority immediately after establishing a causal link, and in any event within 15 days of becoming aware (shorter deadlines apply: 10 days where a death is involved, 2 days for widespread infringements).
Three different timelines, three different authorities, one operational reality: something went wrong, and the clock started. An integrated notification runbook — with a single detection trigger, a single escalation tree, and a parallel notification workflow — is far safer than three separate runbooks that each wait to be invoked. The first 30 minutes after incident detection must push simultaneously into all three notification paths, not pick one and scramble to remember the others.
Shared obligation 3: Management body accountability
- GDPR Article 5(2) (accountability principle): the controller is responsible for and must demonstrate compliance.
- NIS2 Article 20(1): management bodies are responsible for approving and overseeing cybersecurity risk-management measures, and members can be held accountable.
- AI Act Article 17 (requirements for providers of high-risk AI): a quality management system, including accountability framework, documented and allocated responsibilities.
All three regimes are moving in the same direction: responsibility sits with the board. The "IT will handle it" defence is gone. Your management body needs a single integrated dashboard showing compliance posture across all three regimes, reviewed at each quarterly board meeting, with formal minutes. Not three separate dashboards presented in rotation.
Shared obligation 4: Documented technical and organisational measures
- GDPR Article 32: appropriate technical and organisational measures considering state of the art.
- NIS2 Article 21: appropriate and proportionate measures across ten specific domains.
- AI Act Articles 10-15: data governance, technical documentation, record-keeping, transparency, human oversight, accuracy and robustness.
An integrated control framework — mapped once against all three regimes — is the operational backbone of an efficient programme. ISO 27001:2022 as a base, extended with ISO 42001 (AI management system) and privacy-specific controls, covers the vast majority of technical requirements across the three regimes. You implement each control once, and document its applicability to whichever regimes it serves.
Shared obligation 5: Supply chain / third party management
- GDPR Articles 28 + 44-49: processor contracts, adequacy determinations, standard contractual clauses for international transfers.
- NIS2 Article 21(2)(d): supply chain security including the security of relationships with direct suppliers and service providers.
- AI Act Articles 25 + 26: responsibilities along the AI value chain, obligations for providers and deployers of AI systems.
All three regimes push accountability into the vendor chain. A single supplier evaluation framework — one questionnaire, one annual review process, one risk-tiering scheme — that maps evidence to all three regimes is dramatically more efficient than three parallel vendor assessment programmes. Vendors appreciate it too: a single consolidated questionnaire from you is far easier for them to answer than three separate documents asking overlapping questions in slightly different language.
Shared obligation 6: Training and awareness
- GDPR Article 39(1)(b): the DPO's tasks include awareness-raising and training of staff involved in processing operations.
- NIS2 Article 20(2): members of management bodies must follow cybersecurity risk-management training, and entities are encouraged to offer similar training to their employees on a regular basis.
- AI Act Article 4: providers and deployers must ensure a sufficient level of AI literacy among staff involved in operation and use of AI systems.
An integrated annual training programme covering privacy, cybersecurity, and AI literacy is a single course that takes 45 minutes per employee — not three separate courses totalling 2+ hours. Staff retain more, costs are lower, and completion rates are higher. And one centralised training record is easier to audit than three separate completion trackers.
Where the three regimes genuinely diverge
Merging does not mean flattening. There are real differences that your integrated programme must still respect. Here are the divergences that cannot be consolidated.
Divergence 1: Who leads
GDPR lives in the DPO function (mandatory for many organisations). NIS2 lives with the CISO. The AI Act — for most SMEs — is still finding its home, often split between legal and product. These three leaders must remain distinct with clear accountabilities, but they must operate in a single governance council, not in silos. We recommend a monthly compliance council chaired by the CEO or COO, with DPO, CISO, and AI governance lead as permanent members, plus finance and product as rotating participants.
Divergence 2: The unit of regulation
GDPR regulates processing of personal data. NIS2 regulates network and information systems of in-scope entities. The AI Act regulates specific AI systems (with most obligations concentrated on "high-risk" classifications). The overlap is real but not total: not every NIS2-regulated system processes personal data, not every high-risk AI system is on NIS2-regulated infrastructure. Your compliance matrix must explicitly map which systems are in scope of which regime — do not assume a single "in scope" flag works for all three.
Divergence 3: Reporting authorities
GDPR reports go to national Data Protection Authorities. NIS2 reports go to national cybersecurity authorities. AI Act reports go to market surveillance authorities. These authorities have different contact channels, different portal systems, different language requirements, different report templates. Your incident runbook must trigger all relevant notifications in parallel, with pre-prepared templates for each — not ask a stressed on-call engineer to figure out who to notify at 3am.
Divergence 4: Penalty ceilings and structures
GDPR: up to €20M or 4% of global annual turnover, whichever is higher. NIS2: up to €10M or 2% for essential entities (€7M or 1.4% for important entities), plus national variations (France's daily penalties, Italy's sectoral maxima). AI Act: up to €35M or 7% for prohibited practices, €15M or 3% for most other violations, €7.5M or 1% for supplying incorrect information. The AI Act ceilings are the highest of the three — which surprises many organisations that underestimate the AI Act's sharp end. Budget compliance investment against the highest applicable penalty, not an average.
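To make the "highest applicable penalty" point concrete, the ceiling for each tier is simply the greater of the fixed cap and the turnover percentage. A minimal sketch, using the headline maxima quoted above (tier names are our own labels; national transpositions vary):

```python
# Illustrative only: penalty ceiling = max(fixed cap, % of global annual turnover).
# Figures are the headline EU-level maxima; this is not legal advice.

TIERS = {
    "gdpr": (20_000_000, 0.04),
    "nis2_essential": (10_000_000, 0.02),
    "ai_act_prohibited": (35_000_000, 0.07),
    "ai_act_other": (15_000_000, 0.03),
    "ai_act_incorrect_info": (7_500_000, 0.01),
}

def penalty_ceiling(tier: str, global_turnover_eur: float) -> float:
    """Whichever is higher: the fixed cap or the turnover percentage."""
    fixed, pct = TIERS[tier]
    return max(fixed, pct * global_turnover_eur)

# For a €50M-turnover SME, the AI Act prohibited-practices ceiling is the
# fixed €35M cap, since 7% of turnover is only €3.5M.
print(penalty_ceiling("ai_act_prohibited", 50_000_000))  # → 35000000
```

The asymmetry matters for budgeting: at SME turnover levels the fixed caps dominate, so the exposure is far larger than a percentage-of-revenue intuition suggests.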
A concrete blueprint for integration
With the shared obligations mapped and the divergences explicit, here is a seven-step blueprint for building an integrated programme. This is the playbook we use with SME clients transitioning from three-programme chaos to a single governance framework.
Step 1: Build a unified control catalogue
Pick a master control framework. ISO 27001:2022 Annex A (93 controls) is the most common starting point because it already maps to GDPR Article 32 and NIS2 Article 21 with strong documentary precedent. Extend with ISO 27701 for privacy-specific extensions and ISO 42001 (AI management system, published December 2023) for AI-specific controls. Build a single catalogue showing, for each control, the regimes it satisfies. A control like "access management with MFA" satisfies GDPR (protection of personal data), NIS2 (access control requirement), and AI Act (security of high-risk AI systems) — document this once and reuse.
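The catalogue itself can be as simple as a table keyed by control, tagged with every regime provision it evidences. A minimal sketch, assuming ISO 27001:2022 Annex A control names; the regime mappings shown are examples for illustration, not a legal mapping:

```python
# Illustrative unified control catalogue: each control is implemented once
# and tagged with the regime provisions its evidence can be reused for.
# Mappings below are examples, not an authoritative legal cross-reference.

CATALOGUE = {
    "A.5.15 Access control": {
        "gdpr": "Art. 32", "nis2": "Art. 21(2)(i)", "ai_act": "Art. 15"},
    "A.5.19 Supplier relationships": {
        "gdpr": "Art. 28", "nis2": "Art. 21(2)(d)", "ai_act": "Art. 25"},
    "A.5.24 Incident management planning": {
        "gdpr": "Art. 33", "nis2": "Art. 23", "ai_act": "Art. 73"},
    "A.6.3 Awareness training": {
        "gdpr": "Art. 39(1)(b)", "nis2": "Art. 20(2)", "ai_act": "Art. 4"},
}

def controls_for(regime: str) -> list[str]:
    """All controls whose evidence can be reused for the given regime."""
    return [name for name, regimes in CATALOGUE.items() if regime in regimes]

def provisions_for(control: str) -> dict[str, str]:
    """Which provision of each regime a single control satisfies."""
    return CATALOGUE[control]
```

When an auditor asks "show me your NIS2 access-control evidence", `controls_for("nis2")` is the index into the same artefact pool that answers the GDPR and AI Act questions.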
Step 2: Consolidate the risk register
One risk register with three analysis lenses. Each identified risk gets analysed for: privacy impact (GDPR lens), operational and cyber impact (NIS2 lens), and AI-specific impact (AI Act lens, where applicable). The mitigations column links to the unified control catalogue. This single artefact satisfies GDPR Article 35, NIS2 Article 21(2)(a), and AI Act Article 9 — reviewed quarterly by the compliance council, updated as threats evolve. Drop the three separate registers.
Step 3: Redesign the incident runbook
The 24-hour NIS2 early warning drives the clock. From the moment of detection, your runbook must (in parallel) classify the incident's scope (personal data involved? NIS2-regulated system? high-risk AI system?) and launch the notification workflow for every applicable regime. Pre-built templates for each authority in each national language. Pre-authorised decision paths so the on-call team doesn't need to wake executives for routine decisions. A tabletop exercise every quarter with realistic multi-regime scenarios — we recommend one quarterly exercise specifically scripted to trigger all three notifications, so the team has muscle memory.
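The core of the runbook logic is mechanical: classify once, then compute every applicable deadline from the detection timestamp so nothing waits on a human remembering a regime. A sketch under the headline timelines discussed above; the classification flags and authority labels are placeholders for your own triage and contact list:

```python
# Sketch of parallel notification triggering: one classification pass emits
# every applicable deadline. Labels and flags are illustrative placeholders.
from datetime import datetime, timedelta

def notification_deadlines(detected_at: datetime, *, personal_data: bool,
                           nis2_system: bool, high_risk_ai: bool) -> dict[str, datetime]:
    deadlines: dict[str, datetime] = {}
    if nis2_system:
        deadlines["NIS2 early warning (CSIRT)"] = detected_at + timedelta(hours=24)
        deadlines["NIS2 full notification"] = detected_at + timedelta(hours=72)
    if personal_data:
        deadlines["GDPR breach notification (DPA)"] = detected_at + timedelta(hours=72)
    if high_risk_ai:
        # AI Act Art. 73 outer bound for most serious incidents
        deadlines["AI Act serious-incident report"] = detected_at + timedelta(days=15)
    return deadlines

t0 = datetime(2026, 3, 1, 3, 0)  # the 3am scenario
due = notification_deadlines(t0, personal_data=True, nis2_system=True, high_risk_ai=True)
for authority, deadline in sorted(due.items(), key=lambda kv: kv[1]):
    print(f"{authority}: due {deadline:%Y-%m-%d %H:%M}")
```

Sorted output makes the operational reality visible at a glance: the NIS2 early warning always leads, so it anchors the cadence of the whole response.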
Step 4: Consolidate vendor management
One vendor questionnaire. One annual review. One risk-tiering scheme. Design the questionnaire around the ISO 27001 Annex A structure, with targeted supplementary questions for vendors touching personal data (GDPR-specific Art. 28 content) or AI systems (AI Act Art. 25 obligations flow-through). A single consolidated vendor assessment saves you time and makes your vendors happier — which improves response rates and data quality.
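The risk-tiering scheme and the supplementary-question routing can both be expressed as a few explicit rules, which keeps them auditable. A minimal sketch; the tier names, rules, and question-set identifiers are our own conventions, not drawn from any regulation:

```python
# Illustrative vendor risk-tiering and questionnaire routing. Tier rules
# and question-set names are conventions of this sketch, not regulatory.

def vendor_tier(*, handles_personal_data: bool, critical_to_operations: bool,
                provides_ai_features: bool) -> str:
    flags = sum([handles_personal_data, critical_to_operations, provides_ai_features])
    if critical_to_operations or flags >= 2:
        return "tier-1"   # full questionnaire + annual deep review
    if flags == 1:
        return "tier-2"   # full questionnaire + desk review
    return "tier-3"       # baseline questionnaire only

def question_sets(*, handles_personal_data: bool, provides_ai_features: bool) -> list[str]:
    sets = ["iso27001-annex-a-core"]          # base security set, always attached
    if handles_personal_data:
        sets.append("gdpr-art28-processor")   # processor-contract evidence
    if provides_ai_features:
        sets.append("ai-act-value-chain")     # Art. 25 flow-through evidence
    return sets
```

Because the supplementary sets attach to the same base questionnaire, a vendor touching both personal data and AI answers one document with two appendices, not three documents.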
Step 5: Build one governance dashboard
A single board-level dashboard with three panels: privacy posture, cyber posture, AI governance posture. Each panel shows the same metric types: risk register status (count and severity of open items), control implementation progress, incident count and status, training completion rate. One view of compliance. Review quarterly with the full board, with formal minutes. This also delivers on NIS2 Article 20 (management body accountability) in a single artefact rather than three separate reports.
Step 6: Run one training programme
An integrated 45-minute annual course covering: privacy basics (GDPR), cybersecurity hygiene (NIS2), AI literacy (AI Act). Role-specific modules for technical staff, sales, HR, and board members. A single tracked completion record. Use short monthly refresher nudges (5-minute micro-courses on specific topics) instead of annual blasts — retention is materially higher.
Step 7: Test with a unified audit
Once a year, run a single integrated audit that tests your programme against all three regimes. External assessors who understand all three — this is important; GDPR-only auditors miss NIS2 gaps, and NIS2-only auditors miss AI Act exposure. The output is one report with three regime-specific appendices. Findings feed into the unified risk register and the same remediation backlog. Budget annually for this audit: it is not optional, and the savings from running one audit instead of three cover the specialist premium.
Common integration mistakes to avoid
From the dozens of integrations we've led in 2024-2025, here are the recurring mistakes to avoid.
Mistake 1: Merging owners
The DPO, CISO, and AI lead must remain distinct roles with distinct accountabilities. GDPR specifically requires the DPO to be independent and to report to the highest management level. Collapsing these into a single "compliance officer" role undermines the regulatory architecture and attracts scrutiny. Integrate the programmes, not the people.
Mistake 2: Flattening documentation
A single monolithic "compliance policy" document is unauditable. Keep your artefacts modular — one risk register, one control catalogue, but regime-specific policies (privacy policy, information security policy, AI governance policy) that cross-reference each other. Auditors want to see regime-specific artefacts with clear mapping — not a 400-page integrated document that nobody reads.
Mistake 3: Under-investing in the AI Act early
Because the AI Act's full obligations are staggered through 2027, many organisations are deprioritising it. This is a mistake for two reasons. First, the "prohibited practices" and "high-risk classification" provisions are already effective. Second, the effort to map and govern AI systems retroactively across a large estate is much harder than building the framework while AI deployments are still early. Start the AI Act workstream now, even if your high-risk systems are few.
Frequently Asked Questions
We're a small SaaS (20 employees). Do we really need to worry about the AI Act?
Probably yes, but in a limited way. The AI Act's strictest obligations attach to "high-risk AI systems" (Annex III) and to providers of foundation models — most SME SaaS companies are not in either category. However, you are likely a "deployer" of AI systems (using third-party AI features in your product or operations), and deployers have obligations under Article 26: ensuring human oversight, monitoring performance, logging use, and informing affected parties. A light-touch AI governance policy covering these deployer obligations is sufficient for most small SaaS — it does not require a full-scale AI Act programme. Check whether any of your AI features fall into Annex III (hiring, credit, education, law enforcement, essential private services) — if not, your obligations are limited.
How much does this cost?
An integrated programme costs 30-50% less to operate than three separate programmes, based on cost benchmarks we collected from 12 EU SMEs in 2025. The typical three-programme approach runs €80K-200K annually for a mid-sized SME (50-250 employees) when you sum headcount, tooling, external audits, and legal advice. An integrated programme at the same scale runs €50K-130K. Most of the savings come from (a) one audit instead of three, (b) unified tooling rather than three separate GRC platforms, (c) one training curriculum instead of three.
What if our NIS2 authority disagrees with our AI Act authority?
Rare today but increasingly common as enforcement ramps up. Document the reasoning behind every decision and keep a clear paper trail. If a national authority's guidance conflicts with another authority's interpretation, follow the stricter one for your operations and flag the conflict in writing to both authorities. Consistent documentation of good-faith decisions is your best defence in disputes.
Where should the integrated compliance council report?
Directly to the CEO with a standing agenda slot at the board. Not to the CFO, not to the CIO. NIS2 and the AI Act both explicitly require management-body accountability — burying compliance in a functional reporting line undermines both. The compliance council is a peer to the executive committee, not a subordinate function. This also signals to the organisation that compliance is not an IT or legal backwater — it is a strategic capability.
What's the single biggest quick win?
Consolidate your vendor questionnaire tomorrow. You almost certainly have three separate vendor assessment processes (privacy, security, AI). Merging them into one reduces internal work, improves vendor response quality, and creates a single source of truth about your supply chain risk. It's a two-week project that saves ongoing months of effort and unblocks integrated risk reporting. Start here.
How do we sell this integration internally?
Lead with the numbers. Calculate the annual cost of your current three-programme setup (headcount allocation, tooling, external fees). Compare to integrated-programme benchmarks (30-50% reduction). Present this to finance first — they become your ally. Once finance is bought in, the case to the CEO is a cost-reduction story with a side benefit of better coverage. Avoid leading with "regulatory alignment" — executives have heard that pitch before and are numb to it.
What tools support this integration?
Most traditional GRC platforms (OneTrust, Vanta, Drata) are adding cross-regime mapping in 2025-2026, but the quality varies. Before buying, ask for a demo that specifically shows: one control catalogue mapped to all three regimes, one risk register with three analysis lenses, and one dashboard showing all three postures. If the vendor has to "configure" this, the integration is not native. Pure open-source stacks (e.g., ISO 27001 documentation templates, Wazuh for monitoring, OpenGRC) work well for smaller organisations and cost nothing beyond time.
The bottom line
GDPR, NIS2, and the AI Act are not three separate problems. They are three expressions of the same regulatory instinct: demonstrate that you manage risk, protect data and systems, notify when things go wrong, hold the management body accountable, and train your people. An organisation that runs three parallel programmes pays three times for the same outcome while leaving intersection risks uncovered.
An integrated programme does not mean merging everything — the DPO, CISO, and AI lead roles stay distinct, and regime-specific artefacts stay intact. What integrates is the operating system underneath: one risk register, one control catalogue, one incident runbook, one vendor programme, one training curriculum, one governance council, one audit. The three regime-specific views are windows onto the same foundation, not separate buildings.
Organisations that integrate successfully in 2026 will pay less, catch more incidents, and be more audit-defensible than those running three programmes in parallel. The economics alone make the case. The strategic case — not losing critical hours during a multi-regime incident — closes it.
Viktoria Compliance's assessment is built on the integrated model. Our adaptive questionnaire maps your organisation against GDPR, NIS2, and the AI Act in a single pass, identifies overlaps and gaps, and produces a single prioritised remediation roadmap. If you are still running three separate programmes, let us show you what the consolidated view looks like — even the first conversation typically surfaces two or three cross-regime gaps that nobody was owning.