Best Practices · 20 min read · May 6, 2026

Building Your AI Inventory Before 2 August 2026: A Practical Guide for EU SMEs

By María Fernández Romero

TL;DR

Most substantive AI Act obligations apply from 2 August 2026 — Annex III high-risk obligations under Articles 8 to 22, deployer duties under Article 26, Article 50 transparency requirements, and EU database registration under Article 49. None of those obligations can be discharged without a complete, current, owner-assigned inventory of every AI system in scope. Article 5 prohibitions have been enforceable since 2 February 2025; GPAI obligations have applied since 2 August 2025 (Art. 113). An inventory built today must therefore cover prohibited-practice screening, GPAI-deployer flow-down records, and the regular tier work. The good news for resource-constrained SMEs: you almost certainly already maintain analogous registers for GDPR Article 30 records of processing, NIS2 asset inventories, and ISO 27001 Annex A.5.9. The AI inventory is best built as an extension of those, not as a parallel artefact. This guide gives you the inventory schema, the reuse map, three worked SME examples, the gaps that catch organisations out, and a 60-day plan to ship the register before the deadline lands.

EU AI Act compliance milestones (Art. 113):
  • 2 Feb 2025 — Article 5 prohibitions enforceable
  • 2 Aug 2025 — GPAI obligations + governance bodies
  • 2 Aug 2026 — Annex III high-risk + most obligations
  • 2 Aug 2027 — Annex I legacy product-safety strand
As of publication (6 May 2026), roughly 88 days remained. Source: Regulation (EU) 2024/1689, Article 113 — staggered application dates.

Why an AI inventory matters by 2 August 2026

The AI Act does not impose a single, named "inventory" obligation. It imposes more than a dozen obligations that a compliance officer can only discharge if they know which AI systems sit inside their organisation, what those systems do, who owns them, and which tier each one occupies. Article 9 risk management presupposes a list of systems against which to run the analysis. Article 11 technical documentation presupposes that the in-scope systems have been identified. Article 26 deployer duties — human oversight, monitoring, logging, informing affected workers — apply per system, and you cannot apply them per system without a per-system register. Article 49 EU database registration is itself a system-level filing. Without the inventory, none of the rest can be ordered, prioritised, or evidenced.

The fines are designed to make the gap expensive. Article 99 sets administrative penalties at €35M or 7% of global annual turnover for prohibited-practice infringements (Art. 99(3)), €15M or 3% for non-compliance with high-risk obligations (Art. 99(4)), and €7.5M or 1% for supplying incorrect, incomplete, or misleading information to notified bodies and competent authorities (Art. 99(5)). SMEs benefit from a partial mitigation under Article 99(6) — fines may be set at the lower of the absolute amount or the percentage rather than the higher — but the mitigation does not remove the ceiling, and it does not apply at all to the most damaging category of error: discovering during an enforcement action that you do not know what AI you operate. An incomplete inventory is the precondition for the third type of fine, because every answer you give to the authority is necessarily an estimate.

AI Act administrative fines (Article 99):
  • €35M or 7% — prohibited practices (Art. 99(3))
  • €15M or 3% — non-compliance with high-risk obligations (Art. 99(4))
  • €7.5M or 1% — incorrect, incomplete, or misleading information to authorities (Art. 99(5))
Percentages are of global annual turnover, and whichever amount is higher applies; for SMEs, the lower of the absolute amount or the percentage applies under Art. 99(6). Source: Regulation (EU) 2024/1689, Article 99 §§ 3, 4, 5.

Market access is the second pressure point. Article 79 lets the market surveillance authority require corrective action for non-conforming high-risk systems, including withdrawal of the product from the EU market. For a B2B SaaS vendor whose enterprise contracts include compliance warranties — and whose customers will start asking for AI Act sign-off in procurement from the second half of 2025 onwards — being unable to produce a system-level inventory on request is a commercial liability long before it is a regulatory one. The customers who ask first will be the German, French, and Dutch enterprises that already run mature GDPR vendor due-diligence processes. They will not accept "we are still mapping" as an answer in late 2026.

What "AI system" means under Article 3(1)

Article 3(1) defines an AI system as a machine-based system designed to operate with varying levels of autonomy that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. The definition is aligned with the OECD definition revised in 2023 and is broad by design. Any organisation that has read it carefully has the same first reaction: this is wider than we were assuming.

Three implications follow for the scope of the inventory. First, "machine learning" is not a precondition. A rule-based expert system that infers outputs from inputs against a knowledge base falls within the definition, even though most engineers would not call it AI. Recital 12 carves out some classical statistical and optimisation techniques where there is no inferential element, but the carve-out is narrow and contested at the edges. When in doubt, include the system in the inventory and document the reason it does not require a tier classification.

Second, "varying levels of autonomy" means the boundary between AI and conventional software does not run along the line of "human in the loop". A system with mandatory human approval for every output is still an AI system if it infers those outputs. Article 14 human oversight is then a design feature of an AI system, not an exit from the AI Act. CV-screening tools that surface a ranked shortlist for a recruiter to choose from are AI systems even when the recruiter formally makes the hiring decision.

Third, "outputs that can influence physical or virtual environments" pulls in a long list of operational software that organisations rarely tag as AI: dynamic pricing engines, fraud-scoring models, customer-churn predictors, content moderation classifiers, recommendation systems, automated email-routing, demand forecasting, predictive maintenance models. Most of these will turn out to be Minimal-risk under the tier analysis, but they belong in the inventory anyway. The point of the register is to demonstrate that a classification was performed, not to list only the systems that turned out to be high-risk.

Mapping the inventory to AI Act tiers

Each entry in the inventory carries a tier label, and the label drives the rest of the file. The four risk tiers — Unacceptable, High-risk, Limited-risk, Minimal-risk — plus the GPAI overlay sit on top of the same register, and a single product can produce multiple inventory entries when it contains multiple distinct AI use cases. Treat the entries as units of compliance work, not as a vendor catalogue.

AI Act risk-tier classification flow:
  1. Does the system fall under an Article 5 prohibition? YES → PROHIBITED; discontinue. NO → continue.
  2. Is it a safety component of an Annex I product (Art. 6(1))? YES → HIGH-RISK, obligations from 2 Aug 2027. NO → continue.
  3. Does it fall within Annex III §§ 1–8? NO → LIMITED-RISK (Art. 50) where transparency duties attach, otherwise MINIMAL (Art. 4 literacy only). YES → continue.
  4. Are the Art. 6(3) derogation conditions met (no profiling)? YES → NOT HIGH-RISK; document the rationale. NO → HIGH-RISK, obligations from 2 Aug 2026.
Note that the Annex III §5(b) fraud-detection carve-out is a text-level exception that applies before the Art. 6(3) filter. Decision flow derived from Articles 5, 6(1)–(3), Annex III, and Annex I of Regulation (EU) 2024/1689.
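To make the flow concrete, here is the same decision tree as a small Python function. This is a minimal sketch of the screening logic only, not legal advice: the boolean inputs assume the hard classification questions have already been answered by a human, and all field names and labels are illustrative.

```python
from dataclasses import dataclass

@dataclass
class ScreeningAnswers:
    """Illustrative yes/no answers collected per system during screening."""
    article_5_prohibited: bool         # matches any Art. 5 prohibited practice
    annex_i_safety_component: bool     # Art. 6(1) safety component of an Annex I product
    annex_iii_listed: bool             # falls within Annex III §§ 1-8
    annex_iii_5b_fraud_carveout: bool  # Annex III §5(b) fraud-detection text-level exception
    art_6_3_derogation_met: bool       # narrow/preparatory task etc., and no profiling
    art_50_transparency: bool          # interacts with persons or generates synthetic media

def classify(a: ScreeningAnswers) -> str:
    """Walk the Art. 5 -> Art. 6(1) -> Annex III -> Art. 6(3) flow from the diagram."""
    if a.article_5_prohibited:
        return "PROHIBITED - discontinue and record the decision"
    if a.annex_i_safety_component:
        return "HIGH-RISK (Art. 6(1)) - obligations from 2 Aug 2027"
    if a.annex_iii_listed:
        # The §5(b) fraud carve-out is a text-level exclusion: it removes the
        # system from Annex III before the Art. 6(3) filter is even reached.
        if a.annex_iii_5b_fraud_carveout:
            return "NOT HIGH-RISK - Annex III §5(b) carve-out, document rationale"
        if a.art_6_3_derogation_met:
            return "NOT HIGH-RISK - Art. 6(3) derogation, document rationale"
        return "HIGH-RISK (Annex III) - obligations from 2 Aug 2026"
    if a.art_50_transparency:
        return "LIMITED-RISK - Art. 50 disclosure duties"
    return "MINIMAL-RISK - Art. 4 literacy only"
```

The branching itself is trivial; the value of writing it down is that every answer the function consumes must be recorded in the inventory entry, which is exactly the tier-classification rationale field described in the schema below.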

Unacceptable-risk systems under Article 5 — eight prohibited families covering manipulative techniques, exploitation of vulnerabilities, social scoring, profiling-only predictive policing, untargeted facial-image scraping, workplace and educational emotion recognition, biometric categorisation by sensitive attributes, and real-time remote biometric identification in public for law enforcement — should never appear in an active inventory entry. If an existing system or a planned procurement matches any prohibition, the inventory record exists only to document the discontinuation decision. The prohibitions have been enforceable since 2 February 2025, so this is no longer a future obligation.

High-risk entries — Annex III categories plus Article 6(1) safety components of products covered by the Union harmonisation legislation listed in Annex I (medical devices, machinery, toys, lifts, and similar) — carry the heaviest field set. The obligation set in Articles 9 through 49 (thirteen items in total) generates documentation requirements that a thin inventory record cannot support. For high-risk entries, the inventory is best treated as an index pointing to a separate technical file under Article 11 and Annex IV, with the inventory entry itself holding only the metadata an auditor needs to navigate to the file. The substantive obligations apply from 2 August 2026 for Annex III high-risk systems, and from 2 August 2027 for the Annex I product-safety strand under Art. 6(1).

A second filter operates within Annex III. Article 6(3) provides a derogation for Annex III systems that do not pose a significant risk of harm because they perform a narrow procedural task, improve the result of a previously completed human activity, detect decision-making patterns or deviations without replacing human assessment, or perform a preparatory task to an Annex III assessment. Profiling of natural persons disqualifies the derogation in every case. Each Annex III inventory entry needs a written Article 6(3) analysis: either a positive finding (with the specific limb relied on) and the reasoning, or a negative finding that confirms the system stays in the high-risk category.

Limited-risk entries — chatbots, deepfake tooling, AI-generated text or media, emotion-recognition systems where not prohibited — carry the Article 50 transparency obligations. The inventory needs to record the disclosure mechanism per system: the location of the chat banner, the watermark technique used for synthetic outputs, the editorial-review log for AI-generated text published on matters of public interest, the deployer-side notification used for deepfakes. Limited-risk obligations also apply from 2 August 2026, and unlike the high-risk path, no derogation route relieves the duty: every chatbot needs a disclosure, and the inventory needs to evidence it.

Minimal-risk entries make up the bulk of the register. There are no mandatory obligations under the AI Act for these systems, but the inventory still needs to record the negative finding — the documented reasoning that the system does not fall within Annex III, does not interact directly with natural persons, and does not generate synthetic media. The Article 4 AI literacy duty applies regardless of tier, so the inventory should capture which staff cohort operates each Minimal-risk system and confirm that they sit within the AI literacy programme.

GPAI is an overlay, not a tier. Almost every SME inventory entry built on top of a foundation model — GPT-4o, Claude, Gemini, Mistral Large, Llama, Command — is a deployer record under Article 53(1)(b), inheriting flow-down information from the upstream provider. The inventory needs a dedicated field for the upstream model identity and version, the provider, the published training-data summary reference, and the date the latest set of provider instructions for use was reviewed. There are two distinct rules to keep separate. Article 25 shifts high-risk provider obligations onto a downstream actor in three cases — placing a high-risk AI system on the market under their own name or trademark, substantially modifying a high-risk system already on the market, or modifying the intended purpose of a non-high-risk system (including a GPAI) so that it becomes high-risk under Article 6. Recital 109 addresses a different case: an organisation that fine-tunes or otherwise modifies a GPAI without making it a high-risk system. There, the obligations on the downstream actor are limited to the modification itself — typically supplementing the upstream technical documentation with information on the changes, including any new training-data sources. Most SMEs cross neither threshold, but the inventory is the only place where the line is documented for each system.

Required inventory fields

A defensible AI inventory has roughly fifteen fields per entry. Resist the temptation to add more — every additional field is a maintenance burden, and the sustainable register is the one that staff actually update when systems change. The list below is the minimum set that supports the obligations that apply from 2 August 2026; a minimal code sketch of the schema follows the list.

AI inventory schema — 15 fields, colour-coded by reuse source:
  1. System name + unique ID
  2. Named owner
  3. Vendor / in-house indicator
  4. AI Act tier
  5. Tier-classification rationale
  6. Upstream model (GPAI)
  7. Intended purpose
  8. GDPR lawful basis
  9. Training-data origin
  10. Output type
  11. Decision impact
  12. Human oversight design
  13. Conformity assessment status
  14. EU database registration ID
  15. Last review + next trigger
Reuse sources: GDPR Art. 30 ROPA · ISO 27001 A.5.9 · NIS2 asset register · AI Act-specific. Roughly half of the AI inventory fields can be drawn from registers you already maintain.
  • System name and unique identifier — a stable label used consistently across the technical file, the EU database registration, vendor contracts, and internal change-management tickets.
  • Owner — a single named individual within the organisation who is accountable for keeping the entry up to date and for triggering re-classification when the system materially changes. A team or department is not an owner.
  • Vendor or in-house indicator — whether the system is built internally, procured as SaaS, or embedded inside a larger product. This drives whether you are a provider, a deployer, or both under Articles 25 and 26.
  • AI Act tier — Unacceptable, High-risk (Article 6(1) or 6(2)/Annex III with the specific Annex III point), Limited-risk (with the specific Article 50 paragraph), Minimal-risk, plus a GPAI overlay flag where applicable.
  • Tier-classification rationale — a short prose justification, including any reliance on the Article 6(3) derogation. Vague justifications will not survive an audit; the standard is a reasoned classification, documented at the time it was made.
  • Upstream model — for GPAI-deployer entries, the model identity, version, provider, and the date the upstream provider's instructions for use under Article 53(1)(b) were last reviewed.
  • Intended purpose — the specific use case as described to deployers, in the wording of the instructions for use under Article 13. Drift between intended purpose and actual use is one of the most common audit findings.
  • GDPR lawful basis — for any system that processes personal data, the Article 6 GDPR basis (and Article 9 condition where special-category data is involved). Cross-reference the corresponding Article 30 GDPR record.
  • Training-data origin — provenance summary covering source, licensing terms, copyright opt-out compliance, and any data minimisation or de-identification applied. For GPAI-deployer entries, the field can point to the upstream provider's published training-data summary.
  • Output type — prediction, recommendation, classification, content generation, decision. The output type drives the Article 50 transparency obligations and the GDPR Article 22 analysis.
  • Decision impact — whether outputs influence consequential decisions about identifiable natural persons, and if so, the population affected (employees, candidates, customers, patients, beneficiaries). This is the field that signals when a separate DPIA under Article 35 GDPR or a fundamental-rights impact assessment under Article 27 of the AI Act is required.
  • Human oversight design — the Article 14 measures actually implemented: pre-deployment review, output validation, override capability, intervention thresholds, escalation paths. Generic "a human reviews the output" entries fail the audit standard.
  • Conformity assessment status — for high-risk entries, the route taken (internal control under Annex VI, third-party assessment under Annex VII for biometrics, integrated assessment for Annex I products), the date of the latest assessment, and the reference of the EU declaration of conformity under Article 47.
  • EU database registration — for high-risk entries that fall within Article 49, the registration ID and last-update date.
  • Last review date and next review trigger — the calendar date of the most recent classification review and the change events that will force a re-review (model swap, training-data change, intended-purpose extension, integration into a new business workflow).
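The fifteen fields above translate naturally into a typed record. Below is a minimal Python sketch of the schema, assuming spreadsheet-or-better tooling; the enum values, field names, and comments are illustrative choices, not wording mandated by the Act.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import Optional

class Tier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH_RISK = "high-risk"
    LIMITED_RISK = "limited-risk"
    MINIMAL_RISK = "minimal-risk"

@dataclass
class InventoryEntry:
    system_id: str                    # 1. stable unique identifier
    name: str
    owner: str                        # 2. a named individual, never a team
    sourcing: str                     # 3. "in-house" | "saas" | "embedded"
    tier: Tier                        # 4. AI Act tier
    tier_rationale: str               # 5. written classification reasoning
    gpai_overlay: bool                # 6. deployer of an upstream GPAI?
    upstream_model: Optional[str]     #    model identity + version, if so
    intended_purpose: str             # 7. wording of Art. 13 instructions for use
    gdpr_lawful_basis: Optional[str]  # 8. cross-reference the Art. 30 ROPA entry
    ropa_ref: Optional[str]
    training_data_origin: str         # 9. provenance, or upstream summary pointer
    output_type: str                  # 10. prediction | recommendation | ...
    decision_impact: str              # 11. affected population; DPIA/FRIA signal
    human_oversight: str              # 12. Art. 14 measures actually implemented
    conformity_status: Optional[str]  # 13. high-risk only: route + DoC reference
    eu_db_registration: Optional[str] # 14. Art. 49 registration ID, where it applies
    last_review: date                 # 15. plus change events forcing re-review
    review_triggers: list[str] = field(default_factory=list)
```

The optional fields mirror the tier logic: conformity status and EU database registration are populated only for High-risk entries, while the ROPA cross-reference is populated for any entry that processes personal data.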

Reusing what you already have

The cheapest AI inventory is the one extended from existing registers rather than built from scratch. Three pre-existing artefacts cover most of the data fields above and most of the governance discipline needed to keep them current.

GDPR Article 30 records of processing (ROPA) cover the personal-data spine of any AI system that processes data about identifiable natural persons. The ROPA already captures the lawful basis, the categories of data, the categories of data subjects, the recipients, the international transfers, and the retention schedule — every one of which the AI inventory needs to either replicate or reference. The pragmatic move is to add an "AI system reference" column to the existing ROPA and, on the AI-inventory side, to reference the corresponding ROPA entry rather than duplicate the personal-data fields. DPIAs under Article 35 GDPR and any AI Act Article 27 fundamental-rights impact assessments — applicable to deployers that are public bodies or private entities providing public services, and separately to every deployer (public or private) of an Annex III §5(b) creditworthiness system or §5(c) life- or health-insurance risk-assessment / pricing system — should both link out from the inventory entry rather than being copied into it.

NIS2 asset registers, where they exist, cover the operational and information-security spine. Member State NIS2 transposition laws — the transposition deadline was 17 October 2024 and the European Commission opened infringement proceedings in November 2024 against 23 Member States that had missed it; transposition has continued in waves through 2025 and 2026 — require essential and important entities to maintain a register of ICT assets and dependencies. For SMEs that fall within NIS2 scope, the register typically already includes the cloud platforms, SaaS dependencies, and critical software systems on which AI workloads run. Tagging AI systems within the NIS2 register and back-referencing to the AI inventory aligns the two regimes and avoids the situation where an organisation's NIS2 inventory shows a third-party SaaS without flagging that the SaaS embeds an AI use case in scope of the AI Act.

ISO/IEC 27001:2022 Annex A.5.9 (formerly A.8.1.1 in the 2013 version) requires an inventory of information and other associated assets, including software, services, and information itself. For ISO-certified SMEs, the asset inventory mandated by A.5.9 is the natural carrier for the AI register. The audit trail demanded by ISO 27001 — owner, classification, change record — already maps to the AI Act inventory fields above. Treating the AI inventory as a sub-set of the ISO 27001 asset inventory rather than as a new artefact saves duplicate maintenance, gives the AI register the benefit of the existing ISMS review cadence, and avoids the all-too-common state in which the ISMS asset register and the AI inventory drift into mutually inconsistent versions.

Beyond those three, two adjacent registers are worth touching: the procurement vendor register (which usually captures every external SaaS the organisation uses, and is often the fastest route to a complete shadow-AI sweep) and the data-protection record of consent or contractual basis (which already underpins the GDPR side of any customer-facing AI system). When the AI inventory is built as a federation of references across these existing registers rather than as a new spreadsheet, the maintenance overhead drops by roughly half and the cross-regime consistency that auditors look for emerges naturally.
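Built as a federation, the register's health check becomes a question of referential integrity: every pointer from an inventory entry into the ROPA, the ISO 27001 asset register, or the NIS2 register must still resolve. Below is a minimal consistency check, assuming each source register can export its current IDs; the field names are illustrative.

```python
def check_federation(entries, ropa_ids, iso_asset_ids, nis2_ids):
    """Flag inventory entries whose cross-register references no longer resolve.

    entries: iterable of dicts with optional 'ropa_ref', 'iso_ref', 'nis2_ref'.
    The *_ids arguments are the ID sets exported from each source register.
    """
    findings = []
    for e in entries:
        for field_name, valid_ids in (("ropa_ref", ropa_ids),
                                      ("iso_ref", iso_asset_ids),
                                      ("nis2_ref", nis2_ids)):
            ref = e.get(field_name)
            if ref is not None and ref not in valid_ids:
                findings.append((e["system_id"], field_name, ref))
    return findings  # each finding is a broken pointer to chase before audit
```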

Three SME inventory examples

Worked examples sharpen the discipline. Three short cases below show how the inventory schema lands in practice for an HR-tech vendor, a FinTech, and a MedTech SME, each typical of the early-stage European company most exposed to the 2 August 2026 deadline.

HR-tech: a 50-person SaaS vendor in Munich

The product offers an applicant-tracking system with two AI-driven features: a CV-summarisation feature built on top of GPT-4o, and an internal candidate-ranking feature trained on the customer's historical hiring data. The inventory contains two distinct entries, not one. Entry A — CV summarisation — is a Limited-risk system on its own (Article 50(1) chatbot-style transparency, because the recruiter sees a summary clearly labelled as AI-generated), with a GPAI-deployer overlay pointing to OpenAI as the upstream provider. Entry B — candidate ranking — is unambiguously High-risk under Annex III point 4(a) (employment, recruitment, candidate selection), with the vendor as the provider because the system is placed on the EU market under the vendor's own name. Entry B carries the full thirteen-item Article 9 to Article 49 obligation set; Entry A carries only the Article 50(1) disclosure plus the GPAI flow-down record. The shared owner across both entries is the Head of Product, with a designated AI Act lead in the legal team as a secondary owner. The inventory cross-references the ISO 27001 asset register (the company is certified) for both entries and the GDPR ROPA for the personal-data fields. Both entries flag the 2 August 2026 effective date as the trigger for completion of the technical file under Article 11 and Annex IV.

FinTech: a 30-person consumer-credit start-up in Vilnius

The product offers a consumer-credit application with two AI components: a credit-scoring model trained on internal data plus open-banking signals, and a fraud-detection model layered on top of transaction telemetry. The credit-scoring model lands in Annex III point 5(b) — creditworthiness assessment of natural persons — and is High-risk by default; the company is the provider, places the system on its own platform, and runs internal-control conformity assessment under Annex VI. The fraud-detection model takes the text-level carve-out written directly into Annex III §5(b), which excludes "AI systems used for the purpose of detecting financial fraud" from the creditworthiness category in the first place — that is a different exclusion from the Article 6(3) procedural-task derogation that filters Annex III systems generally, and the inventory entry needs to make clear which exit it is using. Because the carve-out is narrow (it covers fraud detection, not all transaction risk-scoring), the company's legal team keeps a defensive position by maintaining a partial high-risk control set on the fraud model, and triggers a re-classification any time the model's purpose extends beyond fraud detection. The borderline call is a customer-support chatbot built on Mistral Large; that becomes a Limited-risk Entry C with a GPAI-deployer overlay. The cross-reference to the GDPR ROPA is critical here because Article 22 GDPR (automated individual decision-making) applies to the credit-scoring entry, and the inventory needs to evidence the data subject's right to obtain human intervention, express their point of view, and contest the decision. Article 27 of the AI Act also bites here — every deployer of an Annex III §5(b) system must run a fundamental-rights impact assessment, regardless of whether the deployer is public or private. The DPO is the designated owner for both High-risk entries; the Head of Customer Operations owns the chatbot.

MedTech: a 20-person remote-monitoring start-up in Dublin

The product is a remote patient-monitoring application with a single AI use case: an algorithm that flags clinically significant changes in vital signs based on continuous wearable telemetry. The system is regulated as a medical device under Regulation (EU) 2017/745 (MDR) and carries CE marking under that regime. The AI inventory entry is High-risk under Article 6(1) — the AI is a safety component of a product covered by the Union harmonisation legislation listed in Annex I (medical devices) and requiring third-party conformity assessment. The substantive AI Act high-risk obligations apply to this entry from 2 August 2027, not 2 August 2026, because of the deferred Article 6(1) timetable in Article 113. The conformity assessment is integrated with the existing MDR procedure; the notified body operating under MDR also covers the AI-specific conformity work, with the AI-Act-specific deliverables (Article 11 technical documentation, Article 9 risk management, Article 13 transparency for healthcare professionals) layered onto the existing technical file. The inventory entry references the MDR technical file rather than duplicating it, the QMS owner is the same individual who already chairs MDR Article 10 quality-management responsibility, and the inventory cross-references the GDPR ROPA for the personal-health-data processing under Article 9(2)(h) GDPR. The deferred deadline is the trap here: a MedTech that mistakenly treats 2 August 2026 as its trigger ends up running parallel conformity work twelve months too early.

Gaps that catch SMEs out

Three categories of gap turn up repeatedly in SME inventory work, and each one quietly undermines the inventory unless it is hunted down deliberately.

Shadow AI is the largest. Marketing teams adopt Jasper, ChatGPT Team, or Copy.ai for content drafting. Sales teams plug Gong, Apollo, or Lavender into call recordings and email outreach. Engineering teams use GitHub Copilot, Cursor, or Codeium without a formal procurement decision. HR runs surveys through ChatGPT and feeds open-text responses through sentiment classifiers. Each of these is an AI use case in scope of Article 3(1), almost certainly under Limited or Minimal risk on its own, but invisible to the central inventory unless someone asks. The fix is procedural: a one-page declaration form circulated to every department head asking them to list every AI tool used by their team in the last six months, repeated quarterly, with the procurement card statement and the SaaS expense ledger used to cross-check the answers. The first round of declarations from a typical 50-person SaaS company surfaces between twelve and twenty AI tools the central compliance function did not know about.
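The expense-ledger cross-check is, mechanically, a set difference between what departments declared and what the organisation actually pays for. A minimal sketch, assuming both sources have been reduced to normalised vendor names and that a reference list of known AI vendors is maintained; the matching heuristic is deliberately crude and the names are illustrative.

```python
def shadow_ai_gap(declared_tools: set[str], expense_vendors: set[str],
                  known_ai_vendors: set[str]) -> set[str]:
    """Vendors that appear in the expense ledger and are known AI tools,
    but were not declared by any department head."""
    def norm(name: str) -> str:
        return name.strip().lower()
    declared = {norm(t) for t in declared_tools}
    ai_spend = {norm(v) for v in expense_vendors} & {norm(v) for v in known_ai_vendors}
    return ai_spend - declared

# Example: three AI tools on the card statement, only one declared.
gaps = shadow_ai_gap(
    declared_tools={"GitHub Copilot"},
    expense_vendors={"github copilot", "jasper", "gong"},
    known_ai_vendors={"jasper", "gong", "github copilot", "cursor"},
)
print(sorted(gaps))  # ['gong', 'jasper'] -> draft inventory entries to open
```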

Embedded AI inside SaaS is the second gap. Increasingly, the standard B2B SaaS product the organisation has used for years has quietly added AI features in 2024 and 2025 — Salesforce Einstein, HubSpot AI, Notion AI, Slack AI, Zoom AI Companion, Microsoft 365 Copilot, Google Workspace Gemini features, Atlassian Intelligence. The host organisation is a deployer of each of those AI sub-systems, even though the procurement was originally for the underlying productivity tool. The inventory must capture each AI sub-feature as a distinct entry, not collapse them into the parent SaaS line. Each vendor is, in turn, a GPAI deployer above an upstream model provider, so the flow-down chain runs three layers deep: foundation model (OpenAI/Anthropic/Google) → SaaS vendor (HubSpot/Microsoft/Slack) → host organisation. The inventory entry needs to name all three layers if the deployer obligations are to be enforceable.

Model swaps are the third and most slippery gap. A SaaS vendor changes its underlying GPAI model — say, from GPT-4 to GPT-4o, or from Claude 3.5 to Claude 4 — without changing the product surface, and the AI inventory entry on the deployer side becomes silently incorrect. The fix is to add a "model version locked" field to the inventory and to require a re-classification trigger when the upstream model changes. For mature vendors, the change is usually announced through a release-notes channel that the procurement team should subscribe to; for less mature vendors, the inventory entry should explicitly note that the upstream model identity is volatile, and the review cadence for that entry should be tightened (quarterly rather than annual).
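The "model version locked" field is what makes a silent swap detectable. Below is a minimal sketch of the trigger check, assuming the vendor's currently reported model identity can be obtained from release notes or an API; the task format is an illustrative assumption.

```python
from datetime import date

def check_model_lock(entry: dict, observed_model: str) -> dict | None:
    """Compare the locked upstream model against what the vendor now reports.

    Returns a re-classification task dict when the lock is broken, else None.
    """
    locked = entry.get("model_version_locked")
    if locked is not None and observed_model != locked:
        return {
            "system_id": entry["system_id"],
            "action": "re-classify",
            "reason": f"upstream model changed: {locked} -> {observed_model}",
            "raised": date.today().isoformat(),
            "owner": entry["owner"],  # the named individual from the register
        }
    return None
```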

Common owner-assignment problems

The most frequent reason an AI inventory drifts into uselessness is that ownership is assigned to a team or department rather than a named individual. "The product team owns Entry 7" is a non-answer when the system materially changes and no one updates the entry, because no one personally owed the update. The discipline that holds every other regulatory register together — the named DPO behind the GDPR ROPA, the named CISO behind the ISO 27001 asset register, the named NIS2 contact in the Member State notification — is the same discipline the AI inventory needs.

Three patterns work in practice. The first is a single AI Act lead, typically housed in the legal or compliance function, who owns the central register and is the named owner for the highest-risk entries (the High-risk Annex III systems where the technical file under Article 11 needs cross-functional input). The second is a federated model in which the AI Act lead owns the register itself and the methodology, while individual product owners or feature owners are named on each entry — appropriate for SaaS organisations where new AI features ship every quarter and central ownership of every entry would create a bottleneck. The third is a co-ownership model in which each High-risk entry has a designated business owner (the product manager or the head of the relevant function) and a designated control owner (the DPO or the AI Act lead) — the business owner is accountable for keeping the entry current as the system evolves, the control owner is accountable for the obligation set being met. Each model is defensible; what is not defensible is leaving entries with team-level or unfilled ownership.

A second owner-assignment problem is the absent co-owner on cross-functional entries. The HR-tech worked example above is the canonical case: the product owner of the candidate-ranking feature is naturally placed to keep the technical file current, but the obligation to inform affected workers under Article 26(7) sits with the customer's own deployer-side HR function, not the vendor's. If the vendor's inventory entry does not name a counterparty owner inside each customer organisation, the downstream Article 26 deployer obligations cannot be evidenced, and the vendor's own Article 13 instructions for use cannot be properly addressed. Customer onboarding documentation and the Master Services Agreement template both need a clause that requires the customer to name an AI Act deployer owner before activation.

Article 26(5) bears a final mention because it is one of the few deployer obligations that catches even disciplined inventories off guard. Deployers of high-risk AI systems must monitor the operation of the system on the basis of the instructions for use and, where relevant, inform providers in accordance with Article 72. Where deployers have reason to consider that use in accordance with the instructions may result in the system presenting a risk within the meaning of Article 79(1), they must, without undue delay, inform the provider or distributor and the relevant market surveillance authority — and suspend use of the system. Practically, this means every high-risk inventory entry needs a named monitoring owner on the deployer side, a documented escalation path that ends at the relevant national competent authority, and a written suspension protocol that can be activated without further internal sign-off. Designating those late, after the system is already in production, is one of the most common compliance findings in early enforcement reviews.

The 60-day inventory build plan

Sixty days is enough to ship a defensible first version of the inventory if the work is sequenced cleanly, the management body commits to the cadence at the start, and existing registers are reused rather than duplicated. The plan below assumes a typical 30-to-100-person SME with no previous AI governance function and one cross-functional AI Act lead designated for the work.

Days 1 to 10 — scope and reuse mapping. The AI Act lead gathers the existing GDPR ROPA, the ISO 27001 asset inventory (where applicable), the NIS2 register (where in scope), the procurement vendor list, and the SaaS expense ledger. Every entry across those registers is reviewed for AI relevance, and the ones that touch AI are flagged for the inventory. The lead drafts the inventory schema — the fifteen-field set above, with any organisation-specific extensions — and circulates it for sign-off by the DPO, the CISO (where one exists), and a sponsor on the management body. Tooling is decided in the same window: spreadsheet, GRC platform, or extension of the ROPA tooling. For SMEs without a GRC platform, a structured spreadsheet held under version control with mandatory review fields is sufficient for the first version.

Days 11 to 30 — discovery and population. The shadow-AI sweep runs in this window: a one-page declaration form to every department head, a SaaS-expense reconciliation, and structured interviews with each function lead (product, engineering, marketing, sales, customer support, HR, finance, operations). Each declaration becomes a draft inventory entry. The AI Act lead populates the metadata fields from the existing ROPA and asset registers wherever the data already exists, and only opens a fresh data-collection thread for fields that are genuinely AI-specific (intended purpose, output type, upstream model, training-data origin, human oversight design). By the end of day 30, every AI use case the organisation knows about should have a draft inventory entry, even if the tier classification is provisional and the rationale is still skeletal.

Days 31 to 50 — classification and gap analysis. The AI Act lead walks every draft entry through the tier decision tree (nine questions covering Article 5, Annex III, Article 6(1), Article 6(3), provider/deployer status, GPAI overlay, Article 50(1) interaction, Article 50(2)/(4) synthetic content, and Article 51 systemic risk). Each entry is assigned a tier with a written rationale. For High-risk and Limited-risk entries, the lead runs a gap analysis against the corresponding obligation set: for High-risk, the thirteen-item Article 9 to Article 49 baseline; for Limited-risk, the four Article 50 disclosure obligations. Each gap becomes a remediation task with a named owner and a target date, sequenced against 2 August 2026 (or 2 August 2027 for Annex I Article 6(1) entries). The DPIA and Article 27 fundamental-rights impact assessment scope is decided in the same window: every High-risk entry that processes personal data triggers a DPIA review, and any High-risk entry deployed by a public body, by a private entity providing public services, or by any deployer of an Annex III §5(b) creditworthiness or §5(c) life- or health-insurance pricing system triggers an Article 27 FRIA.
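Mechanically, the gap analysis is a cross-product of inventory entries and the obligation checklist for their tier, with every unevidenced obligation becoming a remediation task. A minimal sketch, with deliberately abbreviated placeholder checklists rather than the full Article 9 to Article 49 set; field names are illustrative.

```python
from datetime import date

# Abbreviated per-tier obligation checklists (placeholders, not exhaustive).
OBLIGATIONS = {
    "high-risk": ["Art. 9 risk management", "Art. 11 technical documentation",
                  "Art. 14 human oversight", "Art. 49 EU database registration"],
    "limited-risk": ["Art. 50 disclosure mechanism recorded"],
    "minimal-risk": [],  # negative finding + Art. 4 literacy mapping only
}

def gap_analysis(entries: list[dict], deadline: str = "2026-08-02") -> list[dict]:
    """One remediation task per missing obligation, with owner and target date."""
    tasks = []
    for e in entries:
        required = OBLIGATIONS.get(e["tier"], [])
        evidenced = set(e.get("evidence", []))  # obligations with evidence on file
        target = e.get("deadline", deadline)    # Annex I entries carry 2027-08-02
        for obligation in required:
            if obligation not in evidenced:
                tasks.append({"system_id": e["system_id"], "gap": obligation,
                              "owner": e["owner"], "target": target})
    return sorted(tasks, key=lambda t: t["target"])
```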

Days 51 to 60 — sign-off and operating cadence. The inventory and the gap-analysis backlog are presented to the management body for sign-off. The AI Act item is added as a standing quarterly board agenda point. The owner of each High-risk entry is formally designated in writing, and the deployer-side Article 26(5) monitoring owners are confirmed alongside their suspension authority. Customer-facing contracts and onboarding documentation are reviewed for the Article 13 instructions-for-use and Article 26 deployer-flow-down clauses. The Article 4 AI literacy programme is scoped, with the inventory used to identify the staff cohorts that operate or use each system. A first internal audit of the inventory is scheduled for the end of Q1 2027 — six months after the obligations bite, long enough for live operation to surface the first round of issues, short enough that gaps can still be remediated before the first external audit cycle. The second-version review of the inventory itself is scheduled for the end of Q2 2027.

Conclusion and next steps

The 2 August 2026 deadline is fixed and asymmetric: the cost of being late on the inventory is far higher than the cost of being early, because almost every other AI Act obligation depends on it. The work is also unusually well-aligned with what European SMEs already do well — GDPR Article 30 records of processing, NIS2 asset registers, ISO 27001 asset inventories — so the marginal effort is meaningfully lower than the headline regulation suggests. The trap is treating the AI inventory as a parallel artefact rather than as an extension of the registers that already exist. Build it as a federation, name a single individual as the owner of each entry, and treat shadow AI, embedded SaaS AI, and model swaps as the three categories of gap that will quietly undermine the register if they are not hunted explicitly.

Practically, the next sixty days look like this for an SME starting from zero: appoint an AI Act lead in week one, lock the schema and the reuse plan in week two, run the shadow-AI sweep in weeks three and four, populate the first complete draft of the register in weeks four and five, classify and gap-analyse every entry in weeks six and seven, and present a signed-off register and remediation roadmap to the management body in week nine. By week ten, the inventory is no longer a project — it is a standing operating asset with a quarterly review cadence and a designated owner for every entry. That is the posture the 2 August 2026 deadline rewards, and it is the posture that makes the rest of the AI Act compliance backlog tractable.
