Best Practices · 17 min read · May 1, 2026

AI Act Tier Classification for EU SMEs: A Practical Guide for 2026

By Anna Bergström

TL;DR

The AI Act sorts every AI system into one of four risk tiers — Unacceptable, High, Limited, and Minimal — and then layers a parallel regime for General-Purpose AI (GPAI) models on top. Most of the regulatory weight sits in the High-risk tier, which carries roughly thirteen substantive obligations under Articles 9 to 73, but Limited-risk transparency duties (Article 50) catch every chatbot and deepfake on the market, and Article 5 prohibitions have been enforceable since 2 February 2025. Misclassification is far more expensive than classification: prohibited-practice fines reach €35M or 7% of global annual turnover, high-risk non-conformity fines reach €15M or 3%, and supplying incorrect information to authorities still costs up to €7.5M or 1%. For SMEs, the practical task is to inventory every AI use case, walk each through a short decision tree, and assign obligations before the 2 August 2026 compliance wave lands. This guide gives you the tree, the traps, and the tier-by-tier checklist needed to do that work in a single quarter.

Why getting tier classification wrong costs more than getting it right

Article 99 of the AI Act sets fine ceilings that are deliberately steeper than GDPR's. Engaging in a prohibited practice can trigger an administrative fine of up to €35,000,000 or 7% of total worldwide annual turnover, whichever is higher. Non-compliance with high-risk obligations or with the Article 50 transparency obligations sits at €15M or 3%. Supplying incorrect, incomplete, or misleading information to notified bodies and competent authorities is capped at €7.5M or 1%. SMEs and start-ups benefit from a partial mitigation under Article 99(6) — fines may be set at the lower of the two values rather than the higher — but that mitigation only blunts the ceiling; it does not remove it.

Numbers aside, the deeper cost shows up in your operating model. A misclassified system tends to be a system without a risk management file, without a data governance regime, without human oversight controls, and without a post-market monitoring plan. When a competent authority asks for any of those artefacts under Article 21, the gap is not a paperwork issue: it is months of remediation work performed under enforcement pressure. Retrofitting a high-risk control framework onto a deployed system typically takes three to six months and costs two to four times what the upfront classification work would have cost.

Reputational damage compounds the financial penalty. The market surveillance authority can require corrective action under Article 79, including withdrawal of the system from the EU market. For a B2B vendor whose contracts include compliance warranties, a single withdrawal order can void revenue across an entire customer base. Get the tier right at the design stage, and these scenarios become unreachable.

The four official risk tiers

The AI Act's tiering is intentionally asymmetric. The vast majority of AI systems on the EU market sit in the Minimal tier and carry no mandatory obligations. A small but commercially significant set sits in the Limited tier and triggers transparency duties. The High-risk tier is narrow but obligation-heavy. Unacceptable is a hard floor — these systems may not be placed on the EU market at all. The real work is knowing where the boundaries between the tiers fall.

Unacceptable risk (Article 5)

Article 5 prohibitions have been enforceable since 2 February 2025. They are not a wish list: a system that falls into any of these categories cannot be marketed, deployed, or used in the EU. The list as enacted covers eight families of practice.

  • Article 5(1)(a): subliminal, manipulative, or deceptive techniques that materially distort behaviour and cause significant harm.
  • Article 5(1)(b): exploitation of vulnerabilities of specific groups (age, disability, socio-economic situation) to materially distort behaviour and cause significant harm.
  • Article 5(1)(c): social scoring — evaluating or classifying people over time based on social behaviour or personal characteristics — leading to detrimental treatment in unrelated contexts or treatment that is unjustified or disproportionate.
  • Article 5(1)(d): predictive policing systems based solely on profiling or personality assessment to predict criminal offences.
  • Article 5(1)(e): untargeted scraping of facial images from the internet or CCTV to create or expand facial-recognition databases.
  • Article 5(1)(f): emotion recognition in workplaces and educational institutions, except for safety or medical reasons.
  • Article 5(1)(g): biometric categorisation that infers race, political opinions, trade-union membership, religious or philosophical beliefs, sex life, or sexual orientation.
  • Article 5(1)(h): real-time remote biometric identification in publicly accessible spaces for law-enforcement purposes, with a small set of exceptions subject to prior judicial authorisation.

SMEs encounter the prohibitions less often than they fear, but the touch points are real. Workplace emotion-detection features bundled into customer-experience platforms, sentiment dashboards in HR analytics, and biometric clock-in systems that infer demographic attributes are all live exposures. If any current or planned use case sits inside Article 5, the only compliant response is to redesign or exit the use case.

High risk (Article 6 and Annex III)

Two routes lead to the High-risk tier. The first is Article 6(1): a system is High-risk when it is itself a product, or a safety component of a product, that falls under the EU harmonisation legislation listed in Annex I and that requires third-party conformity assessment. This route picks up medical devices under Regulation (EU) 2017/745, machinery under Regulation (EU) 2023/1230, in-vitro diagnostics, lifts, toys, and several others. SMEs that supply software components into regulated hardware almost always land here.

The second route, Article 6(2), is the one most SMEs encounter: any system listed in Annex III is High-risk by default. Annex III defines eight categories.

  • Biometrics: remote biometric identification, biometric categorisation by sensitive attributes (where not prohibited), and emotion-recognition systems outside the workplace and educational contexts.
  • Critical infrastructure: safety components in the management and operation of critical digital infrastructure, road traffic, and the supply of water, gas, heating, and electricity.
  • Education and vocational training: admission decisions, evaluation of learning outcomes, allocation between institutions, and detection of prohibited behaviour during testing.
  • Employment, workers management, and access to self-employment: recruitment and selection (CV screening, candidate ranking), decisions affecting terms of employment, promotion, termination, task allocation based on individual behaviour or personal traits, and performance and behaviour monitoring.
  • Access to essential private and public services: eligibility for public benefits, creditworthiness scoring (with a narrow exception for fraud detection), pricing of life and health insurance, and dispatching of emergency first-response services.
  • Law enforcement: profiling, polygraphs, evaluation of evidence reliability, and prediction of recidivism for natural persons.
  • Migration, asylum, and border control: polygraphs, risk assessments of natural persons, examination of asylum and visa applications, and identification within border-control contexts.
  • Administration of justice and democratic processes: assistance to judicial authorities in researching and interpreting facts and the law, and influence on the outcome of elections or voting behaviour (excluding outputs that natural persons are not directly exposed to).

Article 6(3) introduces a derogation: a system listed in Annex III is not considered High-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights, including by not materially influencing the outcome of decision-making. Four conditions in Article 6(3) describe when this derogation applies — narrow procedural tasks, improving the result of a previously completed human activity, detecting decision-making patterns without replacing human assessment, or preparatory tasks. Profiling of natural persons always counts as significant influence and never qualifies for the derogation. SMEs that want to rely on this derogation must document their reasoning and register the assessment in the EU database under Article 49.
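For SMEs that plan to rely on the derogation, it helps to capture that reasoning in a structured record from day one rather than reconstructing it later under audit pressure. A minimal sketch of such an internal record follows — the field names and the Article63Assessment structure are a suggested template of ours, not the Article 49(2) registration format or anything prescribed by the Regulation.

```python
from dataclasses import dataclass

@dataclass
class Article63Assessment:
    """Illustrative internal record of an Article 6(3) derogation assessment."""
    system_name: str
    annex_iii_category: str        # e.g. "point 4 - employment"
    involves_profiling: bool       # profiling of natural persons defeats the derogation
    condition_relied_on: str       # narrow procedural task / improves a prior human
                                   # activity / pattern detection only / preparatory task
    rationale: str                 # why the system does not materially influence outcomes
    assessed_by: str
    assessment_date: str           # ISO date, e.g. "2026-03-15"

    def derogation_available(self) -> bool:
        # Profiling is the only hard exclusion encoded here; the substantive
        # assessment lives in the rationale and still needs legal review.
        return not self.involves_profiling
```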

Most High-risk obligations apply from 2 August 2026. The narrower product-safety strand — high-risk via Article 6(1) and Annex I — takes effect on 2 August 2027.

A practical note on conformity assessment under Article 43: most Annex III high-risk systems follow the internal-control route in Annex VI. The provider self-assesses conformity, draws up the EU declaration of conformity under Article 47, and affixes the CE marking. No notified body is involved. The exceptions are biometric systems under Annex III point 1, which can require notified-body involvement under Annex VII (and must, where harmonised standards have not been applied in full), and any high-risk system that is also covered by Annex I EU harmonisation legislation, where the conformity assessment is integrated with the existing sectoral procedure (medical devices, machinery, and so on). For SMEs whose only high-risk exposure is on the Annex III non-biometric side, this is meaningful relief: the path runs through internal documentation and EU database registration, not through an external audit body and a notification fee.
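To make that routing concrete, the sketch below encodes the Article 43 branch points as a small Python function. It is a simplification under stated assumptions: the three boolean inputs flatten questions that in practice need legal judgment (notably how fully harmonised standards cover the Article 8 to 15 requirements), and the enum labels are our own shorthand, not terms from the Regulation.

```python
from enum import Enum

class ConformityRoute(Enum):
    INTERNAL_CONTROL = "Annex VI internal control (self-assessment)"
    NOTIFIED_BODY = "Annex VII assessment involving a notified body"
    SECTORAL = "integrated with the sectoral Annex I conformity procedure"

def conformity_route(biometric_annex_iii_point_1: bool,
                     covered_by_annex_i_legislation: bool,
                     harmonised_standards_fully_applied: bool) -> ConformityRoute:
    """Simplified Article 43 routing for a high-risk AI system (illustration only)."""
    if covered_by_annex_i_legislation:
        # Article 6(1) products: the AI Act assessment is folded into the
        # existing sectoral procedure (medical devices, machinery, ...).
        return ConformityRoute.SECTORAL
    if biometric_annex_iii_point_1 and not harmonised_standards_fully_applied:
        # Biometric systems without fully applied harmonised standards
        # must follow the Annex VII notified-body procedure.
        return ConformityRoute.NOTIFIED_BODY
    # Other Annex III systems (and biometric systems where harmonised
    # standards are applied and the provider opts for self-assessment).
    return ConformityRoute.INTERNAL_CONTROL

print(conformity_route(False, False, True).value)  # Annex VI internal control (self-assessment)
```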

Limited risk (Article 50)

The Limited-risk tier is a transparency regime, not a conformity regime. There is no notified body, no CE marking, no EU database registration. There are four obligations.

  • Article 50(1): providers of AI systems intended to interact directly with natural persons must design and develop them so that affected persons are informed they are interacting with an AI system, unless this is obvious to a reasonably well-informed person.
  • Article 50(2): providers of AI systems generating synthetic audio, image, video, or text content must mark outputs in a machine-readable format and ensure they are detectable as artificially generated or manipulated, with technical solutions that are effective, interoperable, robust, and reliable.
  • Article 50(3): deployers of an emotion-recognition system or a biometric categorisation system (where not prohibited) must inform the natural persons exposed to them.
  • Article 50(4): deployers of an AI system that generates or manipulates image, audio, or video content constituting a deepfake must disclose that the content has been artificially generated or manipulated. Where the content is part of an evidently artistic, creative, satirical, or fictional work, the disclosure obligation is relaxed. Deployers of AI-generated text published to inform the public on matters of public interest must disclose that the text has been artificially generated or manipulated, unless an editorial human review has taken place and a natural or legal person holds editorial responsibility.

Limited-risk transparency obligations take effect on 2 August 2026. The compliance lift is small in absolute terms — a banner, a watermark, an "AI assistant" disclosure label — but the scope is broad, and most SME chatbots and content workflows are in scope.
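What that lift looks like in code is correspondingly small. The sketch below shows one way a chatbot response could carry both the Article 50(1) user-facing notice and an Article 50(2) machine-readable marker — the field names and metadata convention are our own illustration, and a production system would more likely attach provenance through an established standard (for example C2PA-style content credentials) than through an ad-hoc dictionary.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

AI_DISCLOSURE_NOTICE = (
    "You are chatting with an AI assistant. "
    "Responses are generated automatically and may contain errors."
)

@dataclass
class ChatbotResponse:
    text: str
    # Article 50(1): user-facing disclosure rendered alongside the conversation.
    disclosure: str = AI_DISCLOSURE_NOTICE
    # Article 50(2): machine-readable marking of synthetic output (illustrative fields).
    metadata: dict = field(default_factory=dict)

def mark_as_synthetic(response: ChatbotResponse, model_name: str) -> ChatbotResponse:
    """Attach a minimal machine-readable marker to a generated response."""
    response.metadata.update({
        "ai_generated": True,
        "generator": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    })
    return response

reply = mark_as_synthetic(ChatbotResponse(text="Your order ships tomorrow."), "example-llm-v1")
print(reply.disclosure)
print(reply.metadata["ai_generated"])  # True
```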

Minimal risk

Everything that is not Unacceptable, High, or Limited sits in the Minimal-risk tier. Spam filters, AI in video games, AI-driven inventory forecasting, recommendation engines on retail sites, and most internal productivity assistants fall here. There are no mandatory obligations under the AI Act for Minimal-risk systems. Article 95 invites providers and deployers to apply the obligations of Article 50 voluntarily and to adopt codes of conduct — useful for trust-building with customers, but not legally required. The volume of systems in this tier dwarfs the others, and the right operating posture is a documented assessment that lands the system in Minimal, plus a standing review trigger if features change.

The fifth piece: General-Purpose AI models

Articles 51 to 55 add a parallel regime for General-Purpose AI (GPAI) models — large language models, image generators, multimodal foundation models, and similar systems. The GPAI regime sits beside the four-tier system, not inside it: a deployed AI use case carries both its tier-specific obligations and any applicable GPAI obligations from upstream model providers. GPAI obligations took effect on 2 August 2025.

There are two layers of GPAI obligation. Regular GPAI providers — anyone placing a general-purpose AI model on the EU market — must, under Article 53, draw up and keep up to date technical documentation including training and testing process and evaluation results; make information and documentation available to downstream providers who integrate the model into their AI systems; put in place a policy to comply with EU copyright law, in particular to identify and respect text-and-data-mining opt-outs under Article 4(3) of Directive (EU) 2019/790; and publish a sufficiently detailed summary of the content used for training, following a template provided by the AI Office.

GPAI providers whose models are designated as posing systemic risk — Article 51 designates this where cumulative training compute exceeds 10^25 floating-point operations, or where the European Commission designates a model on the basis of equivalent capabilities — face additional Article 55 obligations: model evaluation including adversarial testing, identification and mitigation of systemic risks at Union level, tracking and reporting of serious incidents and possible corrective measures to the AI Office and competent authorities without undue delay, and an adequate level of cybersecurity protection for the model and its physical infrastructure.
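For orientation on where the 10^25 FLOP line sits, a widely used back-of-the-envelope rule estimates dense-transformer training compute as roughly 6 × parameters × training tokens. The rule and the figures below are illustrative assumptions on our part — they are not part of the Regulation and not the published training details of any particular model.

```python
def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough estimate: ~6 FLOPs per parameter per training token (rule of thumb only).

    Actual compute depends on architecture, precision, and training recipe,
    and an Article 51 designation does not hinge on this approximation.
    """
    return 6 * parameters * training_tokens

# Hypothetical example: a 70B-parameter model trained on 15 trillion tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.1e}")      # 6.3e+24 -- below the 1e25 presumption threshold
print(flops > 1e25)        # False
```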

The practical takeaway for SMEs is that you are almost certainly a deployer of GPAI, not a provider. When you embed GPT-4o, Claude, Gemini, Mistral Large, or Llama into your product, the model provider carries the GPAI obligations. Your obligations attach to the use case you are building on top — its risk tier under Articles 5/6/50 — and to the deployer-side flow-down obligations the upstream provider passes to you under Article 53(1)(b). One important exception in Article 25: if you put a high-risk AI system on the market under your own name or trademark, or substantially modify one already on the market, you become its provider for AI Act purposes and inherit the corresponding obligations.

Two operational consequences follow from being a GPAI deployer. First, you must read and act on the upstream provider's Article 53(1)(b) instructions for use. These typically include limits on permitted use cases, prompts on acceptable input data categories, and notes on known limitations and failure modes. Ignoring those instructions is itself a compliance failure on your side, even if the upstream model provider has done everything correctly. Second, you should keep evidence of the upstream provider's Article 53 compliance posture — the published training-data summary, the technical documentation excerpts shared with downstream providers, the copyright policy statement. When a competent authority asks how your AI system handles training-data provenance, "we use a third-party model" is not a sufficient answer: you need to point to the upstream provider's documentation and demonstrate that you have integrated its constraints into your deployment.
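One low-effort way to hold that evidence is a per-model record that your governance group refreshes whenever the upstream provider updates its documentation. The structure below is an internal convention we sketch for illustration, not a format the Act prescribes.

```python
from dataclasses import dataclass, field

@dataclass
class UpstreamGPAIEvidence:
    """Illustrative evidence record for a third-party GPAI model you deploy."""
    model_name: str
    provider: str
    instructions_for_use_ref: str            # Article 53(1)(b) documentation received
    training_data_summary_ref: str           # provider's published training-data summary
    copyright_policy_ref: str                # provider's EU copyright policy statement
    permitted_use_notes: str = ""            # limits and known failure modes to respect
    last_reviewed: str = ""                  # ISO date of your last internal review
    integrated_constraints: list[str] = field(default_factory=list)
```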

Decision tree: classify your AI use case in 8 questions

Walk every AI use case in your inventory through these eight questions in order. The first triggering answer fixes the tier. Document the answers — the audit trail is itself part of the compliance posture under Article 17.

Question 1. Does the use case match any prohibition in Article 5? Subliminal manipulation causing harm; exploitation of vulnerabilities; social scoring; predictive policing based on profiling alone; untargeted scraping for facial recognition; workplace or educational emotion recognition (outside safety/medical); biometric categorisation by sensitive attributes; real-time remote biometric identification in public for law enforcement. If yes — Unacceptable. Stop. The system cannot be deployed. Redesign or exit.

Question 2. Is the use case listed in Annex III, and does it not qualify for the Article 6(3) "non-significant risk" derogation? Annex III categories are biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration/border, and justice/democratic processes. If the system is in one of these areas and either profiles natural persons or materially influences decisions, the derogation does not apply. If yes — High-risk. Move to the High-risk obligation set.

Question 3. Is the AI a safety component of a product covered by Annex I EU harmonisation legislation that requires third-party conformity assessment? Medical devices, machinery, in-vitro diagnostics, lifts, toys, recreational craft, civil aviation, motor vehicles, marine equipment, agricultural and forestry vehicles. If yes — High-risk under Article 6(1). The conformity assessment is integrated with the existing sectoral assessment.

Question 4. Are you the provider or the deployer? You are a provider if you develop the AI system and place it on the EU market or put it into service under your name or trademark, or if you substantially modify a high-risk system already on the market. You are a deployer if you use an AI system under your authority for a professional activity. Most SMEs are deployers. The obligation set differs significantly: providers carry the conformity-assessment burden under Article 43, deployers carry use-side obligations under Article 26 (human oversight, monitoring, logging, informing affected workers and natural persons).

Question 5. Is your system built on a general-purpose AI model — an LLM, a multimodal model, an image generator? GPAI obligations under Articles 51 to 55 apply to the upstream model provider. As a downstream integrator, you receive technical documentation and instructions for use under Article 53(1)(b), and you must use them properly. Your use case still needs its own tier classification on top.

Question 6. Does the system interact directly with natural persons — chatbot, voice assistant, virtual agent, in-app helper? If yes — Limited-risk transparency under Article 50(1) applies. Affected persons must be informed they are interacting with an AI system, unless that fact is already obvious in context.

Question 7. Does the system generate or manipulate text, image, audio, or video content — synthetic media, AI-generated marketing copy, voice cloning, deepfake video? If yes — Article 50(2) requires machine-readable marking of outputs as artificially generated, and Article 50(4) requires user-facing disclosure for deepfakes and for AI-generated text published on matters of public interest. The disclosure obligation is relaxed for evidently artistic, creative, satirical, or fictional work and for human-edited editorial content.

Question 8. Is your model trained with cumulative compute exceeding 10^25 floating-point operations, or has the European Commission designated it as posing systemic risk? Almost no SME will answer yes. If yes — GPAI with systemic risk under Article 55, with model evaluation, risk mitigation, serious-incident reporting, and cybersecurity obligations on top of the regular GPAI obligations.

If none of questions 1, 2, 3, 6, or 7 trigger, the use case sits in the Minimal-risk tier and carries no mandatory obligations beyond standing inventory and review. Document the negative result — Article 4 AI literacy obligations still apply regardless of tier, and the documented assessment is what you show an authority that asks.
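The eight questions translate almost mechanically into code. The sketch below is a triage aid for inventory work, not a legal determination: the booleans flatten nuance that still needs human judgment (the Article 6(3) derogation, the deepfake carve-outs), and questions 4, 5, and 8 are omitted because they fix your role and upstream GPAI obligations rather than the tier itself.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    matches_article_5_prohibition: bool = False       # Q1
    listed_in_annex_iii: bool = False                 # Q2
    qualifies_for_art_6_3_derogation: bool = False    # Q2
    annex_i_safety_component: bool = False            # Q3
    interacts_with_natural_persons: bool = False      # Q6
    generates_synthetic_content: bool = False         # Q7

def classify(uc: UseCase) -> str:
    """Walk the decision tree in order; the first trigger fixes the tier."""
    if uc.matches_article_5_prohibition:
        return "Unacceptable (Article 5) - redesign or exit"
    if uc.listed_in_annex_iii and not uc.qualifies_for_art_6_3_derogation:
        return "High-risk (Article 6(2) / Annex III)"
    if uc.annex_i_safety_component:
        return "High-risk (Article 6(1) / Annex I product)"
    if uc.interacts_with_natural_persons or uc.generates_synthetic_content:
        return "Limited-risk (Article 50 transparency)"
    return "Minimal risk - document the assessment and set a review trigger"

# The CV-screening example worked through later in this guide:
cv_screening = UseCase("CV summarisation for recruitment", listed_in_annex_iii=True)
print(classify(cv_screening))   # High-risk (Article 6(2) / Annex III)
```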

Common misclassification traps

Five misclassification mistakes show up again and again in SME client work through 2025 and 2026. Each one leaves an organisation a short step away from a much heavier obligation set than the one it thinks it is carrying.

Trap 1: "We just use ChatGPT, we're not really doing AI." This is the most common framing and the most expensive. Plugging GPT-4o into a CV-screening workflow does two things at once: it makes the model provider an upstream GPAI provider with Article 53 obligations, and it makes you a deployer of a high-risk Annex III employment system with the full Article 26 obligation set, even if you wrote no model code. The "we just use" framing collapses two separate tier analyses into a single misleading statement. Treat them as two distinct classification questions.

Trap 2: "It's internal-only, so the AI Act doesn't really apply." The AI Act regulates systems placed on the market or put into service in the Union. Internal deployment is "putting into service" under Article 3(11) and is squarely in scope. An HR-screening AI used only on internal candidates is just as High-risk as one sold to other employers. The internal-only framing is also wrong about Article 26: deployers must inform natural persons subjected to the use of a high-risk system in the workplace, regardless of whether the system is a third-party SaaS or an internal build.

Trap 3: "We didn't train the model, so we're not the provider." Usually correct, but Article 25 has two override conditions. If you put a high-risk AI system on the market under your own name or trademark, you are the provider regardless of who built the underlying model. If you substantially modify a high-risk AI system already on the market, you become the provider for the modified system. Fine-tuning a foundation model on your own data with significant changes to its intended purpose is a frequent gateway into provider status — and into the full conformity-assessment burden under Article 43.

Trap 4: "Our chatbot is Minimal-risk." Not under Article 50. Any AI system intended to interact directly with natural persons triggers the Limited-risk transparency obligation. The compliance lift is light — a clear "you are chatting with an AI assistant" notice — but skipping it because the system "feels Minimal" creates avoidable exposure when the obligations bite on 2 August 2026.

Trap 5: "AI Act tier covers our automated decisions, so GDPR Article 22 is handled." It is not. Article 22 GDPR creates a right not to be subject to a decision based solely on automated processing that produces legal or similarly significant effects, with separate consent, contractual necessity, or explicit law conditions and a right to obtain human intervention. The AI Act human-oversight requirement under Article 14 and GDPR Article 22 share themes but are not interchangeable: Article 14 is a design-and-deploy requirement on the provider; Article 22 is a data-subject right against the controller. Compliance with one does not discharge the other. Map both for any system that makes consequential decisions about identifiable natural persons.

A sixth trap deserves a mention because it sneaks past most internal classification reviews: assuming a single tier holds across the entire product. AI Act tiers attach to use cases, not to vendors or platforms. The same machine-learning platform can support a Minimal-risk recommendation feature, a Limited-risk customer chatbot, and a High-risk recruitment workflow inside the same logical product. Treat each use case as a separate classification exercise with its own tree walk, its own documentation, and its own remediation backlog. A single product-level "we are Limited-risk" verdict is almost always a misclassification that buries one or more higher-tier features inside a comfortable label.

Tier-by-tier compliance checklist

Once a use case has a tier, the obligations follow mechanically. The checklist below summarises the substantive deliverables per tier. Use it to scope a remediation backlog after the inventory and decision-tree work.

Unacceptable: stop and exit

There is no compliance route. Decommission the use case, redesign it so it no longer falls within Article 5, or exit the EU market for that use case. Document the decision and retain it for audit purposes.

High-risk: thirteen-item compliance baseline

  • Risk management system (Article 9): a documented, iterative process across the system lifecycle covering identification, evaluation, and mitigation of foreseeable risks to health, safety, and fundamental rights.
  • Data and data governance (Article 10): training, validation, and testing data sets that are relevant, sufficiently representative, and to the best extent possible free of errors, with documented bias-mitigation measures.
  • Technical documentation (Article 11 and Annex IV): drawn up before the system is placed on the market, kept up to date, demonstrating conformity with all applicable requirements.
  • Record-keeping (Article 12): automatic logging of events relevant to risk identification and post-market monitoring, retained for an appropriate period.
  • Transparency and provision of information to deployers (Article 13): instructions for use that are concise, complete, correct, clear, relevant, accessible, and comprehensible for deployers.
  • Human oversight (Article 14): design measures enabling natural persons to oversee the system, intervene, override outputs, or disengage as needed.
  • Accuracy, robustness, and cybersecurity (Article 15): appropriate levels declared in the instructions for use, with resilience to errors, faults, and attempts at unauthorised use.
  • Quality management system (Article 17): documented, covering compliance strategy, design control, data management, post-market monitoring, incident reporting, communication with authorities, and resource management.
  • Conformity assessment (Article 43): internal control for most Annex III systems, third-party assessment for Annex I products and for biometric systems under Annex III point 1.
  • CE marking (Article 48): affixed visibly, legibly, and indelibly, with the identification number of the notified body where applicable.
  • EU database registration (Article 49): registration of the system in the EU database for high-risk AI systems before placing on the market or putting into service.
  • Post-market monitoring (Article 72): a documented system to collect and analyse data on system performance throughout its lifetime.
  • Serious incident reporting (Article 73): notification of any serious incident to the market surveillance authority of the affected Member State, within deadlines that depend on the type of incident (not later than 2 days for a widespread infringement or a serious incident affecting critical infrastructure, 10 days for a death, 15 days for most others).
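For backlog planning it can help to carry these thirteen items as data rather than prose. The sketch below is one way to seed a per-system remediation tracker — the status values and owner field are internal conventions of ours, not anything the Regulation prescribes.

```python
from dataclasses import dataclass

HIGH_RISK_OBLIGATIONS = {
    "Art. 9":  "Risk management system",
    "Art. 10": "Data and data governance",
    "Art. 11": "Technical documentation (Annex IV)",
    "Art. 12": "Record-keeping / automatic logging",
    "Art. 13": "Transparency and instructions for deployers",
    "Art. 14": "Human oversight",
    "Art. 15": "Accuracy, robustness, cybersecurity",
    "Art. 17": "Quality management system",
    "Art. 43": "Conformity assessment",
    "Art. 48": "CE marking",
    "Art. 49": "EU database registration",
    "Art. 72": "Post-market monitoring",
    "Art. 73": "Serious incident reporting",
}

@dataclass
class RemediationItem:
    article: str
    deliverable: str
    owner: str = "unassigned"
    status: str = "not_started"   # not_started | in_progress | done

def new_backlog() -> list[RemediationItem]:
    """Seed a remediation backlog for one high-risk system."""
    return [RemediationItem(article, deliverable)
            for article, deliverable in HIGH_RISK_OBLIGATIONS.items()]

backlog = new_backlog()
print(f"{len(backlog)} obligations to scope")   # 13 obligations to scope
```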

Limited-risk: transparency only

Implement Article 50 disclosures: a clear notice that natural persons are interacting with an AI system; machine-readable marking of synthetic outputs; deployer disclosure of deepfakes and of AI-generated text published on matters of public interest; deployer disclosure when emotion-recognition or permitted biometric categorisation systems are used. Update terms of service and privacy notices to reflect these disclosures, and hold a record of the technical implementation.

Minimal-risk: standing posture only

Document the classification result, retain the assessment, and review whenever the system's features or context of use change materially. Optionally adopt the Article 95 voluntary code of conduct for trust-signalling. Maintain Article 4 AI literacy training for staff who operate or use the system.

GPAI providers (if you place a model on the market under your own name)

Draw up and maintain Article 53 technical documentation; provide downstream-provider information and instructions; implement an EU copyright compliance policy honouring text-and-data-mining opt-outs; publish a sufficiently detailed training-data summary using the AI Office template. If your model crosses 10^25 FLOPs cumulative compute or is designated under Article 51, add Article 55 model evaluation, systemic-risk mitigation, serious-incident reporting, and model-and-infrastructure cybersecurity.

Timeline of compliance deadlines

Six dates anchor the AI Act calendar. Calibrate your remediation roadmap against them.

  • 1 August 2024: AI Act entered into force.
  • 2 February 2025: Article 5 prohibitions enforceable. Article 4 AI literacy obligations apply to all providers and deployers — staff involved in operation and use of AI systems must have a sufficient level of AI literacy.
  • 2 August 2025: GPAI provider obligations under Articles 51 to 55 enforceable. Governance provisions and the Article 99 penalty regime apply. Member State competent authorities and notifying bodies must be designated.
  • 2 February 2026: Commission guidelines on the practical implementation of Article 6 are due under Article 6(5), together with a comprehensive list of practical examples of high-risk and non-high-risk use cases — useful input for borderline Annex III classification calls.
  • 2 August 2026: most substantive AI Act provisions apply — Annex III high-risk obligations (Articles 8 to 22, plus 26 deployer obligations), Article 50 transparency obligations, EU database registration. This is the headline date for SMEs with non-product high-risk systems and for any SME running customer-facing chatbots, deepfake tooling, or AI content generation.
  • 2 August 2027: Article 6(1) high-risk obligations apply for AI systems that are safety components of, or are themselves, products covered by Annex I EU harmonisation legislation requiring third-party conformity assessment.

What to do this quarter

A 30/60/90-day plan, calibrated for an SME with no mature AI governance function in place.

Days 1 to 30 — inventory and kick-off. Run a cross-functional discovery sprint covering product, engineering, HR, marketing, customer support, finance, and operations. Catalogue every AI system in production or planned for the next twelve months: the use case, the upstream model, the data, the affected natural persons, the decision context. Aim for completeness rather than depth — a single-page record per system is enough for now. Stand up a small AI governance group: a sponsor from the management body, a technical lead, a legal/compliance lead, and a DPO link. Allocate budget for the next two phases.
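A single-page record per system can be as light as a structured stub that the days 31 to 60 classification pass then fills in. The field names below are a suggested internal template, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class AIInventoryRecord:
    """Illustrative one-page inventory stub for one AI use case."""
    system_name: str
    use_case: str                  # what decision or output the system produces
    upstream_model: str            # in-house model or the third-party GPAI used
    data_categories: str           # e.g. personal data, special categories, none
    affected_persons: str          # employees, candidates, customers, ...
    decision_context: str          # informational, advisory, or determinative
    owner: str                     # accountable business owner
    planned_or_live: str = "live"
    tier: str = "unclassified"     # filled in during the classification pass

inventory: list[AIInventoryRecord] = []
```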

Days 31 to 60 — classification and gap assessment. Walk every inventory entry through the eight-question decision tree. Record the answers and the resulting tier. For every High-risk and Limited-risk system, perform a gap assessment against the corresponding obligations. For any borderline Annex III system, document the Article 6(3) reasoning explicitly — vague justifications will not survive an audit. Identify which systems trigger GDPR Article 22 in addition to AI Act obligations and route those for joint legal review.

Days 61 to 90 — prioritised remediation roadmap. Convert the gap assessment into a backlog with owners, deliverables, and deadlines, sequenced against the 2 August 2026 date. Front-load Article 50 transparency work for chatbots and content tools — it is high-volume and low-effort and demonstrates progress. Sequence the heavier High-risk deliverables (risk management file, technical documentation, post-market monitoring) over the remaining months. Present the roadmap to the management body for sign-off and request a standing AI Act item on the quarterly board agenda. Schedule the first internal audit for Q1 2027.

Two worked examples make the discipline concrete.

Imagine a 50-person SaaS HR-tech vendor in Munich integrating GPT-4o for CV summarisation. Question 1: not prohibited. Question 2: yes, employment recruitment is Annex III point 4(a). Question 3: not an Annex I product. Therefore High-risk. Question 4: the vendor places the system on the EU market under its own brand — provider. Question 5: built on a GPAI model — OpenAI carries Article 53 obligations upstream; the vendor receives the technical documentation flow-down. Conclusion: the full High-risk obligation set under Articles 9 to 73, plus Article 26 deployer obligations cascaded to its customers. Effective date for substantive obligations: 2 August 2026.

Now imagine a 20-person Stockholm e-commerce company using a Gemini-powered customer-support chatbot for general product questions, with no automated decisions on consumer rights or pricing. Question 1: not prohibited. Question 2: not in Annex III. Question 3: not an Annex I product. Question 6: yes, direct interaction with natural persons — Limited-risk transparency under Article 50(1). Conclusion: Limited-risk on the chatbot itself, with a single Article 50(1) disclosure obligation; everything else the company runs stays Minimal. Effective date: 2 August 2026.

Both companies now have a tier, a deadline, and a backlog. That is the entire goal of the quarter.
