The AI Act in 2026: Concrete Obligations for SMEs and Scale-ups
Artificial Intelligence
AI Strategy
AI Governance
AI Risk Management
AI Law
May 09, 2026 · 14 min read
In 2026, the AI Act is no longer a topic reserved for lawyers or large corporations. If your SME uses a customer chatbot, a writing assistant, a resume screening tool, an agent connected to the CRM, or an AI-powered automation platform, you are probably affected.
The good news: the AI Act does not require every SME to build a compliance factory; its obligations are proportionate to the risk. The bad news: waiting until the last moment can block an AI project, complicate a software purchase, or create an avoidable HR, customer, or regulatory risk.
This article translates the AI Act into concrete obligations for executives and for the ops, product, and IT managers of SMEs and scale-ups that want to keep deploying AI without losing control. This content is informative and does not replace legal advice tailored to your situation.
The AI Act 2026: Key Dates to Know
The reference text is the European Artificial Intelligence Act, EU Regulation 2024/1689. It entered into force in 2024, but its obligations apply gradually.
| Date | What applies | Concrete impact for SMEs and scale-ups |
| --- | --- | --- |
| August 1, 2024 | Entry into force of the regulation | The regulatory countdown begins. |
| February 2, 2025 | Prohibition of certain AI practices and AI literacy obligation | Certain practices are already banned. Teams must be educated on AI uses and risks. |
| August 2, 2025 | Obligations related to general-purpose AI models and European governance | Mostly relevant if you develop or provide GPAI models. |
| August 2, 2026 | Application of the majority of operational obligations | Key date for high-risk systems, transparency, and obligations of providers and deployers. |
| August 2, 2027 | Application of certain rules for high-risk systems integrated into regulated products | Relevant for manufacturers, medtech, machinery, equipment, and CE-marked products. |
For a standard SME, the priority milestone is August 2, 2026. By this date, you must be able to answer three simple questions: what AI systems do we use, what level of risk do they carry, and what evidence do we have that their use is controlled?
The AI Act Classifies Uses, Not Companies
A common mistake is asking: are we an AI company? The right question is rather: do we have AI systems that influence people, decisions, or sensitive processes?
A single scale-up can have several very different cases:
- An internal assistant that rewrites sales emails: generally low risk.
- A public chatbot that answers customers: transparency and quality-control obligations.
- A tool that pre-screens candidates: a potentially high-risk case.
- An AI agent that triggers actions in an ERP: operational risk depending on the authorized actions.
The AI Act therefore works by use case. This is why a simple AI register becomes one of the first deliverables to put in place.
Provider or Deployer: The Distinction That Changes Your Obligations
Before talking about compliance, you must identify your role. In the AI Act, obligations differ depending on whether you develop, sell, distribute, or use an AI system.
| Role | Practical definition | SME or scale-up example | Why it matters |
| --- | --- | --- | --- |
| Deployer | You use an AI system under your authority, excluding personal use | You use an AI chatbot on your website or a meeting summary tool | You must train users, use the system correctly, monitor it, inform individuals in certain cases, and comply with the GDPR. |
| Provider | You develop an AI system, or have one developed, and place it on the market or put it into service under your name | You sell a SaaS platform with AI scoring or an integrated assistant | You bear obligations for design, documentation, testing, and compliance, especially if the system is high-risk. |
| Importer or distributor | You make an AI solution provided by a third party available in Europe | You resell a non-European AI solution to your clients | You must verify certain information and not distribute a manifestly non-compliant system. |
| Provider and deployer | You develop an AI system for your own internal use | You create an internal HR screening or sales decision support tool | You may accumulate both design and usage obligations. |
Common case: an SME using ChatGPT, Claude, Mistral, or Gemini via a standard interface is generally a deployer. However, a scale-up that integrates a third-party model into a product sold to its customers can become a provider of the AI system, even if it did not train the base model.
The 4 Risk Levels of the AI Act
The European regulation adopts a risk-based approach. The more the use can affect a person's rights, safety, or opportunities, the more the obligations increase.
| Risk level | Concrete examples | Main obligations |
| --- | --- | --- |
| Unacceptable risk (prohibited) | Deceptive manipulation causing harm, exploitation of vulnerabilities, certain uses of emotion recognition at work or in education, social scoring | Do not deploy. These practices have been banned since February 2025. |
| High risk | Recruitment, worker management, access to education, credit, health or life insurance, certain regulated products | Documentation, risk management, data quality, human oversight, logs, compliance, informing individuals depending on the case. |
| Limited risk (transparency) | Public chatbot, synthetic content, deepfakes, user interaction with an AI system | Clearly inform the user that they are interacting with an AI or that the content is artificially generated. |
| Minimal risk | Internal drafting aids, spam filters, most everyday productivity uses | No specific heavy obligations, but AI literacy, security, GDPR, and internal governance remain necessary. |
Classification does not depend solely on the technology. The same language model can be used to generate an email draft, which is low risk, or to rank job applications, which can become high risk.
Concrete Obligations for an SME Using AI
Maintain a Simple AI Use Register
The AI register is the starting point. It can fit in a spreadsheet at first. The goal is not to produce a perfect document, but to know what actually exists in the company.
A useful register contains at least: tool name, user team, purpose, data used, people impacted, provider, estimated risk level, business owner, validation rules, available logs, and review status.
This register also helps combat shadow AI, i.e., the scattered use of AI tools not validated by the company. To go further, you can rely on an enterprise AI audit approach to prioritize high-value and high-risk cases.
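As a minimal sketch of what this starter spreadsheet can look like, here is a Python version whose fields mirror the list above (the field names and export helper are illustrative, not prescribed by the regulation):

```python
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class AIUseCase:
    """One row of the AI register; fields mirror the list above."""
    tool_name: str
    user_team: str
    purpose: str
    data_used: str          # categories of data fed to the tool
    people_impacted: str    # customers, candidates, employees...
    provider: str
    risk_level: str         # "minimal" | "limited" | "high" | "prohibited"
    business_owner: str     # accountable person
    validation_rules: str   # when a human must review outputs
    logs_available: bool
    review_status: str      # e.g. "approved", "under review"

def export_register(entries: list[AIUseCase], path: str) -> None:
    """Write the register to a CSV that doubles as the shared spreadsheet."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=[fl.name for fl in fields(AIUseCase)])
        writer.writeheader()
        writer.writerows(asdict(e) for e in entries)
```

Starting from an explicit structure like this makes it easy to add columns later without losing the history of what was reviewed and when.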
Train Teams in AI Literacy
Since February 2025, providers and deployers must take measures to ensure a sufficient level of AI literacy among the people who use or supervise these systems.
In concrete terms, an SME must be able to show that the relevant teams understand:
- The limitations of the models, notably hallucinations, biases, and reasoning errors.
- Confidentiality rules, for example not pasting sensitive data into an unvalidated tool.
- Cases where human validation is mandatory.
- Internal rules on authorized tools and prohibited uses.
Useful training is not a generic conference on AI. It must be linked to business functions: support, sales, HR, finance, product, ops. This is exactly the role of an adoption program or an AI trainer in a company starting to structure its uses.
Inform Users When They Interact with an AI
If a customer speaks to an AI chatbot, they must generally be informed that they are interacting with an AI. If you publish synthetic content likely to be mistaken for real content, notably images, audio, video, or deepfakes, it must also be clearly disclosed as artificially generated.
For a website, this can translate into a visible notice in the chatbot interface, a help page explaining the assistant's limitations, a human contact option, and internal rules on prohibited responses.
Transparency should not be reduced to a legal sentence hidden in the terms and conditions. It must help the user understand the expected level of reliability and the available recourses.
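At the code level, a minimal sketch of this idea, assuming a simple chat backend (the names and wording are illustrative):

```python
from dataclasses import dataclass

AI_DISCLOSURE = (
    "You are chatting with an AI assistant. Answers may contain errors. "
    "You can ask for a human agent at any time."
)

@dataclass
class ChatReply:
    text: str
    is_ai_generated: bool   # lets the frontend render a persistent AI badge
    escalation_hint: str    # how the user reaches a human

def wrap_reply(model_output: str, first_turn: bool) -> ChatReply:
    """Attach the disclosure so transparency never depends on the model's output."""
    prefix = AI_DISCLOSURE + "\n\n" if first_turn else ""
    return ChatReply(
        text=prefix + model_output,
        is_ai_generated=True,
        escalation_hint="Type 'human' to be transferred to an agent.",
    )
```

The design point is that the disclosure and escalation path are attached by the application, not generated by the model, so they cannot be omitted by a bad completion.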
Strictly Regulate High-Risk Cases
High-risk systems are those to be treated with the most caution. For SMEs and scale-ups, the most frequent areas are recruitment, worker management, performance evaluation, access to certain essential services, credit, insurance, and certain regulated products.
If you use an AI to rank CVs, filter candidates, recommend a promotion, evaluate an employee, or influence access to an important service, do not treat it as a simple productivity tool. You must document the purpose, vet the provider, verify data quality, provide for real human oversight, keep logs available, and inform the people concerned when the regulation requires it.
In some cases, a fundamental rights impact assessment may be required, particularly for certain public deployers or certain uses related to credit and insurance. Even when it is not formally mandatory for your company, the logic remains sound: document who is impacted, what errors are possible, what recourses exist, and how humans remain in control.
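To illustrate what real human oversight can mean in practice, here is a minimal sketch of a review gate in which the AI only proposes and a named human decides, with both steps logged (the workflow and names are assumptions, not a pattern mandated by the text):

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("hr_screening")

def propose_shortlist(candidates: list[dict], ai_rank) -> list[dict]:
    """The AI output is advisory: it proposes an ordering, nothing more."""
    ranked = ai_rank(candidates)  # any scoring callable you plug in
    log.info("%s AI proposed: %s",
             datetime.now(timezone.utc).isoformat(),
             [c["id"] for c in ranked])
    return ranked

def confirm_shortlist(proposed: list[dict], reviewer: str,
                      approved_ids: set[str]) -> list[dict]:
    """A named human reviewer makes the actual decision; the log keeps the trace."""
    final = [c for c in proposed if c["id"] in approved_ids]
    log.info("Reviewer %s approved %d of %d candidates",
             reviewer, len(final), len(proposed))
    return final
```

The logs answer the two questions an auditor or a candidate will ask: what did the AI suggest, and who actually decided.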
Verify Your AI Providers
In 2026, buying an AI tool without verifying compliance becomes risky. Your provider due diligence must cover at least the following points: product documentation, data processing location, retention policy, data use for training, logs, security, subcontractors, DPA under the GDPR, reversibility, and incident support.
The GDPR continues to apply alongside the AI Act. If your system processes personal data, you must maintain a legal basis, apply minimization, regulate subcontractors, and, in some cases, carry out a data protection impact assessment. The text of the GDPR therefore remains central to AI projects.
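One way to make this due-diligence list actionable is a checklist that flags open points before a tool is approved; a minimal sketch, with illustrative keys and phrasing mirroring the points above:

```python
# Keys mirror the due-diligence points above; phrasing is illustrative.
VENDOR_CHECKLIST = {
    "documentation": "Is up-to-date product documentation available?",
    "data_location": "Where is our data processed and stored?",
    "retention": "How long is data retained, and can we shorten it?",
    "training_use": "Is our data used to train models, and can we opt out?",
    "logs": "Can we export usage and decision logs?",
    "security": "What certifications, encryption, and access controls exist?",
    "subprocessors": "Who are the subcontractors and where do they operate?",
    "gdpr_dpa": "Is a GDPR Data Processing Agreement signed?",
    "reversibility": "Can we export our data and leave without lock-in?",
    "incident_support": "What are the notification delays in case of incident?",
}

def open_points(answers: dict[str, bool]) -> list[str]:
    """Return the questions still unanswered or answered negatively."""
    return [q for key, q in VENDOR_CHECKLIST.items() if not answers.get(key)]
```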
Obligations If You Develop or Sell an AI Solution
If your scale-up develops an AI product, your obligations may be stronger than those of a simple user. The main issue is to know whether you are a provider of an AI system, a provider of a general-purpose AI model, or a provider of a high-risk system.
| Situation | What to anticipate |
| --- | --- |
| You integrate an LLM into your B2B SaaS | You are potentially the provider of the AI system delivered to your clients. Prepare documentation, usage limits, security, logs, client information, and contractual clauses. |
| You sell an HR tool with candidate scoring or screening | Potentially high-risk use. You must plan for risk management, technical documentation, data quality, human oversight, compliance, and post-market monitoring. |
| You develop a support chatbot for clients | Often limited risk, but transparency, security, confidentiality, response control, and documentation are essential. |
| You train or make available a general-purpose AI model | Specific GPAI obligations may apply: documentation, information to integrators, copyright policy, summary of training data according to the regulation's requirements. |
| You manufacture a regulated product with an AI component | Obligations may combine with CE marking and certain rules applicable from 2027. |
For an AI solution sold to clients, compliance must be thought out by design. Retroactively fixing a product that has no logs, no separation of roles, no documentation, or no human oversight mechanism costs much more than integrating it into the initial architecture.
Quick Matrix of Common Use Cases in SMEs
| Use case | Probable level | Priority action |
| --- | --- | --- |
| Internal writing or synthesis assistant | Low to limited | Usage charter, training, confidentiality, human validation on sensitive content. |
| Customer support chatbot on the website | Transparency | Visible AI notice, controlled knowledge base, human escalation, logs, quality measurement. |
| RAG assistant on internal documentation | Low to limited depending on data | Access controls, sources of truth, citations, logging, rules on sensitive data. |
| Lead or prospect scoring in the CRM | Low to limited | Verify personal data, avoid commercial biases, document the scoring logic. |
| CV screening or candidate pre-selection | Potentially high risk | Legal analysis, reinforced documentation, human oversight, candidate information, bias testing. |
| Automated employee evaluation | Potentially high risk | High caution, worker information, HR governance, human recourse, documentation. |
| Emotion recognition at work | Prohibited in most cases | Do not deploy; banned since February 2025 outside narrow exceptions. |
This matrix does not replace a legal qualification, but it is sufficient to identify the cases that must be prioritized in your governance.
30-Day Action Plan Before the August 2026 Deadline
For an SME or scale-up, the right goal is not to produce 80 pages of AI policy. The right goal is to have traceable decisions, identified owners, and proportionate safeguards.
| Period | Deliverable | Expected result |
| --- | --- | --- |
| Week 1 | Inventory of AI tools and uses | You know who uses what, with what data, and for what purpose. |
| Week 2 | Risk classification | Prohibited, sensitive, or high-risk uses are identified. |
| Week 3 | Provider and data review | Contracts, DPAs, retention policies, access, and logs are verified for critical tools. |
| Week 4 | Charter, training, and validation procedure | Teams know what to do, what to avoid, and when to escalate. |
The final deliverable can be very simple: an AI register, a green-orange-red data policy, a provider checklist, a sheet per critical use case, and a monthly review ritual. To structure this into a broader roadmap, you can also draw inspiration from a 30-60-90 day enterprise AI plan.
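As a minimal sketch of the green-orange-red data policy mentioned above (the categories and rules are illustrative assumptions to adapt to your context):

```python
from enum import Enum

class DataColor(Enum):
    GREEN = "non-sensitive: any approved AI tool"
    ORANGE = "internal or business-sensitive: approved enterprise tools only"
    RED = "personal or confidential: no external AI tool"

# Illustrative mapping from data categories to traffic-light levels.
DATA_POLICY: dict[str, DataColor] = {
    "public_marketing_copy": DataColor.GREEN,
    "internal_process_docs": DataColor.ORANGE,
    "customer_personal_data": DataColor.RED,
    "hr_records": DataColor.RED,
}

def allowed_in_external_tool(category: str) -> bool:
    """Unknown categories default to RED: safe until someone classifies them."""
    return DATA_POLICY.get(category, DataColor.RED) is DataColor.GREEN
```

The defensive default is the point of the exercise: anything not yet classified is treated as red until a business owner says otherwise.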
How to Stay Agile Without Neglecting the AI Act
Compliance should not kill innovation. It must prevent bad decisions: an HR tool launched without control, a chatbot that invents contractual answers, an agent that modifies data without validation, or teams pasting customer data into unapproved tools.
The right approach for an SME is lightweight but mandatory governance. It relies on a few artifacts: AI register, data classification, business owner per use case, testing protocol, logs, human validation for sensitive decisions, and provider review.
This logic aligns with best practices for putting AI into production: security, observability, ROI measurement, and risk control. If you industrialize agents, RAGs, or automations, the subject is not only legal. It is also product, technical, and operational. Our guide on key risks and controls of artificial intelligence in business details these safeguards on the architecture and operations side.
FAQ
Does the AI Act apply to an SME that only uses off-the-shelf AI tools? Yes, if the use is professional and linked to the European Union, you can be a deployer. The obligations are often limited, but you must notably train teams, comply with the GDPR, inform users in certain cases, and regulate risky uses.
Is a customer chatbot automatically a high-risk system? No. A support or pre-sales chatbot often falls under transparency obligations rather than high risk. However, you must inform the user, control the answers, provide for human escalation, and protect personal data.
What needs to be done before August 2, 2026? Prioritize five actions: inventory AI uses, classify risks, train teams, verify providers, and document sensitive cases. HR, credit, insurance, education, or automated decision projects must be reviewed as a priority.
What is the difference between the AI Act and the GDPR? The GDPR protects personal data. The AI Act regulates AI systems according to their risk. The two add up. An AI tool can comply with certain AI Act requirements while still posing a GDPR problem if the data, legal basis, or transfers are not controlled.
Should tools like ChatGPT or Claude be banned in the company? Not necessarily. A total ban often pushes uses into the shadows. It is better to define authorized tools, data rules, prohibited cases, examples of good uses, and short training by business function.
Who is responsible if the AI provider is American or non-European? The provider may have its own obligations if its system is placed on the European market or if its outputs are used in the EU. But your company remains responsible for its use, its data, its internal decisions, and transparency towards the impacted individuals.
What are the penalties under the AI Act? The caps can be high, up to several million euros or a percentage of global turnover depending on the infringement, with specific rules for SMEs and start-ups. In practice, the primary issue for an SME is mainly to avoid prohibited uses, uncontrolled sensitive decisions, HR or customer risks, and contractual blockages with key accounts.
Transforming the AI Act into an Operational Advantage
The AI Act should not be seen solely as a constraint. Handled well, it forces the company to clarify its uses, its data, its responsibilities, and its quality criteria. This is exactly what distinguishes a fragile AI experiment from a reliable, integrated, and measurable AI solution.
Impulse Lab supports SMEs and scale-ups through this scaling-up: AI opportunity audits, risk mapping, adoption training, process automation, integration with your existing tools, and development of custom web and AI platforms.
If you want to know which AI uses are priorities, which are risky, and how to prepare for August 2026 without slowing down your teams, you can contact Impulse Lab to frame an audit or a concrete action plan.