AI Progression in Business: 30-60-90 Day Plan for SMEs
Artificial intelligence
AI strategy
AI governance
ROI
Automations
Moving from "testing ChatGPT" to real **AI progression** isn't about adding tools, it's about structuring a capability. For SMEs, the challenge is twofold: deliver visible gains quickly while avoiding chaos (data, costs, compliance, "shadow AI")...
February 10, 2026 · 9 min read
Moving from "we are testing ChatGPT" to real AI progression in business isn’t about adding tools, it’s about structuring a capability. In SMEs, the challenge is twofold: delivering visible gains quickly (otherwise the topic dies), while avoiding chaos (data, costs, compliance, "shadow AI").
This 30-60-90 day plan gives you a pragmatic path to transform AI into measurable results, with light governance adapted to a scaling organization.
What "AI Progression" Means in SMEs (and How to Recognize It)
In an SME, AI progression shows up less in the number of POCs and more in 4 concrete signals:
Frequent use cases, plugged into workflows (not an isolated demo).
A baseline (time, volume, quality, cost) and 3 to 5 KPIs per use case.
Minimal integration into the information system (IS): SSO if possible, logs, sources of truth.
Proportionate guardrails (data, security, human validation, traceability).
If you already check 2 of these 4 points, you can aim for a V1 in 90 days. If you check none, the goal of the first 30 days is to avoid "tool sprawl" and secure the foundations.
The 3 Rules That (Really) Accelerate Progression
1) Prioritize by frequency, not "wow" factor
A good SME use case is repetitive, time-consuming, and standardizable enough to be instrumented.
Typical examples: level 0 customer support, internal assistant on documentation (RAG), document extraction and routing (invoices, requests), structured drafting and synthesis.
2) Measure before optimizing
Without a baseline, you prove nothing, and the project devolves into a debate of opinions. Install a simple measurement protocol first (even a manual one) before "tuning" a model.
To go further on choosing and setting up metrics, see the Impulse Lab guide on AI KPIs.
3) Treat AI like a mini-product
Even an internal copilot needs a minimum of product discipline: a scope, target users, feedback, a cadence, a version.
This is what avoids the trap of the POC that never ships.
Before Starting: Minimum Prerequisites (1 to 2 Days)
You don't need a complete "AI platform" to start, but you need a framework.
Decisions to make right away (and write down):
Who arbitrates (a business sponsor capable of deciding).
Which data is forbidden in AI tools (simple classification: red, amber, green).
How we validate (human in the loop on sensitive actions).
Where we log (at minimum: requests, responses, sources if RAG, errors, estimated costs).
To structure this scoping without getting bogged down, a short audit often helps clarify opportunities and risks (example: strategic AI audit).
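To make the "where we log" decision concrete, here is a minimal sketch of a per-request log record. The field names and example values are illustrative assumptions, not a prescribed schema; adapt them to your stack.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AILogRecord:
    """One entry per AI request; field names are illustrative, not a standard."""
    user_id: str                 # who asked (pseudonymize if required)
    request: str                 # prompt sent to the model
    response: str                # answer returned by the model
    sources: list[str] = field(default_factory=list)  # retrieved docs, if RAG
    error: str = ""              # error message, if any
    estimated_cost: float = 0.0  # rough token-based cost estimate, EUR
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AILogRecord(
    user_id="u-042",
    request="Summarize the refund procedure",
    response="...",
    sources=["procedures/refunds.md"],
    estimated_cost=0.002,
)
print(record)
```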
30-60-90 Day Plan: From Idea to Useful V1
The goal isn't "to have AI"; it's to install an execution loop: choose, deliver, measure, secure, iterate.
Overview (Steering Table)
| Period | Main Objective | Expected Deliverables | Success Criteria |
| --- | --- | --- | --- |
| Days 1-30 | Frame and prove value on a narrow scope | 1 prioritized use case, KPI baseline, instrumented prototype, data rules | A demo on real data + initial measurement + identified risks |
| Days 31-60 | Stabilize and integrate (minimum viable) | Pilot with users, light integration, logs, guardrails | Active users + tracked quality + predictable cost |
| Days 61-90 | Industrialize "just enough" and decide | V1 in controlled production, ROI/risk scorecard, scale plan | Clear decision: deploy, expand, or stop |
The following details what to do, week by week, without turning your SME into a laboratory.
Days 1-30: Scope, Measure, Deliver a Prototype on a Frequent Case
Week 1: Choose 1 use case that "pays off"
Avoid the portfolio of 10 ideas. Choose one use case, possibly two if one is "foundation" (e.g., internal assistant) and the other "showcase" (e.g., customer support), but keep a single execution priority.
Use a simple scorecard (out of 5):
| Criterion | Question | Score 1 | Score 5 |
| --- | --- | --- | --- |
| Frequency | Does it happen every day/week? | Rare | Daily |
| Measurability | Do we have volume, time, quality? | Vague | Easy |
| Data | Are sources accessible and reliable? | Scattered | Identified |
| Integration | Can we plug into the workflow? | No | Yes, easily |
| Risk | Sensitive data, serious errors? | High | Low |
Choose the case with the best frequency × integration × measurability ratio, not the one that impresses the most.
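As an illustration, ranking candidates from this scorecard takes a few lines of code. The weighting below (multiplying frequency, integration, and measurability, then discounting by the data and risk scores) is one possible convention, not a standard formula.

```python
# Illustrative sketch: rank use cases from the scorecard above.
# The weighting convention is an assumption, not a standard formula.
def priority_score(c: dict) -> float:
    # Risk is scored 1 (high) to 5 (low), so higher is better here too.
    return (c["frequency"] * c["integration"] * c["measurability"]
            * (c["data"] / 5) * (c["risk"] / 5))

candidates = [
    {"name": "Level 0 customer support", "frequency": 5, "measurability": 4,
     "data": 4, "integration": 4, "risk": 4},
    {"name": "Internal doc assistant", "frequency": 4, "measurability": 3,
     "data": 3, "integration": 4, "risk": 5},
]

for c in sorted(candidates, key=priority_score, reverse=True):
    print(f"{c['name']}: {priority_score(c):.1f}")
```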
Week 2: Define baseline + KPIs (3 to 5 max)
For an SME, keep it simple:
1 North Star KPI (e.g., average handling time, self-service rate, resolution rate).
1 Cost KPI (e.g., cost per ticket handled, human time saved).
1 Risk KPI if necessary (e.g., rate of responses without a source on a documentation assistant).
The baseline can be "low-tech": ticket export, manual sampling, measurement over 1 week. The important thing is to have a starting point.
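A low-tech baseline can literally be a short script over a ticket export. In this sketch, the file and column names ("tickets_week.csv", "handling_minutes", "resolved") are placeholder assumptions about what your export contains.

```python
import csv
from statistics import mean

# One-week baseline from a ticket export. File and column names are
# placeholder assumptions; adapt them to your support tool's export.
with open("tickets_week.csv", newline="") as f:
    rows = list(csv.DictReader(f))

handling = [float(r["handling_minutes"]) for r in rows]
resolved = [r for r in rows if r["resolved"] == "yes"]

print(f"Volume: {len(rows)} tickets")
print(f"Average handling time: {mean(handling):.1f} min")   # North Star KPI
print(f"Resolution rate: {len(resolved) / len(rows):.0%}")  # quality KPI
```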
Week 3: Build an instrumented prototype (not a demo)
A useful prototype respects 3 principles:
Controlled context (which sources, which rules, which limits).
Traceability (logs, and cited sources when it's a documentation assistant).
Human handoff (when the AI is uncertain, or when the action is critical).
On "knowledge" cases (internal documentation, procedures, support base), a common pattern is RAG (retrieve documents then answer). Impulse Lab also has a glossary entry on RAG (Retrieval-Augmented Generation) if you want to clarify the concept.
Week 4: Clarify risks and usage rules
Without building needless bureaucracy, align on:
Data rules (what we don't send, what we anonymize).
Usage rules (which cases are allowed, and how to escalate).
For a reference framework, you can rely on the NIST AI Risk Management Framework (useful for structuring risks) and keep an eye on applicable obligations of the EU AI Act.
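As one way to operationalize the red/amber/green classification, the sketch below blocks "red" content and anonymizes "amber" content before anything reaches an AI tool. The patterns are illustrative, not an exhaustive DLP policy.

```python
import re

# Minimal sketch of a red/amber/green gate before text reaches an AI tool.
# The patterns are illustrative, not an exhaustive DLP policy.
RED_PATTERNS = [
    re.compile(r"\b\d{16}\b"),                        # card-like numbers: never send
    re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),  # IBAN-like strings
]
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def classify_and_redact(text: str) -> tuple[str, str]:
    if any(p.search(text) for p in RED_PATTERNS):
        return "red", ""                              # blocked entirely
    if EMAIL.search(text):
        return "amber", EMAIL.sub("[EMAIL]", text)    # sent after anonymization
    return "green", text                              # sent as-is

level, safe = classify_and_redact("Contact jean@acme.fr about the quote")
print(level, "->", safe)  # amber -> Contact [EMAIL] about the quote
```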
Days 31-60: Pilot in Real Conditions, Minimal Integration, Quality Under Control
At this stage, you have a prototype. AI progression now plays out in real usage.
Objective: Move from "it works on my machine" to "it works in the team"
Focus on:
A small pilot group (5 to 20 users depending on size).
Real scenarios (the 20 to 50 most frequent cases).
A weekly ritual (feedback, incidents, measurements, decisions).
Minimum Viable Integration (MVI)
Without over-engineering, aim for:
Connection to tools where work happens (support, CRM, drive, wiki, messaging), at least via a single entry point.
Rights management (avoid an assistant giving access to unauthorized info).
If you expose actions (ticket creation, CRM update, email sending), keep human validation at the beginning.
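A minimal sketch of that human validation, assuming a simple action allowlist: sensitive actions are queued for review instead of executing directly. Action names and the approval mechanism are illustrative.

```python
# Minimal sketch of a human-validation gate in front of side-effecting
# actions. Action names and the approval mechanism are illustrative.
SENSITIVE_ACTIONS = {"send_email", "update_crm", "create_ticket"}

def execute(action: str, payload: dict, approved_by: str | None = None) -> dict:
    if action in SENSITIVE_ACTIONS and approved_by is None:
        # Queue for human review instead of executing directly.
        return {"status": "pending_review", "action": action, "payload": payload}
    # ... actually perform the action here ...
    return {"status": "executed", "action": action, "by": approved_by or "auto"}

# The AI proposes; a human confirms before anything leaves the system.
proposal = execute("send_email", {"to": "client@example.com", "body": "..."})
assert proposal["status"] == "pending_review"
done = execute("send_email", proposal["payload"], approved_by="j.martin")
print(done["status"])  # executed
```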
Quality: Set up a simple "golden set"
Create an internal test set: 30 to 100 representative questions, with expected answers or evaluation criteria. This is the basis for tracking regressions.
This mechanism is often more important than "changing the model".
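A golden set can start as plain data replayed on every change. In this sketch the pass criterion (expected keywords present in the answer) is deliberately crude; swap in your own evaluation (exact match, rubric, LLM-as-judge).

```python
# Minimal sketch of a golden-set regression check. The pass criterion
# (expected keywords in the answer) is deliberately crude.
GOLDEN_SET = [
    {"question": "How do I reset my password?",
     "must_contain": ["reset link"]},
    {"question": "What is the refund policy?",
     "must_contain": ["30 days"]},
]

def run_golden_set(answer_fn) -> float:
    passed = 0
    for case in GOLDEN_SET:
        reply = answer_fn(case["question"]).lower()
        if all(k.lower() in reply for k in case["must_contain"]):
            passed += 1
    score = passed / len(GOLDEN_SET)
    print(f"Golden set: {passed}/{len(GOLDEN_SET)} passed ({score:.0%})")
    return score

# Run it after every prompt, model, or source change to catch regressions.
run_golden_set(lambda q: "We refund within 30 days of purchase.")
```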
Days 61-90: Controlled Production, ROI Scorecard, Scale Decision
Objective: A V1 that creates value and that you can defend
You aim for a progressive production launch: limited scope, monitoring, rollback plan.
In this phase, the most important thing is the decision: expand, reinforce, or stop. Good AI progression also includes the right to stop quickly when ROI isn't there.
The Decision Scorecard (to fill in at the end of Day 90)
| Axis | Question | "Go" Signal |
| --- | --- | --- |
| Value | Do KPIs move beyond variance? | Clear gain on 2 key KPIs |
| Adoption | Do users return without prompting? | Recurring usage, actionable feedback |
| Quality | Are critical errors rare and detected? | Guardrails + escalation + tests |
| Costs | Is cost predictable at 2x volume? | Estimable budget, caps possible |
| Risk | Are data and compliance under control? | Logs, access, rules, proofs |
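The cost row of the scorecard comes down to simple arithmetic: take the pilot's observed cost per case and project it onto doubled volume. All numbers below are placeholders.

```python
# Placeholder arithmetic: project pilot costs to 2x volume.
pilot_cases = 400          # cases handled during the pilot month (assumption)
pilot_api_cost = 120.0     # model/API spend over the same month, EUR (assumption)

cost_per_case = pilot_api_cost / pilot_cases
projected_monthly = cost_per_case * pilot_cases * 2   # at doubled volume

print(f"Cost per case: {cost_per_case:.2f} EUR")                      # 0.30 EUR
print(f"Projected at 2x volume: {projected_monthly:.0f} EUR/month")   # 240 EUR
```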
Prepare the "Scale Pack" (without over-equipping)
If V1 is a success, the next step isn't "more POCs", it is:
Expand the scope (more intents, more sources, more teams).
Reinforce industrialization (tests, observability, prompt management, data quality).
Install an AI culture at the point of usage (short training, simple rules, concrete cases).
On adoption, you can also read the Impulse Lab article on AI culture in SMEs.
The Minimal Team to Last 90 Days (Without Hiring)
An SME can move fast with a small team, if roles are clear:
Business Sponsor: arbitrates, prioritizes, protects time.
Operational Lead (PO "use case"): defines scenarios, collects feedback.
Tech Lead / Integration: connects to the IS, manages logs, security, quality.
Data / Sources Lead: identifies documents and access rules.
Compliance Lead (even part-time): GDPR, contracts, usage rules.
This isn't a "corporate" organization, it's a delivery unit.
The Mistakes That Break AI Progression (and How to Avoid Them)
"We want an assistant that knows everything"
This is the shortest path to hallucination, lack of KPIs, and user rejection. Start with narrow, but mastered coverage.
"We choose the tool, then look for a use case"
Classic inversion. Start with a frequent job-to-be-done, then choose the approach.
"We only measure usage"
The number of messages does not equal ROI. Measure process or business impact.
"We integrate late"
Without integration, AI remains just another tab. The goal is to act within the workflow, even with minimal integration.
Frequently Asked Questions
What is "AI progression" in business, concretely? It is the ability to deliver AI use cases integrated into workflows, measured (baseline + KPIs), secured (data, rights, logs), and continuously improved, not an accumulation of tools.
Which use case to choose first in an SME? A frequent, measurable, low-risk, and integrable case. For example: internal assistant on documentation, level 0 customer support, document extraction, or automation of repetitive back-office tasks.
Can you really deliver something in 90 days? Yes, if you limit the scope, choose a frequent use case, instrument measurement from the start, and accept progressive production (piloting, guardrails, iterations).
Which KPIs to track in a 30-60-90 day plan? One main KPI (time, volume, resolution rate), 1 to 2 quality KPIs (errors, escalation, satisfaction), a cost KPI (cost per case, human time), and potentially a risk KPI (responses without source, data incident).
Do you need heavy governance to start? No, but you need minimal governance: data rules, human validation on sensitive actions, logs, and a sponsor who arbitrates. Governance must be proportionate to risk and criticality.
Accelerate Your 30-60-90 Day Plan with Impulse Lab
If you want to turn this plan into execution, Impulse Lab supports SMEs and scale-ups with AI opportunity audits, adoption training, and the development of custom web and AI solutions (automation, integrations, dedicated platforms).