AI Expert: How to Choose One Without Being Sold a Demo
Artificial Intelligence
AI Strategy
AI Risk Management
ROI
April 20, 2026 · 7 min read
In 2026, being impressed by an AI demo is easy. What is difficult is integrating AI into your processes with measurable results, without unnecessary risks (data, compliance, costs). The problem isn't that demos lie, it's that they prove almost nothing about what truly matters in business: integration, reliability, operations, and adoption.
This guide helps you choose an AI expert (freelance, consulting firm, or agency) without falling into the "wow effect" trap. The goal: to leave the meeting with proof, not promises.
Why AI demos are misleading (even when done in "good faith")
A demo is often: a well-prepared prompt, a perfect context, no GDPR constraints, no integration, and no measurement. In production, by contrast, your AI will have to work with real data under real constraints, plug into your actual tools, be measured against a KPI, and survive real-world usage (latency, spikes, process changes).
A good AI expert isn't trying to "do a demo," they are trying to reduce risk and prove value within a clear scope.
What you are really buying when you hire an AI expert
Before choosing someone, clarify what you expect. A useful AI expert isn't just "someone who knows how to prompt" or "who knows the models." In an SMB/scale-up context, you are generally paying for:
ROI-oriented scoping: transforming a vague idea ("let's make an AI assistant") into a testable use case.
Pragmatic architecture choices: API, RAG, agent, automation, or hybrid, depending on the risk.
Integration: connecting the AI to your tools (CRM, helpdesk, ERP, Drive, Slack, etc.).
If a provider doesn't cover (or doesn't know how to cover) these topics, they are probably selling you a capability, not a solution.
The "anti-demo" checklist: 7 criteria to evaluate an AI expert
The idea: judge based on verifiable artifacts. Not on a sales pitch.
1) Do they start with the business problem (and a KPI) or the tool?
A serious AI expert starts by asking you about: frequency, current cost, impact, users, exceptions.
Proof to demand: a scoping document (even a simple one) with:
objective and scope,
target user,
current and future workflow,
"North Star" KPI + baseline,
risks and guardrails.
If the conversation starts with "we're going to implement GPT/Claude" without KPIs or scope, red flag.
2) Can they explain "build vs buy vs assemble" without dogma?
In 2026, many needs are met through assembly (tool + integration + governance) rather than fully custom builds.
Good sign: the expert knows how to say "let's not build that" and proposes a phased approach.
3) Do they talk about integration from the start?
The main differentiator between a demo and value is integration into the workflow.
Useful questions:
"In which tool does the user trigger the action?"
"Where is the result written (CRM, ticket, doc)?"
"What happens when the AI hesitates?"
Proof to demand: a V1 architecture diagram (even minimal) showing: sources, permissions, orchestration, write points, logs.
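To make this concrete, here is a minimal sketch (in Python, with hypothetical helper names standing in for your real search index, LLM API, and CRM client) of the flow such a diagram should capture: permission-filtered sources, orchestration, a write point in a real tool, and a log trace.

```python
# Minimal sketch of a V1 flow: sources + permissions, orchestration,
# write point, logs. Every helper below is a hypothetical placeholder
# for your real stack (search index, LLM API, CRM client).
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("assistant")

def fetch_allowed_docs(user_id: str, query: str) -> list[str]:
    """Hypothetical retrieval layer: returns only documents this user may read."""
    return ["Refund policy v3 (excerpt)"]

def call_model(question: str, context: list[str]) -> str:
    """Hypothetical LLM wrapper: answers strictly from the provided context."""
    return f"Draft answer based on {len(context)} source(s)."

def write_crm_note(user_id: str, question: str, answer: str) -> None:
    """Hypothetical write point: the result lands in a real tool (CRM, ticket, doc)."""

def handle_request(user_id: str, question: str) -> str:
    docs = fetch_allowed_docs(user_id, question)   # sources + permissions
    answer = call_model(question, context=docs)    # orchestration
    write_crm_note(user_id, question, answer)      # write point
    log.info("handled request for %s with %d source(s)", user_id, len(docs))
    return answer

print(handle_request("user-42", "Can this customer get a refund after 30 days?"))
```

If the provider can walk you through each of these four steps for your own stack, the diagram will practically draw itself.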
4) Do they have a reproducible testing method (not a "one-shot" demo)?
Without evaluation, you can't manage anything, and you won't know whether quality is improving.
Proof to demand: a testing protocol including:
a set of real-world cases (10 to 50),
success criteria,
a scoring method,
an iteration plan.
If you don't have "test cases," you don't have a product, you have an intuition.
5) Do they know how to reduce hallucinations concretely?
A good AI expert doesn't promise "zero hallucinations." They implement mechanisms: RAG on sources of truth, citations, constrained responses, refusals when the source is missing.
Strong signal: they talk to you about sources, traceability, and "degraded modes" (human escalation, partial response, request for clarification).
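As an illustration, here is a minimal sketch of that "no source, no answer" guardrail, assuming a hypothetical retrieve() that returns scored passages from your sources of truth and a generate() that answers only from the context it is given.

```python
# Minimal sketch of a refusal guardrail: answer only when relevant sources
# exist, cite them, otherwise escalate to a human. The threshold is an
# assumption to tune against your own test cases.
MIN_RELEVANCE = 0.75
REFUSAL = "I can't answer this reliably from the available sources; escalating to a human."

def answer_with_sources(question: str, retrieve, generate) -> dict:
    passages = [p for p in retrieve(question) if p["score"] >= MIN_RELEVANCE]
    if not passages:
        return {"answer": REFUSAL, "citations": [], "escalate": True}
    answer = generate(question, context=[p["text"] for p in passages])
    return {"answer": answer, "citations": [p["source"] for p in passages], "escalate": False}

# Dummy plumbing just to show the degraded mode in action.
result = answer_with_sources(
    "What is our policy on crypto payments?",
    retrieve=lambda q: [],                      # nothing relevant found
    generate=lambda q, context: "unused here",
)
print(result["answer"])  # -> the refusal, flagged for human escalation
```

Notice that the degraded mode is a feature, not a failure: a clean refusal plus escalation is worth more than a confident guess.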
6) Are they solid on data, security, and compliance (GDPR, AI Act)?
Two baseline reflexes: minimization (don't send more data than necessary) and access control (the AI must not bypass your existing permissions).
You can rely on public frameworks to set the boundaries: the GDPR for personal data, and the European regulatory framework via the EU AI Act (references and consolidated texts).
Proof to demand: a "security and data" page in the proposal (retention, subcontractors, logs, RBAC, consent, DPIA if applicable).
7) Do they consider operations and total cost of ownership (TCO)?
An AI in production often costs more in operations than in "initial development": monitoring, source updates, fixes, support, inference costs.
Proof to demand: a V1 mini runbook + an estimate of cost centers (even a rough one) and what triggers additional costs.
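A back-of-the-envelope sketch of those cost centers follows; every figure is an assumption to replace with your own volumes and your provider's actual pricing, but even rough numbers show where the money goes.

```python
# Rough TCO sketch: inference vs operations. All numbers are assumptions.
requests_per_month = 5_000       # assumed usage
tokens_per_request = 3_000       # prompt + context + answer, assumed average
price_per_1k_tokens = 0.01       # assumed blended price, in euros
ops_hours_per_month = 8          # monitoring, source updates, fixes (assumed)
hourly_rate = 90                 # assumed internal or provider rate, in euros

inference_cost = requests_per_month * tokens_per_request / 1_000 * price_per_1k_tokens
operations_cost = ops_hours_per_month * hourly_rate
print(f"Inference: ~{inference_cost:.0f} EUR/month | Operations: ~{operations_cost:.0f} EUR/month")
```

In this (assumed) scenario, operations outweigh inference by a wide margin, which is exactly why a runbook and named ownership belong in the proposal.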
Proof table: what you should get before signing
| What you should get | What it looks like concretely | The trap it avoids |
| --- | --- | --- |
| ROI-oriented scoping document | Objective, scope, KPI, baseline, risks | "We're making a generic assistant" |
| V1 architecture diagram | Sources, rights, orchestration, write point, logs | Non-integrable demo |
| Testing protocol | Set of cases, scoring, success criteria | Quality impossible to manage |
| Anti-hallucination strategy | RAG, citations, constrained responses, refusals | "Random" answers |
| Security and data plan | Access, retention, minimization, compliance | Data leaks, GDPR-washing |
| Operations plan | Monitoring, runbook, ownership, costs | POC that dies at the first incident |
The 60-minute express test to avoid the "wow effect"
If you only do one thing: replace the demo with a test on real cases.
Prepare 10 cases that reflect your day-to-day work
For example: 5 "simple" requests, 3 "ambiguous" requests, 2 "dangerous" cases (sensitive data, critical decision, out of scope).
Impose production constraints
imposed sources (your docs, your database),
limited time,
imposed output format (e.g., ticket creation, structured email, CRM field).
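Here is a sketch of what that case pack can look like when written down, with assumed categories, sources, and output format; the point is that the cases and constraints exist as an artifact before the meeting, not in anyone's head.

```python
# Minimal sketch of a 60-minute test pack: case categories, imposed sources,
# and the expected output format. Contents below are illustrative assumptions.
TEST_PACK = {
    "imposed_sources": ["internal FAQ export", "CRM sample (anonymised)"],
    "output_format": "structured email draft + CRM field update",
    "cases": [
        {"type": "simple",    "input": "Customer asks for an invoice copy"},
        {"type": "ambiguous", "input": "Customer is 'unhappy with the last delivery'"},
        {"type": "dangerous", "input": "Colleague asks for another client's bank details"},
    ],
}
for case in TEST_PACK["cases"]:
    print(f"[{case['type']:>9}] {case['input']}")
```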
Demand a result, not a discussion
A solid AI expert will: scope, ask for the necessary data, propose a V1 design, and explain how to measure and secure it.
A demo seller will: improvise, deflect, show a "smart" chat, and avoid topics of integration, testing, and rights.
Typical red flags (to take seriously)
"We'll see about KPIs later": often synonymous with an unmanageable project.
"The AI will learn over time" (without a data plan or evaluation): an elegant way to delay proof.
Not a word about access rights: immediate danger for internal assistants.
No human escalation strategy: operational risk.
Vagueness about operations: you are buying an implementation, not a sustainable capability.
Which format to choose: freelance, consulting firm, or agency?
The right choice depends mostly on complexity and risk.
Freelance: very good if you already have clear scoping, a mastered stack, and a focused need.
Consulting firm/consultant: useful for structuring strategy, governance, a use-case portfolio, or an audit.
Agency (web + AI): relevant when you need to deliver an integrated V1 (product + data + integration + security + UX) with a delivery rhythm.
The main criterion is not the status, it's the ability to provide the proofs listed above.
FAQ
What distinguishes a good AI expert from a "demo seller"? A good AI expert quickly talks about scope, KPIs, data, integration, testing, and operations. A demo seller maximizes the "wow" effect without verifiable artifacts.
Should I start with an AI audit or a prototype? If you have several ideas and little clarity, start with an opportunity audit. If you already have a frequent and measurable use case, an instrumented prototype might be faster.
What minimum deliverables should I ask for before launching a pilot? A scoping document, a V1 architecture diagram, a testing protocol, a security/data memo, and an operations plan (even a light one).
How long does it take to prove value without falling into an endless POC? On a well-chosen case, a measurable and integrated V1 is often targeted in a few weeks, not months. The key point is to have KPIs and a validation protocol from the start.
How to manage compliance (GDPR, AI Act) without slowing down delivery? With proportionate governance: data classification, minimization, access control, traceability (logs), and risk-based validation. The goal is auditability, not paperwork.
Going from "the demo" to a measurable V1 with Impulse Lab
If you want to avoid AI projects that shine in meetings but never land in your tools, the right starting point is often an opportunity audit or an instrumented pilot.
Impulse Lab helps SMBs and scale-ups with the audit, integration, and development of web and AI solutions, focusing on execution (automation, integration into your stack, and adoption). You can describe your needs and constraints on impulselab.ai to get pragmatic scoping and compare options based on proof, not a demo.