Asking an Artificial Intelligence: 12 Truly Useful Prompts
Artificial intelligence
AI strategy
AI tools
Productivity
Automation
April 02, 2026 · 10 min read
You can ask an artificial intelligence many things, but the difference between a "wow" result and a vague answer often comes down to 3 details: context, constraints, and output format. For an SME or scale-up, the goal isn't to "test an AI", it's to save time, standardize quality, and reduce friction in workflows.
This guide gives you 12 truly useful prompts, execution-oriented (sales, marketing, operations, product/IT, management). Each prompt is designed to produce an actionable deliverable, not a discussion.
Before the prompts: the golden rule for getting reliable answers
A language model is good at synthesizing, structuring, proposing options, writing, explaining, generating outlines, and transforming drafts. It is less reliable for facts you haven't provided, numbers, legal matters, and anything that requires ground truth without a source.
Two best practices will save you from 80% of disappointments:
Provide a source of truth (text, notes, CRM extract, existing process, data table). Without it, the AI will fill in the blanks.
Require an output format (table, checklist, email, H2/H3 outline, JSON). Otherwise, you get generic text.
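Requiring a machine-readable format pays off as soon as you post-process answers instead of reading them. A minimal Python sketch of validating a JSON reply before using it (the schema keys are illustrative, not tied to any specific model):

```python
import json

REQUIRED_KEYS = {"summary", "decisions", "actions"}  # illustrative schema


def parse_reply(raw: str) -> dict:
    """Parse a model reply that was asked to answer in JSON.

    Raises ValueError if the reply is not valid JSON or misses keys,
    so calling code can re-prompt instead of silently using junk.
    """
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"reply is not valid JSON: {exc}") from exc
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"reply is missing keys: {sorted(missing)}")
    return data


reply = '{"summary": "Q3 review", "decisions": [], "actions": ["ship v1"]}'
print(parse_reply(reply)["actions"])
```

Failing loudly here is the point: a reply that doesn't match the requested format should trigger a retry, not slip into your workflow.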
The "template" to copy to properly formulate a request
When you don't know how to start, use this skeleton. It forces the AI to work like an operational assistant.
Role: You are [role].
Objective: I want to get [deliverable] for [use case].
Context: [company, target, offer, constraints].
Inputs: Here is the information to use (do not invent anything):
- ...
Constraints:
- If information is missing, ask questions.
- State your assumptions.
- Provide a short version then a detailed version.
Output format: [table / checklist / email / outline / JSON].
Quality criteria: [accuracy, tone, length, vocabulary, compliance].
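If you send requests through an API rather than a chat window, the same skeleton can be assembled programmatically so every request in the team follows it. A minimal sketch (the field names are ours, not a standard):

```python
def build_prompt(role, objective, context, inputs, constraints,
                 output_format, quality):
    """Assemble the request skeleton into a single prompt string."""
    lines = [
        f"Role: You are {role}.",
        f"Objective: I want to get {objective}.",
        f"Context: {context}.",
        "Inputs: Here is the information to use (do not invent anything):",
        *[f"- {item}" for item in inputs],
        "Constraints:",
        *[f"- {c}" for c in constraints],
        f"Output format: {output_format}.",
        f"Quality criteria: {quality}.",
    ]
    return "\n".join(lines)


prompt = build_prompt(
    role="a Chief of Staff",
    objective="a follow-up email for a client meeting",
    context="B2B SaaS, 40 employees, formal tone",
    inputs=["Meeting notes: ..."],
    constraints=["If information is missing, ask questions.",
                 "State your assumptions."],
    output_format="email",
    quality="accuracy, under 180 words",
)
print(prompt)
```

A helper like this is how a "good prompt" becomes a team standard: the structure is enforced by code, and only the content varies.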
12 truly useful prompts (SMEs, scale-ups)
| # | Objective | Expected deliverable | Best time to use it |
|---|-----------|----------------------|---------------------|
| 1 | Summarize and decide | Summary + decisions + actions | After reading a long doc |
| 2 | Convert notes into follow-up | Action plan + email | After a client/internal meeting |
| 3 | Leverage objections | Script + answers | When closing stagnates |
| 4 | Propose a clear offer | Structured proposal | Before sending a quote |
| 5 | Improve a landing page | Recos + tests | Before scaling traffic |
| 6 | Plan content | Calendar + briefs | To structure marketing |
| 7 | Identify quick wins | Prioritized list | To reduce ops load |
| 8 | Document a process | Actionable SOP | To speed up onboarding |
| 9 | Define AI KPIs | KPI table + measurement plan | Before developing a use case |
| 10 | Compare tools | Decision matrix | Before signing a SaaS contract |
| 11 | Diagnose a bug | Hypotheses + patch plan | During an incident or regression |
| 12 | Frame AI usage | 1-page charter | To reduce shadow AI |
1) Summarize a document and extract decisions, risks, actions
Objective: transform a PDF, a spec, or an executive doc into an action plan.
You are an operational analyst.
Objective: Produce an actionable summary of the document below.
Input (source):
[PASTE THE TEXT OR A SIGNIFICANT EXTRACT]
Constraints:
- Do not invent anything, quote extracts when useful.
- Separate facts, decisions, open points.
- Point out unclear areas.
Output format:
1) Summary in 10 lines max
2) Explicit decisions (table: decision | impact | owner | date)
3) Actions to take (table: action | priority | dependencies | effort)
4) Risks and unknowns (short list)
5) Questions to ask to decide
This prompt is particularly effective if you also paste a "context" section (why the doc exists, who decides, when).
2) Transform meeting notes into an action plan + follow-up email
Objective: save time after a meeting and standardize follow-up quality.
You are a Chief of Staff.
Objective: Transform my meeting notes into an action plan and follow-up email.
Context: Meeting with [client/team], objective: [objective].
Raw notes:
[PASTE YOUR NOTES]
Constraints:
- Do not invent decisions.
- If a decision is implicit, mark it as an assumption.
Output format:
A) Action plan (table: action | owner | deadline | initial status)
B) Follow-up email (professional tone, 120 to 180 words)
C) Open questions (max 5)
Tip: if you use a call recorder, also ask for an "exact quotes" section to avoid misinterpretations.
3) Analyze client objections and create a response script
Objective: regain control when you keep hearing the same pushbacks.
You are a B2B sales coach.
Objective: Identify recurring objections and propose answers.
Context: We sell [offer] to [ICP]. Sales cycle: [duration].
Data (email extracts / call notes):
[PASTE 10 to 30 extracts]
Constraints:
- Categorize objections by theme.
- For each objection, propose: clarification question, short answer, proof to provide, pitfall to avoid.
Output format:
Table: objection | typical context | short answer | proof | question | pitfall
Don't look for "the magic formula". Look for a library of consistent answers, based on your proofs (cases, KPIs, demos).
4) Generate a structured commercial proposal (without fluff)
Objective: get a solid, personalized base that is quick to review.
You are a pre-sales consultant.
Objective: Write a structured commercial proposal.
Context:
- Client: [industry, size, challenge]
- Problem: [observed symptoms]
- Objective: [KPI, result]
- Constraints: [budget, deadline, security, GDPR]
Offer:
- What we do: [list]
- What we don't do: [list]
Output format:
1) Executive summary (6 lines)
2) Problem and impact
3) Approach (steps and deliverables)
4) Scope (in / out)
5) Assumptions and prerequisites
6) Indicative schedule (week by week)
7) Risks + mitigations
8) Next steps (CTA)
Constraints:
- No quantified promises without data.
- Direct style, no jargon.
If your offer involves AI, add a "guardrails" section (tests, sources, human validation, traceability). It reassures the client and avoids the "works in the demo, fails in production" effect.
5) Audit a landing page for conversion (and spot the gaps)
Objective: improve performance before scaling acquisition spend.
You are a CRO (conversion rate optimization) expert.
Objective: Audit a landing page and propose testable improvements.
Inputs:
- Page text: [PASTE CONTENT]
- Offer: [what you sell]
- Target: [ICP]
- Main CTA: [e.g., book a call]
Constraints:
- Propose 10 recommendations max.
- Each recommendation must include: hypothesis, effort, expected impact, A/B test.
Output format:
Table: problem | recommendation | hypothesis | effort (S/M/L) | test | KPI
You can add an "accessibility" pass (headings, explicit links, contrasts). To go further, Impulse Lab has a glossary entry on web accessibility.
6) Build a 4-week SEO content plan (intent-oriented)
Objective: stop publishing based on "gut feeling".
You are a B2B content manager.
Objective: Propose a 4-week SEO content plan.
Context:
- Company: [activity]
- ICP: [decision maker, problems]
- Offers: [3 offers max]
- Objectives: [leads, brand awareness, activation]
Constraints:
- Propose only 8 content pieces.
- For each piece, provide: angle, promise, H2/H3 outline, CTA, risk of overly generic content.
Output format:
Table: title | intent | audience | outline | CTA | required assets
The classic mistake is optimizing for "keywords" instead of optimizing for a decision to be made.
7) Identify automatable tasks and realistic quick wins
Objective: spot realistic quick wins, with minimal integration.
You are a process analyst.
Objective: Identify repetitive tasks and propose automations.
Context: [Role] team, current tools: [list].
Task list (one line per task):
[PASTE 20 to 50 tasks]
Constraints:
- Categorize into 3 buckets: simple rules (automation), assisted AI (copilot), AI + integration (agent/RAG).
- For each task: potential gain, risk, required data.
Output format:
Table: task | frequency | pain point | approach | prerequisites | risk | KPI
If you want to industrialize AI (APIs, RAG, agents), keep an integration mindset, not an isolated tool. RAG (Retrieval-Augmented Generation) is often the most profitable building block when you have a knowledge base.
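To make the RAG idea concrete: before prompting, you retrieve the passages most relevant to the question and paste them in as the source of truth. A toy sketch with naive word-overlap scoring (a real system would use embeddings and a vector store; the knowledge-base entries are made up):

```python
def retrieve(question: str, passages: list[str], k: int = 2) -> list[str]:
    """Rank passages by number of words shared with the question (toy scoring)."""
    q_words = set(question.lower().split())
    scored = sorted(
        passages,
        key=lambda p: len(q_words & set(p.lower().split())),
        reverse=True,
    )
    return scored[:k]


kb = [
    "Refunds are processed within 14 days of the return request.",
    "Our office is closed on public holidays.",
    "Return requests must include the original order number.",
]
context = retrieve("How long do refunds take after a return request", kb)
prompt = "Answer using ONLY these extracts:\n" + "\n".join(f"- {p}" for p in context)
print(prompt)
```

The pattern is what matters: the model answers from retrieved extracts, not from memory, which is exactly the "provide a source of truth" rule applied at scale.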
8) Write an SOP (procedure) from a "human draft"
Objective: document a process without spending 2 hours on it.
You are a quality manager.
Objective: Transform my draft into a clear SOP.
Draft:
[PASTE YOUR TEXT]
Constraints:
- Make the procedure executable by a newcomer.
- Add: prerequisites, final check, common errors.
Output format:
Structured SOP with: Objective, Scope, Definitions, Steps, Controls, Exceptions
It's a simple use case, but highly profitable: it reduces dependency on key people and accelerates onboarding.
9) Define KPIs for an AI use case (before developing)
Objective: avoid the "we use it so it works" trap.
You are an ROI-oriented AI PM.
Objective: Define the KPIs for an AI use case and the measurement plan.
Use case: [e.g., internal support assistant].
Current process: [how it works today].
Baseline (if known): [volumes, time, error rate].
Constraints:
- Propose 1 North Star KPI, 2 steering KPIs, 2 guardrails (quality, risk).
- Describe how to instrument the measurement (events, logs, samples).
Output format:
Table: KPI | definition | formula | source | frequency | Go/No-Go threshold
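Once thresholds are in the table, the Go/No-Go decision itself should be mechanical. A minimal sketch (the KPI names and thresholds are illustrative, not recommendations):

```python
def go_no_go(measured: dict, thresholds: dict) -> tuple[bool, list[str]]:
    """Compare measured KPIs against Go thresholds; any miss means No-Go."""
    failures = [
        name for name, minimum in thresholds.items()
        if measured.get(name, 0.0) < minimum
    ]
    return (not failures, failures)


thresholds = {"deflection_rate": 0.30, "answer_accuracy": 0.90}  # illustrative
measured = {"deflection_rate": 0.42, "answer_accuracy": 0.87}
go, failed = go_no_go(measured, thresholds)
print(go, failed)  # accuracy misses its threshold, so No-Go
```

Encoding the thresholds up front is the whole point of this prompt: it removes the temptation to move the goalposts after the pilot.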
10) Compare 3 tools and produce a decision matrix (TCO included)
Objective: choose a SaaS without getting trapped by a demo.
You are a CFO/COO.
Objective: Compare 3 tools for [need] and recommend a choice.
Tools to compare: A, B, C.
Context:
- Users: [number]
- Data: [sensitive or not]
- Required integrations: [CRM, helpdesk, etc.]
- Constraints: [GDPR, SSO, budget]
Constraints:
- Include a "hidden costs" section (implementation, integrations, run).
- Provide a recommendation, then a 10-day test plan.
Output format:
1) Scoring table (weighted criteria)
2) Risk analysis
3) Test plan
For AI risks, frameworks like the NIST AI RMF help structure a proportionate approach.
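The weighted scoring table in point 1 is simple arithmetic: multiply each criterion's score by its weight and sum. A minimal sketch (criteria, weights, and scores are illustrative):

```python
def weighted_score(scores: dict, weights: dict) -> float:
    """Sum of score x weight across criteria (weights should sum to 1)."""
    return sum(scores[c] * w for c, w in weights.items())


weights = {"features": 0.3, "integrations": 0.25, "tco": 0.25, "security": 0.2}
tools = {
    "A": {"features": 4, "integrations": 3, "tco": 2, "security": 5},
    "B": {"features": 3, "integrations": 5, "tco": 4, "security": 4},
    "C": {"features": 5, "integrations": 2, "tco": 3, "security": 3},
}
ranking = sorted(tools, key=lambda t: weighted_score(tools[t], weights),
                 reverse=True)
print(ranking)
```

Agreeing on the weights before the demos is the useful part; the arithmetic just keeps the conversation honest.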
11) Diagnose a bug or regression with a patch plan
Objective: accelerate resolution, without blindly "copy-pasting code".
You are a senior engineer.
Objective: Diagnose the problem and propose a patch plan.
Context:
- Stack: [e.g., Next.js, Node, Postgres]
- Symptom: [what breaks]
- Impact: [affected users]
Data:
- Logs: [extract]
- Code: [minimal extract]
Constraints:
- Provide 3 hypotheses ranked by probability.
- For each hypothesis: validation test, minimal patch, risks.
Output format:
Table: hypothesis | expected proof | test | fix | risk
If your team wants to professionalize reviews, a good Pull Request culture also helps (see the Pull Request glossary entry).
12) Write a simple (and applicable) AI usage charter
Objective: reduce shadow AI and clarify what is allowed.
You are a DPO + transformation manager.
Objective: Write a generative AI usage charter for the company.
Context:
- Size: [SME/scale-up]
- Handled data: [types]
- Used tools: [list]
Constraints:
- Stay pragmatic (1 page).
- Include: data classification, allowed/forbidden examples, review rules, traceability.
- Add a "what to do if in doubt" section.
Output format:
Charter structured in 8 to 12 rules + 6 concrete examples.
For application security related to LLMs, the OWASP Top 10 for LLM Applications project is a good baseline for raising awareness without being alarmist.
How to use these prompts without spending your whole day on them
The trap is multiplying "one-shot" prompts. The most profitable method is to transform your best prompts into team standards.
Keep it simple: choose 2 prompts out of the 12, then standardize.
Add a mandatory "Context" field (offer, target, constraint).
Set a single output format (tables everywhere, for example).
Keep a review checklist (facts, numbers, decisions, tone).
When to move from a prompt to an integrated (and measured) solution
If your prompts become critical (support, sales, operations) you will quickly hit three limits: repetition, tool integration, and governance (rights, logs, costs).
This is often the time to move from a "chat" usage to a more robust approach: opportunity audit, targeted training, then a custom solution integrated into the workflow.
Impulse Lab supports SMEs and scale-ups on exactly these three fronts, with AI audits, adoption training, and the development of custom platforms and automations. If you want to identify 2 priority use cases and turn them into a measurable V1, start with the article on the strategic AI audit, then contact us via impulselab.ai.