AI is everywhere in recruitment: CV sorting, matching, sourcing, candidate chatbots, job ad writing assistance, scoring, interview summaries... The productivity gain is real, but so is the risk: recruitment processes high-impact personal data, and certain practices quickly cross the line into "profiling" or automated decision-making. As a result, an artificial intelligence recruitment project without GDPR safeguards can create legal, reputational, and social risks, precisely when your company is trying to professionalize its processes.
This guide offers a pragmatic GDPR compliance checklist, designed for SMEs and scale-ups: what to check, what to document, and what evidence to demand from your tools and service providers.
Artificial intelligence recruitment: what exactly are we talking about?
In practice, we often slap the "AI" label on very different building blocks. On the GDPR side, what matters is not the vendor's marketing, but the type of processing and its impact.
Common examples in recruitment:
CV parsing and extraction (skills, experiences, degrees).
Job-candidate matching (scores, "fit").
Sourcing (searching internal databases, job boards, sometimes the web).
HR chatbot to pre-qualify, schedule, answer questions.
The more you do scoring, ranking, or automated recommendation, the more solid you must be on: transparency, minimization, DPIA, and human oversight.
The GDPR basics to frame before the checklist (without unnecessary jargon)
1) Who is responsible for what? (data controller vs. data processor)
Your company is generally the data controller: you decide the purposes (recruiting), the essential means, and the criteria.
The vendor of the AI tool (ATS, matching engine, LLM, interview tool, etc.) is often a data processor (or sometimes a joint controller if the purposes are jointly determined).
This point determines your contractual obligations and your evidence (DPA, security measures, sub-processors, transfers).
2) What data are you processing? (and which are "at risk")
A CV can contain:
Identification data (name, email, phone).
Professional data (career path, skills).
Potentially sensitive or "high-risk" data depending on the context (photo, age, nationality, disability, opinions, trade union membership, etc.).
Even if a "special category" is not explicitly requested, it can be inferred (e.g., photo, address, associations). Minimization and usage rules are therefore central.
3) What legal basis? (often legitimate interest, sometimes consent)
In recruitment, the legal basis is often:
Pre-contractual measures: processing an application to evaluate and respond.
Legitimate interest: organizing recruitment, securing the process.
Consent is rarely the best basis, because it must be freely given and revocable, which is hard to guarantee given the imbalance between candidate and employer. However, it can be justified for certain optional processing (e.g., keeping a candidate in a talent pool beyond usual retention periods, according to your policy).
For practical benchmarks in France, the CNIL publishes recommendations on HR and recruitment processing, particularly retention periods (see CNIL).
GDPR checklist for an AI recruitment project ("evidence" oriented)
The goal of this checklist: to let you move from an AI sales pitch to a deployable, auditable, and defensible system in the event of an internal question (Works Council / CSE), the exercise of rights, or an inspection.
A. Framing: purposes, scope, and "terms of use"
Before talking about models, write down in black and white:
Purpose: "select candidates for position X", "pre-qualify", "help sort", etc.
Input data: CVs, form answers, interview notes, transcriptions.
Who decides: the AI proposes, the human decides (or not).
Channels: ATS, email, chat, video call, phone.
Key point: if the AI significantly influences the sorting (even via "recommendations"), you must frame the usage rules to avoid implicit "autopilot".
B. Candidate information: transparency and enforceability
Check that your candidate information (form, career page, emails) covers:
The existence of recruitment assistance processing including automated tools.
The exact purposes (sorting, matching, pre-qualification, scheduling, summary).
The categories of data processed.
The recipients (HR, managers, service providers).
Retention periods.
Rights (access, rectification, objection, erasure, restriction, portability depending on the case).
Where applicable, transfers outside the EU and safeguards.
For high-impact processing, it is recommended to be able to explain the main logic (e.g., "searching for a match of skills and experiences against the job offer"), without revealing a trade secret.
Useful reference on information obligations: GDPR, Articles 13 and 14.
C. Automated decision-making and profiling (Article 22): major vigilance point
The GDPR regulates decisions based solely on automated processing that produce legal effects or similarly significant effects on the person; rejecting a job application typically qualifies.
Ask yourself three simple questions:
Is the "rejection" decision made without real human review?
Are the criteria applied automatically (threshold, score)?
Does the candidate have an effective opportunity to contest and obtain human intervention?
If the answer leans towards "yes", you are in a zone where you need to strengthen:
The recruiter's role (effective review, ability to override).
The traceability of decisions.
The explanations and procedures for exercising rights.
D. Minimization: reduce the input, reduce the risk
In AI, minimization is not just about "collecting less", but also about sending less to external systems.
Concrete checks:
Mandatory fields strictly necessary.
Removal of unnecessary "free text" areas (often a source of sensitive data).
Photo policy: accepting photos increases the risk of bias and of collecting sensitive data.
For LLMs, implementing "no paste" rules on certain data, or automatic redaction before sending.
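The "automatic redaction before sending" idea can be sketched in a few lines. The patterns below are illustrative only, assuming simple email and phone formats; a production setup would use a dedicated PII-detection library and cover more categories (addresses, birth dates, national ID numbers...).

```python
import re

# Illustrative patterns only; not exhaustive PII detection.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d .-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace direct identifiers with placeholders before any external call."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

excerpt = "Contact: jane.doe@example.com, +33 6 12 34 56 78"
redacted = redact(excerpt)  # "Contact: [EMAIL], [PHONE]"
```

The key design choice is where this runs: redaction must happen on your side, before the text leaves your system, not inside the vendor's tool.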
E. Subcontracting: DPA, localization, transfers, and sub-processors
For each AI tool touching applications:
Signed Data Processing Agreement (DPA), with a description of the processing.
List of sub-processors (hosting, analytics, support) and objection mechanism.
Data localization, transfers outside the EU, and safeguards (e.g., Standard Contractual Clauses).
Commitments on data usage (training, product improvement, retention).
These are decisive elements to avoid "GDPR-washing": a commercial promise is not enough, you need clauses and evidence.
F. Security: access, encryption, logs, and environment separation
Recruitment is inherently sensitive processing. Check:
Access control (RBAC): HR, managers, admin, service providers.
MFA/SSO if possible.
Encryption in transit and at rest.
Logs: access, export, deletion.
Management of exports (CSV) and sharing.
Test environments: no real data in the sandbox.
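The first two checks (RBAC and logs) can be combined: every access attempt, allowed or denied, should leave a trace. A minimal sketch, with roles, permissions, and field names invented for illustration:

```python
from datetime import datetime, timezone

# Illustrative role-permission mapping; align with your real org chart.
ROLE_PERMISSIONS = {
    "hr": {"read", "export", "delete"},
    "manager": {"read"},
    "support": set(),
}

access_log: list[dict] = []

def check_access(user: str, role: str, action: str, candidate_id: str) -> bool:
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    # Log every attempt, allowed or not: denied attempts are audit signals too.
    access_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user, "role": role, "action": action,
        "candidate": candidate_id, "allowed": allowed,
    })
    return allowed
```

In practice this logic lives in your ATS or identity provider, but the evidence to demand is the same: a permission matrix plus an append-only log of accesses and exports.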
G. Retention periods: simple rule, clear policy, real purging
You must define and apply a consistent retention period (unsuccessful application, talent pool, unsolicited applications). In France, CNIL recommendations are often used as an operational reference.
Concrete checks:
Durations documented in your register.
Automatic purge mechanism.
Proof of purge (logs, reports).
Process in case of an erasure request.
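An automatic purge with proof of purge can be as simple as the sketch below. The two-year duration is an example only; align the actual figure with your documented policy (e.g., CNIL recommendations), not with this code.

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=730)  # example value; set per your policy

def purge(applications: list[dict], now: datetime) -> tuple[list[dict], list[dict]]:
    """Return (kept, purge_report). The report is your 'proof of purge'."""
    kept, report = [], []
    for app in applications:
        if now - app["received_at"] > RETENTION:
            report.append({"candidate_id": app["candidate_id"],
                           "purged_at": now.isoformat()})
        else:
            kept.append(app)
    return kept, report

now = datetime(2026, 1, 1, tzinfo=timezone.utc)
apps = [
    {"candidate_id": "c-1", "received_at": datetime(2023, 6, 1, tzinfo=timezone.utc)},
    {"candidate_id": "c-2", "received_at": datetime(2025, 6, 1, tzinfo=timezone.utc)},
]
kept, report = purge(apps, now)
# c-1 exceeds the retention period and is purged; c-2 is kept.
```

Run on a schedule, the report gives you exactly the evidence the checklist asks for: when each expired application was deleted.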
H. DPIA (Data Protection Impact Assessment): when to do it, and what it's for
A DPIA is often relevant, or even required, if you do profiling or scoring of candidates, especially on a large scale.
The goal is not a document "for the sake of it", but a tool to decide:
What risks (bias, exclusion, leak, error)?
What measures (human review, minimization, tests, audits, logs)?
What is the acceptable residual risk?
I. Quality, bias, and explainability: the "GDPR + HR" angle
The GDPR is not a broad "anti-bias" law, but an AI recruitment process must be defensible.
Pragmatic measures:
Define explicit business criteria (skills, experiences, prerequisites).
Avoid proxy signals (address, school, gaps in the CV) if not justified.
Test on real cases: false positives, false negatives, atypical profiles.
Provide a channel for human recourse.
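The "test on real cases" step can start very simply: compare the tool's recommendation against the recruiter's final decision, overall and per profile segment. The field names and segments below are illustrative assumptions.

```python
from collections import Counter

def error_counts(records: list[dict]) -> Counter:
    """Count disagreements between the AI recommendation and the human decision."""
    c = Counter()
    for r in records:
        if r["ai"] == "advance" and r["human"] == "reject":
            c["false_positive"] += 1
        elif r["ai"] == "reject" and r["human"] == "advance":
            c["false_negative"] += 1
    return c

records = [
    {"ai": "reject", "human": "advance", "segment": "career_gap"},
    {"ai": "advance", "human": "advance", "segment": "typical"},
    {"ai": "reject", "human": "reject", "segment": "typical"},
]
overall = error_counts(records)
by_segment = {s: error_counts([r for r in records if r["segment"] == s])
              for s in {r["segment"] for r in records}}
# A cluster of false negatives in one segment (e.g. atypical career paths)
# is a signal to review the criteria, not just the model.
```

Even this crude count makes the HR conversation concrete: it shows whether the tool systematically screens out profiles a human would have advanced.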
J. Documentation: your best ally in case of an inspection
Keep (and make accessible):
Record of processing activities (purposes, bases, durations, recipients).
DPIA if necessary.
DPA and security appendices.
Internal usage policy (who can do what, what not to do).
Procedures for exercising rights.
Validation elements (tests, criteria, go/no-go decisions).
"Expected evidence" scorecard: a table to copy
| Checkpoint | Simple question | Expected evidence | Risk if absent |
|---|---|---|---|
| Candidate transparency | Does the candidate know an AI helps with sorting or pre-qualification? | Up-to-date GDPR notice / mentions | Non-compliance, mistrust, disputes |
| Automated decision | Can a human actually review and change it? | Procedure, logs, UI showing override | Article 22 risk, discrimination |
| Minimization | Are we only sending what is necessary? | Field mapping, redaction | Data leak, over-collection |
| Processors | Does the tool use the data to train itself? Where is the data? | DPA, usage clauses, localization | Lack of control, illegal transfer |
| Security | Who accesses applications and exports? | RBAC, MFA/SSO, logs | Leak, unauthorized access |
| Retention | Do we really delete at the deadline? | Rules + automatic purge + proof | Excessive storage, CNIL risk |
| DPIA | Is the risk assessed and reduced? | DPIA with measures and residual risk decision | Unmanaged high-risk processing |
The classic trap in 2026: "we just used an LLM to help"
Many teams start by copy-pasting CVs into a consumer tool to summarize, classify, or draft responses. Typical risks:
Candidate data sent outside of any contract (no DPA).
History kept by default.
Lack of traceability and access rules.
Mixing sensitive data in prompts.
If you want to move fast without exposing yourself: formalize a charter of use, set up a gateway or a pro tool with a DPA, and standardize inputs (templates) to minimize.
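"Standardize inputs (templates) to minimize" can mean something as simple as a fixed structure where identifying fields do not exist, so they cannot be pasted by accident. A minimal sketch, with invented field names:

```python
from dataclasses import dataclass, asdict

@dataclass
class CandidateSummaryInput:
    """Only the fields the purpose requires leave your system.
    Deliberately absent: name, contact details, photo, free-text sections."""
    role_applied: str
    years_experience: int
    skills: list[str]

def to_prompt(c: CandidateSummaryInput) -> str:
    data = asdict(c)
    return ("Summarize this candidate's fit for the role "
            "using only these fields:\n"
            + "\n".join(f"- {k}: {v}" for k, v in data.items()))

prompt = to_prompt(CandidateSummaryInput("Data Analyst", 4, ["SQL", "Python"]))
```

The template enforces minimization structurally: the prompt can only ever contain the whitelisted fields, which is far more reliable than asking people not to paste.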
FAQ
Is an AI CV sorting tool necessarily prohibited by the GDPR? No. It is possible if you respect GDPR principles (transparency, minimization, security), and if you frame the risk of automated decision-making with real human review.
Do we need to ask for candidates' consent to use AI in recruitment? Not necessarily. Many processing operations rely on pre-contractual measures or legitimate interest. However, you must inform clearly and allow the exercise of rights.
When should a DPIA be done for artificial intelligence recruitment? As soon as the processing presents a high risk, typically large-scale scoring/profiling, automated pre-selection, or combining sources. In practice, many AI recruitment systems justify a DPIA.
Can we keep CVs for future recruitment? Yes, but with a defined duration, clear information, and a purge mechanism. Refer to CNIL recommendations and your internal policy.
Can a service provider reuse our CVs to train its model? It depends on the contract. Without strict framing, it's a major risk. Demand clear clauses on usage (training or not), retention, localization, and sub-processors.
Need a solid GDPR framework before deploying AI in recruitment?
If you want to industrialize an artificial intelligence recruitment project without multiplying the risks, Impulse Lab can help you frame the use case, verify GDPR compliance (and associated governance), and then build an integrated solution with your tools (ATS, CRM, HRIS) with the necessary safeguards.
You can start with an opportunity and risk audit, or an adoption training for your HR teams and managers. Discover Impulse Lab's approach on impulselab.ai.