February 03, 2026 · 9 min read
The generation of text, images, video, or music via AI has become a reflex in many SMEs and scale-ups. But as soon as this content leaves the prototype stage (website, marketing campaign, product documentation, design, training), one question always returns: is a work created by artificial intelligence protected, and by whom?
This subject is not theoretical. It affects the value of your assets (content, creations, brands), your contracts (clients, service providers, agencies), and your risk management (infringement, image rights, confidentiality).
1) The key point in France: copyright protects a creation… by a human
In France, copyright (droit d’auteur, Intellectual Property Code) rests on a structuring idea: the author is a natural person, and the work must be original, meaning marked by creative choices specific to its author.
Legal basis: general principles of copyright in the Intellectual Property Code, notably the opening provisions on the ownership of rights and the protection of works (Legifrance, CPI).
On the EU side, the notion of originality is classically formulated as an "intellectual creation of the author's own" in CJEU case law (e.g., Infopaq).
Practical consequence: if content is generated "all by itself" by a system (without sufficient human creative input), it is difficult to qualify it as a work protected by copyright in France.
2) Three frequent situations (and what they imply)
In real life, we don't just have "100% AI" or "100% human." We have gray areas. Here is a simple grid.
| Situation | Example | Likelihood of copyright protection (FR/EU) | Business implication |
| --- | --- | --- | --- |
| Automatic generation | An image produced from a very generic prompt, without retouching | Low: the human input is difficult to characterize | You risk holding an asset that is difficult to defend (copying by a third party, disputes) |
| AI-assisted creation | The human iterates, chooses, composes, retouches, and arbitrates the final rendering | Higher, if human creative choices are demonstrable | Documenting the process becomes strategic |
| AI as a tool in a creative workflow | AI used as a building block (variants, cleanup, drafts), then production finalized by a creative | Generally the most defensible situation | Treat AI as a tool, like Photoshop or a synthesizer, and keep evidence |
Key takeaway: it is not the tool that decides, it is the level and nature of human contribution.
3) "Who owns the rights?" depends first on "who is the author"
If we are in a scenario where copyright protection is plausible, the next question is: who is the author, and who exploits it?
Author (natural person) vs. company
Under French law, the author is in principle the natural person. A company can exploit the rights if:
- it has obtained an assignment of rights (contract, clear clauses, scope, territories, duration),
- or a specific statutory regime applies (e.g., software created by an employee in the course of their duties, where economic rights vest in the employer under the CPI).
For marketing content, design, visuals, and text, the operational rule remains: contractualize.
Service provider, employee, co-creation: watch out for blind spots
Some frequent traps when AI enters the production chain:
- A freelancer generates AI visuals and delivers them to you without a formal assignment: you hold a file, not necessarily robust exploitation rights.
- Several people "co-prompt" and assemble: this can create a co-authorship situation, or at least a governance blur.
- An AI tool imposes usage conditions (license, restrictions): even if your creative contributed, the tool's contract may limit certain uses.
4) The real risk is not just "absence of rights," it is infringement (and third-party rights)
Even if your output is not (or not surely) protectable, it can still infringe on existing rights. This is often where disputes arise.
Copyright and similarity
An AI can produce an output very close to an existing work (composition, distinctive elements, recognizable style in some cases). The risk increases when:
- you explicitly ask for "in the style of…",
- you work with very identifiable references,
- you target universes where originality plays out in the details (illustration, characters, packshots).
Trademarks, trade dress, distinctive signs
Even without copying a work, you can collide with a trademark (a similar logo, a name that is too close, distinctive elements). For brand assets, a minimal availability check (similarity search) is often more useful than debating authorship in the abstract.
Image rights, voice, personal data
Face of a real person, voice imitation, "deepfake": the risk is both civil and reputational.
In a corporate setting, using personal data in a prompt (clients, employees, support tickets) creates a GDPR risk.
On this point, if you use free or poorly configured tools, the risk is primarily data leakage. Impulse Lab has published a useful guide on precautions to take with free tools: Free AI: Useful tools without compromising your data.
5) EU Framework: text and data mining, opt-out, and new transparency obligations
Two EU texts are already structuring many operational discussions in 2026.
DSM Directive (2019/790) and Text & Data Mining
Directive (EU) 2019/790 provides exceptions for text and data mining (TDM) and, depending on the case, allows rights holders to reserve their rights (opt-out) in an appropriate manner.
Implication: depending on the sources and the presence of an opt-out, training or data collection can be more or less contestable. For a company, this translates primarily into a supplier question: where does the training data come from, and what contractual guarantees exist?
AI Act: transparency and GPAI
The AI Act (European regulation on AI) establishes transparency and compliance obligations for certain systems, including requirements related to general-purpose AI (GPAI) models.
Institutional reference: information and adoption steps on the Council of the EU website.
Implication: for product and legal teams, the trend is clear: more documentation requirements, and therefore a stronger incentive to choose providers who give usable guarantees (terms of use, data policy, opt-out options, indemnification clauses, etc.).
6) How to secure your AI content, without blocking yourself
The goal is not to "do nothing anymore." The goal is to have a production hygiene adapted to your risk level.
A) Define a simple internal policy (3 levels are enough)
You can start with a very operational classification: the higher the content's risk level, the more you require in terms of human validation, traceability, source control, and contractual clauses.
B) Document human contribution (proof, not poetry)
If you want to be able to claim rights (or at least defend the originality of a rendering), document:
- iterations (key prompts, parameters, versions),
- choices (selection, composition, art direction),
- retouching (source files, layers, history),
- the role of each contributor.
This can stay lightweight, but it must be consistent. A well-kept project folder beats an after-the-fact debate.
C) Contractually frame your service providers (and your deliverables)
For content produced with AI, your contracts (or purchase orders) should clarify:
- who is responsible for prompts, sources, and validations,
- the assignment or license of rights over delivered elements (when applicable),
- a clause on the use of AI tools and the associated constraints,
- guarantees in case of infringement of third-party rights (realistic indemnification level, takedown process).
The goal is to avoid a "black hole" between production, delivery, and exploitation.
D) Implement a "brand and rights" check before diffusion
Without turning every campaign into an obstacle course, a proportionate check can include:
- a similarity search for names and slogans (trademarks),
- a check for visual elements too close to a known universe,
- verification of image rights if a face appears,
- a GDPR check if personal data was injected.
7) Decision checklist: 10 questions to ask before publishing
Rather than looking for a single answer ("it's mine" / "it's free"), ask yourself these questions:
1. Is the asset strategic (brand, long-term reuse, mass distribution)?
2. Can human creative choices in the final rendering be described?
3. Do we have proof (versions, retouching, selection)?
4. Is a service provider involved, and does the contract cover assignment/license and guarantees?
5. Does the AI tool used grant usage rights compatible with your intended exploitation (commercial, worldwide, exclusive or not)?
6. Did you explicitly ask for an artist's style, a brand, or a character?
7. Are there identifiable persons (face, voice)?
8. Was internal or personal data used in the prompt or source files?
9. Is there a rapid takedown plan if a complaint arrives?
10. Who validates internally (marketing, product, legal) according to the risk level?
8) What leaders need to remember (short version)
A work created by artificial intelligence is not automatically protected by copyright in France; human contribution remains central.
The main risk, in practice, is often infringement of third-party rights (works, trademarks, image rights, data).
The best strategy is a combination: light process, traceability, contracts, proportionate controls.
Frequently Asked Questions
Is an image generated by AI protected by copyright in France? Often, protection is uncertain if the image is produced without characterizable human creative input. If a human directs, selects, composes, and retouches substantially, protection is more defensible.
Who is the author of content created with a tool like ChatGPT or an image generator? Under French law, the author is in principle a natural person. A tool is not an author. The question therefore becomes: who, among the humans involved, made sufficient creative choices on the final result?
Can one claim exclusivity on AI content? Exclusivity depends on two things: the possibility of protection (human originality) and contractual conditions (AI tool, service provider). Without these two pillars, exclusivity is fragile.
Is a prompt protected? A prompt can be protected if it is original and expresses a creation (it is not automatic). In the company, the subject is primarily contractual and regarding confidentiality: who owns and reuses the prompts, and how they are documented.
What are the most frequent legal risks for an SME? The most frequent are the involuntary reproduction of elements too close to a known work, infringement of a trademark (name, logo), and uncontrolled use of data (GDPR, trade secrets) in AI tools.
What does the AI Act change for companies using generators? For most SMEs, the indirect effect is the most important: transparency and documentation requirements on the supplier side, and rising client expectations (contracts, compliance, traceability).
Need to secure your AI content and workflows without slowing down your delivery?
At Impulse Lab, we help SMEs and scale-ups put AI into production with a value and risk-oriented approach: scoping, integration, automation, and team training.