Who Owns Your AI Advertising Mistakes?

What advertisers need to know about AI liability in advertising production. With legal perspective from Chris Mammen, Womble Bond Dickinson

AI is being used in the production workflow, but is your legal framework ready?

AI tools are generating ad copy, images, video edits, and versioned creative across the advertising industry. There are speed and cost advantages. But is your legal infrastructure around ownership, liability, and protection of the content these tools produce keeping pace?

The AI questions facing advertisers are playing out in active litigation, in recent regulatory decisions, and in contract negotiations that should reflect how production work is being done today.

We spoke with Chris Mammen, an AI thought leader and intellectual property attorney at Womble Bond Dickinson, about where the law currently stands and what advertisers should be doing about it. The conversation covered copyright, right of publicity, brand protection, contractual safeguards, insurance, and internal governance. The short version: the legal resolution of many of these issues is expected to take 3 to 5 years. 

AI-generated content is not eligible for copyright protection in the United States

The U.S. Copyright Office has taken a clear position: AI-generated content cannot be registered for copyright protection. Under U.S. law, authors and inventors must be human. In March 2026, the Supreme Court declined to hear a challenge to this position (denying certiorari), which means the lower court ruling stands and the Copyright Office's policy remains in effect.

For advertisers, this means that if campaign imagery, ad copy, or video content was generated by AI, that content may not be protectable as intellectual property. A competitor could potentially use similar or identical outputs without legal consequence, because there is no copyright to enforce. Hybrid human-AI content, where the work product is part human-created and part AI-generated, is copyrightable, but only to the extent of the human-created portion.

Chris Mammen points out that this creates a brand protection problem that goes beyond legal liability. A brand's visual identity, its distinctive creative assets, and its campaign imagery have traditionally been protectable. When those assets are AI-generated, that protection may not exist.

The international picture adds another layer. British law, for example, at least for now recognizes copyright in computer-generated works, a protection U.S. law does not offer. For global brands producing content that will run across multiple jurisdictions, the legal status of the same piece of content can differ depending on where it appears and where it was created. As a global resource for our clients, BBS recognizes that this divergence is directly relevant to how production agreements are structured across markets.

The black box problem: inputs, outputs, and what happens inside

AI tools are a black box. We control what goes in (prompts, reference images, brand assets, source material) and receive what comes out (generated content). What happens between those two points is largely opaque and can’t be audited.

This matters because copyrighted material can be reproduced in AI outputs without the user's knowledge or intent. AI models are trained on large datasets that often include copyrighted works, and the outputs can reflect that training in ways that are difficult to detect. The question of whether this constitutes infringement, and who bears the liability when it does, is currently being tested in court. The Getty Images lawsuit against Stability AI and the New York Times lawsuit against OpenAI are among the most prominent cases addressing this issue.

For advertisers, the risk is that AI-generated content used in a campaign could contain elements derived from copyrighted material. If a rights holder identifies the infringement, the advertiser may face claims regardless of whether the infringement was intentional or even known. The question of who bears that cost, the advertiser, the agency, or the AI tool provider, depends on what the contracts say. In many cases, contracts may not say anything about it yet.

Right of publicity, digital likenesses, and state-level restrictions

Copyright is only one dimension of the legal landscape. The right of publicity, which covers the use of a person's name, image, likeness, and voice, is a separate area of law that AI has made more complicated.

Several states have enacted laws that specifically address AI in this context. California and Tennessee, among others, have restrictions on the use of AI-generated content when a human performer could be used instead. These laws are particularly relevant to advertising production, where talent, voice, and likeness are central to the work.

There have been a number of cases where these rights have been asserted; some of the higher-profile ones have not proceeded to litigation, or have been settled by the parties. As Chris Mammen notes, from a legal perspective, that pattern has a specific consequence: settlements do not create case law. Without decided cases establishing clear precedent, the legal boundaries remain undefined. Companies are paying to resolve individual disputes without the industry gaining any clarity about where the actual lines are.

For advertisers using AI voices, digital likenesses, or AI-generated imagery that resembles real people, the risk varies by jurisdiction and by the specific facts of each use. The absence of clear precedent means that compliance requires a conservative approach and careful attention to the laws of each market where the content will appear.

What contracts should say now

Given the current state of the law, the most actionable step for advertisers is to address AI liability directly in their production and agency contracts. Chris Mammen emphasizes several specific provisions that should be in place:

Copyrightability representations. Contracts between advertisers and agencies should specify whether the content delivered will be copyrightable. Requiring copyrightability means obtaining representations and warranties from the agency or production partner that the work product meets the legal standard for copyright protection, which under current U.S. law means it must be the product of human authorship.

AI disclosure requirements. Contracts should require agencies and production partners to disclose when and how AI tools are used in the production process. This includes specifying what inputs are being provided to AI tools, what role AI plays in the final deliverable, and whether the output has been modified by human creators to a degree that may affect its legal status. At the same time, this should be subject to a rule of reason; with each passing month, more and more technology tools incorporate AI features as standard elements.

Liability allocation. Contracts can allocate risk between the parties. If an agency delivers AI-generated content that turns out to infringe on a third party's rights, the contract should make clear who bears the cost. Chris Mammen makes an important distinction here: parties can agree between themselves on how to allocate risk, but they cannot contract with respect to the rest of the world. A contract can determine who pays between advertiser and agency, but it cannot prevent a third party from bringing a claim against either of them.

Indemnification. Standard indemnification clauses should be reviewed and updated to specifically address AI-related risks, including copyright infringement through AI outputs, right of publicity violations through AI-generated likenesses or voices, and failure to comply with state-level AI restrictions.

Insurance may not cover the exposure

A separate concern raised in the conversation is whether existing insurance coverage accounts for AI-related liability. Cyber liability policies have become standard for most agencies, but those policies were designed primarily around data breach and privacy exposure.

AI-generated content creates a different category of risk. The damage from an AI-related brand incident, such as campaign imagery that infringes on a competitor's intellectual property, or AI-generated content that triggers right of publicity claims, is arguably different in kind from a data breach. As Chris Mammen observes, the potential impact of brand damage can exceed the impact of data loss, and insurance policies should be evaluated to determine whether they cover this type of exposure.

The internal management challenge

Beyond the legal and contractual dimensions, there is a practical management issue to address. Employees and agency partners are using AI tools in their work, and in some cases they are not disclosing it. The reluctance is understandable. There is a perception that admitting to AI use undermines credibility or raises questions about the quality of the work.

This translates into added risk for the advertiser. If AI use is not tracked and disclosed, the advertiser has no way to assess whether their content is copyrightable, whether AI-related right of publicity issues exist, or whether state-level restrictions have been observed. And because AI usage is often technically trackable (through metadata, tool logs, and digital forensics), a failure to disclose AI use that later comes to light during discovery can be more damaging than the use itself.

Organizations need clear internal policies governing when and how AI tools can be used, what disclosure is required, and what approvals are needed. BBS developed an AI Usage Policy for its own operations in early 2026 that addresses these issues, and we believe every organization involved in advertising production should have something similar in place. At the same time, because this technology is evolving so rapidly, AI policies should be reviewed and updated regularly.

What advertisers should be doing now

The legal landscape around AI in advertising production will take years to fully resolve. Chris Mammen estimates 3 to 5 years before the courts, regulators, and industry reach settled positions on the core questions of copyright, liability, and right of publicity as they relate to AI-generated content.

Advertisers do not have the luxury of waiting. AI is in the production workflow today, and the decisions being made now about how it is used, disclosed, and governed will determine the organization's exposure when the legal landscape does stabilize.

Based on our conversation with Chris Mammen, we would recommend that advertisers take the following steps:

1. Audit your current production contracts and agency agreements for any mention of AI-generated content, copyright representations, and liability allocation. If those provisions are not present, they should be added.

2. Require disclosure of AI use from all production partners, including agencies, production companies, post-production vendors, and content creators. Specify what level of disclosure is expected and at what stage of the production process.

3. Include copyrightability representations and warranties in all content delivery agreements. If an agency or production partner is delivering creative work for which IP protections are important, the contract should state that the work is eligible for copyright protection.

4. Review your agencies’ insurance coverages to determine whether cyber liability or other policies cover AI-related brand damage, intellectual property infringement through AI outputs, and right of publicity claims.

5. Establish internal AI usage policies that govern how your own teams use AI tools, what approvals are required, and what disclosure obligations exist.

6. Monitor state-level legislation, particularly in California and Tennessee, as well as international developments (such as the UK's different treatment of computer-generated works) that affect how AI-generated content is treated in the markets where your advertising appears. Quarterly or semi-annual check-ins with your intellectual property firm(s) may be a way to keep pace.

While we can’t wait for the law to settle, we can work to reduce total exposure in the meantime.

Chris Mammen is an AI thought leader and intellectual property attorney at Womble Bond Dickinson. BBS Worldwide is an independent advertising production consulting firm.

This article is for informational purposes only and does not constitute legal advice. The legal perspectives shared here reflect general observations about the current state of the law and should not be relied upon as guidance for specific legal decisions. Readers should consult qualified legal counsel for advice on their particular circumstances.