News
Retailers' Liability and AI Applications
November 2025
The question of whether the user (consumer) or the provider of an AI application is liable for the infringement of intellectual property rights arising from the use of an artificial intelligence application is an interesting one, and it is increasingly relevant to retailers as they roll out more and more AI applications. The recent Getty Images v Stability AI decision of 4 November 2025 has shed a little more light on this thorny issue.
Liability of intermediaries: an old story
The question of the level of liability of intermediaries in the retail supply and promotion chain is not a new one. As referenced in the Getty Images v Stability AI decision, the Court of Justice of the European Union ('CJEU') has handed down judgments on the liability of search engine platforms such as Google for keyword advertising in the well-known Google France SARL and Google Inc. v Louis Vuitton Malletier SA (C-236/08), on the storage and dispatch of goods in Coty Germany GmbH v Amazon Services Europe Sàrl and others (C-567/18), and on the use of trade marks by third-party advertisers in Daimler AG v Együd Garage Gépjárműjavító és Értékesítő Kft. (C-179/15). In each of these cases, the intermediary was held not liable for trade mark infringement, essentially on the basis that it lacked knowledge of any infringing activity and played no active role in that activity. In essence, the parties were held not to be using the trade marks concerned in the course of trade, an essential prerequisite for a finding of trade mark infringement.
Now we come to AI
Now we come to the question of the liability of AI applications, which is a new phenomenon to consider. In the Getty case, Stability AI attempted to rely on the reasoning put forward in the Google France, Coty and Daimler cases, arguing that it was the user of its AI platforms, and the user alone, who controlled the potentially infringing images created through the prompts the user (consumer) entered into the AI applications. Stability AI was essentially arguing that it was not liable; it was merely a tool, not an active party in any infringing activity. Here the court was considering the question of trade mark infringement based on the reproduction of the GETTY IMAGES and ISTOCK trade marks in images produced by Stability AI's applications. On this point, Stability AI lost.
In cross-examination of a witness, counsel for Stability AI put it that:
“the model is a tool controlled by the user and the more detailed the prompt is, the more control is being imposed.”
The witness responded:
“That is partially true. The user has control over what is prompted. However, what the user does not have control over is what the model is trained on. We have no control over that. What the user does not have control over are any semantic guardrails that might be put on the prompt and any semantic guardrails that we put on the output. Absolutely, the user has control over what it asked for but does not have 100% control over what is coming out the other end.”
Stability AI lost this point because (a) it controlled the data on which its AI applications were trained and (b) in the Getty case at least, the judge seemed to be swayed by the fact that most users did not want the GETTY IMAGES or ISTOCK trade marks appearing on their output images, so it could be argued that users were actively not trying to produce infringing images.
Now we come to retailer and user liability
The exchange between Stability AI's counsel and the witness quoted above highlights, for me, some of the key issues around liability for AI applications. A key issue seems to be control. It seems liability cannot be pushed onto the user if the user has no absolute control over the output of the AI application. Detailed prompts encouraging infringing activity, such as searching for counterfeit products, might make it more likely that liability could be pushed onto the user. However, a user will rarely have complete control over the outputs of an AI application; after all, that is the point of an AI application.
It seems that retailers might wish to concentrate more on mitigating measures to counter infringing activity via their AI applications than on trying to pass the buck to the user. So here we have to consider guardrails. It is interesting to note that Stability AI had put in place so-called 'filtering functionality' so that its applications did not render photo-realistic likenesses of well-known public figures; this was added to combat concerns over fake news and propaganda. Prompts were scanned for celebrity names. So Stability AI was at least aware of the potential for some concerning activity via its AI models.
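Purely by way of illustration, the kind of prompt-level guardrail described in the judgment might look something like the minimal sketch below. The blocked-terms list, function name and matching logic are hypothetical assumptions for this article, not Stability AI's actual implementation:

```python
# Minimal, hypothetical sketch of a prompt-level guardrail.
# The term list and matching logic are illustrative only; a real
# system would use curated watchlists and more robust matching.

import re

# Hypothetical watchlist: trade marks and names the operator
# does not want reproduced in generated images.
BLOCKED_TERMS = ["getty images", "istock", "some celebrity name"]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt is allowed, False if it should be blocked."""
    lowered = prompt.lower()
    for term in BLOCKED_TERMS:
        # Whole-word match to reduce false positives.
        if re.search(rf"\b{re.escape(term)}\b", lowered):
            return False
    return True

# Screen the prompt before it ever reaches the model.
if not screen_prompt("a stock photo in the style of getty images"):
    print("Prompt blocked by guardrail")
```

The point of such a check is simply that the operator, not the user, decides what the model will and will not be asked to produce, which goes directly to the question of control discussed above.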
So, what are the practical takeaways for a retailer with an AI application in place? For a retailer to avoid liability as best it can and fall within the reasoning laid down in the Google France, Coty and Daimler cases, it must, in my eyes:
- Screen the training data of the AI application to avoid infringing material.
- Build guardrails into the AI application to steer users away from infringing activity.
- Put in place takedown procedures to remove infringing material when it is brought to its attention (a minimal sketch of such a takedown procedure follows this list).
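As a hedged illustration of the third point only, a takedown procedure might be recorded along the following lines. All identifiers and structures here are hypothetical assumptions, a sketch of the concept rather than any party's actual system:

```python
# Hypothetical sketch of a takedown register for AI-generated content.
# Identifiers and structure are illustrative assumptions only.

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class TakedownRequest:
    content_id: str    # ID of the generated image or listing
    complainant: str   # e.g. the trade mark owner
    reason: str        # e.g. "GETTY IMAGES mark visible in output"
    received: datetime = field(default_factory=datetime.now)
    actioned: bool = False

class TakedownRegister:
    """Records complaints and marks content as removed once actioned."""

    def __init__(self):
        self.requests: list[TakedownRequest] = []

    def report(self, content_id: str, complainant: str, reason: str) -> TakedownRequest:
        request = TakedownRequest(content_id, complainant, reason)
        self.requests.append(request)
        return request

    def action(self, request: TakedownRequest) -> None:
        # In a real system this would also delete or de-index the content.
        request.actioned = True

# Log a complaint and action it promptly.
register = TakedownRegister()
req = register.report("img-0001", "Getty Images", "ISTOCK watermark in output")
register.action(req)
```

Keeping an auditable record of complaints and prompt removal is the kind of evidence of non-active, responsive conduct that the intermediary defences discussed above turn on.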
These steps would push the retailer as far as possible into the category of inactive enablers of commerce, akin to a keyword advertising platform such as Google, rather than an active participant in any infringing activity. After all, the Google France, Coty and Daimler cases all concerned some arguably infringing activity, but those parties succeeded in their defences because they were considered not to be active in that activity.
What the Getty v Stability AI case does seem to make clear, however, is that once the AI application is deemed to be using the infringing sign in the course of trade, liability will rest with the owner/controller of the AI application, and in future that could be a retailer.
This article was prepared by Partner and Trade Mark Attorney Lee Curtis.