
AI Observer, Issue 1: Reflections on the legal landscape

By Caroline Day, Partner

Welcome to the HLK Newsletter series tracking the fast-changing legal landscape around AI. Over the next few months, we’ll be keeping a close eye on legislation, government policy and case law to help you stay on top of the issues which could impact you and your business.

Generative AI in the Courts

This week, we’ll be taking a look at Getty Images vs Stability AI. At the beginning of this year, stock photo provider Getty Images commenced proceedings before the High Court in London, as well as in Delaware in the US. This case is one to watch for a number of reasons. But first a bit of background.

Stability AI is one of the entities behind Stable Diffusion, which generates images based on text prompts. At a high level, to train a Stable Diffusion model, an image produced by removing noise from a noisy image is compared to a training image and scored. High-scoring outcomes may be associated, in the resulting model, with text describing the training image. Once a model is fully trained, it can produce images from a random noise field based on text prompts, without any directly comparable ‘original’ image existing.
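For readers curious about the mechanics, the training cycle described above can be sketched in miniature. The code below is a deliberately simplified, hypothetical illustration: a single per-pixel weight array stands in for the large neural network a real diffusion model uses, and there is no text conditioning. It only mirrors the loop the paragraph describes: corrupt a training image with noise, have the model predict that noise, score the prediction, and update the model.

```python
import numpy as np

rng = np.random.default_rng(0)

def training_step(image, W, lr=0.05):
    """One toy training step: corrupt the image with noise, have the 'model'
    predict that noise, score the prediction, and nudge the weights."""
    noise = rng.standard_normal(image.shape)
    noisy = 0.5 * image + 0.5 * noise      # blend the training image with noise
    predicted = W * noisy                  # the model's guess at the added noise
    error = predicted - noise
    loss = float(np.mean(error ** 2))      # the "score": lower is better
    W -= lr * 2 * error * noisy            # per-pixel gradient step (in place)
    return loss

image = rng.uniform(size=(8, 8))           # stand-in for one 8x8 training image
W = np.zeros_like(image)                   # stand-in "model": one weight per pixel
losses = [training_step(image, W) for _ in range(200)]
# The score improves over training: the model gets better at predicting the
# noise, which is what later lets it turn pure noise into an image.
```

In a real system the weight array is replaced by a neural network, the noise level varies across steps, and the prediction is conditioned on a text description of the training image, which is what makes text-prompted generation possible.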

So why is this case one to watch? Well, first, Getty are alleging infringement at the point of processing data to train the model. This could be a more straightforward fight to win than arguing, say, that an output image is a copy, or a derivative work, of the images in the training data. Getty allege that Stability AI used versions of the ‘text and image pairings’ available on Getty’s website to train its model. If you look for it, there are plenty of suggestions of copying there: scraping data from Getty’s website could be copying, and a further encoding step could create a further copy or a derivative work. While Stability AI are likely to say that any such copying was fair use, Getty have argued that it is in express violation of prohibitions contained in the terms of use of their website.

It is worth noting that Getty do in fact allow use of their images as training data, and indeed they have recently launched their own “Commercially Safe Generative AI offering”. This is not, then, an objection to the principle of generative AI, but to Stable Diffusion getting a free ride.

Secondly, there are a number of smoking guns. For a start, images generated by Stable Diffusion can include something which is indisputably the result of having been trained on Getty’s data: a recognisable (if imperfect) reproduction of the Getty Images watermark. Even without that, Stable Diffusion is highly transparent about its training data, meaning Getty had access to the data set and could point to a number of images which bear its watermarks and which are also registered with the US Copyright Office (noting that, in the US, registration of a work can be necessary before an infringement suit may be filed in court; the UK has no equivalent requirement).

Finally, Getty Images are not shy about enforcing their rights. Memorably, they attempted to invoice renowned photographer Carol Highsmith in relation to her own work, which she had donated to the Library of Congress for public use, rights-free. With that kind of dedication to pursuing licence fees, and Stability AI’s relatively deep pockets, we can be hopeful that this case will go all the way.

With substantive proceedings still some way off, we will update you as more information comes in.

Seen something you think deserves (more) attention? Let us know here.

Legislation watch

With lawmakers returning from their summer breaks, things are hotting up again in relation to AI legislation.

To remind you of where we left things after the second trilogue on the EU AI Act: while some parts of the Act are technically approved pending political assessment, other parts remain far from settled. The differences between the parties are clear from this fascinating document, in which the European Parliament attempts to introduce particular controls around so-called ‘foundation models’, i.e. models which can be put to a variety of applications rather than those targeted at a particular use case. Agreement on the text is still expected in 2023.

Work on the EU Data Act also took a step forward before the end of June, with agreement to limit the territorial scope of the data-sharing obligation to entities established in the EU and to introduce protections for trade secret holders where they are likely to suffer harm.

The UK Government says it is analysing feedback on its white paper describing its plans for implementing AI regulation underpinned by five principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. We are expecting significant advances in the UK regulatory framework this autumn, in particular as the UK is hosting an AI Safety Summit at Bletchley Park. Given the advanced state of the EU regulatory process described above, the UK may be feeling under pressure to catch up with its neighbours.

Bringing things more up to date, on 12th September legislators in France put forward a draft bill to regulate AI through enhanced copyright protections. The bill sets out explicitly that integration of copyright works into AI software is subject to the authorisation of authors or rights holders, and proposes collective management of those rights (perhaps similar to that used to manage payments to rights holders in music), with a charge imposed on companies which make use of an AI system shaped by a body of copyright works. It’s early days on this one.

On the same day in the US, a subcommittee of the Senate Judiciary Committee set out a proposed framework for a US AI Act which would establish an independent oversight body tasked with investigating “high-risk” AI and ensuring legal accountability for AI companies, specifically seeking to remove protections for tech companies in relation to content posted by third parties. Transparency in relation to training data is also mooted, along with consumer control over how their data is used in AI systems.

We’ll do our very best to keep up to date with developments, which we expect to come thick and fast over the next few months.

Contact our team at

AI application of the week

How would you feel if deep fake technology was used to make you question whether your partner is cheating on you? That’s the premise behind Deep Fake Love, and depending where you are in the world, you can watch it on Netflix. Yikes.

This is for general information only and does not constitute legal advice. Should you require advice on this or any other topic then please contact or your usual HLK advisor.