AI Observer, Issue 3: Copyright in AI-Generated Works

By Caroline Day, Partner, and Jamie Rowlands, Partner

Welcome to the third issue of the HLK Newsletter tracking the AI legal landscape. In this issue, we take a closer look at how copyright is being handled for AI-generated works, update you on discussions regarding the AI Act, and share news about the imminent AI summit at Bletchley Park.

Copyright in AI-Generated Works

For anyone who has given Stability AI’s Stable Diffusion a go, the results can be both brilliant and sometimes a bit spooky. Enter the prompt “monkey on a bicycle with a pink tutu holding an umbrella in the rain” and within seconds, just such a picture materialises. Better still, if the picture isn’t quite what you were after, you can enter further prompts to refine the final output. Amazing. But is there copyright in the final output, and who owns it? Is it the person who writes the prompt, the AI tool itself that produces the final output, the owner of the AI tool or someone else?

Courts around the world are beginning to look at these issues. Perhaps unsurprisingly, although copyright laws are broadly similar in most countries courtesy of the Berne Convention, nuances in legislation and different approaches by courts mean that there isn’t a consistent answer across jurisdictions.

One issue that does appear to be universally accepted – at least for the time being – is that the author of an original copyright work cannot be a machine itself. It must be a human. However, beyond this, jurisdictions diverge.

There are several interesting US decisions on this topic. Anyone following AI legal developments will have heard of Dr Thaler, who has run test cases around the world to assess whether his invention-generating machine DABUS could be classed as the original inventor of a patent. See, for example, Thaler’s challenge to the UK Intellectual Property Office. Not content with taking on the patent system, he has also crossed swords with the US Copyright Office, attempting to register a copyright work claiming that the AI tool which created it is the original author. Whilst the US court acknowledged that “we are approaching new frontiers in copyright…”, it rejected the position that a machine could be the author of an original copyright work, emphasising that “human authorship is a bedrock requirement of copyright”. The copyright work was refused registration but, as we reported in our last edition, the decision is being appealed.

The question then turns to how much control a human must have over the final output from an AI tool for that human to be capable of being the original author. This raises the importance of how prompts are used to create the final output. In another US case concerning the comic book illustrations of Zarya of the Dawn, which were created with Midjourney’s AI tool, the comic’s author, Ms Kashtanova, claimed that she was the author of the illustrations because she had added creative input through the use of prompts. However, the US Copyright Office held that the prompts were not sufficiently instructive. It found: “Rather than a tool that Ms Kashtanova controlled and guided to reach her desired image, Midjourney generates images in an unpredictable way. Accordingly, Midjourney users are not the ‘authors’ for copyright purposes of the images the technology generates”. Therefore, copyright registration for the illustrations was rejected.

The position is different in the UK. Section 9(3) of the Copyright, Designs and Patents Act 1988 contains an express provision that, for computer-generated works, the author shall be the person “by whom the arrangements necessary for the creation of the work are undertaken”. In principle, this allows human authorship for AI-generated outputs and removes some of the challenges faced in the US. However, there could still be questions as to who made the necessary arrangements – is it the user or the owner of the AI tool? This is yet to be tested in the UK courts.

It may be that a solution to this dilemma will come in the form of the terms of use of the AI tools themselves. For example, OpenAI’s terms of use state that prompts and outputs belong to the user, and OpenAI expressly assigns to the user its right, title and interest in them.

We will be keeping you up to date as things develop. 

Legislation watch

Happy trilogue week to the AI Act! This week, the European Parliament, the Council of the European Union and the European Commission have been attempting to settle some of the remaining thorny issues in the draft AI Act. Foundation models – i.e. general-purpose AI models – are a particular focus, and early suggestions are that good progress has been made. Press reports suggest that a tiered approach based on capabilities and user numbers has been proposed. Under this approach, very capable and/or widely used models may be subject to vetting and required to ensure that risk mitigation is in place. The ability of copyright holders to opt their works out of a training data corpus has also been proposed. Other areas of contention include the exact definitions of the prohibited “Article 5” AI systems and the level of fines. While good progress is being made, with significant work still to do and the next planned trilogue not taking place until December, the idea that the Act might be done and dusted this year is perhaps slipping out of the EU’s grasp.

Meanwhile, there are suggestions that some key parties could be absent from the planned AI summit at Bletchley Park in November. The possibility of German Chancellor Olaf Scholz’s absence has been widely reported, and there is a question mark over leaders such as Macron and Trudeau. The US will be represented by Kamala Harris, and there are so far no suggestions that attendance by the Big Tech companies is compromised.

AI application of the week

While we wait for clarity on constraints on the inclusion of copyright material in training data for generative AI, a team at the University of Chicago is helping copyright holders fight back. Nightshade is a tool which “poisons” an image by making pixel-level changes that are invisible to the eye but will confuse a model, causing it to misinterpret the image’s content. A relatively small number of poisoned images appears to have a significant disruptive impact on the output generated. It will be interesting to see how AI adapts to such innovative challenges.


This is for general information only and does not constitute legal advice. Should you require advice on this or any other topic then please contact hlk@hlk-ip.com or your usual HLK advisor.

Keep up-to-date with the latest IP insights and updates as well as upcoming webinars and seminars via HLK’s social media.