AI Observer | Ross can’t get a break, UK looks to legislate and Bristol battles bias

By Caroline Day, Partner; Thomas Brick, Associate; and Alex Roy, Trainee Patent Attorney

Welcome back to the AI Observer, HLK’s newsletter tracking the AI legal landscape.

In the latest issue, we take a deep dive into the first major AI copyright case to be decided in the US, the UK government’s shift from light-touch AI regulation to a more hands-on approach, and a pioneering certification tool that mitigates the risk of bias in AI.

Did you miss the previous issue of the AI Observer? Check out all our previous issues and other AI updates here.

Ross can’t get a break as Thomson Reuters lays down the Law

One of the first major AI copyright cases has been decided in the US and rather pleasingly, it’s a case about caselaw.

With apologies for the spoiler, the court concluded that the defendant’s particular use of copyright material in preparing training data was indeed an infringement of copyright. Even so, it is worth reading on to appreciate the facts of the case and to consider how it relates to other cases making their way through the courts.

Read the full article here.

Legislation watch

In our final issue of series 2 of the AI Observer back in late 2024, we saw how the EU was steaming ahead with the AI Act, with many of the rules and articles set to come into effect in August this year – including rules concerning General-Purpose AI Models and Article 99: Penalties.

This side of the Channel, the UK government has been taking a somewhat more “hands-off” approach. Sunak’s government opted for minimal intervention in AI regulation, relying on existing regulatory frameworks. Starmer’s government plans, as set out in last summer’s King’s Speech, to enact specific legislation focused on regulating only the most powerful AI systems. In January 2025, the Labour government released an independent report, the AI Opportunities Action Plan, which sets out 50 action points aimed at increasing investment in the foundations of AI, pushing for cross-economy AI adoption, and positioning the UK as an “AI maker”.

The UK’s light-touch approach may well become a little more hands-on soon. In early March this year, a Private Members’ Bill, the Artificial Intelligence (Regulation) Bill, was re-introduced into the House of Lords after failing to become law back in 2024. If enacted, the Bill would require the establishment of a dedicated “AI Authority”, a regulatory body tasked with overseeing AI compliance and coordinating with sector-specific regulators.

The Bill would likely be popular with the general public. The Ada Lovelace Institute and the Alan Turing Institute recently reported the results of a national survey of public attitudes to AI in the UK, finding that laws and regulation would make nearly three-quarters of the British public more comfortable with AI, up from 62% in 2023.

This Private Members’ Bill proposes a significant change to UK AI regulation, mirroring the EU’s risk-based model and departing from the UK’s current sectoral approach. However, due to limited parliamentary time and the need for broad cross-party support, it is unlikely that this will pass into law. The Bill’s fate rests on whether policymakers prioritise stricter oversight or maintain the current decentralised regulatory approach to boost AI innovation.

AI application spotlight

In this issue, we’re looking at a pioneering AI application from a researcher right on our Bristol office’s doorstep.

Bias in AI models presents a risk that must be mitigated if we are to safely and ethically utilise machine learning for decision-critical tasks in sectors such as healthcare, finance, and business management. Developing tools for certifying the fairness of AI models is therefore a priority for many in the industry.

Xiyue Zhang, a finalist in the University of Bristol’s Reward to Research competition, was recently awarded the People’s Choice award for her work researching and developing a certification tool that can be integrated with machine learning models to ensure their fairness before deployment. The £20K prize will help Xiyue Zhang and her team develop and commercialise this innovative translational research.

Given that this award was audience-voted, it is clear that the public see both the business and societal value in ensuring the safe and ethical implementation of machine learning models, particularly in view of the UK government’s recent proposals for harnessing the potential of AI.

We’re excited to see how Xiyue Zhang and her team progress and commercialise this research as part of Bristol’s growing AI sector!

This is for general information only and does not constitute legal advice. Should you require advice on this or any other topic then please contact hlk@hlk-ip.com or your usual HLK advisor.