10 November 2023
Data and commentary provided by Mathys & Squire has featured in an article by The Trademark Lawyer and Solicitors Journal, giving an insight into the issues that companies face amidst the recent surge in AI legal cases.
An extended version of the press release is available below.
The surge in new artificial intelligence (AI) legal cases deciding whether infringements arise when copyrighted data is used to train AI systems suggests that intellectual property law will struggle to keep pace with the speed of AI development, says leading intellectual property law firm Mathys & Squire.
Companies are increasingly concerned about both the risk of intellectual property infringement and confidentiality issues when using AI. Some are even urging their advisors, such as law firms and professional services firms, not to input any of their information into AI systems such as large language models (LLMs) amid fears of data leaks.
Lack of clarity on AI-related copyright breaches
Mathys & Squire says that, as with the growth of the internet, both the courts and legislation will struggle to give businesses clear direction on the rapidly developing law relating to AI.
A clear infringement occurs if a generative AI system directly reproduces copyrighted information. It is far less clear, however, whether infringement arises when the system producing the output was merely trained on that copyrighted information.
AI LLMs are fed massive amounts of data so that the model can automatically generate results, and that training data may include copyrighted material. This creates confusion for owners of copyrighted material who want to protect their intellectual property rights.
Andrew White, Partner at Mathys & Squire says, “AI is creating a multitude of new challenges and questions in relation to intellectual property. Many legal cases are ongoing as individuals seek clarification on what their rights and responsibilities are. Companies with copyrighted material and AI companies themselves are in urgent need of clarification.”
Recent legal cases regarding AI copyright disputes include:
Some tech firms themselves have already taken action to provide clarity on the issue. Microsoft said it will take legal responsibility if customers get sued for copyright breaches while using its AI Copilot platform. Google has also reassured users of its AI tools on Cloud and Workspace platforms that it will defend them from copyright claims.
Companies fearful of AI data leaks
AI also presents serious potential confidentiality risks to companies. As data is stored in the cloud, data leaks can occur even if companies use their own private AI systems to safeguard against information being shared outside of their organisation.
Italy had previously banned ChatGPT over data protection concerns in April 2023 before reversing its ban later that month. Apple has restricted its employees from using AI tools over fears that confidential data could be leaked to outside sources.
Andrew White adds, “It is important to have safeguards in place to ensure that confidential information is not input into these AI systems. Given the enormous risks, companies have made formal requests to their professional advisors, such as law firms and management consultants, not to enter any of their information into AI systems.”
High-profile examples show that entering confidential information into AI can have serious consequences. These include: