AI Archives - Legal Cheek (https://www.legalcheek.com/tag/ai/): legal news, insider insight and careers advice

Lawyers using AI to boost billable hours, report finds
Legal Cheek, Tue, 02 Sep 2025


Prioritising additional chargeable work over potential work–life balance benefits


Lawyers are increasingly using AI tools to drive up billable hours, with more than half admitting they spend the time saved by automation on extra chargeable work.

The findings come from a new report by LexisNexis, The AI Culture Clash, which shows 61% of lawyers now use AI in their day-to-day work, up from 46% in January 2025.

Most lawyers (56%) said they used the time saved with AI to increase billable work, while nearly as many (53%) used it to improve their work-life balance.

Associates across firms of all sizes are prioritising billable work over wellbeing, with larger firms in particular focusing on the commercial gains AI can deliver.

“Lawyers are proving that AI delivers clear commercial returns,” said Stuart Greenhill, senior director of segment management at LexisNexis UK. “They’re using it to increase billable hours, rethink pricing models, and deliver more value to clients. Firms that treat AI as a strategic investment, not just an efficiency tool, will gain a decisive edge in profitability and client satisfaction.”


Despite the surge in usage, the report highlights a cultural lag. Only 17% of lawyers said AI is fully embedded in their firm’s strategy and operations, while two-thirds reported their organisation’s AI culture is slow or non-existent.

Confidence is highest among those using tools designed specifically for the legal sector, with 88% of users reporting greater trust in outputs grounded in verified legal sources. This follows several high-profile incidents where lawyers used general AI tools, only to discover that the tools had fabricated cases, which were then inadvertently included in legal submissions.

The research also warned of a potential talent risk for firms that fall behind. Nearly one in five lawyers said they would consider leaving their organisation if it failed to adequately invest in AI — a figure that jumps to 26% at large firms.

Almost half of lawyers (47%) now expect AI to transform billing models, up from 40% earlier this year, with law firm leaders and general counsel among the most attuned to the shift.

From courtroom to code: How AI is shaping our legal system
Legal Cheek Journal, Mon, 18 Aug 2025


BPP SQE student Eve Sprigings examines whether AI in the courtroom will enhance or erode justice


Artificial Intelligence isn’t just changing how we live and work — it’s quietly transforming the very foundations of justice in the UK. From courtrooms to corporate boardrooms, AI tools are reshaping how judges decide cases, how precedents evolve and how law firms operate. But as AI gains influence, it challenges age-old legal traditions, raising urgent questions about fairness, transparency and the very future of human judgment. Welcome to the new frontier where bytes meet briefs, and algorithms might just rewrite the law.

AI in the judiciary: A double-edged sword for legal precedents

AI’s potential to reshape binding precedents is no longer theoretical. The rise of predictive legal technologies is a prime example. Consider Garfield Law — the first AI-powered law firm recognised in England and Wales — using AI-led litigation to handle routine matters such as unpaid debt in small claims courts. Whilst this could make legal processes cheaper and faster, it arguably raises questions about maintaining quality and public trust where human judgment has historically been paramount.

Other AI tools, such as ROSS Intelligence and Legal Robot, help lawyers analyse judicial reasoning patterns and even challenge the ethics of how case law is accessed today. ROSS’s antitrust counterclaim against Thomson Reuters, for example, challenged the legal paywalls imposed around Westlaw and pushed for broader access to public case law. Though not yet part of judicial decision-making, these AI systems hint at a future where algorithms influence precedent and legal interpretation, challenging outdated, gated legal services.

Digitisation has been underway since the rise of the internet; the arrival of AI is only taking it further. AI systems can process vast legal databases, potentially highlighting new interpretations or trends and allowing legal doctrine to evolve.

A University of Cambridge study highlights AI’s ability to detect judicial decision patterns through case trend analysis, suggesting future shifts in legal standards. But it’s not flawless: AI can both augment and undermine the rule of law, reminding us that error and bias remain concerns.

The human element in AI-assisted justice

Human oversight remains critical. Researchers at the Alan Turing Institute and Northumbria University have scrutinised AI-assisted sentencing tools for errors and procedural irregularities. These considerations underscore the need for transparency, accountability and human reasoning at the heart of justice, even as automated decision-making grows.

Tools like the Harm Assessment Risk Tool (HART), used since 2017 to predict reoffending risk in England and Wales, are already influencing custodial decisions and police work. Such data-driven algorithms may reshape sentencing precedents, but concerns about socio-demographic bias — such as postcode-based discrimination — highlight the challenge of balancing data insights with fairness.

AI and technology law: Intellectual property and beyond

AI’s impact on technology law, especially intellectual property (IP), raises thorny questions. Professor Ryan Abbott’s “The Reasonable Robot” explores whether AI-generated inventions deserve patent protection. The European Patent Office’s 2023 ruling on AI inventorship highlights ongoing legal uncertainty around AI’s ownership rights, signalling IP law’s rapid evolution.

UK parliamentary debates this year reflect broader concerns — AI is poised to reshape corporate governance, case management, and dispute resolution. Internationally, AI’s geopolitical importance grows: for instance, US-Saudi talks over Nvidia’s AI chips reveal AI as a new diplomatic currency, overtaking oil as the trade driver.


China’s “Smart Courts”, launched by the Supreme People’s Court in 2016, offer a glimpse of AI-driven judicial innovation. Originally focused on traffic cases, these courts enabled smooth transitions to online procedures during COVID-19, balancing technological efficiency with legal norms. They demonstrate that AI’s role in justice isn’t about replacing human judgment but about streamlining administration and helping courts meet their deadlines.

One notable case illustrating AI’s complexity in IP is Li v Liu [2023], decided by the Beijing Internet Court. It involved an AI-generated image created with Stable Diffusion, and the court considered copyright infringement claims amid AI’s growing role in artistic creation. Decisions here remain highly case-specific, reflecting how nascent AI law still is.

AI beyond tech: Transforming wider legal practice

AI’s reach extends well beyond tech law. Automated contract drafting and predictive analytics now assist employment law firms in anticipating disputes, while recruitment agencies deploy AI tools to screen candidates—though risks of biased outcomes remain a worry.

Data privacy law, particularly under the UK General Data Protection Regulation (UK GDPR), exemplifies AI’s regulatory challenges. Companies increasingly use AI to ensure compliance, pushing legal governance toward greater rigour and transparency. AI isn’t just shaping law; it’s reshaping how firms manage legal risk internally.

AI in court operations: Building a new legal infrastructure

UK courts are rapidly digitising, with AI-driven tools streamlining everything from e-filing and case scheduling to virtual hearings. The HM Courts & Tribunals Service (HMCTS) employs AI to enhance operational efficiency, helping courts focus more on substantive justice rather than administrative logistics.

Online dispute resolution (ODR) systems powered by AI are also gaining ground, especially for small claims—reducing backlog and improving access. Yet critics warn that sensitive cases, such as family law disputes, demand nuanced human judgment that AI cannot replace.

Returning to China’s experience, the Smart Courts reveal that balanced AI use — strictly monitored and focused on organisational efficiency — can reduce backlog and enhance judicial fairness without undermining human decision-making. Systems like Shanghai’s “206 system” use AI for evidence analysis and sentencing support, illustrating how technology can create a more cost-effective, straightforward judiciary.

Conclusion: The future of law in an AI-driven world

AI is no futuristic fantasy—it’s here, reshaping the UK’s judiciary and legal culture with unprecedented speed. As AI influences criminal justice and beyond, ethical concerns about bias and judicial independence demand ongoing scrutiny.

The British Computer Society (BCS) notes AI’s potential to support health and social care decisions, mirroring AI’s intended role in law: to assist—not replace—human roles. Garfield Law’s pioneering AI-driven model exemplifies this future, easing public sector burdens whilst maintaining core legal values.

Whether AI becomes a subtle tool enhancing judicial reasoning or a key player in shaping legal norms, the next decade will see it fundamentally alter UK law. This shift offers fresh opportunities for emerging legal sectors but also challenges traditional case law and statutes that underpin our legal culture—wiping away centuries of tradition almost as swiftly as a digital swipe.

Worldwide, governments are in a high-tech arms race to regulate AI-related IP, compliance, and broader legal issues, seeking a delicate balance between protecting national priorities and fostering technological innovation.

The challenge? Ensuring that AI strengthens justice rather than dilutes it — guarding the human heart of law even as machines take their place in the courtroom.

Eve Sprigings is a law graduate currently undertaking the SQE Diploma at BPP University. She has garnered experience across chambers, commercial law firms, and international legal settings, with a focus on legal research and contract analysis in both contentious and non-contentious matters. Eve has a strong interest in commercial and corporate law, as well as data protection, and is passionate about making modern legal frameworks accessible and understandable to all.

The Legal Cheek Journal is sponsored by LPC Law.

Should AI be given legal personhood? New Law Commission paper raises ‘radical’ possibility
Legal Cheek, Thu, 07 Aug 2025


No legal status… for now


A new Law Commission discussion paper has floated the once sci-fi idea of giving artificial intelligence (AI) systems their own legal personality — meaning they could, in theory, be sued, held liable for harm or even pay damages.

The paper, titled AI and the Law, explores the legal challenges posed by the rise of autonomous, adaptive AI, including who should be liable when AI systems act independently and cause harm. While the paper stops short of proposing specific reforms, it suggests that a “potentially radical option” could be “granting some form of legal personality to AI systems”.

Currently, AI cannot be held legally liable as it has no legal status. But with AI systems becoming increasingly sophisticated and capable of completing complex tasks with little or no human input, the Law Commission warns that “liability gaps” could emerge where “no natural or legal person is liable for the harms caused by, or the other conduct of, an AI system”.

The paper states: “Current AI systems may not be sufficiently advanced to warrant this reform option. But given the rapid pace of AI development, and the potentially increasing rate of pace of development, it is pertinent to consider whether AI legal personality requires further discussion now, in the event that such highly advanced AI arrives in the near future.”

Legal personality — the ability to be sued or held accountable — is currently limited to natural persons (humans) and legal persons (such as companies). Extending it to AI systems would be unprecedented, and the Commission acknowledges this would represent a significant shift in legal thinking.


The core problem arises when AI acts autonomously, making decisions that cannot easily be traced back to a developer or user. The Commission points out that “AI systems do not currently have separate legal personality and therefore can neither be sued or prosecuted”.

In such cases, victims might struggle to obtain compensation, or the state could be left requiring “assistance at public expense”. The Commission warns that this legal uncertainty could also hinder innovation, for instance by impeding insurance for AI-related risks.

While the idea of AI personhood remains speculative, the Commission argues that now is the time to discuss it, given the “rapidly expanding use of AI” and its likely impact across areas including product liability, public law, criminal law and intellectual property.

In the meantime, the Commission plans to monitor the legal impact of AI across its wider law reform work. It has already looked at AI in automated vehicles and deepfakes, with projects underway on aviation autonomy and product liability.

For now, AI remains a tool, not a person, but as the Commission notes: “It is not yet clear that those same [legal] systems will apply equally well to new technology that is also intelligent to varying degrees.”

Government embraces AI in bid to speed up justice
Legal Cheek, Tue, 05 Aug 2025


New plan aims to transform courts system with AI-powered tools, reduce backlogs and boost efficiency

The Ministry of Justice (MoJ) has unveiled an ambitious new strategy to roll out AI across courts, prisons and probation services in England and Wales, aiming to deliver faster, fairer and more efficient justice.

The AI Action Plan for Justice, released this week and backed by the Prime Minister and Lord Chancellor, outlines how the department plans to embed AI tools across the justice system over the next three years. It marks the first plan of its kind in the UK and sets out over 60 initiatives, including AI-powered chatbots, transcription tools and predictive risk models.

The plan is built around three priorities: strengthening the system’s technical foundations, embedding AI into public-facing and operational services and investing in staff skills and partnerships. A newly appointed Chief AI Officer will lead a dedicated Justice AI Unit tasked with coordinating AI projects and ensuring public trust through robust ethical oversight. The MoJ also announced a Justice AI Fellowship to attract top AI talent from other industries and universities into government.

According to the MoJ, AI technologies are already being used in pilot schemes to help staff transcribe probation meetings, summarise court bundles and automate paperwork.

The department says it will provide secure AI assistants to all 95,000 staff by the end of 2025. These tools will support everyday tasks such as drafting, searching, and scheduling, with initial pilots reportedly saving staff 30 minutes per day on average.
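Those headline figures invite a quick sanity check. The sketch below is a back-of-the-envelope illustration only: the 95,000 staff and 30 minutes saved per day are the MoJ's reported numbers, while the 220 working days per year is our own assumption, not a figure from the plan.

```python
# Rough estimate of aggregate time saved by MoJ AI assistants.
# 95,000 staff and 30 minutes/day are the figures as reported;
# 220 working days/year is an illustrative assumption.
STAFF = 95_000
MINUTES_SAVED_PER_DAY = 30
WORKING_DAYS_PER_YEAR = 220  # assumed, not from the MoJ plan

minutes_per_year = STAFF * MINUTES_SAVED_PER_DAY * WORKING_DAYS_PER_YEAR
hours_per_year = minutes_per_year / 60

print(f"~{hours_per_year:,.0f} staff-hours per year")  # ~10,450,000 staff-hours per year
```

On those assumptions the claimed savings would amount to roughly 10.45 million staff-hours a year across the department, which gives a sense of why the MoJ frames the rollout in commercial terms.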

The MoJ also pointed to examples from the private sector, including the use of AI in a high-profile criminal trial at the Old Bailey. Defence lawyers used AI software by UK firm Luminance to analyse 10,000 documents, reportedly saving £50,000 in costs and four weeks in review time.

Meanwhile, the Solicitors Regulation Authority (SRA) has approved the world’s first AI-driven law firm, Garfield AI, which helps businesses recover small debts via automated pre-action processes. The MoJ suggests such innovations could reduce pressure on the courts and improve access to justice.

While the plan highlights AI’s potential to reduce backlogs and improve decision-making, it also stresses the need for responsible adoption. “AI should support, not substitute, human judgment,” the report states, pledging to protect judicial independence and avoid algorithmic bias. AI tools that affect individual rights, such as risk assessments in custody, will face rigorous testing and oversight, with all use cases published for public scrutiny.

The MoJ is also working closely with regulators including the SRA and Bar Standards Board to ensure the legal sector’s approach to AI remains proportionate and evidence-based.

Critics, however, warn that infrastructure gaps and funding uncertainty could hinder progress. The MoJ has secured initial funding for its AI plan but notes that “long-term, sustained funding” is needed to scale successful pilots.

The department is also exploring new procurement models to help smaller UK AI firms secure government contracts. Initiatives such as ‘Reverse Pitch’ events allow startups to co-design solutions with MoJ staff, with several SMEs already developing tools for offender education and digital learning.

Year one of the rollout (from April 2025) will focus on “early wins”, such as scaling AI assistants, testing AI-powered search and transcription tools, and piloting citizen-facing chatbots to help the public navigate legal services. More advanced applications, like predictive models for sentencing and risk, will be tested later, subject to judicial and ethical review.

‘How are junior lawyers using AI?’
Legal Cheek, Mon, 04 Aug 2025


Amid the hype and hyperbole, one Legal Cheek reader wonders what real impact technology is having on the day-to-day working lives of trainees and associates


In our latest Career Conundrum, a curious reader reaches out to probe how firms are really using AI — and what ripple effects it might be having across the legal job market.

“This isn’t really a career conundrum, but I’d love it if you ran it anyway. I was reading your latest cc, ‘How’s everyone feeling about the NQ job market?’, and noticed a lot of negative comments from September NQs about how tough things are right now when it comes to securing roles. One comment in particular stood out to me. It raises the question of whether firms’ use of AI might be contributing to the problem, especially given how many firms have recently reported fairly solid financial results. I’m curious to know what your readers think, especially trainees and junior lawyers. Are they using AI on a daily basis? If so, what for? And perhaps more importantly, do they foresee it coming at the expense of NQ or junior lawyer roles any time soon?”

If you have a career conundrum, email us at tips@legalcheek.com.

Aspiring lawyers urged to focus on firms’ AI prowess, not just big salaries
Legal Cheek, Tue, 29 Jul 2025


Simmons senior partner argues that top tech could accelerate trainees’ careers


A top lawyer has urged training contract hunters not to focus solely on the eye-watering salaries on offer at some City law firms, but to also consider how these firms are using artificial intelligence (AI) — a factor that could play a defining role in their future legal careers.

This career pointer comes courtesy of Simmons & Simmons senior partner Julian Taylor, who suggests that AI tools could reshape the legal industry’s ongoing “war for talent”, which has driven some newly qualified (NQ) solicitors’ salaries as high as £180,000.

Taylor argues that the career paths of “budding lawyers” are being irrevocably altered by AI, “becoming less linear and hierarchical”, with trainees taking on high-level work much earlier.


Taylor — an employment law specialist helping Simmons on “its journey to becoming a next-generation law firm,” according to his online bio — says young associates who will thrive are those who use AI tools effectively. He advises them to look carefully at how their target firms are adopting the technology and how it can support their long-term career development, rather than getting distracted by six-figure salaries.

But it’s not just AI that should be on the TC checklist for would-be lawyers. Taylor also points to factors like culture, values, and purpose — though he quickly concedes that these often have to compete with firms’ “relentless pursuit of profit and growth”.

“I’m optimistic about the future of law and how AI is helping reshape it,” Taylor concludes. “As an industry we must continue to focus on societal value, balancing technological progress with personal service to clients, and embracing the dawn of a new legal culture.”

AI in court: rights, responsibilities and regulation
Legal Cheek Journal, Thu, 24 Jul 2025


Birmingham Uni student James Bloomberg explores the challenges that AI poses to the justice system and concepts of legal personhood


The advancement of artificial intelligence (AI) presents a complex challenge to contemporary legal and ethical frameworks, particularly within judicial systems. This article explores the evolving role of AI in the courtroom, drawing on recent high-profile cases involving fabricated legal citations and algorithmic hallucinations. It examines how AI’s integration into legal research and decision-making strains traditional understandings of accountability, responsibility and legal personhood. The discussion also considers AI’s broader societal impact.

The advancement of technology in recent years has produced a seismic shift in how societies interact, how businesses operate and how governments regulate change. AI is now a driving force in how we live our lives and how students work at university, but above all its ability to make quick decisions raises red flags, especially for law firms. With AI becoming part of everyday life, now built into WhatsApp, X (formerly Twitter) and elsewhere, one question keeps being raised: should AI be granted legal rights? This discussion, far from hypothetical, would challenge existing legal frameworks and ultimately raises questions about the societal and ethical implications of recognising AI as a legal entity.

Article 6 of the Universal Declaration of Human Rights addresses legal personhood: the status by which an entity is recognised as capable of holding rights and duties in a legal system. This can mean anything from owning property, to acting and being held responsible for those actions, to exercising rights and obligations, such as entering into a contract. Corporations have long been granted legal personhood. Applying the same concept to AI systems such as ChatGPT, however, introduces complexities that transcend current legal definitions. The European Parliament has previously explored whether AI systems should be granted a form of legal status to address accountability issues, particularly in cases where harm is caused by autonomous systems.

In 2024, a Canadian lawyer used an AI chatbot for legal research in a child custody case before the British Columbia Supreme Court, and the chatbot produced “fictitious” cases. This was raised by the lawyers for the child’s mother, who could not find any record of the cases. The underlying matter was the father’s bid to take the children on an overseas trip while locked in a separation dispute with the mother. The episode shows how dangerous AI systems can be, and why lawyers today need to use AI as an assistant, not a cheat sheet. But who is to blame here, the lawyers or the AI chatbot?


A major argument against granting AI legal personhood is that it would contradict fundamental human rights principles. The High-Level Expert Group on Artificial Intelligence (AI HLEG) strongly opposes this notion, emphasising that legal personhood for AI systems is “fundamentally inconsistent with the principle of human agency, accountability, and responsibility”. AI lacks consciousness, intent, and moral reasoning — characteristics that underpin legal rights and responsibilities. Unlike humans or even corporations (which operate under human guidance), AI lacks an inherent capacity for ethical decision-making beyond its programmed constraints.

Another central issue is accountability. If AI were granted legal rights, would it also bear responsibilities? Who would be liable for its actions?

In another case, a federal judge in San Jose, California, ordered AI company Anthropic to respond to allegations that it submitted a court filing containing a ‘hallucination’ created by AI as part of its defence against copyright claims brought by a group of music publishers. The allegation was that an Anthropic data scientist cited a non-existent academic article to bolster the company’s argument in a dispute over evidence. Clarity is still needed as to whether liability for AI-related harm lies with developers, manufacturers or users.

In the UK, the allocation of liability for AI-related harm is primarily governed by existing legal frameworks: the common law of negligence and product liability principles. Under the Consumer Protection Act 1987, for example, manufacturers and producers can be held strictly liable for defective products that cause damage, which could in theory extend to AI systems and software if they are deemed products under the Act. Developers and manufacturers may also face liability in negligence if it can be shown that they failed to exercise reasonable care in the design, development or deployment of AI systems, resulting in foreseeable harm. Users, such as businesses or individuals deploying AI, may be liable if their misuse or inadequate supervision of the technology leads to damage. While there is currently no bespoke UK statute specifically addressing AI liability, the Law Commission and other regulatory bodies have recognised the need for reform and are actively reviewing whether new, AI-specific liability regimes are required to address the unique challenges posed by autonomous systems.

Conferring legal personhood on AI may create situations where accountability is obscured, allowing corporations or individuals to evade responsibility by attributing actions to an “autonomous” entity.

Further, AI decision-making lacks transparency as it often operates through black-box algorithms, raising serious ethical and legal concerns, particularly when AI systems make decisions that affect employment, healthcare, or criminal justice. The European Parliament’s Science and Technology Options Assessment (STOA) study has proposed enhanced regulatory oversight, including algorithmic impact assessments, to address transparency and accountability. Granting AI legal rights without resolving these issues would only increase the risk of unchecked algorithmic bias.

The ethical implications extend beyond legal considerations. AI’s increasing autonomy in creative and economic spaces, such as AI-generated art, music, and literature, has raised questions about intellectual property ownership. Traditionally, copyright and patent laws protect human creators, but should AI-generated works receive similar protections? In the UK, for example, computer-generated works are protected under copyright law, yet ownership remains tied to the creator of the AI system rather than the AI itself. Under the Copyright, Designs and Patents Act 1988, section 9(3), the author of a computer-generated work is defined as “the person by whom the arrangements necessary for the creation of the work are undertaken.” This means that, in the UK, copyright subsists in AI-generated works, but the rights vest in the human creator or operator, not the AI system itself. Recognising AI as a rights-holder could challenge these conventions, necessitating a re-evaluation of intellectual property laws.

A potential middle ground involves the implementation of stringent governance models that prioritise accountability without conferring rights upon AI. Instead of granting legal personhood, policymakers could focus on AI-specific liability structures, enforceable ethical guidelines, and greater transparency in AI decision-making processes. The European Commission has already initiated discussions on adapting liability frameworks to address AI’s unique challenges, ensuring that responsibility remains clearly assigned.

While AI continues to evolve, the legal framework governing its use and accountability must remain firmly rooted in principles of human responsibility. AI should be regulated as a tool, albeit an advanced one, rather than as an autonomous entity deserving of rights. Strengthening existing regulations, enhancing transparency, and enforcing accountability measures remain the most effective means of addressing the challenges posed by AI.

The delay in implementing robust AI governance has already resulted in widespread ethical and legal dilemmas, from biased decision-making to privacy infringements. While AI’s potential is undeniable, legal recognition should not precede comprehensive regulatory safeguards. A cautious, human-centric approach remains the best course to ensure AI serves societal interests without compromising fundamental legal principles.

While it is tempting to explore futuristic possibilities of AI personhood, legal rights should remain exclusively human. The law must evolve to manage AI’s risks, but not in a way that grants rights to entities incapable of moral reasoning. For now, AI must remain a tool, not a rights-holder.

James Bloomberg is a second year human sciences student at the University of Birmingham. He has a strong interest in AI, research and innovation and plans to pursue a career as a commercial lawyer.

The Legal Cheek Journal is sponsored by LPC Law.

The post AI in court: rights, responsibilities and regulation appeared first on Legal Cheek.

]]>
https://www.legalcheek.com/lc-journal-posts/ai-in-court-rights-responsibilities-and-regulation/feed/ 1
Will AI really replace paralegals? https://www.legalcheek.com/2025/07/will-ai-really-replace-paralegals/ https://www.legalcheek.com/2025/07/will-ai-really-replace-paralegals/#comments Wed, 09 Jul 2025 07:42:40 +0000 https://www.legalcheek.com/?p=222191 The Legal Cheek team discuss AI and the future of legal jobs — listen now 🎙️

The post Will AI really replace paralegals? appeared first on Legal Cheek.

]]>

The Legal Cheek team discuss AI and the future of legal jobs — listen now 🎙


The Legal Cheek Podcast returns this week as publisher Alex Aldridge and writer Lydia Fontes discuss two stories that have made the legal news in recent weeks.

In this week’s episode, we dig into the unorthodox tactics of Thomas Isaacs — the aspiring barrister who went viral on LinkedIn for taking his job search back to basics. Is this a brilliant new strategy to get noticed in an increasingly competitive job market? And what is this first-class AI and computer science graduate doing becoming a lawyer in the first place?

We also discuss the news that the “Godfather of AI”, Geoffrey Hinton, told the Diary of a CEO podcast that AI could spell trouble for paralegals, asking how often these sorts of predictions really come true. Is the legal market as vulnerable to AI replacement as is often made out?

You can listen to the podcast in full via the embed above, or on Spotify and Apple Podcasts.

The post Will AI really replace paralegals? appeared first on Legal Cheek.

]]>
https://www.legalcheek.com/2025/07/will-ai-really-replace-paralegals/feed/ 9
Freshfields to ‘turbocharge’ future trainees with free tech-focused LLM https://www.legalcheek.com/2025/07/freshfields-to-turbocharge-future-trainees-with-free-tech-focused-llm/ https://www.legalcheek.com/2025/07/freshfields-to-turbocharge-future-trainees-with-free-tech-focused-llm/#comments Thu, 03 Jul 2025 06:19:27 +0000 https://www.legalcheek.com/?p=222018 KCL masters comes with £20k maintenance grant

The post Freshfields to ‘turbocharge’ future trainees with free tech-focused LLM appeared first on Legal Cheek.

]]>

KCL masters comes with £20k maintenance grant


Magic Circle law firm Freshfields has announced a new partnership with King’s College London (KCL) law school, offering future trainees a fully sponsored LLM in technology and law. The programme will cover topics such as artificial intelligence (AI), media, and crypto, with £20,000 in maintenance support included.

The firm describes the programme as a “first” for a law firm, highlighting its commitment to investing in “upskilling” the next generation of lawyers and meeting evolving client demands. Mark Sansom, managing partner for London and Dublin, said the initiative will “turbocharge” trainees’ “professional and personal growth”.

KCL describes the course as a “rare opportunity” for academic training at the intersection of law and technology. Electives on offer cover a range of heady topics, including AI, cryptocurrencies, cyberspace law, energy transitions and green tech, to name a few.

All this usually comes at a cost — but not for trainees. KCL’s LLMs typically cost over £20,000 for full-time students, rising to more than £35,000 for international students.

Besides having their fees covered, trainees studying the LLM will also receive a hefty £20,000 maintenance grant. After completing the LLM, they’ll start on at least £56k as first year trainees, rising to £61k in year two.


Eligible rookies will include future trainees in the firm’s February 2026, August 2026, and February 2027 cohorts. A group in the August 2025 intake will have the opportunity to spearhead the programme this September.

Sansom said:

“We’re delighted to announce our new partnership with King’s College London to support the next generation of legal professionals. As technology and innovation continues to shape the legal industry, our firm is meeting that opportunity head on by investing in the upskilling of trainees at the very start of their careers. This opportunity allows them to develop valuable skills, turbocharge their professional and personal growth, and align with our strategic direction as a global firm.”

Alongside their studies, the lucky cohorts will have opportunities to collaborate with Freshfields’ innovation team and the Freshfields lab, and the firm will check in with students throughout their LLM studies.

KCL’s Professor Dan Hunter, the Dickson Poon School of Law’s executive dean, added: “It is fantastic to partner with Freshfields in supporting the next generation of legal professionals. Trainees undertaking the LL.M will become part of the global legal community of one of the world’s finest law schools, and benefit from the breadth and depth of our academic expertise in artificial intelligence and digital law.”

The post Freshfields to ‘turbocharge’ future trainees with free tech-focused LLM appeared first on Legal Cheek.

]]>
https://www.legalcheek.com/2025/07/freshfields-to-turbocharge-future-trainees-with-free-tech-focused-llm/feed/ 10
‘We rather like hanging around law libraries’: Top judges reveal their attitudes towards AI https://www.legalcheek.com/2025/06/we-rather-like-hanging-around-law-libraries-top-judges-reveal-their-attitudes-towards-ai/ https://www.legalcheek.com/2025/06/we-rather-like-hanging-around-law-libraries-top-judges-reveal-their-attitudes-towards-ai/#comments Mon, 23 Jun 2025 08:15:38 +0000 https://www.legalcheek.com/?p=221592 Human input remains crucial

The post ‘We rather like hanging around law libraries’: Top judges reveal their attitudes towards AI appeared first on Legal Cheek.

]]>

Human input remains crucial


A new report has shed light on senior judges’ views on AI, highlighting both the areas of their work that could benefit from the technology and their concerns about its use.

The study used focus-group discussions with 12 judges from across the UK judicial hierarchy, including five members of the Supreme Court, to dig into a range of attitudes towards AI.

A certain amount of enthusiasm for the technology was shown. One judge summarised AI’s benefits as “increasing productivity, reducing cost and reducing some of the drain on resources that we all have” while another commented that, “Anything that can improve efficiency and productivity whilst ensuring we don’t lose the essence of what justice is, is exciting and to be welcomed.”

The discussion on productivity seems to have centred on “boring” bulk administrative tasks such as disclosure exercises, bundling and summarising cases. There were also suggestions that AI could handle “small claims” as well as a possibility that AI could be used to create versions of judgments that could be easily understood by a child or by the general public.

Despite these benefits, there were many areas of judicial work that judges felt required a necessary human element. In cases which involve prison sentences or removing children from their parents, this human element was thought to be especially important. “People take comfort from having a human face, a human decision maker,” one participant commented. There was additional concern about the decisions AI might make in these sensitive cases. “Law is not a matter of pure logic,” was the view of one judge; “you need a practical, humane result to a problem if it’s humanly possible.”

Furthermore, a sense emerges from the report that these judges like being judges and are reluctant to share the most interesting parts of their workload with AI. “There’s a lot of people here [in the judiciary] who rather like hanging around law libraries,” one commented, reflecting on the “satisfaction of problem solving” and “following the footnotes.”

To the suggestion that AI could be used to produce judgments, another participant objected on the grounds that, “each of us, I think, enjoys writing, possibly in our own style.” Another comment reads, “When we come out of a case, we all meet together and discuss what we think about it and why. We can’t have a room of robots doing that.”

The post ‘We rather like hanging around law libraries’: Top judges reveal their attitudes towards AI appeared first on Legal Cheek.

]]>
https://www.legalcheek.com/2025/06/we-rather-like-hanging-around-law-libraries-top-judges-reveal-their-attitudes-towards-ai/feed/ 2
‘Godfather of AI’ thinks tech won’t hurt plumbers — but could spell trouble for paralegals https://www.legalcheek.com/2025/06/godfather-of-ai-thinks-tech-wont-hurt-plumbers-but-could-spell-trouble-for-paralegals/ https://www.legalcheek.com/2025/06/godfather-of-ai-thinks-tech-wont-hurt-plumbers-but-could-spell-trouble-for-paralegals/#comments Wed, 18 Jun 2025 07:50:18 +0000 https://www.legalcheek.com/?p=221478 Intellectual labour at risk of replacement says Geoffrey Hinton

The post ‘Godfather of AI’ thinks tech won’t hurt plumbers — but could spell trouble for paralegals appeared first on Legal Cheek.

]]>

Intellectual labour at risk of replacement says Geoffrey Hinton


The computer scientist dubbed ‘the Godfather of AI’ has identified legal assistants and paralegals as among the roles most at risk of replacement by AI.

Geoffrey Hinton, famous for his work on artificial neural networks and winner of the 2024 Nobel Prize in Physics for his contribution to machine learning, appeared on the popular Diary of a CEO podcast for an interview heavily focused on the dangers of AI technology.

Amid a range of threats, spanning from an increase in cyber attacks to the development of autonomous lethal weapons, Hinton discussed the possibility that unemployment levels will increase as AI outperforms humans at what he terms “mundane intellectual labour”. He likens the development of this technology to the industrial revolution of the 19th century, which saw many manual labour jobs go extinct.

Hinton agreed with the now much-repeated phrase, “AI won’t take your job, a human using AI will take your job”, but stressed that as AI tools increase productivity, fewer human employees will be needed in many workplaces. “The combination of a person and an AI assistant can do the work that 10 people could do previously,” he explained.

When asked which roles specifically face the highest threat of AI replacement, Hinton replied, “Someone like a legal assistant or paralegal — they’re not going to be needed for very long.” He went on to mention call centre workers as another group at risk.

On the roles less likely to be replaced, skilled trades come out on top. “It’s going to be a long time until [AI] is as good at physical manipulation as us, so a good bet would be to be a plumber,” Hinton suggested.

Throughout the interview Hinton emphasises the difficulty of making confident predictions about the future of this technology. “Anybody who tells you they know just what’s going to happen and how to deal with it is talking nonsense,” he told listeners.

The extent of the effect that AI will have on the legal industry, including roles like paralegals and legal assistants, has been much debated. Back in March, Simmons & Simmons senior partner Julian Taylor told journalists that his clients “don’t completely trust” AI and would rather have real people handling the complex and high-stakes work they send to the firm. As a service industry heavily influenced by the wishes of its clients, this could suggest that the profession will be slower to replace staff with technology than Hinton suggests.

The post ‘Godfather of AI’ thinks tech won’t hurt plumbers — but could spell trouble for paralegals appeared first on Legal Cheek.

]]>
https://www.legalcheek.com/2025/06/godfather-of-ai-thinks-tech-wont-hurt-plumbers-but-could-spell-trouble-for-paralegals/feed/ 7
Ex-Freshfields lawyers raise $30 million for AI tech business https://www.legalcheek.com/2025/06/ex-freshfields-lawyers-raise-30-million-for-ai-tech-business/ https://www.legalcheek.com/2025/06/ex-freshfields-lawyers-raise-30-million-for-ai-tech-business/#respond Tue, 10 Jun 2025 07:35:19 +0000 https://www.legalcheek.com/?p=221296 Fund further growth

The post Ex-Freshfields lawyers raise $30 million for AI tech business appeared first on Legal Cheek.

]]>

Fund further growth

Nnamdi Emelifeonwu and Feargus MacDaeid

Two former Magic Circle lawyers have raised $30 million (£22 million) to fund their AI-powered legal tech company.

Nnamdi Emelifeonwu and Feargus MacDaeid — who worked together at Freshfields’ London office until 2017 — have secured additional funding to accelerate the growth of their company, Definely.

The company uses AI to help lawyers review and edit complex contracts in seconds, integrating with Microsoft Word rather than operating on a separate system or platform.

Law firms already using the tech include A&O Shearman, Slaughter and May, DLA Piper, and Dentons, along with in-house legal teams at companies such as BT and Deloitte.

The funding round includes investors from Europe and North America, led by growth investor Revaia, with participation from Alumni Ventures, Beacon Capital, and legal tech company Clio. This latest investment brings Definely’s total funding since its inception to $40 million (£30 million).

Emelifeonwu studied law at Queen Mary University of London before completing his training contract at Freshfields, qualifying in 2015. He spent two years at the Magic Circle firm before leaving to co-found Definely. MacDaeid trained at what was then Allen & Overy before joining Freshfields in 2013, where he spent four years.

“As a business, we are deeply committed to building human-first products, leveraging generative AI not for the sake of it — but to solve real and tangible problems that lawyers face today,” Emelifeonwu said. “As a former lawyer myself, I witnessed some of these problems first-hand. This funding will allow us to invest further in developing and integrating our products.”

The post Ex-Freshfields lawyers raise $30 million for AI tech business appeared first on Legal Cheek.

]]>
https://www.legalcheek.com/2025/06/ex-freshfields-lawyers-raise-30-million-for-ai-tech-business/feed/ 0
High Court warns lawyers over AI use after ‘fake’ cases cited in submissions https://www.legalcheek.com/2025/06/high-court-warns-lawyers-over-ai-use-after-fake-cases-cited-in-submissions/ https://www.legalcheek.com/2025/06/high-court-warns-lawyers-over-ai-use-after-fake-cases-cited-in-submissions/#comments Mon, 09 Jun 2025 11:00:10 +0000 https://www.legalcheek.com/?p=221267 Barrister and solicitor escape contempt proceedings

The post High Court warns lawyers over AI use after ‘fake’ cases cited in submissions appeared first on Legal Cheek.

]]>

Barrister and solicitor escape contempt proceedings

Royal Courts of Justice
The High Court has issued a stern warning to the legal profession after examining two cases involving “fake” citations, suggesting in its judgment that senior lawyers may bear some responsibility for junior colleagues’ misuse of artificial intelligence (AI).

Legal Cheek reported last month on a case in which a pupil barrister cited five authorities that did not appear to exist. This followed an earlier case involving a solicitor who relied on legal research conducted by his client, which apparently included 18 fictitious cases.

The cases were later brought before Dame Victoria Sharp, President of the King’s Bench Division of the High Court, to determine whether the conduct in question amounted to contempt of court.

In the first of the two cases, the court held that the instructing solicitor and his paralegal at Haringey Law Centre, a charity, were not responsible for the fake cases.

However, the pupil barrister’s conduct was found to have met the threshold for contempt of court — though the court ultimately decided not to initiate proceedings. “This is not a precedent,” Dame Victoria Sharp warned, taking into account mitigating factors, including that the barrister had been “criticised in public judgment” and was an “extremely junior” lawyer.

The 2025 Legal Cheek Chambers Most List

In the second case, it was found that the solicitor had relied on legal research provided by his client — which was the source of the “fake” cases. Although this meant the solicitor did not “know the true position,” Dame Victoria Sharp emphasised: “A lawyer is not entitled to rely on their lay client for the accuracy of citations of authority or quotations… It is the lawyer’s professional responsibility to ensure the accuracy.”

Both the barrister and the solicitor referred themselves to their respective regulators for investigation. The High Court also indicated that it would make its own referrals.


Calling on the profession to do more, Dame Victoria Sharp also found that the guidance for judges, barristers, and solicitors “is insufficient to address the misuse” of AI.

“We would go further”, the judgment reads, calling on those with “individual leadership responsibilities (such as heads of chambers and managing partners)”, along with regulators, to ensure lawyers understand and comply with their “professional and ethical obligations and their duties to the court if using artificial intelligence.”

The judgment concludes with a stark warning: “For the future, in Hamid hearings such as these, the profession can expect the court to inquire whether those leadership responsibilities have been fulfilled.”

The judgment also includes an appendix containing cases from across the world with suspected or found misuses of AI.

Read the judgment in full:

Ayinde v London Borough of Haringey and Al Haroun v Qatar National Bank

The post High Court warns lawyers over AI use after ‘fake’ cases cited in submissions appeared first on Legal Cheek.

]]>
https://www.legalcheek.com/2025/06/high-court-warns-lawyers-over-ai-use-after-fake-cases-cited-in-submissions/feed/ 3
Copyright in the age of AI: The UK’s contentious proposal  https://www.legalcheek.com/lc-journal-posts/copyright-in-the-age-of-ai-the-uks-contentious-proposal/ https://www.legalcheek.com/lc-journal-posts/copyright-in-the-age-of-ai-the-uks-contentious-proposal/#respond Thu, 05 Jun 2025 07:46:38 +0000 https://www.legalcheek.com/?post_type=lc-journal-posts&p=218643 First year law student at Leeds Uni, Xin Ern Teow (Ilex) analyses the UK's proposals to resolve the tension between copyright and AI-produced content

The post Copyright in the age of AI: The UK’s contentious proposal  appeared first on Legal Cheek.

]]>

First year law student at Leeds Uni, Xin Ern Teow (Ilex) analyses the UK’s proposals to resolve the tension between copyright and AI-produced content

Credit: Cash Macanaya via Unsplash

What happens when cutting-edge technology collides with centuries-old concepts of creativity, privacy, and law? The UK government’s latest proposal to allow AI companies to use copyrighted works for training, set out in its Copyright and AI consultation, has sparked fierce debate, raising questions about the future of intellectual property in the AI era.

Imagine a world where AI-generated novels outsell human-written ones, where iconic artworks inspire machine-crafted masterpieces, and where centuries of cultural heritage are fed into algorithms to create something entirely new. At the heart of this transformative vision lies a contentious question: Who owns the rights to this creativity, the machines, their makers, or the creators whose works serve as the foundation?

What’s on the table?

The UK government’s Copyright and Artificial Intelligence consultation is the most recent official initiative addressing the intersection of AI and copyright law. It sought public input on how to adapt and modernise the UK’s legal framework, balancing the needs of the creative industries and the AI sector so that innovation is fostered while creators are protected.

At the heart of this consultation lie three key objectives:

  1. Control: The framework seeks to ensure that rights holders retain control over their works. This means creators should have the ability to license, monetise, and safeguard their content when used by AI technologies.
  2. Access: AI developers require access to extensive datasets to train their models effectively. The government proposes streamlined access to copyrighted materials to prevent legal barriers from stifling technological progress.
  3. Transparency: The framework aims to establish greater transparency, ensuring all stakeholders — creators, developers, and consumers — understand how AI systems use copyrighted content and generate outputs.

In order to achieve these goals, the government proposes an “opt-out” system, allowing AI companies to use copyrighted works unless the rights holders explicitly object, thereby reducing administrative hurdles for developers. However, it places the burden of action on creators, who must proactively protect their intellectual property.

The creative industry backlash

Unsurprisingly, the proposal has ignited strong opposition from the creative industries, which argue that the “opt-out” system threatens their livelihoods and the value of intellectual property. At the heart of this resistance is the “Make It Fair” campaign, supported by various news organisations, underscoring the demand for equitable treatment and compensation for creators.

The potential loss of revenue is just one part of the broader concern. Creators fear the long-term ramifications of AI on the entire creative ecosystem. If AI systems are allowed to harvest copyrighted works without compensating creators or offering any recognition, it could lead to a “race to the bottom,” where the value of human creativity is overshadowed by algorithmically-generated content. In this scenario, emerging creators would struggle to profit from their work, as the very worth of their intellectual property would diminish in an AI-dominated marketplace.


Many critics argue that this shift could foster a monopolistic environment in which only a handful of large tech companies profit from AI-generated content, while individual creators are left with little control or benefit. This concern is poignantly illustrated by the silent album, Is This What We Want?, a collaborative protest from over 1,000 musicians, including iconic figures like Kate Bush and Damon Albarn. The album’s track titles collectively convey the message, “The British government must not legalise music theft to benefit AI companies”. This symbolic gesture underscores a key point that this issue goes beyond mere financial gain — it’s about recognition and respect for human artistry in a world increasingly dominated by machines.

While there is broad support for fostering AI innovation, many creatives argue that the government’s approach needs to be balanced more carefully. If these concerns are not addressed, the unrest within the creative community suggests that the government’s proposal may not only face legal challenges but could also lead to a loss of public trust in the ethical development of AI.

The benefits of the proposal

While the proposal has faced significant backlash, the UK government has strongly defended its stance, arguing that the benefits far outweigh the concerns. From the government’s perspective, this initiative is crucial for fostering the growth of AI technology, ensuring the UK remains competitive on the global stage, and contributing to economic growth.

With access to vast datasets, AI models can improve and innovate faster, benefiting industries like healthcare, education, and finance. The government believes this will enable AI firms to create groundbreaking technologies without the delays of seeking permissions for every dataset.

Moreover, by facilitating AI development, the UK aims to attract investment, create jobs, and position itself as a leader in AI research. In a global race for AI supremacy, providing open access to data can help the UK remain competitive, particularly against tech giants in the US and China.

Additionally, AI innovation can revolutionise industries, from self-driving cars to personalised medicine. By supporting AI companies, the UK hopes to foster new industries and technological advancements, which would contribute to long-term national growth and improved societal outcomes.

While the proposal acknowledges creator concerns, the government argues that promoting AI innovation justifies easier access to data. If implemented with a balanced legal framework, the UK’s approach could serve as a model for other nations grappling with AI and copyright challenges.

Conclusion

To sum up, the UK government’s proposal to allow AI companies to train their algorithms on copyrighted works without prior permission highlights the ongoing tension between fostering technological innovation and protecting creators’ rights. While the proposal aims to accelerate AI development and bolster economic growth, it raises critical concerns about the fairness of intellectual property distribution and the potential devaluation of human creativity.

Xin Ern Teow (Ilex) is a first-year law student at the University of Leeds with a strong passion for making a positive impact through volunteering. Her interests also extend to negotiation and exploring strategies for conflict resolution and collaborative problem-solving.

The Legal Cheek Journal is sponsored by LPC Law.

The post Copyright in the age of AI: The UK’s contentious proposal  appeared first on Legal Cheek.

]]>
https://www.legalcheek.com/lc-journal-posts/copyright-in-the-age-of-ai-the-uks-contentious-proposal/feed/ 0
Former solicitor blames Google for producing fake cases in strike-off appeal https://www.legalcheek.com/2025/05/former-solicitor-blames-google-for-producing-fake-cases-in-strike-off-appeal/ https://www.legalcheek.com/2025/05/former-solicitor-blames-google-for-producing-fake-cases-in-strike-off-appeal/#comments Tue, 20 May 2025 07:50:29 +0000 https://www.legalcheek.com/?p=220288 Denies using AI

The post Former solicitor blames Google for producing fake cases in strike-off appeal appeared first on Legal Cheek.

]]>

Denies using AI

Credit: Christian Wiediger on Unsplash

A former solicitor who appealed his strike-off was found by the High Court to have cited up to 27 “fake authorities” — a move that amounted to an abuse of process.

Venkateshwarlu Bandla was struck off the roll by the SRA in 2017 after it emerged that he didn’t have the correct insurance in place — despite claiming he did — and that he had abandoned his high street firm.

Bandla turned to the High Court to appeal the strike-off.

In a new ruling, Mr Justice Fordham found the former solicitor had cited over 50 cases — with more than half described as “non-existent”. In a witness statement, Bandla describes “endeavouring to identify and present cases bearing the closest resemblance to this appeal”.

The SRA’s research found that certain cases “did not exist.” Other cases could not be found, contained incorrect details, or did not say what Bandla claimed they did, although Mr Justice Fordham suggested that two of the 27 non-existent authorities may have been typos.


One case upon which the former solicitor relied had a summary recorded in the judgment. Yet neither the judge nor the SRA could locate this case. Bandla admitted he “did not write this summary himself”.

Fordham J added:

“He denied using AI or any source identifiable as AI. He claimed to have simply used a Google search for “case law in support of mental health problems”. He accepts that this case, and many other cases which he cited to this Court, do not in fact exist. He told me that he never “double-verified” them. He later accepted that he never checked them at all.”

When asked why the court should not strike out his appeal for using the non-existent cases, Bandla argued that the “substance of the points which were being put forward in the grounds of appeal were sound, even if the authority which was being cited for those points did not exist”.

The judge was “wholly unpersuaded” by that. Fordham J emphasised the need to protect the court’s integrity against fake cases, and was especially vexed that the appellant was a former lawyer. He struck out the appeal as an abuse of process.

The ruling separates the substantive legal judgment from “two further features”: one was the citation of fake authorities; the other, inaccurate CVs. Bandla blamed those errors on “old MS Word or due to other reason”.

The post Former solicitor blames Google for producing fake cases in strike-off appeal appeared first on Legal Cheek.

]]>
https://www.legalcheek.com/2025/05/former-solicitor-blames-google-for-producing-fake-cases-in-strike-off-appeal/feed/ 1
AI and the erosion of artistic integrity https://www.legalcheek.com/lc-journal-posts/ai-and-the-erosion-of-artistic-integrity-a-comparative-copyright-law-analysis/ https://www.legalcheek.com/lc-journal-posts/ai-and-the-erosion-of-artistic-integrity-a-comparative-copyright-law-analysis/#comments Thu, 15 May 2025 05:25:33 +0000 https://www.legalcheek.com/?post_type=lc-journal-posts&p=218459 Leeds law school grad, Mohammad Anas, takes a deep-dive into the ramifications of Ghibli-style images on copyright law in the era of generative AI

The post AI and the erosion of artistic integrity appeared first on Legal Cheek.

]]>

Leeds law school grad Mohammad Anas takes a deep-dive into the ramifications of Ghibli-style images on copyright law in the era of generative AI


The rapid advancement of artificial intelligence (AI) presents a formidable challenge to the legal and ethical underpinnings of artistic expression, threatening the integrity of human creativity. This article examines the 2024 proliferation of Studio Ghibli-style images on X (formerly Twitter) as a pivotal case study, analysing how AI-generated works strain the copyright frameworks of the United Kingdom, the European Union, and the United States.

Through a review of statutory provisions, judicial precedents, and ethical considerations, it exposes systemic deficiencies in current law and the erosion of artistic identity, exemplified by Ghibli’s anti-war, pro-earth, pro-humanity, and anti-consumerist principles.

The threat to creative authorship

In a Tokyo studio, Hayao Miyazaki meticulously crafts Princess Mononoke (1997), each frame a testament to decades of artistic mastery and a philosophy rooted in pacifism, ecological reverence, human dignity, and resistance to consumerism. By 2024, this vision was being replicated on X through AI-generated images of lush landscapes and ethereal figures that echo Ghibli’s aesthetic yet lack its purposeful soul. Concurrently, cartoonist Sarah Andersen confronts the unauthorised appropriation of her distinctive comic style by Stable Diffusion, her creative identity reduced to uncredited algorithmic outputs.

This phenomenon transcends technological innovation, raising profound legal and ethical questions. As copyright systems in the UK, EU, and US grapple with AI’s non-human authorship, they reveal a critical misalignment between statutory intent and modern reality. Can these frameworks adapt to protect the human essence of art epitomised by Ghibli’s principled vision against the systematic challenge posed by generative AI?

Legal frameworks under examination

UK copyright law: A framework under pressure

The Copyright, Designs and Patents Act 1988 (CDPA) establishes protections for “original artistic works” (s.1(1)(a)), granting authors exclusive rights to control reproduction, adaptation, and distribution (ss.16–20). Infringement hinges on appropriating a “substantial part”, a qualitative standard clarified in Designers Guild Ltd v Russell Williams [2000] 1 WLR 2416. The House of Lords held that this includes both literal copying and the “look and feel” of a work, potentially applicable to AI-generated Ghibli-style images.

However, AI developers invoke the idea-expression dichotomy, upheld in Baigent v Random House [2007] EWCA Civ 247, arguing that style or thematic inspiration falls outside copyright protection. This defence is contested by Temple Island Collections Ltd v New English Teas [2012] EWPCC 1, which protects aesthetic arrangements. Ghibli’s deliberate fusion of anti-consumerist narratives and visual coherence arguably meets this threshold, suggesting a basis for protection against AI mimicry.

Section 9(3) of the CDPA further complicates the issue, attributing authorship of computer-generated works to the person arranging for their creation. In the context of AI, where training datasets are vast and often scraped without consent, this attribution becomes untenable. The case of Getty Images v Stability AI [2023] EWHC challenges the legality of mass scraping under s.17(2), highlighting the lack of clarity on data provenance and leaving artists vulnerable.

Despite these challenges, UK law continues to confront the issue head-on, with some legal scholars proposing that AI-generated works be viewed through a lens of fair use or transformative rights, which could offer a more balanced approach. Others argue that additional protections should be established to address the evolving nature of artistic authorship in the AI age.

EU copyright law: A doctrine misaligned

The EU’s copyright regime, anchored in Directive 2001/29/EC (InfoSoc Directive), requires protection based on the “author’s intellectual creation” (Infopaq International A/S v Danske Dagblades Forening, C-5/08). Ghibli’s works exemplify this, with Miyazaki’s pro-humanity ethos and ecological advocacy. AI-generated facsimiles, however, lack a human author, exploiting a doctrinal gap that undermines this foundation.

The Court of Justice’s ruling in Painer v Standard Verlags GmbH (C-145/10) protects stylistic choices reflecting an author’s personality, yet AI outputs derived from aggregated data challenge this precedent. The EU AI Act (Regulation 2024/1689) mandates data usage disclosure, but its enforcement mechanisms remain superficial, offering limited protection for artists. The current regulatory framework struggles to maintain the balance between allowing AI-driven innovation and preserving the authenticity of artistic authorship.

In practical terms, the lack of data transparency by AI companies poses a significant challenge to the effective enforcement of existing regulations. While some suggest that the AI Act could be a step forward, its applicability to art and creative industries remains unclear and may require further revisions to adequately address AI’s potential to mimic existing styles without permission.

Want to write for the Legal Cheek Journal?

Find out more

US copyright law: A system unprepared

US copyright law demands human authorship, a principle established in Burrow-Giles Lithographic Co. v. Sarony (1884). The Copyright Office’s 2023 decision on Zarya of the Dawn codifies this, denying protection to AI-generated works. However, this stance leaves rights holders without recourse, particularly concerning the appropriation of existing styles like Ghibli’s anti-war landscapes.

AI developers exploit the low originality threshold established in Feist Publications v. Rural Telephone Service (1991), claiming their outputs transform rather than copy, despite relying on copyrighted inputs. In Andersen v Stability AI (2023), secondary liability is explored, but proving infringement remains difficult due to AI firms’ non-disclosure of data. California’s AB 2013 (2024) mandates AI art transparency, yet federal law remains silent.

Despite these obstacles, courts are now testing how long-standing doctrines, stretching back to Nichols v Universal Pictures (1930), apply to AI-generated works, though the absence of clear guidelines leaves both artists and developers in a state of uncertainty. As the technology continues to advance, legal experts are calling for a more robust framework that can handle the complexities of AI-driven creativity.

Ethical dimensions: The value of human intent

Studio Ghibli’s creative process reflects a labour of intent. The Tale of the Princess Kaguya (2013) required 14 months of hand-drawn animation, each frame embodying a commitment to peace, ecology, and anti-consumerism. Miyazaki’s rejection of AI art as “an insult to life” resonates with Jung’s view of art as a psychological expression of the human soul, an act irreducible to algorithms. Sarah Andersen’s distress, “my identity, digested by a machine,” highlights the violation of creative agency when her style is mechanised without consent.

The intentionality behind human art is a critical element that AI-generated works cannot replicate. While AI can generate content that mimics existing styles, it lacks the deeper emotional and philosophical contexts that underpin human creation. This absence of agency raises ethical questions about authenticity, responsibility, and the commodification of art in an AI-driven landscape.

Judicial precedents: Seeking clarity

In the UK, Temple Island (2012) protects aesthetic coherence, while Designers Guild (2000) clarifies the “substantial part” standard. In the EU, Painer (2011) safeguards stylistic individuality, while Football Dataco (2012) defends curated effort. In the US, Nichols (1930) fails against AI’s complexity, though Getty v Stability AI (2023) signals a judicial shift toward accountability.

A call for reform: Strengthening legal protections

The UK’s CDPA revisions are stalled, the EU’s AI Act lacks enforceable specificity, and US federal law lags. Current frameworks, built for human authorship, fail to address AI’s appropriation of thematic essence, exposing a critical regulatory gap. Several reforms could help bridge this gap, ensuring artists are better protected while allowing for the responsible use of AI in creative industries.

Proposed reforms

    1. Mandatory data transparency: Require AI developers to submit training dataset inventories to public registries, verified biannually. Non-compliance should incur fines.
    2. Strict liability standards: Impose liability for unlicensed use of copyrighted styles, with statutory damages tied to commercial exploitation.
    3. Redefining ‘substantial part’: Expand the term to include thematic consistency and philosophical intent.
    4. Artist empowerment mechanisms: Create opt-in registries for creators to license or prohibit AI use of their works.

These reforms shift the burden to AI developers, ensuring artists retain control over their creative legacies.

Conclusion: Upholding the human core of art

The proliferation of AI-generated Ghibli-style images exposes the inadequacies of copyright law in confronting non-human authorship. Yet, the resilience of human creativity persists in its intentionality, vividly embodied in Ghibli’s rejection of war, reverence for nature, celebration of humanity, and critique of consumerism. These principles, forged through deliberate labour, distinguish art from mechanical imitation. Legal systems must transcend reactive measures and adopt robust transparency and accountability, ensuring creativity remains a human endeavour.

Mohammad Anas is an aspiring solicitor who recently completed his LLB from the University of Leeds, with a strong interest in corporate law, banking and finance, and intellectual property.

The Legal Cheek Journal is sponsored by LPC Law.

The post AI and the erosion of artistic integrity appeared first on Legal Cheek.

]]>
https://www.legalcheek.com/lc-journal-posts/ai-and-the-erosion-of-artistic-integrity-a-comparative-copyright-law-analysis/feed/ 2
Should aspiring lawyers embrace AI? Decoding the mixed messages https://www.legalcheek.com/2025/05/should-aspiring-lawyers-embrace-ai-decoding-the-mixed-messages/ https://www.legalcheek.com/2025/05/should-aspiring-lawyers-embrace-ai-decoding-the-mixed-messages/#comments Wed, 14 May 2025 07:54:33 +0000 https://www.legalcheek.com/?p=219637 The Legal Cheek team explore the conflicting range of advice on AI – listen now 🎙️

The post Should aspiring lawyers embrace AI? Decoding the mixed messages appeared first on Legal Cheek.

]]>

The Legal Cheek team explore conflicting advice on AI in the legal profession — listen now 🎙


The Legal Cheek podcast returns this week as writers Lydia Fontes and Angus Simpson dig into the issue of AI and lawyers — covering the excitement from tech enthusiasts as well as the growing number of horror stories and cautionary tales told by sceptics. We chat through these mixed messages and ask whether law students should embrace AI and, if so, how?

From the Master of the Rolls’ feeling that judges and lawyers should embrace AI to the Bar Council’s more cautious approach, to embarrassing examples of AI-driven gaffes around the world, artificial intelligence is rarely out of the legal news. We discuss how this barrage of information can be perplexing for aspiring lawyers and share our experience of using this technology and our expectations of how it will shape our careers.

You can listen to the podcast in full via the embed above, or on Spotify and Apple Podcasts.

The post Should aspiring lawyers embrace AI? Decoding the mixed messages appeared first on Legal Cheek.

]]>
https://www.legalcheek.com/2025/05/should-aspiring-lawyers-embrace-ai-decoding-the-mixed-messages/feed/ 1
UCL law school takes on ‘AI slop’ with assessment overhaul https://www.legalcheek.com/2025/05/leading-law-school-takes-on-ai-slop-with-assessment-overhaul/ https://www.legalcheek.com/2025/05/leading-law-school-takes-on-ai-slop-with-assessment-overhaul/#comments Fri, 09 May 2025 13:54:32 +0000 https://www.legalcheek.com/?p=219692 University College London will ensure most law assessments are artificial intelligence-proof

The post UCL law school takes on ‘AI slop’ with assessment overhaul appeared first on Legal Cheek.

]]>

University College London will ensure most law assessments are artificial intelligence-proof

Photo by Surya Prasad on Unsplash

UCL law school has stated its intention to “secure” assessments against artificial intelligence (AI), declaring the shift a “response to the future” in upholding trust and integrity amid “AI slop”.

In a chunky paper published by the top law school, leading academics analyse AI in legal education — the main message being that they will ensure more than half of assessments they run cannot be completed with AI assistance. Two main reasons underpin the shift:

“Our task as a law faculty is to ensure that our degrees are, and continue to be, both transformative educational journeys and powerful, internationally recognised and durable signals of our students’ achievements. AI does not change the core of that”.

The paper defines a “secure assessment” as one which guarantees that “AI does not substitute for the skills or knowledge acquisition being evaluated.” This includes written and oral in-person examinations.

UCL’s regulations for all departments already prohibit using AI to “create or alter content”, including in assessments such as coursework, “unless explicitly authorised…for a valid pedagogical reason”. However, the move by the law school to actively “secure” assessments is a new development which will affect both undergraduate and postgraduate studies. The law school says shifting to 50% or more AI-proof assessments is a return to how things were done before the covid pandemic, since when coursework has become more common.

During the course of their legal careers, students may find themselves working with clients or in jurisdictions which are “still at the earliest stages of the digitisation of law, let alone the use of AI” and need to be prepared for this, the law school claims. Key skills like thinking on your feet in cross-examination and learning the ethical standards for working with sensitive evidence cannot be substituted by AI use, they argue.

The 2025 Legal Cheek Chambers Most List

The law school’s new approach is a response to AI tools which are continually improving. Assessors were wowed when chatbots scraped a pass mark in a 2022 Watson Glaser test and a contract exam in 2023.

Now, educators have to contend with the rise of “AI agents” which can perform certain tasks independently and proactively. ChatGPT’s recently launched “deep research” function, which can take a research question and spend as long as half an hour scanning the web to generate a lengthier, more accurate essay-style product, is a further challenge.

The academics liken AI uptake to for-profit legal databases being sold cheaply to universities decades ago. The idea, the paper says, was for universities to train students to rely on the tools to “ensure a pipeline of future customers”. Universities like UCL tried to resist by setting up the free case database BAILII, for example. Now, these academics suggest, AI companies are up to the same trick, encouraging students to rely on their products rather than develop independent skills.

Microsoft’s AI, Copilot, is particularly interesting as the company has “embedded” it into its ubiquitous office suite, as the paper points out — making it readily available to users. Shoosmiths recently announced a £1 million bonus pot if its lawyers enter one million prompts into Copilot, showing the inroads the tool has already made in the legal sector.

The 2025 Legal Cheek Firms Most List

The paper acknowledges that lawyers can and do use AI tools, but emphasises that education is different. Lawyers are in part “content creators”, the paper says, whilst students, it argues, are not. The paper says their “trusted” degrees should test “underlying skills” rather than the ability to produce content. The paper goes further, saying plentiful AI-generated media makes text “cheap” — calling it “AI slop” — so that “creativity and critical thinking” will be needed for students to use AI “masterfully”.

Educators around the world have been wrangling with AI. Victoria University of Wellington in New Zealand has this week announced that handwritten exams will return this trimester. UCL Laws uses the paper as a call to action:

“[C]onscious decisions must sit at the heart of universities’ approaches to AI and education. Universities must not be passive rule-takers. We must not simply ‘adjust to’ speculative educational and professional visions of the future marketed by technology firms in order to sell more cloud computing and increase reliance on tools with questionable and uncertain utility. Universities must steer, and if necessary, themselves create, the technology they need for their missions.”

The 2025 Legal Cheek Solicitor Apprenticeships Most List

Elsewhere, the legal profession has made historic moves to embrace AI. This month, the SRA approved Garfield.Law — a regulated law firm driven entirely by AI, a first for England and Wales. Over in the judiciary, judges recently received refreshed guidance on using tools like Copilot as well as identifying AI-generated submissions.

Meanwhile, this year the pupillage gateway prohibited AI use in applications while some law firms offered tips on how to use the technology in applications.

The somewhat mixed messaging from regulators, universities, and professional recruitment has at times made it difficult for students to know where they stand. Legal Cheek‘s latest podcast engages with this issue from aspiring lawyers’ perspectives, reflecting on personal experience and asking whether students should use AI to potentially gain skills or ignore it and risk falling behind.

The post UCL law school takes on ‘AI slop’ with assessment overhaul appeared first on Legal Cheek.

]]>
https://www.legalcheek.com/2025/05/leading-law-school-takes-on-ai-slop-with-assessment-overhaul/feed/ 1
Judge fury after ‘fake’ cases cited by rookie barrister in High Court https://www.legalcheek.com/2025/05/judge-fury-after-fake-cases-cited-by-rookie-barrister-in-high-court/ https://www.legalcheek.com/2025/05/judge-fury-after-fake-cases-cited-by-rookie-barrister-in-high-court/#comments Thu, 08 May 2025 09:27:58 +0000 https://www.legalcheek.com/?p=219563 "I consider that it would have been negligent for this barrister, if she used AI and did not check it, to put that text into her pleading," says Mr Justice Ritchie

The post Judge fury after ‘fake’ cases cited by rookie barrister in High Court appeared first on Legal Cheek.

]]>

“I consider that it would have been negligent for this barrister, if she used AI and did not check it, to put that text into her pleading,” says Mr Justice Ritchie


A High Court judge has issued a scathing ruling after multiple fictitious legal authorities were included in court submissions.

The case concerned a homeless claimant seeking accommodation from Haringey council. Things took a sharp turn when the defendant discovered five “made-up” cases in the claimant’s submissions.

Although the judge could not rule on whether artificial intelligence (AI) had been used by the lawyers for the claimant, who had not been sworn or cross-examined, he left little doubt about the seriousness of the lapse. “These were not cosmetic errors, they were substantive fakes and no proper explanation has been given for putting them into a pleading,” said Mr Justice Ritchie, adding: “I have a substantial difficulty with members of the Bar who put fake cases in statements of facts and grounds.”

He added:

“On the balance of probabilities, I consider that it would have been negligent for this barrister, if she used AI and did not check it, to put that text into her pleading. However, I am not in a position to determine whether she did use AI. I find as a fact that Ms Forey intentionally put these cases into her statement of facts and grounds, not caring whether they existed or not, because she had got them from a source which I do not know but certainly was not photocopying cases, putting them in a box and tabulating them, and certainly not from any law report. I do not accept that it is possible to photocopy a non-existent case and tabulate it.”

Mr Justice Ritchie found that the junior barrister in question, Sarah Forey of 3 Bolt Court Chambers, instructed by Haringey Law Centre solicitors, had acted improperly, unreasonably and negligently. He ordered Forey and the solicitors each to personally pay £2,000 towards Haringey Council’s legal costs.

This case has sparked discussion on social media. Writing on LinkedIn, Adam Wagner KC of Doughty Street Chambers commented on the judgment, noting that while the court didn’t confirm AI was responsible for the fake cases, “it seems a very reasonable possibility.” Wagner added:

“A.I. can be a time saver, especially if you don’t really know where to start (as sometimes happens in law!), but the key lesson is that A.I. should only ever be the *starting point* of a research or drafting task.”

The case emphasised that responsibility for accuracy lies with lawyers. This news comes after judges received refreshed guidance on spotting AI-generated submissions last month. Meanwhile, the SRA approved the first ‘AI-driven’ law firm — which claims its AI cannot propose case law, in order to avoid hallucinations.

The 2025 Legal Cheek Chambers Most List

The post Judge fury after ‘fake’ cases cited by rookie barrister in High Court appeared first on Legal Cheek.

]]>
https://www.legalcheek.com/2025/05/judge-fury-after-fake-cases-cited-by-rookie-barrister-in-high-court/feed/ 26
Book review: Susskind’s ‘How To Think About AI’  https://www.legalcheek.com/2025/05/book-review-susskinds-how-to-think-about-ai/ https://www.legalcheek.com/2025/05/book-review-susskinds-how-to-think-about-ai/#comments Thu, 08 May 2025 07:57:37 +0000 https://www.legalcheek.com/?p=219260 Polly Botsford delves into Professor Richard Susskind's 'darker' latest book and wonders: 'Where has Reassuring Richard gone?'

The post Book review: Susskind’s ‘How To Think About AI’  appeared first on Legal Cheek.

]]>

Polly Botsford delves into Professor Richard Susskind’s ‘darker’ latest book and wonders: ‘Where has Reassuring Richard gone?’

Professor Richard Susskind

Richard Susskind has always been an optimist, thinking and writing about tech and the law without being a doomsayer, never a naysayer, an everything-possible kind of guy. Even when he was talking about the end of lawyers, it sounded like a positive (even for lawyers).

But his latest book, How To Think About AI, an excellent companion to understanding where we are in all things AI, is darker around the edges. “Balancing the benefits and threats of artificial intelligence – saving humanity with and from AI – is the defining challenge of our age,” he tells us. “I am … increasingly concerned and sometimes even scared by the actual and potential problems that might arise from AI.” So he begins his chapter on ‘Categories of Risk.’ Where has Reassuring Richard gone?

And yet, of course, How To Think About AI is full of clear thinking as well as being a call to action. Susskind covers a lot of ground: alongside an exploration of the various hypotheses on where AI will take us (is it hype or the end of the world as we know it?), he recaps AI’s brief history, starting with Alan Turing’s paper ‘Computing Machinery and Intelligence’ in 1950, and including ‘AI winters’ where progress was stagnant, for example, when we all got distracted by the internet. He explores the alarming risks artificial intelligence brings, and how it might be possible to control those risks. He wraps up with a crash course in consciousness, evolution and the cosmos to explore the various theories about where all this AI business sits within the grander ideas about life, the universe and everything.

But let’s roll back to the here and now. For lawyers, Susskind gets to the core of how they (as with other professionals such as doctors or actuaries) misunderstand AI. He starts by talking us through a distinction between process-thinkers and outcomes-thinkers: the former camp thinks about something in terms of what it does, the latter in terms of the results it produces. A lawyer provides his services through building up knowledge and experience, and advises a client accordingly. That’s a lawyer’s process. But the client is not interested in the process of law or legal services, nor a lawyer’s depth of knowledge, nor their fantastic reasoning. The client is interested in specific outcomes: avoiding a legal problem, solving a legal problem, getting a dispute settled, getting certainty.

The 2025 Legal Cheek Firms Most List

Susskind points out: “Professionals have invested themselves heavily in their work – long years of education, training, and plain hard graft. Their status and often their wealth are inextricably linked to their craft.” This means their focus is on processes not outcomes. They cannot envisage another way of getting to the result that the client really wants. From an AI point of view, this is inherently limiting.

To elaborate this point, Susskind lays out the three ways in which AI will change what we do: first by automation, computerising existing tasks; then by innovation, creating completely new technological solutions (we are not designing a car that a robot will drive, we are designing driverless cars); and finally by elimination, where AI might well get rid altogether of the problems we are trying to solve. In this last category, he gives the example of the Great Manure Crisis of the 1890s, when there was so much horse poo on the streets of the world’s cities that it endangered everyone. What came along was not a machine to get rid of manure but cars. Problem eliminated. Lawyers, like other professionals, cannot imagine AI beyond automation: “They cannot conceive a paradigm different from the one in which they operate.” It’s not that AI might get rid of all legal problems, but it is very likely that many current problems simply won’t exist (and will be replaced by a whole set of new ones), and professionals are just not thinking imaginatively enough to see this. And thus Susskind warns us: “Pay heed, professionals – the competition that kills you won’t look like you.”

When it comes to the courts and justice, Susskind has long campaigned for these to be massively updated by using technology. His chapter, ‘Updating the Justice System’, follows in the same vein, only more adamantly so. For instance, on lawmaking, AI will require “legislative processes that can generate laws and regulations in weeks rather than years”. On courts, he reminds us of his book Online Courts and the Future of Justice, where he set out a blueprint for digitised dispute resolution that engaged real judges only as a last resort.

But he also argues we will need new legal concepts. Intellectual property law will need to be completely ‘reconceptualised’. And if AI systems will be more like “high-performing aliens,” he argues (borrowing a description by a contemporary historian), we will need a new form of legal personality: “if we are to endow AI systems with appropriate entitlements, impose sensible duties on them, and grant them fitting permissions.”

How To Think About AI is an unsettling but informative read. To be enlightened is to be alarmed is to be armed when it comes to AI; so every professional needs to read this book now – before it’s too late.

Professor Richard Susskind OBE is the author of ten books about the future. His latest, How To Think About AI, is now available. He wrote his doctorate on AI and law at Oxford University in the mid-80s.

The post Book review: Susskind’s ‘How To Think About AI’  appeared first on Legal Cheek.

]]>
https://www.legalcheek.com/2025/05/book-review-susskinds-how-to-think-about-ai/feed/ 6
UAE turns to AI to draft the laws of the future https://www.legalcheek.com/2025/04/uae-turns-to-ai-to-draft-the-laws-of-the-future/ https://www.legalcheek.com/2025/04/uae-turns-to-ai-to-draft-the-laws-of-the-future/#comments Wed, 23 Apr 2025 07:51:23 +0000 https://www.legalcheek.com/?p=218520 Making legislative system 'faster and more precise'

The post UAE turns to AI to draft the laws of the future appeared first on Legal Cheek.

]]>

Making legislative system ‘faster and more precise’

Credit: Sheikh Mohammed bin Rashid Al Maktoum via X

The United Arab Emirates is poised to become the first country in the world to use artificial intelligence to assist in lawmaking.

Announced last week by Sheikh Mohammed bin Rashid Al Maktoum, Vice President and ruler of Dubai, the country will be the first to use AI to help write new legislation and review and amend existing laws – rather than merely to improve efficiency. The Regulatory Intelligence Office will oversee what the government is calling an “AI-driven legislative system”, capable of analysing court rulings, executive procedures and the real-world impact of laws, then proposing reforms in real time.

“This new legislative system, powered by artificial intelligence, will change how we create laws, making the process faster and more precise,” said Sheikh Mohammed on X, describing the change as a “paradigm shift” in government.

The system aims to create laws that reflect the needs of the UAE’s economy and diverse population. In practice, this means AI will help draft laws in multiple languages and plain terms, designed to be understood by everyone.

A key driver is efficiency: the new role for AI could cut legislative processing times by up to 70%, while providing recommendations based on global best practices — something Emirati leaders say is essential to keeping pace with technological and economic transformation.

This isn’t just using AI to write laws, according to UAE solicitor and law drafter Hesham Elrafei, speaking to The Telegraph. “It’s introducing a whole new way of making them. Instead of the traditional parliamentary model – where laws get stuck in endless political debates and take years to pass – this approach is faster, clearer, and based on solving real problems.”

The move builds on the UAE’s earlier investments in AI infrastructure, including the 2017 appointment of the world’s first AI minister and the launch of major funding initiatives like MGX, which backs AI research and global investment projects.

However, international experts have raised concerns about handing legislative responsibilities to AI. Researchers from Oxford University and elsewhere warn that generative AI models remain prone to “hallucinations”, bias and unpredictable reasoning which could have serious consequences if left unchecked in legal systems.

The post UAE turns to AI to draft the laws of the future appeared first on Legal Cheek.

]]>
https://www.legalcheek.com/2025/04/uae-turns-to-ai-to-draft-the-laws-of-the-future/feed/ 14