artificial intelligence Archives - Legal Cheek
Legal news, insider insight and careers advice

Lawyers using AI to boost billable hours, report finds (2 September 2025)

Prioritising additional chargeable work over potential work–life balance benefits

Lawyers are increasingly using AI tools to drive up billable hours, with more than half admitting they spend the time saved by automation on extra chargeable work.

The findings come from a new report by LexisNexis, The AI Culture Clash, which shows 61% of lawyers now use AI in their day-to-day work, up from 46% in January 2025.

Most lawyers (56%) said they used the time saved with AI to increase billable work, while nearly as many (53%) used it to improve their work-life balance.

Associates across firms of all sizes are prioritising billable work over wellbeing, with larger firms in particular focusing on the commercial gains AI can deliver.

“Lawyers are proving that AI delivers clear commercial returns,” said Stuart Greenhill, senior director of segment management at LexisNexis UK. “They’re using it to increase billable hours, rethink pricing models, and deliver more value to clients. Firms that treat AI as a strategic investment, not just an efficiency tool, will gain a decisive edge in profitability and client satisfaction.”

Despite the surge in usage, the report highlights a cultural lag. Only 17% of lawyers said AI is fully embedded in their firm’s strategy and operations, while two-thirds reported their organisation’s AI culture is slow or non-existent.

Confidence is highest among those using tools designed specifically for the legal sector, with 88% of users reporting greater trust in outputs grounded in verified legal sources. This follows several high-profile incidents where lawyers used general AI tools, only to discover that the tools had fabricated cases, which were then inadvertently included in legal submissions.

The research also warned of a potential talent risk for firms that fall behind. Nearly one in five lawyers said they would consider leaving their organisation if it failed to adequately invest in AI — a figure that jumps to 26% at large firms.

Almost half of lawyers (47%) now expect AI to transform billing models, up from 40% earlier this year, with law firm leaders and general counsel among the most attuned to the shift.

From courtroom to code: How AI is shaping our legal system (18 August 2025)

BPP SQE student Eve Sprigings examines whether AI in the courtroom will enhance or erode justice

Artificial Intelligence isn’t just changing how we live and work — it’s quietly transforming the very foundations of justice in the UK. From courtrooms to corporate boardrooms, AI tools are reshaping how judges decide cases, how precedents evolve and how law firms operate. But as AI gains influence, it challenges age-old legal traditions, raising urgent questions about fairness, transparency and the very future of human judgment. Welcome to the new frontier where bytes meet briefs, and algorithms might just rewrite the law.

AI in the judiciary: A double-edged sword for legal precedents

AI’s potential to reshape binding precedents is no longer theoretical. The rise of predictive legal technologies is a prime example. Consider Garfield Law — the first AI-powered law firm recognised in England and Wales — using AI-led litigation to handle routine matters such as unpaid debt in small claims courts. Whilst this could make legal processes cheaper and faster, it arguably raises questions about maintaining quality and public trust where human judgment has historically been paramount.

Other AI tools, such as ROSS Intelligence and Legal Robot, which help lawyers analyse judicial reasoning patterns, also challenge the ethics of today’s restricted access to case law. For example, ROSS’s antitrust claim “ENOUGH” challenged the legal paywalls imposed by large platforms such as Thomson Reuters’ Westlaw, pushing for broader access to public case law. Though not yet part of judicial decision-making, these AI systems hint at a future where algorithms influence precedent and legal interpretation, challenging outdated, gated legal services.

Digitisation has been transforming legal practice since the rise of the internet, and AI is taking this further. AI systems can process vast legal databases, potentially highlighting new interpretations or trends that allow legal doctrine to evolve.

A University of Cambridge study highlights AI’s ability to detect judicial decision patterns through case trend analysis, suggesting future shifts in legal standards. But it’s not flawless: AI can both augment and undermine the rule of law, reminding us that error and bias remain concerns.

The human element in AI-assisted justice

Human oversight remains critical. Researchers at the Alan Turing Institute and Northumbria University have scrutinised AI-assisted sentencing tools for errors and procedural irregularities. These considerations underscore the need for transparency, accountability and human reasoning at the heart of justice, even as automated decision-making grows.

Tools like Harm Assessment Risk Tool (HART), used since 2017 to predict reoffending risks in England and Wales, are already influencing custodial decisions and police work. Such data-driven algorithms may reshape sentencing precedents, but concerns about socio-demographic bias — such as postcode-based discrimination — highlight challenges in balancing data insights with fairness.

AI and technology law: Intellectual property and beyond

AI’s impact on technology law, especially intellectual property (IP), raises thorny questions. Professor Ryan Abbott’s “The Reasonable Robot” explores whether AI-generated inventions deserve patent protection. The European Patent Office’s 2023 ruling on AI inventorship highlights ongoing legal uncertainty around AI’s ownership rights, signalling IP law’s rapid evolution.

UK parliamentary debates this year reflect broader concerns — AI is poised to reshape corporate governance, case management, and dispute resolution. Internationally, AI’s geopolitical importance grows: for instance, US-Saudi talks over Nvidia’s AI chips reveal AI as a new diplomatic currency, overtaking oil as the trade driver.

China’s “Smart Courts”, launched by the Supreme People’s Court in 2016, offer a glimpse of AI-driven judicial innovation. Originally focused on routine matters such as traffic cases, these courts enabled smooth transitions to online procedures during COVID-19, balancing technological efficiency with legal norms. They demonstrate that AI’s role in justice need not be about replacing human judgment but about streamlining administration and meeting court deadlines.

One notable case illustrating AI’s complexity in IP is Li v Liu [2023], decided by the Beijing Internet Court. It involved an AI-generated image created with Stable Diffusion, and the court considered copyright infringement claims amid AI’s growing role in artistic creation. Such decisions remain highly case-specific, reflecting how nascent AI law still is.

AI beyond tech: Transforming wider legal practice

AI’s reach extends well beyond tech law. Automated contract drafting and predictive analytics now assist employment law firms in anticipating disputes, while recruitment agencies deploy AI tools to screen candidates—though risks of biased outcomes remain a worry.

Data privacy law, particularly under the UK General Data Protection Regulation (UK GDPR), exemplifies AI’s regulatory challenges. Companies increasingly use AI to ensure compliance, pushing legal governance toward greater rigour and transparency. AI isn’t just shaping law; it’s reshaping how firms manage legal risk internally.

AI in court operations: Building a new legal infrastructure

UK courts are rapidly digitising, with AI-driven tools streamlining everything from e-filing and case scheduling to virtual hearings. The HM Courts & Tribunals Service (HMCTS) employs AI to enhance operational efficiency, helping courts focus more on substantive justice rather than administrative logistics.

Online dispute resolution (ODR) systems powered by AI are also gaining ground, especially for small claims—reducing backlog and improving access. Yet critics warn that sensitive cases, such as family law disputes, demand nuanced human judgment that AI cannot replace.

Returning to China’s experience, its Smart Courts show that balanced AI use, strictly monitored and focused on organisational efficiency, can reduce backlog and enhance judicial fairness without undermining human decision-making. Systems like Shanghai’s “206 system” use AI for evidence analysis and sentencing support, illustrating how technology can create a more cost-effective, straightforward judiciary.

Conclusion: The future of law in an AI-driven world

AI is no futuristic fantasy—it’s here, reshaping the UK’s judiciary and legal culture with unprecedented speed. As AI influences criminal justice and beyond, ethical concerns about bias and judicial independence demand ongoing scrutiny.

The British Computer Society (BCS) notes AI’s potential to support health and social care decisions, mirroring AI’s intended role in law: to assist—not replace—human roles. Garfield Law’s pioneering AI-driven model exemplifies this future, easing public sector burdens whilst maintaining core legal values.

Whether AI becomes a subtle tool enhancing judicial reasoning or a key player in shaping legal norms, the next decade will see it fundamentally alter UK law. This shift offers fresh opportunities for emerging legal sectors but also challenges traditional case law and statutes that underpin our legal culture—wiping away centuries of tradition almost as swiftly as a digital swipe.

Worldwide, governments are in a high-tech arms race to regulate AI-related IP, compliance, and broader legal issues, seeking a delicate balance between protecting national priorities and fostering technological innovation.

The challenge? Ensuring that AI strengthens justice rather than dilutes it — guarding the human heart of law even as machines take their place in the courtroom.

Eve Sprigings is a law graduate currently undertaking the SQE Diploma at BPP University. She has gained experience across chambers, commercial law firms and international legal settings, with a focus on legal research and contract analysis in both contentious and non-contentious matters. Eve has a strong interest in commercial and corporate law, as well as data protection, and is passionate about making modern legal frameworks accessible and understandable to all.

The Legal Cheek Journal is sponsored by LPC Law.

Should AI be given legal personhood? New Law Commission paper raises ‘radical’ possibility (7 August 2025)

No legal status… for now

A new Law Commission discussion paper has floated the once sci-fi idea of giving artificial intelligence (AI) systems their own legal personality — meaning they could, in theory, be sued, held liable for harm or even pay damages.

The paper, titled AI and the Law, explores the legal challenges posed by the rise of autonomous, adaptive AI, including who should be liable when AI systems act independently and cause harm. While the paper stops short of proposing specific reforms, it suggests that a “potentially radical option” could be “granting some form of legal personality to AI systems”.

Currently, AI cannot be held legally liable as it has no legal status. But with AI systems becoming increasingly sophisticated and capable of completing complex tasks with little or no human input, the Law Commission warns that “liability gaps” could emerge where “no natural or legal person is liable for the harms caused by, or the other conduct of, an AI system”.

The paper states: “Current AI systems may not be sufficiently advanced to warrant this reform option. But given the rapid pace of AI development, and the potentially increasing rate of pace of development, it is pertinent to consider whether AI legal personality requires further discussion now, in the event that such highly advanced AI arrives in the near future.”

Legal personality — the ability to be sued or held accountable — is currently limited to natural persons (humans) and legal persons (such as companies). Extending it to AI systems would be unprecedented, and the Commission acknowledges this would represent a significant shift in legal thinking.

The core problem arises when AI acts autonomously, making decisions that cannot easily be traced back to a developer or user. The Commission points out that “AI systems do not currently have separate legal personality and therefore can neither be sued or prosecuted”.

In such cases, victims might struggle to obtain compensation, or the state could be left requiring “assistance at public expense”. The Commission warns that this legal uncertainty could also hinder innovation, for instance by impeding insurance for AI-related risks.

While the idea of AI personhood remains speculative, the Commission argues that now is the time to discuss it, given the “rapidly expanding use of AI” and its likely impact across areas including product liability, public law, criminal law and intellectual property.

In the meantime, the Commission plans to monitor the legal impact of AI across its wider law reform work. It has already looked at AI in automated vehicles and deepfakes, with projects underway on aviation autonomy and product liability.

For now, AI remains a tool, not a person, but as the Commission notes: “It is not yet clear that those same [legal] systems will apply equally well to new technology that is also intelligent to varying degrees.”

Government embraces AI in bid to speed up justice (5 August 2025)

New plan aims to transform courts system with AI-powered tools, reduce backlogs and boost efficiency

The Ministry of Justice (MoJ) has unveiled an ambitious new strategy to roll out AI across courts, prisons and probation services in England and Wales, aiming to deliver faster, fairer and more efficient justice.

The AI Action Plan for Justice, released this week and backed by the Prime Minister and Lord Chancellor, outlines how the department plans to embed AI tools across the justice system over the next three years. It marks the first plan of its kind in the UK and sets out over 60 initiatives, including AI-powered chatbots, transcription tools and predictive risk models.

The plan is built around three priorities: strengthening the system’s technical foundations, embedding AI into public-facing and operational services and investing in staff skills and partnerships. A newly appointed Chief AI Officer will lead a dedicated Justice AI Unit tasked with coordinating AI projects and ensuring public trust through robust ethical oversight. The MoJ also announced a Justice AI Fellowship to attract top AI talent from other industries and universities into government.

According to the MoJ, AI technologies are already being used in pilot schemes to help staff transcribe probation meetings, summarise court bundles and automate paperwork.

The department says it will provide secure AI assistants to all 95,000 staff by the end of 2025. These tools will support everyday tasks such as drafting, searching, and scheduling, with initial pilots reportedly saving staff 30 minutes per day on average.

The MoJ also pointed to examples from the private sector, including the use of AI in a high-profile criminal trial at the Old Bailey. Defence lawyers used AI software by UK firm Luminance to analyse 10,000 documents, reportedly saving £50,000 in costs and four weeks in review time.

Meanwhile, the Solicitors Regulation Authority (SRA) has approved the world’s first AI-driven law firm, Garfield AI, which helps businesses recover small debts via automated pre-action processes. The MoJ suggests such innovations could reduce pressure on the courts and improve access to justice.

While the plan highlights AI’s potential to reduce backlogs and improve decision-making, it also stresses the need for responsible adoption. “AI should support, not substitute, human judgment,” the report states, pledging to protect judicial independence and avoid algorithmic bias. AI tools that affect individual rights, such as risk assessments in custody, will face rigorous testing and oversight, with all use cases published for public scrutiny.

The MoJ is also working closely with regulators including the SRA and Bar Standards Board to ensure the legal sector’s approach to AI remains proportionate and evidence-based.

Critics, however, warn that infrastructure gaps and funding uncertainty could hinder progress. The MoJ has secured initial funding for its AI plan but notes that “long-term, sustained funding” is needed to scale successful pilots.

The department is also exploring new procurement models to help smaller UK AI firms secure government contracts. Initiatives such as ‘Reverse Pitch’ events allow startups to co-design solutions with MoJ staff, with several SMEs already developing tools for offender education and digital learning.

Year one of the rollout (from April 2025) will focus on “early wins”, such as scaling AI assistants, testing AI-powered search and transcription tools, and piloting citizen-facing chatbots to help the public navigate legal services. More advanced applications, like predictive models for sentencing and risk, will be tested later, subject to judicial and ethical review.

AI in court: rights, responsibilities and regulation (24 July 2025)

Birmingham Uni student James Bloomberg explores the challenges that AI poses to the justice system and concepts of legal personhood

The advancement of artificial intelligence (AI) presents a complex challenge to contemporary legal and ethical frameworks, particularly within judicial systems. This article explores the evolving role of AI in the courtroom, drawing on recent high-profile cases involving fabricated legal citations and algorithmic hallucinations. It examines how AI’s integration into legal research and decision-making strains traditional understandings of accountability, responsibility and legal personhood. The discussion also considers AI’s broader societal impact.

The advancement of technology in recent years has produced a seismic shift in how societies interact, how businesses operate and how governments can regulate this change. AI is now a driving force changing how we live our lives and how students work at university; most of all, its ability to make quick decisions raises red flags, especially for law firms. With AI becoming part of everyday life, now built into WhatsApp, X (formerly Twitter) and elsewhere, a question has been raised: should AI be granted legal rights? This discussion, far from hypothetical, would challenge existing legal frameworks and ultimately lead to questions about the societal and ethical implications of recognising AI as a legal entity.

Article 6 of the Universal Declaration of Human Rights addresses legal personhood: the status by which an entity is granted the ability to hold rights and duties in a legal system. This can encompass owning property, acting and being held responsible for those actions, or exercising rights and obligations, such as entering into a contract. Corporations have long been granted legal personhood. However, applying the same concept to AI systems such as ChatGPT introduces complexities that transcend current legal definitions. The European Parliament has previously explored whether AI systems should be granted a form of legal status to address accountability issues, particularly in cases where harm is caused by autonomous systems.

In 2024, a Canadian lawyer used an AI chatbot for legal research, which produced “fictitious” cases in a child custody matter before the British Columbia Supreme Court. This was raised by the lawyers for the children’s mother, as they could not find any record of these cases. The underlying dispute concerned a father seeking to take the children on an overseas trip whilst locked in a separation dispute with the children’s mother. This is an example of how dangerous AI systems can be, and why lawyers today need to use AI as an assistant, not as a cheat sheet. However, who is to blame here: the lawyers or the AI chatbot?

A major argument against granting AI legal personhood is that it would contradict fundamental human rights principles. The High-Level Expert Group on Artificial Intelligence (AI HLEG) strongly opposes this notion, emphasising that legal personhood for AI systems is “fundamentally inconsistent with the principle of human agency, accountability, and responsibility”. AI lacks consciousness, intent, and moral reasoning — characteristics that underpin legal rights and responsibilities. Unlike humans or even corporations (which operate under human guidance), AI lacks an inherent capacity for ethical decision-making beyond its programmed constraints.

Another central issue is accountability. If AI were granted legal rights, would it also bear responsibilities? Who would be liable for its actions?

Another case saw a federal judge in San Jose, California, order AI company Anthropic to respond to allegations that it submitted a court filing containing a ‘hallucination’ created by AI as part of its defence against copyright claims brought by a group of music publishers. The allegation was that an Anthropic data scientist cited a non-existent academic article to bolster the company’s argument in a dispute over evidence. Clarity is currently needed as to whether liability for AI-related harm lies with developers, manufacturers or users.

In the UK, the allocation of liability for AI-related harm is primarily governed by existing legal frameworks: the common law of negligence and product liability principles. Under the Consumer Protection Act 1987, for example, manufacturers and producers can be held strictly liable for defective products that cause damage, which could in theory extend to AI systems and software if they are deemed products under the Act. Developers and manufacturers may also face liability in negligence if it can be shown that they failed to exercise reasonable care in the design, development or deployment of AI systems, resulting in foreseeable harm. Users, such as businesses or individuals deploying AI, may be liable if their misuse or inadequate supervision of the technology leads to damage. While there is currently no bespoke UK statute specifically addressing AI liability, the Law Commission and other regulatory bodies have recognised the need for reform and are actively reviewing whether new, AI-specific liability regimes are required to address the unique challenges posed by autonomous systems.

Conferring legal personhood on AI may create situations where accountability is obscured, allowing corporations or individuals to evade responsibility by attributing actions to an “autonomous” entity.

Further, AI decision-making lacks transparency as it often operates through black-box algorithms, raising serious ethical and legal concerns, particularly when AI systems make decisions that affect employment, healthcare, or criminal justice. The European Parliament’s Science and Technology Options Assessment (STOA) study has proposed enhanced regulatory oversight, including algorithmic impact assessments, to address transparency and accountability. Granting AI legal rights without resolving these issues would only increase the risk of unchecked algorithmic bias.

The ethical implications extend beyond legal considerations. AI’s increasing autonomy in creative and economic spaces, such as AI-generated art, music, and literature has raised questions about intellectual property ownership. Traditionally, copyright and patent laws protect human creators, but should AI-generated works receive similar protections? In the UK, for example, computer-generated works are protected under copyright law, yet ownership remains tied to the creator of the AI system rather than the AI itself. Under the Copyright, Designs and Patents Act 1988, section 9(3), the author of a computer-generated work is defined as “the person by whom the arrangements necessary for the creation of the work are undertaken.” This means that, in the UK, copyright subsists in AI-generated works, but the rights vest in the human creator or operator, not the AI system itself. Recognising AI as a rights-holder could challenge these conventions, necessitating a re-evaluation of intellectual property laws.

A potential middle ground involves the implementation of stringent governance models that prioritise accountability without conferring rights upon AI. Instead of granting legal personhood, policymakers could focus on AI-specific liability structures, enforceable ethical guidelines, and greater transparency in AI decision-making processes. The European Commission has already initiated discussions on adapting liability frameworks to address AI’s unique challenges, ensuring that responsibility remains clearly assigned.

While AI continues to evolve, the legal framework governing its use and accountability must remain firmly rooted in principles of human responsibility. AI should be regulated as a tool, albeit an advanced one, rather than as an autonomous entity deserving of rights. Strengthening existing regulations, enhancing transparency, and enforcing accountability measures remain the most effective means of addressing the challenges posed by AI.

The delay in implementing robust AI governance has already resulted in widespread ethical and legal dilemmas, from biased decision-making to privacy infringements. While AI’s potential is undeniable, legal recognition should not precede comprehensive regulatory safeguards. A cautious, human-centric approach remains the best course to ensure AI serves societal interests without compromising fundamental legal principles.

While it is tempting to explore futuristic possibilities of AI personhood, legal rights should remain exclusively human. The law must evolve to manage AI’s risks, but not in a way that grants rights to entities incapable of moral reasoning. For now, AI must remain a tool, not a rights-holder.

James Bloomberg is a second year human sciences student at the University of Birmingham. He has a strong interest in AI, research and innovation and plans to pursue a career as a commercial lawyer.

Will AI really replace paralegals? (9 July 2025)

The Legal Cheek team discuss AI and the future of legal jobs — listen now 🎙️

The Legal Cheek Podcast returns this week as publisher Alex Aldridge and writer Lydia Fontes discuss two stories that have made the legal news in recent weeks.

In this week’s episode, we dig into the unorthodox tactics of Thomas Isaacs — the aspiring barrister who went viral on LinkedIn for taking his job search back to basics. Is this a brilliant new strategy to get noticed in an increasingly competitive job market? And what is this first-class AI and computer science graduate doing becoming a lawyer in the first place?

We also discuss the news that the “Godfather of AI”, Geoffrey Hinton, told the Diary of a CEO podcast that AI could spell trouble for paralegals, asking how often these sorts of predictions really come true. Is the legal market as vulnerable to AI replacement as is often made out?

You can listen to the podcast in full on Spotify and Apple Podcasts.

‘We rather like hanging around law libraries’: Top judges reveal their attitudes towards AI https://www.legalcheek.com/2025/06/we-rather-like-hanging-around-law-libraries-top-judges-reveal-their-attitudes-towards-ai/ https://www.legalcheek.com/2025/06/we-rather-like-hanging-around-law-libraries-top-judges-reveal-their-attitudes-towards-ai/#comments Mon, 23 Jun 2025 08:15:38 +0000 https://www.legalcheek.com/?p=221592 Human input remains crucial

The post ‘We rather like hanging around law libraries’: Top judges reveal their attitudes towards AI appeared first on Legal Cheek.



A new report has shed light on senior judges’ views on AI, highlighting both the areas of their work that could benefit from the technology and their concerns about its use.

The study used focus‑group discussions with 12 judges from across the UK judicial hierarchy, including five members of the Supreme Court, to dig into a range of attitudes towards AI.

A certain amount of enthusiasm for the technology was shown. One judge summarised AI’s benefits as “increasing productivity, reducing cost and reducing some of the drain on resources that we all have”, while another commented that “Anything that can improve efficiency and productivity whilst ensuring we don’t lose the essence of what justice is, is exciting and to be welcomed.”

The discussion on productivity seems to have centred on “boring” bulk administrative tasks such as disclosure exercises, bundling and summarising cases. There were also suggestions that AI could handle “small claims”, as well as the possibility of using it to create versions of judgments that could be easily understood by a child or by the general public.

Despite these benefits, there were many areas of judicial work where the judges felt a human element was necessary. In cases which involve prison sentences or removing children from their parents, this human element was thought to be especially important. “People take comfort from having a human face, a human decision maker,” one participant commented. There was additional concern about the decisions AI might make in these sensitive cases. “Law is not a matter of pure logic,” was the view of one judge; “you need a practical, humane result to a problem if it’s humanly possible.”

Furthermore, a sense emerges from the report that these judges like being judges and are reluctant to share the most interesting parts of their workload with AI. “There’s a lot of people here [in the judiciary] who rather like hanging around law libraries,” one commented, reflecting on the “satisfaction of problem solving” and “following the footnotes.”

To the suggestion that AI could be used to produce judgments, another participant objected on the grounds that, “each of us, I think, enjoys writing, possibly in our own style.” Another comment reads, “When we come out of a case, we all meet together and discuss what we think about it and why. We can’t have a room of robots doing that.”


‘Godfather of AI’ thinks tech won’t hurt plumbers — but could spell trouble for paralegals https://www.legalcheek.com/2025/06/godfather-of-ai-thinks-tech-wont-hurt-plumbers-but-could-spell-trouble-for-paralegals/ https://www.legalcheek.com/2025/06/godfather-of-ai-thinks-tech-wont-hurt-plumbers-but-could-spell-trouble-for-paralegals/#comments Wed, 18 Jun 2025 07:50:18 +0000 https://www.legalcheek.com/?p=221478 Intellectual labour at risk of replacement says Geoffrey Hinton

The post ‘Godfather of AI’ thinks tech won’t hurt plumbers — but could spell trouble for paralegals appeared first on Legal Cheek.



The computer scientist dubbed ‘the Godfather of AI’ has identified legal assistants and paralegals as among the roles most at risk of replacement by AI.

Geoffrey Hinton, famous for his work on artificial neural networks and winner of the 2024 Nobel Prize in Physics for his contribution to machine learning, appeared on the popular Diary of a CEO podcast for an interview heavily focused on the dangers of AI technology.

Among a range of threats, from an increase in cyber attacks to the development of autonomous lethal weapons, Hinton discussed the possibility that unemployment levels will increase as AI outperforms humans at what he terms “mundane intellectual labour”. He likens the development of this technology to the industrial revolution of the 19th century, which saw many manual labour jobs disappear.

Hinton agreed with the now much-repeated phrase, “AI won’t take your job, a human using AI will take your job”, but stressed that as AI tools increase productivity, fewer human employees will be needed in many workplaces. “The combination of a person and an AI assistant can do the work that 10 people could do previously,” he explained.

When asked which roles specifically face the highest threat of AI replacement, Hinton replied, “Someone like a legal assistant or paralegal — they’re not going to be needed for very long.” He went on to mention call centre workers as another group at risk.

On the roles less likely to be replaced, skilled trades come out on top. “It’s going to be a long time until [AI] is as good at physical manipulation as us, so a good bet would be to be a plumber,” Hinton suggested.

Throughout the interview Hinton emphasises the difficulty of making confident predictions about the future of this technology. “Anybody who tells you they know just what’s going to happen and how to deal with it is talking nonsense,” he told listeners.

The extent of the effect that AI will have on the legal industry, including roles like paralegals and legal assistants, has been much debated. Back in March, Simmons & Simmons senior partner Julian Taylor told journalists that his clients “don’t completely trust” AI and would rather have real people handling the complex and high-stakes work they send to the firm. As a service industry heavily influenced by the wishes of its clients, this could suggest that the profession will be slower to replace staff with technology than Hinton suggests.


Copyright in the age of AI: The UK’s contentious proposal  https://www.legalcheek.com/lc-journal-posts/copyright-in-the-age-of-ai-the-uks-contentious-proposal/ https://www.legalcheek.com/lc-journal-posts/copyright-in-the-age-of-ai-the-uks-contentious-proposal/#respond Thu, 05 Jun 2025 07:46:38 +0000 https://www.legalcheek.com/?post_type=lc-journal-posts&p=218643 First year law student at Leeds Uni, Xin Ern Teow (Ilex) analyses the UK's proposals to resolve the tension between copyright and AI-produced content

The post Copyright in the age of AI: The UK’s contentious proposal  appeared first on Legal Cheek.


Credit: Cash Macanaya via Unsplash

What happens when cutting-edge technology collides with centuries-old concepts of creativity, privacy, and law? The UK government’s latest proposal to allow AI companies to use copyrighted works for training, set out in its Copyright and AI consultation, has sparked fierce debate, raising questions about the future of intellectual property in the AI era.

Imagine a world where AI-generated novels outsell human-written ones, where iconic artworks inspire machine-crafted masterpieces, and where centuries of cultural heritage are fed into algorithms to create something entirely new. At the heart of this transformative vision lies a contentious question: who owns the rights to this creativity? The machines, their makers, or the creators whose works serve as the foundation?

What’s on the table?

The UK government’s Copyright and Artificial Intelligence consultation is the most recent official initiative addressing the intersection of AI and copyright law. It sought public input on how to adapt and modernise the UK’s legal framework to balance the needs of the AI sector and the creative industries, fostering innovation while protecting creators’ rights.

At the heart of this consultation lie three key objectives:

  1. Control: The framework seeks to ensure that rights holders retain control over their works. This means creators should have the ability to license, monetise, and safeguard their content when used by AI technologies.
  2. Access: AI developers require access to extensive datasets to train their models effectively. The government proposes streamlined access to copyrighted materials to prevent legal barriers from stifling technological progress.
  3. Transparency: The framework aims to establish greater transparency, ensuring all stakeholders — creators, developers, and consumers — understand how AI systems use copyrighted content and generate outputs.

In order to achieve these goals, the government proposes an “opt-out” system, allowing AI companies to use copyrighted works unless the rights holders explicitly object, thereby reducing administrative hurdles for developers. However, it places the burden of action on creators, who must proactively protect their intellectual property.

The creative industry backlash

The proposal has ignited strong opposition from the creative industries, which argue that the “opt-out” system threatens their livelihoods and the value of intellectual property. At the heart of this resistance is the “Make It Fair” campaign, supported by various news organisations, underscoring the demand for equitable treatment and compensation for creators.

The potential loss of revenue is just one part of the broader concern. Creators fear the long-term ramifications of AI on the entire creative ecosystem. If AI systems are allowed to harvest copyrighted works without compensating creators or offering any recognition, it could lead to a “race to the bottom,” where the value of human creativity is overshadowed by algorithmically-generated content. In this scenario, emerging creators would struggle to profit from their work, as the very worth of their intellectual property would diminish in an AI-dominated marketplace.


Many critics argue that this shift could foster a monopolistic environment in which only a handful of large tech companies profit from AI-generated content, while individual creators are left with little control or benefit. This concern is poignantly illustrated by the silent album, Is This What We Want?, a collaborative protest from over 1,000 musicians, including iconic figures like Kate Bush and Damon Albarn. The album’s track titles collectively convey the message, “The British government must not legalise music theft to benefit AI companies”. This symbolic gesture underscores a key point that this issue goes beyond mere financial gain — it’s about recognition and respect for human artistry in a world increasingly dominated by machines.

While there is broad support for fostering AI innovation, many creatives argue that the government’s approach needs to be balanced more carefully. If these concerns are not addressed, the unrest within the creative community suggests that the government’s proposal may not only face legal challenges but could also lead to a loss of public trust in the ethical development of AI.

The benefits of the proposal

While the proposal has faced significant backlash, the UK government has strongly defended its stance, arguing that the benefits far outweigh the concerns. From the government’s perspective, this initiative is crucial for fostering the growth of AI technology, ensuring the UK remains competitive on the global stage, and contributing to economic growth.

With access to vast datasets, AI models can improve and innovate faster, benefiting industries like healthcare, education, and finance. The government believes this will enable AI firms to create groundbreaking technologies without the delays of seeking permissions for every dataset.

Moreover, by facilitating AI development, the UK aims to attract investment, create jobs, and position itself as a leader in AI research. In a global race for AI supremacy, providing open access to data can help the UK remain competitive, particularly against tech giants in the US and China.

Additionally, AI innovation can revolutionise industries, from self-driving cars to personalised medicine. By supporting AI companies, the UK hopes to foster new industries and technological advancements, which would contribute to long-term national growth and improved societal outcomes.

While the proposal acknowledges creator concerns, the government argues that promoting AI innovation justifies easier access to data. If implemented with a balanced legal framework, the UK’s approach could serve as a model for other nations grappling with AI and copyright challenges.

Conclusion

To sum up, the UK government’s proposal to allow AI companies to train their algorithms on copyrighted works without prior permission highlights the ongoing tension between fostering technological innovation and protecting creators’ rights. While the proposal aims to accelerate AI development and bolster economic growth, it raises critical concerns about the fairness of intellectual property distribution and the potential devaluation of human creativity.

Xin Ern Teow (Ilex) is a first-year law student at the University of Leeds with a strong passion for making a positive impact through volunteering. Her interests also extend to negotiation and exploring strategies for conflict resolution and collaborative problem-solving.

The Legal Cheek Journal is sponsored by LPC Law.


AI and the erosion of artistic integrity https://www.legalcheek.com/lc-journal-posts/ai-and-the-erosion-of-artistic-integrity-a-comparative-copyright-law-analysis/ https://www.legalcheek.com/lc-journal-posts/ai-and-the-erosion-of-artistic-integrity-a-comparative-copyright-law-analysis/#comments Thu, 15 May 2025 05:25:33 +0000 https://www.legalcheek.com/?post_type=lc-journal-posts&p=218459 Leeds law school grad, Mohammad Anas, takes a deep-dive into the ramifications of Ghibli-style images on copyright law in the era of generative AI

The post AI and the erosion of artistic integrity appeared first on Legal Cheek.



The rapid advancement of artificial intelligence (AI) presents a formidable challenge to the legal and ethical underpinnings of artistic expression, threatening the integrity of human creativity. This article examines the 2024 proliferation of Studio Ghibli-style images on X (formerly Twitter) as a pivotal case study, analysing how AI-generated works strain the copyright frameworks of the United Kingdom, the European Union, and the United States.

Through a review of statutory provisions, judicial precedents, and ethical considerations, it exposes systemic deficiencies in current law and the erosion of artistic identity, exemplified by Ghibli’s anti-war, pro-earth, pro-humanity, and anti-consumerist principles.

The threat to creative authorship

In a Tokyo studio, Hayao Miyazaki meticulously crafts Princess Mononoke (1997), each frame a testament to decades of artistic mastery and a philosophy rooted in pacifism, ecological reverence, human dignity, and resistance to consumerism. By 2024, this vision was being replicated on X through AI-generated images of lush landscapes and ethereal figures that echo Ghibli’s aesthetic yet lack its purposeful soul. Concurrently, cartoonist Sarah Andersen confronted the unauthorised appropriation of her distinctive comic style by Stable Diffusion, her creative identity reduced to uncredited algorithmic outputs.

This phenomenon transcends technological innovation, raising profound legal and ethical questions. As copyright systems in the UK, EU, and US grapple with AI’s non-human authorship, they reveal a critical misalignment between statutory intent and modern reality. Can these frameworks adapt to protect the human essence of art epitomised by Ghibli’s principled vision against the systematic challenge posed by generative AI?

Legal frameworks under examination

UK copyright law: A framework under pressure

The Copyright, Designs and Patents Act 1988 (CDPA) establishes protections for “original artistic works” (s.1(1)(a)), granting authors exclusive rights to control reproduction, adaptation, and distribution (ss.16–20). Infringement hinges on appropriating a “substantial part”, a qualitative standard clarified in Designers Guild Ltd v Russell Williams [2000] 1 WLR 2416. The House of Lords held that this includes both literal copying and the “look and feel” of a work, potentially applicable to AI-generated Ghibli-style images.

However, AI developers invoke the idea-expression dichotomy, upheld in Baigent v Random House [2007] EWCA Civ 247, arguing that style or thematic inspiration falls outside copyright protection. This defence is contested by Temple Island Collections Ltd v New English Teas [2012] EWPCC 1, which protects aesthetic arrangements. Ghibli’s deliberate fusion of anti-consumerist narratives and visual coherence arguably meets this threshold, suggesting a basis for protection against AI mimicry.

Section 9(3) of the CDPA further complicates the issue, attributing authorship of computer-generated works to the person arranging for their creation. In the context of AI, where training datasets are vast and often scraped without consent, this attribution becomes untenable. The case of Getty Images v Stability AI [2023] EWHC challenges the legality of mass scraping under s.17(2), highlighting the lack of clarity on data provenance and leaving artists vulnerable.

Despite these challenges, UK law continues to confront the issue head-on, with some legal scholars proposing that AI-generated works be viewed through a lens of fair use or transformative rights, which could offer a more balanced approach. Others argue that additional protections should be established to address the evolving nature of artistic authorship in the AI age.

EU copyright law: A doctrine misaligned

The EU’s copyright regime, anchored in Directive 2001/29/EC (InfoSoc Directive), requires protection based on the “author’s intellectual creation” (Infopaq International A/S v Danske Dagblades Forening, C-5/08). Ghibli’s works exemplify this, with Miyazaki’s pro-humanity ethos and ecological advocacy. AI-generated facsimiles, however, lack a human author, exploiting a doctrinal gap that undermines this foundation.

The Court of Justice’s ruling in Painer v Standard Verlags GmbH (C-145/10) protects stylistic choices reflecting an author’s personality, yet AI outputs derived from aggregated data challenge this precedent. The EU AI Act (Regulation 2024/1689) mandates data usage disclosure, but its enforcement mechanisms remain superficial, offering limited protection for artists. The current regulatory framework struggles to maintain the balance between allowing AI-driven innovation and preserving the authenticity of artistic authorship.

In practical terms, the lack of data transparency by AI companies poses a significant challenge to the effective enforcement of existing regulations. While some suggest that the AI Act could be a step forward, its applicability to art and creative industries remains unclear and may require further revisions to adequately address AI’s potential to mimic existing styles without permission.


US copyright law: A system unprepared

US copyright law demands human authorship, a principle established in Burrow-Giles Lithographic Co. v. Sarony (1884). The Copyright Office’s 2023 decision on Zarya of the Dawn codifies this, denying protection to AI-generated works. However, this stance leaves rights holders without recourse, particularly concerning the appropriation of existing styles like Ghibli’s anti-war landscapes.

AI developers exploit the low originality threshold of Feist Publications v. Rural Telephone Service (1991), claiming their outputs transform rather than copy, despite relying on copyrighted inputs. In Andersen v Stability AI (2023), secondary liability is explored, but proving infringement remains difficult due to AI firms’ non-disclosure of data. California’s AB 2013 (2024) mandates AI art transparency, yet federal law remains silent.

Despite these obstacles, courts have begun to test how older precedents such as Nichols v Universal Pictures (1930) might apply to AI-generated works, though the absence of clear guidelines leaves both artists and developers in a state of uncertainty. As the technology continues to advance, legal experts are calling for a more robust framework that can handle the complexities of AI-driven creativity.

Ethical dimensions: The value of human intent

Studio Ghibli’s creative process reflects a labour of intent. The Tale of the Princess Kaguya (2013) required 14 months of hand-drawn animation, each frame embodying a commitment to peace, ecology, and anti-consumerism. Miyazaki’s rejection of AI art as “an insult to life” resonates with Jung’s view of art as a psychological expression of the human soul, an act irreducible to algorithms. Sarah Andersen’s distress, “my identity, digested by a machine,” highlights the violation of creative agency when her style is mechanised without consent.

The intentionality behind human art is a critical element that AI-generated works cannot replicate. While AI can generate content that mimics existing styles, it lacks the deeper emotional and philosophical contexts that underpin human creation. This absence of agency raises ethical questions about authenticity, responsibility, and the commodification of art in an AI-driven landscape.

Judicial precedents: Seeking clarity

In the UK, Temple Island (2012) protects aesthetic coherence, while Designers Guild (2000) clarifies the “substantial part” standard. In the EU, Painer (2011) safeguards stylistic individuality, while Football Dataco (2012) defends curated effort. In the US, Nichols (1930) fails against AI’s complexity, though Getty v Stability AI (2023) signals a judicial shift toward accountability.

A call for reform: Strengthening legal protections

The UK’s CDPA revisions are stalled, the EU’s AI Act lacks enforceable specificity, and US federal law lags. Current frameworks, built for human authorship, fail to address AI’s appropriation of thematic essence, exposing a critical regulatory gap. Several reforms could help bridge this gap, ensuring artists are better protected while allowing for the responsible use of AI in creative industries.

Proposed reforms

    1. Mandatory data transparency: Require AI developers to submit training dataset inventories to public registries, verified biannually. Non-compliance should incur fines.
    2. Strict liability standards: Impose liability for unlicensed use of copyrighted styles, with statutory damages tied to commercial exploitation.
    3. Redefining ‘substantial part’: Expand the term to include thematic consistency and philosophical intent.
    4. Artist empowerment mechanisms: Create opt-in registries for creators to license or prohibit AI use of their works.

These reforms shift the burden to AI developers, ensuring artists retain control over their creative legacies.

Conclusion: Upholding the human core of art

The proliferation of AI-generated Ghibli-style images exposes the inadequacies of copyright law in confronting non-human authorship. Yet the resilience of human creativity persists in its intentionality, vividly embodied in Ghibli’s rejection of war, reverence for nature, celebration of humanity, and critique of consumerism. These principles, forged through deliberate labour, distinguish art from mechanical imitation. Legal systems must transcend reactive measures and adopt robust transparency and accountability, ensuring creativity remains a human endeavour.

Mohammad Anas is an aspiring solicitor who recently completed his LLB from the University of Leeds, with a strong interest in corporate law, banking and finance, and intellectual property.

The Legal Cheek Journal is sponsored by LPC Law.


Should aspiring lawyers embrace AI? Decoding the mixed messages https://www.legalcheek.com/2025/05/should-aspiring-lawyers-embrace-ai-decoding-the-mixed-messages/ https://www.legalcheek.com/2025/05/should-aspiring-lawyers-embrace-ai-decoding-the-mixed-messages/#comments Wed, 14 May 2025 07:54:33 +0000 https://www.legalcheek.com/?p=219637 The Legal Cheek team explore the conflicting range of advice on AI – listen now 🎙️

The post Should aspiring lawyers embrace AI? Decoding the mixed messages appeared first on Legal Cheek.



The Legal Cheek podcast returns this week as writers Lydia Fontes and Angus Simpson dig into the issue of AI and lawyers — covering the excitement from tech enthusiasts as well as the growing number of horror stories and cautionary tales told by sceptics. We chat through these mixed messages and ask whether law students should embrace AI and, if so, how?

From the Master of the Rolls’ feeling that judges and lawyers should embrace AI to the Bar Council’s more cautious approach, to embarrassing examples of AI-driven gaffes around the world, artificial intelligence is rarely out of the legal news. We discuss how this barrage of information can be perplexing for aspiring lawyers and share our experience of using this technology and our expectations of how it will shape our careers.

You can listen to the podcast in full via the embed above, or on Spotify and Apple Podcasts.


Book review: Susskind’s ‘How To Think About AI’  https://www.legalcheek.com/2025/05/book-review-susskinds-how-to-think-about-ai/ https://www.legalcheek.com/2025/05/book-review-susskinds-how-to-think-about-ai/#comments Thu, 08 May 2025 07:57:37 +0000 https://www.legalcheek.com/?p=219260 Polly Botsford delves into Professor Richard Susskind's 'darker' latest book and wonders: 'Where has Reassuring Richard gone?'

The post Book review: Susskind’s ‘How To Think About AI’  appeared first on Legal Cheek.


Professor Richard Susskind

Richard Susskind has always been an optimist, thinking and writing about tech and the law without being a doomsayer, never a naysayer, an everything-possible kind of guy. Even when he was talking about the end of lawyers, it sounded like a positive (even for lawyers).

But his latest book, How To Think About AI, an excellent companion to understanding where we are in all things AI, is darker around the edges. “Balancing the benefits and threats of artificial intelligence – saving humanity with and from AI – is the defining challenge of our age,” he tells us. “I am … increasingly concerned and sometimes even scared by the actual and potential problems that might arise from AI.” So begins his chapter on ‘Categories of Risk’. Where has Reassuring Richard gone?

And yet, of course, How To Think About AI is full of clear thinking as well as being a call to action. Susskind covers a lot of ground: alongside an exploration of the various hypotheses on where AI will take us (is it hype or the end of the world as we know it?), he recaps AI’s brief history, starting with Alan Turing’s paper ‘Computing Machinery and Intelligence’ in 1950, and including ‘AI winters’ where progress was stagnant, for example, when we all got distracted by the internet. He explores the alarming risks artificial intelligence brings, and how it might be possible to control those risks. He wraps up with a crash course in consciousness, evolution and the cosmos to explore the various theories about where all this AI business sits within the grander ideas about life, the universe and everything.

But let’s roll back to the here and now. For lawyers, Susskind gets to the core of how they (like other professionals such as doctors or actuaries) misunderstand AI. He starts by talking us through a distinction between process-thinkers and outcomes-thinkers: the former camp thinks about something in terms of what it does, the latter in terms of the results it produces. A lawyer provides their services through building up knowledge and experience, and advises a client accordingly. That’s a lawyer’s process. But the client is not interested in the process of law or legal services, nor in a lawyer’s depth of knowledge or fantastic reasoning. The client is interested in specific outcomes: avoiding a legal problem, solving a legal problem, getting a dispute settled, getting certainty.


Susskind points out: “Professionals have invested themselves heavily in their work – long years of education, training, and plain hard graft. Their status and often their wealth are inextricably linked to their craft.” This means their focus is on processes, not outcomes. They cannot envisage another way of getting to the result that the client really wants. From an AI point of view, this is inherently limiting.

To elaborate this point, Susskind lays out the three ways in which AI will change what we do: first, automation, computerising existing tasks; second, innovation, creating completely new technological solutions (we are not designing a car that a robot will drive, we are designing driverless cars); and third, elimination, where AI might well get rid altogether of the problems that we are trying to solve. In this last category, he gives the example of the Great Manure Crisis of the 1890s, when there was so much horse poo on the streets of the world’s cities that it endangered everyone. What came along was not a machine to get rid of manure but the car. Problem eliminated. Lawyers, like other professionals, cannot imagine AI beyond automation: “They cannot conceive a paradigm different from the one in which they operate.” It’s not that AI might get rid of all legal problems, but it is very likely that many current problems simply won’t exist (and will be replaced by a whole set of new problems), and professionals are just not thinking imaginatively enough to see this. And thus, Susskind warns us: “Pay heed, professionals – the competition that kills you won’t look like you.”

When it comes to the courts and justice, Susskind has long campaigned for these to be massively updated by using technology. His chapter, ‘Updating the Justice System’, follows in the same vein, only more adamantly so. For instance, on lawmaking, AI will require “legislative processes that can generate laws and regulations in weeks rather than years”. On courts, he reminds us of his book Online Courts and the Future of Justice, where he set out a blueprint for digitised dispute resolution that engaged real judges only as a last resort.

But he also argues we will need new legal concepts. Intellectual property law will need to be completely ‘reconceptualised’. And if AI systems will be more like “high-performing aliens,” he argues (borrowing a description by a contemporary historian), we will need a new form of legal personality: “if we are to endow AI systems with appropriate entitlements, impose sensible duties on them, and grant them fitting permissions.”

How to Think About AI is an unsettling but informative read. To be enlightened is to be alarmed is to be armed when it comes to AI; so every professional needs to read this book now – before it's too late.

Professor Richard Susskind OBE is the author of ten books about the future. His latest, How To Think About AI, is now available. He wrote his doctorate on AI and law at Oxford University in the mid-80s.

The post Book review: Susskind’s ‘How To Think About AI’  appeared first on Legal Cheek.

]]>
https://www.legalcheek.com/2025/05/book-review-susskinds-how-to-think-about-ai/feed/ 6
UAE turns to AI to draft the laws of the future https://www.legalcheek.com/2025/04/uae-turns-to-ai-to-draft-the-laws-of-the-future/ https://www.legalcheek.com/2025/04/uae-turns-to-ai-to-draft-the-laws-of-the-future/#comments Wed, 23 Apr 2025 07:51:23 +0000 https://www.legalcheek.com/?p=218520 Making legislative system 'faster and more precise'

The post UAE turns to AI to draft the laws of the future appeared first on Legal Cheek.

]]>

Making legislative system ‘faster and more precise’

Credit: Sheikh Mohammed bin Rashid Al Maktoum via X

The United Arab Emirates is poised to become the first country in the world to use artificial intelligence to assist in lawmaking.

Announced last week by Sheikh Mohammed bin Rashid Al Maktoum, Vice President and ruler of Dubai, the country will be the first to use AI to help write new legislation and review and amend existing laws – rather than merely to improve efficiency. The Regulatory Intelligence Office will oversee what the government is calling an "AI-driven legislative system", capable of analysing court rulings, executive procedures and the real-world impact of laws, then proposing reforms in real time.

“This new legislative system, powered by artificial intelligence, will change how we create laws, making the process faster and more precise,” said Sheikh Mohammed on X, describing the change as a “paradigm shift” in government.

The system aims to create laws that reflect the needs of the UAE’s economy and diverse population. In practice, this means AI will help draft laws in multiple languages and plain terms, designed to be understood by everyone.

A key driver is efficiency, with the new role of AI seeing a potential reduction in legislative processing times by up to 70%, while providing recommendations based on global best practices — something Emirati leaders say is essential to keeping pace with technological and economic transformation.

This isn’t just using AI to write laws, according to UAE solicitor and law drafter Hesham Elrafei, speaking to The Telegraph. “It’s introducing a whole new way of making them. Instead of the traditional parliamentary model – where laws get stuck in endless political debates and take years to pass – this approach is faster, clearer, and based on solving real problems.”

The move builds on the UAE’s earlier investments in AI infrastructure, including the 2017 appointment of the world’s first AI minister and the launch of major funding initiatives like MGX, which backs AI research and global investment projects.

However, international experts have raised concerns about handing legislative responsibilities to AI. Researchers from Oxford University and elsewhere warn that generative AI models remain prone to “hallucinations”, bias and unpredictable reasoning which could have serious consequences if left unchecked in legal systems.

The post UAE turns to AI to draft the laws of the future appeared first on Legal Cheek.

]]>
https://www.legalcheek.com/2025/04/uae-turns-to-ai-to-draft-the-laws-of-the-future/feed/ 14
Shoosmiths offers £1 million bonus pot to encourage AI use among staff https://www.legalcheek.com/2025/04/shoosmiths-offers-1-million-bonus-pot-to-encourage-ai-use-among-staff/ https://www.legalcheek.com/2025/04/shoosmiths-offers-1-million-bonus-pot-to-encourage-ai-use-among-staff/#comments Wed, 02 Apr 2025 11:57:37 +0000 https://www.legalcheek.com/?p=217677 Cash rewards for reaching one million prompts on Microsoft Copilot

The post Shoosmiths offers £1 million bonus pot to encourage AI use among staff appeared first on Legal Cheek.

]]>

Cash rewards for reaching one million prompts on Microsoft Copilot


Shoosmiths has become the first major UK law firm to announce it will tie bonuses to how much lawyers use artificial intelligence (AI), encouraging staff to reach one million prompts to unlock a £1 million payout.

The firm has launched a scheme in which staff are encouraged to prompt the Microsoft Copilot AI one million times. If — or when — the one million prompt target is hit, Shoosmiths will “unlock” an extra £1 million for the collegiate bonus pot. All staff, except partners and business services directors (who are nevertheless encouraged to use the technology), will benefit.

According to the firm, if each employee used Copilot just four times per working day, the target would be easily reached.

“We don’t fear AI”, said Shoosmiths CEO David Jackson, who linked the scheme to the firm’s innovative side as well as seeking client “benefits”. Shoosmiths hopes the initiative “frees” lawyers’ time to do “the human-to-human work that really matters: solving problems, building trust, and supporting clients through complexity.” The statement added that AI “won’t replace” any staff.

Shoosmiths’ initiative is backed by a partnership with Microsoft, which includes training and a firmwide “knowledge hub” where teams will share AI use cases and success stories. New internal roles – including “innovation leads” under a “head of legal innovation” plus a “data manager” – have also been launched. The firm also claims AI usage will help it achieve its net zero goals by 2040, which will mean managing “upstream emissions from AI” as part of its approach to sustainability.


The move to offer a financial reward for using AI signals Shoosmiths’ commitment to “embedding AI into the day-to-day work” of lawyers and support staff. This follows Shoosmiths’ recent advice to training contract seekers, in a blog post which described AI as a “tool to refine and develop your own original thoughts, not replace them”, Legal Cheek reported.

AI is being embraced by some major law firms. A&O Shearman, for example, issued guidance last year on using AI in TC applications. The firm had previously adopted the “Harvey” AI tool, which was also taken up by Macfarlanes.

But Shoosmiths’ strategy of integrating AI — not least with monetary rewards for using it — also diverges from other firms’ more cautious approaches. Hill Dickinson, for example, recently restricted AI tools following a surge in staff use. Meanwhile, prospective barristers were barred from using ChatGPT or other generative AI tools in their pupillage applications this year. Nevertheless, over 70% of lawyers thought AI was a “force for good” last year, when Legal Cheek also reported that over 40% of lawyers already used the technology.

The post Shoosmiths offers £1 million bonus pot to encourage AI use among staff appeared first on Legal Cheek.

]]>
https://www.legalcheek.com/2025/04/shoosmiths-offers-1-million-bonus-pot-to-encourage-ai-use-among-staff/feed/ 7
AI must uphold the rule of law, campaigners urge https://www.legalcheek.com/2025/01/ai-must-uphold-human-rights-and-the-rule-of-law-campaigners-urge/ https://www.legalcheek.com/2025/01/ai-must-uphold-human-rights-and-the-rule-of-law-campaigners-urge/#comments Fri, 31 Jan 2025 08:58:58 +0000 https://www.legalcheek.com/?p=214565 Developers should 'act responsibly'

The post AI must uphold the rule of law, campaigners urge appeared first on Legal Cheek.

]]>

Developers should ‘act responsibly’


The campaign group JUSTICE has proposed the first “rights-based framework” to guide Artificial Intelligence (AI) use across the justice system, arguing that AI users and developers should be obliged to “act responsibly”.

The report, entitled ‘AI in our justice system’, asserts that “attempts to improve the system through reforms and innovations should have the core tenets of the rule of law and human rights embedded in their strategy, policy, design and development.” To achieve this, JUSTICE puts forward two requirements.

The first requires AI developers to be “goal-led”, ensuring their innovations are “targeted at genuine use cases which can help deliver better outcomes”. AI tools should be developed with the justice system’s “core goals” in mind, those being “equal and effective access to justice, fair and lawful decision-making and openness to scrutiny.”


The second requirement is the “duty to act responsibly”. This would oblige “all those involved in the deployment of AI within the justice system” to “ensure that the core features of the rule of law and human rights are embedded at each stage.”

The report covers the benefits that AI tools could bring to the justice system, including easing the workload of “overburdened” courts, giving decision-makers access to “data-derived insights”, helping the police investigate online criminal activity as well as “combating bias”.

However, JUSTICE warns against “over-reliance” on AI systems, claiming that treating AI-generated results as “fully accurate and certain” can lead to “adverse outcomes”. This follows the news that the Ministry of Justice is reconsidering its approach to computer evidence in the criminal justice system in response to the Post Office Inquiry, which revealed that a computer error led to 900 incorrect prosecutions of Post Office staff.

Sophia Adams Bhatti, report co-author and Chair of JUSTICE’s AI programme, acknowledged AI’s potential to solve some of the justice system’s issues. However, she said the technology “equally has the potential, as we have already seen, to cause significant harms”. She recommends that the justice system approach AI opportunities “with clear expectations of what good looks like, what outcomes we are seeking, the risks we are willing to take as society, and the red lines we want to put in place.”

The post AI must uphold the rule of law, campaigners urge appeared first on Legal Cheek.

]]>
https://www.legalcheek.com/2025/01/ai-must-uphold-human-rights-and-the-rule-of-law-campaigners-urge/feed/ 2
Budding barristers barred from using AI in pupillage applications https://www.legalcheek.com/2025/01/budding-barristers-barred-from-using-ai-in-pupillage-applications/ https://www.legalcheek.com/2025/01/budding-barristers-barred-from-using-ai-in-pupillage-applications/#comments Thu, 16 Jan 2025 08:49:13 +0000 https://www.legalcheek.com/?p=213947 ChatGPT not an option

The post Budding barristers barred from using AI in pupillage applications appeared first on Legal Cheek.

]]>

ChatGPT not an option

Those looking to secure a pupillage in the current round of applications won’t be able to make use of new tech, with the Bar Council outlawing the use of generative AI.

Prospective barristers who apply for a pupillage via the Pupillage Gateway (used by the vast majority of chambers offering pupillages) are required to confirm within their applications that they have not used AI programmes “like ChatGPT”.

A notice, which comes at the end of each application in the “Application Summary and Agreements” section, requires confirmation that the work is the “sole creation and original work” of the applicant. Students are also required to acknowledge that they are “not permitted to use any Generative AI programmes, including Large Language Model (LLM) Programmes like ChatGPT, to write any of the responses contained within it”.

The segment goes on to say that “any application which has been written with the use of any generative AI LLMs like ChatGPT or any similar programme will be excluded from the shortlisting process of the relevant Authorised Education and Training Organisation”.


This approach, however, is worlds away from other areas of the profession.

Those with their eyes on new megafirm A&O Shearman, for example, can find guidance from the firm on how AI can be used to enhance their applications. The outfit encourages students to use AI, a “highly effective resource”, to “help you articulate clearly and concisely”, although does warn that tech should not be “a substitute for your voice and capabilities”.

National outfit Shoosmiths similarly encourages students to “refine and develop your own original thoughts” through the use of AI, again noting however that tech should “not replace” an applicant’s ideas and thoughts, and that “integrity and honesty are fundamental attributes that cannot be replaced by technology.”

A poll run by Legal Cheek last year found that 1 in 5 students were already using AI to help with their job applications, with other data suggesting that as many as 40% of lawyers are now using AI to make themselves more productive.


The post Budding barristers barred from using AI in pupillage applications appeared first on Legal Cheek.

]]>
https://www.legalcheek.com/2025/01/budding-barristers-barred-from-using-ai-in-pupillage-applications/feed/ 7
Over 70% of lawyers think AI is ‘force for good’ https://www.legalcheek.com/2024/11/over-70-of-lawyers-think-ai-is-force-for-good/ Thu, 07 Nov 2024 08:48:38 +0000 https://www.legalcheek.com/?p=211495 Concerns remain over accuracy and ethical issues

The post Over 70% of lawyers think AI is ‘force for good’ appeared first on Legal Cheek.

]]>

Concerns remain over accuracy and ethical issues

New research reveals that nearly three-quarters of lawyers and law firm employees view artificial intelligence (AI) as a “force for good”.

The 2024 Future of Professionals report, produced by legal information giant Thomson Reuters, found that 72% of lawyers saw developing AI tech in a positive light, with that figure rising to 74% for those working in law firms.

While optimism about AI is higher among those in business/corporate settings and tax firms, at 84% and 82% respectively, only 64% of government employees view AI as a positive force.

Out of the 2,200 professionals surveyed, 63% reported using AI in their practices. Less than 10% rated the output from AI technology as ‘poor’, while 78% considered it a basic or strong starting point. However, only 4% felt that AI produced results superior to their own capabilities.


Of the 37% who don’t use AI, the classic concerns surrounding AI remain. Forty-three percent held concerns about the accuracy of AI responses, with 37% worried about data security, and 27% concerned about the ethics of new tech.

The research also looked at the uses of AI in the legal sphere, and the bounds of what lawyers think is ethically acceptable. The use of AI for basic admin tasks was almost unanimously agreed upon, with less than 20% contesting, on ethical grounds, the use of AI in research, analysis, and drafting basic documents.

Using AI for advice or strategic recommendations is more contentious, with 80% of respondents considering it unacceptable. This figure rises to 96% when lawyers are asked whether they would allow AI to represent a client in court or make final decisions on a case.

The post Over 70% of lawyers think AI is ‘force for good’ appeared first on Legal Cheek.

]]>
Travers launches ‘AI Academy’ https://www.legalcheek.com/2024/10/travers-launches-ai-academy/ Thu, 24 Oct 2024 07:02:53 +0000 https://www.legalcheek.com/?p=211035 New training initiative follows similar move by Linklaters

The post Travers launches ‘AI Academy’ appeared first on Legal Cheek.

]]>

New training initiative follows similar move by Linklaters

Travers Smith has launched a new ‘AI Academy’ to help its lawyers gain a deeper understanding of how artificial intelligence works and its potential impact on clients. This initiative comes as more City law firms take steps to ensure their lawyers stay up to date with the latest developments in what is a rapidly evolving space.

The programme is open to everyone, including trainees, and is structured around a series of modules covering foundational resources, core mandatory AI training, and additional sessions on prompting and the wider legal and regulatory implications of AI use. The final module, dubbed ‘Boost’, is an ongoing element designed to support lawyers as AI technology continues to develop.

The Silver Circle player says the academy will help its lawyers remain “AI literate” as the technology continues to evolve, and will feature a series of in-depth live sessions supported by extensive reference materials, including bite-size learning videos and further reading resources.


Emily Tearle, head of knowledge management and co-ordinator of the AI Academy, commented:

“Our aim has been to devise a comprehensive and accessible programme which responds to the learning needs of the whole firm, irrespective of where they are on their AI journey. The creation of the AI Academy does not mark the end of our efforts, but rather the beginning of an ongoing, firm-wide commitment to grow, adapt, and excel in the ever-evolving landscape of artificial intelligence. I am confident that the AI Academy will cement our position as leaders in leveraging AI for client work, productivity, and overall efficiency.”

News of the academy comes just weeks after fellow City firm Linklaters announced its collaboration with King’s College London to create a new AI training programme for its lawyers. According to the firm, the programme aims to enhance lawyers’ understanding of Generative AI and prompt engineering within the legal sector.

Research published earlier this year found that over 40% of lawyers now use AI in their daily work, with increased efficiency cited as the top benefit. The LexisNexis study also highlighted other advantages, such as improved client service and gaining a competitive edge over rival firms.

The post Travers launches ‘AI Academy’ appeared first on Legal Cheek.

]]>
Linklaters teams up with KCL to deliver AI training for lawyers https://www.legalcheek.com/2024/10/linklaters-teams-up-with-kcl-to-deliver-ai-training-for-lawyers/ Tue, 08 Oct 2024 06:43:28 +0000 https://www.legalcheek.com/?p=210317 Classroom lessons, practical exercises and hackathon

The post Linklaters teams up with KCL to deliver AI training for lawyers appeared first on Legal Cheek.

]]>

Classroom lessons, practical exercises and hackathon

Linklaters has teamed up with King’s College London to deliver training on generative AI (GenAI) to its lawyers.

The programme, dubbed the ‘GenAI Expert Training course’, has been put together by Linklaters’ GenAI programme team in collaboration with The Dickson Poon School of Law at KCL.

The new scheme follows the previous training run by the Magic Circle firm which saw over 80% of the firm’s staff complete an introductory course. “The new GenAI Expert Training course aims to take proficiency to the next level as the sophistication of these tools continues to develop”, the firm said.

The goal of the new programme is to provide an “in-depth understanding of GenAI and prompt engineering” in the legal space, with lawyers receiving classroom sessions and undertaking practical learning exercises, including a hackathon.

Legal Cheek understands that the new training is not yet available to trainees, although recruits will be required to complete the foundation training.


Earlier in the summer the firm rolled out Microsoft 365’s Copilot, a GenAI tool, to offices across the globe.

Shilpa Bhandarkar, partner at Linklaters and head of the firm’s client tech and AI offering, commented:

“Offering a global cohort of our people the opportunity to learn from leading academics and each other will help embed GenAI expertise across our business. We’ve already built the foundation on which this cohort can now bring their knowledge and creativity, identifying use cases and designing solutions that will help them transform the way they work and deliver client service.”

Professor Dan Hunter, executive dean at The Dickson Poon School of Law at KCL added: “We live and work in a rapidly changing legal and technical landscape, and equipping legal professionals with the tools to be able to utilise GenAI in their practice is vital to ensure we can keep up with these developments. I look forward to seeing how participants navigate the risks, mitigations, benefits and ethical issues they are presented with when considering how they use GenAI in legal practice.”

News of the training follows research which found more than 40% of lawyers now use AI in their daily work, with the ability to complete tasks faster seen as the top benefit. The findings, collected by LexisNexis, highlighted additional benefits such as enhanced client service and gaining a competitive edge over rival firms.

The post Linklaters teams up with KCL to deliver AI training for lawyers appeared first on Legal Cheek.

]]>
Mastering AI: Meet the lawyers redefining tech in law https://www.legalcheek.com/lc-careers-posts/mastering-ai-meet-the-lawyers-redefining-tech-in-law/ Tue, 24 Sep 2024 09:25:39 +0000 https://www.legalcheek.com/?post_type=lc-careers-posts&p=209874 Osborne Clarke associates Tom Sharpe and Thomas Stables discuss the transformative role of AI in reshaping legal practices, offering insights into their specialised approaches and the evolving landscape of technology law

The post Mastering AI: Meet the lawyers redefining tech in law appeared first on Legal Cheek.

]]>

Osborne Clarke associates Tom Sharpe and Thomas Stables discuss the transformative role of AI in reshaping legal practices, offering insights into their specialised approaches and the evolving landscape of technology law

Osborne Clarke associates, Tom Sharpe (left) and Thomas Stables (right)

Many lawyers and students alike are grappling with the intricacies of artificial intelligence (AI) after its headline boom across the legal industry. But what does AI law look like in practice? And what do training contract hopefuls need to do to prove their tech credentials?

Ahead of this afternoon’s virtual student event ‘AI, business and the law — with Osborne Clarke’, I sat down with two of Osborne Clarke’s tech lawyers leading the charge on AI at the firm. Tom Sharpe, associate director, and Thomas Stables, product regulation senior associate, offer their insights on the transformative impact of AI in law, detailing their innovative approaches and the broader implications for the legal industry.

Find out more about training as a solicitor with Osborne Clarke

SD: Could you walk me through your career journeys so far and what you’re currently working on at Osborne Clarke?

Stables: Sure! I’m a senior associate in regulatory disputes. I originally studied philosophy at Newcastle University, before jumping straight into the GDL. After that, I started my training contract at OC. I had a mix of seats—banking, regulatory disputes, a secondment at Tripadvisor, and six months in the commercial team where Tom [Sharpe] works. Over time, I’ve shifted from general regulatory work to becoming quite specialised in tech law, especially around AI. My job is mostly about keeping clients out of disputes before they happen—advising on compliance, helping them build products that meet regulatory standards, getting market access in different jurisdictions.

Sharpe: My path was along the more “traditional” law degree and LPC route. I studied law at Cardiff before doing the LPC in Bristol and then training and qualifying at OC. A couple of years after qualifying in Bristol, I decided to move to London, joining a Magic Circle law firm for a few years, before boomeranging back to OC — this time in the London office.

My practice is split into two buckets — digital transformation and AI. I started off with a practice very much focused on IT and outsourcing, which has now evolved into digital transformation, which means advising on a wide range of technology-based agreements. Working in this space involves helping clients who are transitioning from traditional business models to an increasingly digital way of operating. At the other end of the scale, I advise other clients who are already very digital, but are pushing into new digital channels or adopting new and advanced tech (such as AI). Secondly, over the past 12-18 months, AI has really taken over an increasing share of my workload. So, basically, I badge myself as a digital transformation lawyer who also handles a lot of AI work.


SD: It sounds like AI and tech are huge parts of both your roles. I’d love to hear how the rise of AI, especially generative AI, has impacted your practices and career directions. How has it shaped the way you work?

Stables: I think the first hurdle is education. Over the past 18 months, the demand for training on AI has been huge. Clients are constantly asking, “What should we do about ChatGPT?” or “How do we prepare a policy for using these new tools?” It’s been a matter of getting the basics out there quickly. As people become more knowledgeable about AI, the challenge shifts to how fast we can move. We’re well positioned as the go-to advisors for our clients when AI-related issues come up; my career has definitely moved in that direction over the last 12-18 months. AI is a highly technical product, so my background in product compliance fits very neatly into advising clients on how to build and launch compliant AI systems.

Sharpe: Technologies like AI are exactly why I became a tech lawyer in the first place. It hits directly on one of the themes that I found super interesting when studying law at university — will regulation keep up with tech, or will the tech outpace regulation? AI is a great example of that race. Machine learning has been around for quite a long time now, but what’s really captured imaginations and propelled AI to the front pages is the generative aspect. The release of AI tools like DALL·E and ChatGPT created a really rich interactive experience — AI was no longer something working in the background (doing things like filtering spam emails) — it’s now so tangible. That generative aspect is what has drawn the attention of the public, boardrooms and regulators.

We’re constantly advising clients on cutting-edge AI tools they’re developing – it’s really exciting to see “behind the curtain” and to advise on the bleeding-edge AI and use cases that clients are working on. It’s really cutting edge tech, and is such an interesting space given legislators are scrambling to catch up with the sheer pace at which AI is evolving.

SD: Are there any AI projects that Osborne Clarke is working on internally?

Sharpe: Absolutely. We’ve been rolling out AI solutions internally — one of which is our GPT-based chatbot, OCGPT. It’s designed to help with research and trained on our internal precedents, though no client data is involved. It’s really a tool for us to experiment with, figuring out how we can use AI to make our processes more efficient and deliver better client solutions. We also have some other exciting AI projects as a firm that we can’t quite talk about yet, but they’re coming soon!

Stables: Yeah, I used OCGPT this morning. It’s brilliant at cutting down time on routine tasks like summarising meeting notes. It’s an efficiency booster, no doubt. And we’ve got more AI tools in development, so watch this space.


SD: On the topic of AI regulation — there’s a lot of talk about whether regulation can keep up with the pace of tech. Does that pose unique challenges in your product regulation work?

Stables: Definitely. There’s a constant tension between encouraging innovation and ensuring safety. Last night, I attended a seminar led by one of the heads of the European AI Office, discussing how the EU is taking a risk-based approach similar to the way they regulate other products in Europe. The idea is to keep the market dynamic and allow people to innovate, but within solid guardrails to ensure safety. What’s exciting is how dynamic the regulatory structure is going to be. Legislators are making sure there’s room for rapid amendments to keep up with the industry’s fast pace. Sector-based regulators will also have a lot of power to tailor their guidance to specific risks, which allows for agility.

Sharpe: Yeah, and that’s key, especially for global clients. We’re seeing a patchwork of regulations pop up, with various jurisdictions taking different approaches — the UK approach is still taking shape and there are other areas where regulations are working their way through the legislative process (such as California). In the EU, the EU AI Act is now law (as of August this year) and much like the GDPR before it is widely considered to be the gold standard that clients will work towards in their compliance programmes.


SD: It sounds complex with different regulators and jurisdictions involved. Do you think that’s a challenge?

Stables: It can be. But that’s where we have an advantage at Osborne Clarke. We have offices across the world, with specialist tech lawyers in each jurisdiction. We work particularly closely with our colleagues in places like Germany, Belgium, and the Netherlands to ensure we’re giving joined-up advice to clients.

Sharpe: Exactly. A client raised this point in a session I was at yesterday—if they’re operating globally, what does this patchwork of regulation mean for them? It’s an evolving challenge, but with the EU AI Act as a reference point, companies are using that as their compliance anchor. From there, we tailor advice for specific local regulations. It’s a lot of moving parts, but it keeps things interesting!

SD: Finally, for future lawyers wanting to get into this sector, especially with AI becoming such a hot topic, what advice would you give?

Stables: I think everyone entering the legal profession now should have at least a baseline understanding of AI. It’s going to be fundamental in so many areas. The great thing is, AI tools themselves make it easy to learn. You can experiment with generating code or images — it’s all about being curious and playing around with the technology. Firms are already screening for people with an interest or knowledge in AI, so it’s definitely something that will stand out.

Sharpe: At Osborne Clarke, there’s a huge opportunity to shape your own career. If you’re passionate about an area, the firm will support you and people will be really pleased and happy to involve you in that space. The firm is very much about backing you, especially where you’ve got an idea or a passion for something. To my mind, that’s what makes it fulfilling to work here — there’s so much room to explore and help shape the firm’s approach to a wide range of cutting-edge areas.

Meet Osborne Clarke at ‘AI, business and the law — with Osborne Clarke’, a virtual event taking place THIS AFTERNOON (Tuesday 24 September). This event is now fully booked, but why not check out our list of upcoming events.

About Legal Cheek Careers posts.

The post Mastering AI: Meet the lawyers redefining tech in law appeared first on Legal Cheek.

Over 40% of lawyers now use AI to accelerate their work
https://www.legalcheek.com/2024/09/over-40-of-lawyers-now-use-ai-to-accelerate-their-work/
Tue, 24 Sep 2024 07:50:23 +0000

The post Over 40% of lawyers now use AI to accelerate their work appeared first on Legal Cheek.


Despite growing use of AI in legal work, concerns persist over public tools producing fake information

New research reveals that over 40% of lawyers now use artificial intelligence (AI) in their daily work, citing the ability to complete tasks faster as the main benefit.

A survey of more than 800 UK legal professionals at law firms and in-house legal teams found that a further 41% planned to use AI for work “in the near future”, up from 28% the previous year. Meanwhile, only 15% of respondents had no plans to adopt AI, a significant drop from 61% a year earlier.

In other findings from the research conducted by LexisNexis, 71% of lawyers cited faster delivery of legal work as a key benefit, followed by improved client service (54%) and gaining a competitive advantage (53%).

Over a third (39%) of respondents in private practice believe AI will require them to adjust their billing practices, with 17% suggesting it could signal the end of the billable hour model. Forty percent believe the billable hour model will remain, while 42% are uncertain about AI’s impact on it.


Elsewhere, 60% of firms or legal departments have made internal changes due to AI adoption. These changes include offering AI-powered products to staff (36%), developing policies on the use of generative AI (24%), and providing AI-related training for employees (18%).

While an increasing number of lawyers are using AI in their legal work, over three-quarters of respondents (76%) expressed concerns about public AI tools producing inaccurate or fabricated information. However, 72% indicated they would feel more confident using a generative AI tool that is based on legal content sources and includes linked citations to verifiable authorities.

Stuart Greenhill, senior director of segments at LexisNexis UK, commented:

“The possibility of delivering work faster has seen widespread adoption, internal integration, and regular use of generative AI across the legal sector. There’s also a strong demand for AI tools that are grounded on reliable legal sources. Yet the impact of this efficiency on the billable hour is becoming a topic of debate. As a result, the number of firms reconsidering pricing models has doubled throughout the course of 2024.”
