Technology Archives - Legal Cheek

From courtroom to code: How AI is shaping our legal system

BPP SQE student Eve Sprigings examines whether AI in the courtroom will enhance or erode justice


Artificial Intelligence isn’t just changing how we live and work — it’s quietly transforming the very foundations of justice in the UK. From courtrooms to corporate boardrooms, AI tools are reshaping how judges decide cases, how precedents evolve and how law firms operate. But as AI gains influence, it challenges age-old legal traditions, raising urgent questions about fairness, transparency and the very future of human judgment. Welcome to the new frontier where bytes meet briefs, and algorithms might just rewrite the law.

AI in the judiciary: A double-edged sword for legal precedents

AI’s potential to reshape binding precedents is no longer theoretical. The rise of predictive legal technologies is a prime example. Consider Garfield Law — the first AI-powered law firm recognised in England and Wales — using AI-led litigation to handle routine matters such as unpaid debt in small claims courts. Whilst this could make legal processes cheaper and faster, it arguably raises questions about maintaining quality and public trust where human judgment has historically been paramount.

Other AI tools, such as ROSS Intelligence and Legal Robot, help lawyers analyse judicial reasoning patterns and even challenge the ethics of how case law is accessed today. For example, ROSS's "ENOUGH" antitrust claim challenged the legal paywalls imposed by large providers such as Westlaw and Thomson Reuters, pushing for broader access to public case law. Though not yet part of judicial decision-making, these AI systems hint at a future where algorithms influence precedent and legal interpretation, challenging outdated, gated legal services.

The internet drove the digitisation of legal materials, and the arrival of AI is only taking this further. AI systems can process vast legal databases, potentially highlighting new interpretations or trends that allow legal doctrine to evolve.

A University of Cambridge study highlights AI’s ability to detect judicial decision patterns through case trend analysis, suggesting future shifts in legal standards. But it’s not flawless: AI can both augment and undermine the rule of law, reminding us that error and bias remain concerns.

The human element in AI-assisted justice

Human oversight remains critical. Researchers at the Alan Turing Institute and Northumbria University have scrutinised AI-assisted sentencing tools for errors and procedural irregularities. These considerations underscore the need for transparency, accountability and human reasoning at the heart of justice, even as automated decision-making grows.

Tools like the Harm Assessment Risk Tool (HART), used by Durham Constabulary since 2017 to predict reoffending risk, are already influencing custody decisions and police work. Such data-driven algorithms may reshape sentencing practice, but concerns about socio-demographic bias, such as postcode-based discrimination, highlight the challenge of balancing data insights with fairness.

AI and technology law: Intellectual property and beyond

AI's impact on technology law, especially intellectual property (IP), raises thorny questions. Professor Ryan Abbott's The Reasonable Robot explores whether AI-generated inventions deserve patent protection. The European Patent Office's rulings on AI inventorship in the DABUS litigation highlight ongoing legal uncertainty around ownership of AI outputs, signalling IP law's rapid evolution.

UK parliamentary debates this year reflect broader concerns: AI is poised to reshape corporate governance, case management and dispute resolution. Internationally, AI's geopolitical importance is growing. US-Saudi talks over access to Nvidia's AI chips, for instance, show AI becoming a new diplomatic currency, rivalling oil as a driver of trade.

China's "smart courts", launched by the Supreme People's Court in 2016, offer a glimpse of AI-driven judicial innovation. Originally focused on traffic cases, these courts enabled a smooth transition to online procedures during COVID-19, balancing technological efficiency with legal norms. They demonstrate that AI's role in justice is not about replacing human judgment but about streamlining administration and keeping cases on schedule.

One notable case illustrating AI's complexity in IP is Li v Liu [2023], decided by the Beijing Internet Court. It involved an AI-generated image created with Stable Diffusion, and the court considered copyright infringement claims amid AI's growing role in artistic creation. Decisions in this area remain highly case-specific, reflecting how nascent AI law still is.

AI beyond tech: Transforming wider legal practice

AI’s reach extends well beyond tech law. Automated contract drafting and predictive analytics now assist employment law firms in anticipating disputes, while recruitment agencies deploy AI tools to screen candidates—though risks of biased outcomes remain a worry.

Data privacy law, particularly under the UK General Data Protection Regulation (UK GDPR), exemplifies AI's regulatory challenges. Companies increasingly use AI to ensure compliance, pushing legal governance toward greater rigour and transparency. AI isn't just shaping law; it's reshaping how firms manage legal risk internally.

AI in court operations: Building a new legal infrastructure

UK courts are rapidly digitising, with AI-driven tools streamlining everything from e-filing and case scheduling to virtual hearings. The HM Courts & Tribunals Service (HMCTS) employs AI to enhance operational efficiency, helping courts focus more on substantive justice rather than administrative logistics.

Online dispute resolution (ODR) systems powered by AI are also gaining ground, especially for small claims—reducing backlog and improving access. Yet critics warn that sensitive cases, such as family law disputes, demand nuanced human judgment that AI cannot replace.

Returning to China’s experience, their Smart Courts reveal that balanced AI use—strictly monitored and focused on organizational efficiency—can reduce backlog and enhance judicial fairness without undermining human decision-making. Systems like Shanghai’s “206 system” use AI for evidence analysis and sentencing support, illustrating how technology can create a more cost-effective, straightforward judiciary.

Conclusion: The future of law in an AI-driven world

AI is no futuristic fantasy—it’s here, reshaping the UK’s judiciary and legal culture with unprecedented speed. As AI influences criminal justice and beyond, ethical concerns about bias and judicial independence demand ongoing scrutiny.

The British Computer Society (BCS) notes AI’s potential to support health and social care decisions, mirroring AI’s intended role in law: to assist—not replace—human roles. Garfield Law’s pioneering AI-driven model exemplifies this future, easing public sector burdens whilst maintaining core legal values.

Whether AI becomes a subtle tool enhancing judicial reasoning or a key player in shaping legal norms, the next decade will see it fundamentally alter UK law. This shift offers fresh opportunities for emerging legal sectors but also challenges traditional case law and statutes that underpin our legal culture—wiping away centuries of tradition almost as swiftly as a digital swipe.

Worldwide, governments are in a high-tech arms race to regulate AI-related IP, compliance, and broader legal issues, seeking a delicate balance between protecting national priorities and fostering technological innovation.

The challenge? Ensuring that AI strengthens justice rather than dilutes it — guarding the human heart of law even as machines take their place in the courtroom.

Eve Sprigings is a law graduate currently undertaking the SQE Diploma at BPP University. She has gained experience across chambers, commercial law firms and international legal settings, with a focus on legal research and contract analysis in both contentious and non-contentious matters. Eve has a strong interest in commercial and corporate law, as well as data protection, and is passionate about making modern legal frameworks accessible and understandable to all.

The Legal Cheek Journal is sponsored by LPC Law.

AI in court: rights, responsibilities and regulation

Birmingham Uni student James Bloomberg explores the challenges that AI poses to the justice system and concepts of legal personhood


The advancement of artificial intelligence (AI) presents a complex challenge to contemporary legal and ethical frameworks, particularly within judicial systems. This article explores the evolving role of AI in the courtroom, drawing on recent high-profile cases involving fabricated legal citations and algorithmic hallucinations. It examines how AI's integration into legal research and decision-making strains traditional understandings of accountability, responsibility and legal personhood. The discussion also considers AI's broader societal impact.

The advancement of technology in recent years has produced a seismic shift in how societies interact, how businesses operate and how governments regulate that change. AI is now a driving force reshaping how we live and how students work at university, but its ability to make rapid decisions also raises red flags, especially for law firms. With AI now built into WhatsApp, X (formerly Twitter) and much else of everyday life, a question has emerged: should AI be granted legal rights? The discussion is far from hypothetical. It would challenge existing legal frameworks and ultimately force us to confront the societal and ethical implications of recognising AI as a legal entity.

Article 6 of the Universal Declaration of Human Rights addresses legal personhood, the status by which an entity is able to hold rights and duties within the legal system. This ranges from owning property, to acting and being held responsible for those actions, to exercising rights and obligations such as entering a contract. Corporations have long been granted legal personhood. Applying the same concept to AI systems such as ChatGPT, however, would introduce complexities that transcend current legal definitions. The European Parliament has previously explored whether AI systems should be granted a form of legal status to address accountability issues, particularly where harm is caused by autonomous systems.

In 2024, a Canadian lawyer used an AI chatbot for legal research in a child custody case before the British Columbia Supreme Court, and the chatbot generated "fictitious" cases. The error was raised by lawyers for the children's mother, who could not find any record of the cited cases. The underlying dispute concerned a father, locked in a separation dispute with the children's mother, who sought to take the children on an overseas trip. This illustrates how dangerous AI systems can be, and why lawyers today need to use AI as an assistant, not a cheat sheet. But who is to blame here: the lawyer or the AI chatbot?

A major argument against granting AI legal personhood is that it would contradict fundamental human rights principles. The High-Level Expert Group on Artificial Intelligence (AI HLEG) strongly opposes this notion, emphasising that legal personhood for AI systems is “fundamentally inconsistent with the principle of human agency, accountability, and responsibility”. AI lacks consciousness, intent, and moral reasoning — characteristics that underpin legal rights and responsibilities. Unlike humans or even corporations (which operate under human guidance), AI lacks an inherent capacity for ethical decision-making beyond its programmed constraints.

Another central issue is accountability. If AI were granted legal rights, would it also bear responsibilities? Who would be liable for its actions?

In another case, a federal judge in San Jose, California, ordered AI company Anthropic to respond to allegations that it submitted a court filing containing a 'hallucination' created by AI as part of its defence against copyright claims brought by a group of music publishers. The allegation is that an Anthropic data scientist cited a non-existent academic article to bolster the company's argument in a dispute over evidence. Clarity is still needed as to whether liability for AI-related harm lies with developers, manufacturers or users.

In the UK, the allocation of liability for AI-related harm is primarily governed by existing legal frameworks, the common law of negligence and product liability principles. Under the Consumer Protection Act 1987, for example, manufacturers and producers can be held strictly liable for defective products that cause damage, which could in theory extend to AI systems and software if they are deemed products under the Act. Developers and manufacturers may also face liability in negligence if it can be shown that they failed to exercise reasonable care in the design, development or deployment of AI systems, resulting in foreseeable harm. Users, such as businesses or individuals deploying AI, may be liable if their misuse or inadequate supervision of the technology leads to damage. While there is currently no bespoke UK statute specifically addressing AI liability, the Law Commission and other regulatory bodies have recognised the need for reform and are actively reviewing whether new, AI-specific liability regimes are required to address the unique challenges posed by autonomous systems.

Conferring legal personhood on AI may also create situations where accountability is obscured, allowing corporations or individuals to evade responsibility by attributing actions to an "autonomous" entity.

Further, AI decision-making lacks transparency as it often operates through black-box algorithms, raising serious ethical and legal concerns, particularly when AI systems make decisions that affect employment, healthcare, or criminal justice. The European Parliament’s Science and Technology Options Assessment (STOA) study has proposed enhanced regulatory oversight, including algorithmic impact assessments, to address transparency and accountability. Granting AI legal rights without resolving these issues would only increase the risk of unchecked algorithmic bias.

The ethical implications extend beyond legal considerations. AI's increasing autonomy in creative and economic spaces, such as AI-generated art, music and literature, has raised questions about intellectual property ownership. Traditionally, copyright and patent laws protect human creators, but should AI-generated works receive similar protections? In the UK, for example, computer-generated works are protected under copyright law, yet ownership remains tied to the creator of the AI system rather than the AI itself. Under the Copyright, Designs and Patents Act 1988, section 9(3), the author of a computer-generated work is defined as "the person by whom the arrangements necessary for the creation of the work are undertaken." This means that, in the UK, copyright subsists in AI-generated works, but the rights vest in the human creator or operator, not the AI system itself. Recognising AI as a rights-holder could challenge these conventions, necessitating a re-evaluation of intellectual property laws.

A potential middle ground involves the implementation of stringent governance models that prioritise accountability without conferring rights upon AI. Instead of granting legal personhood, policymakers could focus on AI-specific liability structures, enforceable ethical guidelines, and greater transparency in AI decision-making processes. The European Commission has already initiated discussions on adapting liability frameworks to address AI’s unique challenges, ensuring that responsibility remains clearly assigned.

While AI continues to evolve, the legal framework governing its use and accountability must remain firmly rooted in principles of human responsibility. AI should be regulated as a tool, albeit an advanced one, rather than as an autonomous entity deserving of rights. Strengthening existing regulations, enhancing transparency, and enforcing accountability measures remain the most effective means of addressing the challenges posed by AI.

The delay in implementing robust AI governance has already resulted in widespread ethical and legal dilemmas, from biased decision-making to privacy infringements. While AI’s potential is undeniable, legal recognition should not precede comprehensive regulatory safeguards. A cautious, human-centric approach remains the best course to ensure AI serves societal interests without compromising fundamental legal principles.

While it is tempting to explore futuristic possibilities of AI personhood, legal rights should remain exclusively human. The law must evolve to manage AI’s risks, but not in a way that grants rights to entities incapable of moral reasoning. For now, AI must remain a tool, not a rights-holder.

James Bloomberg is a second year human sciences student at the University of Birmingham. He has a strong interest in AI, research and innovation and plans to pursue a career as a commercial lawyer.

The Legal Cheek Journal is sponsored by LPC Law.

The battle of redefining privacy in the digital world

University of Liverpool graduate Pritpal Kaur Bhambra discusses how ever more powerful tech companies are using our data


In today's digital age, many of us live much of our lives online. We rely on the web for social interaction, financial transactions, work and entertainment. Yet as we become more intertwined with technology, our personal data is being collected, tracked and potentially exploited without our consent or understanding. As governments and corporations collect our personal data, do any of us have privacy? This growing intrusion into our digital activity creates an urgent need for online consumers to become more aware of how their data is used, so that they can maintain their privacy in the digital space. If the erosion of privacy is not addressed, many will be subjected to corporate surveillance without their knowledge, and digital ecosystems, rather than regulators, will end up redefining privacy, potentially leading to an abuse of power.

Despite marketing techniques that lull users into a false sense of security, online retailers and application providers could be far more transparent about their commitment to user privacy. There is currently a gap between the privacy users expect and the diminished privacy they actually obtain. As surveillance moves from physical cues to the digital environment, there are growing concerns that tracking technologies facilitate the "unwanted gaze". The UK's regulator for data protection and information rights, the Information Commissioner's Office (ICO), monitors companies and ensures they comply with data protection legislation. These standards are consistent with the European Convention on Human Rights (ECHR), which safeguards citizens' fundamental freedoms. Nevertheless, it has become apparent over time that stronger regulation is needed to minimise loopholes in data collection.

The Privacy and Electronic Communications Regulations 2003 (PECR) set out the UK's rules for protecting user privacy in electronic communications. The PECR regulates email and text marketing, tracking technologies such as cookies, and the service providers who handle electronic communications. As more people interact online, the PECR's relevance only increases: it protects users from unwanted marketing and intrusive tracking, and it provides a framework for businesses and service providers to comply with in order to build user trust. Regulation 6(1) of the PECR prohibits service providers from storing or accessing information on a user's device unless the user has been given "clear and comprehensive information" about the purpose and has consented. This shows the law acknowledges that privacy is complex, and that users must be able to exercise normative values such as self-determination, agency and autonomy.

Regulation 6 of the PECR places the individual's privacy at the heart of its principles. This aligns with Apple's stated commitment to user privacy as a "core value" and a "fundamental human right". Yet, according to Apple's latest UK Transparency Report, Apple provided data in response to 78% of the government requests it received. Apple states that, under the law, its Transparency Report is limited to disclosing "what…data may be sought through these requests". So although Apple complies with the law, it is difficult for users to judge whether these requests were necessary and proportionate, as no context is given. Policymakers should therefore aim to minimise regulatory loopholes and strike a balance between public and private interests, so that tech giants are unable to redefine privacy.

Moreover, the recent finding that over 2.6 billion personal records were exposed by data breaches in two years raises concerns about how secure users' data actually is. Although this statistic is alarming, Apple's Senior Vice President of Software Engineering has said the company will "keep finding ways to fight back on behalf of our users by adding even more powerful protections." Apple's design features do illustrate proactive engagement with user privacy: "In 2005, Safari became the first browser to block third-party cookies by default", and Apple has since introduced Intelligent Tracking Prevention, which minimises third-party tracking in the Safari browser. This lends support to Apple's claims about enhancing user privacy. But to maintain its dominant position in electronics and services like Apple News and Apple Music, Apple must go beyond innovative hardware design to protect data privacy, ensuring that its transparency mirrors its stated commitment to safeguarding consumers.

While Apple has made significant strides in enhancing user privacy, as seen in its proactive design features, there are broader concerns about the ethical dilemmas that may arise if companies do not comply with regulations. The tensions between consumer choice, brand reputation and regulatory compliance highlight the importance of ensuring that companies not only meet privacy standards but are also transparent with their customers. Consumers may feel inclined to choose a service provider which promises to protect user privacy, but they should have the freedom to choose their electronic devices independently and without unconscious bias. If consumers trust a company's privacy claims without checking that those assertions are true, this may create an unfair advantage for well-known brands, which benefit from consumer trust even if they do not fully uphold their privacy claims. In turn, a company's marketing claims about safeguarding consumer privacy can have competition law implications, as consumers may favour well-known brands without knowing whether their data is truly protected. In 2018, Anita Allen, Professor of Law and Philosophy at the University of Pennsylvania, explored how technology impacts normative values such as privacy, autonomy and self-determination, arguing that ethics should "complement…the law, rather than undermine…it" when addressing data privacy and shaping societal norms and values.

In addition, Facebook, another dominant platform, claims to have made significant changes to protect user data. Yet in 2018, just two months after Facebook pledged compliance with privacy regulations, the Facebook-Cambridge Analytica scandal revealed that the data of 87 million users had been harvested without their consent. The scandal increased public awareness of data privacy and placed an emphasis on tighter regulation. Mark Zuckerberg, CEO of Facebook, acknowledged a "breach of trust" had occurred and promised to "ban developers that had misused personally identifiable information", adding that Facebook would "learn from this experience to secure our platform…and make our community safer".

While Facebook's response to the Cambridge Analytica scandal acknowledged the privacy breach, it also highlighted the ongoing challenge of ensuring privacy in the digital age. The scandal raised questions about the effectiveness of self-regulation and the need for stronger oversight. Computer scientists such as Tim Berners-Lee stress the importance of the ICO providing the benchmark for regulatory compliance. Such standardisation helps prevent companies from redefining privacy in ways that undermine the PECR: if self-regulation is "managed by big business[es] rather than…the electorate, we lose diversity and get a less democratic system." Zuckerberg's reported remark that "[t]he age of privacy is over" suggests tech corporations may not prioritise privacy or place the public interest at the forefront of their innovations. As such, it is uncertain whether users can trust that their data is safe or rely on legal protection.

Turning to newer, fast-growing retailers, Temu publicly demonstrates its commitment to data protection by not selling data to third-party companies. However, it has been alleged that Temu's referral scheme asked consumers to trade their data, such as their photo, opinions and location, for worldwide advertising purposes in exchange for £50. Alarmingly, such a scheme would undermine Temu's commitment to user privacy and the effectiveness of Regulation 6(2) of the PECR. Consumers do not know the purpose of this disproportionate data collection, who is accessing it or for how long, suggesting that the user's right to respect for private life under Article 8 of the ECHR may be engaged. In light of these events, Temu has since changed its terms and conditions because "they were overly broad and inadvertently included promotional uses that Temu does not engage in", clarifying that Temu only uses "username and profile pictures in promotion for referral functionality and winner announcements." Regulators like the ICO should therefore move away from a light-touch, self-regulatory approach, to prevent retailers from finding loopholes that violate consumer privacy and undermine the PECR.

Whilst it is not surprising that online retailers and application providers publicly display their commitment to user privacy, greater transparency is needed. On balance, dominant service providers do aim to protect privacy rights, but their shortcomings against PECR obligations suggest a trade-off between protecting users' vulnerable data and companies prioritising their competitive position in the market. At present there is a power imbalance between users, corporate entities and the state. The current regulation is not fundamentally flawed, but stronger regulatory oversight would help limit quasi-regulatory power and strengthen the effectiveness of the PECR. When service providers engage in unfair competition or systemic surveillance, there should be a regulatory response, so "that the Society we build with the Web is of the sort we intend".

Pritpal Kaur Bhambra is a graduate of the University of Liverpool's law with a year abroad programme, currently pursuing the BTC with an integrated master's at BPP University.

The Legal Cheek Journal is sponsored by LPC Law.

Meta and the billion-pound class action

QMUL English and film graduate Tuka Hassan discusses the class action case against Meta and trends in competition litigation


With tech giants facing growing legal scrutiny in recent years as regulatory guidelines and data protection laws have tightened, Meta has been at the forefront of multiple competition claims. One of these is a multibillion-pound class action which has been ongoing since 2022. Meta recently lost an appeal to the Court of Appeal against the Competition Appeal Tribunal's decision to certify the class action, allowing the £3.1 billion compensation claim to go to trial. This article explores the wider implications of the claim from a range of perspectives.

Background

Dr Liza Lovdahl Gormsen filed a claim as the proposed class representative (PCR) against Facebook UK Limited, Meta Platforms Ireland Limited and Meta Platforms Inc with the Competition Appeal Tribunal on behalf of 44 million Facebook users in the UK. In competition law claims, collective proceedings against one or more defendants (in this case, Facebook and its parent company, Meta) must follow the Collective Proceedings Order (CPO) regime in order to be brought before the Competition Appeal Tribunal (CAT) at trial.

The CPO regime was reformed under the Consumer Rights Act 2015 to allow for US-style ‘opt-out’ actions – allowing a party to bring a claim on behalf of an entire class without the express knowledge or permission of each member of that class. In this instance, Dr Gormsen was able to amass a claim on behalf of 44 million Facebook users, creating a high-profile and potentially lucrative case.


Dr Gormsen claims that Meta abused its dominant position in the personal social network market by imposing unfair terms and prices on Facebook users through its use of their data, with the claim brought as collective proceedings under section 47B of the Competition Act 1998. She argues that Meta exploited user data by making Facebook users 'pay' with their personal data to access the social networking platform, and by tracking their online activity to rack up billions of pounds worth of profit. The claim, which seeks up to £3 billion in damages, was certified in February 2024 after satisfying the necessary requirements and is awaiting trial at the tribunal following Meta's failed appeal in October 2024.

The rise in collective actions

The introduction of ‘opt-out’ actions created a claimant-friendly regime which is vital to understanding why the number of collective actions has risen and will continue to rise in the future. As people are automatically included in a claim, large damages figures will be produced, garnering enhanced media attention and creating greater anticipation of a definitive decision on a claim.

Furthermore, the CAT’s intention to widen access to justice is most notably seen in its approach towards proposed class representatives (PCR). Dr Gormsen was unsuccessful in her first attempt to achieve certification as the tribunal was ‘not convinced by the methodology for assessing quantum‘ – the amount of money legally payable in damages. The application was instead postponed for six months, allowing Dr Gormsen to try again and redraft the CPO application as the CAT will ‘not allow an application to fail on technical grounds’. The tribunal’s unanimous decision to allow the class-action to proceed underscores its commitment to a more accessible justice system. Overlooking technicalities in this instance suggests a lowered bar for certification and will enable consumers to address mass grievances against corporate giants more easily.

Litigation funders

As opt-out class actions produce more substantial damages figures, some fear that the CPO regime could become a vehicle for litigation funders to achieve a return on investment rather than a remedy for the claimants themselves. Just last year, Meta was ordered to pay $725 million to Facebook users in the US following a privacy case, demonstrating how profitable such claims can be for funders. In the UK, the top 15 funders reportedly have almost £2 billion of assets under their control. Claimants receive funding from a third-party funder (usually a hedge fund or private equity firm) to meet their legal costs; in exchange, funders receive a cut of any sum awarded by the courts. Because funders rely on a return on investment, such funding is usually only available to high-value claims, typically in the profitable commercial sector. Innsworth Capital Limited, the litigation funder behind the claim against Meta and Facebook, has also funded mass legal actions against Mastercard, Ericsson and Volkswagen, so this collective action could arguably be seen as simply the next big target.

Why is this case controversial?

The overarching element of this case is about the value of data in the digital age. As the tech sector continues to advance at an exponential rate, personal data is rapidly evolving to be more of a currency. This case could influence other tech giants to consider how to adequately compensate users for their data.

Nevertheless, the claim could also be construed as an attack on Facebook's business model, alleging that the platform's service is fundamentally anti-competitive. At its core is the question of whether a company's conduct only attracts litigation once the company reaches a certain size and can therefore be accused of anti-competitive behaviour.

While this view seems plausible at first, it overlooks years of public furore over the uncertainty surrounding the use of personal data and its monetisation for capital gain. The Cambridge Analytica scandal, which played out around Trump's 2016 presidential election campaign and the EU Referendum in the same year, marked the beginning of a 'digital kleptocracy': high-level political power abused through the stolen data of the public. Facebook was one of the key platforms used to spread highly targeted propaganda (videos, images, memes, blogs and advertisements, among others) aimed at the 'persuadables': users identified through data-mining technology as politically and ideologically undecided, and therefore impressionable and exploitable. Many of those involved in the scandal would characterise these campaigns as forms of psychological warfare or 'psyops', tactics intended to manipulate and influence people's behaviour.

Platforms designed to connect us have arguably been weaponised, a shift that was nearly impossible to detect at the time, as Facebook also served as the space where we communicated with friends, shared baby pictures and celebrated newlyweds. Perhaps the sheer number of users necessitates a greater degree of responsibility to maintain ethical practice in the data domain, and as a result Dr Gormsen's claim has a legitimate foundation.

With tech companies such as Meta monopolising attention, there is increasing demand across the world for legal regulation and protection of data privacy. Whilst change is happening through the Competition and Markets Authority (CMA) and the Federal Trade Commission (FTC), there is also growing demand for property rights in personal data. With data now said to surpass oil in value, granting the general population legal ownership of such a lucrative resource would be a step towards restoring trust in a constantly developing domain and reclaiming online identity.

Tuka Hassan is a first-class English and film graduate from Queen Mary University of London with an interest in politics, human rights, commercial law, and IP and tech law.

The Legal Cheek Journal is sponsored by LPC Law.

Deepfakes and the law: navigating the blurred lines of reality in the digital age

Essex Uni law student Raksha Sunder unpacks the rise of deepfakes, their legal implications, and what global regulation could mean for this evolving digital frontier

In 2018, Sam Cole, a reporter at Motherboard, uncovered some troubling news on the internet. A Reddit user known as “deepfakes” was posting fake pornographic videos using an AI algorithm to swap celebrities’ faces with those of adult film actors. Cole raised awareness about the issue just as the technology gained momentum. By the following year, these deepfake videos had expanded well beyond Reddit, with apps available that could digitally “strip” a person’s clothing from a photo.

Deepfake technology has since been associated with these malicious purposes, and it is still used to create fake pornography. This has significant legal implications; for instance, the UK's Online Safety Act 2023 includes provisions criminalising the sharing of non-consensual deepfake pornography. There is also the risk that political deepfakes will generate convincing fake news capable of wreaking havoc in unstable political environments.

The European Union's Code of Practice on Disinformation highlights these dangers, calling for measures to combat the spread of manipulative deepfake content. In the run-up to the 2020 US presidential election, the nonpartisan advocacy group RepresentUs released two deepfake advertisements depicting North Korean leader Kim Jong-un and Russian President Vladimir Putin claiming that they "didn't need to intervene in the US elections as America would destroy its democracy on its own". Through these videos, RepresentUs aimed to raise awareness of voter suppression, defend voting rights and boost turnout, despite experts' concerns that the technology could cause confusion and interfere with elections. In a 2020 campaign video, an Indian politician reportedly employed deepfake technology to make it appear he was speaking Haryanvi, the Hindi dialect of his target audience.

Additionally, a deepfake circulated recently made it appear that singer Taylor Swift had endorsed Donald Trump. This caused a media frenzy until Swift clarified that she did not support Trump and instead endorsed Kamala Harris. The episode demonstrated the disruptive potential of deepfakes for public opinion, showing how easily fabrications can manipulate perceptions of political endorsements.

How should deepfakes be regulated?

Governments worldwide have been discussing how to regulate these new technologies as the use of AI and deepfakes spreads. There remain significant gaps in the laws and regulations that various jurisdictions have introduced to handle the particular challenges deepfakes present.

The DEEPFAKES Accountability Bill (H.R. 3230), introduced in the 116th US Congress, was a significant step towards governing deepfake technology. The proposed law includes provisions for labelling deepfake content and would require producers to disclose when an image or video has been altered with artificial intelligence. Its purpose is to stop the propagation of malicious deepfakes that may threaten individuals, circulate false information or obstruct democratic processes.

Social media sites such as YouTube and Instagram already have standards in place to prevent harmful content from being hosted on their sites. However, these restrictions are frequently unenforced, as banned content may not always be detected by automated systems, and manual inspection procedures can be laborious or inefficient. Therefore, users continue to monetise content that includes deepfakes, especially if they evade detection, allowing them to gain profits while violating legal and/or platform guidelines.

The European Union (EU) has implemented the General Data Protection Regulation (GDPR) and the Code of Practice on Disinformation, both of which can be used to combat deepfakes. Deepfakes may be governed by the GDPR, which regulates data protection throughout the EU, where personal information or photos are used without consent: a person's voice or appearance used in a deepfake constitutes personal data under Article 4 of the GDPR, and Article 6 requires a lawful basis, such as the subject's consent, for processing it. Meanwhile, the voluntary Code of Practice on Disinformation, introduced in 2018, urges tech companies to demonetise misinformation and promote transparency in political advertising to stop the spread of deepfakes and other manipulative content online. However, the code relies heavily on voluntary compliance, limiting its effectiveness.

The regulation of deepfakes: a new path forward?

A direct global regulatory structure that targets deepfake technology would be one of the biggest leaps forward. This may depend on agreements already in place, such as the Convention on Cybercrime (Budapest Convention) of the Council of Europe, which establishes guidelines for national cybercrime laws and promotes international collaboration. To address the creation and spread of deepfakes, a treaty similar to this one might be created, emphasising disclosure and consent and applying the disclosure requirements outlined in Section 104 of the DEEPFAKES Accountability Act on a global scale.

However, requiring deepfake creators to disclose their content is not enough to deal with the growing challenges of these technologies. A better solution would be to establish international guidelines that include penalties for those who misuse AI and deepfakes. Creators would then be required both to disclose when they have altered content and to answer for any harm caused by their creations. This approach could mirror the rules already applied to other digital threats, such as cybercrime or online scams. Adding strict punishments for those who misuse deepfakes to deceive or damage reputations would create a more robust defence against their adverse effects.

Another way the misuse of deepfake technology could be dealt with is through international data protection agreements, akin to the EU-US Data Privacy Framework. Such agreements would standardise the protection of personal data used in deepfakes across borders, preventing data laundering by ensuring consistent safeguards regardless of jurisdiction. The agreement could incorporate a mechanism similar to the European Arrest Warrant (EAW), enabling the swift transfer of suspects involved in deepfake crimes to the country where the offence occurred. This would prevent perpetrators from evading justice by exploiting weaker legal systems in other countries.

The stakes are higher than ever as deepfakes continue to blur the distinction between fact and fiction. The days of "seeing is believing" are coming to an end, and if the legal system doesn't keep up, we might find ourselves in a society where reality is nothing more than a digital illusion.

Raksha Sunder is a law student at the University of Essex with a keen interest in corporate law. She is the Vice President of the Essex Law Society and enjoys competing in writing competitions during her free time.

The Legal Cheek Journal is sponsored by LPC Law.

The cryptocurrency dilemma: Should volatile digital assets receive legal recognition as property?

BCL student at the University of Oxford, Niranjana Ramkumar, explores the limitations of English property law when it comes to crypto

In September, the Law Commission’s draft bill regarding the proprietary status of digital assets, including cryptocurrency tokens, was introduced in parliament. This draft bill was the result of a years-long project carried out by the Law Commission, where it was asked by the government to recommend reforms to the law in this area. The introduction of the draft bill is the first step in a process that could lead to the eventual statutory recognition of crypto tokens and other digital assets as property.

Although some court decisions have recognised in principle that digital assets could be considered property, putting this on a statutory footing would ensure greater clarity and certainty in the law. This is especially important for companies and individual investors in the cryptocurrency market. Moreover, the fact that the bill was introduced in parliament in the first place signals that the UK is looking to become a growth hub for the digital assets industry, which could draw investors and innovators to the UK and lead to an evolution in the market.

However, a lot has changed since the Law Commission began its project in 2021. Following the collapse of numerous cryptocurrencies and exchanges, including the high-profile collapse of FTX in November 2022, the crypto market has proven to be highly volatile. The market is also replete with scams and hacks, perhaps more so than with other kinds of investment. It is therefore worth considering why there was uncertainty as to the legal status of cryptocurrency and what the new draft Bill could mean for property law. It must also be asked exactly why cryptocurrency needs to be recognised as property. Although enabling innovation and encouraging new technologies is important, the recognition of crypto specifically risks giving the market a legitimacy which it may not entirely deserve.

Why is there uncertainty about the legal status of cryptocurrency?

Traditionally, there are two types of rights which are recognised as personal property rights:

  • choses in possession are rights to things which are capable of being possessed, such as my title to my pen
  • choses in action are rights to things which are not capable of being possessed because of their intangible nature and are only enforceable by legal action, such as a share in a company

Cryptocurrency does not fit into either of those categories. Since crypto tokens are digital assets, they are intangible and so they are not capable of being possessed. Title to a crypto token is therefore not a chose in possession. However, crypto tokens are also not capable of being enforced via legal action. Without getting into the details of the technology, the holder of a crypto token is essentially the holder of a private key allowing them to make transactions in the system. The crypto token does not consist of any underlying legal obligation that can be enforced against anyone, so the crypto token cannot be a chose in action either.
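
To make that point concrete, here is a minimal, purely illustrative sketch in Python of why "holding" a crypto token amounts to knowing a secret key rather than holding a claim against anyone. The names used (Ledger, address_of, transfer) are invented for this sketch, and real cryptocurrencies use elliptic-curve signatures and a distributed ledger rather than the toy hash-based check shown here.

# Toy model of crypto-token "ownership", for illustration only.
# Assumption: names (Ledger, address_of, transfer) are invented for this sketch;
# real systems rely on elliptic-curve signatures and a distributed ledger.
import hashlib
import secrets

def address_of(private_key: bytes) -> str:
    """Derive a public 'address' from a secret key (toy stand-in for a public key)."""
    return hashlib.sha256(private_key).hexdigest()

class Ledger:
    def __init__(self):
        self.balances: dict[str, int] = {}

    def credit(self, address: str, amount: int) -> None:
        self.balances[address] = self.balances.get(address, 0) + amount

    def transfer(self, private_key: bytes, to_address: str, amount: int) -> bool:
        """A transfer succeeds only if the spender knows the secret key for a funded
        address. No court, contract or counterparty is involved: the code checks
        the key and the balance, and nothing else."""
        sender = address_of(private_key)
        if self.balances.get(sender, 0) < amount:
            return False
        self.balances[sender] -= amount
        self.credit(to_address, amount)
        return True

# Usage: whoever holds alice_key can spend; if the key leaks or is lost,
# the ledger itself offers no remedy, which is the gap property law is asked to fill.
alice_key = secrets.token_bytes(32)
bob_key = secrets.token_bytes(32)
ledger = Ledger()
ledger.credit(address_of(alice_key), 100)
print(ledger.transfer(alice_key, address_of(bob_key), 40))   # True: the key holder can transact
print(ledger.transfer(bob_key, address_of(alice_key), 999))  # False: insufficient funds

The sketch illustrates that the "right" conferred by a token is exhausted by the ability to run a transfer with the correct key: there is no underlying obligation owed by any person that a court could enforce, which is why a crypto token fits comfortably in neither traditional category.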

This may seem strange to non-lawyers; after all, if cryptocurrency is just a digital thing, like a pound coin in real life is a physical thing, then why can’t a cryptocurrency token be property just like a pound coin is? But the issue is that, in a technical sense, property is about rights rather than things, and when it comes to rights, as Robert Stevens points out, the categories of chose in possession and chose in action are exhaustive. Either the right relates to a thing capable of being possessed, or the right is only enforceable by legal action.

Some commentators suggest that this should not be a barrier to the recognition of cryptocurrency as property. Some characterise digital assets as the power to engage in transactions within a particular system. Cryptocurrencies and other digital assets are meant to exist outside of the legal system. As will be explored further below, the whole point of cryptocurrency is that it is independent of the legal system, and so any transactions made according to the rules of the code are valid and irreversible. This means that you have to be the holder of a crypto token to transact within the system, and some commentators suggest that the power granted by holding the crypto token is a right that can be conceptualised as a proprietary right.

But we don’t recognise such powers to transact as property rights in other contexts. For example, if I own a photocard of a particular K-Pop star, that gives me the power to engage in the often lucrative photocard trading and selling market. But my power to do so doesn’t exist as a property right independently of my ownership of the card. My property right is my title to the photocard. You might say that the photocard trading market isn’t meant to exist outside of the legal system in the way that the cryptocurrency market is. But there’s a tension there: if cryptocurrency as a system is meant to exist outside of our legal strictures, why should crypto tokens be recognised as property at all?

Should crypto tokens be recognised as property?

If the Law Commission’s draft bill is passed by parliament, there would no longer be the issue of whether cryptocurrency is property. The draft bill states that something is not prevented from being the object of personal property rights even if it isn’t a chose in action or a chose in possession, which gets rid of the problems outlined above. But an important question to consider is whether cryptocurrency should be recognised as property in the first place. Hardcore cryptocurrency enthusiasts would insist that ‘code is law’, the point of cryptocurrency as a system being that it exists outside of the government-imposed legal system and that interactions on the blockchain are governed by the code. If that is the case, what is the purpose of property law in the system? Why not just let the code govern the validity of interactions on the blockchain, particularly if that is an explicit draw of the system?

Some argue it is necessary to recognise cryptocurrency as property because crypto and other digital assets are becoming an increasingly important part of our financial system. In fact, there is still a high level of institutional interest in cryptocurrencies as an investment. However, the market is highly volatile, being heavily affected by the fluctuations affecting the ordinary stock market as well as the collapses of cryptocurrency exchanges and tokens, so there’s no telling how long the bubble will grow before it pops again like it did in 2022.

Rather than facilitating the development and growth of cryptocurrency, there is a case for the law regulating cryptocurrency more stringently than it does right now. Cryptocurrency scams have risen sharply in recent years, with victims losing more on average than in any other type of scam. Outside of scams, hackers and weaknesses in the blockchain can lead to individuals and institutions losing lots of money; a notable example of this is the cyberattack that affected Axie Infinity, a play-to-earn crypto video game that provided many players with a primary source of income. Attacks and scams like this have tested the limits of the ‘code is law’ mantra. If a hack or a fraud occurs, the code has no way of knowing this, so any fraudulent transactions would be considered legitimate—that is the risk of engaging in a system that explicitly puts itself outside the law. But, understandably, the victims of fraud or hacks aren’t happy about this, and so they turn to the law to help.


The handful of court cases we have that recognise cryptocurrency as property reflect an instinct to help the victims of fraud. In AA v Persons Unknown, for example, a hacker had demanded a ransom payment in Bitcoin, and Bryan J granted a proprietary injunction in respect of the Bitcoin paid. It could be argued that statutory recognition of cryptocurrency as property would grant greater protection to victims of fraud and hacks.

However, perhaps we can afford to be a bit more sceptical of a system that insists it is self-governing and that its code is law right up until the moment something goes wrong. Rather than trying to modify the existing system of property law to enable cryptocurrency to fit it, perhaps we should go about handling the legal issues raised by crypto in another way, one that is more apt to solve the problems inherent in the fraud-ridden, low-trust environment the system seems to have created.

Niranjana Ramkumar is studying for the BCL at the University of Oxford. She has a particular interest in private law, especially the law of property.

The Legal Cheek Journal is sponsored by LPC Law.

The post The cryptocurrency dilemma: Should volatile digital assets receive legal recognition as property? appeared first on Legal Cheek.

Redefining property in the digital age https://www.legalcheek.com/lc-journal-posts/redefining-property-in-the-digital-age/ https://www.legalcheek.com/lc-journal-posts/redefining-property-in-the-digital-age/#respond Mon, 16 Sep 2024 06:33:48 +0000 https://www.legalcheek.com/?post_type=lc-journal-posts&p=209463 Cambridge law student Tristan Toh Zhi Shun examines how digital assets have disrupted property definitions and how regulation must adapt to keep pace with innovation


Cambridge University law student Tristan Toh Zhi Shun takes a look at how the emergence of digital assets has disrupted legal definitions of property, and how regulation will have to adapt to keep up with innovation in this area


The rapid ascent of cryptocurrency has triggered pressing legal issues in the financial sector, particularly in property law. Conventional forms of ownership, such as tangible assets or enforceable rights, must be redefined to accommodate the unique attributes of digital assets like cryptocurrency. The current interpretation of ‘property’ has led to a legal issue whereby UK law fails to adequately acknowledge or safeguard digital assets, thereby jeopardising market participants and the financial system.

The UK Law Commission's release of a supplementary report and draft Bill on digital assets as personal property is a pivotal step in addressing this issue. The proposed legislation confirms recognition of a new category of personal property rights capable of accommodating specific digital assets, such as cryptocurrency. Redefining 'property' to include cryptocurrencies is crucial, as it aligns the law with technological advancements and underscores the UK's ambition to become a global hub for cryptocurrency and blockchain technology. This ambition has been articulated in recent governmental frameworks, which place emphasis on creating an environment conducive to the growth and regulation of cryptocurrencies. By establishing clear regulatory guidelines, the UK aims to provide both clarity and confidence to investors and innovators alike. This approach not only aligns with global standards but positions the UK at the forefront of digital finance, reinforcing its status as a leader in this rapidly evolving sector.

The legal issue presented by cryptocurrencies stems from their intangible nature. Unlike physical property, cryptocurrencies exist only in the virtual world, making it challenging to categorise them within the existing legal categories of possession and action. As a result, the courts have grappled with how to recognise cryptocurrencies within these traditional categories, leading to significant legal ambiguity.

Several landmark cases have shaped the recognition of cryptocurrencies as property in UK law. In Vorotyntseva v Money-4 Ltd (T/A Nebus.com) [2018], the court considered whether cryptocurrencies could be classified as property to grant an injunction. Although the court did not definitively rule on the matter, it proceeded on the assumption that cryptocurrencies could be considered property, marking the judiciary’s initial engagement with this issue.

A more definitive step came with AA v Persons Unknown & Ors [2019], where the High Court granted proprietary injunctions to freeze fraudulently obtained Bitcoin. Mr Justice Bryan explicitly recognised cryptocurrencies as property, citing the UK Jurisdictional Taskforce’s (UKJT) Legal Statement on Cryptoassets and Smart Contracts. This case established a clear legal precedent that cryptocurrencies could be treated as a third category of property, capable of being subject to proprietary claims and remedies.


The case of Tulip Trading Ltd v Bitcoin Association for BSV [2023] further explored the implications of treating cryptocurrencies as property. The Court of Appeal examined whether Bitcoin developers could be required to help recover stolen assets on the basis that they may owe fiduciary duties to users. Although the court confirmed Bitcoin's classification as property, it also emphasised the intricate relationship between conventional legal concepts and the decentralised nature of cryptocurrencies. The case highlighted that even though cryptocurrencies can be considered property, applying standard fiduciary duties to digital assets remains an unresolved legal issue.

The inconsistencies and unresolved issues surrounding the legal treatment of digital assets, particularly cryptocurrency, underscore the urgent need for a comprehensive statutory framework. Case law is valuable, but landmark rulings that set crucial precedents also reveal the limitations of an incremental, court-driven approach. Without explicit statutory reform, vital questions regarding fiduciary obligations, asset recovery and cross-border enforcement remain unresolved. This legal uncertainty leaves corporations navigating different and evolving regulatory regimes facing greater complexity and ambiguity, inhibiting their ability to comply with the relevant obligations and driving up legal and compliance costs. There is therefore a pressing need for clear and consistent guidelines to ensure legal certainty and protection for market participants and the broader financial system.

In light of the evolving complexities surrounding digital assets, the UK Law Commission's supplemental report and draft Bill are both timely and essential. The proposed legislation seeks to formally recognise a "third category" of personal property tailored to encompass digital assets like cryptocurrency. Such a move is critical because the existing legal categories of "things in possession" and "things in action" are ill-suited to capture the unique nature of these assets. Without a clear legal framework guiding regulated activities, there would be material legal uncertainty for corporations that deal in or with digital assets such as cryptocurrency, creating numerous pitfalls for the digital assets industry and for the viability of digital asset use in the financial sector. For instance, in the case of collateral, a collateral-taker wanting to take security by way of a fixed equitable charge over digital assets would have to take a certain level of control over the relevant assets; pertinent legal issues then surface when determining accountability in areas such as control-based proprietary interests and other broader obligations, which are not the norm for other intangible assets. Without statutory reform, the current reliance on case law, while valuable, risks leaving significant legal ambiguities unresolved, particularly in areas like asset recovery and cross-border enforcement.

The main goal of the proposed Bill is to give clear legal recognition to digital assets as property, creating a consistent framework for how they are dealt with in the legal system. This clarity is essential, particularly in complex legal situations like insolvency or international disputes, where the current legal position of digital assets remains unclear. Furthermore, the recommendation from the Law Commission to establish a panel of industry experts to assist the courts in these legal matters underscores the significance of having expertise in dealing with the intricate technical aspects of digital assets. This approach increases legal certainty and establishes a strong foundation for further development of common law in this dynamic legal area.

In conclusion, the formal recognition of digital assets as a distinct category of property under UK law is a significant reform necessary to address the urgent challenges posed by the rise of cryptocurrencies and other digital assets. The draft Bill proposed by the UK Law Commission is crucial for providing legal clarity and consistency. By enacting this legislation, the UK could enhance its position as a global leader in regulating digital finance and ensure its legal system is equipped to meet the demands of a modern, digital economy. This reform not only strengthens the safeguards associated with digital assets but also paves the way for further research, innovation and legal amendments at the intersection of law and technology, contributing to a more robust and adaptable legal framework for the future.

Tristan Toh Zhi Shun is a law undergraduate at the University of Cambridge and a brand ambassador for Legal Cheek. He is passionate about applying academic and practical knowledge to legal and compliance challenges in commercial law, fintech law and financial services regulatory compliance.

The Legal Cheek Journal is sponsored by LPC Law.

The post Redefining property in the digital age appeared first on Legal Cheek.

How the EU is battling Big Tech https://www.legalcheek.com/lc-journal-posts/how-the-eu-is-battling-big-tech/ https://www.legalcheek.com/lc-journal-posts/how-the-eu-is-battling-big-tech/#comments Tue, 27 Aug 2024 07:18:51 +0000 https://www.legalcheek.com/?post_type=lc-journal-posts&p=208073 LSE politics student Oliver Masson analyses the impact of EU and UK competition regimes against tech monopolies


LSE politics student Oliver Masson analyses the impact of EU and UK competition regimes against tech monopolies

In September 2023 the EU designated the first 'gatekeepers' under the Digital Markets Act (DMA), antitrust legislation aimed at curbing the dominance of large tech firms in the digital sector. The law establishes a set of criteria for regulating gatekeepers (companies that control access to key digital platforms such as app stores and messenger services) and for preventing them from monopolising these markets.

The DMA designations targeted six companies: Alphabet, Amazon, Apple, ByteDance, Meta and Microsoft, with fines of up to 10% of annual worldwide turnover threatened for breaches. These rules marked a revolutionary step towards encouraging competition in the tech industry, a goal that is also being pursued by other countries across the continent.

Why the digital sector?

Never in the history of competition policy have governments around the world focused so persistently on regulating the same market. This has happened due to the power that companies such as Apple and Alphabet have asserted in the digital sector  by effectively building monopolistic control over app stores, mobile payment services, and various other digital platforms. Due to this dominance, these corporations are able to stifle competition through a variety of measures. These include: pricing out smaller companies, creating harsh terms and conditions for developers to introduce their applications onto large digital platforms, and entering into exclusive deals with suppliers to ensure that competitors do not have access to the same resources. Simply put, these tech giants are able to determine which companies succeed in the industry.

The consequences of this are drastic and widespread. Reduced competition stifles innovation, as small companies are unable to introduce novel ideas due to their limited power. Furthermore, monopolistic control limits consumer choice, as users are forced to use specific app stores and payment services. These platforms may cost more or have less advanced features than potential alternatives; however, the dominance of large tech firms means they continue to be used. A monopolistic digital sector therefore harms both consumers and the industry, which is why governments, regulators and competition authorities have become focused on reducing the power of large tech firms.

The DMA in practice

The DMA has made significant strides in doing this, as demonstrated by EU antitrust regulators forcing Apple to open its mobile payments system to rivals after determining that its practices breached the new law. As a result, the tech firm could face significant sanctions. This adds to the company's existing troubles: on 4 March 2024 it was fined 1.84 billion euros under EU antitrust rules after Spotify complained that Apple had limited competition from rival music streaming services by placing restrictions on them in its App Store.

Apple is not the only company forced to face the music: Microsoft has also been battling multiple investigations since the gatekeepers were designated in September 2023. This began with the company being encouraged to give up its board observer seat at OpenAI amid fears that the arrangement breached antitrust laws in the UK, US and EU. The concern was that the position could give Microsoft excessive influence over OpenAI's strategic decisions, stifling other companies' ability to gain access to the technology. Microsoft is also facing investigation by European regulators over accusations that it is preventing customers from using certain security software provided by other tech firms, which again could result in a major financial penalty.


Regulatory measures outside of the EU

Hauling companies to court under the DMA is not the only measure taken to curb the power of large tech firms; nations outside the EU have also attempted to do the same. This can be seen in the UK, where the Digital Markets, Competition and Consumers Act (DMCCA) was passed to enhance the Competition and Markets Authority's (CMA) powers to crack down on anti-competitive conduct. The CMA is the UK's primary competition regulator, and the new law gives it the power to heavily fine tech companies that engage in anti-competitive practices. The regulator has also launched an investigation into Amazon and Microsoft's extensive control over the nation's cloud market; the two companies are estimated to account for 70-80% of Britain's public cloud infrastructure services. The CMA aims to curb this dominance.

These efforts by the UK to limit the power of big tech companies highlight the growing continental consensus that steps need to be taken to further regulate the digital sector. However, while revolutionary in many ways, the DMA and the DMCCA are likely to be just the first steps in battling an industry that has become increasingly dominated by monopolies.

The future of tech regulation

Looking forward, the European Investment Bank (EIB) has called for the EU to do more to address the lack of competition in the digital sector. In a recent report, the organisation argued that top-down strategies like antitrust laws are insufficient by themselves; the EU must also focus on providing greater investment in start-up tech companies if it hopes to reduce the dominance of large firms.

Furthermore, there are also hopes for greater transatlantic cooperation in penalising these digital conglomerates. While American regulators have punished the likes of Google for antitrust breaches, legislators in the US have yet to pass a law that mirrors the DMA or the DMCCA, despite multiple conferences involving regulators from the EU, US and UK on how to approach the issue. The US is the leading world power in the tech industry, so any drastic change in this sector requires its commitment as well. A key priority for European regulators moving forward will therefore be bringing the US further into alignment.

Conclusion

Two decades of relatively unrestricted growth have resulted in a handful of large companies dictating the tech industry. Whether it is the dominance of Amazon and Microsoft in the cloud market or the monopoly Google holds over search engines, competition in the digital sector has hit an all-time low, leading to a lack of innovation and diminished customer satisfaction within the industry. In Europe this has begun to be addressed: the EU passed the DMA and the UK passed the DMCCA, both of which have worked to crack down on anti-competitive conduct by large tech companies, with the likes of Apple already fined 1.84 billion euros for their infractions.

However, this is not yet enough, as more needs to be done globally in order to properly combat the rise of a monopolistic tech industry. This includes greater investment in start-up firms, but also increased international cooperation, especially between the US and the rest of the world. Only through these combined efforts can we foster a more competitive technological landscape that celebrates innovation rather than monopolisation.

Oliver Masson is a second-year politics student at LSE. His interests in commercial law generally focus on litigation, particularly in energy and regulation law. Outside of academia, he’s passionate about football and history.

The post How the EU is battling Big Tech appeared first on Legal Cheek.

AI in law: Embracing change or facing extinction? https://www.legalcheek.com/lc-journal-posts/ai-in-law-embracing-change-or-facing-extinction/ https://www.legalcheek.com/lc-journal-posts/ai-in-law-embracing-change-or-facing-extinction/#respond Mon, 19 Aug 2024 07:15:19 +0000 https://www.legalcheek.com/?post_type=lc-journal-posts&p=207621 James Southwick, bar course graduate, examines the impact of legal technology on the future of the legal profession


James Southwick, bar course graduate, analyses how we can expect the legal profession to change as legal technology develops, and what this means for the future

In one of the oldest professions in the world built upon hundreds of years of tradition, change can be a slow process. But in the 21st century, when change is more rapid than ever, the legal profession needs to catch up.

Current reasons for a lack of change vary: for those charging on an hourly basis, using technology that reduces those hours is hardly an incentive! For others with well-established processes, the effort of adopting a new system is simply not worth it. But legal technology (also known as legaltech, or sometimes lawtech) is constantly changing, and staying ahead means understanding what is available now, what is coming next, and what to expect further in the future.

In today’s current environment, legal technology can perform a host of tasks only dreamt of by professionals in the past. Thomson Reuters, just one company providing these services, offers legal research platforms with comprehensive collections of the law, drafting technology, evidence management and due diligence tools, to name but a few. Using this technology in the workplace can result in greater productivity for individuals, improved client service and optimised workflow. But what is the next step for the legal profession? Following its recent boom, everyone is looking towards artificial intelligence (AI).

What is AI?

AI is a term originally coined in 1955 by John McCarthy, a pioneer in the AI field. At its core, AI is technology that allows computers and machines to simulate human intelligence and problem-solving capabilities. Since 1955, firms and individuals have developed AI to carry out tasks ever more efficiently, and its use in everyday life may go unnoticed: digital assistants on phones, GPS, self-driving cars and even Roombas (autonomous robotic vacuums) all use AI to perform their tasks.

Today, the focus is on a new type of AI: generative AI. Generative AI differs from the AI found in applications like Siri in that it creates content rather than being coded for a specific purpose. The best-known generative AI at the moment is OpenAI's ChatGPT. Its functions seem endless, from solving maths problems step-by-step and giving relationship advice to writing original poetry and helping people prepare for job interviews. Most importantly, AI can learn. Taking ChatGPT as an example: using publicly available information from the internet, information licensed from third parties and information provided by human trainers, it learns associations between words and, through reinforcement learning from human feedback, refines how it generates human-like responses. So as AI is used more, its application and quality will only improve, and it is already seeing uses within the legal profession.
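To make the idea of 'learning associations between words' concrete, here is a minimal sketch in Python. It is purely illustrative: it counts which word tends to follow which in a tiny sample of text and then suggests the most likely next word. Real systems such as ChatGPT use neural networks trained on vastly larger datasets and refined with reinforcement learning from human feedback, but the underlying intuition of learning statistical associations is similar. The sample sentence and the function name are invented for this example.

from collections import defaultdict, Counter

# A tiny sample corpus; a real model is trained on billions of words.
corpus = "the court held that the contract was void the court awarded damages".split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def most_likely_next(word):
    """Return the word most often seen after `word` in the sample text."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(most_likely_next("the"))    # prints 'court' (seen twice after 'the')
print(most_likely_next("court"))  # prints 'held' or 'awarded' (one count each)

The more text such a model is trained on, the better its suggestions become, which is the same basic reason large generative models improve as their training data and human feedback grow.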

AI and the legal profession

Legal technology is already adopting AI to improve the effectiveness of its services. Thomson Reuters' CoCounsel utilises generative AI to analyse documents, summarise data and answer legal questions. Meanwhile, members of the judiciary have adopted it into their practice, most notably Lord Justice Birss, who confirmed he used ChatGPT to produce a summary of an area of law which he then used in his final judgment.

Like legal technology generally, using AI in practice can mean greater efficiency, improved client service and optimised workflow. But there are also a number of risks which must be carefully considered. With generative AI, there is a risk of prompts returning fabricated information. This has already been seen in New York in Mata v Avianca, 22-cv-1461 (PKC), where two lawyers were fined for citing six cases that had been generated by ChatGPT. In the UK, in Felicity Harber v The Commissioners for His Majesty's Revenue and Customs [2023] UKFTT 1007 (TC), a litigant in person sought to rely on nine cases 'hallucinated' by ChatGPT, which included American spellings and repeated identical phrases. Both cases ultimately wasted court time and costs, emphasising that the user must take full responsibility for validating the accuracy of any AI-generated material they seek to rely on. This is especially true given that AI models such as ChatGPT draw much of their information from US data, with no reliable ability to distinguish UK case law from that of other jurisdictions.


AI has also been used in 'deepfake' technology: digitally manipulated media which can create or alter files to look or sound like another person. This includes photographs, videos and audio clips, and its use has already been seen in UK courts. For example, in 2019 a mother produced an audio file of the father making threats towards her, but upon further analysis it was discovered that the file had been manipulated to include words not used by the father. It is a stark reminder that, where possible, legal professionals should seek the original files and analyse metadata to determine who has accessed a file and whether any changes have been made.

Understanding the benefits and risks of AI is imperative to guarantee its proper application in practice, thereby ensuring that AI is used as an assistive tool rather than a replacement for the legal professional.

The future of AI

So where does the future of AI lie in the legal profession? The reality is that AI has already taken more jobs than it has created. It is predicted that in the next 20 years around 114,000 legal jobs will be automated, as continuously improving AI delivers results in less time and with tangible cost benefits. Clients will expect a reduction in costs too. As automation increases, billable hours will decrease, and it is very likely that the profession will move away from billable hours towards fixed fees.

One developing area utilising AI that is likely to have a huge impact is predictive legal technology, or computational law. Using AI, large databases can be analysed to identify patterns of activity which are predictive of future behaviour. This extends to objective analysis of judicial proceedings to predict legal outcomes. In 2014 an algorithm was used to predict the outcomes of historical US Supreme Court verdicts, using only data available before the decisions. The programme predicted over 69% of decisions correctly, a higher rate than expert panels achieved. Development of this technology may make it possible to estimate a prospective litigant's chances of success, but its implementation may disadvantage those with an apparent 'low success potential', with the decision to take instructions left entirely to a computer programme.
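As a rough illustration of how such predictive tools work, the sketch below (Python, requiring the scikit-learn library) trains a simple classifier to estimate a claimant's chance of success from a few case features. This is a hedged, minimal example only: the 2014 Supreme Court study used a far more sophisticated model and much richer data, and the feature names and figures here are invented purely for illustration.

from sklearn.linear_model import LogisticRegression

# Hypothetical training data: each row is a past case described by invented
# features [claim value in £k, precedents favouring the claimant,
# documentary evidence score 0-1]; label 1 means the claimant won.
past_cases = [
    [10, 3, 0.9], [250, 0, 0.2], [40, 2, 0.7],
    [500, 1, 0.3], [15, 4, 0.8], [120, 0, 0.4],
]
outcomes = [1, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(past_cases, outcomes)

# Estimate the success probability of a prospective new claim.
new_claim = [[60, 2, 0.6]]
print(f"Predicted chance of success: {model.predict_proba(new_claim)[0][1]:.0%}")

A firm could, in principle, rely on an estimate like this when deciding whether to accept instructions, which is exactly the ethical concern about litigants with an apparent 'low success potential' being screened out by a computer programme.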

However, adoption of AI may take longer than expected. In 2021, a report by the University of Oxford for the Solicitors Regulation Authority (SRA) found that 32.8% of 891 firms surveyed were not using legal technology and had no plans to do so. When asked about the use of data analytics with AI, 84.6% said they were not planning to use it. Firms that are slow to adopt technology in their practice may lose out to competitors that can offer faster and cheaper legal services without compromising on quality.

Understanding the future means understanding how AI is going to change the profession and adapting to that change, rather than resisting it. But this shouldn't be blind acceptance: careful consideration and weight should be given to the benefits and risks of using AI in practice. Used correctly, it can be a tool to build and improve a high-quality practice that benefits both the professional and the client.

James Southwick is a bar course graduate, currently seeking pupillage in the criminal field. He has a strong interest in developing areas of law such as blockchain and cybersecurity.

The post AI in law: Embracing change or facing extinction? appeared first on Legal Cheek.

The Willy Wonka experience: navigating misrepresentation in the age of AI https://www.legalcheek.com/lc-journal-posts/the-willy-wonka-experience-navigating-misrepresentation-in-the-age-of-ai/ https://www.legalcheek.com/lc-journal-posts/the-willy-wonka-experience-navigating-misrepresentation-in-the-age-of-ai/#respond Mon, 29 Jul 2024 06:54:13 +0000 https://www.legalcheek.com/?post_type=lc-journal-posts&p=207106 Strathclyde law student Emma Campbell explores how regulation can help protect consumers


Emma Campbell, law student at The University of Strathclyde, explores how regulation of AI can protect consumers against misrepresentation

Credit: Willy Chocolate Experience/Stuart Sinclair

The ‘Willy Chocolate Experience’ captured global attention earlier this year by promising an event where you could “indulge in a chocolate fantasy, like never before”, but instead delivering a bitter dose of reality. Visitors left feeling deflated by an event which shone a spotlight on the pitfalls of using Artificial Intelligence (AI) in promotional materials.

Inspired by the Roald Dahl book Charlie and the Chocolate Factory, the ‘Willy Chocolate Experience’ held in Glasgow was sold as a magical event complete with chocolate fountains and dancing Oompa Loompas (or, to avoid copyright infringement, “Wonkidoodles”). The promised magical chocolate factory was instead a sparsely decorated warehouse containing a bouncy castle and a lone bowl of jellybeans.

The advertisement for the event was created using AI, a technology that enables computers to reproduce human intelligence, where algorithms can utilise available data to produce media such as writing and images.

While AI can offer many benefits to its users, it also poses a risk to consumers who may not realise the content they are consuming is AI-generated. The AI-created illustrations used to advertise the event were in the typical AI style of bright colours, distorted subjects and spelling errors. While to some it may be obvious that the event’s creators used AI to advertise (who wouldn’t want to buy tickets to a “paradise of sweet treats”?), the problem of being unable to recognise AI-generated content is growing. The Office for National Statistics found that only one in six adults could “always or often” detect when they were using AI, which shows the dangers of AI use in consumer-facing settings.

Many attendees argued that the experience was a waste of time and money. However, AI poses risks of far more serious harms, which the UK government has quickly tried to regulate.

Influenced by the White House Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, the House of Lords proposed the Artificial Intelligence (Regulation) Bill. The bill proposes the establishment of a new regulatory body, the AI Authority, with various functions to address AI regulation, and requires the AI Authority to have regard to core principles such as safety, security and transparency.

The question of transparency in the use of AI has gained attention. Social networking companies such as TikTok and Meta have begun using “AI generated” labels to indicate when a post has been created using AI. This has proven effective, with warning labels reducing an individual’s likelihood of engaging with misleading information. These AI warning labels may help to facilitate the proposed AI Authority’s core principle of transparency in the business use of AI, but more needs to be done to ensure that all businesses have AI identification markers in place.


The fact that there is no legislative framework requiring social media and e-commerce companies to mark AI-generated content is concerning. This is illustrated by the e-commerce site Etsy, where the sale of AI-created products is regarded as a grey area. Etsy’s policies neither expressly allow nor prohibit AI-generated products from being sold on the site. This has prompted calls for change within the Etsy community, where some believe that allowing AI images to be sold as products puts users at risk of being scammed and strays too far from the company’s “handcrafted products” image.

This begs the question of whether the UK government is doing enough to safeguard against the dangers of AI. Rather than adopting blanket legislation, the Artificial Intelligence (Regulation) Bill takes a principles-based approach, which may cause confusion as to what companies are required to do to ensure transparency in their use of AI.

Adopting clear legislation requiring companies to label AI-generated content would undoubtedly help prevent consumers from being misled. Requiring such a label would protect consumer rights, as established in the Consumer Protection from Unfair Trading Regulations 2008, which prohibit misleading commercial practices that may influence the consumer. The AI-generated images of the Willy Chocolate Experience undoubtedly influenced attendees to purchase tickets. If AI-generated labels had been attached to these images, they might at least have alerted Wonka fans that the advertisement may not accurately reflect the actual experience.

However, when considering the principle of transparency, when should the law demand it? Clause five of the bill proposes that any person supplying a product or service would have to give customers “clear and unambiguous health warnings, labelling and opportunities to give or withhold informed consent in advance”. This short clause does not make clear what is expected of those supplying AI products to customers. In providing for clear labelling of AI-generated products, a more consumer-focused approach should be considered, rather than a purely innovation-led one. For example, the declaration of potential allergens on food labels is governed by a statutory instrument (The Food Labelling (Declaration of Allergens) (England) Regulations 2008), which sets out what must be declared on food packaging as well as how it must be formatted. Providing more clarity as to the format and application of AI labelling would encourage its effective use and increase transparency for the consumer.

AI is developing at an extraordinary rate, becoming more advanced and ‘human-like’. The government has stated that it is too early in AI’s development to introduce primary legislation, in contrast with the EU’s approach of introducing wide-ranging legislation across all member states. However, with such drastic developments happening at such a rate, regulation must be able to keep up. As artificial intelligence develops and becomes more human-like, identifying what is human and what is artificial will become increasingly difficult, which only increases the need for more concrete AI regulation. While an AI Authority may be useful in providing consensus on core principles of AI usage, the regulatory body should also be given powers to use its expertise to propose relevant and useful legislation and, in future, to amend legislation in response to changes in AI.

The need for concrete legislation will only increase as AI becomes more human-like and less obviously artificial. The Willy Chocolate Experience highlights the dangers of leveraging AI without proper safeguards. A more concrete framework, such as legislation, must be adopted to complement the creation of a regulatory body, in order to balance accountability and ethical AI use and to safeguard users and consumers.

Emma Campbell is a second-year LLB law student at the University of Strathclyde and a student legal advisor at the University of Strathclyde Law Clinic. 

The post The Willy Wonka experience: navigating misrepresentation in the age of AI appeared first on Legal Cheek.

Warfare technology: can the law really referee? https://www.legalcheek.com/lc-journal-posts/warfare-technology-can-the-law-really-referee/ https://www.legalcheek.com/lc-journal-posts/warfare-technology-can-the-law-really-referee/#comments Tue, 02 Jul 2024 07:45:20 +0000 https://www.legalcheek.com/?post_type=lc-journal-posts&p=206395 Harriet Hunter, law student at UCLan, explores AI's impact on weaponry and international humanitarian law


Harriet Hunter, law student at the University of Central Lancashire, explores the implications of AI in the development of weaponry and its effect on armed conflict in international humanitarian law


Artificial Intelligence (AI) is arguably the most rapidly emerging form of technology in modern society. Almost every sector and societal process has been or will be influenced by artificially intelligent technologies, and the military is no exception. AI has firmly earned its place as one of the most sought-after technologies available to countries for use in armed conflict, with many pushing to test the limits of autonomous weapons. The mainstream media has circulated many news articles on ‘killer robots’ and the potential risks to humanity; however, the reality of AI’s impact on the use of military-grade weaponry is not so transparent.

International humanitarian law (IHL) has been watching from the sidelines since the use of anti-personnel autonomous mines back in the 1940s, closely monitoring each country’s advances in technology and responding to the after-effects of their use.

IHL exists to protect civilians not directly involved in conflict, and to restrict and control aspects of warfare. However, autonomous weapons systems are developing faster than the law, and many legal critics are concerned that humanity might suffer at the hands of a few. But, in a politically bound marketplace, is there any place for such laws? And if they were to be implemented, what would they look like, and who would be held accountable?

Autonomous weapons and AI – a killer combination?

Autonomous weapons have been at the forefront of military technology since the early 1900s, playing a large part in major conflicts such as the Gulf War. Most notably, the first use of autonomous weapons came in the form of anti-personnel autonomous mines, which are set off by sensors, with no operator involvement in deciding who is killed, and which inevitably caused significant loss of civilian life. This led to their prohibition under the Ottawa Treaty 1997. However, autonomous weapon usage had only just begun.

In the 1970s autonomous submarines were developed and used by the US Navy, a technology which was subsequently sold to multiple other technologically advanced countries. Since the deployment of more advanced AI, the sophistication of the weapons countries are able to develop has led to a new term being coined: ‘LAWS’. Lethal Autonomous Weapons Systems (LAWS) are weapons which use advanced AI technologies to identify targets and deploy with little to no human involvement.

LAWS are, in academic research, split into three ‘levels of autonomy’, each characterised by the amount of operator involvement required in their deployment. The first level is ‘supervised autonomous weapons’, otherwise known as ‘human on the loop’: these weapons allow human intervention to terminate engagement. The second level is ‘semi-autonomous weapons’, or ‘human in the loop’: weapons that, once activated, will engage pre-set targets. The third level is ‘fully autonomous weapons’, or ‘human out of the loop’, where the weapons system operates with no operator involvement whatsoever.

LAWS rely on advances in AI to become more accurate. Currently, there are multiple LAWS either in use or in development, including:

  • The Uran 9 Tank, developed by Russia, which can identify targets and deploy without any operator involvement.
  • The Taranis unmanned combat air vehicle, being developed in the UK by BAE Systems: an unmanned jet which uses AI programmes to attack and destroy large areas of land with very minimal programming.

The deployment of AI within the military has been far-reaching, and, like the autonomous weapons themselves, the artificial intelligence behind them is increasingly complex. Certain aspects of AI have been utilised more than others. For example, facial recognition can be used at scale to identify targets within a crowd. Alongside that, certain weapons carry technology that can calculate the chance of hitting a target, and of hitting it a second time by tracking its movements; this has been used in drones in particular to track targets as they move from building to building.

International humanitarian law — the silent bystander?

IHL is the body of law which applies during an armed conflict. It has a broad extra-territorial reach and aims to protect those not involved in the conflict, as well as to restrict warfare and military tactics. IHL has four basic tenets: ensuring the distinction between civilian and military targets; proportionality (ensuring that any military advance is balanced against civilian life and military gain); ensuring precautions in attack are followed; and the principle of ‘humanity’. IHL closely monitors the progress of the weapons that countries are beginning to use and develop, and (in theory) considers how the use of these weapons fits within its principles. However, the law surrounding LAWS is currently vague. With the rise of LAWS, IHL is having to adapt and tighten restrictions on certain systems.


One of its main concerns surrounds the rule of distinction. It has been argued that weapons which are semi- or fully autonomous (human in the loop and human out of the loop systems) are unable to distinguish between civilian and military bodies. This means that innocent lives could be taken through the mistake of an autonomous system. As mentioned previously, autonomous weapons are not a new concept: following the use of anti-personnel autonomous mines in the 1900s, they were restricted because the mines could not distinguish between civilians and military personnel stepping onto them. IHL used the rule of distinction to propose a ban, which was signed by 128 nations in the Ottawa Treaty 1997.

The Martens Clause, a clause of the Geneva Conventions, aims to control the idea that ‘anything not explicitly regulated is unregulated’. IHL is required to control, and to a certain extent pre-empt, the development of weapons which directly violate certain aspects of the law. An example of this is the ban on blinding laser weapons agreed in 1995: ‘laser blinding’ was seen as a form of torture which directly violates a protected human right, the right not to be tortured. At the time, laser blinding weapons were not in use in armed conflict, but the ethical implications of these weapons for prisoners of war were a concern to IHL.

But is there a fair, legal solution?

Unfortunately, the chances are slim. More economically developed countries can purchase and navigate the political waters of the lethal autonomous weapons systems market — whilst less economically developed countries are unable to purchase these technologies.

An international ban on all LAWS has been called for, with legal critics arguing that IHL cannot fulfil its aims to the highest standard while allowing the existence, development and use of LAWS. The main issue intertwining AI, LAWS and IHL is the question: should machines be trusted to make life or death decisions?

Even with advanced facial recognition technology, critics are calling for a ban: no technology is without its flaws, so how can we assume that systems such as facial recognition are fully accurate? The use of fully autonomous (human out of the loop) weapons, where a human cannot at any point override the technology, means that civilians are at risk. It is argued that this completely breaches the principles of IHL.

Some legal scholars have argued that the use of LAWS should be governed by social policy: a ‘pre-emptive governing’ of countries that use LAWS. This proposed system would allow and assist IHL in regulating weapons at the development stage, which, it is argued, is ‘critical’ to avoiding a ‘fallout of LAWS’ and preventing a humanitarian crisis. Such a policy would hold developers to account prior to any warfare. However, it could be argued that this falls outside the jurisdiction of IHL, which applies only once conflict has begun; this leads to the larger debate over what the jurisdiction of IHL is, compared with what it should be.

Perhaps IHL is delaying the implementation of potentially life-saving laws because powerful countries assert their influence in decision-making; these countries have the influence to block changes in international law where the ‘best interests’ of humanity do not align with their own military advances.

Such countries, like the UK, are taking a ‘pro-innovation’ approach to AI in weaponry, meaning they are generally opposed to restrictions which could halt progress in the making. However, it has rightly been noted that these ‘advanced technologies’ in the hands of terrorist organisations (which would not consider themselves bound by IHL) would have disastrous consequences, and it is argued that a complete ban on LAWS could therefore lead to more violence than no ban at all.

Ultimately…

AI is advancing, and with it autonomous weapons systems too. Weapons are becoming more advantageous to the military, with technology becoming more accurate and more precise. International humanitarian law, continually influenced by political stances and the economic interests of countries, is slowly attempting to build and structure horizontal legislation. However, law and technology are not developing at comparable paces, which concerns many legal critics. The question remains: is the law attempting to slow an inevitable victory?

Harriet Hunter is a first year LLB (Hons) student at the University of Central Lancashire, who has a keen interest in criminal law, and laws surrounding technology; particularly AI.

The Legal Cheek Journal is sponsored by LPC Law.

The post Warfare technology: can the law really referee? appeared first on Legal Cheek.

Contracts on Monday, machine learning on Tuesday: The future of the LLB https://www.legalcheek.com/lc-journal-posts/contracts-on-monday-machine-learning-on-tuesday-the-future-of-the-llb/ https://www.legalcheek.com/lc-journal-posts/contracts-on-monday-machine-learning-on-tuesday-the-future-of-the-llb/#respond Tue, 07 May 2024 07:52:20 +0000 https://www.legalcheek.com/?post_type=lc-journal-posts&p=204490 Université Toulouse Capitole LLM student Sean Doig examines technology's impact on legal education and training


Université Toulouse Capitole LLM student Sean Doig examines technology’s impact on legal education and training


No profession is immune to the intrusion of disruptive technologies. Inevitably, the legal profession is no exception, and the practice of law and the administration of justice have grown incredibly reliant on technology.

The integration of new legal technologies into legal services is driven by the incentive to provide more efficient, cost effective, and accessible services to its clients. Indeed, modern lawyers are implementing paperless offices and “cloud-based practice-management systems, starting up virtual law practices, and fending off challenges from document preparation services like Legal Zoom.”

Such profound change has even shaped new specialisms within the legal profession, including the so-called ‘legal technologists’: skilled individuals who can “bridge the gap between law and technology.” While the name suggests a ‘legally-minded coder’, the reality is that the majority of professional legal technologists lack training or experience in both the practice of law and in engineering and technology management.

Legal technologists occupy a lucrative and growing niche, and these professionals cannot afford to lack experience and knowledge of the practice of law if they are to develop sustainable legal technologies to assist the delivery of services to clients.

Indeed, disruptive technologies are constantly evolving, and with the rapid advancement of Artificial Intelligence (AI) and the Metaverse, there is a need for immediate change to the training of the next generations of legal minds. While this sort of fearmongering around obsolete skills and doomed professions is relatively commonplace among CEOs of AI companies, the need for upskilling and adaptability among lawyers has been reiterated by sceptical academics and legal professionals for years.

As early as the 1950s, dictation machines and typewriters changed the working practices of lawyers and legal secretaries. In the 1970s, law firms began using computers and LexisNexis, an online information service, which changed the way legal teams performed research to prepare their cases. One of the better-known ‘doomsayers’ is Richard Susskind, whose boldly, although perhaps rather prematurely, titled book The End of Lawyers was published in 2008, well before the era of ‘Suits’!

Despite Susskind’s earlier predictions of the impending end of lawyers, his subsequent book, Tomorrow’s Lawyers, goes beyond the common view that technology will simply remove jobs, arguing instead that technology will assist the work of professionals and that more jobs will involve applying technological solutions to produce cost-efficient outcomes. Although technology is developing rapidly to assist professionals, Susskind identifies a lack of enthusiasm among law firms to evolve their traditional practices. Where firms are enthusiastic about incorporating technology, it is usually because AI or other technologies can boost profits and lower operating costs, rather than because they assist the lawyer and deliver for the client.

The incentive for law firms to incorporate technology into their working practices is purely economic and fear-driven: firms that do not incorporate technology will lose clients to competitors that have efficient technological means at their disposal. There is little credible advice as to how firms can effectively alter their business models to integrate technology. After all, the billable hour is the crux of a law firm’s model, and with AI speeding up historically slow and tedious work, its value is diminishing.

Without dwelling too much on the fundamentals of capitalism and its effectiveness as an economic system, it is important to note that technology companies such as OpenAI and Meta are mostly funded and motivated by shareholders. Technology is developed rapidly in order to produce results and dividends for those shareholders. For the product to perform well economically, there is a rush to outdo competitors and to be disruptive in the market. If successful, the value of the company increases, the value of its shares increases, and the company gains more equity with which to continue to grow.

This means that technology is advancing quickly and outpacing the technical skills of professionals. The cost of new technologies factors in the markup that tech companies seek in order to satisfy their shareholders and fund further research and development (R&D). As Susskind notes, the durability of small law firms will be put into question in the 2020s against the rise of major commercial law firms able to afford to invest in competitive new technologies.

What does this mean for law students? New skills are required to enter the new technological workforce, and graduates who possess that skillset will be more in demand than the rest of their cohort. As a result, legal education must equally evolve to adequately prepare law students for working in technological law firms. As Susskind highlights, “law is taught as it was in the 1970’s by professors who have little insight into or interest in the changing legal marketplace”, and graduates are ill-prepared for the technological legal work that their employers expect of them.


It should be noted that some graduate and post-graduate courses do exist to facilitate the teaching of some of the technological skills to prepare individuals for the new workplace. Indeed, for example, there is a simulation currently in use in a postgraduate professional course called the Diploma in Legal Practice at the Glasgow Graduate School of Law. Nevertheless, the idea here is that the burden should be placed on law schools and that technological skills should be taught at the earliest stage in order to best prepare graduates for the workplace of tomorrow.

Although it is argued that the original purpose of the LLB is to teach black letter law, and that the skills for legal practice should be left to post-graduate legal training, this neglects those law students who do not wish to pursue traditional post-graduate legal education, opting instead for an alternative career path in law.

In order for the value of an LLB to be upheld, it must adapt to meet the growing demands of the industry it serves. Its sanctity and popularity rest on its ability to be of use to any student seeking the best possible skills and, therefore, prospects in the job market. If the LLB is to survive, it must itself compete with more attractive courses such as ‘Computer Science’, ‘Data Analysis’ and ‘Engineering’. It is not enough for law professors to continue to assume, falsely, that “students already get it”, or that if graduates work for a law firm then critical technology choices have already been determined, “including case management software, research databases, website design, and policies on client communication.”

Furthermore, firms are “increasingly unwilling to provide training to incoming associates” and seek those graduates who already possess background knowledge. Undoubtedly, technology skills will elevate students’ employability, and those with tech skills will be in high demand by traditional law firms and by tech companies that service the legal industry.

While some law schools have been introducing “Legal Technology” or “Law and Technology” modules into their curriculums, these are arguably insufficient to cover the array of specific skills that need to be taught, focusing instead merely on the impact of technology on the legal sector. The lack of innovation in law schools is blamed on a lack of imagination on the part of law professors and their institutions, which are fearful of experimenting with the status quo of their syllabuses. Institutions with the courage to experiment with their curriculum and teach skills desirable in the legal market will attract, and better serve, a greater number of students for the new world of work.

Perhaps the most elaborate attempt to revolutionise legal education is the theoretical establishment of an MIT School of Law by author Daniel Katz. ‘MIT Law’ would be an institution that delivered a polytechnic legal education; focusing on “the intersection of substantive law, process engineering, computer science and artificial intelligence, design thinking, analytics, and entrepreneurship.” The institution would produce a new kind of lawyer; one that possessed the necessary skills to thrive in legal practice in the 21st century. With science, technology, engineering, and mathematics (“STEM”) jobs dominating the job market, there is an overlap into the legal market; giving rise to a prerequisite or functional necessity for lawyers to have technical expertise to solve traditional legal problems that are interwoven with developments in science and technology.

This hypothetical law school may seem far-fetched, but its underlying principle should be adapted to the modern LLB. Indeed, the curriculum should choose its courses based on an evaluation of the future market for legal services and adapt to the disruptive technologies becoming commonplace in the workplace. A hybrid of traditional law courses, such as contract law, with more technical courses, such as machine learning or e-discovery, should become the new normal to ensure the effective delivery of the best LLB of the future. Each course would be carefully evaluated in light of the current and future legal labour market to ensure that students are given the best possible chances after leaving the institution, whether they go on to post-graduate legal studies or not.

Sean Doig is an LLM student at Université Toulouse Capitole specialising in International Economic Law. He is currently working on his master’s thesis, and displays a particular interest in international law, technology and dispute resolution.

The Legal Cheek Journal is sponsored by LPC Law.

The post Contracts on Monday, machine learning on Tuesday: The future of the LLB appeared first on Legal Cheek.

5 Ways India’s Digital Personal Data Protection Act 2023 differs from Europe’s GDPR https://www.legalcheek.com/lc-journal-posts/5-ways-indias-digital-personal-data-protection-act-2023-differs-from-europes-gdpr/ https://www.legalcheek.com/lc-journal-posts/5-ways-indias-digital-personal-data-protection-act-2023-differs-from-europes-gdpr/#respond Mon, 05 Feb 2024 08:57:20 +0000 https://www.legalcheek.com/?post_type=lc-journal-posts&p=199715 Mayank Batavia takes a deep dive into data protection mechanisms in India and Europe


Mayank Batavia takes a deep dive into data protection mechanisms in India and Europe


In August 2023, the Digital Personal Data Protection Bill, 2023 was passed by both houses of the Indian Parliament and, on receiving presidential assent, became the Digital Personal Data Protection Act, 2023, making it legally enforceable.

Elsewhere, data privacy laws of varying complexity have been introduced in different countries over time. Among them, the European Union’s General Data Protection Regulation (GDPR) is considered both comprehensive and strict.

Before comparing India’s Digital Personal Data Protection Act, 2023 and the GDPR, let’s take a moment to understand why data privacy is both important and complex.

The complexity of data explosion

Less than a century ago, important data was printed on paper and stored in books and bound documents. You needed physical space, so if you wanted to store five books instead of one, you’d need five times the space.

Digital data storage changed everything.

Dropbox estimates that about 6.5 million pages of documents can be stored on a 1TB hard drive, a storage device about one and a half times the size of your palm. By the same token, even a standard smartphone can store over 25 movies in HD.
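As a rough, back-of-envelope check of those figures (the per-page, per-film and phone-storage sizes below are assumptions for illustration, not Dropbox’s or any manufacturer’s published numbers), a short Python sketch:

```python
# Back-of-envelope check of the storage figures above. The per-page and
# per-film sizes are assumptions for illustration, not published numbers.
TB = 1_000_000_000_000            # 1 TB in bytes
page_size = 150_000               # assume ~150 KB per document page
hd_movie = 5_000_000_000          # assume ~5 GB per HD film
phone_storage = 128_000_000_000   # assume a typical 128 GB smartphone

print(TB // page_size)            # ~6.7 million pages fit on a 1 TB drive
print(phone_storage // hd_movie)  # ~25 HD films fit on a standard smartphone
```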

And because such data storage is easily available to everyone, from governments to organizations and institutions to individuals, it becomes very difficult for a legal body to regulate data protection, storage and sharing.

About GDPR

The European Union brought the GDPR into effect in May 2018. You are expected to comply with the GDPR if you store, process, transfer, or access data of residents of the member-states (27 EU countries and 3 EFTA countries).

It is a forerunner to many privacy regulations, including India’s DPDP and the CCPA (California Consumer Privacy Act). The GDPR’s requirements are stringent and the cost of non-compliance is stiff. For these reasons, the GDPR has become a model for other countries.

About India’s DPDP

India’s Digital Personal Data Protection Act (DPDP) came into effect half a decade after the GDPR. This gave the DPDP the advantage of studying the GDPR and other regulations.

Two key terms

It will help to keep in mind what the below two terms mean for these two regulations:

Data Controller: The natural or legal person that decides why and how the personal data should be processed. The DPDP uses the term Data Fiduciary instead of Data Controller.

Data Processor: The natural or legal person that processes personal data on behalf of the Data Controller.

How is India’s DPDP different from the GDPR?

The EEA and India operate under very different social, political, historical, and even commercial parameters. So, it’s only natural that their privacy laws have some differences.

For example, Article 9 of the GDPR sets out clear categories of data that cannot be processed. Processing data with the objective of, say, determining the political beliefs or sexual orientation of a person is expressly forbidden. The DPDP doesn’t lay out these terms.

Here are the key differences between the Digital Personal Data Protection Act and the GDPR.

1. The enshrined principles

GDPR: The GDPR takes a defined route to establishing what data privacy is and what its guiding principles are. The seven principles that lie behind the GDPR are lawfulness, fairness, and transparency; purpose limitation; data minimization; accuracy; storage limitation; integrity and confidentiality; and accountability.

DPDP: The Act does not explicitly list out its principles as the GDPR does. However, the report of the Justice B N Srikrishna Committee, appointed to examine the requirements, validity, scope, and other aspects of data protection, mentions two guiding factors that shaped the current law.

The first emerges from the Directive Principles of State Policy, which say that the state must act as a facilitator of human progress. Hence, the DPDP is drafted in a way that encourages the growth of private enterprise without compromising an individual’s right to privacy.


The second is a self-disciplinary idea for the state: it admits that the state is “prone to excess”. Therefore, it is important to empower individuals with fundamental rights, which they may use against the state if it begins to overreach. They may also be used if private enterprises attempt to abuse the freedom the state grants them.

The data protection framework has been built so that the right to privacy, now a fundamental right in India, gets legal endorsement. This framework offers protection to the individual against both state and non-state actors.

2. How the data is processed

GDPR: If a piece of data is a part of a filing system and is personal in nature, the GDPR will apply to it. Whether it has been processed mechanically, manually, or electronically is immaterial for the GDPR.

DPDP: Against that, the DPDP is very specific. It clearly states that the processing of the data needs to be “…wholly or partly automated operation…”.

There could be several reasons why the DPDP limits the definition of processing in this way. One explanation is that if the scope had included all sorts of processing, the law would have been too complex and mammoth to enforce, thereby defeating its purpose.

The Indian government is pushing for digitalization and alongside that, Indian consumers are also showing a clear change in the way they share their personal information. So, in the next five years or so, a large chunk of data is set to be digitized anyway.

3. Data Protection Boards and enforcement

As technology lets us collect an increasingly wider variety of data, what is personal data isn’t always easy to define. That adds another level of complexity in enforcement of data privacy regulations.

For instance, role email addresses (ones like sales@, admin@, or billing@) are rarely used to sign up for newsletters, because they are team addresses, and they are often publicly displayed on websites. Yet marketers who indiscriminately spam role addresses still need to be kept in check.

The GDPR and the DPDP have built elaborate mechanisms to ensure that they protect the privacy of people without making things unduly difficult for businesses.

GDPR: The GDPR brought into existence the European Data Protection Board (EDPB). The EU member states have designated independent, supervisory public authorities. Each of these supervisory authorities is the point of contact for the data controller or processor within each member-state. However, it’s the EDPB that will ensure that the enforcement is consistent across the EU and beyond.

There are national DPAs (Data Protection Authorities) which work with national courts to enforce the GDPR. If more than one member state is involved, the EDPB will step in. That makes the EDPB a single-window enforcement mechanism.

DPDP: The DPDP Act has proposed a board called the Data Protection Board of India (DPBI). (As of 27 November 2023, the DPBI had not yet been formed.) The DPBI will have a chairperson, board members, and employees.

Among other things, the DPBI differs from the EDPB (of the EU) in that the former doesn’t hold powers to formulate any rules, while the latter does.

The DPBI receives complaints, reviews them to determine whether they merit inquiry, and passes interim and final orders. It will work with other law enforcement agencies if required, which means it can cast a wide net. Besides, appeals from the DPBI lie to the Telecom Disputes Settlement and Appellate Tribunal (TDSAT), and appeals from the TDSAT may be taken to the Supreme Court.

4. Consent and responsibility

GDPR: The GDPR has a long list of lawful bases for processing data. That means the consent for data processing is granular and detailed. The GDPR requires that you display a notice at the time of collecting personal data.

The onus of compliance is on the data controllers as well as the data processors, depending upon the nature of compliance or breach.

DPDP: It appears that the contents of the DPDP notice are relatively limited – nature of data, purpose of processing that data, guidelines for grievance redressal and a few other things. Against that, the GDPR notice is much more detailed.

Unlike the GDPR, the DPDP holds the data fiduciary responsible even for the data processors they engage. That means that in case of a breach of compliance, the DPDP would hold the data fiduciary responsible.

There are two likely reasons why the DPDP made this stipulation, instead of allowing a joint-and-several form of liability. One, it was the data fiduciary that defined the purpose of collecting and processing data, and will likely remain the sole beneficiary of the processed data (The data processor typically offers a service to process the data, but is unlikely to gain anything beyond the processing fees). Hence, the onus must lie with the data fiduciary.

Two, because of this stipulation, the data fiduciary will make sure that all the security measures it has in place are proportionately reflected in the measures the processor takes. This keeps the data fiduciary alert to the standards of every entity in its supply chain.

5. Children’s data

While both the EU and India actively seek to protect their children, there are some divergences in how this is approached.

Culturally, people in India look at the family – and not the individual – as the unit of society. As a result, some western conventions of privacy don’t apply. For instance, many children aren’t assigned a separate room of their own. Even when a child does have a separate room, they seldom keep it locked, and family members move freely in and out of one another’s rooms.

The average Indian parent engages their children in a way that’s different from the way an average European or American parent will. The Indian parent is more hands-on and involved: they believe sharing important information within the family is key to bonding, well-being, and even overall safety.

With all this context, it’s not unusual to routinely share account passwords within the family. That blurs the lines of privacy in the familial context. In the European Union, this would be extremely rare.

Finally, the legislature and the judiciary in India take cognizance of the unique relationship between parents and their offspring (e.g. Maintenance and Welfare of Parents and Senior Citizens Act, 2007). All this, in a small way, might partially account for some of the differences between the GDPR and the DPDP.

GDPR: Article 57 specifically requires the supervisory authorities of member nations to pay attention to “activities addressed specifically to children” while promoting public awareness and understanding.

The GDPR sets a default age limit of 16 years for the definition of a child, though member states may lower this to no less than 13. That means a person below the applicable age qualifies as a child, so parental consent comes into the picture for processing their data.

There is, however, an interesting exception mentioned in Recital 38. It clearly states that when providing “preventive or counseling services directly to a child”, the consent from the guardian or parent is not necessary.

DPDP: A person who has not attained the age of 18 years is defined as a child under the DPDP. Before processing the data of children, a verifiable consent from parents (or legal guardians) is required.

One thing that’s not entirely clear is why, for the purpose of consent, the DPDP has clubbed people with disabilities with children. Among other reasons, it may be due to the fact that both groups receive considerable support from parents.

Another interesting feature of the DPDP is that it clearly prohibits a Data Fiduciary from processing data that can “cause any detrimental effect on the well-being of a child”. The Data Fiduciary is also clearly prohibited from tracking or monitoring children or serving targeted advertising directed at children.

To some extent, it places a certain onus on the Data Fiduciary. That’s because children are today among the heaviest users of social media and digital platforms. As a result, an organisation may already be collecting their behavioural data digitally and serving ads accordingly. In case of a dispute or disagreement, it could be difficult to draw the lines.

Concluding remarks

Both the DPDP and the GDPR reflect a considered, mature, and yet strict approach to protecting the privacy and the data of their people.

And yet it’s important to remember that the two sets of regulations are aimed at two different geographies and two different populations. While compliance with one will make compliance with the other easier, each contains provisions unique to itself.

In a world where data is shared, stored, and processed more widely than ever before, organizations can profitably leverage data while remaining compliant with regulations.

Mayank Batavia works in the tech industry within the email organisation space. 

The post 5 Ways India’s Digital Personal Data Protection Act 2023 differs from Europe’s GDPR appeared first on Legal Cheek.

Why we need to take a closer look at ‘loot boxes’ https://www.legalcheek.com/lc-journal-posts/why-we-need-to-take-a-closer-look-at-loot-boxes/ https://www.legalcheek.com/lc-journal-posts/why-we-need-to-take-a-closer-look-at-loot-boxes/#respond Tue, 19 Sep 2023 07:47:30 +0000 https://www.legalcheek.com/?post_type=lc-journal-posts&p=193056 Aspiring barrister Georgia Keeton-Williams on why more needs to be done to protect children from in-game currencies


Aspiring barrister Georgia Keeton-Williams delves into why more needs to be done to protect children from in-game currencies

The pandemic thrust video games into mainstream popularity. They provided an escape from furloughed boredom and gave a platform for people desperately attempting to connect with people outside their bubble. Ultrafast play and hyper-realistic graphics are worlds away from the OG video game pong, much like, as you will see, the current money-making strategies.

With uSwitch, a broadband comparison site, estimating that 91% of children aged 3-15 play video games, the issue arises when these revenue-generating strategies reach children’s consoles.

The ‘loot boxes’ problem

One of these strategies is ‘loot boxes’. How loot boxes appear changes depending on the game, but they are often depicted as in-game chests (Coinmaster) or card-pack look-alikes (FIFA). Sometimes, they are something completely outlandish, such as Fortnite’s ‘loot llamas’ (a loot box shaped like a pinata llama). Simply put, a loot box is just a container holding one or more random items which a player will receive. These can be bought with in-game or real-world money.

When opening a loot box, a player does not know what they will get. They could gain a heavily sought-after item with high value or a repeat item that is basically worthless. Some in-game items are even being sold on secondary markets for real-world money. For example, the most expensive knife on SkinPort, a marketplace for in-game items, is listed at over £44,000 (though the site suggests its actual worth is circa £1,000)! Selling happens even though trading is usually against both the game’s and the platform’s terms and conditions.
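To make the mechanic concrete, here is a minimal Python sketch of the weighted random draw that sits behind a loot box; the item names and drop rates are invented for illustration and are not taken from any real game:

```python
# Minimal sketch of the random-draw mechanic behind a loot box.
# Item names and drop rates are invented for illustration.
import random

DROP_TABLE = {
    "common sticker": 0.80,
    "rare weapon skin": 0.19,
    "ultra-rare knife": 0.01,
}

def open_loot_box() -> str:
    """Draw one item according to the weighted drop rates - pure chance."""
    items = list(DROP_TABLE)
    weights = list(DROP_TABLE.values())
    return random.choices(items, weights=weights, k=1)[0]

# A player paying real money has no way of knowing which outcome they will get.
print([open_loot_box() for _ in range(5)])
```

The draw is determined entirely by chance, which is why the comparison with roulette-style gambling mechanics arises in the first place.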

The legal response

Loot boxes sometimes use roulette-style wheels to show the player what they have ‘won’, often accompanied by flashing lights and sound effects. Importantly, roulette wheels are a ‘game of chance’ and are classed as gambling under s6 of the Gambling Act 2005. It was this similarity that led the Gambling Commission, the UK’s gambling regulator, to assess whether loot boxes could be a type of gambling.

The Commission found that most loot boxes could not be considered gambling under existing law. The problem was not the way the mechanism operated but, rather, the need for a qualifying prize under section 6(5). A prize needs to have ‘money’s worth’ or real-world value. The conclusion was that, as there was usually no legitimate way to cash out loot box rewards, loot box prizes had no real-world worth and could not be considered gambling. The Commission did explain further that if there were a legitimate way to cash out, those loot boxes would likely fall under the gambling label and it would take steps accordingly.

This meant that most loot boxes could continue to be sold to children.

The government has recently reviewed whether loot boxes should be classed as gambling. The call for evidence received 32,000 public responses, a rapid evidence assessment of available psychological studies and 50 direct submissions from sources such as academics and industry experts. The government’s response to this evidence can be accessed here.

Ultimately, the government’s views echoed those of the Commission – most loot boxes had no monetary value. The report decided not to amend the law to bring loot boxes under the umbrella of gambling, stating that it would be premature given the lack of definitive information about their potential harms. It was argued that doing so may risk unintended consequences — children beginning to use adult accounts was one example — or unfairly hinder the video game industry’s freedom. Like the Gambling Commission, the report did recognise that where loot boxes can be cashed out legitimately they may be in breach of existing gambling law, but it trusts the Gambling Commission to take action when this occurs.


Digging deeper

It is true that the rapid evidence assessment found a lack of available research, but InGAME, the report’s publisher, found that this gap meant that “a cautious approach to regulation of loot boxes is important.” However, the publisher went on to note that this “does not mean that nothing can or should be done”. The assessment actually advocates for enhanced protections and encourages ethical game design, where developers prioritise safety within the design process. An example of this is age ratings for games, or for loot boxes specifically.

Enhanced protections are particularly important when we consider loot boxes as a new product. As they are often available to buy with either in-game or real-world money, many of the existing advertising restrictions on in-game purchases do not apply. So, if a player gets an unwanted reward, a game can display messages such as “you nearly had it!” when the outcome was purely chance-dependent, or “you’ll get it next time!” to promote a purchase when in reality there is no guarantee.

This do-nothing approach has been confirmed in the latest gambling reform policy paper. This means that a change to the law is not imminent. Whether this approach is correct remains to be seen. It is, however, concerning that the UK has chosen to allow the sale of loot boxes to children when so many other countries are taking steps to restrict their sale. They are not completely harmless, as InGAME highlighted, and there are some studies starting to emerge that link loot box expenditure to problematic gambling.

Many people, including the Children’s Commissioner, the House of Lords and the House of Commons Digital, Culture, Media and Sport Committee, advocated for loot boxes not to be sold to children. In fact, the House of Lords Committee went as far as calling for the use of section 6(6) of the Gambling Act 2005 to bring loot boxes under gambling law until a permanent solution could be found. While this shows the sentiment, the solution may be legally flawed: section 6(6) allows the Secretary of State to classify something as a game of chance, but, as mentioned above, the issue with loot boxes is that most are unable to satisfy the prize element of the Act, rather than the ‘game of chance’ element.

Until more research into the harms of loot boxes is conducted, we cannot know whether the government decision to leave loot boxes alone was correct. What is apparent is that there is huge potential for a disconnect between UK law and technological advancements, if the loot boxes issue is left unmonitored.

Georgia Keeton-Williams is an aspiring barrister and first-class law graduate from Northumbria University.

The post Why we need to take a closer look at ‘loot boxes’ appeared first on Legal Cheek.

Navigating bias in generative AI https://www.legalcheek.com/lc-journal-posts/navigating-bias-in-generative-ai/ https://www.legalcheek.com/lc-journal-posts/navigating-bias-in-generative-ai/#comments Mon, 11 Sep 2023 08:22:18 +0000 https://www.legalcheek.com/?post_type=lc-journal-posts&p=192724 Nottingham PPE student Charlie Downey looks at the challenges around artificial intelligence


Nottingham PPE student Charlie Downey looks at the challenges around artificial intelligence

While the world lauds the latest developments in artificial intelligence (AI) and students celebrate never having to write an essay again without the aid of ChatGPT, beneath the surface, real concerns are developing around the use of generative AI. One of the biggest is the potential for bias. This specific concern was outlined by Nayeem Syed, senior legal director of technology at London Stock Exchange Group (LSEG), who succinctly warned, “unless consciously addressed, AI will mirror unconscious bias”.

 In terms of formal legislation, AI regulation differs greatly around the world. While the UK has adopted a ‘pro-innovation approach’, there still remain concerns around bias and misinformation.

Elsewhere, the recently agreed draft of the European Union Artificial Intelligence Act (EU AI Act) is set to become the first comprehensive regulation of artificial intelligence. It is expected to set the standard for legislation around the world, much as occurred with the EU’s General Data Protection Regulation (GDPR). The AI Act incorporates principles that should help reduce bias, such as training data governance, human oversight and transparency.

In order to really understand the potential for bias in AI, we need to consider its origin. After all, how can an AI language model exhibit the same bias as humans? The answer is simple. Generative AI language models, such as OpenAI’s prominent ChatGPT chatbot, are only as bias-free as the data they are trained on.

Why should we care?

Broadly speaking, the process for training AI models is straightforward. AI models learn from diverse text data collected from different sources. The text is split into smaller parts, and the model learns to predict what comes next based on what came before, correcting itself when it gets a prediction wrong. While efforts are made to minimise bias, if the historical data that AI is learning from contains biases, say, systemic inequalities present in the legal system, then AI can inadvertently learn and reproduce these biases in its responses.
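As a rough illustration of that ‘predict what comes next’ idea, here is a minimal Python sketch of next-word prediction learned from raw text. The toy corpus is invented, and real generative models use neural networks trained on vastly more data, but the underlying principle that the model reproduces the patterns in its training text is the same:

```python
# Minimal sketch of next-word prediction learned from raw text.
# Toy corpus; real models use neural networks and vastly more data.
from collections import Counter, defaultdict

corpus = (
    "the senior partner reviewed the contract . "
    "the senior partner approved the merger . "
    "the junior associate reviewed the bundle ."
)

tokens = corpus.split()  # split the text into smaller parts (here, words)

# Count which word follows which: this table is the model's entire "knowledge".
following = defaultdict(Counter)
for current_word, next_word in zip(tokens, tokens[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Predict the continuation most often seen in the training text."""
    return following[word].most_common(1)[0][0]

print(predict_next("senior"))  # "partner": the only pattern the data contains
print(predict_next("the"))     # "senior": the most frequent continuation wins
```

Because the toy corpus only ever pairs “senior” with “partner”, that is all the sketch can ever predict: a small-scale version of how patterns, and biases, in training text become patterns in output.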

In the legal profession, the ramifications of these biases are particularly significant. There are numerous general biases AI may display related to ethnicity, gender and stereotyping, learned from historical texts and data sources. But in a legal context, imagine the potential damage of an AI system that generated its responses in a manner which unfairly favours certain demographics, thereby reinforcing existing inequalities.

One response to this argument is that, largely, no one is advocating for the use of AI to build entire arguments and generate precedent, at least not with generative AI as it exists in its current form. In fact, this has been shown to be comically ineffective.

So how serious a threat does the potential for bias actually pose in more realistic, conservative uses of generative AI in the legal profession? Aside from general research and document review tasks, two of the most commonly proposed, and currently implemented, uses for AI in law firms are client response chatbots and predictive analytics.

In an article for Forbes, Raquel Gomes, Founder & CEO of Stafi – a virtual assistant services company – discusses the many benefits of implementing automated chatbots in the legal industry. These include freeing up lawyers’ time, reducing costs and providing 24/7 instant client service on straightforward concerns or queries.

Likewise, predictive analytics can help a solicitor in building a negotiation or trial strategy. In the case of client service chatbots, the danger resulting from biases in the training data is broadly limited to inadvertently providing clients with inaccurate or biased information. As far as predictive analysis is concerned, however, the potential ramifications are much wider and more complex.


An example

Let’s consider a fictional case of an intellectual property lawyer representing a small start-up, who wants to use predictive analysis to help in her patent infringement dispute.

Eager for an edge, she turns to the latest AI revelation, feeding it an abundance of past cases. However, unknown to her, the AI had an affinity for favouring tech giants over smaller innovators as its learning had been shaped by biased data that leaned heavily towards established corporations, skewing its perspective and producing distorted predictions.

As a result, the solicitor believed her case to be weaker than it actually was. Consequently, this misconception about her case’s strength led her to adopt a more cautious approach in negotiations and accept a worse settlement. She hesitated to present certain arguments, undermining her ability to leverage her case’s merits effectively. The AI’s biased predictions thus unwittingly hindered her ability to fully advocate for her client.

Obviously, this is a vastly oversimplified portrayal of the potential dangers of AI bias in predictive analysis. However, it can be seen that even a more subtle bias could have severe consequences, especially in the context of criminal trials where the learning data could be skewed by historical demographic bias in the justice system.

The path forward

 It’s clear that AI is here to stay. So how do we mitigate these bias problems and improve its use? The first, and most obvious, answer is to improve the training data. This can help reduce one of the most common pitfalls of AI: overgeneralisation.

If an AI system is exposed to a skewed subset of legal cases during training, it might generalize conclusions that are not universally applicable, as was the case in the patent infringement example above. Two of the most commonly proposed strategies to reduce the impact of bias in AI responses are: increasing human oversight and improving the diversity of training data.

Increasing human oversight would allow lawyers to identify and rectify the bias before it could have an impact. However, easily the most championed benefit of AI is that it saves time. If countering bias effectively necessitates substantial human oversight, it reduces this benefit significantly.

The second most straightforward solution to AI bias is to improve the training data to ensure a comprehensive and unbiased dataset. This would, in the case of our patent dispute example, prevent the AI from giving skewed responses that leaned towards established corporations. However, acquiring a comprehensive and unbiased dataset is easier said than done, primarily due to issues related to incomplete data availability and inconsistencies in data quality.
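As a sketch of what ‘improving the diversity of training data’ can mean in practice, here is a naive re-balancing step that oversamples an under-represented group in a toy dataset; the records and field names are hypothetical, and real fairness work involves dedicated tooling and far more careful auditing than this:

```python
# Naive re-balancing of a skewed training set by oversampling the
# under-represented group. Hypothetical records for illustration only.
import random

random.seed(0)

# Toy dataset of past cases, heavily skewed towards large-company claimants.
cases = (
    [{"claimant": "large_company", "won": True}] * 80
    + [{"claimant": "small_startup", "won": True}] * 20
)

def rebalance(records, key):
    """Oversample minority groups so every group appears equally often."""
    groups = {}
    for record in records:
        groups.setdefault(record[key], []).append(record)
    target = max(len(group) for group in groups.values())
    balanced = []
    for group in groups.values():
        balanced.extend(group)
        balanced.extend(random.choices(group, k=target - len(group)))
    return balanced

balanced = rebalance(cases, key="claimant")
print({name: sum(1 for c in balanced if c["claimant"] == name)
       for name in ("large_company", "small_startup")})
# -> {'large_company': 80, 'small_startup': 80}
```

Simple oversampling of this kind only corrects the proportions of data you already hold; it cannot conjure up representative data that was never collected, which is why data availability remains the harder problem.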

Overall, while a combination of these two strategies would go a long way towards mitigating bias, it remains one of the biggest challenges surrounding generative AI. It’s clear that incoming AI regulation will only increase and expand in an attempt to deal with a range of issues around the use of this rapidly rising technology. As the legal world increases its use of (and reliance on) generative AI, more questions and concerns will undoubtedly continue to appear over its risks and how to navigate them.

Charlie Downey is an aspiring solicitor. He is currently a third-year philosophy, politics and economics student at the University of Nottingham.

The post Navigating bias in generative AI appeared first on Legal Cheek.

Improving access to justice – is AI the answer? https://www.legalcheek.com/lc-journal-posts/improving-access-to-justice-is-ai-the-answer/ https://www.legalcheek.com/lc-journal-posts/improving-access-to-justice-is-ai-the-answer/#respond Mon, 21 Aug 2023 07:37:45 +0000 https://www.legalcheek.com/?post_type=lc-journal-posts&p=191303 Jake Fletcher-Stega, a recent University of Liverpool law grad explores the potential for technology to enhance legal services


Jake Fletcher-Stega, a recent University of Liverpool law grad, explores the potential for technology to enhance legal services

Utilising advancements like artificial intelligence (AI) and chatbots in the UK can greatly boost efficiency and accessibility in the legal system. Legal tech has the potential to substantially elevate the quality of legal services, prioritising client outcomes over traditional methods, which is crucial for advancing the legal field.

Inspired by Richard Susskind’s work (a leading legal tech advisor, author and academic), this article seeks to demonstrate AI’s potential to spearhead advancements in the legal field and provide solutions to the issue of court backlogs currently plaguing the UK system.

 The problem: the overloaded UK court system

Despite our faith in the right of access to justice as a cornerstone of the British legal framework, the reality is far less certain than it might appear. Briefly put, access to justice is the ability of individuals to assert and safeguard their legal rights and responsibilities. In 2012, the Legal Aid, Sentencing and Punishment of Offenders Act (LASPO) significantly reduced funding for the UK justice system, contributing to a current backlog of approximately 60,000 cases and leaving many unable to afford representation.

If we are to fix this ongoing crisis, a fresh, unique, and revolutionary solution is required. I suggest that adopting an innovative approach, such as the use of legal technology, could significantly improve access to justice.

 The solution: legal tech

To echo the view of leading academic Susskind, legal service delivery is outdated and overly resistant to technological advancement. He asserts that the utilisation of artificial intelligence, automation, and big data has the potential to revolutionise the methods through which legal services are provided and executed. I must reiterate: it does not benefit anyone that our legal sector is overly conservative and technophobic. Other professions have moved forward with technology, but lawyers haven’t.

Lawyers are behind the curve when compared to other sectors, such as finance and medicine, which are now utilising technology such as Microsoft InnerEye. Law isn’t significantly different from medical and financial advice, and certainly not different enough to deny the value of innovating our legal services.

The belief that the legal field cannot innovate in the same way as other industries due to its epistemological nature is a common misconception. Many argue that AI will never fully replicate human reasoning, analysis, and problem-solving abilities, leading to the assumption that it cannot pose a threat to human professionals whose job primarily involves ‘reasoning’. However, this perspective is flawed.

While AI may not operate identically to humans, its capability to perform similar tasks and achieve comparable outcomes cannot be underestimated. Instead of fixating on the differences in the way tasks are accomplished, we should shift our focus to the end result.

Embracing AI/Legal Tech and its potential to augment legal services can lead to more efficient, accessible, and effective outcomes for clients, without entirely replacing the valuable expertise and experience that human professionals bring to the table. It is by combining the strengths of AI with human expertise that we can truly revolutionise the legal sector and improve access to justice for all.


Outcome thinking

As lawyers, we must begin to approach the concept of reform in law through the notion of ‘outcome thinking’. In outcome thinking, the emphasis is on understanding what clients truly want to achieve and finding the most effective and efficient ways to deliver those outcomes. The key idea is that clients are primarily interested in the results, solutions, or experiences that a service or product can provide, rather than the specific individuals or processes involved in delivering it.

For example, instead of assuming that patients want doctors, outcome thinking suggests that patients want good health. Another example is the creation of this article. I used AI tools to help me adjust the language, structure and grammar of this text to make it a smoother read. This is because ultimately as the reader you are only interested in the result and not how I crafted this text.

Lawyers are getting side-tracked

Lawyers often fail to grasp the focus of this discussion. To illustrate, let me share a personal story. Just moments before my scholarship interview at one of the Inns of Court, I was presented with two statements and asked to argue for or against one of them within a five-minute timeframe. One of the statements posed was: ‘Is AI a threat to the profession of barristers?’ Instead of taking sides, I chose to argue that the question itself was fundamentally flawed.

My contention was that the more critical consideration should be whether new technology can enhance efficiency in the legal system, leading to more affordable and accessible access to justice. The primary focus of the law should be to provide effective legal services rather than solely securing an income for barristers, just as the priority in medicine is the well-being of patients, not the financial gains of doctors.

When a new medical procedure is introduced, the main concern revolves around its impact on patients, not how it affects the workload of doctors. Similarly, the legal profession should prioritise the interests of those seeking justice above all else.

One example — chatbots

One practical example of legal tech that Susskind suggests is the implementation of a ‘diagnostic system’. This system uses an interactive process to extract and analyse the specific details of a case and provide solutions. This form of frontline service technology is often provided through the medium of a chatbot. As a chatbot can work independently and doesn’t require an operator, it has the potential to streamline legal processes.

To test this, I developed a prototype application that demonstrated the potential of AI to tackle legal reasoning. Using the IBM Watson Assistant platform and academic theory from Susskind & Margaret Hagan, I created a chatbot that assisted a paralegal in categorising a client’s case. Although far from perfect, the project proved that AI can substantially improve the efficiency and quality of our outdated legal services.
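The prototype itself is not reproduced here, but the following platform-agnostic Python sketch illustrates the kind of triage logic such a diagnostic chatbot might apply; the categories and keywords are hypothetical and are not taken from the Watson-based project described above:

```python
# Platform-agnostic sketch of chatbot triage logic for categorising a query.
# Categories and keywords are hypothetical and purely illustrative.
CATEGORY_KEYWORDS = {
    "employment": {"dismissal", "dismissed", "redundancy", "employer", "wages"},
    "housing": {"landlord", "tenancy", "eviction", "deposit"},
    "consumer": {"refund", "faulty", "purchase", "warranty"},
}

def categorise(query: str) -> str:
    """Pick the category whose keywords best match the client's description."""
    words = set(query.lower().split())
    scores = {cat: len(words & kws) for cat, kws in CATEGORY_KEYWORDS.items()}
    best_category, hits = max(scores.items(), key=lambda kv: kv[1])
    return best_category if hits else "refer to a human adviser"

print(categorise("My landlord is threatening eviction over my deposit"))  # housing
print(categorise("I was dismissed and my wages have not been paid"))      # employment
```

Even this crude routing, sitting in front of a human adviser, hints at how frontline triage could be automated while leaving judgment calls to people.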

Concluding thoughts

This article has attempted to demonstrate how embracing technological innovation can revolutionise the legal profession. By focusing on delivering efficient and client-centric outcomes, the legal sector can improve access to justice and create a more effective system. While challenges exist, proactive adoption of innovative solutions will shape a promising future for law, ensuring its continued role in upholding justice for all.

Jake Fletcher-Stega is an aspiring barrister. He recently graduated from the University of Liverpool and his research interests lie in legal tech and AI.

The post Improving access to justice – is AI the answer? appeared first on Legal Cheek.

The blame game: who takes the heat when AI messes up? https://www.legalcheek.com/lc-journal-posts/the-blame-game-who-takes-the-heat-when-ai-messes-up/ https://www.legalcheek.com/lc-journal-posts/the-blame-game-who-takes-the-heat-when-ai-messes-up/#comments Tue, 08 Aug 2023 07:55:57 +0000 https://www.legalcheek.com/?post_type=lc-journal-posts&p=190977 Megha Nautiyal, a final-year law student at the University of Delhi, explores the relationship between liability and technology


Megha Nautiyal, a final-year law student at the University of Delhi, explores the relationship between liability and technology


Imagine a scenario where an Artificial Intelligence-powered medical diagnosis tool misinterprets critical symptoms, harming a patient. Or consider an autonomous drone operated by an AI algorithm that unexpectedly causes damage to property. As the capabilities of AI systems expand, so too does the complexity of determining the legal responsibility when they err. Who should bear the responsibility for such errors? Should it be the developers who coded the algorithms, the users who deployed them, or the AI itself?

In the world of cutting-edge technology and artificial intelligence, we find ourselves at the cusp of a new era marked by revolutionary advancements and unprecedented possibilities. From self-driving cars that navigate busy streets with ease to sophisticated language models capable of composing human-like prose, the realm of AI is reshaping our lives in extraordinary ways. However, with the awe-inspiring capabilities of AI comes an equally daunting question that echoes through courtrooms, boardrooms, and coffee shops alike — who is legally responsible when AI makes mistakes?

Assigning liability: humans vs AI

Unlike human errors, AI errors can be complex and challenging to pinpoint. It’s not as simple as holding an individual accountable for a mistake. AI algorithms learn from vast amounts of data, making their decision-making processes somewhat mysterious. Yet, the concept of holding an AI legally responsible is not just science fiction. In some jurisdictions, legal frameworks are evolving to address this very conundrum.

One line of thought suggests that the responsibility should lie with the developers and programmers who created the AI systems. After all, they design the algorithms and set the initial parameters. However, this approach raises questions about whether it is fair to hold individuals accountable for AI decisions that may surpass their understanding or intent.

Another perspective argues that users deploying (semi-)autonomous AI systems should bear the responsibility. They determine the scope of AI deployment, its applications, and the data used for training. But should users be held liable for an AI system’s actions when they may not fully comprehend the intricacies of the algorithms themselves?

Is AI a legal entity?

An entity is said to have legal personhood when it is a subject of legal rights and obligations. The idea of granting legal personhood to AI, thereby making the AI entity itself liable for its actions, may sound like an episode of Black Mirror. However, some scholars and experts argue that as AI evolves, it may gain a level of autonomy and agency that warrants a legal status of its own. This approach sparks a thought-provoking discussion on what it means to recognise AI as an independent entity and the consequences that come with it.

Another question emerges from this discussion — is AI a punishable entity? Can we treat AI as if it were a living, breathing corporation facing consequences for its actions? Well, as we know, AI is not a sentient being with feelings and intentions. It’s not a robot that can be put on trial or sent to AI jail. Instead, AI is a powerful technology—a brainchild of human ingenuity—designed to carry out specific tasks with astounding efficiency.

In the context of law and order, AI operates on a different wavelength from corporations. While corporations, as “legal persons,” can be held accountable and face punishment for their actions, AI exists in a unique domain with its own considerations. When an AI system causes harm or gets involved in something nefarious, the responsibility is not thrust upon the AI itself. When an AI-powered product or service misbehaves, the spotlight turns to the human creators and operators—the masterminds who coded the algorithms, fed the data, and set the AI in motion. So, while AI itself may not be punished, the consequences can still be staggering. Legal, financial, and reputational repercussions can rain down upon the company or individual responsible for the AI’s misdeeds.


Global policies and regulations on AI

In the ever-evolving realm of AI, a crucial challenge that arises is ensuring that innovation goes hand in hand with accountability. Policymakers, legal experts, and technologists have to navigate uncharted territory, and face the challenge of crafting appropriate regulations and policies for AI liability.

In 2022, AI regulation efforts reached a global scale, with legislative records from 127 countries showing a marked rise in AI-related lawmaking. There’s more to this tale of international collaboration. A group of EU lawmakers, fuelled by the need for responsible AI and increasing concerns surrounding ChatGPT, called for a grand summit in early 2023. They summoned world leaders to unite and brainstorm ways to tame the wild stallion of advanced AI systems.

The AI regulation whirlwind is swirling with intensity. Stanford University’s 2023 AI Index reports that 37 AI-related bills were passed into law worldwide in 2022. The US charged ahead, waving its flag with nine laws, while Spain and the Philippines recently passed five and four laws, respectively.

In Europe, a significant stride was taken with the proposal of the EU AI Act nearly two years ago. This Act aims to classify AI tools based on risk levels, ensuring careful handling of each application. Moreover, the European Data Protection Board’s task force on ChatGPT signals growing attention to privacy concerns surrounding AI.

The road ahead: what should we expect?

 As we journey toward a future shaped by AI, the significance of policies regulating AI grows ever more profound. In this world, policymakers, legal experts, and technology innovators stand at the crossroads of innovation and ethics. The spotlight shines on the heart of the matter: determining the rightful custodian of AI’s mistakes. Is it the fault of the machines themselves, or should the burden fall upon their human creators?

In this unfolding saga, the road is paved with vital decisions that will shape the destiny of AI’s legal accountability. The future holds an alluring landscape of debates, where moral dilemmas and ethical considerations abound. Striking the right balance between human ingenuity and technological advancement will be the key to unlocking AI’s potential while safeguarding against unintended consequences.

Concluding thoughts

As we continue to embrace the marvels of AI, the captivating puzzle of legal accountability for AI errors looms large in this ever-evolving landscape. The boundaries between human and machine responsibility become intricately woven, presenting both complex challenges and fascinating opportunities.

In this dynamic realm of AI liability, one must tread carefully through the legal intricacies. The answer to who should be held accountable for AI errors must be reached on a case-by-case basis. The interplay between human intent and AI’s decision-making capabilities creates a nuanced landscape where the lines of liability are blurred. In such a scenario, courts and policymakers must grapple with novel scenarios and evolving precedent as they seek to navigate this new challenge.

Megha Nautiyal is a final-year law student at the Faculty of Law, University of Delhi. Her interests lie in legal tech, constitutional law and dispute resolution mechanisms.

The post The blame game: who takes the heat when AI messes up? appeared first on Legal Cheek.

What does digital transformation mean for women in law? https://www.legalcheek.com/lc-journal-posts/what-does-digital-transformation-mean-for-women-in-law/ https://www.legalcheek.com/lc-journal-posts/what-does-digital-transformation-mean-for-women-in-law/#comments Thu, 12 Jan 2023 11:42:04 +0000 https://www.legalcheek.com/?post_type=lc-journal-posts&p=183206 MSc student and qualified Turkish lawyer Öznur Uğuz considers how advancements in tech help and hinder the current gender gap

MSc student and qualified Turkish lawyer Öznur Uğuz considers how advancements in tech help and hinder the current gender gap

Gender inequalities in women’s career advancement and the resulting gap in leadership positions in law firms are by no means new phenomena. While so much has changed since the times when women were denied the right to practise law on the grounds of their sex, the so-called glass ceiling persists in today’s legal industry, making it much harder for women to climb the career ladder.

That being said, the legal profession itself is in a period of profound change which, driven by technology and innovation, might change the picture for women in law, along with many other things in the legal industry. The potential changes the adoption of advanced technologies could bring about to legal practice have already been, and continue to be, discussed by many. Yet, nothing much has been said on how those changes might affect women in legal practice and the existing gender gap in the legal industry.

On the face of it, technology might help bridge the current gender gap by introducing new forms of working, removing time and place constraints, and fostering the emergence of brand new legal roles. One of the major advantages of the adoption of technology in the legal industry is flexibility, which provides legal professionals with the opportunity to carry out their work outside the workplace. Endorsement of technology in law firms might also facilitate the improvement of working hours by introducing positive changes to the billable hour system through the allocation of certain time-consuming and repetitive tasks to legal technology tools. These could make a significant difference in terms of work-life balance and job retention, particularly for women with parental responsibilities, making it easier to maintain a balance between work and family life.

Moreover, technology, more specifically algorithms, might help provide an impartial consideration process for women by mitigating discriminatory treatment they might face during recruitment, promotion and task allocation processes. However, algorithms, too, have the potential to perpetuate existing inequalities and exclusion in the legal industry. Contrary to common belief, algorithmic decision-making could also be impaired by biases as algorithms are developed and trained by humans and learn from existing data that might underrepresent or overrepresent a particular category of people. Given that employment decisions made by algorithmic systems are often based on companies’ top performers and past hiring decisions, the overrepresentation of men in male-dominated industries might very well lead to favouring male candidates over females.

A perfect example of this is the tech giant Amazon’s experimental hiring algorithm, which is said to have preferentially rejected women’s job applications on the grounds of the company’s past hiring practices and applicant data. The company’s machine learning system assessed the applicants based on the patterns in previous applications and since the tech industry has been male dominant, the system penalised resumes containing the word “women’s”. While Amazon said that the system was never used in evaluating candidates, the incident suggests that this type of system exists and might already be used by some firms in employment decisions.
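A small, entirely hypothetical sketch shows how this can happen: if a naive scorer is trained on the terms found in past hiring decisions drawn from a male-dominated sample, words associated with the under-represented group end up penalised even though they say nothing about ability. The data and scoring rule below are invented for illustration and bear no relation to Amazon’s actual system:

```python
# Hypothetical sketch of a scorer naively trained on past hiring decisions.
# Invented data and scoring rule; not Amazon's (or anyone's) actual system.
from collections import Counter

hired_cvs = [
    "rugby captain coding club", "chess society coding club",
    "rugby team hackathon", "coding club hackathon",
]
rejected_cvs = [
    "women's chess society coding club", "women's rugby captain",
]

hired_terms = Counter(" ".join(hired_cvs).split())
rejected_terms = Counter(" ".join(rejected_cvs).split())

def score(cv: str) -> int:
    """+1 for each term seen among past hires, -1 for each seen among past rejections."""
    return sum(hired_terms[term] - rejected_terms[term] for term in cv.split())

print(score("rugby captain coding club"))          # scores 5
print(score("women's rugby captain coding club"))  # scores 3, lower only because of "women's"
```

The second CV is penalised purely for containing a word correlated with the under-represented group in the historical data, which is the essence of the problem described above.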


The most worrying part of algorithmic discrimination, which could have an aggravating effect on the gender gap, is the scale of possible impact. While the effect of discriminatory decisions by humans is often limited to one or a few persons, a discriminatory algorithmic decision could affect a whole category of people and lead to cumulative disadvantages for those who fall into that category, in this case, women. The fact that algorithmic systems are often kept as trade secrets and protected by intellectual property rights or firms’ terms and conditions complicates things further, making it harder to detect and mitigate such discriminatory treatment.

Technology might also have more indirect implications for women in legal practice, mainly through a shift to automation and the take-over of certain legal tasks by technology tools. The ongoing trend toward automation is expected to cause unprecedented changes in the legal industry to the extent that some legal roles might entirely disappear, while new ones are emerging.

In a 2016 report, Deloitte predicted that 114,000 jobs in the legal sector were at risk of being lost to technology within the following two decades. Junior lawyers were expected to be the ones most impacted by this trend due to the relatively less intellectually demanding nature of their roles. This bleak forecast has been supported by a study from the Law Society of England and Wales, which predicted a fall in employment of 13,000 legal professionals by 2027, with legal secretaries and office support roles at a higher risk of replacement by technology. Whilst neither of the analyses discussed the issue from a gender-specific perspective, the occupational division in the legal industry indicates that women are likely to be more affected by the accelerating adoption of technology in law firms.

According to the Law Society Gazette, women accounted for 61% of solicitors and 52% of lawyers in UK law firms in 2021, while only 35% of law firm partners were female. Other data on occupational gender structure, from LinkedIn Talent Insights, showed that women constituted 87% of paralegals and 54% of staff attorneys in US corporate and commercial practice in 2019, as opposed to 23% of law firm partners. These figures indicate a significant gender gap in the career hierarchy, with leadership positions being male-dominated while women mostly occupy entry-level positions, which are more likely to be impacted by automation. While it is only the tip of the iceberg, the data signals that any replacement of legal roles by technology would likely widen the already large gender gap in the legal industry.

Öznur Uğuz is a qualified Turkish lawyer, who is currently studying for an MSc in European economy and business law in Rome, Italy. She previously completed the Graduate Diploma in Law at The University of Law, and is interested in the intersection of law, business and technology.

The post What does digital transformation mean for women in law? appeared first on Legal Cheek.

The impact of AI on copyright law https://www.legalcheek.com/lc-journal-posts/the-impact-of-ai-on-copyright-law/ https://www.legalcheek.com/lc-journal-posts/the-impact-of-ai-on-copyright-law/#respond Tue, 20 Dec 2022 08:52:44 +0000 https://www.legalcheek.com/?post_type=lc-journal-posts&p=182394 Following public excitement around 'ChatGPT', aspiring barrister Jonathan Binns considers the impact of artificial intelligence on UK copyright law, and even asks the chatbot for its take

Following public excitement around ‘ChatGPT’, aspiring barrister Jonathan Binns considers the impact of artificial intelligence on UK copyright law, and even asks the chatbot for its take

At the moment it seems like the whole world is shocked to discover the colossal leap artificial intelligence (‘AI’) has made since the release of ‘ChatGPT’ last month by OpenAI, a company co-founded by Elon Musk.

I asked the newly-released software what effect it thinks AI will have on UK copyright law, and this is what it had to say:

“It is difficult to predict the exact impact that AI will have on copyright law in the UK. However, it is likely that the use of AI in creative industries, such as music and art, will raise questions about the ownership of works created by AI. There may also be discussions about whether AI itself can hold copyrights for works it creates. It is possible that the UK government will need to update copyright laws to address these issues.”

I must stress there was no human involvement in the creation of that answer and it has not been edited. When computers are capable of creating coherent and reasonable answers to complex questions, it’s only a short step to their ability to author literary works of their own. This specific computer programme specialises in its understanding of language and has the potential to change the face of many industries. It follows OpenAI’s previous AI image generator, ‘DALL-E 2’, which was capable of instantly generating artwork including photo-realistic images based on user prompts.

Copyright laws allow the creator of a work to be the sole owner of that work; therefore they have the sole rights to sell or reproduce their idea. These rights can be claimed by the author of the work under section 9 of the Copyright, Designs and Patents Act 1988 (‘CDPA’), which describes an author as the person who “created” the work. This work could be: literary work (e.g. a book or script), entrepreneurial work (e.g. a film or sound recording), or other works detailed in the Act. Where a literary work is computer-generated, the Act provides that “the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken.” This is a confusing assortment of words that essentially means the author of a work written by an AI will be the writer of the prompt that encouraged the AI to write it.


Different categories of copyright works have different requirements to be protected. For example, entrepreneurial works have no requirement for originality, in contrast to literary works which section 1 CDPA requires are “original”. The meaning of original is undefined in the Act but is understood to mean the original work of the author — this conflicts with the provisions under section 9 which allow the author to take credit for the computer-generated work in spite of it not being their own work.

Some suggest it would be a logical solution for a computer-generated work to be held separately to a human-written piece as an entrepreneurial work as opposed to a literary one. This would be similar to how the law treats sound recordings and musical copyright which are substantially the same but with a difference in authorship requirements and, consequently, a difference in the level of protection afforded to them.

Others question whether AI-created works should be entitled to copyright protection at all. Ultimately, this school of thought boils down to the fundamental purpose of intellectual property law. When a human protects their work, it is because they want to be the sole beneficiary of the products of their own time, effort and imagination. A computer-generated text, song or artwork does not derive from the same process; consequently, why should it be afforded the same protection?

On the other side of the coin, the implications of AI are not limited to computer-generated literature flooding the shelves of bookshops and AI art hanging on the walls of the Louvre. Machine learning algorithms are already being implemented by companies such as YouTube to automate the process of copyright enforcement. These algorithms can quickly and accurately scan vast amounts of content, comparing it against known copyrighted works to identify potential infringements. This has made it easier for copyright holders to enforce their rights and protect their works from unauthorised use, but has also raised concerns about the potential for false positives and other errors in the process.
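By way of illustration, here is a heavily simplified Python sketch of the fingerprint-matching idea such systems rely on; real services such as YouTube’s Content ID use robust audio and video fingerprints rather than plain text hashes, so treat this only as a toy model:

```python
# Heavily simplified sketch of fingerprint matching for copyright screening.
# Real systems (e.g. YouTube's Content ID) use robust audio/video fingerprints.
import hashlib

def fingerprints(text: str, window: int = 20, step: int = 5):
    """Hash overlapping windows of the content to build a set of fingerprints."""
    return {
        hashlib.sha256(text[i:i + window].encode()).hexdigest()
        for i in range(0, max(1, len(text) - window + 1), step)
    }

# Hypothetical reference database built from a known (out-of-copyright) work.
protected_work = ("It is a truth universally acknowledged, that a single man in "
                  "possession of a good fortune, must be in want of a wife.")
reference_db = fingerprints(protected_work)

def likely_infringing(upload: str, threshold: float = 0.3) -> bool:
    """Flag the upload if enough of its fingerprints match the reference set."""
    upload_fps = fingerprints(upload)
    overlap = len(upload_fps & reference_db) / max(1, len(upload_fps))
    return overlap >= threshold

print(likely_infringing(protected_work))                       # True: near-verbatim copy
print(likely_infringing("An entirely original essay on law.")) # False: no matching fingerprints
```

Exact hashing of this kind fails as soon as content is slightly altered, which is one reason real systems use perceptual fingerprints, and why false positives and negatives remain a live concern.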

Overall, the impact of AI on copyright is complex and multi-faceted. While the technology has brought positive changes, including making it easier to identify and act against copyright infringement, it has also raised a number of challenging legal and ethical issues. As AI continues to advance and become more widely adopted, these issues will continue to evolve.

The UK is in the minority in having recognised early the potential for copyright works to be composed without a human author and having legislated for it. Many other jurisdictions, such as the USA, will face difficulties with this growing technology now that the public has free access to such tools. In the USA, case law has established that, for copyright to subsist, the work must be created by a human author exercising at least a modicum of creativity. It is hard to say which approach will stand the test of time, but the foundations have clearly been laid for a new normal in the creative industries.

Jonathan Binns is an aspiring barrister and recent law graduate, currently undertaking the BPC at The University of Law, Leeds.

The post The impact of AI on copyright law appeared first on Legal Cheek.

The future is driverless https://www.legalcheek.com/lc-journal-posts/the-future-is-driverless/ https://www.legalcheek.com/lc-journal-posts/the-future-is-driverless/#respond Thu, 04 Aug 2022 08:28:44 +0000 https://www.legalcheek.com/?post_type=lc-journal-posts&p=178107 Our driving laws are not geared up for the possibilities of driverless vehicles, but could the Law Commission have found a way to steer through the obstacles?

Our driving laws are not geared up for the possibilities of driverless vehicles, but could the Law Commission have found a way to steer through the obstacles? MSc student and qualified Turkish lawyer, Öznur Uğuz looks at proposals for reform

The Law Commission of England and Wales, which advises the government on law reform, published an issues paper, Remote Driving, in June 2022. It examines the existing law on remote driving and possible reforms, sets out and briefly analyses the legal issues arising from the current law, and poses questions inviting responses from the public.

Remote driving — a technology that enables a person to drive a vehicle from a remote location rather than from within the vehicle — is a hot topic currently attracting a lot of interest from industry and business. It is already being used in the UK in off-road environments, particularly in farming, and is increasingly being trialled for on-road use. Its attractions include the ability to operate in hazardous environments, such as mines, quarries or warehouses, and the potential to make delivery operations more feasible. There are many challenges, however, particularly in terms of safety, liability and compliance.

How safe is it?

From a legal aspect, arguably the most important safety risks posed by remote driving are related to cybersecurity and terrorism. Some serious concerns on the matter include possible takeovers of remotely operated vehicles by hackers as well as their use as “a terrorist weapon”. Other risks referred to in the Law Commission paper relate to connectivity, situational awareness and maintenance of the driver’s focus.

At present, there is no specific legal or regulatory framework for remote driving, which means existing laws and regulations must be applied to remote driving systems and their operation. However, the adequacy of existing rules when applied to emerging technologies is highly questionable, as those rules were formulated when the current level of technology and its possible implications could not have been imagined.

The current law

The Commission has identified some construction and use regulations which may cause problems in the remote driving context. The potentially problematic provisions are contained in the Road Vehicles (Construction and Use) Regulations 1986, and their breach is an offence under the Road Traffic Act 1988. The provisions, as outlined by the Commission, are set out below.

Regulation 104

Regulation 104 requires a driver to have proper control of the vehicle and a full view of the road and traffic ahead. The paper points out that Regulation 104 does not necessarily require the driver to be in the vehicle. This suggests the full-view requirement can be met as long as the driver has a full view of the road and traffic, even if that view is provided remotely through connectivity. As emphasised by the Commission, the more difficult issue with the provision is what amounts to proper control: it is unclear whether this refers to the type of control a conventional driver would have, or whether the requirement can also be satisfied by a person undertaking only part of the driving task.

Regulation 107

Regulation 107 prohibits leaving a vehicle unattended unless the engine is stopped and the parking brake set. The Commission seems to have found the regulation mostly compatible with remote driving in light of the case law, according to which the driver does not need to be in the vehicle as long as they are in a position to observe it. This suggests that a vehicle may still be “attended” by a person who is near the vehicle or in a remote-control centre. However, the provision may still be breached when a remote driver cannot see the vehicle or is not in a position to prevent interference with it.

Regulation 109

Regulation 109 prohibits a driver from being in a position to see a screen displaying non-driving-related information. The Commission considers that information about the driving environment and the vehicle, such as the route and vehicle speed, displayed on a screen to a beyond line-of-sight driver is driving-related and thus permitted under the regulation. Still, it notes that the information developers might wish to display may go well beyond what is permitted.

Regulation 110

Regulation 110 forbids the use of hand-held devices, including mobile phones, whilst driving. The paper finds Regulation 110 "potentially problematic" for "line-of-sight" driving, where a person walks alongside a vehicle and controls its speed or direction through a hand-held device. Such a person would technically be a driver using a hand-held device whilst driving, breaching the regulation even though its original aim was to prevent the distracting use of mobile phones while driving.

Problems with the current law

The Commission is concerned that these uncertainties in the current law might hinder the development of some potentially valuable projects, or could be exploited to put systems on the road that are not ready in terms of quality and safety. Accountability for poor remote driving is also an issue, as the main responsibility currently lies with the driver, which may result in injustice, particularly where the driver has little control over key aspects of the operation. For example, a remote driver could face criminal prosecution for a serious offence in the event of an accident, even if they were not at fault.

Under the existing law, a remote driver also bears responsibility for the roadworthiness of the vehicle and would be liable even if they were not in a position to know that the vehicle was unroadworthy. While the driver's employer could also be prosecuted in that case, the offence the employer could face would be relatively minor.

Civil liability

Civil liability may also become an issue in the remote driving context. For example, it might be difficult to determine who is liable for damage in the event of an accident if the problem lies in connectivity or some other latent defect rather than with the driver. It is already difficult to determine how harm was caused and by whom; it may be even more complex with new technologies involving multiple actors and internal components.

In terms of insurance, the Commission refers to the UK's compulsory motor insurance scheme. As originally required by the Road Traffic Act 1930, third-party motor insurance is compulsory for anyone who uses a motor vehicle on the road. While for a conventional vehicle identifying the person who is "using" it is straightforward, the situation is more complex in the remote driving context. The paper refers to the case law and states that both the remote driver and the organisation that employs them would be "using" the vehicle, meaning both would need to be insured against potential liability.

In the event of damage, organisations that employ remote drivers would be liable both for their own wrongdoing in operating an unsafe system and for the faults of the drivers they employ. In addition, employers would be responsible for any defect in the vehicle or in the remote driving system. Still, the situation might be much more complicated than that, especially where a remote driving feature has been designed by one organisation but operated by another, or where the remote driver is a subcontractor rather than an employee.

Remote driving from abroad

One of the most interesting issues is the legal implications of operating a remotely driven vehicle on UK roads from abroad. There is currently no international standard regulating remote driving, which might lead to serious challenges in the event of damage, particularly in terms of compliance with driving regulations and law enforcement. In addition to possible delays and high expense, tracking down evidence of an accident where the vehicle and its remote driver are in different countries would be practically difficult.

Another important concern in the international context is whether a remote driver from another jurisdiction driving a vehicle on UK roads would need a UK driving licence. As explained in the paper, as a contracting party to the 1968 Vienna Convention on Road Traffic, the UK is normally obliged to recognise a driving licence issued by another contracting party until the driver becomes normally resident in the UK. Still, as remote driving involves higher safety risks than conventional driving, this existing provision might not be directly applicable in the remote driving context and an amendment or reform might be needed.

Conclusion

The concerns raised by remote driving technology, and its possible implications, extend well beyond those discussed here and will require considerable time and expert thought to manage. However, the paper makes clear that the current law needs, at the very least, amendment to provide clear rules for remote driving, so that effective solutions can be found to any problems the technology might pose. Regarding any future legal reform, the Law Commission is expected to set out possible options and publish its advice to the UK government early in 2023, after receiving responses to its paper.

Öznur Uğuz is a qualified Turkish lawyer, who is currently studying for an MSc in European economy and business law in Rome, Italy. She previously completed the Graduate Diploma in Law at The University of Law, and is interested in the intersection of law, business and technology.

The post The future is driverless appeared first on Legal Cheek.

Welcome to the futuristic world of the Decentralised Autonomous Organisation https://www.legalcheek.com/lc-journal-posts/welcome-to-the-futuristic-world-of-the-decentralised-autonomous-organisation/ https://www.legalcheek.com/lc-journal-posts/welcome-to-the-futuristic-world-of-the-decentralised-autonomous-organisation/#respond Wed, 06 Jul 2022 08:20:06 +0000 https://www.legalcheek.com/?post_type=lc-journal-posts&p=177218 Can old laws govern these radical creations? MSc student and qualified Turkish lawyer, Öznur Uğuz investigates the mysterious entities known as DAOs and finds they have a lot to offer

Can old laws govern these radical creations? MSc student and qualified Turkish lawyer, Öznur Uğuz investigates the mysterious entities known as DAOs and finds they have a lot to offer

A Decentralised Autonomous Organisation (DAO) is a new form of digital, blockchain-based organisation that is owned and governed by its community according to a set of self-executing rules encoded in smart contracts. Members are issued with "tokens", which grant voting rights over the governance of the organisation. Lacking a central authority, DAOs offer a more transparent and democratic decision-making process, which extends their potential business applications from financial transactions and company management to secure voting, crowdfunding and charities.

Since DAOs rely on smart contracts, the risk of self-dealing and fraudulent behaviour by members is also very low compared to traditional forms of organisations. By digitally bringing together groups of people from different backgrounds and physical locations, DAOs promise to facilitate access to markets and set a new milestone in digitalisation in the age of globalisation. However, all these benefits come at a cost of legal uncertainty. The unique self-governing and decentralised structure of DAOs raises several questions ranging from the legal status and governance of the organisation to the extent of liability of its members, which cannot be answered using existing legal instruments.

Goodbye to the hierarchy

Unlike traditional corporate entities, DAOs do not have a hierarchical structure or a centralised authority; instead they rely on a democratic voting system and/or smart contracts operating on blockchain technology as their source of governance. DAOs fall into two main types: "algorithmic DAOs" and "participatory DAOs". Participatory DAOs are managed by the consensus of members through smart contracts. Each member has the right and ability to participate in the DAO's governance by voting on a proposal made by another member or by initiating a new one themselves. Proposals supported by a prescribed portion of members are adopted and enforced by the rules coded in smart contracts. Algorithmic DAOs, on the other hand, aim to be governed entirely by smart contracts dictating the whole functionality of the organisation.
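For readers curious what governance by token vote looks like in practice, here is a deliberately simplified, off-chain sketch in Python. Real participatory DAOs implement this logic in smart-contract languages such as Solidity and run it on a blockchain; the member names, token balances and 50% approval threshold below are invented purely for illustration.

from dataclasses import dataclass, field

@dataclass
class Proposal:
    description: str
    votes_for: int = 0
    votes_against: int = 0

@dataclass
class ToyDAO:
    # Illustrative token-weighted governance; not a real smart contract.
    token_balances: dict               # member -> governance tokens held
    approval_threshold: float = 0.5    # prescribed portion of total token supply
    proposals: list = field(default_factory=list)

    def propose(self, member, description):
        assert member in self.token_balances, "only token holders may propose"
        self.proposals.append(Proposal(description))
        return len(self.proposals) - 1    # proposal id

    def vote(self, member, proposal_id, support):
        weight = self.token_balances[member]   # voting power tracks tokens held
        proposal = self.proposals[proposal_id]
        if support:
            proposal.votes_for += weight
        else:
            proposal.votes_against += weight

    def passes(self, proposal_id):
        total_supply = sum(self.token_balances.values())
        return self.proposals[proposal_id].votes_for / total_supply > self.approval_threshold

# Three members with unequal holdings vote on a proposal.
dao = ToyDAO(token_balances={"alice": 60, "bob": 30, "carol": 10})
pid = dao.propose("alice", "Commission a legal opinion on registering the DAO")
dao.vote("alice", pid, True)
dao.vote("bob", pid, False)
dao.vote("carol", pid, True)
print(dao.passes(pid))  # True: 70 of 100 tokens in favour, above the 50% threshold

In a real DAO, the equivalent of the final step would also execute the proposal automatically, for example by releasing funds from a treasury contract, which is precisely why the drafting of those coded rules carries so much legal weight.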

Who’s liable?

While fully autonomous DAOs would eliminate problems caused by human misconduct, they would give rise to additional legal issues. There is always a risk that the underlying software has defects such as bugs or security vulnerabilities, the impact of which might be greater for algorithmically-managed DAOs, where no human intervention is possible. Since smart contracts execute automatically, they are almost impossible to modify, and any changes would require the entire contract to be cancelled and redrawn. This would be a particular problem in the event of an organisational crisis, unlawful enforcement or a regulatory concern, hampering the organisation's ability to react in time. What is more, determining the liable party in a dispute might be more difficult for fully autonomous DAOs, which involve multiple actors and internal components of complex data and algorithms, making it complicated to establish whether damage was triggered by a single cause or resulted from the interplay of a number of different factors.

Another outstanding concern over DAOs is the personal and unlimited liability of their members for the organisation's acts, debts and obligations, an issue resulting from the lack of an established legal framework. From a legal perspective, DAOs are treated as unincorporated entities with no corporate form or protection against liability, as they do not follow the legal formalities of incorporation such as registration, bylaws and contracts. In the United States, DAOs are likely to be treated as "general partnerships", which cannot provide liability protection to their members. Since they do not enjoy the usual protection against liability available to members of limited liability companies, who risk only their capital contribution, in the event of a lawsuit a DAO's members would be fully liable, including with their personal assets, until the claim is satisfied. Thus, if classified as a general partnership, a DAO may lose potential members who would otherwise support it but worry that membership would put their assets at risk.

The exceptions

Currently, there are only a few exceptions to DAOs' ambiguous legal status. The first step towards establishing a legal framework for DAOs in Anglo-American law was taken by the state of Vermont in 2018. Under Vermont's Limited Liability Company Act, a DAO can register as a Blockchain-Based LLC (BBLLC) and thereby gain an official legal status that allows it to enter into contractual agreements and offer liability protection to its members. The legal recognition of DAOs as a distinct form of limited liability company came later, from the State of Wyoming, the first US state to do so. Bill 38, which took effect on 1 July 2021, created a new form of legal entity, the DAO LLC (or LAO), which provides LLC-like liability protections to DAOs that register as such. However, the law is essentially an amendment to the existing Wyoming Limited Liability Company Act and does not create any protections unavailable in existing LLC structures, while imposing new obligations on DAOs. Moreover, it raises serious concerns, primarily over whether registering as a Wyoming LLC would diminish the fundamental "decentralised" aspect of a DAO.

Under the act, a Wyoming DAO LLC can either be member-managed or algorithmically managed. Yet, an algorithmically-managed DAO can only be formed if the underlying smart contracts are capable of updates or modifications, meaning that DAOs formed under the Wyoming LLC Law will have to maintain some modicum of centralisation and human control. In addition, the law requires DAO LLCs to maintain a presence in the state of Wyoming through a registered agent, which may also temper the decentralised nature of DAOs. Having said that, the impact of the law is likely to be limited given the state’s small population and minimal ties to the financial industry. Overall, although Vermont’s BBLLC and Wyoming’s DAO LLC represent steps forward towards the development of a legal framework for DAOs, absent recognition at the federal level and significant clarity around the different forms of DAOs, these solutions would not be sufficient to overcome the current ambiguity.

Locating the DAO

From a jurisdictional point of view, a DAO’s decentralised form might also make it challenging to find the applicable law and jurisdiction in the event of a dispute. In the majority of legal systems, the applicable jurisdiction for entities is determined with reference to the place of incorporation of the organisation or the place where key managerial decisions of such organisations are taken. However, DAOs have neither a country of incorporation nor a place of administration. Unlike traditional software applications that reside on a specific server controlled by an operator assigned to a specific jurisdiction, DAOs run on a decentralised blockchain system and are collectively managed by a distributed network of members who can be from anywhere in the world.

In some cases, the applicable law and jurisdiction might be determined by reference to the other contractual party or the creator of the DAO's code. In terms of applicable law, it might also be possible to apply the law of the state or jurisdiction in which a lawsuit has been instituted. Still, as there is no established rule, litigants may have to bring actions in several jurisdictions to obtain legal protection, and initiating a legal dispute against a DAO may prove highly impractical and cumbersome.

These concerns aside, DAOs have the potential to elevate international business by creating a truly global and decentralised corporate structure that could effectively function without the need for hierarchical human management. If they reach their full potential and achieve mainstream adoption by overcoming the outstanding legal and regulatory challenges, they can accelerate the development of inclusive markets by easing access and can create novel business and cooperation opportunities that would not otherwise be possible.

Öznur Uğuz is a qualified Turkish lawyer, who is currently studying for an MSc in European economy and business law in Rome, Italy. She previously completed the Graduate Diploma in Law at The University of Law, and is interested in the intersection of law, business and technology.

The post Welcome to the futuristic world of the Decentralised Autonomous Organisation appeared first on Legal Cheek.
