Author: Ramak Molavi Vasse’i

People have a range of unmet legal needs due to limited or dysfunctional access to justice. The legal profession is looking for ways to provide advice more efficiently. Legal tech companies are looking for new areas in which to expand and offer their services. Courts are looking for ways to deal with a chronic backlog of cases and to shorten court proceedings in order to reach fairer decisions in less time.
The police are looking for ways to solve more crimes faster and to prevent new ones. In the legal field, efficiency seems to be a common goal, so automation is one of the first solutions that comes to mind. In particular, the use of AI promises to take over complex reasoning and drafting tasks.
So what could possibly go wrong? I want to outline three major issues with using generative AI in the legal sphere.

1. Unreliability

Services driven by GPT-3/4 can produce plausible and professional-sounding writing in no time. Fed on much of the available content of the internet, including books, Wikimedia content, and research papers, to name a few sources, the text generated by GPT-4-based AI can be very accurate. But it can just as easily be absolutely wrong. It can even hallucinate convincingly, inventing whole court cases and then drawing arguments from those non-existent cases to analyze a new one.

The untrained eye of a non-expert would not be able to tell the difference, and neither can the AI itself:

The favorite kid in town, ChatGPT, does come with plenty of disclaimers about its limitations, but when generating a response it does not recognize whether the output is correct or not. The content is produced without any accompanying indication of how likely it is to be accurate.

Unlike the search engines we use, it does not name the sources the generated content is based on.

2. Incompatibility with the Rule of Law

>The rule of law requires transparency and understanding of how decisions are made and what rules are applied. Without this transparency, individuals cannot challenge or contest decisions that affect their rights or interests. The use of non-interpretable AI, such as GPT-3, in decision-making processes poses an existential threat to the rule of law.

>Another dimension of incompatibility concerns the principle of the rule of law, which guarantees that the law applies equally to all.

GPT-3 and ChatGPT are trained on vast amounts of data scraped from the Internet. This includes copyrighted material (from authors, scientists, creators and artists) as well as personal data, without any consent mechanism in place.

By accepting and promoting the widespread use of services built on rights violations at scale, we are perpetuating and normalizing the competitive advantages of deliberate infringement.

We are also, in essence, accepting the unfairness of fining a single person up to €3,000 per case for using a copyrighted Getty image on their personal website, while giving a pass to the massive mining and extraction of copyrighted material, with no consequences for OpenAI or other creators of transformer models. This clearly contradicts the rule of law.

3. The Question of Governance

The output of GPT-3-based AI solutions is neither traceable nor understandable in a satisfactory way. How can we control a technology whose decisions and predictions are hard to understand and which contain a significant amount of randomness?

Our research on meaningful transparency shows a huge gap between the large body of research on explainable AI on one side and, on the other, its adoption in organizations and the satisfactory use of explainability tools and features by AI users in real-life settings. There is almost no concept of explainability aimed at affected and lay persons.

In addition, there is the problem of unallocated responsibility. The chain of responsibility around GPT-3 is unclear. While the producers place responsibility largely with the users, the users are unwilling to take responsibility for a technology whose functioning they cannot understand and therefore cannot influence.

The use of AI is already widely regulated. Data protection laws, antitrust laws, copyright laws and non-discrimination laws all apply to AI. The EU AI Act will bring additional, specific legal requirements for the use of AI, especially, but not only, in high-risk areas. One of these high-risk areas will be use in the judicial system.

Transparency and accountability are key to the enforcement of the law. The clear gaps in both when using AI make proper governance nearly impossible. If we ask whether an opaque system with unreliable output and unclear allocation of accountability can be used for high-stakes decisions, the only conclusion may be that unsupervised learning and non-interpretable AI models are not suitable for use in the legal field. At this stage, we cannot govern GPT-3, so implementation in sensitive areas such as law and justice may do more harm than good and requires further research and development.

The sensitive area of law is not suitable as a sandbox for experimenting with immature, unsupervised AI.

To avoid losing citizens' traditionally high level of trust in the legal system (around 70% of citizens in Germany trust the legal system, and 83% in Austria), the way forward is to focus on better, smaller and curated datasets and supervised learning solutions, and to ensure oversight by domain experts. The use of GPT-3 in criminal justice should not be encouraged.

Less problematic applications are those of an assistive nature, such as anonymising court decisions for publication or transcribing court hearings.

The most useful application may be to improve accessibility through better communication of laws and court decisions. While laws and court rulings often seem to be written for other lawyers, GPT-3 could help to unpack the information density and translate legalese into simple language for lay people and the wider public, though again not without expert supervision and oversight and clearly assigned accountabilities.
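As a rough illustration only, the sketch below shows how such a plain-language step might be prototyped with OpenAI's Python library while keeping a human expert as the final gate. The model name, prompt wording and the `expert_review` callback are assumptions made for illustration, not a recommended setup.

```python
# Minimal sketch, not a production tool: a GPT-3 completion model drafts a
# plain-language version of a legal text, and nothing is released without
# explicit approval by a domain expert. Model choice, prompt wording and the
# expert_review callback are illustrative assumptions.
from typing import Optional

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder


def draft_plain_language_summary(legal_text: str) -> str:
    """Ask the model for a plain-language draft of a legal text."""
    prompt = (
        "Rewrite the following court decision excerpt in plain language "
        "for a non-lawyer, without changing its legal meaning:\n\n"
        f"{legal_text}\n\nPlain-language version:"
    )
    response = openai.Completion.create(
        model="text-davinci-003",  # assumed GPT-3 completion model
        prompt=prompt,
        max_tokens=400,
        temperature=0,  # lowers randomness, but does not remove unreliability
    )
    return response["choices"][0]["text"].strip()


def publish_with_oversight(legal_text: str, expert_review) -> Optional[str]:
    """Treat the model output as a draft; only an approved draft is returned."""
    draft = draft_plain_language_summary(legal_text)
    approved = expert_review(original=legal_text, draft=draft)  # human in the loop
    return draft if approved else None
```

The point of the sketch is the review gate: the generated text is treated as a draft for a lawyer to check against the original ruling, never as a finished publication.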

Published on 7 February 2023