Eye on the Future: AI and Legal Thinking

A discussion and summary of "Judicial Decision-Making and Explainable Artificial Intelligence" by Shaun Lim

Linus Koh

12/15/2021 · 5 min read

Question: Can AI replace judges?

In this post:
- What is Artificial Intelligence?
- Explainability of AI-mediated outcomes
- Difficulty of parsing texts into numbers
- Legitimacy of outcomes
- xAI: Exogenous AI
- xAI: Decompositive AI
- Thoughtfully designed AI
- The role of the judge
- Conclusion: no, but a tool nonetheless
- Food for thought: just because AI can replace judges does not mean that it should

I cannot remember the number of times friends and pundits have lamented or predicted the death of the legal profession. The knell of artificial intelligence, legal automation and technology-driven efficiencies brought nothing but doom and gloom. Yet I never questioned the truth behind such claims, especially when they were made by legal professionals, or soon-to-be legal professionals, who shudder at the sight of Excel or anything not remotely related to Word. I suppose it is always easier to hedge one's bets, but are such views truly representative of the capabilities of Artificial Intelligence?

As much as we poke fun at the archaic nature of law, there is no doubt that the law does not exist in vacuo and will evolve with the increasing adoption of technology. As we watch legal technology grow in the private sphere, one has to wonder if and when advanced technologies such as Artificial Intelligence can replace judges or facilitate judicial decision-making.

The author quickly dispatches the notion that Artificial Intelligence (AI) means sentience or general "human thought". At the current state of technology, AI is best understood as "weak AI", limited to specific features of human intelligence. Whilst AI methodologies abound, the most common is Machine Learning, in which algorithms are trained on vast data sets to make correlated predictions.
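To make that idea concrete, here is a minimal sketch of Machine Learning in the sense the author uses it: an algorithm fitted to labelled historical examples so that it can make correlated predictions on new inputs. The case features, labels and numbers below are entirely hypothetical.

```python
# A minimal sketch of "weak AI" as supervised Machine Learning: an
# algorithm fitted to labelled historical examples so that it can make
# correlated predictions on new inputs. All features, labels and
# numbers here are hypothetical.
from sklearn.linear_model import LogisticRegression

# Each row encodes a hypothetical past case as numeric features:
# [contract_value_$k, days_of_delay, prior_breaches]
X = [[120, 10, 0], [300, 45, 2], [80, 5, 0],
     [500, 90, 3], [150, 30, 1], [60, 2, 0]]
y = [0, 1, 0, 1, 1, 0]  # 1 = claimant succeeded, 0 = claimant failed

model = LogisticRegression(max_iter=1000).fit(X, y)

# The trained model predicts an outcome probability for unseen facts.
new_case = [[200, 40, 1]]
print(model.predict_proba(new_case)[0][1])  # P(claimant succeeds)
```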

The value of AI's ability to process vast amounts of information whilst churning out predictable outcomes should not be belittled. However, a core problem AI practitioners face is the explainability of those outcomes, especially for complex neural networks that cannot easily be understood or visualised graphically. Whilst explainability may not matter to outcome-orientated industries, the law demands certainty through explainability, as evidenced by its long, inductively reasoned judgments.

Another problem posited by the author stems from the fact that legal materials exist as text. For Machine Learning algorithms to parse the information, legal texts must undergo an intermediary step of Natural Language Processing (NLP). Unlike human understanding, which is context-driven, NLP relies on statistics to understand language endogenously (from within the text itself). Perhaps to do my Comparative Law professor proud: context matters, especially when deriving the essence and spirit of the law contained in such texts. For a technology so heavily reliant on input data, the inability to deal with context is a formidable problem that must be resolved.
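A deliberately simplified illustration of that endogenous, statistics-first view of language: a bag-of-words representation, one of the crudest NLP encodings, treats two sentences with opposite meanings as identical. Modern NLP models recover far more context than this, but the example shows what "understanding from within the text" looks like at its most literal.

```python
# A deliberately crude bag-of-words encoding: purely endogenous word
# statistics treat these two sentences, which mean opposite things,
# as identical.
from collections import Counter

s1 = "the court dismissed the appeal"
s2 = "the appeal dismissed the court"

print(Counter(s1.split()) == Counter(s2.split()))  # True: context is lost
```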

Citing Matthias Siems, the author argues that the normative legitimacy of law takes the form of input legitimacy (the authority of the source), output legitimacy (the substance, assessed by theories of justice) and throughput legitimacy (fair and transparent procedures). In the case of AI-generated legal outcomes, trust in their reliability is weakened by the inexplicability of AI-mediated results and the inherent weakness and uncertainty of NLP.

As the author highlights, a lack of understanding of how results are derived is wholly unsatisfactory where the decision-making process has ramifications for the rights of people. Where the role of the judiciary is to provide reasoned decisions, arbitrariness cannot be tolerated.

As the preceding paragraphs suggest, the black-box approach (accepting results so long as they exceed expectations, regardless of explainability) is simply untenable given the judiciary's duty to give reasons. The author notes international AI governance efforts such as the Personal Data Protection Commission's Model AI Governance Framework and Microsoft's Aether. These efforts toward explainable AI (xAI) have pushed the needle forward in encouraging AI that furthers human well-being and safety while upholding principles of explainability, transparency and fairness.

The exogenous approach takes the form of either Model-Centric Explanations (MCE) or Subject-Centric Explanations (SCE). MCE takes a high-level view of the model's input parameters and output performance; SCE seeks to explain and contrast the outputs that result from different inputs. Decompositive AI, by contrast, seeks to explain the operation of the AI via its source code or by reconstructing its correlative neural network.
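A hedged sketch of the two exogenous styles, reusing the hypothetical logistic model and case features from the earlier sketch: the fitted coefficients give a model-centric view, while contrasting two inputs that differ in a single feature gives a subject-centric one.

```python
# A sketch of the two exogenous explanation styles, using the same
# hypothetical logistic model and case features as the earlier sketch.
from sklearn.linear_model import LogisticRegression

X = [[120, 10, 0], [300, 45, 2], [80, 5, 0],
     [500, 90, 3], [150, 30, 1], [60, 2, 0]]
y = [0, 1, 0, 1, 1, 0]
model = LogisticRegression(max_iter=1000).fit(X, y)

# Model-Centric Explanation (MCE): a high-level view of the model,
# e.g. how heavily the fitted model weights each input parameter.
for name, coef in zip(["value", "delay", "breaches"], model.coef_[0]):
    print(f"MCE: weight on {name} = {coef:.3f}")

# Subject-Centric Explanation (SCE): contrast the outputs produced by
# two concrete inputs that differ in a single feature.
base, variant = [200, 40, 1], [200, 40, 0]  # variant: no prior breaches
p_base = model.predict_proba([base])[0][1]
p_variant = model.predict_proba([variant])[0][1]
print(f"SCE: P(success) {p_base:.2f} with a prior breach, "
      f"{p_variant:.2f} without")
```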

Citing Chief Justice Chan Sek Keong, the author notes that the judicial function rests on three distinct pillars: first, the finding of fact; second, the application of the law; and third, the determination of the rights and obligations of the parties.

The author explains that the fact-finding process is subject to varying standards of proof: a balance of probabilities in civil cases and beyond reasonable doubt in criminal cases. While abstraction may allow the balance of probabilities to be quantified as >50%, English judges have been reluctant to rely on mathematical probabilistic reasoning, and Singaporean judges place more importance on the cogency of evidence than on its mere presence.
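To see why the quantification is contentious, here is a toy encoding of the two standards. The 0.5 civil threshold follows from the ">50%" abstraction above; the 0.95 criminal threshold is an arbitrary placeholder, since "beyond reasonable doubt" has no agreed numeric value, which is precisely the judicial objection.

```python
# A toy encoding of the two standards of proof. The 0.5 civil threshold
# follows from the ">50%" abstraction above; the 0.95 criminal threshold
# is an arbitrary placeholder, since "beyond reasonable doubt" has no
# agreed numeric value.
def meets_standard(p_claim: float, standard: str) -> bool:
    if standard == "civil":     # balance of probabilities
        return p_claim > 0.5
    if standard == "criminal":  # beyond reasonable doubt (placeholder)
        return p_claim > 0.95
    raise ValueError(f"unknown standard: {standard}")

print(meets_standard(0.51, "civil"))     # True: mechanically satisfied
print(meets_standard(0.94, "criminal"))  # False: just short of the bar
```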

Stare decisis, Latin for "let the decision stand", is a quasi-law governing judicial decision-making. Inductive reasoning over similar fact patterns may lend itself to regression-reliant AI systems. However, on novel points of law, where the facts are distant and recourse must be made to parliamentary intention or broader policy considerations, there remains an irreconcilable intellectual gap that today's weak AI cannot seem to bridge.
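Reasoning from similar fact patterns maps naturally onto similarity search. A minimal sketch, assuming (strongly, given the NLP discussion above) that cases can be encoded as numeric feature vectors: retrieve the precedent whose facts lie closest to the present case. A genuinely novel case has no near neighbour, which is exactly the gap the author identifies.

```python
# A minimal sketch of reasoning from similar fact patterns as
# nearest-neighbour retrieval, assuming cases can be encoded as
# numeric vectors (a strong assumption, per the NLP discussion).
import math

precedents = {
    "Case A": [120, 10, 0],
    "Case B": [300, 45, 2],
    "Case C": [500, 90, 3],
}
current = [280, 50, 2]  # the present fact pattern, hypothetically encoded

# Retrieve the precedent whose facts lie closest to the present case.
closest = min(precedents, key=lambda k: math.dist(precedents[k], current))
print(closest)  # "Case B": the most factually similar precedent
```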

As to the determination of rights and obligations, the author notes that Singapore courts have at times adopted mathematical reasoning to standardise precedents, especially in criminal sentencing. However, the courts have pushed back against the broad use of mathematical models generally, since sentencing is ultimately weighed against the unique aggravating and mitigating factors of each case.

As the author concedes, so long as judicial decision-making retains its qualitative framework, it is unlikely that weak AI, in its current form and in light of the limitations above, can imitate, let alone replace, judicial decision-making.

However, the author remains optimistic about AI as a tool to aid legal practitioners. He gives the example of lawyers using xAI with SCE to predict and compare the strength of their client's case against existing case law with greater ease and efficiency. A second example uses xAI as an empirical tool to check the consistency of judicial decisions and highlight potential blind spots.

The author highlights the importance of AI governance frameworks and stresses that they should be contextualised appropriately for the judicial setting when adopted. Practitioners must acknowledge the input bias AI systems carry and actively guard against "cooking" the dataset. Additionally, one must keep in mind that AI extrapolation has no recourse to what is logically or legally sensible (taken to the extreme, punishment would simply keep increasing with the severity of the crime), so due discretion must still be exercised.
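A small sketch of that extrapolation problem, with entirely hypothetical numbers: a linear model fitted to severity/sentence pairs will predict far past any statutory maximum, because nothing in the fitted function knows that legal limits exist.

```python
# A sketch of unbounded extrapolation, with hypothetical numbers: a
# linear model fitted to severity/sentence pairs happily predicts far
# past any statutory maximum, because the fitted function has no notion
# of legal limits.
import numpy as np

severity = np.array([1, 2, 3, 4, 5])      # hypothetical severity scores
months = np.array([6, 12, 20, 30, 42])    # hypothetical sentences

slope, intercept = np.polyfit(severity, months, deg=1)

extreme = 20  # a severity far beyond anything in the training data
predicted = slope * extreme + intercept
print(f"{predicted:.0f} months")  # ~175: well past any realistic cap

# Human discretion (or an explicit constraint) must supply the limit.
STATUTORY_MAX = 120  # hypothetical cap in months
print(min(predicted, STATUTORY_MAX))
```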

The author then questions the feasibility of advanced AI replacing human judges. He argues that it is difficult to see any argument against the substantive correctness of a decision handed down by a machine if it was one a human judge could have made; in the absence of any procedural impropriety, such an attack is intellectually impossible. Yet it intuitively feels at odds with a personal sense of justice. One could always take the view that law is ultimately a human construct and only humans can practise the law. The author also suggests that justice must not only be done, but be seen to be done by someone qua human.

Perhaps AI will never replace human judges for the sole reason that it lacks human legitimacy.
