
CHARBONNEAU – AI can’t be trusted for decision-making in the courtroom

(Image: Pixabay.com)

DEFENDANTS HAVE appealed a Quebec Superior Court ruling, claiming that the justice in the case used AI to write it. The ruling contains hallmarks of AI use, including citations to earlier cases that do not exist and verbatim quotations from testimony that was never given.

These allegations raise serious concerns. We expect judgments to be made by humans, and we expect rulings to rest on actual cases.

Worse, future courts might unknowingly build on faulty precedent.

Robert Diab, a law professor at Thompson Rivers University, says:

“The larger question is whether a judge’s reliance on AI renders a trial unfair. The stakes are high. If AI impairs a judge’s ability to be fair and impartial – or appear to be so – using it will erode trust in the courts. But does that mean courts have to forego the potential benefits of AI altogether?

“The Quebec case helps clarify the limits of acceptable AI use by judges.

“If a judge relies on AI that hallucinates legal authorities or invents evidence, the response is obvious. A decision based on the wrong law or facts cannot stand. The defendants in the Quebec case are making both claims. They say [the justice] relied heavily on a fabricated case to support a rule of law that does not exist and mischaracterized other authorities as establishing a ‘clear legal framework.’ The appeal also alleges the decision relied on testimony that was never given. If those claims are correct, the judgment would be difficult to defend on appeal (Globe and Mail, March 30, 2026).”

Hallucinations aside, what matters is whether the decision is sound and based on actual precedents. Could AI make such a decision? Professor Diab continues:

“AI does not deliberate. It predicts the next word in a sentence based on probabilities. Its output may sound convincing, but decisions guided by that reasoning might still seem arbitrary, more a product of computation than judgment.”
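To make Professor Diab’s point concrete, here is a toy Python sketch of next-word prediction. The words and the probabilities are invented for illustration; a real model weighs tens of thousands of candidates.

import random

# Toy next-word distribution (numbers invented for illustration).
next_word_probs = {"liable": 0.55, "negligent": 0.25, "innocent": 0.20}

words = list(next_word_probs)
weights = list(next_word_probs.values())

# The same input can yield a different word on each run.
for _ in range(3):
    print(random.choices(words, weights=weights)[0])

Run it a few times and the output changes, even though nothing about the “case” has. That is the arbitrariness Diab describes.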

I wonder whether an electronic “justice machine” could be built.

Digital circuits don’t compute outputs the way AI does. The output of a digital circuit is absolutely predictable, whereas AI depends on probabilities.

I imagine feeding inputs into my justice machine. The behaviour of any digital circuit can be captured in a “truth table,” which lists every combination of inputs and the output each produces. With the help of ChatGPT, the logic would look like this:

The inputs could be facts such as “contract signed,” “payment made,” “damage occurred.” The logic gates would be configured according to the legal rules (“if A and B, then liability”). The output would produce an unambiguous decision (liable/not liable).
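A minimal sketch of that machine in Python, with the facts and the rule (“if A and B and C, then liability”) invented for illustration:

from itertools import product

def justice_machine(contract_signed: bool, payment_made: bool,
                    damage_occurred: bool) -> str:
    # Hard-wired legal rule: a three-input AND gate.
    liable = contract_signed and payment_made and damage_occurred
    return "liable" if liable else "not liable"

# The truth table: every combination of inputs maps to exactly one output.
for facts in product([False, True], repeat=3):
    print(facts, "->", justice_machine(*facts))

Every row of the truth table yields exactly one verdict; nothing is left to chance.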

The problem with my justice machine is that many of the inputs require interpretation, such as “reasonable person,” “good faith,” and “undue hardship.”

That doesn’t mean AI can’t be useful in decision-making.

AI, in the form of large language models, can apply legal rules to a pattern of facts, such as identifying the elements of negligence or breach of contract.
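As a minimal sketch, assuming access to OpenAI’s Python client and an API key (the model name, instructions and facts here are all invented for illustration):

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

facts = ("The defendant signed a contract to deliver lumber by June 1, "
         "accepted payment, and never delivered.")

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system",
         "content": "List the elements of breach of contract present "
                    "in the facts. Do not cite cases."},
        {"role": "user", "content": facts},
    ],
)
print(response.choices[0].message.content)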

With a clean, agreed set of facts, AI can often produce something that looks very much like a judicial opinion.

But it is not. Only human judges can give real judicial opinions.

David Charbonneau is a retired TRU electronics instructor who hosts a blog at http://www.eyeviewkamloops.wordpress.com.

