Karoliina Ainge and Kaimar Karu: AI's potential does not trump rule of law

If we allow artificial intelligence (AI) into the court system while the logic behind its decisions remains a secret, we risk losing trust in the justice system, write Karoliina Ainge and Kaimar Karu.
When citizens start to doubt the impartiality of the courts, what is at risk is not tech innovation but the very foundation of the rule of law, they continue.
Justice and Digital Minister Liisa Pakosta's (Eesti 200) recent proposal to use AI to assist in making decisions on child support has once again highlighted the tensions between technological potential and concerns over societal trust.
The issue here is not merely the overall trustworthiness of AI applications, which undeniably remains a major challenge in itself, but the use of AI in decisions that significantly affect people's lives. Technology should not be feared, but every new technology must be implemented thoughtfully, transparently, and on the basis of genuine need.
The success of Estonia's e-state is built on a smart and pragmatic technology policy, integrated across sectors, not on blind enthusiasm for every new tool that comes along. At times we have deviated from this principle, but we have learned from those experiences too. Let us not let those lessons go to waste.
When we talk about applying AI to decisions that affect children's rights and outcomes for families, we are not talking merely about technological choices, but about putting the principle of public trust at risk. The implementation of automated analysis and recommendations in the justice system must not be rushed, and must operate on a transparent basis.
While the goal of achieving efficiency gains through digitalization and automation is understandable, we must first pause and ask ourselves whether the existing data, the data we could realistically collect, and the processes involved are ready for this.
The answer to this question may surprise the more enthusiastic among us. Then, once the issues related to data and processes have been resolved, we must pause for even longer before implementing AI-based decision-making and ask: are we ready to sacrifice transparency and justice in the court system for the sake of administrative convenience?
Clarity and accountability are the cornerstones of the rule of law
Every administrative decision, be it about taxes, benefits, or child support, must be logically traceable and comprehensible to all parties involved. A decision must not function as a so-called "black box," whose intermediate conclusions remain incomprehensible or opaque even to the judge.
If a judge cannot verify the calculations made by AI or interpret them step by step, then we cannot speak of judicial discretion. In a case like that, the judge would merely be a rubber stamp.
Pakosta's proposed standard, a judge confirming the outcome "if it seems that it is fully okay to them," cannot be a valid criterion under the rule of law.
As far as is currently known, the state intends to use machine learning models whose operation rests on statistical probabilities. With models like these, no one can say exactly why the AI produced a given outcome.
This is an inherent problem with this type of AI.
Such models may be a good solution for, say, Spotify recommendations — although even then it would be worth asking the artists what they think of AI-generated and recommended tracks in terms of copyright and royalties — but they are not suitable for deciding child support cases.
In a rule-of-law society, the transparent justification of decisions is the foundation of public trust.
On the other side of the same coin lies the issue of accountability. If the AI makes an error in interpreting the data provided for determining child support, or issues an unfair recommendation, then who is liable? The developer, the official, or the judge? The project manager? The owner of the data center hosting the AI?
And further: who retains the right to make a decision contrary to the AI's recommendation, and on what grounds?
After all, AI has processed millions of times more information in forming its recommendation than any judge, of any rank, ever could. Plus, AI is new and cool! Even epoch-making and strategic, if one reads the vision statements.
But this diffusion of responsibility makes it harder for citizens to appeal decisions made about them or about those in their care, and thus harder to obtain justice.
The trustworthiness of justice lies not only in its decisions, but to a significant extent also in the processes. People must retain a justified belief and understanding that they are being heard, that their situation is understood, and that they are treated equally.
Only in this way is it possible to arrive at a fair result. Legislation is not black and white, as society is not a machine, and people are not cogs in it.
Children's rights are not a domain for experimentation
The situation is particularly sensitive because AI is being considered for use in a field where fundamental questions remain unresolved and the stakes are very high: children's welfare and family rights.
When mistakes are made, it is not some abstract system that suffers, but real children and their parents — people who have a full right to transparent and well-considered decisions.
The minister stresses that the final decision is still made by a human being. But to verify the legality and appropriateness of conclusions that come out of a black box, judges or their delegates must examine the data, the validity of the reasoning, and the substance and impact of those conclusions.
Any failure to do so would not merely be negligence, but a fundamental departure from the rule of law.
AI "decisions" are not facts, nor are they truly decisions, but rather the results of probability-based calculations, triggered by a complex algorithm.
These results reflect the selection criteria and quality of the input data, the development team's decisions, choices and biases, and the intent behind the questions posed to the algorithms.
Unfortunately, none of this is visible to the people about whom the decisions are made, nor to the judge who must sign off on them.
We must also not forget that the "decisions" made by AI emerge within the context of the processes applied to it and the content and wording of the questions posed — reflecting the aims, choices, and biases of the person doing the asking.
The answers provided by AI can be easily manipulated via precise phrasing.
Estonia's strategy is not careless experimentation
Let us call to mind how we have built the e-state so far: gradually, thoughtfully, involving experts, and earning users' trust.
We have seen how, in situations where these principles get ignored, problems have started to snowball.
An approach which assumes citizens should simply place their trust in the reliability of technology does not work.
Such an approach leads both to fully justified skepticism and to stubbornly persistent unfounded accusations.
The price we as a society will pay for insufficient explanatory work and a failure to deliberately build trust will be very high.
If we allow black boxes whose decision-making logic remains a mystery into the court system, we risk losing trust in that justice system.
If a citizen begins to (rightly) doubt the impartiality of the courts, it is not technological innovation that is at risk, but the very foundation of the rule of law.
If AI is to have a place in the legal system, that place must emerge through discussion, with transparency, and within a strong legal framework.
The aim of national technological development cannot merely be to reduce the number of people involved in a process; it must be to improve outcomes and societal well-being.
This is particularly the case where trust, fairness, and vulnerable groups in society are at stake.
Technology has to serve people, and their rights.
The introduction of AI into the court system therefore requires a much more thorough approach than that which is currently being proposed.
Fundamental questions of transparency, the algorithmic nature of processes and their foundations, accountability, and impact analysis require a shared understanding among stakeholders of the risks and opportunities, as well as broad social consensus.
To start with, complete transparency must be ensured: every decision must be traceable and comprehensible.
Second, judges must retain real control over the decision-making process, without risking becoming powerless bystanders in a "Computer says no" situation.
Third, we must assess whether the benefits of new solutions, at this point in time, justify the inevitable costs and risks.
Until these questions are resolved, AI should not be used in the court system in this way.
Impartial judgment is the cornerstone of democracy, and must not be sacrificed just for administrative convenience.
Karoliina Ainge is a cyber expert and former head of Estonian cyber policy. Kaimar Karu is a tech entrepreneur, and a former IT and foreign trade minister.
--
Editor: Kaupo Meiel, Andrew Whyte