Friday 30 August 2019

Justice and artificial intelligence: trading in the blindfold for the mask?

People often say that law is a matter of evolutions rather than revolutions. Or, at most, of reversals… The courts and tribunals approach the evolution of morals and social relationships with circumspection. This is juris-prudence, after all. But what becomes of this art of prudence in the face of the evolution of technology? What happens when technology invades the legal sphere and transforms its practices?

As in many other areas of society, law and justice are not exempt from the digitisation of our daily lives and the related development of artificial intelligence. In this field, the emergence of technology based on artificial intelligence facilitates the automation of a growing number of activities that were previously considered the exclusive preserve of lawyers and judges. The number of applications built on self-learning algorithms is multiplying. While some ironically denounce a potential “Uberisation of law”, others are already investing in what are known as LegalTechs, the name used to designate a wide range of applications that are supposed to modernise justice and finally coax it into the digital era: online platforms that facilitate access to justice, intelligent databases that enable lawyers to develop and bolster their legal strategies, predictive systems that help judges assess the risk posed by an individual…

The juxtaposition of terms such as justice and algorithm inevitably gives rise to a feeling of “unnerving strangeness”, because it suggests that legal practice could be modelled. Behind their virtuous promises of optimisation and effectiveness, these technologies have the potential to radically transform the rationality of legal practice, making the mechanisms by which law is implemented more fragile and, in the process, more laborious for litigants. Will the development of algorithmic systems that can collect, analyse and process massive quantities of data give legal stakeholders predictive tools that enable them to optimise decision-making, anticipate risks, and even control behaviour? An irony of history? The lawyers of ancient Rome, who profoundly influenced our western legal tradition, also relied on oracles for their decisions…

Today it is crucial that we properly identify the societal challenges posed by these new predictive tools. Two avenues of investigation are worth following. The first concerns legal professionals and requires us to question the singularity of their practices: what does the lawyer or judge do that cannot be done by a machine? In this context, we should ask whether a machine can really capture the extraordinary complexity of certain cases. How should we envisage new modes of collaboration with these machines that do not relegate the lawyer to the role of mere executor? How can a relationship with the client or litigant be established when new algorithmic actors are bursting onto the legal scene? Beyond these future collaborations, questions also arise about the funding and control of the digital tools made available to stakeholders, at a time when the market for data and the development of these technologies are dominated by the private sector.

The second avenue of investigation relates to litigants and, more specifically, their fundamental right to be treated impartially. One argument often advanced in favour of a more widespread recourse to machines is that they compensate for the cognitive and affective biases to which humans often fall victim. Some postulate that magistrates are too subjective and partial, and therefore incapable of rendering the same “objective” judgement for everyone. And we all know that justice’s attributes include more than the sword and the scales… The blindfold that covers the eyes of Justice, symbolising impartiality, is just as crucial. Nowadays, however, we know that this criticism is too simplistic, because the operation of artificial intelligence systems can also be quite opaque, sometimes even concealing unequal and discriminatory treatment. The system thus advances covertly, behind a mask, and its power is invisible. The blindfold no longer covers the eyes of the legal system, but those of the litigant…

Christophe Lazaro,
Professor of Law and Society at the Université Catholique de Louvain (UCLouvain)
Member of CERNA (Commission de Réflexion sur l’Éthique de la Recherche en sciences et technologies du Numérique, Allistene, France)