It is estimated that, within the next few years, half of the tasks that make up the routine work of law firms will be taken over by Artificial Intelligence. These will largely be manual procedures that do not require the expertise of a lawyer and therefore add little specialist value.

Although most would agree that, with adequate supervision, the daily legal routine could be automated, doubts are growing about other facets of machine learning, such as the prediction of judgements. For years, increasingly sophisticated computer tools have been tested, to the point of mapping aspects as broad as the statistical profile of how a court operates; the probability of success on appeal; the criteria of each court, tribunal, section or reporting judge; or the track record, lines of argument and positioning of an individual judge.

In view of this situation, some countries are beginning to limit the scope of these new applications. France, for example, has decided to prohibit the publication of statistical information on judges' decisions and their patterns of conduct in relation to their judgements. Although the restriction applies only to the publication of these data, which could in principle still be used internally by lawyers or law firms, the measure is beginning to set a trend that, depending on the attitude of other states, may lead to broader limits on the use of this type of software.

For and against regulation

This decision has divided the profession between those who oppose it, arguing that the sector must be transparent and open, with information accessible to any firm regardless of its purchasing power, and those who believe the ban seeks to prevent pressure on judges and a trade in litigation strategies.

There are also those who consider that the widespread use of this type of legaltech tool may end up making judicial decisions less natural: judges normally approach their judgements on a case-by-case basis, without regard to any statistical trend, and knowing that such a trend exists could lead them to take it into account, influencing their decisions. Added to this are the doubts raised by the fact that it is private companies that collect these statistical data, with the attendant risk of bias.

And, following this line, will we end up with robot judges? Although some countries are already introducing this possibility for cases whose outcome is evident beforehand, doubts remain about its suitability, given the possibility that other figures within the administration of justice could absorb these procedures, even though such tools could help to relieve, in part, the high litigation rate of our judicial system.