13/03/2026

Artificial intelligence is opening up a horizon of possibilities and challenges for many people, while others see it as a threat. Taking this debate seriously means, first and foremost, trying to understand what this technology is and how it works, so that we know what risks it poses and how we can minimize them. This is clearly seen in one of its most attractive uses: its predictive capacity.

Nothing fascinates humans more than the possibility of predicting the future, and AI does this quite well. Yet AI's work has nothing to do with special intuition; it rests on the ability to process past data, from which it extracts patterns and models that allow it to make predictions. These predictions can in turn affect people's behavior: a forecast of severe weather, for example, may lead people to act so that the predicted harm never materializes. But predictions that work well in the realm of physics become problematic when they refer to human behavior.
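To make concrete what "extracting patterns from past data" means at its simplest, here is a minimal sketch: a least-squares line fitted to invented past measurements and extrapolated one step forward. The data and the setting are hypothetical, chosen only to illustrate the mechanism; real systems use far more elaborate models, but the principle is the same.

```python
# Minimal sketch of prediction-from-past-data: fit a least-squares line
# to hypothetical past measurements, then extrapolate forward.
# All numbers here are invented for illustration.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Past data: year index vs. some observed quantity (hypothetical).
years = [0, 1, 2, 3, 4]
values = [10.0, 12.0, 14.0, 16.0, 18.0]

a, b = fit_line(years, values)
prediction = a * 5 + b  # extrapolate one step into the "future"
print(prediction)  # -> 20.0
```

The model has no insight into why the quantity grows; it only continues the pattern the past data contain, which is exactly why its forecasts inherit whatever that past looked like.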

A well-known case is that of the COMPAS software, used in the US to assist courts in determining the likelihood of recidivism among offenders. It doesn't do a bad job: COMPAS's assessments are more accurate than those of an inexperienced judge, though worse than the judgment of a group of experienced magistrates. Its effectiveness would be acceptable were it not for one detail: when it makes a mistake, in most cases the person harmed is Black.

The problem is that this racist bias, which would be a reprehensible error coming from a judge, cannot be considered a malfunction of the AI, since the system has no moral prejudices. As we have said, the AI only uses past data, and if in the past Black prisoners were disproportionately labeled as likely to reoffend, that is a pattern it cannot ignore.
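The mechanism can be shown with a deliberately tiny sketch: a "model" that does nothing but mirror historical risk labels. The records below are invented, and the labels stand for past human assessments, not actual behavior; the point is only that two otherwise identical individuals receive different predictions purely because of the labels attached to their groups.

```python
# Minimal sketch of how a model that only mirrors past data reproduces
# bias baked into that data. All records are invented; "label" is the
# historical risk assessment, not actual behavior.
from collections import defaultdict

historical = [
    # (group, historically recorded "high risk" label: 1 = yes, 0 = no)
    ("A", 1), ("A", 1), ("A", 1), ("A", 0),   # group A was over-labeled
    ("B", 0), ("B", 0), ("B", 1), ("B", 0),   # group B was under-labeled
]

# "Training": compute each group's recorded high-risk rate.
counts = defaultdict(lambda: [0, 0])  # group -> [high_risk_count, total]
for group, label in historical:
    counts[group][0] += label
    counts[group][1] += 1

def predict(group, threshold=0.5):
    """Predict risk purely from the group's historical label rate."""
    high, total = counts[group]
    return "high risk" if high / total > threshold else "low risk"

print(predict("A"))  # -> high risk
print(predict("B"))  # -> low risk
```

The predictor is working exactly as designed, faithfully reflecting its training data; the bias lies in the data, which is why it cannot be called a malfunction.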

Proponents of this technology may argue that such bias can be corrected, but another feature of AI hinders that correction: its appearance of neutrality. Many people believe that AI, not being human, is unbiased, and that its responses are impartial and reasoned. Even knowing that AI makes mistakes, we tend to trust what ChatGPT says more than what a person tells us, which makes critical discourse about the technology difficult.

But the main danger of the uncritical use of these systems is that they disregard a characteristic element of human beings: the ability to recognize mistakes and correct them. That a certain group has a high crime rate does not prevent an individual member of that group from choosing a different path. Automating decisions about incarcerating people, ignoring their capacity for self-correction, is an inhumane attitude because, however much work it saves judges, in practice it denies people the freedom to change.

This doesn't render these tools useless, but it does highlight the need to evaluate their ethical and legal implications, and that evaluation cannot be left in the hands of the programmers who create them or the companies that market them. Today, there is a problem with AI, and it's not the fear of a robot rebellion but the absence of a serious political and social debate about its use and the limits that should be imposed on it.
