Technology

Pep Martorell: "AI has no memory and does not store our data"

Physicist and PhD in Computer Science

11/03/2026

Palma. Pep Martorell is a physicist, holds a PhD in Computer Science, and is a partner at the management firm Invivo Partners, where he helps develop artificial intelligence (AI) projects. As an expert in the field, he will participate this Thursday, March 12, in the "Companies with a Human Face" symposium in Palma, to discuss the trends that will shape AI over the next decade.

Explained for those who have no idea: what does AI need to function?

— Basically, three things: algorithms, which learn from data; data; and energy, which is essential.

What role does AI play in redefining the world order?

— I like to say that relations between countries have been determined by material factors, such as borders, raw materials, and oil. Now technology has been added to the mix. Everything digital ends up determining economic and social progress, and a significant part of the discussions revolve around this: the control of talent, data, and energy. Everything is being transformed in the realm of technology. The Americans say: "Who doesn't compute, doesn't compete." In other words, those who lack the capacity to develop digital skills are not competitive, politically or economically. If you can't provide this to your ecosystem, it isn't competitive. The simplest example is the internet: no one today can imagine competing without internet access. It's very obvious. The same will happen with AI.

Who is leading this technological race today?

— The United States and China, each with some elements more advanced than the other. China has a more developed energy capacity. The United States leads in AI models. Then there are continents that remain as users, far behind.

And Europe?

— It has a very complicated role. Although we've become more aware of the necessity of trying to compete, we're still not able to develop the technology at the necessary pace. One example is the gigafactories: large data centers that are planned for Europe to compete. Ursula von der Leyen announced them in February 2025. Everyone welcomed the news. But we have the typical problem: a year has passed and we don't even know where they'll be located. The call for proposals hasn't even been issued yet. Europe has made progress in recognizing the importance of AI, but the pace is far too slow.


What is the main risk for Europe?

— The fundamental danger is the loss of competitiveness. If your companies can't access these technologies, they can't offer the same prices or open new markets. There are other consequences: AI is the most fundamental element for advancing research. If you can't guarantee your scientists access, you relegate them to a second-rate status. Although in Europe this is being addressed. Barcelona reacted quickly with MareNostrum, one of the world's largest computing machines. The Balearic Islands also have a very interesting project: the new computing infrastructure that the University of the Balearic Islands (UIB) will launch.

What practical effects can AI in biomedical research have on patients?

— The application of AI shortens the scientific cycle. From the initial research in a specific area to obtaining a result, years can pass. Now, many tasks can be greatly accelerated thanks to the use of AI. Another advantage is that it allows us to solve problems that were previously unsolvable. One example is protein folding. Its geometric shape determines how it interacts with the environment. If you have to design proteins to interact with a tumor, you can now know how they will ultimately fold and predict what will happen. The solution was provided by AI, and the research received the Nobel Prize in Chemistry in 2024.

Is the growth of AI related to the data we generate with our mobile phones?

— Absolutely. AI models are trained using our data, including public network data. Over the past decade, there has been a significant shift: the widespread adoption of smartphones as data sources. With a mobile phone in our pocket, we have become constant providers of data for AI. But, contrary to popular belief, AI doesn't have memory and doesn't store our data as such. It uses it for training, but doesn't save it individually.


Can AI end up creating more inequality while simultaneously driving progress?

— We're not economists, but I tend to be quite optimistic. I'd say it's a technology and a tool that will soon be universally accessible, unlike other technologies that generate inequality. Right now, it's in the hands of a few, which could create governance problems if regulations or limitations need to be applied. But access has become universal very quickly. You see more and more people who, with limited resources, can create good projects. For regions like the Balearic Islands, it's an opportunity. Will large companies dominate everything? For now, we see the opposite. We see small law firms and consultancies that can compete with much larger companies. Nobody knows how it will evolve, but I see it more as an opportunity than a threat, in terms of equality.

Can changes happen so quickly that society cannot assimilate them?

— I would separate the speed at which the technology evolves from its impact on daily life. If we really think about it, has your daily life changed that much with the arrival of AI? It feels like everything is moving very fast, but the adoption of the technology won't be quite so rapid. Humans and organizations adopt technology at a more human pace. This will allow us a reasonable and calm rhythm.

AI agrees with you and tells you what you want to hear.

— The way we've trained them means they have an incentive to always respond and to do so satisfactorily. It's quite simple. Tests that evaluate how good a system is measure the percentage of times it responds to your requests. There's no complicated story behind it.


What can a worker offer to keep their job in the face of AI? What human skills will become more valuable?

— I would cite a report from the Davos Forum that discussed two very important competencies: one is the ability to build customer trust. Customer-supplier interaction is not just transactional, but based on trust: from the local pharmacy to a lawyer who advises you. In this area, even if an AI system could give you the same advice, we will still prioritize professionals who inspire confidence. The other is general technological literacy. Do your people have technological knowledge? It's essential to have the skills to use technology effectively and understand why it works the way it does. There will be an increase in training in this field. If we want to hire someone, we will increasingly value this: whether they know how to leverage the latest tools, whether they have a basic understanding of technology.

Is there a bubble surrounding this technology?

— More than a bubble, there could be a conflagration. As with any technology, there may come a time when, due to excessive valuations or capital accumulation in a particular company, a correction occurs. What won't happen is that the technology will become irrelevant. People must separate what happens on the stock market from what happens in the real world. We already saw this with the dot-com crash in 2000: many companies lost money, but the internet continued.

What ethical dilemmas do you face as an investor in AI projects?

— We don't have much of a problem, because the projects come from hospitals and research centers that already have their ethics committees. They've already gone through many filters. We don't have any problems in that regard.


How should AI be regulated to prevent abuses and ensure transparency?

— An interesting approach is what Europe is doing now. The regulation has been criticized, but I like that it tries to anticipate problems. It's well thought out, especially because it regulates the uses of the technology: it prohibits some and imposes specific regulations on others. It doesn't only affect the technology owner, but any company operating in Europe.

What impact can AI have on democracy, through its use in social media and information?

— One of the biggest challenges for AI is misinformation. But it's not a new problem. Misinformation can also spread through traditional newspapers, but there are limitations imposed by the byline and the prestige of the publication. What AI does is exaggerate a situation we already have in the digital world. I don't rule out the possibility that in the future, watermarks or systems that identify whether content has been generated by AI will need to be introduced.

What impact do large AI models have on energy consumption and the environment?

— Of all the things we've discussed, the element that will most limit AI's growth is energy. There's a dilemma: if we generate new sources to meet AI's demand, we might have to resort to others that aren't entirely clean. We want a clean Europe, but at the same time, China has no problem increasing its energy consumption. We'll face a difficult contradiction to manage. Some countries are going to advance very rapidly. If you look at the new energy capacity that China brings online each year, it's far ahead of the United States and Europe. The debate in Europe will be what's the point of limiting its growth if other parts of the world don't seem to care. If you don't do the same, you fall behind. And if others don't stop, your measures won't be very effective either.