Artificial Intelligence: Are we going too fast?

The accelerating advance of artificial intelligence brings both opportunities and challenges: are we prepared for its implications?

The rapid spread of artificial intelligence (AI) tools, and the free access available to some of them, is changing how many people work.

Applied in business, teaching, government bodies, calculation and design programs, databases, and much more, these tools are also finding their way into scientific research.

In an article recently published in Nature, it is reported that OpenAI's "deep research" tool, launched on February 2, is capable of synthesizing information from dozens of web pages into extensive reports. In December, Google had launched a similar tool within Gemini.

Several scientists who have tried it have been surprised by its ability to write bibliographic reviews or complete review articles. Among the advantages cited are its speed, its ability to search the internet, and the improved reasoning capabilities of the underlying language model.

Even so, OpenAI acknowledges on its website that the tool has limitations: it can make mistakes in bibliographic citations and fail to distinguish reliable information from rumors.

Elon Musk at the launch of Grok 3

On February 18, Grok 3 was presented, the new version of the AI developed by xAI, Elon Musk's artificial intelligence company.

It is an improved version of the tool launched in August, with a claimed performance ten times higher than the previous version. It also uses self-correction mechanisms intended to avoid the errors of some AI chatbots, which present false information as fact.

According to its promoters, it is a highly accurate tool that will be integrated into X, allowing it to access data in real time. It is a paid tool, available through a monthly X Premium+ subscription.

In performance tests carried out by xAI, Grok 3 outperformed Google's Gemini, DeepSeek, and OpenAI's ChatGPT.

Bioethical assessment

The rapid evolution of publicly accessible AI presents as many possibilities as risks. These tools use algorithms designed to handle vast amounts of data, combining them in pre-established ways to produce results that are not free of bias. These biases may stem from the tools' own limitations, but also from the design of the algorithms themselves, which can select, censor, or manipulate certain information. Their use therefore demands prudence, together with the training needed to take advantage of their broad utility without placing indiscriminate trust in the veracity of their results.

An ethical approach to these tools involves prudent application and critical analysis of their results, so that the biases and errors inherent in their use are not transferred to citizens' lives, education, or culture.