The 2024 Nobel Prize in Physics
This year’s Nobel Prize in Physics recognizes the great advances in artificial intelligence (AI), honoring two of its most eminent creators. The official announcement from the Royal Swedish Academy of Sciences highlights their merits as follows: “… they have used tools from physics to develop methods that are the basis of today’s powerful machine learning. John Hopfield created an associative memory that can store and reconstruct images and other types of patterns in data. Geoffrey Hinton invented a method capable of autonomously finding properties in data and thus performing tasks such as identifying specific elements in images.”[1]
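To make the citation concrete: a Hopfield network stores patterns as minima of an energy function and retrieves them by letting a noisy input relax toward the nearest minimum. The following minimal sketch shows the idea; the network size, number of patterns and noise level are illustrative choices of ours, not the laureates’ code.

```python
import numpy as np

# Minimal Hopfield-style associative memory: store binary (+1/-1) patterns
# with a Hebbian rule, then reconstruct one of them from a corrupted copy.
rng = np.random.default_rng(0)
n = 100                                      # neurons (illustrative size)
patterns = rng.choice([-1, 1], size=(3, n))  # three random stored patterns

# Hebbian learning: each weight accumulates the correlation of its two units.
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)                       # no self-connections

# Corrupt one stored pattern by flipping 20% of its units.
state = patterns[0].copy()
state[rng.choice(n, size=n // 5, replace=False)] *= -1

# Recall: update units one at a time toward the sign of their input; each
# update lowers the network's "energy", so the state settles into a minimum.
for _ in range(5):                           # a few sweeps suffice here
    for i in rng.permutation(n):
        state[i] = 1 if W[i] @ state >= 0 else -1

print("pattern recovered:", np.array_equal(state, patterns[0]))
```

With only a few stored patterns relative to the number of neurons, the corrupted input almost always settles back into the original pattern, which is exactly the “store and reconstruct” behavior the citation describes.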
These discoveries have led to technological developments that are changing the way we live. Every time we get a product recommendation on an electronic device, run a Google search, or book a trip online, AI is behind it. It has also entered our homes, where small robots that help us in the kitchen or clean the floor are already popular; our workplaces and factories, where robots have long performed complex and, above all, repetitive tasks; our hospitals, with multiple applications that assist diagnosis; and the financial world and market research. The list of AI’s practical applications in our daily routine is all but endless.
Commentary focuses on ethical aspects
From the very beginning, Hinton’s views on the dangers that AI can entail[2] took on special prominence in the news coverage. He himself highlighted them in the interview conducted by the Nobel organization: “In particular, with regard to the existential threat of these things getting out of control and taking over, I think we are at a kind of bifurcation point in history where in the coming years we have to find out if there is a way to deal with that threat. I think it’s very important now for people to work on the question of how we’re going to maintain control. We need to put a lot of effort into research. I think one thing governments can do is force big companies to put a lot more resources into security research.”[3] The idea was echoed in headlines such as Fortune’s “Nobel laureate Geoffrey Hinton is both AI pioneer and front man of alarm”[4] or CNN’s “With AI warning, Nobel winner joins ranks of laureates who’ve cautioned about the risks of their own work”[5].
The ethical concerns of AI
AI is surely the most transformative technology of our time and raises deep ethical concerns about its use. Three major threats are inherent to the very essence of this technology:
- Privacy;
- The reliability of the data on which the systems are trained; and
- Accountability for the decisions that AI makes.
Privacy requires strict protection against encroachment, because AI’s effectiveness depends on the availability of large volumes of personal data subject to collection, storage and use.
AI systems are only as good as the data they are trained on, so objective data selection is paramount. This is hugely difficult, if not impossible, in open systems like ChatGPT, so developers and researchers must prioritize and standardize rigorous testing and ongoing monitoring.
AI systems increasingly make decisions that affect our lives, from autonomous vehicles to clinical diagnostic systems, and it is therefore critical to establish clear lines of accountability.
In the face of all these threats, society is unprotected: the rate of technological change is so rapid that even the best-informed policymakers cannot keep up, and no realistic investment is being made to help them do so. As Joseph Fuller, a professor at Harvard Business School, puts it: “Regulatory agencies are not equipped with the AI expertise to engage in [oversight] without real focus and investment.”[6]
In addition to these ethical problems inherent in the functioning of AI, there are others arising from its inappropriate or perverse use, such as cyberattacks, disinformation that manipulates public opinion and amplifies social divisions, or the development of autonomous weapons. But what some fear most, given the accelerating pace of AI development, is the possibility that AI systems will surpass human intelligence, and they demand measures to keep such systems under control and aligned with human values.
Geoffrey Hinton’s warning
It is above all this possibility that led the recent Nobel Prize winner, Geoffrey Hinton, to an important change of perspective. In April 2023 he surprisingly resigned from his position as vice president and engineering fellow at Google, abandoning the front line of research. The Register headlined the decision: “Top Google boffin Geoffrey Hinton quits, warns of AI danger”[7]. In an interview published at the time by NPR, he said: “These things could become smarter than us and decide to take over, and we have to worry now about how to prevent that from happening.”[8]
The fact is that Hinton stepped away from AI development in order to speak freely about its dangers and to take an active part in setting standards for its development, collaborating with international organizations and research institutions and contributing ideas on how to face the ethical challenges posed by AI advances.
After leaving his position at Google, he gave a lecture at Cambridge entitled “Two Paths to Intelligence”[9], in which he compared how biological intelligence and AI operate and explained why he fears that AI could surpass human intelligence and take over.
In his presentation he distinguishes between digital computation, where the software is independent of the hardware, so that programs (and the weights they have learned) can be transferred from one computer to another, making knowledge “immortal”; and biological computation, carried out in the human brain, which exploits the “analog” properties of its “hardware” (neurons, synapses) and stores what it learns in that same brain, so that the knowledge cannot be transferred; he calls this “mortal” knowledge.
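A toy illustration of this “immortality” (the two-layer network and its shapes are illustrative assumptions of ours, not anything from Hinton’s lecture): everything a digital network knows lives in its numerical weights, so copying those numbers onto any other machine reproduces its behavior exactly.

```python
import numpy as np

# All the "knowledge" of a digital network is in its weights, so saving
# them and loading them into a fresh instance (in principle, on entirely
# different hardware) reproduces the behavior exactly.
def forward(params, x):
    W1, b1, W2, b2 = params
    h = np.tanh(x @ W1 + b1)      # hidden layer
    return h @ W2 + b2            # linear output

rng = np.random.default_rng(1)
params = (rng.normal(size=(4, 8)), np.zeros(8),
          rng.normal(size=(8, 1)), np.zeros(1))
x = rng.normal(size=(5, 4))

np.savez("weights.npz", *params)                       # this copy can "die"...
loaded = np.load("weights.npz")
revived = tuple(loaded[f"arr_{i}"] for i in range(4))  # ...and be resurrected

print(np.allclose(forward(revived, x), forward(params, x)))  # True
```

No comparable operation exists for a biological brain, which is the sense in which Hinton calls its knowledge mortal.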
This distinction leads to important differences in two respects: the exchange of knowledge and the learning mechanism.
Digital computation allows the learned information of the different copies of a neural network to be merged almost instantly, which gives digital intelligence far greater speed in acquiring and spreading knowledge; biological systems, on the other hand, rely on the much slower process of learning by observing and imitating a teacher.
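A sketch of what this “fusion” can look like in practice, along the lines of ordinary data-parallel training (the linear model, batch sizes and learning rate are illustrative assumptions): several identical copies each learn from their own data, then pool what they learned in a single averaged update.

```python
import numpy as np

# Four copies of the same model see four different private batches; merging
# their gradients by averaging lets every copy learn from all the data at
# once, something separate biological brains cannot do.
rng = np.random.default_rng(2)
w_true = np.array([1.0, -2.0, 0.5])     # target linear map y = x . w_true
w = np.zeros(3)                         # the shared model

for step in range(200):
    grads = []
    for _ in range(4):                  # four replicas, four private batches
        X = rng.normal(size=(16, 3))
        y = X @ w_true
        grads.append(2 * X.T @ (X @ w - y) / len(y))   # MSE gradient
    w -= 0.05 * np.mean(grads, axis=0)  # one instantly shared update

print("learned:", np.round(w, 3), "target:", w_true)
```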
Learning in digital computation relies on backpropagation which, although apparently a “dumb” algorithm, turns out to be a powerful way of adjusting the connections of a neural network according to the errors it makes; combined with the computing power and precision of digital systems, it may allow more effective learning than biological computation achieves.
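A minimal sketch of what backpropagation does (the two-layer network and the XOR task are illustrative assumptions): the output error is pushed backwards through the chain rule, and every connection is nudged in the direction that reduces it.

```python
import numpy as np

# Tiny two-layer network learning XOR by backpropagation.
rng = np.random.default_rng(3)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

for step in range(5000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))     # sigmoid output

    # Backward pass: chain rule, layer by layer.
    d_logit = (out - y) / len(X)               # cross-entropy error signal
    dW2, db2 = h.T @ d_logit, d_logit.sum(axis=0)
    d_h = (d_logit @ W2.T) * (1 - h**2)        # propagate through tanh
    dW1, db1 = X.T @ d_h, d_h.sum(axis=0)

    # Nudge each connection against its share of the error.
    W1 -= 0.5 * dW1; b1 -= 0.5 * db1
    W2 -= 0.5 * dW2; b2 -= 0.5 * db2

print(out.round(2).ravel())                    # approaches [0, 1, 1, 0]
```

The update rule is nothing more than repeated application of the chain rule, which is why Hinton can call it “dumb” and powerful at once.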
Moreover, the data and computational power available to digital intelligence are practically unlimited compared to the limited capacity of biological brains.
The controversy over the superiority of human intelligence
Scientists and philosophers have been reasoning for decades about whether AI might surpass human intelligence. Human beings acquire knowledge organically, by having experiences and understanding their relationship with reality and with themselves. This mechanism is extremely complex and still poorly understood; it includes perception, communication, memory and information endowed with qualities such as emotion, subjectivity, intentionality and attention. In addition, human intelligence can grasp abstractions that are not perceptible to the senses. Artificial neural networks, in contrast, learn abstractly, using algorithms to process enormous stores of information without relating it to reality; that is, they lack genuine emotional experience.
Although neural networks were originally developed to imitate human intelligence, the reality is that we are dealing with two very different ways of acquiring knowledge. What Hinton says is that the intelligence displayed by AI systems transcends its artificial origins and could outdo the human brain. And he argues that if AI becomes much more intelligent than humans, it will be very adept at manipulating us without our realizing it, precisely because it would be so much smarter than us.[10]
But there are other schools of thought: those who do not see AI becoming a threat, and, more radically, those who hold that it is impossible for it to reach the level of human intelligence.
Yann LeCun, a former student and collaborator of Hinton and currently vice president and chief AI scientist at Meta, believes that today’s AI still runs up against the limits of the physical world, and puts it this way: “any cat can jump over a series of furniture and reach the top of some bookshelf. Today we don’t have any AI systems that come close to doing these things, except for self-driving cars,” and those are over-engineered, requiring “mapping the entire city, hundreds of engineers, hundreds of thousands of hours of training”[11]. But he trusts that this will be overcome by a simple basic idea: if neurons can do it, neural networks can too. He nevertheless maintains that AI will never pose a threat: “AI assistants will end up being smarter than us, but we shouldn’t feel threatened. We should feel empowered by it. It’s like everyone has a team of smart people working for them. There’s nothing better than working with people smarter than you, right?”[12]
Many defend the superiority of human thought over AI on different grounds. From mathematical logic, the Nobel Prize winner Roger Penrose argues on the basis of Gödel’s incompleteness theorem, according to which the capacity of any algorithm-based system is limited, whereas human nature has non-computable features; in his view, our brain works according to the laws of quantum mechanics[13][14].
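For reference, the theorem Penrose leans on can be stated as follows (a standard textbook formulation, not Penrose’s own wording):

```latex
\textbf{G\"odel's first incompleteness theorem.}
Let $T$ be a consistent, effectively axiomatizable formal theory that
contains elementary arithmetic. Then there is a sentence $G_T$ in the
language of $T$ such that
\[
  T \nvdash G_T \qquad \text{and} \qquad T \nvdash \lnot G_T .
\]
% Penrose's reading: a human mathematician can nevertheless recognize
% G_T as true, so mathematical insight cannot be fully captured by any
% single algorithmic system T.
```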
Neuroscience, for its part, argues that artificial algorithms imitate only the conscious functioning of parts of the cerebral cortex, ignoring the fact that every conscious experience is preceded by an unconscious process and that the transition from the unconscious to consciousness is accompanied by a loss of information.[15]
An unresolved debate
Although the arguments in favor of the supremacy of human intelligence over AI may seem solid, our limited knowledge of the nature of the universe and of the mechanisms of the brain does not allow us to rule out the opposite possibility completely. It would be prudent to heed Geoffrey Hinton’s recommendations and give priority to transparency and security in the development of AI. His recent Nobel Prize will undoubtedly give greater visibility to his proposals.
Manuel Ribes – Life Sciences Institute – Bioethics Observatory – Catholic University of Valencia
***
[1] Press release: The Nobel Prize in Physics 2024, NobelPrize.org, October 8, 2024.
[2] M. Ribes, Artificial intelligence as a problem, UCV Bioethics Observatory, July 20, 2021.
[3] Geoffrey Hinton, Interview, NobelPrize.org, October 2024.
[4] Paolo Confino, Nobel laureate Geoffrey Hinton is both AI pioneer and front man of alarm, Fortune, October 10, 2024.
[5] Meg Tirrell, With AI warning, Nobel winner joins ranks of laureates who’ve cautioned about the risks of their own work, CNN, October 13, 2024.
[6] Christina Pazzanese, Great promise but potential for peril, The Harvard Gazette, October 26, 2020.
[7] Katyanna Quach, Top Google boffin Geoffrey Hinton quits, warns of AI danger, The Register, May 1, 2023.
[8] Bobby Allyn, ‘The godfather of AI’ sounds alarm about potential dangers of AI, NPR, May 28, 2023.
[9] Geoffrey Hinton, Two Paths to Intelligence, public lecture, University of Cambridge, May 25, 2023. https://blog.biocomm.ai/2023/06/06/cser-cambridge-geoffrey-hinton-two-paths-to-intelligence-06-june-2023/
[10] Yunzhe Wang, A Synopsis of Geoffrey Hinton’s Warning of Humanity’s Existential Threat from AI, Medium, May 7, 2023.
[11] Joshua Rothman, Why the Godfather of A.I. Fears What He’s Built, The New Yorker, November 13, 2023.
[12] Business Standard, AI will amplify human intelligence, not replace it, says Meta’s Yann LeCun, October 23, 2024.
[13] F. Gelgi, Implications of Gödel’s Incompleteness Theorem on A.I. vs. Mind, NeuroQuantology, Issue 3, 186-189, 2004.
[14] D. Heredia, Penrose and his position against the possibility of the computability of the human mind and consciousness, Dialnet Research Repository, University of Seville, July 4, 2024.
[15] Athanassios S. Fokas, Can artificial intelligence reach human thought?, PNAS Nexus, Volume 2, Issue 12, December 2023, pgad409. https://doi.org/10.1093/pnasnexus/pgad409