Artificial-intelligence tools such as ChatGPT might soon become fully autonomous by learning to perceive and interact with their environment.
Some of the surprise winners of this year’s Nobel prizes were the developers of AlphaFold, an artificial intelligence (AI) model that can accurately predict the 3D structure of more than 200 million proteins. AlphaFold is powered by artificial neural networks that can glean patterns from how amino acids interact in known proteins and then use that information to model the behaviour of unknown proteins. Chatbots such as ChatGPT rely on similar technology to ‘learn’ and mimic human language.
The revolution set in motion by this technology is compellingly explored in Terrence Sejnowski’s ChatGPT and the Future of AI — a sequel to his 2018 book The Deep Learning Revolution. Sejnowski, a computational neurobiologist, takes readers on a whirlwind tour of the evolution of AI — from simple computational models of a single neuron built in the 1950s to breakthroughs in deep learning over the past two decades that have resulted in the birth of large language models (LLMs), which can generate human-like responses to questions.
Sejnowski’s book is a must-read for those seeking to understand the world we live in — a world in which machines transform the fabric of human cognition. Put simply, a neural network is a web of interconnected nodes, or neurons, that can process data and learn from them by adjusting the strength of the connections between the nodes. When the strength of these connections is altered to achieve a desired output during a training phase, the model learns to detect underlying patterns.
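That training loop can be sketched in a few lines of code. The snippet below is an illustrative toy, not anything from Sejnowski's book: a single artificial neuron whose connection strengths (weights) are nudged towards a desired output each time it makes an error, until it detects the underlying pattern (here, the logical AND of two inputs).

```python
def step(x):
    """Threshold activation: the neuron 'fires' (1) or stays silent (0)."""
    return 1 if x > 0 else 0

def train(samples, epochs=20, lr=0.1):
    """Adjust each connection strength in proportion to the error
    between the desired and the actual output (perceptron rule)."""
    w = [0.0, 0.0]   # connection strengths between inputs and the neuron
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = step(w[0] * x1 + w[1] * x2 + b)
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# The pattern to be learned: output 1 only when both inputs are 1.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(data)
predictions = [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data]
print(predictions)  # → [0, 0, 0, 1]
```

Modern deep networks differ in scale and in using gradient-based updates across many layers, but the core idea is the same: repeatedly adjusting connection strengths until the outputs match the training targets.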
The fundamental inspiration behind neural-network architectures is the human brain. Sejnowski suggests that untangling the mystery of why these simulated models of the brain work so well — especially when they are trained on large amounts of data — could prove to be as seminal as the discovery of DNA. By rigorously interrogating the “otherworldly behaviour” of LLMs, new conceptual frameworks might emerge, he predicts.
He sees parallels between the current debates over the meanings of ‘intelligence’ and ‘understanding’ and discussions a century ago about the nature of life. Back then, vitalists argued that life is a non-physical force — an essence infused in living things but not in, say, rocks — whereas mechanists thought that life could be fully explained by physical and chemical processes. Just as the discovery of the DNA double helix bridged conceptual gaps and transformed biology, Sejnowski predicts that advances in AI will bring forth revelations about intelligence.
Evolving understanding
The holy grail of AI, Sejnowski explains, is artificial general intelligence: a machine that can think, learn and solve problems across a wide range of tasks, much like a human can. The current generation of LLMs is far from that: referred to pejoratively by some researchers as ‘stochastic parrots’, these models mostly mimic human language without true comprehension.


