The development of AI is currently largely in the hands of major American tech companies — and that is a worrying situation, according to leading AI experts Ann Dooms and Luc Steels. As a result, we have become far too dependent on these companies, while the direction of AI development is increasingly being shaped by a single perspective. They argue that many of the detours and alternative paths explored by researchers in the past have in fact led to some remarkable breakthroughs. In their book History of Ideas in the Science of AI, they also make the case that universities must continue to play a central role in AI research.

Ask Ann Dooms, a professor at Vrije Universiteit Brussel, about the history of AI and you are treated to a sweeping account of mathematics, mechanics and philosophy, taking in everyone from the Babylonians and Leonardo da Vinci to Blaise Pascal, Charles Babbage and Ada Lovelace, before eventually arriving at Alan Turing. In his famous 1950 paper, Computing Machinery and Intelligence, he introduced the so-called Turing Test. In this experiment, participants attempt to determine through conversation whether they are communicating with a human being or a machine. If the difference can no longer be detected, one might say the machine is displaying intelligent behaviour. But according to Dooms, the test is often misunderstood today.

“The Turing Test was never intended as some sort of competition to see who has the cleverest computer,” she says. “It was more of a pragmatic way of approaching a philosophical problem. In reality, we still do not fully understand what ‘thinking’ actually means.”

It is one of many moments in history where a scientist unintentionally made a fundamental contribution to the development of AI. “Ada Lovelace, for example, was searching for a way to automate the work carried out by calculating machines. At the time, many steps in those calculations still had to be performed manually. At one point she became inspired by the punched cards used in the Jacquard loom, which automatically created patterns in textiles. In doing so, she effectively invented programming.”

Gold at the Mathematics Olympiad for… Google

The development of AI has accelerated dramatically in recent years thanks to the growth in computing power and the rise of the internet, which has given us access to vast amounts of data. Specialised graphics chips originally developed for the gaming industry have also played a crucial role. These technologies now allow us to train AI systems on millions of examples in a relatively short space of time. This ultimately led to the emergence of large language models (LLMs), which power today’s generative AI systems. But according to Dooms, this remains largely a statistical approach to language.

“These models recognise patterns in enormous quantities of text,” she says. “But they still do not understand language in the way humans do.”

“Generative AI can produce impressive results, but genuine understanding, reasoning and insight remain largely unresolved challenges”

The progress currently being made in AI development is extraordinary. Some AI systems are now taking part in mathematics Olympiads and even winning gold medals.

“That is possible with so-called neurosymbolic AI: systems that identify patterns in data, such as language, while also reasoning using formal rules. An LLM can excel by recognising patterns in vast numbers of solved exercises and proposing a solution method, while simultaneously verifying it through formal calculations. But that does not mean computers are already more intelligent than humans. Human intuition still operates in a fundamentally different way: human participants achieve the same results with far less training and computing power. Generative AI can produce impressive results, but true understanding, reasoning and insight remain largely unsolved challenges. There is still a great deal of work to be done in that area, but if we focus solely on further optimising LLMs, we will never get there.”
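The division of labour Dooms describes, a pattern-based proposer paired with a formal verifier, can be illustrated with a toy sketch. The names `propose_candidates` and `verify` are invented for illustration; in a real neurosymbolic system the proposer would be a trained language model and the verifier a formal proof checker, not a hand-written heuristic.

```python
# Toy sketch of the "propose, then formally verify" loop. The proposer
# stands in for an LLM's pattern-based guessing; the verifier plays the
# role of the exact symbolic check. Illustrative only.

def propose_candidates(coeffs):
    """Heuristic proposer: guess integer roots of a quadratic
    a*x^2 + b*x + c by trying the divisors of the constant term c."""
    _, _, c = coeffs
    divisors = {d for d in range(1, abs(c) + 1) if c % d == 0}
    return sorted(divisors | {-d for d in divisors})

def verify(coeffs, x):
    """Formal verifier: substitute x and check the equation exactly."""
    a, b, c = coeffs
    return a * x * x + b * x + c == 0

def solve(coeffs):
    """Keep only the proposals that survive formal verification."""
    return [x for x in propose_candidates(coeffs) if verify(coeffs, x)]

print(solve((1, -5, 6)))  # roots of x^2 - 5x + 6 = 0 -> [2, 3]
```

The key design point is that the proposer may guess wildly, as a language model does, yet the system as a whole never outputs an unverified answer, because every candidate must pass the symbolic check.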

Language as an Ant Trail

According to Dooms, the future development of artificial intelligence requires us to look more closely at how humans solve mathematical problems. Mathematicians often discover solutions by inventing entirely new mathematics. And there is a parallel here with the way language evolves. This brings us to Remi van Trijp, one of the other authors of the book. Van Trijp is a researcher at Sony Computer Science Laboratories in Paris and was previously affiliated with the Vrije Universiteit Brussel. For more than twenty years he has studied the relationship between language and artificial intelligence, a line of research originally launched at the VUB by AI pioneer Luc Steels.

Van Trijp sees language as what is known as an emergent system: one that arises from local interactions between individuals. “A good example is the way ants create a trail,” says Van Trijp. “No single ant decides where the trail should go. A global structure emerges spontaneously through local interactions. Language works in much the same way: people develop words, rules and meanings through constant communication with one another. That is why language is continually evolving.”
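This dynamic can be simulated. The sketch below is a minimal version of the “naming game” that Steels and colleagues used to study emergent vocabularies: agents repeatedly pair up, and a shared word for an object emerges from purely local speaker–hearer interactions, with no central decision, much like the ant trail. The parameters and word format here are illustrative choices, not taken from the original experiments.

```python
import random

random.seed(7)  # fixed seed so the run is reproducible

def naming_game(n_agents=20, n_rounds=50000):
    """Each agent holds a set of candidate words for one object.
    Repeated local interactions drive the population to consensus."""
    vocab = [set() for _ in range(n_agents)]
    for _ in range(n_rounds):
        speaker, hearer = random.sample(range(n_agents), 2)
        if not vocab[speaker]:
            # A speaker with no word simply invents one.
            vocab[speaker].add(f"w{random.randrange(10**6)}")
        word = random.choice(sorted(vocab[speaker]))
        if word in vocab[hearer]:
            # Success: both agents discard their other candidates.
            vocab[speaker] = {word}
            vocab[hearer] = {word}
        else:
            # Failure: the hearer learns the new word.
            vocab[hearer].add(word)
    return vocab

vocab = naming_game()
shared = {w for agent in vocab for w in agent}
# Typically the whole population has converged on a single word.
print(len(shared))
```

No agent ever sees the population as a whole, yet the success rule, in which both partners drop their alternatives, is enough to collapse the many invented words into one shared convention, which is the emergent-structure point Van Trijp makes with the ant trail.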

“By asking different questions than the major technology companies do, we may be able to uncover the next breakthroughs”

In his view, both LLMs and neurosymbolic AI fall short in this respect. “Language technology always tries to freeze language in place, whereas language is constantly evolving. That is why such systems will always lag behind real people. What we really need to understand are the processes that continually reshape language.” And that, of course, requires fundamental research. “By asking different questions than the major technology companies do, we may be able to uncover the next breakthroughs.”

After the Hype: Disillusionment

Luc Steels, one of Belgium’s leading AI experts, also believes universities have an important role to play in the future development of AI. Remarkably, Steels began his career as a student of language and literature. Computers were still rare at the time, but he immediately recognised their research potential, particularly in the field of language processing. One of the earliest milestones Steels recalls is SHRDLU, a system developed in the early 1970s by Terry Winograd, a researcher at the Massachusetts Institute of Technology. The program could understand natural language and carry out commands in a simple virtual environment.

“You could type something like ‘pick up the red block’, and the system would ensure that a robotic arm picked up the correct block. Today that sounds straightforward, but in an era when computers still relied on punched cards, it was revolutionary,” says Steels. Whereas AI research was once largely driven by universities, the centre of gravity today lies with major technology companies. Generative AI in particular is now developed almost entirely by industrial players. According to Steels, there is a clear reason for this: scale.

“The techniques themselves have often existed for decades, but only now, with enormous quantities of data and computing power, can they truly be scaled up. That requires vast data centres and infrastructure, demanding investments far beyond the reach of universities.”

“Hallucinations in generative AI systems are not a temporary problem; they are built into the technology itself.”

Many companies hope that generative AI will eventually lead to Artificial General Intelligence (AGI): systems capable of performing as well as, or better than, humans across every domain. “Whoever develops such technology potentially controls a large part of the economy,” he says. “That is why companies are investing billions. But there is still no guarantee that this promise can actually be realised.”

According to Steels, the current AI wave shares characteristics with earlier technological hypes. “There will probably be another period of disappointment,” he predicts. “We have seen it before, for example with expert systems in the 1980s. A major problem with generative AI is hallucinations: systems producing convincing but incorrect information. That is not a temporary issue; it is built into the technology itself.” In Steels’s view, universities still have a crucial role to play — not in building the largest AI models, but in exploring entirely new ideas.

“Universities must look twenty years ahead,” he says. “Not at what already works today, but at what may become possible tomorrow. If we in Europe choose the right strategy and invest in research, we can once again play a leading role in the next generation of AI.”

More about History of Ideas in the Science of AI
History of Ideas in the Science of AI was developed within the framework of deMens.nu’s Willy Calewaert Chair, awarded to Luc Steels, emeritus professor at the Vrije Universiteit Brussel and a pioneer in artificial intelligence. The book has been published by VUBPress and is available in print from the authors, as well as digitally via Zenodo, Apple Books and Google Books.
