We should consider keeping AI at a distance in universities until better alternatives exist, argues Tim Brys. Brys holds a PhD in AI and is co-author of “And Then There Was AI. How to Stay Human Among Machines?”, to be published in March. This opinion piece appeared in De Standaard.

It is one of the peak moments of the academic year. Students, professors and teaching assistants push themselves to the limit during exam season. For several years now, generative AI has been rushing to their aid. Writing summaries, drafting exam questions, grading answers: it can all be done a little faster and more efficiently. Universities themselves have embraced the technology, though not without controversy. The misstep by UGent rector Petra De Sutter illustrated this once again. The question is: does AI support education, or not?

A new, extensive study by the renowned Brookings Institution sheds light on this. The authors describe the report as a premortem, because we are still only at the beginning of AI use in education. ChatGPT has existed for about three years. We therefore lack rich historical data for a thorough postmortem. We must make do with what we have. The researchers interviewed hundreds of students, parents, teachers and technology developers from fifty countries, reviewed four hundred academic studies on AI in education, and had experts analyse the data. The conclusion: in the current context, the risks of AI use in education outweigh the benefits.

Writing is thinking

First, the data show that students using generic AI models such as ChatGPT often engage in cognitive outsourcing: complex thinking is delegated, critical thinking is undermined, knowledge weakens, the line between truth and falsehood blurs, creativity flattens, and reading and writing skills erode. Creating a summary with AI is not as educational as writing that summary yourself. Brainstorming with AI means you do not learn to form creative associations on your own. Writing is thinking; if AI writes, you are not thinking. Through this outsourcing, students also lose virtues and skills such as patience, perseverance, tolerance for ambiguity and learning from mistakes.

In addition, AI use threatens the social nature of learning: students interact less with one another and with lecturers, causing the learning community (where ideas are exchanged, collaboration takes place, knowledge is built together and consensus is reached) to fragment. Attendance declines: recordings are available online anyway, and AI can summarise them. Students no longer ask questions of lecturers or peers, but of a chatbot. Lecturers increasingly doubt whether students submit authentic work and express themselves in their own words, while students suspect their lecturers of the same.

On top of this, AI chatbots can encourage addictive digital relationships, disrupting students’ emotional and social development. They are also tools of big tech surveillance and manipulation, fuelling polarisation, and they widen inequality between those who are AI-literate and those who are not. In that light, embracing AI in education feels more like a chokehold than a warm embrace.

Socratic chatbot

To be clear, this concerns the current situation, in which generic AI models such as ChatGPT are widely used without solid educational frameworks. The researchers argue that education can be strengthened with specialised AI models developed according to pedagogical principles, equipped with mechanisms and guardrails to prevent dependency and cognitive outsourcing, and embedded in an educational context where ethical and critical AI use is the norm.

Think of a Socratic chatbot trained on a selection of reliable sources, designed to resemble a human as little as possible, tightly limited in what it can write, running on a local server, and consistently challenging students rather than obediently answering them. Even then, such a chatbot will still hallucinate, meaning critical thinking remains essential.

At best, AI systems can improve access to education (for instance by translating resources), personalise learning processes (including for neurodivergent students), and take over administrative tasks from teachers, giving them more time for students. “AI enriches learning when it broadens and deepens the capacities of and interactions between students, teachers and course content,” the researchers state.

Passionate impulse

But such an outcome is far from guaranteed. Although digital technology is sometimes presented as inherently beneficial (smartboards in primary school, laptops in secondary school, AI at university: surely it must be progress!), the reality does not support that assumption. That is why it is crucial for educational institutions and teachers to distinguish where and how AI truly supports education, and where it does not. In the latter case, AI is best kept out, even if enforcing such a policy is not easy.

For one university, embracing generative AI was a passionate impulse; for another, an act of capitulation. In both cases, it happened without any proven benefit from the technology. Perhaps the suffocating embrace has lasted long enough, and we should keep AI at a greater distance until better tools are available. In any case, I wish students every success, and wise use of AI, during this exam period.