Professor Dr Vincent Ginis is the VUB’s Special Envoy for Artificial Intelligence and Data-Driven Strategic Policy Preparation. “The OO-scan, our internal exercise where we automatically screened 4,684 course component sheets for AI fraud risk, was one of the showpieces of our AI policy. Other universities looked at it with astonishment.”

How do you implement AI in a university that both embraces and questions AI?
“It has been, and still is, a challenge on many levels. When we started about two and a half years ago, the Vice-Rectorate for Education and Student Affairs launched a call to all lecturers and staff to share ‘inspire us’ submissions. They told us how they ideally wanted to transform their teaching activities or methods. We discovered a strong culture of experimentation on our campuses, and we identified a few early adopters already working with AI. We’ll soon launch a similar call again, asking VUB colleagues to share practices we may not yet know.”

“A huge amount of work and time went into what you could almost call evangelising: helping students and academic staff navigate AI”

“A huge amount of the Vice-Rectorate’s work went into what we jokingly called evangelising: helping students and academic staff navigate AI. What are the pitfalls? The misunderstandings? The best practices and opportunities? We set up lecture series with the AI for Education team. Early on, we also established a team offering language-model support to researchers. It’s all quite invisible work: not big projects, but extremely time-consuming.”

How did the implementation itself unfold?
“Modern AI, especially large language models, is unusual software. A top-down approach is far less efficient than a bottom-up one. Traditional software such as SAP or Microsoft Office is rolled out top-down and everyone simply starts using it. But with large language models, users need to discover on the spot how to work with them. It’s like having a personal assistant. People often need to figure out ad hoc how to adjust detailed processes and how to do that efficiently. That’s why the VUB set up many bottom-up initiatives—within programme councils or inspiration sessions—where colleagues literally must try it themselves, inspiring others to experiment along the way.”

Portrait of Vincent Ginis

“In parallel, we realised that because GPT models evolve so quickly, it’s risky to build major systems on a single model. Six months later, a new model may make all previous work obsolete. That’s why many of our projects are ‘low-hanging fruit’: isolated, well-framed cases where we know exactly who the stakeholders are. It’s safer, clearer and more workable than building giant new software systems.”

All courses at the VUB were scanned by AI. Why?
“Too many people see AI only as a tool to speed up what they already do. A negative example is writing emails faster. The OO-scans are something completely different—something we simply couldn’t do before. They opened up new conversations between the Vice-Rectorate for Education and lecturers about their course files, adding a new dimension to quality assurance. We can now run scans across many themes: evaluation formats, teaching methods, even how learning pathways are woven through a curriculum. In the past, that was only done through sampling or anecdotes. It became one of the flagship achievements of our AI policy this past year. Other universities looked at what we were doing with wide eyes. These examples—where we don’t simply try to do everything faster but actually improve quality by enabling things we couldn’t do before—are the most exciting signs of what AI can offer.”

“We also look at the new opportunities AI gives us to strengthen education and research structurally”

“Nationally and internationally, universities tend to focus heavily on detecting AI-generated texts—catching students who used language models improperly. Understandable. But at the VUB we also pay attention to how AI can help us structurally improve education and research. That vision is only possible thanks to the shared mindset across our Vice-Rectorates.”

What about research policy and the support researchers receive?
“AI affects research differently, largely because research aims to solve a specific societal issue. That means researchers have far more freedom to use AI tools. In education, the goal of a master’s thesis is never the thesis itself, but the student’s learning process. The purpose lies within the student. So you need different policy frameworks for researchers and students. The rules for students are stricter. For researchers, we take a more open approach: tools can be used freely as long as a number of checks are followed. The reason for this more flexible approach is simple: if tomorrow a new cancer treatment is developed with help from a language model, we’ll celebrate. But students outsourcing their own learning process to AI is a very different story.”

“Researchers and students need different policy frameworks”

AI is also used in the VUB’s Climate Action Plan.
“The university committed to sustainability years ago. That comes with major responsibilities: tracking activities, assessing their impact and reporting on them. It’s a complex process—and a good fit for a language model, which can gather documentation scattered across multiple places. We organised several sessions with the colleagues involved, helping them accelerate the calculations needed to map out sustainability measures. A language model can help by making estimates or by supplementing incomplete information with online sources. You can think of such a model as a diligent personal assistant.”

A common concern is that AI may reduce human intellectual effort.
“That’s true: if you don’t use it, you lose it. If you’ve driven around a city with Google Maps for years, you’re less likely to find your way without it. That’s why I prefer the perspective where we agree to use AI for things we didn’t do before. That doesn’t mean handing tasks away, but expanding the range of tasks we can complete and producing higher-quality outcomes. An example from my own practice: I teach several maths courses to large first-year groups—around 800 students. This year, I can have students complete a full written test in class six times per semester. In the past, this happened twice a year, and with multiple choice. I now give these handwritten tests—fully anonymised—to a language model and ask it to summarise the most common mistakes. That wasn’t possible before: I only got one overview per year, at the final exam, too late to adjust teaching. Now I have multiple measurement moments. If GPT tells me that 60% still struggle with logarithmic differentiation, I know exactly what to revisit.”
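The aggregation step behind a figure like “60% still struggle with logarithmic differentiation” can be sketched in a few lines. In practice the per-script mistake labels would come from a language model reading the anonymised handwritten tests; in this illustrative sketch they are supplied directly, so the figures and topic names are made up for the example and the code runs offline.

```python
# Sketch of aggregating mistake labels across anonymised test scripts.
# The labels themselves would come from an LLM pass over each script;
# here they are hard-coded so the example is self-contained.
from collections import Counter

def most_common_mistakes(tags_per_student, top_n=3):
    """tags_per_student: one set of mistake labels per student script.
    Returns (label, share of students) pairs, most frequent first."""
    counts = Counter(tag for tags in tags_per_student for tag in tags)
    n = len(tags_per_student)
    return [(tag, count / n) for tag, count in counts.most_common(top_n)]

# Illustrative data for five scripts (not real results).
scripts = [
    {"logarithmic differentiation", "chain rule"},
    {"logarithmic differentiation"},
    {"chain rule"},
    {"logarithmic differentiation"},
    {"limits"},
]

for tag, share in most_common_mistakes(scripts):
    print(f"{share:.0%} of students: {tag}")
```

With several such measurement moments per semester, the lecturer can compare these shares over time and see whether revisiting a topic actually reduced the error rate.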

“The VUB SharePoint page ‘Generative AI and Education’ holds a wealth of information”

Steven Van Luchene, Head of Educational Support, was closely involved in the OO-scan project.
“All concrete AI applications were really rolled out only this year. We first determined our focus—you can’t do everything at once. In education, that focus was twofold: the impact on assessment—how do we know whether students actually achieve the competencies?—and AI literacy, both for lecturers and students.”

“In terms of assessment quality, we set up the now famous OO-scan. As Vincent mentioned, we automatically scanned every VUB course using AI. We used the course component sheets, or OO-files—4,684 of them. The scan checked to what extent the described assessment methods were vulnerable to AI fraud. For 54%, that risk existed. Based on the February 2025 scan, we gave each lecturer individual recommendations. After a second scan a few months later, the risk dropped from 54% to 40%. Based on those figures, we proactively contacted several programmes with many written assignments to help them make their assessments more AI-proof. Another initiative is the curriculum innovation mandates, or CVMs. Educational Support allocated €300,000 for projects integrating AI literacy structurally into curricula.”
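In very simplified form, a screening pass like the OO-scan is a classification loop over course sheets that yields a flagged list and an overall risk share. The sketch below is an illustration, not the VUB’s implementation: the field names are invented, and a keyword heuristic stands in for the language-model judgment so the example runs offline.

```python
# Simplified sketch of an OO-scan-style screening pass over course sheets.
# A keyword heuristic stands in for the LLM call that judged each sheet.
from dataclasses import dataclass

# Assessment descriptions treated as vulnerable to AI fraud
# (illustrative hints, not the VUB's actual criteria).
VULNERABLE_HINTS = ("take-home essay", "written assignment", "open-book paper")

@dataclass
class CourseSheet:
    code: str
    assessment: str  # free-text assessment description from the OO-file

def flag_ai_fraud_risk(sheet: CourseSheet) -> bool:
    """Stand-in for the LLM judgment: does the described
    assessment method look vulnerable to AI fraud?"""
    text = sheet.assessment.lower()
    return any(hint in text for hint in VULNERABLE_HINTS)

def scan(sheets):
    """Return the flagged course codes and the share of courses at risk."""
    flagged = [s.code for s in sheets if flag_ai_fraud_risk(s)]
    share = len(flagged) / len(sheets) if sheets else 0.0
    return flagged, share

# Two toy sheets instead of 4,684 real ones.
sheets = [
    CourseSheet("MATH101", "Closed-book written exam, on campus"),
    CourseSheet("PHIL202", "Take-home essay of 3,000 words"),
]
flagged, share = scan(sheets)
print(flagged, f"{share:.0%}")
```

The flagged list is what makes the follow-up possible: it tells the Vice-Rectorate exactly which lecturers to send individual recommendations, and a re-run months later shows whether the overall share has dropped.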

“The OO-scan was a turning point: automatically screening 4,684 courses for AI fraud risk—something we simply couldn’t do before”

“I’m also very pleased with the VUB SharePoint page ‘Generative AI and Education’, developed by our project team. It contains a wealth of information, tips and tricks for lecturers and programmes. For example, we have several Canvas modules that lecturers or programmes can copy-paste straight into the curriculum. Courses such as ‘Understanding course material with GenAI’ or ‘Data Analysis with GenAI’. Students learn to use AI as a personal tutor. On the lecturer side, we’ve collected inspiring cases—drafting example exams, preparing lesson plans… I encourage every lecturer to explore those pages!”

“As for the future, we’ll continue supporting programmes that still need to make major shifts. But an even bigger transition awaits us: so far, we’ve focused on how to deal with AI within the current educational model. The next step is far larger: how should education itself change in the age of AI? Which competencies do students need—and which not? What should students learn, and how do we design education so that generations raised in the AI era become future-proof graduates? We’re not there yet. A small group is currently reflecting on this for the next policy plan, aiming to produce a kind of white paper.”

Central Internal Appeals Committee withstands a surge in appeals

Sarah Heyl is a legal adviser at Education and Student Affairs. She uses artificial intelligence in her work for the Central Internal Appeals Committee.
“Every year, especially in the second week of September, students submit appeals against study-progress decisions taken by the VUB. The number of appeals has risen exponentially. Every appeal requires a response. In a short period, an enormous amount of data needs to be processed.”

Students can appeal a wide range of decisions: exemption requests, exceptional circumstances requests, disciplinary measures for exams, exam decisions, and refusals of enrolment or enrolments under binding conditions.

“We see a huge increase. Last year we had around 450 appeals; this year we’re above 730. It’s a major challenge to respond adequately while ensuring students don’t wait longer for decisions. Around half the appeals concern refused enrolments or enrolments with binding conditions. Every admissible case goes to a hearing where a large committee deliberates. Our service does not decide. Three voting members from the ZAP do, supported by advisory members—a legal expert, a faculty secretary, an ombudsperson and a study-path counsellor. In the past we analysed all cases manually. Now Vincent Ginis’s team uses AI to conduct this preparatory administrative step. AI also generates a summary, and in some cases we use that as a starting point. But of course we always check and correct where needed. We never adopt it blindly.”

“AI helps us process large volumes of case data quickly. This allows us to give students clarity sooner”

“It may still be early days, but we’re open to automating other steps—except the legal assessment, of course. Final decisions are made by the appeals committee, and we do not want to compromise that. Another avenue to explore is exemption cases. You could analyse the learning outcomes already achieved and compare them with the required learning outcomes of the course still to be completed. AI could retrieve those course sheets automatically. It could also compare the documents. Again, always with oversight from our legal experts and the committee.”