The prestigious i-Police project has been discontinued after ten years due to a lack of results. Rosamunde Van Brakel calls for a new policy culture to prevent such costly failures. Her opinion piece appeared in De Standaard.
In the aftermath of the 2016 terrorist attacks, the federal government launched the prestigious i-Police project that same year, under ministers Jan Jambon (N-VA, Home Affairs) and Alexander De Croo (Open VLD, Digital Agenda). “The i-Police system will bring our police services into the 21st century,” De Croo proclaimed. The innovative solution was meant to enable better linking, sharing and analysis of information, ranging from street-camera footage to suspicious behaviour on social media. It was also intended to bring together the police’s scattered databases and applications on a single, uniform platform. Artificial intelligence (AI) would help the police assess risks more accurately, track down criminal networks and deploy resources more proactively, enabling so-called predictive policing: collecting signals and patterns that may mean little on their own, but become significant when connected.
A future was sketched in which detectives and government officials would constantly have the right, actionable information in real time to do their jobs. This would provide the police with better intelligence to prevent terrorist attacks. It fitted into a broader narrative of intelligence-led and big data policing, an approach that has gained momentum since the late 20th century.
Consultants played a major role in the project from the very beginning. As early as 2017, Smals (a non-profit organisation that serves as the government’s central ICT partner) brought in consultants to draft the specifications for the i-Police project. This led in 2021 to a framework agreement worth €299 million, signed with a consortium led by the French consultancy group Sopra Steria. Why so much time passed between the announcement, preparation and signing of the contract in 2021 remains unclear, as does how much money had already been spent by that point on consultants and Smals to prepare the project.
In 2023, the results of a Deloitte audit were published, highlighting that the project’s priorities were unclear and that communication needed improvement. Moreover, information management, information processes and ICT projects were not aligned. The audit also revealed a lack of vision for the digital transformation process.
Today, roughly ten years after the announcement, Minister of Home Affairs Bernard Quintin (MR) has pulled the plug on the i-Police project, following another Deloitte audit that reported a lack of tangible results. It is a costly fiasco, and one that raises many questions.
Ethical questions
The digitalisation of policing in Belgium is characterised by a narrow view of and approach to technology, partly driven by consultants and tech companies. Both when such projects were conceived and in the current digitalisation strategy, insufficient attention has been paid to democratic safeguards, ethics and respect for human rights. Policymakers often ignore critical academic research, and there is little investment in studies that examine the ethical and organisational conditions for such applications, or that assess the harm technology can cause to individuals, society and the environment. Within the prevailing policy culture, questions about human rights, ethics, social justice and accountability are often treated as controversial and as obstacles to innovation and security. Consider, for example, Justice Minister Annelies Verlinden’s (CD&V) negative remarks about the EU proposal to regulate online child abuse, in which she referred to a “privacy lobby”.
Scientific research must comply with strict rules on research ethics, human rights and transparency. Why do similar ethical guidelines and procedures not apply to innovation projects and experiments within the police? This raises serious questions about legitimacy and accountability.
Looking beyond AI
The question is also where the real innovation lies. Is it in using AI and big data analytics to digitise outdated practices, without considering the impact on human rights, social justice and criminological research? Would it not be more innovative to use new technological developments to create fresh frameworks for thinking about public safety and policing, and to explore how police and technology can play a positive, connective role in society?
It is a positive development that the integrated police are now shifting their focus towards “small-scale and modular projects that directly respond to operational needs and are developed by services with the necessary expertise”. But this alone will not solve the underlying problems. What is needed is a new policy culture characterised by humility, openness and a willingness to learn from past mistakes, such as the failed Phenix digitisation project in the justice system. Policy must be grounded in good governance practices, respect human rights, and take social and ecological justice into account. This would ensure that projects are critically and thoroughly assessed before resources are committed, and that failing initiatives are halted sooner. At the European level, legislation such as the AI Act already partly meets these requirements, but research shows that legal compliance alone is not enough.
When thinking about technological possibilities for the police, we need a long-term vision that looks beyond the hype surrounding AI. We must explore new policy models and ideas about public safety that respect human rights and consider environmental impact and social justice. Earlier in this newspaper, I already pointed to the urgent need for a public debate in Belgium about the use of AI and various surveillance technologies by police and security services. Yet that debate has still not been properly conducted.