
The AI World Summit in Paris has come to an end. Scientists and business leaders across Europe are now wondering what comes next. On 1 August 2024, the European AI Act came into force, introducing strict guidelines on the use of artificial intelligence by businesses and governments. However, the summit’s final declaration suggested that many of these tough regulations were softened in practice. The Americans, in particular, are strongly opposed to strict AI regulations.
“It remains to be seen how this Act will be translated into European legislation,” says Ann Nowé, VUB professor, head of the AI Lab, and academic director of FARI (the AI institute of VUB-ULB), who attended the summit.
“It’s mainly businesses that are worried,” says Nowé. “They want to know what additional requirements and regulations the government will impose on AI development and how much this will cost. After all, they have to compete with tech giants in the US and China, which are not bound by these stricter rules, at least not as long as they stay outside the European market.”
The AI Act categorises AI applications by risk level, ranging from unacceptable to minimal risk. The unacceptable category includes social scoring, manipulative techniques, and smart surveillance cameras, except in specific security situations; AI systems must not manipulate or harm vulnerable groups.
Strict rules also apply to high-risk applications. AI used in healthcare, for example, can have far-reaching consequences and is subject to strict privacy regulations. Developers of data-driven AI systems must demonstrate that their models meet requirements for data quality, transparency, and safety.
For medium-risk applications, such as chatbots and generative AI models, users must be informed when they are interacting with AI-generated content. The AI Act imposes no specific obligations on minimal-risk applications. Even then, AI is, of course, not above the law.
“Most AI companies are not opposed to the principles of the AI Act,” says Nowé. “But they fear a competitive disadvantage because the follow-up legislation will impose additional administrative burdens, requiring them to prove compliance with the regulations. The key concern is that the AI Act must remain workable. Additionally, there is tension because lawmakers do not always grasp the technical nuances of AI.”
Europe, for its part, is playing down concerns about the AI Act. The legislation will be implemented in phases and is expected to be fully in force by 2027, a gradual rollout intended to give businesses and regulators enough time to adapt to the new rules.
More info:
Ann Nowé: +32 474 94 67 34, ann.nowe@vub.be