With the media campaigns and large investments that top international companies have been making to add AI to their products, and the heightened policy claims of countries such as China, Russia, France, Canada and the US in this area, a worldwide race for AI excellence has emerged, in which each country stakes its future on playing a leading role in Artificial Intelligence.
As it becomes clear that European and national research investments for the coming decades will focus on this area, many groups suddenly find it opportune to switch fields and claim expertise in an area that until recently was not among their priorities. Although this changing of gears will, once the dust has settled, no doubt generate interesting results and advances for some, one question is where this unbridled arms race for AI excellence will lead. Moral and ethical questions have justly been raised about employment and about the acceptability of using this technology in policy making and warfare. Yet it is usually the unforeseen consequences that have the strongest impact, and it is very unclear what those consequences may be.
It is based on this contemplation that The Anh Han (Teesside University), together with Luis Moniz Pereira (Universidade Nova de Lisboa) and Tom Lenaerts (Université Libre de Bruxelles and Vrije Universiteit Brussel), decided to use their expertise and apply for support from the Future of Life Institute for this study. The Future of Life Institute is a volunteer-run research and outreach organization that works to mitigate existential risks facing humanity, particularly existential risk from advanced Artificial Intelligence. Its founders include MIT cosmologist Max Tegmark and Skype co-founder Jaan Tallinn, and its board of advisors includes entrepreneur Elon Musk and the late cosmologist Stephen Hawking.
The team’s ambition in this project is to understand the dynamics of safety-compliant behaviours within the ongoing AI research and development race. On that basis, they aim to provide timely advice on regulating the present wave of developments, and to offer policy makers and involved participants recommendations for preventing undesirable escalation of the AI race.
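To give a flavour of the kind of dynamics at stake, consider a toy simulation of a two-team development race in which each team either complies with safety precautions (progressing slowly but surely) or skips them (progressing faster, at the risk of causing a disaster). This sketch is purely illustrative: the model structure, parameter names and values below are assumptions made for exposition, not the project's actual model.

```python
import random

# Hypothetical, illustrative parameters -- not taken from the project itself.
W = 100        # development steps needed to reach the breakthrough
S = 1.5        # speed-up gained by skipping safety precautions
P_RISK = 0.01  # per-step probability that an unsafe team causes a disaster
PRIZE = 1000.0 # value of winning the race
N_RUNS = 10_000

def race(safe_a: bool, safe_b: bool) -> tuple[float, float]:
    """Simulate one two-team race; return the payoff of each team."""
    pos = [0.0, 0.0]
    safe = [safe_a, safe_b]
    alive = [True, True]
    while True:
        for i in range(2):
            if not alive[i]:
                continue
            if not safe[i] and random.random() < P_RISK:
                alive[i] = False   # disaster: team drops out with payoff 0
                continue
            pos[i] += 1.0 if safe[i] else S
        finishers = [i for i in range(2) if alive[i] and pos[i] >= W]
        if finishers:
            winner = random.choice(finishers)  # ties broken at random
            return tuple(PRIZE if i == winner else 0.0 for i in range(2))
        if not any(alive):
            return (0.0, 0.0)      # both teams caused disasters

def mean_payoffs(safe_a: bool, safe_b: bool) -> tuple[float, float]:
    """Average payoffs over many simulated races."""
    totals = [0.0, 0.0]
    for _ in range(N_RUNS):
        a, b = race(safe_a, safe_b)
        totals[0] += a
        totals[1] += b
    return totals[0] / N_RUNS, totals[1] / N_RUNS

if __name__ == "__main__":
    for sa, sb in [(True, True), (True, False), (False, False)]:
        label = f"{'SAFE' if sa else 'UNSAFE'} vs {'SAFE' if sb else 'UNSAFE'}"
        print(label, mean_payoffs(sa, sb))
```

Depending on the parameters, skipping precautions can be individually tempting against a compliant opponent even though mutual non-compliance leaves both teams worse off than mutual compliance. Characterising when such incentives arise, and which interventions change them, is the kind of question the project addresses.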
More information can be obtained from the project coordinators:
- The Anh Han, School of Computing, Media and the Arts, Teesside University
Email: t.han@tees.ac.uk
Web: https://www.scm.tees.ac.uk/t.han/
- Luis Moniz Pereira, NOVA-LINCS Lab, Universidade Nova de Lisboa
Email: lmp@fct.unl.pt
Web: http://userweb.fct.unl.pt/~lmp/
- Tom Lenaerts, Université Libre de Bruxelles and Vrije Universiteit Brussel
Email: tlenaert@ulb.ac.be (ULB) and Tom.Lenaerts@vub.be (VUB)
Web: http://di.ulb.ac.be/map/tlenaert/ (ULB) and https://ai.vub.ac.be/members/tom-lenaerts (VUB)