On January 25-26, 2023, CounteR attended a conference on AI technologies used by law enforcement, with a focus on ethical and legal aspects. The conference was organised in Brussels by the EC-funded projects ALIGNER, popAI, STARLIGHT (all stemming from the same H2020 call) and the AP4AI project, with the support of DG-Home, and addressed the legal, ethical, operational and social aspects of the use of AI in a civil security context:
- AP4AI creates a global framework for AI accountability in policing, security and justice. The project's framework is grounded in empirically verified accountability principles for AI, offering a carefully researched and accessible standard that supports internal security practitioners;
- ALIGNER is a coordination and support project that aims to bring together European actors concerned with AI, law enforcement, and policing to collectively discuss the needs to be addressed in paving the way for a more secure Europe;
- popAI aims to foster trust in the application of AI and AI-enabled mechanisms in the security domain by increasing awareness and social engagement across multiple sectors. This approach will bring a unified European view across LEAs; and
- STARLIGHT, through which LEAs will collaboratively develop their autonomy and resilience in the use of AI for tackling major criminal threats, within a community that brings them together with researchers, industry and practitioners in the security ecosystem.
On the conference’s agenda were a large variety of panels and workshops with renowned speakers and panellists. The EU Agency for Fundamental Rights (FRA) moderated a panel on the legal and ethical collection and use of data in law enforcement. Other panels focused on the legal grey areas in the development of AI tools.
Panellists noted that, in general, the public is polarised regarding the rapid integration of AI into law enforcement: some want the process to move faster, while others harbour doubts and fears. A representative of the CyberPeace Institute noted that AI-driven attacks are predicted to become effective within about four years, while in the longer run AI may create completely new ways of attacking, giving more ground to offensive uses.
The AP4AI project representatives delivered a presentation of the Accountability Principles for Artificial Intelligence (AP4AI), including their relevance for the future EU AI Act. The project team presented the prototype of its online conformity assessment tool, which is designed to guide internal security practitioners in evaluating whether their internal processes for developing and deploying AI tools comply with the 12 AI accountability principles. An interactive workshop gathered validation feedback, which will be used to further refine and develop the project's online tool.
The event also featured a workshop on ensuring compliance during research and development, highlighting the approach used by the STARLIGHT project, while another session addressed compliance during use, with insights from the ALIGNER project's methodology. The last workshop was dedicated to the controversies and sociotechnical imaginaries surrounding AI in civil security.
The experts and practitioners attending the conference – including a representative of EI as a member of the CounteR consortium – came from law enforcement agencies, academia, civil society, and commerce and industry. Also in attendance were institutional representatives from the EC and EU agencies such as Europol, CEPOL, and Frontex.