CounteR Hosted a Webinar on the Role of Large Language Models in Countering Radicalisation

On 25 January 2024, CounteR conducted its second public webinar with expert panellists from AST, IMG, INS, and ELTE.

Entitled “The Role of Large Language Models in Countering Radicalisation”, CounteR’s second webinar marked a milestone in the series of public activities that shape the final stage of project implementation.

Attended by more than 50 participants and moderated by EI, this webinar provided an opportunity for the CounteR consortium to present progress achieved in the development of the CounteR platform on both the technological and the research levels.

The CounteR Project and its Solutions – Adrian Onu from AST opened with a project overview and presented the CounteR solution, stressing that it provides a proactive, technology-driven approach to identifying and countering radical content and the spread of extremist ideologies online. The project has made remarkable progress in usability, performance, security, and accuracy in the period from Pilot 1 to Pilot 3. Coming up in 2024 are Pilot 4 and testing with external stakeholders, tentatively scheduled for the end of March. Those interested in attending the CounteR Platform’s Beta Test can subscribe through this link.

Synthetic Visual Data Generation using AI – In his presentation, Angel Spassov from IMG discussed the use of synthetic data for model development and presented demos of accessible image generation with state-of-the-art (SOTA) AI. He gave a number of examples of faked personas in videos and outlined IMG’s contributions to CounteR in image analysis and in the development of the semantic reasoning and insight correlation engine.
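As a rough, illustrative sketch of the kind of accessible image generation discussed in this segment, the snippet below uses the Hugging Face diffusers library with a publicly available text-to-image model; the model checkpoint and prompt are assumptions made for this example, not the specific tooling shown in the demo.

```python
# Minimal sketch: generating synthetic images with an off-the-shelf
# text-to-image diffusion model (illustrative only, not the tooling
# presented in the webinar demo).
import torch
from diffusers import StableDiffusionPipeline

# Publicly available checkpoint, chosen here as an assumption for the example.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Hypothetical prompt; in practice prompts would be tailored to the
# synthetic-data needs of the downstream model.
prompt = "a portrait photo of a person, studio lighting, neutral background"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("synthetic_sample.png")
```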

Embedding Techniques of Networks – In the next module, Professor Gergely Palla from ELTE stressed the value of embedding networks, particularly into hyperbolic spaces. He pointed out that hyperbolic spaces expand exponentially with distance and presented a comparison of different geometries. The expert outlined embedding approaches such as node2vec, dimension reduction, and likelihood optimisation.
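As an illustration of one of the embedding approaches named above, the following sketch implements a simple node2vec-style embedding (random walks fed to a skip-gram Word2Vec model) in an ordinary Euclidean space; hyperbolic embeddings require dedicated tooling. The example graph, walk length, and dimensions are assumptions chosen for brevity.

```python
# Minimal node2vec-style sketch: embed graph nodes by training Word2Vec
# on random walks. This yields a Euclidean embedding; hyperbolic
# embeddings need dedicated methods. Graph and parameters are illustrative.
import random
import networkx as nx
from gensim.models import Word2Vec

G = nx.karate_club_graph()  # small example graph

def random_walk(graph, start, length=10):
    """Unbiased random walk of fixed length starting from `start`."""
    walk = [start]
    for _ in range(length - 1):
        neighbors = list(graph.neighbors(walk[-1]))
        if not neighbors:
            break
        walk.append(random.choice(neighbors))
    return [str(node) for node in walk]

# Several walks per node act as "sentences" for the skip-gram model.
walks = [random_walk(G, node) for node in G.nodes() for _ in range(20)]

# Train skip-gram Word2Vec on the walks to obtain node embeddings.
model = Word2Vec(walks, vector_size=16, window=5, min_count=1, sg=1, epochs=5)

# Embedding vector for node 0 and the nodes most similar to it.
print(model.wv["0"])
print(model.wv.most_similar("0"))
```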

Challenges in Model Bias and Explainability – The webinar’s concluding segment, moderated by Guillem Garcia from INS, was dedicated to the challenges of model bias and explainability. In this context, bias refers to systematic and unfair errors in a model’s predictions. The sources of bias in radicalisation models are two-fold: bias in the datasets used to train the models, and bias in the large language model itself. One way to de-bias the datasets is to replace sensitive terms referring to protected groups with generic ones across all training data, as in the sketch below.
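As a minimal sketch of the term-replacement approach to de-biasing described above, the snippet below maps sensitive group-referring terms to generic placeholders before the texts are used for training; the term list and example texts are hypothetical and not the project’s actual vocabulary or data.

```python
# Minimal sketch of dataset de-biasing by term replacement: sensitive
# terms for protected groups are mapped to generic placeholders before
# training. The mapping below is a hypothetical example.
import re

GENERIC_TERMS = {
    # hypothetical mapping: sensitive group term -> generic placeholder
    "immigrants": "people",
    "refugees": "people",
    "foreigners": "people",
}

# One case-insensitive pattern matching any of the sensitive terms.
pattern = re.compile(
    r"\b(" + "|".join(map(re.escape, GENERIC_TERMS)) + r")\b",
    flags=re.IGNORECASE,
)

def debias(text: str) -> str:
    """Replace sensitive group terms with their generic counterparts."""
    return pattern.sub(lambda m: GENERIC_TERMS[m.group(0).lower()], text)

training_texts = [
    "The post blamed immigrants for the incident.",
    "Violent rhetoric targeting refugees spread across the forum.",
]
print([debias(t) for t in training_texts])
```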

Among the guests of the virtual session were representatives of the TCO cluster, which aims to raise awareness of the EU’s Regulation on Terrorist Content Online (TCO) and is an initiative of the FRISCO Project together with other sibling projects funded through Horizon Europe.
