An empirical evaluation of reasoning models for classifying information manipulation techniques


Oleg A. Boiko
Valeriy Ya. Danylov

Abstract

The rapid evolution of modern geopolitical conflicts has transformed information manipulation and propaganda techniques from tools of persuasion into sophisticated weapons of mass influence. As these operations become increasingly complex, relying on subtle psychological tactics rather than overt falsehoods, the development of advanced, automated detection mechanisms is a critical security challenge. This study aims to bridge the gap between theoretical capabilities and practical applications by empirically evaluating the performance of emerging reasoning models on the specific task of classifying information manipulation techniques. The primary objective is to benchmark these generative architectures against earlier supervised baselines to determine whether their internal chain-of-thought capabilities offer a tangible advantage over traditional pattern-matching approaches in identifying complex rhetorical strategies. The research methodology uses a standardized international benchmark dataset for propaganda detection (SemEval-2020 Task 11) to conduct a comparative analysis of frontier models without task-specific fine-tuning. The study employs an inference-only strategy, integrating role-playing, definition embedding, and structured reasoning instructions to simulate expert analysis. A key methodological contribution is the systematic variation of the reasoning budget allocated during inference to measure the correlation between computational deliberation and classification accuracy. The investigation reveals a distinct semantic advantage: reasoning models significantly outperform previous supervised systems in detecting nuanced techniques that rely on cultural context, emotional weight, and indirect logic. However, the results also uncover a critical limitation: increased reasoning effort can degrade performance on structurally simple tasks, confirming the existence of an overthinking phenomenon in automated classification.
The analysis further identifies a non-linear relationship between computational cost and performance, indicating that monolithic reasoning models often yield diminishing returns compared to lightweight architectures for high-volume processing. The paper concludes that while reasoning models represent a paradigm shift in semantic understanding, they are not yet a universal solution for all information manipulation types due to structural blind spots and economic inefficiencies. The study therefore proposes moving away from single large models toward multi-agent systems: an adaptive architecture that assigns specialized tasks to a team of virtual experts, balancing precision with operational viability in the defense of the information space.
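The inference-only strategy described above (role-playing, definition embedding, and a structured reasoning instruction, with a tunable reasoning budget) can be illustrated with a minimal prompt-construction sketch. This is not the authors' code: the function name, the budget knob, and the paraphrased definitions are illustrative assumptions, though the technique names themselves come from the SemEval-2020 Task 11 inventory.

```python
# Illustrative sketch (hypothetical helper, not the paper's implementation):
# assembling a classification prompt that combines role-playing, embedded
# technique definitions, and a structured reasoning instruction, with a
# reasoning-budget parameter that would map to a model's effort setting.

# A small subset of SemEval-2020 Task 11 techniques; definitions paraphrased.
TECHNIQUES = {
    "Loaded Language": "words or phrases with strong emotional connotations",
    "Flag-Waving": "appealing to patriotism or group identity to justify a claim",
    "Causal Oversimplification": "assuming a single cause for a complex issue",
}

def build_prompt(span: str, reasoning_budget: str = "medium") -> str:
    """Build an inference-only classification prompt for one text span.

    `reasoning_budget` is a hypothetical knob ('low'/'medium'/'high') that
    a caller would translate into the model's reasoning-effort setting.
    """
    definitions = "\n".join(f"- {name}: {desc}" for name, desc in TECHNIQUES.items())
    return (
        # Role-playing: frame the model as a domain expert.
        "You are an expert analyst of information manipulation techniques.\n\n"
        # Definition embedding: give the label inventory with short glosses.
        f"Technique definitions:\n{definitions}\n\n"
        # Structured reasoning instruction plus the budget hint.
        "Reason step by step about the span below, then answer with exactly "
        f"one technique name. (reasoning effort: {reasoning_budget})\n\n"
        f'Text span: "{span}"'
    )

prompt = build_prompt("Our glorious nation will never bow!", reasoning_budget="high")
print(prompt)
```

Sweeping `reasoning_budget` over its settings while holding the rest of the prompt fixed is one plausible way to realize the paper's budget-variation experiment, though the actual mechanism used by the authors may differ.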

Article Details

Section

Computer science and software engineering

Author Biographies

Oleg A. Boiko, National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”, 37 Beresteiskyi Ave., Kyiv, 03056, Ukraine

PhD Student, Department of Artificial Intelligence, Educational and Research Institute for Applied System Analysis

Valeriy Ya. Danylov, National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”, 37 Beresteiskyi Ave., Kyiv, 03056, Ukraine

Doctor of Engineering Sciences, Professor, Department of Artificial Intelligence, Educational and Research Institute for Applied System Analysis; Laureate of the Borys Paton National Prize of Ukraine

Scopus Author ID: 7201827051
