
AI in the Age of Distrust

By Henning Lahmann

“Cognitive warfare”, the malign use of information to manipulate target audiences in open and democratic societies, is considered one of the most urgent policy challenges today. The digital transformation has enabled states such as Russia and China to directly influence Western electorates through social media and other digital channels of communication. The development of ground-breaking artificial intelligence tools that generate and disseminate text and synthetic media at great speed is expected to further exacerbate the problem in the near future. At the same time, researchers have started working on concepts for AI-supported “early warning systems” for cognitive warfare: systems that utilise cutting-edge machine learning algorithms to detect, monitor, and even counter disinformation campaigns by adversarial actors. However, as long as the extensive scholarship in the cognitive and social sciences produces only scarce evidence about the causal mechanics of misleading information and the degree of risk such conduct actually poses, such interventions threaten to infringe on communicative rights such as freedom of expression and freedom of information without in fact making Western societies more resilient to attempts at adversarial influence. Without a solid evidentiary basis, the ongoing securitisation and externalisation of the problem of potentially harmful speech online may end up sacrificing the very rights and values that such technological countermeasures ostensibly seek to protect.
