Navigating the New Battlefield:
Democratic Resilience in an Era of Hybrid Threats and AI
Hybrid Threats in the 21st Century: Cyberattacks, Disinformation, and Beyond
Democratic societies today face hybrid threats that blur the line between war and peace. These threats combine cyberattacks, disinformation campaigns, and other tactics to undermine national stability, social order, and security from within, often without a formal declaration of war. Cyberattacks and disinformation have emerged as powerful weapons in the 21st-century struggle between democratic and authoritarian societies. The contours of this emerging battlefield are evident, for example, in contemporary Ukraine: while bytes and narratives rarely inflict the same physical devastation as bullets and bombs, their strategic impact on morale, decision-making, and international perception can prove comparably decisive. Hybrid threat actors operate within the seams of our interconnected world, hacking networks, distorting information, and exploiting social divisions to advance their strategic goals without overt warfare. Their efforts can weaken economies, tilt elections, and fray the social fabric on which democracies depend. Because hybrid warfare targets the minds and the will of citizens, countering it successfully demands a new resilience from those same citizens. Artificial intelligence (AI) is accelerating these trends, enabling higher volumes and more personalised forms of manipulation.
Democratic resilience will be tested as never before.
Can free societies protect the integrity of their information space without closing it? Can they use technology to defend against technology while preserving fundamental freedoms? The answers will shape the future of governance and stability. What is certain is that democracies cannot be passive. A proactive stance is needed: investment in cybersecurity, vigorous counter-disinformation strategies, public education to immunise citizens against falsehoods, and robust international partnerships to present a united front against hybrid aggression.
Artificial intelligence is reshaping democratic systems in unexpected ways.
AI-powered tools allow malicious actors to manipulate information and disrupt electoral processes at unprecedented speed, posing new dangers to democracy. At the same time, AI offers the promise of smarter governance: by streamlining decision-making and public services, it can empower policymakers to identify and seize new opportunities. Automated tools and data analytics are already enabling governments to be more responsive and efficient, potentially transforming the way citizens interact with government institutions. But this transformation is not without its challenges.
As news and narratives are increasingly curated by AI, the very basis of an informed electorate is at risk. When algorithms dictate what information reaches the public and what does not, citizens may lose the ability to evaluate it critically and engage in political debate – a vital skill for any democracy. The rise of automated systems in the information space can, however unintentionally, undermine public trust and make citizens deeply sceptical of both media and government motives. This erosion is compounded by AI-driven targeting, which mines harvested data to craft precisely tailored, manipulative or divisive messages for each audience segment.
Another growing concern is the use of AI in surveillance, whether by government agencies or private companies. Such practices, while potentially enhancing security, also threaten personal privacy and weaken the bonds of trust between citizens and their government. As surveillance technologies become more sophisticated, the balance between security and freedom becomes more delicate.
AI-Generated Deepfakes and Bot Armies: The Case of Ukraine
One prominent AI-driven threat is the rise of deepfakes – hyper-realistic fake video or audio material generated by machine learning. Cheaply produced, large-scale disinformation is becoming a daily reality. In March 2022, in the midst of Russia's war on Ukraine, a fake video of Ukrainian President Zelenskyy was circulated online, showing him calling on Ukrainians to surrender. This 'deepfake' was quickly identified and removed, but it illustrated the danger: threat actors can use AI-generated synthetic media in influence campaigns. A fake message from a national leader, if believed even briefly, could sow panic or confusion during a crisis. Deepfakes and similar AI techniques allow completely fabricated events to be presented as real, misleading audiences and eroding the baseline of truth.
Beyond deepfakes, AI can create armies of bots and fake personas on social media that mimic human behaviour. These AI-driven bots can flood online spaces with tailored propaganda, interact with real users, and even coordinate inauthentic campaigns. During elections, such AI-enabled influence operations can micro-target voters with false stories or distorted facts tailored to their profiles and biases. For example, generative text models can produce convincing fake news articles or social media posts on demand, allowing malicious actors to automate the 'firehose' of disinformation. The scale and speed of these AI tools threaten to outpace the ability of governments or fact-checkers to respond. By the time a false narrative is debunked, it may have spread to millions of people. This creates a cat-and-mouse dynamic in which democracies struggle to counter waves of AI-fuelled falsehoods in real time.
Moreover, the so-called “persuasion industry” has found a powerful ally in AI. Data-driven algorithms now have the capacity to influence public opinion on an unprecedented scale, targeting individuals with tailored messages that can undermine the autonomy of their political choices. At its worst, this technology can be used to spread disinformation, further destabilising the relationship between democratic institutions and the electorate.
There is a silver lining: How AI can empower democratic participation
AI holds great promise for civic technology. Plain-language chatbots and real-time translation already lower entry barriers, allowing citizens without legal training – or who speak minority languages – to follow and comment on draft laws. By fostering new channels for citizen engagement (for example, AI-supported vTaiwan and Citizens’ Assemblies), these innovations can enrich democracy and empower individuals to participate actively in policymaking. But even here, caution is needed: issues such as algorithmic bias and digital exclusion threaten to deepen existing inequalities and potentially sideline marginalised voices. Designing civic-tech systems therefore requires rigorous bias auditing, open-source transparency, and continuous human oversight to ensure that the same technologies that widen participation do not simultaneously narrow whose voices are heard.
In summary, as AI develops, it presents both opportunities and risks for democracy. In the face of these challenges, democracies are slowly adapting and building resilience. Many have established cyber command centres to strengthen the defences of critical networks and share threat information. The challenge ahead is to harness AI's potential to improve governance, while safeguarding the democratic values and trust that underpin the relationship between citizens and government. The dual nature of AI's impact on democracy calls for a balanced approach to ensure that technology serves as a tool for empowerment rather than division.
Kateryna Latysh
This essay is part of a collection of essays on Reclaiming Europe.
