AI Is Not Destroying Democracy. But It Is Making It Much Harder to Defend
- 24/04/2026
- Posted by: Balkans Forward
- Category: Blog
You have probably seen the headlines. AI-generated deepfakes. Chatbots that write thousands of fake news articles per hour. Political ads that target you based on your fears rather than your interests. If you follow the news about artificial intelligence and democracy, it can start to feel like we are already living in the endgame, as if the information space has been so thoroughly polluted that there is no way back.
The reality is more complicated and, in some ways, more unsettling than the headlines suggest. AI did not create the crisis of democratic trust. But it is handing a set of very powerful tools to the people who want to deepen it.
What Was Already Broken
Before we blame AI for everything, it is worth being honest about what was already wrong.
Democratic societies were struggling with disinformation long before large language models existed. Social media platforms discovered early that outrage and fear spread faster than accurate information, and they built their business models around that discovery. Political actors, from domestic populists to foreign intelligence services, learned to exploit this. Conspiracy theories that would have taken years to spread through word of mouth could reach millions of people in hours.
The erosion of local journalism meant that in many communities, there was no longer a trusted institution checking whether what politicians said was true. Polarisation meant that even when facts were available, people increasingly trusted sources that confirmed what they already believed and dismissed everything else as propaganda.
AI did not cause any of this. It arrived into an information ecosystem that was already under serious strain.
What AI Actually Changes
What AI does is dramatically lower the cost and raise the scale of manipulation.
Creating a convincing fake video of a politician saying something they never said used to require a significant budget, technical expertise, and time. Now it requires a laptop and a free tool. Writing a thousand variations of a misleading news article, each slightly different to evade detection algorithms, used to require a team of people. Now it takes minutes. Identifying which specific fears and insecurities make a particular voter in a particular district most likely to stay home on election day used to require expensive data analysis. Now it can be automated cheaply at scale.
This is the shift that matters. The underlying tactics are not new. The barrier to using them at massive scale has collapsed.
There is also a subtler problem that gets less attention. AI does not only enable the creation of false content. It also makes it harder to trust true content. When anyone can generate a convincing fake video, the existence of that possibility becomes a tool in itself. A politician caught on camera saying something embarrassing can now claim the video is AI-generated, and a portion of the audience will believe it, because they know the technology exists. This is sometimes called the liar’s dividend: the ability to dismiss real evidence by pointing to the theoretical possibility of fabrication.
Who Benefits and Who Pays
The people who benefit most from AI-assisted disinformation are those who were already advantaged in the information war. Authoritarian governments with resources to invest in influence operations. Political movements willing to abandon factual constraints on their messaging. Platform businesses that profit from engagement regardless of whether the content is true.
The people who pay the highest price are those who were already most vulnerable. Communities with lower levels of media literacy. Populations in countries where independent journalism is weak or suppressed. Young people who get most of their information from algorithmic social media feeds that are almost impossible to fully understand or regulate.
There is a geographic dimension to this that European institutions have been slow to address. The threat is not uniform. Countries in Eastern Europe and the Western Balkans, many of which are targets of systematic Russian disinformation campaigns, face a qualitatively different challenge than countries with strong public media and well-funded civil society sectors. A strategy designed primarily for Western European democracies will not work equally well in Novi Sad or Skopje.
What Is Actually Being Done
The EU has moved faster on AI regulation than most other jurisdictions. The AI Act, which entered into force in 2024, includes provisions directly relevant to disinformation: requirements for transparency when AI-generated content is used, restrictions on certain types of manipulative systems, and obligations on platforms to address systemic risks. These are not trivial measures.
There are also significant investments in media literacy programs, fact-checking infrastructure, and support for independent journalism across the continent. The European Digital Media Observatory funds researchers and fact-checkers working across member states and candidate countries. Several Erasmus+ and civil society programs include components specifically designed to build resilience against disinformation among young people.
None of this is enough. Regulatory frameworks move slowly and technology moves fast. Media literacy programs reach motivated participants, but the people most susceptible to disinformation are often not looking for media literacy training. Fact-checkers are systematically outpaced by the volume of false content being generated.
What You Can Actually Do
Being realistic about the scale of the problem does not mean there is nothing to do.
At the individual level, the most effective thing is not, as it turns out, learning to spot deepfakes. Technology for creating convincing fakes is advancing faster than most people’s ability to detect them. What works better is developing habits around information sources rather than individual pieces of content. Who produced this? What is their track record? Is this being reported by multiple independent outlets? These questions are not foolproof, but they are more durable than any specific detection technique.
Supporting independent local journalism, both financially if you can and by actually reading it, matters more than most people realise. Local journalism is the first thing that disappears when the information ecosystem degrades, and its absence creates the space in which disinformation flourishes.
And politically: the regulatory decisions being made right now about AI, about platform accountability, and about media ownership will shape the information environment for the next decade. These decisions are being made largely by people who are not young and who will not live with the consequences as long as young people will. Treating them as a priority, in elections, in advocacy, in the organisations you support, is not naive idealism. It is a reasonable response to where the stakes actually lie.
Bottom Line
AI is a tool. Like most powerful tools, it can be used to build things or to break them. The question of which it will predominantly do to democracy is not settled by the technology itself. It is settled by political choices, regulatory decisions, and the behaviour of millions of ordinary people navigating an information environment that is genuinely more difficult to read than it was ten years ago.
The threat is real. So is the capacity to respond to it. What is missing, mostly, is urgency.