Like the Internet and other information technologies that are now part and parcel of our lives, artificial intelligence is here to stay, and it is spreading fast. Though created for good, it has become a double-edged sword: its benefits are vast, but its downsides may be unparalleled. On the one hand, AI is empowering societies in diverse ways; on the other, it is undermining the foundations of those same societies. As the digital sphere weaves societies ever more intricately together, AI will be ever-present in our lives, and that means one thing: we must learn to live with both its bright side and its dark side. It can amplify the good, but it spreads the bad even faster and wider. Falsehood, misinformation and disinformation have found fresh strength in that dark side of AI.
The Rise of AI-Driven Disinformation
Disinformation—the deliberate spread of false or misleading information—has long existed, but AI has turbocharged its capabilities, making it easier, faster, and cheaper to produce convincing fake content. Whether it’s deepfake videos of political leaders, voice cloning of public figures, or fully fabricated news articles, AI has handed disinformation agents tools that were once the domain of highly specialised teams.
A quintessential example came during Pakistan’s general elections in February 2024. With former Prime Minister Imran Khan behind bars, his team used AI to generate a video and synthetic voice message declaring his party’s victory before official results were announced. The AI-generated content amassed millions of views and sparked confusion, showing how quickly manipulated media can influence public perception (BBC, 2024; CNN, 2024).
Similarly, in Slovakia, a candidate lost an election after a viral deepfake audio clip falsely suggested he planned to rig the vote. Although the clip was debunked, the damage was irreversible (CNN, 2024). These cases underscore the real-world consequences of AI-fuelled disinformation campaigns, where perception can override truth and fabricated narratives can derail democratic processes.
Generative AI: The Ultimate Disinformation Engine
Generative AI (GAI), a technology that can autonomously create text, images, audio, and video, has fundamentally changed how content is produced. With tools like ChatGPT, Midjourney, and Stable Diffusion, creating fake news, deepfakes, or misleading visuals is no longer the preserve of tech-savvy operatives. Everyday users can now generate content that looks and sounds authentic, making it increasingly difficult to distinguish fact from fiction (DW Akademie, 2024).
These tools can be weaponised to automate entire disinformation campaigns. Bots powered by generative AI can write persuasive posts, create synthetic websites, and amplify messages across social media, all without human oversight. According to NewsGuard, more than 1,100 AI-generated news sites have emerged globally, spreading misleading narratives in multiple languages (NewsGuard, 2024).
This automation produces an overwhelming volume of false content, turning traditional fact-checking into a nightmarish pursuit. Like police chasing criminals, fact-checkers are perpetually struggling to catch up with the operators of disinformation campaigns, who stay a step ahead by flooding timelines, newsfeeds, and group chats with AI-spun narratives (University of Zurich, 2023).
Erosion of Trust
The impact goes beyond individual pieces of content: AI-driven disinformation is reshaping the public arena. With media outlets like CNET quietly publishing AI-generated articles and generative models known to “hallucinate” facts, trust in journalism and institutions is steadily eroding (Freshfields Bruckhaus Deringer, 2024).
Public confusion is growing. As internet users grapple with determining whether content is human-made or AI-generated, longstanding indicators of trust—such as professional tone or visual realism—no longer suffice. According to Vinton G. Cerf, a pioneer of the internet, we are losing our ability to assess credibility because our traditional filters have been distorted (Freshfields, 2024).
The consequences are far-reaching. When trust in media collapses, so too does the public’s ability to engage meaningfully in democratic processes. As the World Economic Forum warns, this erosion of truth poses an “existential risk” to humanity (WEF, 2024).
Targeted Manipulation and Social Division
Beyond sowing confusion, AI disinformation can be precision-targeted. Political operatives can tailor misleading content to specific demographics using AI algorithms that exploit user data, preferences, and biases. This can deepen societal divisions and disproportionately impact vulnerable groups.
Gendered disinformation, for instance, leverages AI to produce misogynistic content aimed at discrediting female politicians or activists. Meanwhile, marginalised communities may face higher exposure to harmful or fake narratives crafted to manipulate their political decisions (WEF, 2024).
Authoritarian Exploitation and Global Implications
Authoritarian regimes have not missed the opportunity to exploit generative AI. According to Democracy Reporting International, state-controlled narratives are being embedded into AI chatbots. For example, when prompted in specific ways, ChatGPT has been found to reproduce pro-Kremlin rhetoric, such as justifying the invasion of Ukraine under the guise of “de-Nazification” (Democracy Reporting International, 2023).
Moreover, open-source models that lack built-in safeguards make it easier for malicious actors to spread propaganda globally. With legal systems in at least 21 countries incentivising the use of machine learning for censorship, AI can become a tool of digital authoritarianism, silencing dissent and reinforcing state-sponsored narratives (Democracy Reporting International, 2023).
Elections at Risk
Elections are particularly vulnerable to AI-enabled manipulation. Whether it’s a fake Biden robocall urging voters to stay home or fabricated images of political rivals, generative AI provides powerful tools for electoral interference. These are not theoretical risks: they are happening in real time and at scale (DW Akademie, 2024).
Actors who view information as a battlefield can now wage war with synthetic narratives. The dangers are both domestic and foreign, with disinformation campaigns coordinated, funded, and measured for impact. As one expert noted, “These actors see information as a theater of war” (Carl Miller, Centre for the Analysis of Social Media).
The Path Forward
While AI poses serious threats, it also offers part of the solution. AI systems can be trained to detect patterns of disinformation, assist in fact-checking, and analyse the spread of misleading content. However, technical tools alone are insufficient.
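To make the detection idea concrete, here is a minimal sketch of the kind of pattern-recognition described above: a simple text classifier trained on labelled examples to flag likely disinformation for human review. The toy dataset, model choice, and threshold are illustrative assumptions, not a description of any deployed system.

```python
# Minimal sketch: a text classifier that flags likely disinformation.
# Assumptions: the tiny training set below stands in for a real labelled
# corpus; production systems use curated datasets and richer models
# (e.g., transformer encoders), not a four-example baseline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled data: 1 = likely disinformation, 0 = likely reliable.
texts = [
    "SHOCKING: leaked audio PROVES the election will be rigged!",
    "Officials confirm polling stations open at 8 a.m. on Tuesday.",
    "Secret video shows the candidate admitting to vote buying!!!",
    "The electoral commission published turnout figures this morning.",
]
labels = [1, 0, 1, 0]

# TF-IDF features plus logistic regression: a classic baseline pipeline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score a new claim; anything above the threshold is routed to human
# fact-checkers rather than auto-labelled, since models err too.
claim = "Leaked tape reveals plan to rig the vote, share before deleted!"
score = model.predict_proba([claim])[0][1]
print(f"Disinformation likelihood: {score:.2f}")
if score > 0.5:  # illustrative threshold, not a calibrated cut-off
    print("Flag for human review.")
```

In practice, a classifier like this would be only one signal among many: real detection systems combine linguistic cues, propagation patterns, and provenance metadata, and route flagged items to fact-checkers rather than auto-labelling them.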
A multi-pronged approach is essential. This includes:
- Transparent AI governance: Initiatives like the AI Governance Alliance and the Coalition for Content Provenance and Authenticity (C2PA) are pushing for ethical deployment and for watermarking of AI content (WEF, 2024); a simplified sketch of the provenance idea follows this list.
- Media literacy: Public education on critical thinking and digital literacy can empower individuals to identify misinformation.
- Cross-sector collaboration: Governments, tech firms, civil society, and academic institutions must work together to set standards and respond to emerging threats in real time.
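As a companion to the governance point above, the sketch below illustrates the core idea behind content provenance in deliberately simplified form: a publisher signs content bytes with a key, and a verifier later checks that the content is unaltered. Real standards such as C2PA use certificate-based signatures and embedded manifests rather than a shared secret; this HMAC version is only an assumption-laden illustration of the principle.

```python
# Simplified provenance sketch (illustrative only): a publisher signs
# content with a secret key; a verifier later checks that the content
# has not been altered. Real standards like C2PA use certificate-based
# signatures and embedded manifests, not a shared HMAC secret.
import hashlib
import hmac

SECRET_KEY = b"publisher-signing-key"  # hypothetical key for the demo


def sign_content(content: bytes) -> str:
    """Return a provenance tag binding the publisher to these bytes."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()


def verify_content(content: bytes, tag: str) -> bool:
    """Check that the content matches the tag it was published with."""
    expected = sign_content(content)
    return hmac.compare_digest(expected, tag)


article = b"Officials confirm polling stations open at 8 a.m."
tag = sign_content(article)

print(verify_content(article, tag))                 # True: untampered
print(verify_content(article + b" [EDITED]", tag))  # False: altered
```

The design point is that authenticity travels with the content: any edit, including an AI-generated alteration, breaks the signature and is detectable without having to trust the channel the content arrived through.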
Conclusion: Navigating the Age of Synthetic Truth
Generative AI is not inherently harmful, but its misuse poses a serious challenge to democracy, social cohesion, and truth itself. In the face of AI-generated disinformation, society must act urgently and collectively. With innovation must come responsibility, and with great power, accountability. Only then can AI serve as a tool for progress—not a weapon for deception.
References
- BBC News (2024). Pakistan elections: Imran Khan’s AI video sparks controversy.
- CNN (2024). AI-generated audio affects Slovakian election outcome.
- Democracy Reporting International (2023). AI and Disinformation: A Research Report.
- DW Akademie (2024). How Generative AI Is Changing Disinformation.
- NewsGuard (2024). Tracking AI-generated misinformation websites.
- University of Zurich (2023). Generative AI’s influence on perception of truth.
- World Economic Forum (2024). Global Risks Report.
- Vinton G. Cerf, Freshfields Bruckhaus Deringer (2024). Podcast on Trust in Media.
- Carl Miller, Centre for the Analysis of Social Media. Podcast Interview.