Introduction
Since April 2023, the conflict between the Sudanese Armed Forces (SAF) and the Rapid Support Forces (RSF) has devastated Sudan’s political, humanitarian, and social fabric. Amid the violence, another front has emerged: the realm of digital information. What once might have been auxiliary propaganda now functions as a distinct theatre of conflict. This article examines how online disinformation, troll-farm tactics and emerging artificial-intelligence-driven content have become tools of influence in the Sudan war. In doing so, it argues that the digital domain is no longer a sideshow; it shapes battlefield narratives, skews diplomatic dynamics and deepens ethnic tensions.
State-Backed Disinformation Campaigns
Both the SAF and the RSF have recognised the value of controlling narratives. For example, false claims of battlefield victories and misattributions of massacres appear frequently in social-media feeds, often before verification is possible. According to a humanitarian-reporting study, “Misinformation, disinformation and hate speech have been used as deliberate tools of war, weaponised to distort narratives, fracture social trust and undermine humanitarian response” (CDAC Network, 2025). Similarly, independent researchers note that digital manipulation in Sudan has shifted from incidental to systematic (Greco, 2024).
The strategic aim is clear: by controlling what audiences see and believe, each faction tries to influence the domestic population, diaspora communities and international actors. In some cases, this leads humanitarian actors to respond to false alerts or change access routes on the basis of spurious threats. In short, the information environment has become a force-multiplier in the conflict.
Troll Farms and the Internationalisation of Media Wars
Beyond formal state propaganda lies a less-visible network of automated accounts and orchestrated campaigns. Both domestic actors and foreign-based troll farms exploit trending topics, hashtags and viral content to amplify favourable messaging. For example, the RSF has been observed to employ mass posting on X (formerly Twitter) in a bid to manufacture impressions of legitimacy and momentum (SMEX, 2023).
The consequence is that online opinion becomes less organic and more engineered. International diplomats, analysts and journalists who monitor social-media feeds may inadvertently consume manipulated narratives. The implications extend into international policy: if foreign governments believe, on the basis of manipulated content, that one side is advancing or an atrocity is imminent, they may trigger sanctions, humanitarian responses or diplomatic initiatives that rest on misinformed premises.
Hate Speech, Ethnic Polarisation and Offline Violence
Digital content is not restricted to battlefield claims and victory narratives. It also includes hate speech and ethnic-targeted messaging. A recent study focusing on Sudan argues that “hate speech and disinformation are used to manipulate public perception and escalate ethnic tensions, mainly through digital platforms” (Slom, 2025, p. 1). Such online campaigns may vilify whole communities, resurrect grievance narratives and encourage militia mobilisation.
In effect, digital rhetoric becomes an enabler for offline violence. Communities that perceive themselves as threatened or demonised are more likely to respond with militia activity or self-defence. The result is a cyclical pattern where online provocation triggers real-world reprisals, which are then represented online to generate further outrage.
AI-Generated Content and Automated Propaganda
Recent advances in generative artificial intelligence are accelerating the risk of disinformation. In Sudan, fact-checking organisations have identified misleading audio attributed to known actors and voice-modulated recordings disseminated online (African Arguments, 2024). Globally, research highlights that “deepfake videos could be used to depict war crimes being committed by either side, inciting further violence” (Albader, 2025).
These developments matter because they reduce barriers to entry for propaganda production. Lower-cost tools allow non-state actors, proxy networks and even local commercial services to produce convincing falsified content. Once such content leaks into international media channels, the challenge of verifying authenticity grows, and trust in all video or audio evidence may collapse. From a conflict-analysis perspective, this raises the cost of verification for NGOs, mediators and journalists and amplifies the strategic value of deception.
Implications for Peace, Diplomacy and Humanitarian Action
The ramifications of information warfare in the Sudan conflict are broad. First, peace negotiations and ceasefire talks may be initiated or blocked on the basis of alleged violations that cannot be verified. If one side claims a major atrocity or strategic victory, the other may feel unable to negotiate without appearing to do so from a position of weakness. Second, humanitarian access is compromised. False alerts, manipulated security reports and the mistrust generated by disinformation can delay or block the delivery of aid. For example, misinformation has “fuelled confusion, delaying aid and putting more lives at risk” during floods and conflict in Sudan (Khalifa, 2024).
When online narratives target ethnic groups or promote revenge, social cohesion deteriorates and local peace-building becomes harder. Groups engaged in digital warfare extend conflict into the everyday lives of civilians.
Conclusion
The war in Sudan is not just fought with weapons, territory and manpower. It is fought in timelines, social-media threads, livestreams and viral audio. Online disinformation, hate-speech campaigns and AI-generated content are part of a larger strategy of malign influence that shapes the trajectory of violence, diplomacy and humanitarian response. For actors seeking peace and stability, ignoring the digital dimension is no longer an option. Effective intervention will demand digital-literacy programmes, verification mechanisms, platform accountability and the recognition that information is a domain of war in itself.
References
Albader, F. (2025). Synthetic media as a risk factor for genocide. Case Western Reserve Journal of International Law and Technology.
CDAC Network. (2025, August 8). Sudan’s information war: How weaponised online narratives shape humanitarian crisis and response. https://www.cdacnetwork.org/s/CDAC_Harmful-information-Sudan_Flagship-Report.pdf
Greco, A. (2024, June 28). Sudan’s civil war and the future of information warfare. Encyclopedia Geopolitica. https://encyclopediageopolitica.com/2024/06/28/sudans-civil-war-and-the-future-of-information-warfare/
Ibrahim, H. A. H. (2023). Digital warfare: Exploring the influence of social media in propagating and counteracting hate. Chr. Michelsen Institute Publications.
Khalifa, M. (2024, October 30). Misinformation deepens the impact of conflict and floods in Sudan. IWMI Blogs. https://www.iwmi.org/blogs/misinformation-deepens-the-impact-of-conflict-and-floods-in-sudan/
Krack, N. (2025). Generative artificial intelligence and disinformation. In P. Gori & L. Ginsborg (Eds.), Handbook on disinformation: A multidisciplinary analysis. Springer.
Slom, F. A. A. (2025). Hate speech and disinformation in Sudan: Impact on local peace. Journal of International Relations and Peace, 2(1).
SMEX. (2023, May 19). How disinformation campaigns endanger lives in Sudan. https://smex.org/how-disinformation-campaigns-endanger-lives-in-sudan/