Art has always been central to the identity, culture, and expression of every civilisation. It encapsulates human history, values, struggles, and triumphs. From the ancient Adinkra symbols of Ghana to the majestic sculptures of Greece and the resonant rhythms of African drums, art—in all its forms—has been both a mirror and a moulder of societies. In Africa, where oral traditions remain integral, music and performance are key instruments of storytelling and intergenerational knowledge transfer.
Yet, the same power that allows art to inspire and unite also renders it vulnerable to manipulation. Throughout history, art has been harnessed for purposes far removed from beauty or truth—namely, for propaganda, disinformation, and even psychological warfare.
Today, the evolution of artificial intelligence (AI) has added a formidable layer of complexity, transforming art into a sophisticated tool for influence and, at times, deception.
Art as a Tool for Social Influence and Control
Music, in particular, is an emotive and potent form of art. During the Trans-Atlantic Slave Trade, enslaved Africans used music and drumming as tools for covert communication and resistance, creating a shared cultural language that transcended linguistic and regional differences (Eyerman, 2001). In contemporary African societies, music continues to accompany everyday activities—from communal labour to political mobilisation—serving as a unifying force.
This persuasive power is not lost on political actors. Across the world, governments and movements have used music to stir nationalist fervour or promote ideological narratives. In 1985, the song We Are the World, performed by USA for Africa, became a global anthem of solidarity, raising millions for famine relief in Ethiopia and demonstrating music’s potential for humanitarian impact (USA for Africa, 1985).
However, the very elements that make music so impactful—its emotional resonance, repetitive structure, and cultural familiarity—also make it an effective weapon.
In detention centres such as Guantanamo Bay and Abu Ghraib, music has reportedly been used as a form of psychological torture, played at extreme volumes to disorient and destabilise prisoners (BBC, 2015). Extremist groups like ISIS have weaponised music in the form of nasheeds—Islamic a cappella chants glorifying martyrdom—to indoctrinate recruits and glorify violence (Counter Extremism Project, 2022).
In Rwanda, during the 1994 genocide, the infamous Radio Télévision Libre des Mille Collines (RTLM) broadcast hate-filled messages and music that dehumanised Tutsis and incited Hutus to violence. Songs with coded lyrics urged listeners to “cut down the tall trees”—a reference to murdering Tutsis (Thompson, 2007).
The exploitation of art for manipulation extends beyond music. In Nazi Germany, visual art and architecture were instrumental in promoting Aryan supremacy. Hitler, a failed artist himself, championed a state-sponsored aesthetic that glorified militarism and racial purity while banning so-called “degenerate art” (Petropoulos, 1996). Wagner’s operas, admired by Hitler, were imbued with themes of German nationalism and were used to justify anti-Semitic ideology. In concentration camps, prisoners were forced to play music as others were led to their deaths—transforming art into an instrument of psychological terror (USHMM, 2021).
Even cinema and theatre have served as vehicles for manipulation. During the Rwandan Genocide, RTLM broadcast not only music but radio dramas designed to stir resentment and encourage violence (Thompson, 2007). Cults such as the People’s Temple under Jim Jones used emotionally charged songs to dismantle individual autonomy and promote groupthink, culminating in the Jonestown massacre (History.com, 2020).
The Digital Age: Artificial Intelligence and the Amplification of Propaganda
Today, the rise of artificial intelligence has dramatically increased the scale, speed, and subtlety with which disinformation and propaganda can be deployed. AI can create hyper-realistic images, videos (deepfakes), audio recordings, and even music that are often indistinguishable from authentic content.
AI-generated deepfakes have already been used to create false videos of political leaders, potentially undermining public trust and sowing confusion during elections (Chesney & Citron, 2019). For instance, AI can fabricate a video of a political figure making inflammatory remarks, sparking public outrage before it is debunked. Similarly, AI voice cloning can mimic a public figure’s voice to spread false messages, as when deepfake audio impersonating President Joe Biden urged voters to abstain from an election (Washington Post, 2023).
Beyond content creation, AI also powers social media bots that can disseminate propaganda on an industrial scale. These bots engage in coordinated campaigns to promote specific narratives, drown out dissent, or amplify polarising content. During conflicts, misinformation and fake visuals generated or amplified by AI can inflame tensions and prolong violence (UNESCO, 2023).
Moreover, generative AI models trained on biased datasets can reinforce harmful stereotypes in the content they produce—whether in writing, imagery, or music. The opacity of AI algorithms makes it difficult for users to recognise when they are being manipulated.
The Power and Responsibility of Art and Technology
Art—whether expressed through music, sculpture, theatre, or film—has the unique power to shape narratives, forge identities, and drive social change. But in the wrong hands, it can be weaponised to manipulate, indoctrinate, and incite violence. The integration of AI into artistic production and distribution has intensified this threat, making it imperative to establish ethical standards and regulatory safeguards.
Addressing this issue requires a multi-pronged approach:
- Digital and media literacy: Empowering the public to critically evaluate the content they consume, especially online.
- Transparency in AI: Ensuring that AI-generated content is clearly labelled and traceable.
- Regulation and oversight: Governments and tech companies must work together to combat disinformation and restrict the misuse of AI for propaganda.
- Cultural preservation: Supporting authentic, community-led artistic expression that counters disinformation with truth.
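To make the "Transparency in AI" point above concrete: one way AI-generated content can be made traceable is to attach a tamper-evident provenance label at generation time, which platforms can later verify. The sketch below is illustrative only; `PROVIDER_KEY`, `sign_content`, and `verify_content` are hypothetical names, and the shared-secret HMAC scheme is a simplification, not any real labelling standard.

```python
import hmac
import hashlib

# Hypothetical secret held by the AI provider (illustrative only).
PROVIDER_KEY = b"example-provider-secret"

def sign_content(content: bytes, generator: str) -> dict:
    """Attach a provenance label: who generated the content, plus an HMAC tag."""
    tag = hmac.new(PROVIDER_KEY, generator.encode() + b"|" + content,
                   hashlib.sha256).hexdigest()
    return {"generator": generator, "content": content, "tag": tag}

def verify_content(label: dict) -> bool:
    """Recompute the tag; any edit to the content or generator field breaks it."""
    expected = hmac.new(PROVIDER_KEY,
                        label["generator"].encode() + b"|" + label["content"],
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, label["tag"])

labelled = sign_content(b"synthetic audio bytes", "example-model-v1")
print(verify_content(labelled))   # True: label is intact

tampered = dict(labelled, content=b"edited audio bytes")
print(verify_content(tampered))   # False: content no longer matches the label
```

Real provenance efforts (such as the C2PA standard) use public-key signatures rather than a shared secret, so anyone can verify a label without being able to forge one, but the underlying tamper-evidence idea is the same.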
Conclusion
As we move deeper into the digital age, it is crucial to remember that the tools we create, AI among them, reflect the values we embed in them. Art and technology must be used to illuminate truth, not to obscure it. Just as art will always accompany human society, so will technology and the innovation it brings. It is therefore inevitable that a tool as powerful as AI will weave itself into nearly everything we do. This is why safeguards must be built in from the outset of AI development, so that these systems cannot be turned, at scale, to misinform, disinform, and spread propaganda in an era of almost unrestricted access to social media and other digital platforms.
References
- BBC. (2015). Guantanamo: Music as torture. Retrieved from https://www.bbc.com
- Chesney, R., & Citron, D. (2019). Deepfakes and the New Disinformation War. Foreign Affairs.
- Counter Extremism Project. (2022). ISIS Nasheeds and Propaganda. Retrieved from https://www.counterextremism.com
- Eyerman, R. (2001). Cultural Trauma: Slavery and the Formation of African American Identity. Cambridge University Press.
- History.com Editors. (2020). Jonestown. History. Retrieved from https://www.history.com
- Petropoulos, J. (1996). Art as Politics in the Third Reich. University of North Carolina Press.
- Thompson, A. (2007). The Media and the Rwanda Genocide. Pluto Press.
- UNESCO. (2023). Guidelines for the Governance of AI-generated content. Retrieved from https://www.unesco.org
- USHMM. (2021). Music and the Holocaust. United States Holocaust Memorial Museum.
- USA for Africa. (1985). We Are the World. Retrieved from https://usaforafrica.org
- Washington Post. (2023). Fake Biden robocall shows dangers of AI voice clones. Retrieved from https://www.washingtonpost.com