Weaponised Play: Iran’s AI Lego Propaganda Is a Warning Shot for the West
Lego-style Trump crying into a taco might sound like a throwaway meme. It isn’t. It’s a glimpse of the next phase of information warfare: cheap, AI‑generated, culturally fluent propaganda wrapped in children’s aesthetics and pushed through TikTok-era attention funnels.
In this piece, we’ll look at what the pro‑Iran group Explosive Media actually did, why their AI Lego videos work so well, and what this says about the future of political influence ops. We’ll also examine why Europeans should care – and why Brussels’ regulators are about to discover that “synthetic media” is far more than a line item in the AI Act.
The news in brief
According to Wired reporting picked up by Ars Technica, a pro‑Iran group calling itself Explosive Media has spent the current US–Iran war (which began in February) releasing a series of AI‑generated, Lego‑style videos mocking President Donald Trump and the United States.
The group, described as young pro‑Iranian activists, has published more than a dozen short videos on platforms such as X, TikTok, Instagram and Telegram. Many of these clips have reportedly reached millions of views. Their latest episode was pushed out within hours of Trump announcing that he would not “wipe out a whole civilization”, and portrays a Lego Trump colluding with Gulf leaders, being humiliated by Iranian officials, and finally weeping next to Iran’s ceasefire proposal.
Explosive Media claims to script and produce the videos using AI tools, though it will not disclose which ones. The group insists it is independent of the Iranian state, yet its consistently pro‑regime line and reliable high‑speed Internet access inside heavily restricted Iran have led researchers to suspect at least tacit state backing. The wider Iranian information ecosystem – including embassies – has been amplifying AI‑generated memes and videos attacking Trump across social networks.
Why this matters
This story would be easy to dismiss as “just memes”. That would be a mistake.
First, Explosive Media has solved a core problem of modern propaganda: how to compress a complex war into a 45‑second clip that a distracted 19‑year‑old in Ohio will actually watch to the end. The answer, in their case, is pop‑culture‑literate, Lego‑inspired animation with simple moral framing and a punchline. It’s not subtle – but it’s sticky.
Second, they are doing something Western governments talk about yet rarely execute well: meeting the audience where it is, in a language it actually speaks. According to Wired, the group has deliberately studied American culture and is crowdsourcing ideas from sympathetic US users. That gives their content a tone and rhythm far closer to US meme culture than to classic, didactic state propaganda.
The beneficiaries are obvious: Iran’s regime and anyone seeking to undermine US policy credibility abroad. The losers are not just Trump and his supporters, but also Western institutions that still communicate in press releases and podium speeches while their adversaries weaponise humour, AI and platform dynamics.
The immediate implication is that the memetic battlefield is no longer dominated by Silicon Valley–born movements and influencers. State‑aligned actors can now run agile, AI‑assisted meme factories from anywhere with decent connectivity. The barrier to entry for high‑impact propaganda has collapsed – and so has the time window for response. Explosive Media had multiple endings scripted in advance and simply published the one that matched Trump’s decision. That is closer to “real‑time narrative warfare” than to traditional messaging.
The bigger picture
Explosive Media’s Lego universe is part of a broader convergence: generative AI, short‑form video and geopolitical rivalry.
We’ve already seen AI imagery used in conflicts in Gaza, Ukraine and the Sahel – fake photos, fabricated battlefield scenes, even cloned political voices. What’s different here is the deliberate use of stylised unreality. The videos do not pretend to be real footage; they lean into plastic figurines and absurd scenarios. That reduces the risk of fact‑checking backlash while still conveying a strong emotional narrative.
This mirrors a trend inside the US, where political actors use “shitposting”, memes and ironic edits rather than formal ads to drive engagement. Trump’s own team has reportedly edited war clips with Hollywood footage for his base. Iran’s answer is to make something more universally consumable – cute, funny, easy to remix.
Historically, authoritarian regimes struggled to produce culture that wasn’t painfully stiff. From Soviet posters to early RT broadcasts, the tone betrayed the message. Generative AI and a globalised meme vocabulary change this. A small, relatively young team can generate scripts, visuals, music and translations at scale, test what works, then iterate – all with consumer tools.
Competitors are already experimenting. Russia has pumped out AI‑assisted videos around Ukraine. China is exploring AI influencers and synthetic news anchors. Non‑state movements – from extremist groups to conspiracy communities – are quietly adopting the same toolchain. Explosive Media is simply one of the clearest early examples of how polished and targeted this can become.
The direction of travel is clear: political communication is becoming modular, automated and aestheticised. If 2016 was the era of the Facebook meme page and troll farm, the late 2020s are shaping up as the era of AI‑accelerated “content studios” with geopolitical patrons.
The European / regional angle
For Europe, this is not a distant American drama. It’s a rehearsal for tactics that will almost certainly be aimed at EU audiences – if they aren’t already.
The EU has equipped itself with powerful regulatory tools: the Digital Services Act (DSA), the AI Act and the Digital Markets Act (DMA). These laws create obligations around transparency, recommender systems and – in the AI Act's case – labelling of deepfakes in political contexts. But Explosive Media's output shows how easy it is to stay just outside the reach of these rules: content that is obviously synthetic, spread across Telegram, TikTok, X and Instagram, originating from outside the EU but reaching voters inside it.
European elections are increasingly fought on TikTok and Instagram Reels. A clever foreign actor doesn’t need to convince a German or Croatian audience that Iran is right; they just need to amplify cynicism about Western institutions, US alliances or sanctions policy. A Lego Trump humiliating himself might not change votes directly, but it can normalise the idea that US leadership is ridiculous and impotent – which has real implications for NATO cohesion and foreign policy debates.
Europe also has its own vulnerabilities. Many EU member states, especially in Central, Eastern and South‑Eastern Europe, already struggle with polarised media ecosystems and limited resources for digital literacy. Local far‑right and far‑left movements are more than happy to remix foreign propaganda that fits their narratives. And the DSA’s enforcement against non‑EU state propaganda is still in its infancy.
For European platforms, media and regulators, the message is blunt: you are no longer just moderating text posts and obvious fakes. You are moderating weaponised entertainment.
Looking ahead
Expect three developments over the next 12–24 months.
First, copycats. Other state and non‑state actors will replicate the Explosive Media model: small, agile teams combining generative video, culturally specific humour and rapid reaction to live events. Some will focus on Western politics, others on regional disputes – from the Balkans to Latin America.
Second, a regulatory scramble. The EU AI Act will require labelling of certain political deepfakes, but it was not designed with Lego‑style propaganda in mind. Brussels and national regulators will be forced to clarify how “synthetic political media” should be disclosed, and how platforms must handle content that is clearly manipulative but not factually false. Expect guidance documents, soft‑law codes of practice and, eventually, enforcement cases against major platforms under the DSA for failing to mitigate systemic risks.
Third, a shift in defensive strategy. Governments and civil society will have to decide: do they try to compete in the meme arena, or do they double down on fact‑checking and media literacy? Realistically, they will need both. Pure censorship is a losing game – content can always jump to another platform or be re‑uploaded with minor changes. The more promising path is to build resilient audiences who recognise when they are being emotionally gamed, even when the content is funny and well‑produced.
Unanswered questions remain. How closely is Explosive Media tied to the Iranian state? Which AI tools are in play, and how easy will it be to trace their fingerprints? At what point does Lego’s own IP enforcement collide with geopolitical propaganda? And crucially: how far are Western democracies willing to go in responding in kind?
The bottom line
Explosive Media’s AI Lego videos are not a quirky sideshow; they are an early blueprint for how 21st‑century propaganda will look and feel. They show that with generative AI and a good grasp of meme culture, even heavily sanctioned states can run sophisticated influence campaigns at scale and speed. Europe and the US need to stop treating this as online noise and start treating it as strategic infrastructure.
The uncomfortable question for readers is simple: when the next wave of weaponised entertainment hits your feed, will you know you’re being played – and will you care?