X’s AI War-Footage Crackdown Targets Money, Not Misinformation
A wave of hyper-realistic AI clips is already rewriting what we think we see from the frontlines of wars. Now X is finally admitting it has a problem—sort of. Instead of removing misleading AI war videos, the platform is going after something more sacred to creators: their payouts. This move tells us a lot about how X under Elon Musk wants to manage risk without abandoning its "anything goes" speech posture. In this piece, we’ll unpack what X actually changed, who it really affects, how it fits into the global AI and regulation storm, and why European users should pay close attention.
The news in brief
According to TechCrunch, X has introduced a new penalty for creators who share AI-generated videos of armed conflicts without clearly stating that the content is artificial.
Nikita Bier, X’s head of product, announced that participants in X’s Creator Revenue Sharing Program who publish such undisclosed AI war clips will be removed from the revenue program for 90 days. If they repeat the behaviour after that suspension, they can be permanently excluded from earning ad revenue on the platform.
X says it will use a mix of automated tools that attempt to detect generative AI content, alongside its Community Notes system, where users collaboratively add context to posts. The policy applies specifically to AI-generated videos representing armed conflict.
The Creator Revenue Sharing Program lets popular accounts earn a slice of ad revenue from their posts. Critics, as reported by TechCrunch, argue that this scheme has pushed creators toward sensational and outrage-heavy content because that tends to generate more engagement and thus more income.
Why this matters
X is not banning misleading AI war footage. It is banning profiting from it, and only if you’re in its revenue program and get caught. That distinction is crucial.
Instead of treating AI deepfakes of conflict as a speech problem, X is reframing them as a monetisation problem. This is a familiar Silicon Valley move: keep the content largely intact, but tweak the financial plumbing. For advertisers, that sounds reassuring. For users trying to understand what is real during a war, it’s far less comforting.
Who is directly affected? Only a subset of creators:
- You must be in the revenue-sharing program.
- You must post an AI-generated video of an armed conflict.
- You must fail to label it as AI-generated.
That leaves out a lot of actors: state-backed propaganda accounts, anonymous troll networks, or new accounts that don’t qualify for monetisation in the first place. They can continue to post misleading AI war footage with essentially no change—unless X separately decides to remove or limit that content, which this policy does not promise.
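To see how narrow the rule is, it helps to write the reported decision logic down. Here is a minimal sketch in Python; every name in it is hypothetical and illustrative, not X’s actual implementation:

```python
from dataclasses import dataclass

SUSPENSION_DAYS = 90  # the announced first-offence penalty

@dataclass
class Creator:
    in_revenue_program: bool
    prior_strikes: int = 0

def violates_policy(creator: Creator, is_ai_generated: bool,
                    depicts_armed_conflict: bool, labelled_as_ai: bool) -> bool:
    """All three content conditions must hold, and the account must be
    monetised; everyone else falls outside the policy entirely."""
    return (creator.in_revenue_program
            and is_ai_generated
            and depicts_armed_conflict
            and not labelled_as_ai)

def penalty(creator: Creator) -> str:
    """First strike: a 90-day revenue suspension; any repeat: permanent
    removal from revenue sharing. The post itself stays up either way."""
    creator.prior_strikes += 1
    if creator.prior_strikes == 1:
        return f"revenue suspended for {SUSPENSION_DAYS} days"
    return "permanently removed from revenue sharing"
```

Note that no branch touches the content: every outcome in this model, as in the policy itself, is purely financial.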
The incentive shift is real for a certain class of creators, though. If posting sensational fake war videos is part of your business model, a 90-day cut-off from revenue is painful. The risk is that this doesn’t reduce the volume of deceptive content; it just encourages creators to add a perfunctory “generated with AI” tag somewhere and carry on.
The enforcement challenge is huge. AI-detection tools are far from perfect, and Community Notes is reactive and slow by design. During fast-moving conflicts, misleading clips can go viral, influence opinion, or even escalate tensions long before a label or a demonetisation penalty appears.
This is less a full solution and more a signal: X knows AI-generated war propaganda is a liability, and it wants a talking point when regulators and advertisers come asking what it’s doing about it.
The bigger picture
X’s move fits into a broader industry pattern: platforms are shifting from binary takedown decisions to a more nuanced mix of labelling and demonetising.
Meta has been expanding its AI-labelling rules on Facebook and Instagram, including tags on some synthetic images and videos. TikTok has its own policies requiring disclosure of “synthetic or manipulated media,” especially in political contexts. YouTube increasingly relies on demonetisation as its main enforcement weapon: videos may stay up but be stripped of ad revenue.
X is now importing that logic into one of the most sensitive areas online: war imagery. Deepfake videos have already appeared around conflicts like Ukraine and Gaza, showing fabricated statements or staged battlefield scenes. As generative models improve, the cost of producing convincing fake footage drops towards zero. The value of attention and outrage, meanwhile, remains very high.
Historically, social networks facing criticism for misinformation have oscillated between over- and under-enforcement. During the COVID-19 pandemic, platforms were accused both of censoring debate and of leaving harmful conspiracy content untouched. Around elections, we see last-minute rule changes, content throttling and high-profile bans.
X under Musk took a different path: slashing moderation staff, restoring previously banned accounts, and promoting a more unfiltered speech ethos. That has earned the platform intense scrutiny from governments, the EU, and advertisers. Against that backdrop, a narrow, money-focused rule about AI war videos looks less like a moral stance and more like a legal and commercial hedge.
It also tells us something about where the industry is heading. Instead of clearly deciding, “This kind of content does not belong here,” platforms are creating a layered risk framework:
- High risk + monetised + high reach → label + demonetise + maybe downrank
- High risk + non-monetised → maybe label, often tolerate
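Rendered as a hypothetical decision function, with the combinations the framework never specifies left visible, the emerging logic looks something like this (a sketch of the industry pattern, not any platform’s documented rules):

```python
def platform_response(high_risk: bool, monetised: bool, high_reach: bool) -> list[str]:
    """Illustrative sketch of the layered framework above,
    not any platform's documented enforcement logic."""
    if not high_risk:
        return []                      # ordinary content: no special handling
    if monetised and high_reach:
        return ["label", "demonetise", "maybe downrank"]
    if not monetised:
        return ["maybe label"]         # often simply tolerated
    return ["undefined"]               # gaps like this are part of the problem
```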
Users, meanwhile, are left to navigate an environment where a labelled AI video and an unlabelled real one may sit side by side in the same feed, both amplified by the same engagement algorithms.
The European / regional angle
In Europe, this policy lands in a far more regulated environment than it does in the U.S.
Under the EU’s Digital Services Act (DSA), X is designated a Very Large Online Platform. That comes with legal obligations to assess and mitigate systemic risks, including the spread of disinformation, especially during elections and crises. AI-generated war footage is precisely the kind of systemic risk EU regulators worry about: it can be weaponised in information warfare, influence public opinion about foreign policy, or even affect support for sanctions and military aid.
X will likely present this new rule as evidence that it is mitigating risk: it is discouraging financially motivated deepfake war content and leaning on Community Notes as a transparency tool. But from a European regulator’s perspective, the gaps are obvious:
- Non-monetised accounts remain untouched by the policy.
- The content itself is not required to be taken down.
- Detection and enforcement rely heavily on users and imperfect AI tools.
The EU AI Act, now moving into implementation, will also tighten expectations around transparency for AI-generated content. While the Act primarily targets providers of AI systems, the political climate it creates is clear: undisclosed manipulative AI content, particularly in areas like elections and public discourse, is no longer acceptable.
European users and media organisations have an additional sensitivity to war imagery. From Ukraine to the Western Balkans, war is not an abstract concept. AI-generated fake footage of bombings, atrocities or troop movements could have direct consequences for diaspora communities, fundraising campaigns, and political debates in EU member states.
This is where European alternatives and public media matter. Platforms such as Mastodon servers run by newsrooms, or strong public broadcasters with verification desks, can act as a counterweight. But X remains a key information channel for journalists, diplomats and security analysts. A half-step like this policy doesn’t resolve the core problem: Europeans will still see AI-manipulated conflict footage on X; the platform just might not be paying the creators for it.
Looking ahead
Expect this policy to expand, be quietly watered down, or be replaced within a year. X has a track record of rapid, sometimes chaotic rule changes.
Three things to watch:
Scope creep
Today’s rule targets AI videos of armed conflicts. What happens when an AI-generated clip of an assassination attempt or a fabricated terror incident goes viral? Pressure will mount to cover other categories of high-impact synthetic content: elections, public health, natural disasters.
Enforcement transparency
Will X publish numbers on how many accounts lose revenue under this rule, how often Community Notes are involved, and how false positives can be appealed? Without data, the policy is mostly PR.
Regulatory collisions
As DSA enforcement ramps up, EU authorities could demand more: not just demonetisation, but structured removal processes, better detection, and independent audits. If Brussels concludes that X’s measures are cosmetic, we may see formal investigations and fines.
Technically, AI detection is an arms race. As generators improve, forensic tools must chase them. Over-reliance on automated detection could hit legitimate citizen journalists who use heavy editing or restoration tools that confuse classifiers.
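A bit of base-rate arithmetic shows why. The sketch below uses invented numbers, none of them from X, to estimate how an imperfect detector behaves on a feed where real footage dominates:

```python
def detector_flags(total: int, ai_share: float,
                   hit_rate: float, false_alarm_rate: float) -> tuple[float, float]:
    """Expected correct flags vs. false alarms from an imperfect detector."""
    ai_clips = total * ai_share
    real_clips = total - ai_clips
    return ai_clips * hit_rate, real_clips * false_alarm_rate

# Hypothetical figures: 100,000 war-related clips, 2% genuinely AI-generated,
# a detector that catches 90% of fakes but misfires on 5% of real footage.
correct, false_alarms = detector_flags(100_000, 0.02, 0.90, 0.05)
print(correct, false_alarms)  # 1800.0 correct flags vs 4900.0 false alarms
```

Even a detector that sounds strong on paper produces nearly three false alarms for every genuine catch once authentic footage vastly outnumbers synthetic clips, which is exactly the regime a live conflict feed operates in.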
Commercially, advertisers will continue to push for their brands to be kept far away from anything resembling war propaganda, whether AI or not. One possible outcome: broad demonetisation of war-related content across platforms, which could hurt independent reporting from conflict zones.
For users, the uncomfortable truth remains: we are entering an era where “video proof” from the front lines is no longer proof. Whether or not X pays the uploader is almost secondary.
The bottom line
X’s new rule is a small, financially focused response to a much larger problem. It may deter some opportunistic creators from cashing in on fake war footage, but it leaves huge loopholes for propagandists, non-monetised accounts and other kinds of AI misinformation. In Europe especially, where regulators are sharpening their knives, this will not be the last word on AI-generated conflict content. The real question is whether we’re willing to treat synthetic war imagery as an unacceptable risk—or just another engagement lever to be managed.