Aronofsky’s AI Revolution Is Really a Stress Test for the Future of Film
The first episodes of Darren Aronofsky’s AI‑assisted series On This Day… 1776 look, to many critics, like a failed experiment. But underneath the waxy faces and jittery camera moves is something more important than whether this particular show is “good” or “bad.” It’s one of the first serious, sustained attempts to build an entire production pipeline around AI video models rather than just sprinkling generative tools on top of conventional filmmaking.
That makes this project less a history lesson about 1776 and more a live crash test of what happens when AI collides with acting, budgets, unions, and our trust in historical images. The results will matter far beyond this one series.
The news in brief
According to reporting by Ars Technica, filmmaker Darren Aronofsky’s AI studio Primordial Soup has partnered with Time to release On This Day… 1776, a year‑long run of short videos about daily events in the American Revolution 250 years ago.
Each episode is a few minutes long and depicts historical figures such as George Washington and Benjamin Franklin using photorealistic AI‑generated video. A human writers’ room, led by Aronofsky’s longtime collaborators, scripted the episodes, and professional voice actors recorded all dialogue. Human teams also handle editing, sound, music, and post‑production.
The only part delegated to “a variety of AI tools,” Ars reports, is the moving image itself: individual shots are generated by a video model based on storyboards, visual references, and the script, then stitched together and cleaned up by humans.
Critics have savaged the early episodes, calling the visuals plastic and lifeless. Yet a source close to the production told Ars the process is highly manual, can take weeks per short episode, and is expected to evolve throughout the year as tools and workflows improve.
Why this matters
The most revealing detail in Ars Technica’s reporting isn’t that the show looks rough. It’s that generating a few minutes of AI video still takes weeks of human labour, with constant re‑prompting and shot selection. That punctures the popular fantasy that you’ll soon type “do a Marvel‑quality war epic set in 1776” and receive a finished film an hour later.
Instead, On This Day… 1776 exposes the current reality: AI video is closer to an unpredictable, glitch‑prone camera than to an autonomous director. Humans still decide the story, design the shots, choose the takes, fix the weirdness in post, and worry about deadlines.
Who gains from this?
- Producers and streamers get a glimpse of a future where historical epics no longer require armies of extras, large set builds, or expensive location shoots. Even at today’s quality, the project is reportedly far cheaper than staging a traditional period docudrama.
- Tool vendors gain a prestigious testbed. If Aronofsky’s team can’t make your model behave, that’s a serious red flag.
- Editors, sound designers, and writers—ironically—get proof of their continued necessity. The production source quoted by Ars essentially admits the project falls apart without a strong human editorial eye.
Who loses, or at least feels threatened?
- Actors and background performers are staring at a world where their physical presence on set can be replaced by avatars. Even if this series still uses human voice actors, the on‑screen bodies are simulated.
- Cinematographers and production designers see their craft re‑imagined as prompt‑engineering and reference curation.
And for viewers, there’s another cost: a creeping uncertainty about what counts as “documentary” when historical figures are reconstructed by models trained on unknown datasets, guided by creative choices we can’t easily see.
The bigger picture
Aronofsky’s experiment sits on top of several converging industry trends.
First, the capabilities curve. Tools such as OpenAI’s Sora, Runway, and Pika have shown rapid year‑on‑year improvement in coherence, motion, and style control for short clips. Until now, most high‑profile demos have been isolated videos; On This Day… 1776 is one of the first mainstream attempts to maintain visual consistency across dozens of episodes tied to a calendar.
Second, the labour conflict. Hollywood’s 2023 writers’ and actors’ strikes placed AI at the centre of contract negotiations—especially around digital doubles and synthetic performers. This show is effectively a proof of concept for what those unions feared: an entire cast embodied as AI avatars, with humans limited to voices and off‑screen roles. The fact that voice work remains human today doesn’t mean it will stay that way once synthetic voices are good enough and union contracts come up for renewal.
Third, the aesthetic backlash. We’ve already seen uproar over Marvel’s AI‑generated title sequence for Secret Invasion, and online frustration at the generic “AI look” in many ads and trailers. The aggressive critical response to Aronofsky’s series fits this pattern: audiences are developing a visual literacy for AI artifacts—repetitive camera motion, rubbery skin, dead eyes—and they don’t like it.
Historically, new visual technologies go through an “ugly adolescence.” Early CGI in the 1990s aged poorly; motion capture had an uncanny phase. The difference now is speed and opacity. Instead of a slow, expensive tool used by a handful of studios, AI video can be deployed by anyone with a budget and an API key, trained on millions of performances captured without their performers’ consent.
So On This Day… 1776 isn’t just Aronofsky tinkering with a toy. It’s a live demonstration of what happens when generative video collides with the politics of labour, authorship, and historical truth.
The European / regional angle
From a European perspective, this project lands in a much more regulated—and more sceptical—environment than US critics sometimes assume.
Under the EU AI Act, whose obligations are phasing in, general‑purpose generative models used for video face transparency requirements: disclosing that content is AI‑generated, providing summaries of training data, and implementing technical safeguards such as watermarking. A European version of On This Day… 1776 produced for a public broadcaster like ARD, France Télévisions, RTVE, ORF or RTV Slovenija would almost certainly need clear labeling and compliance audits.
There’s also the matter of trust in historical imagery. Europe has a long tradition of meticulously researched historical drama, often co‑funded by film institutes and public broadcasters, with strong expectations around authenticity. Recreating, say, Napoleon or Tito as AI avatars in a docudrama about national history would ignite heated debate—not just about aesthetics, but about whether the state should finance synthetic representations of real leaders.
For European creatives, however, the economic temptation is real. Smaller markets from the Baltics to the Balkans have struggled to afford large‑scale period pieces. If AI can cut the cost of costumes, sets, and crowd scenes by an order of magnitude, you suddenly make possible projects that would never clear a traditional budget committee.
But those savings clash with Europe’s labour and privacy norms. Stronger unions, robust performers’ rights, and strict data‑protection rules mean that using scans of living European actors—or training on European archives—without clear consent is legally risky. That could slow adoption in the EU or push experimentation into grey‑zone jurisdictions.
In short, the same series that in the US is framed as a technical curiosity would, in Europe, quickly become a test case for the AI Act, copyright law, and cultural policy.
Looking ahead
Over the next few years, expect three things.
The workflow will normalise. What now takes weeks of painful iteration will compress dramatically. Better control tools—keyframe‑like timelines, character locks, physically‑based lighting controls—will make AI video behave more like a temperamental but powerful 3D engine than a slot machine. New roles like “AI sequence supervisor” or “prompt DP” will quietly appear in credits.
Hybrid productions will win. Purely AI‑generated series like On This Day… 1776 are likely to remain niche curiosities. The more commercially viable pattern will be hybrid: real actors on minimalist sets, with AI filling in crowds, extensions, weather, and certain shots that would be too costly or dangerous otherwise. Think of it as VFX on steroids rather than a total replacement for live action.
The legitimacy fight will intensify. Festivals, broadcasters, and educators will have to decide: Do we treat AI‑generated historical docudramas as documentaries, fiction, or a new labelled category? Expect guidelines that require explicit on‑screen disclosure when historical figures are synthetically rendered, and maybe even standards for preserving the underlying prompts and model versions as part of the archival record.
For On This Day… 1776 specifically, the key signals to watch are:
- Whether visual quality meaningfully improves by mid‑year.
- How Time positions the project: as a novelty, a serious educational product, or a tech showcase.
- Whether other major outlets commission copycats, or quietly conclude the audience isn’t ready for “AI history.”
If the show flops, studios may use it as an excuse to slow‑roll generative projects. If it finds an audience despite the uncanny visuals, the opposite will happen: every commissioning editor with a historical brief will start asking, “Can we do it the cheap AI way?”
The bottom line
On This Day… 1776 is less a breakthrough than a public beta: a visible stress test of how far AI video can be pushed before audiences, unions, and regulators push back. The early episodes demonstrate two things at once: the technology is already good enough to be dangerous for parts of the film workforce, and still bad enough to be artistically underwhelming.
The real question for readers isn’t whether this one series looks ugly. It’s whether we’re prepared to demand transparency, fair labour rules, and clear labels before AI‑generated history quietly becomes just “history” on our screens.