Patreon vs. AI: The Next Copyright War Is About Power, Not ‘Fair Use’

March 18, 2026
Illustration of online creators confronting large AI companies over use of their work

1. Headline & intro

Generative AI’s quiet superpower has been free access to everyone else’s work. That era is ending — and Jack Conte just fired another flare into the sky. The Patreon CEO used his SXSW stage to call AI companies’ “fair use” defence “bogus” and to demand that creators be paid when their work trains commercial models. This isn’t just another creator-versus-tech rant. It’s a sign that the real battle in AI is shifting from model quality to control of training data — and that the long tail of independent creators risks being written out of the deal.

In this piece, we’ll unpack what Conte actually said, why it matters far beyond Patreon, and how it collides with emerging regulation, especially in Europe.


2. The news in brief

According to TechCrunch, Patreon CEO Jack Conte used a talk at SXSW in Austin to sharply criticise how major AI firms train their models. Conte stressed he is not anti‑AI and runs a technology company himself, but argued that using creators’ work at scale without payment and then labelling it “fair use” is untenable.

He pointed out that AI companies have signed multi‑million‑dollar licensing deals with large rights‑holders such as major media groups and record labels, while millions of individual illustrators, musicians and writers whose content also feeds those systems receive nothing. For Conte, that discrepancy undermines the legal and moral credibility of the “fair use” claim.

He framed AI as another disruptive wave similar to the shift from downloads to streaming or from horizontal to TikTok‑style vertical video, and said artists can adapt — but only if the economic rules are not stacked against them. His core message: AI will be part of the future, and that future must include sustainable income for human creators.


3. Why this matters

Conte is putting his finger on the core asymmetry of today’s AI boom: the companies capturing hundreds of billions in market value are not the ones who created most of the data that made their models useful.

Right now, three groups are emerging:

  • Big rights‑holders (Disney, major labels, large publishers) who can negotiate licensing deals.
  • AI platforms racing to secure high‑quality, low‑risk training data.
  • The long tail of creators — YouTubers, indie musicians, fan‑artists, newsletter writers — whose work is scraped but whose bargaining power is close to zero.

Conte’s argument exposes the contradiction: if training is clearly “fair use,” why are AI firms paying anyone at all? The obvious answer is risk management and access to curated catalogues — not legal certainty. But that answer is politically explosive, because it admits that money flows to the most powerful, not necessarily to those whose work is actually used.

If Patreon plays this well, it could become a de facto collective bargaining layer for independent creators in the AI era: a place where rights are aggregated, preferences expressed (opt‑in, opt‑out, licence tiers), and payments distributed. That’s strategically important because whoever controls the interface between creators and AI firms will shape how value is shared.

The losers, if nothing changes, are small creators in every country who watch AI systems mimic their style, while platform revenues stagnate and subscription fatigue hits their audiences.


4. The bigger picture

Conte’s SXSW comments sit on top of a much larger shift: the industry is slowly moving from “scrape first, ask forgiveness later” to building formal markets for training data.

We’ve already seen:

  • Lawsuits by authors, visual artists and news organisations against OpenAI, Stability AI and others over alleged copyright infringement in training.
  • Licensing deals where AI companies pay news publishers, stock photo platforms or music catalogues for access to archives.
  • Product positioning from players like Adobe, who pitch their Firefly models as trained on licensed or rights‑cleared content.

Historically, this is familiar. Napster claimed radical “sharing”; Spotify turned the same behaviour into a licensing business. YouTube began as a copyright nightmare; Content ID and revenue sharing turned it into a managed ecosystem, however imperfect.

Generative AI is now where YouTube was before Content ID. Conte is effectively arguing: skip the ten‑year legal brawl in which creators lose income, and jump straight to building the payment rails.

Competitively, this matters because access to “clean,” legally unambiguous data will become a moat. Startups that ignore licensing may move fast now but face retroactive liability; those that build with licences and creator relationships may move slower, but end up with durable, premium models that enterprises are actually willing to use.

Conte is betting that creators — not just compute — will be a strategic resource.


5. The European / regional angle

For European readers, Conte’s attack on “fair use” lands differently, because the EU doesn’t even have US‑style fair use. Instead, it has narrowly defined copyright exceptions and, since the 2019 Copyright Directive, specific rules for text and data mining (TDM).

The directive's Article 3 allows text and data mining for scientific research; Article 4 extends the exception to commercial uses, including AI training, but only where rights‑holders have not reserved their rights via machine‑readable signals (robots.txt directives, metadata). In theory, that should give European creators more control than their US counterparts. In practice, most solo creators have no idea these reservation rights exist, and enforcement against global AI crawlers is murky.
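To make the "machine‑readable signals" concrete: the most widely deployed mechanism today is still robots.txt, where a site disallows known AI training crawlers by user agent. A minimal sketch (the crawler tokens shown — GPTBot, Google‑Extended, CCBot — are ones published by OpenAI, Google and Common Crawl respectively, but the list any given site needs varies and changes over time):

```
# robots.txt — reserve rights against AI training crawlers by user agent.
# Each block tells one named crawler it may not fetch any path on the site.
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```

Beyond robots.txt, efforts such as the W3C community's TDM Reservation Protocol propose dedicated signals (for example a `tdm-reservation` header or metadata field) to express exactly the kind of reservation Article 4 envisages — but adoption by crawlers is, so far, voluntary.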

Layered on top of this, the EU AI Act will require more transparency about training data, particularly for general‑purpose models, and the Digital Services Act increases platform accountability. Combined, this puts Europe on a path where the legal space for “we just scraped the open web” will steadily shrink.

There is also a cultural angle: European markets, especially Germany and France, have strong traditions of author’s rights and collecting societies (GEMA, SACEM, PRS and others). A Patreon‑like collective licensing scheme for AI training could plug into these existing structures.

For European AI startups, this is both a headache and a chance to differentiate: models trained on properly licensed European media and cultural archives could be marketed as legally safe and ethically sourced, something enterprise buyers in the EU will increasingly demand.


6. Looking ahead

Several things are likely to happen over the next three to five years.

  1. More high‑profile licensing deals. AI giants will continue signing agreements with major publishers, stock libraries and music catalogues to de‑risk their flagship models. This further entrenches the advantage of large rights‑holders.

  2. A scramble for “creator representation.” Platforms like Patreon, Bandcamp, Substack and even YouTube’s MCNs will realise they can become brokers between the long tail of creators and AI firms. Expect experiments with opt‑in training licences bundled into creator tools and dashboards.

  3. Technical standards for consent and attribution. We’ll likely see emerging standards that go beyond robots.txt: cryptographic watermarks or registries that let models track which works influenced which outputs, at least statistically. That’s a prerequisite for any meaningful royalty scheme.

  4. Regulatory tests. Key court cases in the US and the implementation of the EU AI Act will set the boundaries for what “training under an exception” can realistically cover. If judges narrow the scope, the licensing market will explode. If they accept broad training as lawful, the battle will shift to politics and platform pressure.
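What would the "registry" idea from point 3 look like in practice? A minimal, hypothetical sketch: a service that fingerprints a work and records the creator's licence terms, so a crawler or model trainer could check consent before ingesting content. Everything here — the class name, the licence labels, the use of a plain SHA‑256 content hash rather than a robust perceptual fingerprint — is illustrative, not an existing system:

```python
import hashlib


class TrainingConsentRegistry:
    """Hypothetical in-memory registry mapping content fingerprints to licence terms.

    A real registry would need signed entries, persistence, perceptual
    fingerprinting for near-duplicates, and a dispute process.
    """

    def __init__(self):
        self._entries = {}

    @staticmethod
    def fingerprint(work_bytes: bytes) -> str:
        # SHA-256 stands in for a robust content fingerprint.
        return hashlib.sha256(work_bytes).hexdigest()

    def register(self, work_bytes: bytes, creator: str, licence: str) -> str:
        # Record the creator's terms ("opt-in:paid", "opt-out", a tier name, ...).
        fp = self.fingerprint(work_bytes)
        self._entries[fp] = {"creator": creator, "licence": licence}
        return fp

    def lookup(self, work_bytes: bytes):
        # Returns the recorded terms, or None if no consent is on record.
        return self._entries.get(self.fingerprint(work_bytes))


registry = TrainingConsentRegistry()
registry.register(b"song-master-v1", creator="indie_musician", licence="opt-in:paid")
licensed = registry.lookup(b"song-master-v1")      # terms found
unknown = registry.lookup(b"unregistered-work")    # None: no consent recorded
```

The hard part is not the lookup table but everything around it: verifying that the registrant actually holds the rights, matching transformed or excerpted works, and getting crawlers to consult the registry at all — which is why point 4's regulatory tests matter so much.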

The risk is clear: we end up in a world where Disney and a few large media houses are paid, while independent creators become invisible raw material. The opportunity is equally real: build infrastructure where being a small creator in Ljubljana, Lagos or Lima still means having a say — and a share — when your work trains someone else’s AI.


7. The bottom line

Conte is right to call the current “fair use” posture bogus, not because AI training can never be lawful, but because the way it’s implemented today bakes power imbalances into the foundations of the AI economy. The fight over training data is really a fight over who participates in the upside of generative AI.

If you use or build AI tools, the question is no longer abstract ethics. It’s concrete: are you comfortable with models trained on unpaid labour from the very creators you say you support — and if not, what are you prepared to change?
