The ‘This Is Fine’ Dog Just Became AI’s Copyright Canary
If you wanted a single image to sum up the AI industry’s attitude to artists and copyright, you could hardly pick a better one than a smiling dog in a burning room. The latest dispute around KC Green’s “This is fine” meme and AI startup Artisan is not just another internet drama; it’s a stress test for how aggressively AI companies think they can exploit culture without consent. What’s at stake here is less one comic than the social licence under which AI is allowed to remake creative work, advertising and, eventually, whole industries.
The news in brief
According to TechCrunch, webcomic artist KC Green says AI startup Artisan used an altered version of his famous “This is fine” comic in an ad campaign without his permission. The ad, reportedly spotted in a subway station, shows Green’s recognisable dog surrounded by flames with new text about a “pipeline” being on fire and a call to “Hire Ava the AI BDR,” referencing Artisan’s AI sales product.
Green wrote on Bluesky that he never agreed to the use and described the ad as having been “stolen,” comparing it to how AI systems take artwork. He even urged followers to deface the posters if they spotted them. Speaking to TechCrunch, Green said he is seeking legal representation.
Artisan told TechCrunch it respects Green and is contacting him directly, later adding that a conversation has been scheduled. The company previously courted controversy with billboards telling companies to “Stop hiring humans,” which its CEO framed as targeting a category of work rather than people in general.
Why this matters
On the surface this is one startup, one meme and one angry artist. Underneath, it’s a perfect microcosm of three fault lines: meme culture, AI ethics and startup marketing.
First, memes have long lived in a legal grey zone. Users remix them socially, often non‑commercially, and most original creators tolerate – or quietly endure – the loss of control. But the social contract changes the moment a venture‑funded company prints that meme on a subway billboard to sell an AI product. At that point it stops being participation in internet culture and becomes commercial exploitation of someone else’s IP.
Second, for many artists, AI already feels like large‑scale, industrialised appropriation. Models are trained on their work without consent; datasets are opaque; compensation mechanisms barely exist. When an AI startup then appears to literally lift a copyrighted comic for its ad, it confirms the worst suspicions: that the industry sees human creativity as free raw material.
Who benefits? In the very short term, Artisan gets attention – this column included. But it’s toxic awareness. For founders, this is a case study in how not to market an AI startup in 2026: you invite legal risk, alienate creators, and signal to regulators that the sector cannot self‑police.
The losers are obvious. Green now has to spend time and money navigating a legal system instead of making comics. Other artists see yet another example that “going viral” often means losing control to companies with bigger legal budgets. And the AI ecosystem loses a bit more public trust at precisely the moment lawmakers in the US and EU are deciding how tightly to regulate it.
The bigger picture
This isn’t happening in a vacuum. It slots neatly into several ongoing battles around AI and creative work.
Over the past two years, we’ve seen a wave of lawsuits against AI companies over training data: authors suing OpenAI and others, Getty Images taking Stability AI to court, and class actions by visual artists. Whether or not those plaintiffs ultimately win every case, they’ve already shifted the narrative. The story is no longer “magical AI learns from the internet,” it’s “tech companies quietly ingest mountains of copyrighted material and call it innovation.”
The “This is fine” dispute is the same energy on a smaller canvas. Training a model on unlicensed art may feel abstract; slapping an unlicensed, recognisable comic on a subway ad is concrete, legible and easy for regulators and jurors to understand.
We’ve also been here before with memes. Cartoonist Matt Furie eventually sued Infowars over its use of his Pepe the Frog character on a poster, and they settled. Countless other meme creators — from the “distracted boyfriend” photographer to Vine and TikTok originators — have watched brands cash in on images and sounds that they themselves barely monetised.
Historically, tech has a pattern: push IP boundaries until someone sues, then negotiate a new normal. Napster broke the music industry’s business model; Spotify and Apple Music emerged from the rubble with licensing baked in. YouTube was flooded with pirated clips; Content ID and rights‑holder deals followed.
AI is in its Napster phase. Startups like Artisan are playing with fire (quite literally, in this case) at the edges of copyright and ethics. The big difference this time is that regulators are moving earlier. The EU’s AI Act, the US Copyright Office inquiries and multiple court cases mean there is far less room for the “we didn’t think about it” defence.
Compared with Big Tech, which now employs armies of lawyers and brand safety teams, smaller AI startups seem especially tempted by edgy, “move fast” marketing. That may generate screenshots and social buzz, but it also paints a regulatory bullseye on their backs.
The European angle
Green is American and Artisan appears to be US‑based, but the precedent and the attitude matter globally — especially in Europe.
European law traditionally gives authors stronger “moral rights” over their creations than US law, including the right to object to certain uses that distort their work. The EU’s Copyright in the Digital Single Market Directive already tightened rules around online platforms, and the AI Act will add transparency obligations for models trained on copyrighted data.
In that context, an AI startup casually using a recognisable comic in out‑of‑home ads without a clear licence would be an even riskier move on European soil. Rights‑holders and collecting societies in countries like Germany and France are generally more litigious and better organised than in the US. A meme on a billboard in Berlin or Paris is far more likely to trigger a cease‑and‑desist than a shrug.
There’s also a cultural factor. European audiences tend to be more sensitive to labour and creator rights and more sceptical of automation that “replaces humans.” Artisan’s earlier tagline, “Stop hiring humans,” might get clicks in San Francisco; in many EU capitals it reads like a direct provocation to unions, policymakers and the cultural sector.
For European AI startups, the lesson is blunt: your branding is now part of your compliance posture. Sloppy meme appropriation isn’t just a PR risk; under the Digital Services Act, illegal ads and IP violations can have platform‑level consequences. As Brussels starts enforcing the AI Act in 2025–2026, cases like this will colour how regulators perceive the industry’s maturity.
Looking ahead
What happens next for Artisan and KC Green is almost predictable. The company will likely try to settle: apologise publicly, pay a licence fee or damages, and quietly pull the ads. Green has already indicated he is looking for legal representation; even a modest lawsuit would be expensive and embarrassing for a startup that depends on trust to sell AI sales agents to businesses.
More broadly, expect a tightening of the advertising supply chain around AI. Creative agencies and outdoor media owners will get more conservative about approving AI‑linked campaigns that use well‑known internet culture. Brand‑safety checklists will expand from “is this offensive?” to “did someone actually clear the rights to this meme?”
At the same time, a new market opportunity is emerging: AI that is legally and ethically sourced. Stock‑photo platforms, music libraries and illustration marketplaces are already experimenting with “opt‑in for AI training” catalogues. Over the next few years, we’re likely to see more startups whose selling point is precisely that they pay creators and document licences — a kind of Fairtrade label for machine learning.
The unanswered questions are thorny. Where is the line between parody and infringement when memes are involved? How much transformation is enough to count as fair use, especially in advertising? And how should the law treat models that have “seen” copyrighted work in training even if they don’t reproduce it exactly?
For now, founders should assume the practical standard is much stricter than the theoretical one. If a reasonable person can look at your ad and say, “That’s the ‘This is fine’ dog,” you probably need a licence. And if your entire brand promise is that AI can replace humans, you can’t afford to treat human creators as disposable.
The bottom line
Artisan’s alleged use of the “This is fine” comic isn’t a quirky meme story; it’s a warning shot. It shows how quickly AI startups can burn through goodwill — with artists, with the public and with regulators — by treating culture as a free dataset. Sustainable AI innovation will require treating creators as partners, not fuel. The real question for the industry is simple: will it learn that lesson voluntarily, or only after courts and lawmakers make the fire too hot to ignore?