Olympic ice dance just had its first AI soundtrack scandal. Sports federations are not ready.

February 10, 2026
5 min read
Czech ice dancers performing on Olympic ice with digital soundwave graphics in the background

When a Czech sibling duo stepped onto Olympic ice this week, the real provocation wasn’t their lifts, but their playlist. Half of their rhythm dance music was generated by AI — in the style of 1990s rock — and uncomfortably close to real songs. Suddenly, a background detail became the main story.

What looks like a quirky footnote is actually a warning shot: generative AI has crashed into elite sport, and the rulebooks are blank. In this piece, we’ll look at what happened, why it matters far beyond figure skating, how it intersects with European regulation and copyright, and what needs to change before AI music becomes the default soundtrack of global sport.

The news in brief

According to TechCrunch, Czech ice dancers Kateřina Mrázková and Daniel Mrázek used partially AI-generated music for their rhythm dance at the Winter Olympics. The program had to match this season’s theme, “The Music, Dance Styles, and Feeling of the 1990s.” Their chosen mix combined AC/DC’s “Thunderstruck” with an AI-created rock track described in official documentation as “One Two by AI (of 90s style Bon Jovi).”

Earlier in the season, the pair reportedly used another AI-generated 90s-style song whose lyrics closely echoed New Radicals’ hit “You Get What You Give,” lifting recognisable lines and even the “one, two” opening. Ahead of the Games, they swapped those lyrics for new AI-generated ones that leaned suspiciously toward Bon Jovi, including a phrase that also appears in an existing Bon Jovi song, delivered in a vocal timbre very close to Jon Bon Jovi’s.

There is currently no explicit rule in International Skating Union (ISU) or Olympic regulations that bans AI-generated music. The program was therefore allowed, but it has sparked online backlash and debate about plagiarism, copyright and the role of AI in artistic sports. TechCrunch notes that all of this is happening while the music industry itself is experimenting with AI-native acts, including an artist persona built with Suno that reportedly landed a multi-million-dollar record deal.

Why this matters

On paper, nothing illegal has been proven. In practice, this episode exposes three uncomfortable truths.

First, athletes are being turned into involuntary beta testers for generative AI ethics. Mrázková and Mrázek did what many coaches and choreographers are surely considering: use AI to cheaply generate “original” music in a specific style, avoiding complex licensing deals. They followed the written rules — and still ended up as the face of a plagiarism debate on the world’s biggest stage. The reputational risk sits squarely on the athletes, while the incentives and grey areas were created by federations and rights holders.

Second, governing bodies are badly behind the curve. Figure skating has hyper‑detailed regulations on skirt length, lift positions and even the number of twizzles, yet says almost nothing about how music is created. The assumption was simple: music is either licensed or composed. Generative AI breaks that binary. It can spit out something that is “technically new” but statistically very close to existing works — especially when prompted “in the style of X.” Rulebooks written for CDs and orchestras do not survive contact with models trained on the entire history of recorded music.

Third, AI music shifts power in subtle ways. If federations and broadcasters start nudging athletes towards AI‑generated tracks because they’re “rights‑clean” and cheaper to clear globally, human composers and smaller labels lose a key revenue stream: bespoke pieces for sports programs, from gymnastics to skating to synchronised swimming. At the same time, original artists whose work was scraped to train these models may see their style echoed at the Olympics without consent or compensation.

The immediate implication: this won’t be the last AI soundtrack we hear in competition. But unless someone moves quickly, the next one may come with a lawsuit attached.

The bigger picture

This controversy sits at the intersection of two broader trends: the mainstreaming of AI‑generated music and the growing techification of sport.

On the music side, tools like Suno and Udio have already made it trivial to type “epic 90s rock song like Bon Jovi” and receive a polished track with vocals in seconds. According to TechCrunch, one Suno-powered persona, Xania Monet, has translated AI-assisted songs into a reported $3 million record deal. At the same time, we’ve seen previous uproar around AI-generated tracks mimicking famous artists’ voices, such as the viral fake Drake/The Weeknd song that was eventually pulled from platforms. The line between homage and impersonation is getting blurrier.

On the sports side, technology has long been welcomed when it improves judging accuracy or athlete performance: think Hawk‑Eye in tennis or VAR in football. But when tech touches the aesthetic core of a sport — its music, choreography or narrative — fans react differently. Ice dance, like gymnastics or artistic swimming, sells itself as a fusion of athleticism and artistry. Bringing in AI as a silent ghost composer raises existential questions: whose creativity is being judged? The skater’s, the choreographer’s, or the prompt engineer’s?

Historically, we’ve seen similar inflection points when new tools entered artistic sport. The arrival of powerful editing software made it easier to stitch together complex musical cuts; skating federations eventually had to regulate abrupt sound effects and lyrics. Now, generative AI is the new frontier. Where editing rearranged existing songs, AI can fabricate “new” ones that lean heavily on the statistical patterns of old hits.

Compared to other sports, figure skating may simply be the first to get caught on camera. Gymnastics, dance competitions, cheerleading, even marching bands all rely on themed playlists and tight licensing budgets. Anywhere there is pressure to find catchy, recognisable styles without paying top‑tier rights, AI is now an obvious temptation.

The industry signal is clear: generative music is moving from experiments on SoundCloud and TikTok into prestige, tightly regulated spaces. Once it’s on Olympic ice, it’s in every federation’s inbox.

The European / regional angle

From a European perspective, this is a three-way tension between athletes, rights holders and regulators.

The EU’s 2019 Copyright Directive already anticipated some of this. It introduced rules for text and data mining: AI developers can scrape works unless rights holders explicitly opt out. In practice, few musicians have the resources or knowledge to do that effectively. European collecting societies like GEMA (Germany) or SACEM (France) are already worried about AI models built on their repertoires without proper licensing. Seeing AI-style mimicry on the Olympic stage will only sharpen those concerns, and it could accelerate demands for collective licensing schemes specifically for training and generative outputs.

Then comes the EU AI Act. While it doesn’t ban generative AI for music, it does impose transparency duties for “synthetic” content in certain contexts, especially when there is a risk of deception. Today, a TV commentator casually mentioning “this first part is AI-generated” passes for disclosure. Tomorrow, broadcasters in the EU might be legally required to label AI-generated audio more clearly on screen or in event documentation. That would change the optics for federations considering AI soundtracks.

European audiences also tend to be more privacy- and rights-conscious than many US viewers. In countries like Germany or France, an AI track that obviously echoes Bon Jovi or New Radicals, built on unlicensed training data and then showcased at a state-supported sporting event, will be politically sensitive.

For smaller European federations and national Olympic committees, especially in Central and Eastern Europe, the economic temptation is real: AI music can appear “free” compared to dealing with global labels. But if the output ends up infringing, liability may land on broadcasters or event organisers in EU jurisdictions with strict copyright enforcement. That’s a risk accountants should not ignore.

Looking ahead

Expect this Olympic program to become a case study in future policy discussions. Several things are likely over the next four‑year cycle.

First, sports federations will start writing explicit AI clauses into their music rules. The ISU and national skating bodies can’t afford another Olympics where an AI track triggers a copyright scandal mid‑competition. We may see requirements such as: written confirmation that music is either licensed, self‑composed, or generated using tools whose training data and output rights meet specified standards; bans on “in the style of [living artist]” prompts; or at minimum, mandatory disclosure that can be checked against known works.

Second, the music industry will test its leverage. If any AI‑generated sports tracks are found to be substantially similar to specific songs, rights holders may pursue high‑profile enforcement, not only against the athletes but also against AI tool providers and organisers. Even the threat of litigation could push federations towards safer, human‑composed options.

Third, AI music platforms themselves will adapt. To stay on the right side of governments and labels, they may introduce style‑blocking (refusing prompts that closely target specific artists), stronger similarity checks against known catalogues, or special “broadcast‑safe” modes that attest to lower legal risk. That, in turn, will shape what kind of AI music is realistically usable in elite sport.

For viewers and athletes, the near‑term future will be noisy. Expect more debates about authenticity when AI soundtracks show up in gymnastics, dance, or even opening ceremonies. The key question isn’t “AI yes or no?” but “On whose terms, and with what guardrails?”

The bottom line

AI-generated music at the Olympics isn’t the apocalypse, but it is a loud alarm. It exposes how unprepared sports, law and culture still are for generative tools that blur the line between influence and imitation. If federations and regulators treat this as an isolated curiosity, they’ll be back here in 2030 with lawyers on the boards. The real decision is ours: do we want the world’s most human performances set to music prompted out of models trained on the work of artists who never consented, and if not, who will rewrite the rules?
