Getty Loses AI Copyright Case: What the UK Ruling Means for You - Creator or Not

Getty Images—the most litigious photo company on Earth—just lost their AI copyright case. If they can't protect their $3B brand with armies of lawyers, what chance do individual creators have? EP #108

If you’re a musician, writer, photographer, painter, designer, filmmaker—this matters to you. Right now.

Getty Images just lost a landmark AI copyright case in the UK. Not a small creator. Not someone without resources.

Getty Images, legendary for hunting down anyone who uses their photos without permission. The company with armies of lawyers, sophisticated tracking systems, and a reputation for being relentless about protecting their intellectual property.

They lost.

A UK judge ruled that when AI companies scrape your work, break it into millions of tiny pieces called “tokens,” and use those pieces to train their models, that’s not copyright infringement.

That’s fair use.

  • Musicians: Your melodies, your lyrics, your years of practice and creative evolution?

    Fair game for AI training. (Unless you happen to be in Germany, where one judge recently protected song lyrics. Good luck everywhere else.)

  • Visual artists: That painting you spent months perfecting, that illustration style developed over decades?

    AI absorbs it, learns from it, and generates work “in your style” without asking permission or paying you a dime.

  • Writers: Your voice, your stories, your unique way of seeing the world? Just words on the internet.

    Just data. Just tokens to be reassembled into something that’s “transformative” enough to escape copyright claims.

The legal argument is beautifully simple: once your work is broken into tokens, it’s no longer your work. It’s been transformed.

And courts around the world are buying it.

Recent San Francisco bus poster: Maya Ackerman

When Getty’s Watermark Becomes Evidence—And Still Loses

Getty’s case had evidence most copyright plaintiffs only dream of.

Stability AI’s image outputs didn’t just look similar to Getty photos. They literally displayed Getty’s watermark—that distinctive black banner with “Getty Images” and often the photographer’s name printed across it.

The company’s $3 billion brand, the visual signature they’ve spent decades building and protecting, started appearing on AI-generated images.

And not just on images that might have been scraped from Getty’s collection. The watermark appeared on completely different images—distorted faces, glitchy hallucinations, weird compositions that Getty never created or would ever associate with their brand.

Their logo had become a pattern that AI learned, a visual element that got baked into Stability’s model and started reproducing itself.

When your company’s trademark appears on inferior, sometimes grotesque images you never produced, that’s not just copyright infringement—that’s brand dilution, plain and simple.

Getty’s value proposition is quality, curation, professional imagery. Now AI is slapping their name on random generations.

This should have been the easiest copyright case to prove. You don’t have to demonstrate complex similarities or argue about artistic influence.

Don’t look at the hands. Source: Getty’s original complaint

The evidence is right there: Getty’s actual logo, on images, generated by a system that was clearly trained on their content.

Getty Images is known for being litigious about their IP—and for good reason. They’ve built a business on strict licensing, on making sure every use of their content is paid for.

They have the legal resources to pursue cases that smaller creators could never afford. If any company could win against AI scraping, it should have been Getty.

The UK High Court disagreed.

The Tokenization Defense: How AI Companies Are Winning

Here’s roughly how the judge may have viewed the law in this case.

When AI ingests your work, it doesn’t store it as a complete, intact copy. Instead, it breaks everything down into tokens, tiny fragments of data, and what it keeps is a web of statistical patterns scattered across the model’s weights.

The judge leaned on the favorite analogy of AI “optimists” (a camp that does not include yours truly): it’s like when you read a book and it influences your thinking.

You don’t have the book stored word-for-word in your brain. You’ve absorbed concepts, patterns, ways of expression. That’s not copyright infringement, that’s learning.

Yes, there’s a massive difference. When I read a book and it influences my writing, I might produce a few sentences over my lifetime that reflect that influence.

When AI ingests a book, it can generate millions of derivative works at scale, flooding the market with content that competes directly with the original creator.

But that distinction doesn’t seem to matter to the courts.

The tokenization defense works like this:

  • Your copyrighted work gets transformed into something fundamentally different. It’s no longer a book or a photo or a song—it’s mathematical representations of patterns and relationships.

  • Copyright law protects specific, fixed creative works. Once your work becomes unfixed, scattered into millions of tokens and associations, it’s something else entirely.

You can’t easily extract the original work back out. Research suggests you might be able to reconstruct maybe 20% of a book if you really tried, using specific prompts and techniques.

But you can’t just ask the AI to reproduce the complete original. The content is in there, influencing every output, but it’s not in there as a discrete, copyable thing.
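To make “tokens” concrete, here is a minimal sketch using tiktoken, OpenAI’s open-source tokenizer library (my choice for illustration; the ruling doesn’t name any particular tokenizer). It shows how a sentence becomes a list of integer IDs, the only form in which text ever reaches a model:

```python
# Minimal tokenization demo -- requires: pip install tiktoken
import tiktoken

# cl100k_base is the encoding used by several OpenAI models; it stands in
# here for whatever tokenizer a given AI lab actually uses.
enc = tiktoken.get_encoding("cl100k_base")

text = "Getty Images just lost a landmark AI copyright case in the UK."
token_ids = enc.encode(text)

print(token_ids)  # a list of integers, no readable sentence in sight

# Each ID maps back to a small text fragment (a "subword"):
for tid in token_ids:
    print(tid, repr(enc.decode([tid])))
```

And note the nuance the courts lean on: even this ID list isn’t what the model stores. Training only nudges billions of numeric weights in response to those IDs, which is why judges keep concluding that no “copy” survives inside the model.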

This isn’t unique to the UK ruling. I’ve been following at least ten major AI copyright cases over the past two years, across multiple countries.

The pattern is consistent: judges look at how AI works technically, see that it doesn’t store exact copies, and lean toward finding (with many rulings still pending) that this transformation is fair use.

There was a case in Germany recently where a court found that AI companies violated copyright by using song lyrics. But that ruling only applies in Germany.

And that is a fundamental problem with AI: it’s global. One country’s rules can’t contain it.

If AI companies can train their models anywhere in the world and then deploy them everywhere, strong copyright protection in one country doesn’t help.

The content has already been taken. We’re talking about events from six years ago or more.

AI companies scraped the internet long before most creators even understood what was happening.

Now we’re finding out, case by case, that judges are looking at this and deciding it’s legal. At least they did in Getty’s case; many other cases are still pending.

We’ve Become China: When IP Protection Dissolves, Content Is Sort of Open Source

We’re becoming China.

There’s been enormous political pressure—particularly in the US—to not let China beat us in AI development.

National security. Economic competitiveness. Tech leadership.

We can’t let China win this race.

So what did we do? We adopted China’s traditional approach to intellectual property.

Historically, China has been known for not protecting copyrights—particularly foreign copyrights—unless the work has significant social or economic impact on the country.

In practice, if your book or music or art makes a lot of money, if it has major cultural influence, you might get protection. If you have resources and lawyers and can prove economic damage at scale, you might get compensation.

But for everyone else? Your work is considered part of the commons. It’s shared intelligence.

It’s the natural passing on of stories and ideas. Taking it, using it, building on it—that’s how culture works.

The US and UK protect individual creators’ rights. We believe that even the solo artist, the independent writer, the small photographer deserves legal protection for their work.

You don’t need to prove massive economic impact. You don’t need to be commercially successful. If you created it, you own it.

Until now.

That was the deal. That was our advantage. We valued intellectual property to protect innovation and reward creativity.

Not anymore.

Now, just like in China’s traditional model, if you have money and lawyers—if you’re Getty Images with a $3 billion brand, or the New York Times, or a major record label—you can get a licensing deal.

AI companies will negotiate with you. You have the resources to litigate for years, making settlement worthwhile.

But an individual creator? You’re out of luck. Your work is training data. Your content is fair use. Your creativity is just tokens now.

The courts seem to be deciding that protection flows to those with significant economic power, not to individual rights holders.

We’ve adopted China’s model while claiming to compete against it.

Recent SF Billboard: Maya Ackerman

What This Means for Creators Going Forward

The courts have spoken, and they’ve essentially told creators that if AI can take your work, transform it into something else, and make it impossible to extract your original creation in its entirety—then it’s fair use.

This isn’t just a UK problem. It’s not just Getty’s problem. Not a single judge in the major cases I’ve reviewed has stood up and said,

“Wait a minute. Taking someone’s creative work, breaking it into pieces, and using those pieces to generate competing content. That’s still using their work.”

The legal system is built around a simple idea: copyright protects a static, unchanging creative work.

A book. A painting. A photograph. A song. One fixed thing that can be copied or not copied.

But AI doesn’t store your work that way. It learns patterns from your work. It creates associations. It generates something new-ish.

And judges keep ruling that because you can’t simply extract your original work back out of the model in its complete form, then there’s no copyright violation.

That’s the loophole. That’s the game. It’s not in there!

  • This ruling threatens the entire licensing model. Why would anyone pay Getty Images for stock photos when they can generate similar images for free using AI that was trained on Getty’s collection?

  • Why license music when AI can create “royalty-free” alternatives in any style?

  • Why pay writers when AI can generate content influenced by millions of scraped articles?

Baroness Kidron captured the absurdity perfectly when she said the High Court “chose to sanction a system that in effect says, ‘You can go abroad to break UK law and then bring the proceeds of that back’.”

AI companies can train models anywhere, using content scraped from everywhere, and then deploy those models globally while claiming they haven’t violated anyone’s rights.

Rebecca Newman, legal director at Addleshaw Goddard, put it bluntly:

“The UK’s secondary copyright regime is not strong enough to protect its creators.”

The same appears true in the US.

We’re not at the end of this legal journey. More cases are working through courts. Appeals will happen.

But you have to start looking at the patterns.

The momentum is not in favor of the creator; it favors AI.

SF boat: Maya Ackerman

The Economic Reality: When AI Becomes Business

We don’t have laws designed for this technology. The tech is brand new, or at least the application at this scale is new.

So how do we define what’s right? We follow the money trail.

Getty Images alleged that Stability AI didn’t just scrape their content—they also appropriated Getty’s brand in ways that could devalue it significantly.

When your trademark becomes associated with distorted, low-quality outputs, that has real economic consequences.

For a company whose entire value is built on premium, curated imagery, having their logo appear on AI-generated garbage is real damage. But copyright can’t protect against it.

This should have been the strongest possible case. Brand damage. Trademark dilution. Clear evidence of the source. Economic impact that could be measured in the billions.

It wasn’t enough.

Stability built its model by scraping copyrighted content (including but not limited to Getty’s) without permission or compensation.

If courts start ruling that training on copyrighted works requires licensing, it would be thermonuclear for the big players that everyone in the AI ecosystem orbits around.

The OpenAIs, the Anthropics, the Googles. Their models are trained on massive datasets that include copyrighted material.

Unwinding that, paying for it retroactively, establishing licensing frameworks going forward—the costs are staggering.

I don’t think it will come to that. The courts seem determined to find legal frameworks that allow AI development to continue unimpeded.

That means creators pay the price. So far.

What Can Creators Do?

So what now?

First, understand things are changing, but there are no rules yet.

Stop assuming your copyright means anything in the AI age. These court rulings are establishing patterns that are hard to ignore.

The legal protection you thought you had doesn’t apply the way it used to.

Second, adapt by controlling who sees your work.

If you want to keep work truly private, put it behind paywalls, behind passwords, off the internet entirely.

If you’re putting content online, your new job isn’t just creation—it’s GEO (Generative Engine Optimization). That’s the new SEO.

Figure out how to get your work into AI systems in ways that benefit you, because assuming you can keep it out is increasingly naive.
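If you’d rather opt out than optimize, the most concrete tool available today is your site’s robots.txt. Here is a minimal sketch; the crawler names below (GPTBot, ClaudeBot, Google-Extended, CCBot) are the published tokens as of this writing, but verify them against each company’s documentation, and remember that compliance is voluntary:

```
# robots.txt -- asks AI training crawlers to stay out.
# Only polite crawlers honor this, and it can't undo past scraping.

User-agent: GPTBot           # OpenAI's training crawler
Disallow: /

User-agent: ClaudeBot        # Anthropic's crawler
Disallow: /

User-agent: Google-Extended  # Google's AI-training opt-out token
Disallow: /

User-agent: CCBot            # Common Crawl, a major training-data source
Disallow: /

# Ordinary search-engine indexing stays allowed.
User-agent: *
Allow: /
```

This is damage control, not protection: as noted above, the large-scale scraping already happened, and nothing legally forces a crawler to respect these directives.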

Third, push for transparency.

If courts won’t protect creators retroactively, governments need to require AI companies to disclose what they’re training on going forward.

Transparency won’t fix past harms, but it might give creators some say in the future.

AI is way more than ChatGPT and text-to-image generators that need to scrape the internet.

Yann LeCun, Meta’s chief AI scientist, is leaving to build a startup focused on AI that learns by observation—more like how humans actually learn.

Watching. Experiencing. Understanding context. Not just ingesting every copyrighted work it can find and calling it “training data.”

The current model of “take everything, break it into tokens, call it transformative” may not be the only path forward for AI development.

But right now, today, it’s the path courts seem to be blessing.

Getty Images learned that the hard way, with the clearest evidence possible and resources most creators will never have. They lost anyway.

The courts aren’t protecting creators. They’re protecting the AI industry’s ability to grow without friction.

And in doing so, we’ve abandoned the principles of individual IP rights we once claimed made us different from China.

Your work is training data now. The only question is what you do about it.


Additional Resources

Blow for UK copyright holders as High Court sides with Stability in Getty infringement claim
Graham Lovelace’s detailed analysis of the ruling and its implications for creators

Music rights group scores landmark legal victory in copyright battle with OpenAI
Coverage of Germany’s ruling protecting song lyrics from AI training

Meta’s star AI scientist Yann LeCun plans to leave for own startup
