For two years, I've documented the AI content wars from every angle - court cases, licensing deals, developers' reasons for taking content without permission or payment, and the gut-punching threat that greets every creator who does the work for little reward.
Judging creators as standoffish, privileged pains in the ass reveals the true element lacking in AI: respect for the people who create but aren't engineers.
Because you don't build brilliance by lacking respect for humanity.
The singularity isn't just about humans and machines melding - it's about a better life for people. AI companies throw this out with casual disregard, and it shows.
Wonder why AI adoption is so slow?
Why only 3% are willing to pay for AI?
Why Big Tech forces AI into every piece of software without choice, telling the rest of us to 'adapt or die' as if they're in control?
This is what causes crashes and bad business. And oh yeah - what's your business model?
Because except for ChatGPT, most willingly hide from having one. After all, business models and plans and people are distractions from this grandiose vision of AI.
I've worked with engineers for years. Many stare into the mirror of their own creativity like the artists they disdain.
They're myopic, don't listen to users, convinced they're geniuses. Dunning-Kruger at best, inferiority-superiority complex at worst.
Meanwhile, great engineers listen to people. They don't start with tech - they start with users, the ones who show you what's up.
Creators understand it isn't their work that matters - it's the audience's reaction.
What they create isn't what the audience consumes. It means different things to different people. Unpredictable.
That's how you find a business model: customers plus onboarding and retention. Relying on the latest cool AI is so yesterday.
Most of it's based on a few LLMs with little differentiation. This game gets hard when you're stuck in the engineer bubble hoping the hype lasts.
And it doesn't - which is good. What's happening now is ego.
Look at Safe Superintelligence - Sutskever running a secret project with no plan, no business model, just billionaire egos getting stroked.
But these dances end when truth surfaces. What people do with your tech matters more than what you say.
Engineers fall on the cross of their own egos, staring into mirrors users don't share.
You're part of the world. Join it. Get a business model. Stop bragging you should get content for free because 'AI will free the world.'
Then tell me why DeepSeek is every bit as good with less money - because they focus on solving problems, not massaging engineer egos.
That bubble is popping. Here's how I know.
Part 1: The Divide (Bruce Randall Interview)
Why both sides think they're right - and why that's the problem
Bruce Randall cut right through my bias in ways that made us both laugh - because we recognized ourselves in the problem. Here's what I heard:
"I think both sides are like incredibly similar... They're entitled. Their work sucks.
I don't like their work, so they shouldn't get paid for it. Right?
Versus the creatives. Like I should get paid for everything that I don't get paid for."
We're all just defending our positions instead of listening.
Engineers dismiss creators as entitled whiners whose work isn't worth paying for. Creators demand payment for everything that gets used.
Both sides have dug in so deep they can't see how similar their arguments actually are.
Bruce nailed the core issue - it's not about who's right, it's about perspective and how people develop that perspective.
Once they lock into their worldview, they resist change because they believe they're absolutely right. The other side believes they're absolutely right too.
Then Bruce said something that stopped me cold:
"And then when you start going inside, you start developing. You start seeing that it's all the same, right? It's just a matter of what degree to what side you're on."
That's the breakthrough most people miss. It's not two different species fighting - it's humans with different stakes in the same system.
The engineering mindset that solves technical problems runs into the creative mindset that solves human problems, and instead of collaboration, we get tribal warfare.
The solution isn't picking sides - it's recognizing we're humans being imperfect and learning from each other.
That's what AI needs too. Not just code and training data, but understanding the impact on people.
Not parading around with "AI First" and "Stop Hiring Humans" with pride.
It's cool to be efficient, but it's cruel to focus your future on the destruction of someone else's present, life, and future.
Both sides could find common ground and build on it, but they're stuck believing their truths are the only truths that matter.
TLDR:
Both sides create "truths" from their beliefs, resist change once positions harden
Developers stereotype creators as entitled; creators want payment for everything taken
The similarity between sides is what they refuse to see
Solution requires finding common ground instead of defending positions
"AI First, Stop Hiring Humans" isn't progress - it's cruelty disguised as efficiency
Part 2: The Human Edge Makes AI Brilliant
Where else does that data come from - and the logic that makes sense of it?
Here's what many AI developers miss: you're not just scraping data, you're losing the source of what makes that data valuable in the first place.
"If we lose the human edge, we're going to lose the great AI that we could have.
Because if you lose that edge, you lose out on a lot. You lose showing us what could be, not just what was or what's popular now."
Are we going to live the way social media algorithms feed us content - just what we want, so we never actually grow?
"The edge isn't just a boundary, it's the frontier of human creativity.
And once you lose it, no amount of AI can bring it back."
For the few who choose to be creative - it's not a calling many answer; it's hard and rarely pays - it's good to nourish and nurture that, not simply take what they create without reward.
Look at what happened to Marvel. Those comics took years to develop, then the blockbuster movies became like AI is becoming - just kept repeating the good stuff, focusing on the violence, not the human interactions.
It went from meaningful to caricature. And those Marvel comics were founded by writers and artists with little pay, people looking at them like they were crazy.
Ask Stan Lee how long it took - there's no guarantee. Same thing with AI.
"The human edge is the source of creativity. Otherwise, we're just spinning the same tunes over and over again. It's sort of dull. Do you really want 2025 on repeat?"
When everything goes into some AI database and over time becomes just an image we can sort of recreate, you lose the breakthrough.
TLDR:
Creativity comes from the margins, gets killed when everything moves to the algorithmic middle
Marvel went from breakthrough to repetitive caricature - same path AI is on
Taking without nurturing kills the creative source that makes AI valuable
Part 3: Andersen et al v. Stability - The Legal Crack
Judge Orrick's discovery ruling - why this small win could crack Big Tech's wall
Sarah Andersen and her group of artists just landed something that seemed impossible - they cracked open AI's black box, even if just a little.
After months of legal maneuvering where it looked like Big Tech would shut this down before it even started, Judge Orrick delivered a surprise.
The artists have only one angle to protect them - copyright law. And the copyright lawsuit was at a point where it could have been dismissed before discovery, where lawyers sit down and ask the hard questions.
This is where they discover what each side actually has, what evidence gets brought into the case, and whether it's legal or not.
If it doesn't go to discovery, AI companies don't have to share anything about what's actually going on behind the scenes. But Judge Orrick said yeah, it does.
"Now they're going to have to open those black boxes of AI not only tell us how to work, but telling the decisions that went through to grabbing those materials."
Lawyers for the artists can now peer inside and examine documents from Stability AI (maker of Stable Diffusion), Midjourney, and DeviantArt, revealing more details about their training datasets, their tech, and how we got here in the first place.
Something private companies don't have to share unless they're credibly accused of doing something illegal - like maybe violating copyright law.
Remember what Stability's CEO said?
"We took them. Now we can recreate them and do iterations."
Laws are built on intent - why you did something. And OpenAI says it can't create this without copyrighted material and won't pay for it. The intent is actually pretty clear.
This isn't a legal decision yet - the case has got a ways to go.
But it means the case has enough merit to warrant that deeper discovery. That's what makes this huge.
Even though this is a small victory, remember - this is a giant underdog fight. How do they win against Big Tech companies whose business model is "steal and defend later" - take what you want, keep the lawyers ready, and ask forgiveness after the fact?
This small victory is a crack in the wall of Big Tech dominance and could lead to more accountability in the future and more respect for the people who create the content.
TLDR:
Andersen v. Stability moves to discovery - AI companies must reveal training data sources
Judge found "sufficient" evidence of induced copyright infringement to proceed
Companies must explain decision-making process behind grabbing copyrighted materials
Small crack in Big Tech's "steal and defend later" business model
Discovery could expose intentional copying vs. claimed technical accidents
Part 4: Ultraman in China - The Economics Lesson
When courts actually side with creators - and why it matters
Here's where things get weird. In a Shanghai courtroom, a Japanese superhero won a copyright battle against AI.
Meanwhile, in Tokyo, that same AI could legally devour everything in sight.
This isn't just about Ultraman - it's about your work becoming someone else's fuel, and Japan is showing us how it happens.
Japan has one of the most permissive AI models in the world.
That manga you created? Legal training data.
That song you recorded? Fair game for AI.
But it's not that simple - there are boundaries and rules: AI can access copyrighted content to learn, as long as it doesn't replicate or copy it, or cut into the money creators earn from their work.
That last part is crucial: financial harm.
"As long as it doesn't do it, to really replicate the copy it, or to challenge somebody financially."
But in China, they drew a different line. When AI companies used Ultraman's likeness to generate content that competed directly with the original, the court said no.
The key wasn't that AI training happened - it was that economic damage occurred.
This case reveals the real test for copyright protection worldwide. Courts aren't interested in abstract principles about AI ethics or creative rights.
They care about economic impact and demonstrable damages. Show real financial harm, and courts will lean toward creators. Show no damages, and you get no judgment.
The weird contradiction here is telling. Japan allows AI to consume everything but protects against economic impact through the courts.
China, where most people think copyright law is ignored, actually enforced it when clear loss of money to the brand was proven.
This signals something important: the licensing wave isn't about moral arguments or creator rights. It's about business reality.
As AI companies start making real money from generated content, they create real economic competition with original creators. That's when courts start paying attention.
The Ultraman case shows us the future - not because of any grand legal precedent, but because it demonstrates the threshold where taking content becomes legally risky.
AI companies can scrape and train all they want, but the moment their output starts displacing original creators' income, they've crossed into dangerous territory.
TLDR:
Japanese superhero wins copyright case in China despite Japan's permissive AI laws (though the fine was paid by a website that used another AI to generate the image, so it doesn't really stop the problem)
Court focused on economic damage, not training data usage
Shows the real test: demonstrable financial harm to creators triggers legal protection
Licensing isn't about ethics - it's about avoiding lawsuits
AI companies safe until their output directly displaces creator income
Part 5: NY Times Reality Check - The 85% vs. 20% Revelation
What's actually happening vs. the headlines
The New York Times versus ChatGPT case has been my go-to example of AI companies literally reproducing copyrighted content.
I quoted that 85% accuracy rating from their legal documents, showed all those exhibits where ChatGPT spit out nearly verbatim New York Times articles.
Then a smart Substacker named Swen Werner made me look closer.
What I found changed everything about how I see this case.
The New York Times wasn't pulling out complete articles. They were pulling out snippets - little sections where they'd give ChatGPT the beginning of an article and ask it to continue.
These all drew on articles printed by the New York Times, and they showed tons of examples. But they were all short clips.
I didn't see one single full article reproduced, which is what I was expecting based on the way they positioned their case.
When Swen compared what was actually pulled out to the original articles, he found about 20% of the article content had that 85% similarity rate.
So 20% of the article showed 85% reproduction - not the entire piece.
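To put those numbers in perspective, here's a quick back-of-the-envelope sketch - the article length is a hypothetical assumption for scale, not a figure from the filing:

```python
# Hypothetical numbers for scale (not from the court filing): what
# "85% similarity across 20% of an article" amounts to in practice.
article_words = 1_000    # assumed length of a typical article
excerpt_share = 0.20     # portion of the article the exhibits covered
similarity = 0.85        # match rate within that excerpt

matched = article_words * excerpt_share * similarity
print(f"~{matched:.0f} of {article_words} words match "
      f"({matched / article_words:.0%} of the full article)")
# -> ~170 of 1000 words match (17% of the full article)
```

Even taking the exhibits at face value, that's roughly a sixth of a typical article - a long way from full reproduction.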
The New York Times was doing all sorts of things to influence the output, including techniques most of us can't do today.
For the general public paying $20 a month for ChatGPT, it would take enormous amounts of work to try to reproduce what's already been created.
"Large language models do what's called reconstruction. They put together different pieces, different starting with tokens, putting together words.
There's a whole science to it, but nothing is actually sitting there like the New York Times claims in memory."
The memorization argument falls apart when you understand how LLMs actually work. They reconstruct patterns, they don't store articles like a database.
When you can pull out Hamlet's "To be or not to be" passage, that's not because it's stored verbatim - it's because that pattern appears so frequently in training data that reconstruction becomes highly probable.
The same thing applies to that viral Guy Fieri article. It went massively viral on social media, got commented on everywhere, quoted endlessly.
All of that social sharing made it much more likely that content would be reconstructable from ChatGPT.
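To make "reconstruction" concrete, here's a minimal toy sketch - a 4-gram counter with an invented corpus, nothing remotely like a production LLM - showing why text repeated many times in training data comes back nearly verbatim even though no document is stored anywhere:

```python
# Toy illustration of reconstruction (NOT how GPT-class models work internally):
# the "model" stores only pattern statistics - 4-gram counts - never a document,
# yet heavily repeated text re-emerges because its continuations dominate.
from collections import Counter, defaultdict

# Invented corpus: a famous line repeated many times (like a viral,
# endlessly quoted article) plus a rare look-alike that appears once.
corpus = ("to be or not to be that is the question . " * 50
          + "or not to bet on it . ")
tokens = corpus.split()

# "Training": count which token follows each 3-token context.
model = defaultdict(Counter)
for i in range(len(tokens) - 3):
    model[tuple(tokens[i:i + 3])][tokens[i + 3]] += 1

def continue_text(prompt, length=8):
    """Greedy decoding: always append the most frequent next token."""
    out = prompt.split()
    for _ in range(length):
        counts = model.get(tuple(out[-3:]))
        if not counts:  # context never seen in training -> nothing to reconstruct
            break
        out.append(counts.most_common(1)[0][0])
    return " ".join(out)

print(continue_text("to be or"))        # -> the 50x-repeated line, end to end
print(continue_text("on a sunny day"))  # unseen context: no continuation at all
# model[("or", "not", "to")] holds {"be": 50, "bet": 1} - frequency,
# not verbatim storage, decides what gets "reproduced".
```

This is the argument above in miniature: the more often a pattern appears in training data, the more probable its reconstruction becomes - which is exactly why viral, much-quoted content is the easiest to pull back out.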
This case isn't really about copying snippets. It's about the New York Times versus ChatGPT as competitors.
The Times doesn't want ChatGPT to become a news source. They don't want it taking subscribers away.
Their worst case: you could go into ChatGPT and get New York Times content without paying them - that's obviously what this is all about.
But the evidence shows that's not what's actually happening. What's happening is business competition disguised as copyright violation.
It'll be interesting to see how this turns out, as licensing settlements have likely been offered along the way. But the NY Times sees something bigger - again, that economic damage. Keep an eye on this one.
TLDR:
NYT showed 85% similarity in snippets, not full articles - only 20% of articles affected
Required advanced prompting techniques most users can't replicate
LLMs reconstruct patterns, don't store articles in memory like databases
Viral content more likely to be reconstructed due to training data frequency
Real issue: business competition between NYT and ChatGPT, not only copyright theft
Case reveals gap between legal headlines and technical reality
Two years ago, I started this podcast as the AI Optimist, believing we'd find common ground between creators and developers.
I documented court cases, licensing deals, and arguments from both sides. Nuanced and balanced, this podcast never stood a chance of reaching the masses, because I wasn't playing the clickbait game - the game most who find an audience do play.
The common ground exists - but it's not where anyone expected.
It's in the reality that free content was always temporary, and ego-driven development was always unsustainable.
I've built businesses through crashes before. My startup's revenue tripled when the dotcom bubble burst - not through hype, but by focusing on what people actually needed.
This AI world has more money, and it's the same exclusivity problem: engineers only. That exclusivity kills growth.
There's more to the world than math and engineering. I love engineers - work with them constantly - and know in their hearts they're some of the kindest, most caring people.
But not with AI. This lack of appreciation for humanity shows, scares users, and isn't your best.
Want to make AI that changes the world?
Begin with yourself. Realize we're all serving humanity, not AI - despite what some leaders want.
Humans are amazing. Try looking at them the way you look at AI.
Look at Parasol Cooperative's RUTH - a chatbot helping people escape domestic abuse and human trafficking.
Silent, Hidden, Digital: AI Tackles Tech Abuse
I remember it so clearly: the exhibitor's booth with the simple name Ruth at TechCrunch Disrupt 2024, away from the posturing about AGI and the next trillion-dollar unicorn.
No data collection because their users' privacy isn't a corporate slogan, it's their lives.
No paywalls, no founder ego conferences. They love what they're doing and use AI for people who need help. AND need to keep it private.
Want a real Turing Test?
Be like Turing - doing things for society, caring even when society didn't care for his identity.
Prove you're human to another human not through IQ tests founded on eugenics, but by being real, caring, using your skills for what people need instead of telling them what they need.
You don't have to be in a cult. You can be part of the human race, as can AI.
And while you're debating superiority, DeepSeek builds better AI for less money because they focus on solving problems, not stroking egos.
Americans love saying we're the greatest - I love this country, but there's no proof we're better. We're colleagues, not masters of AI.
While China uses this for surveillance openly, we build surveillance capitalism and act superior. At least they're honest.
The great AI content free-for-all is over. The engineer ego bubble is deflating. The business model reckoning is here.
Your choice: Join the world, or keep pretending you're the genius while reality moves on without you.
Because this isn’t Dotcom with the US leading the way. The whole world is able to build AI today.
Most aren't doing it to threaten people; they're trying to help them. And many in the US are doing this too, but the leadership of AI is out of touch.
Billions of dollars will do that. Most of the world doesn't need billions to do this, and their solutions come from the people they design their AI for.
Turn the focus from Singularity to people, and watch what you find.
RESOURCES
If AI is an Inventor, then So is Nature - Robert Plotkin (EP #35)
The NYT’s AI Lawsuit Hinges on a Misleading Claim—And Nobody Noticed
OpenAI warns copyright crackdown could doom ChatGPT
How a New York Times copyright lawsuit against OpenAI could potentially transform how AI and copyright work
Who's suing AI and who's signing: January 2025 major AP, AFP and Axios deals announced - by Charlotte Tobitt
Top Takeaways from Order in the Andersen v. Stability AI Copyright Case - by Kevin Madigan
Oscar Wilde by Napoleon Sarony
Chinese Court Issues First Decision on AI Copyright Infringement
China’s First Case on AIGC Output Infringement--Ultraman
Report on AI and Copyright Issues by Japanese Government
Ultraman from Ultraseven, 27th Tokyo International Film Festival - photo by Dick Thomas Johnson from Tokyo, Japan, CC BY 2.0
Ultraseven from "ULTRASEVEN 55th Anniversary Special Screening" at the Red Carpet of the Tokyo International Film Festival 2022 - photo by Dick Thomas Johnson from Tokyo, Japan, CC BY 2.0