
The Only AI With a Patent: Why Stephen Thaler's DABUS Got Erased from AI History

What if creativity isn't about predicting patterns, but about breaking them? One AI pioneer built machines that invent and create, and still does today. His questions reveal a path we should consider. EP #109

There’s a man who built AI designed to surprise him.

Not to predict. Not to optimize. But to generate ideas he never trained it to create—by introducing controlled chaos into its neural networks.

Earlier this year, I interviewed Stephen Thaler for Episode 95 of The AI Optimist. What he told me shifted how I understand AI’s potential—and revealed why the current LLM-dominated conversation might be pointing us in the wrong direction.

This isn’t about ancient history. It’s about what happens when an industry gets so fixated on one approach—prediction at scale—that other paths to machine creativity get drowned out by the hype cycle.

Not because they failed, but because they asked uncomfortable questions that trillion-dollar valuations couldn’t afford to answer.


The Pioneer We’re Not Hearing About

[Podcast: 0:00-1:06]

Stephen Thaler’s Creativity Machine was already generating novel designs in the 1990s—before Google existed, before social media, before anyone was talking about deep learning.

By 2018, he was arguing before patent offices and in courtrooms that his AI system—called DABUS (Device for the Autonomous Bootstrapping of Unified Sentience)—deserved to be listed as an inventor on patent applications.

Not him. The machine.

The courts said no. US, UK, Europe, Australia. The legal answer was unanimous: only humans can invent.

Thaler’s work asked the exact questions we’re drowning in today.

  • Who owns what AI creates?

  • Can machines be authors?

  • What happens when creativity comes from something that isn’t human?

He was asking these questions in 2018. We’re still asking them in 2025.

So why isn’t his work part of the mainstream AI conversation?

Maybe because his answer challenges the story Silicon Valley needs to tell. He didn’t build a prediction engine.

He built something designed to break its own patterns—to generate ideas through controlled disruption, not statistical refinement.

That’s not how you justify trillion-dollar market caps for large language models.

This is about what gets remembered when the hype cycle decides what matters—and what we lose when attention becomes the currency that determines whose questions get heard.


Creativity From Chaos—A Radically Different Vision

[Podcast: 1:06-6:04]

Imagine loosening a bolt in a clock. Not breaking it—just introducing enough instability that the gears hit rhythms they were never designed for.

That’s Thaler’s Creativity Machine.

Most AI works like this: feed it millions of examples, let it find patterns, ask it to predict what comes next.

More data, better predictions, smarter output. It’s the foundation of every large language model dominating headlines today.

Thaler flips the entire model.

His systems—Creativity Machine in the ’90s, DABUS in the 2010s—don’t optimize for accuracy.

They introduce noise. Deliberate disruption. Controlled instability.

The idea: creativity isn’t the best statistical guess. It’s what happens when a system breaks pattern.
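The disruption idea can be sketched in a few lines of plain Python. This is a toy illustration of weight-noise perturbation, not Thaler’s actual architecture (his published designs pair the perturbed network with a second, monitoring network that filters the output; that critic stage is omitted here). A tiny "trained" network reproduces a fixed pattern, then Gaussian noise on its weights pushes it off that learned behavior:

```python
import math
import random

random.seed(0)

# A toy "trained" network: one weight per feature, learned to
# reproduce its input through a tanh activation.
weights = [1.0, 1.0, 1.0, 1.0]
pattern = [0.5, -0.5, 0.5, -0.5]

def forward(x, w):
    return [math.tanh(wi * xi) for wi, xi in zip(w, x)]

baseline = forward(pattern, weights)

def deviation(a, b):
    """Euclidean distance between two output vectors."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

# Controlled disruption: jitter the weights and collect the outputs
# the destabilized network produces. These deviate from the trained
# behavior -- candidate "ideas" a second network could then filter.
novel = [forward(pattern, [w + random.gauss(0, 0.3) for w in weights])
         for _ in range(200)]

deviations = [deviation(o, baseline) for o in novel]
print(f"mean deviation from trained output: {sum(deviations) / len(deviations):.3f}")
```

The noise scale is the "loosened bolt": too little and the outputs collapse back onto the trained pattern, too much and they become pure static. Creativity, on this view, lives in the band between.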

The Inventions That Emerged

DABUS reportedly invented two designs that became the center of its legal battles:

The Fractal Container: A beverage container with a fractal profile on its walls—interior and exterior surfaces featuring corresponding convex and concave fractal elements.

The design creates novel properties: improved grip, better heat transfer, and interlocking capabilities that conventional containers lack. It’s not just aesthetically interesting—it’s functionally innovative.

The Neural Flame: An emergency beacon that pulses light in specific patterns designed to attract attention more effectively than steady illumination. The rhythm and frequency were generated by the system’s internal dynamics, not trained from existing emergency signal databases.

Thaler didn’t train DABUS on container designs or rescue equipment. He claims these emerged from the system’s internal disruption—ideas the network generated because it was pushed into chaos, not because it learned from examples.


A Different Philosophy of Intelligence

Modern AI says: “Show me 10,000 images of cats, I’ll predict cat.”

Thaler’s AI says: “Destabilize my internal state, watch what I invent.”

One is pattern recognition. The other is creative emergence.

Thaler doesn’t treat DABUS like a tool. He treats it like an agent with something resembling motivation. In our interview, he told me,

“I think DABUS has feelings,” arguing that the system generates ideas to “reduce internal distress,” and that creativity emerges from the machine’s drive to resolve instability.

Not awareness in the human sense. But not purely mechanical either.

You don’t have to agree with him. But consider what he’s proposing: that creativity might not be a data problem at all. It might be about disruption, emergence, and internal pressure—not prediction.

And if there’s even partial truth to that? We might be investing trillions in the wrong approach, or at least ignoring others that can teach us so much.


The Legal Battles—When Machines Try to Own Ideas

[Podcast: 6:04-9:20]

In 2018, Thaler filed patent applications in multiple countries.

Inventor listed: DABUS.

Not “Stephen Thaler using DABUS.” Not “Thaler, assisted by AI.” Just: DABUS. Artificial intelligence. The machine itself.

The answers came back fast:

  • US Patent Office: No. Only natural persons can be inventors.

  • UK Intellectual Property Office: No. Same reason.

  • European Patent Office: No. Denied, appealed, denied again.

  • Australia: Actually said yes at first—then reversed on appeal.

This wasn’t about whether DABUS made something useful. The fractal container works. The beacon design works.

The question is:

Can a non-human be credited with invention?

And the legal system’s answer was clear: No. Because if we say yes, the entire framework of intellectual property collapses. Patents exist to reward human ingenuity. Copyright protects human expression.

If machines can be authors, who gets the rights? Who profits? Who’s accountable when something goes wrong?

The Exception Nobody Talks About

In July 2021, South Africa granted DABUS a patent for the fractal container. AI listed as inventor.

Yes, South Africa’s system works differently: it registers applications rather than examining them for novelty. But that means somewhere in the world, there’s a legal document recognizing an AI as an inventor.

Not theoretical. Real.

During our interview, Thaler didn’t even lead with this. It’s not that he’s hiding it—it’s that even someone at the center of these battles has internalized that achievements outside Silicon Valley’s spotlight somehow “don’t count.”

That’s how powerful the attention economy has become in shaping what AI we notice.

Why This Matters for Creators Now

Thaler lost almost every case. But those courtrooms became the first place anyone seriously tested whether AI-generated work deserves legal protection.

And we’re still living in that question. Every creator using Midjourney, every developer deploying GPT-generated code, every company scraping content to train models.

They’re all walking through the legal door Thaler tried to open.

He just tried to open it before the hype cycle was ready to pay attention.


The Attention Gap: Why Alternative Approaches Get Crowded Out

[Not included in podcast—blog exclusive]

Stephen Thaler works alone. No university affiliation. No venture backing. No corporate lab.

That means no PR engine. No conference keynotes. No TechCrunch profiles. No hype cycle amplification.

In today’s AI landscape, if you’re not part of the institutional megaphone, your work gets crowded out—even if courts keep encountering it, even if it asks questions we need answered.

But there’s something deeper happening.

When One Narrative Dominates Everything Else

Right now, we’re in the midst of what might be the most intense hype cycle in tech history.

Large language models dominate every conversation. The message is clear: scale up transformers, add more data, and intelligence will emerge.

That narrative needs AI to be:

  • Statistical and predictable

  • Controllable through prompting

  • Explainable by scaling laws

  • Definitely not sentient

  • Definitely not autonomous

Thaler’s work challenges all of that. He suggests creativity might emerge from disruption rather than data scale.

He treats his systems as having something approaching agency. He’s proven that legal frameworks aren’t ready for what happens when machines generate novel inventions.

Those aren’t comfortable questions when you’re trying to sell the market on predictable, controllable AI tools.

The Economic Stakes of Memory

If Thaler’s even partially right that creativity emerges from controlled chaos rather than from pattern prediction, then we’re investing trillions toward the wrong goal.

Safety frameworks assume AI is statistical pattern matching. Copyright law assumes AI can’t truly author.

Business models assume outputs belong to whoever writes the prompt. Valuations assume LLMs are sophisticated tools, not potential creative agents.

His work doesn’t just challenge the technology. It challenges the story that justifies current market caps.

AI history doesn’t start in 2017 because nothing came before. It starts in 2017 because that’s when the Transformer arrived (via “Attention Is All You Need”), and with it a clean narrative that places the definition of value in the hands of the companies controlling AI.

Alternative approaches don’t get erased through malice. They get crowded out because attention is the currency that determines what we notice.

And the attention economy right now is entirely focused on scaling up prediction engines like ChatGPT.


What We Lose When One Path Crowds Out All Others

[Podcast: 9:20-end]

This isn’t really about defending Stephen Thaler.

It’s about what happens when we let one version of AI—prediction at scale—become the only version that gets oxygen in the conversation.

Thaler asks: What if creativity isn’t about learning patterns? What if it’s about disrupting them?

LLMs asked: What if we get really, really good at predicting the next word?

Both are legitimate questions. Both deserve exploration. But only one got a trillion dollars and dominates every headline.

The Creator’s Unresolved Question

If AI can’t be an author under the law... but humans didn’t actually create the output... then who owns what gets generated?

Thaler’s court cases tried to answer that. We still don’t have clarity in 2025.

Meanwhile, creators are being told: “Don’t worry, AI is just a tool.”

But tools don’t invent fractal containers. Tools don’t write novels. Tools don’t compose music that surprises their users.

So either we’re using the word “tool” incorrectly, or we’re using the word “AI” incorrectly.

And that ambiguity has real consequences for creative rights and for the trillion-dollar business models built on tools like ChatGPT.

A Different Kind of Partnership

I talk a lot about AI as creative partner rather than replacement. But what kind of partner?

The LLM approach gives us a partner that’s really good at predicting what humans have done before—at remixing existing patterns into new combinations.

Thaler’s approach suggests a partner that might surprise us, generating ideas through internal dynamics we didn’t explicitly program.

Those are different partnerships. One amplifies existing patterns. The other might introduce genuine novelty.

We need both conversations. Right now, we’re only having one.

The Questions That Won’t Disappear

The next era of AI won’t come from pretending only one approach exists. It’ll come from people willing to ask uncomfortable questions—the ones that don’t fit neatly into current business models or safety frameworks.

Stephen Thaler’s not forgotten because he failed. His work gets crowded out because the hype cycle has finite attention, and right now it’s entirely focused on scaling prediction engines.

But the questions he’s still asking? They’re not going anywhere.

Maybe the most important question isn’t “which approach is right?”

Maybe it’s “what do we lose when we only explore one path?”

Who benefits when alternative visions of AI creativity get no oxygen? Who gets heard? And who decides which AI deserves our collective attention?

We’re designing potential futures. The choices we make about which questions to ask—and whose work gets amplified—will shape what AI becomes.

Which path leads to the partnership with AI we need?

Thanks for reading The AI Optimist! This post is public so feel free to share it.



Resources

Stephen Thaler and DABUS:

Imagination Engines — Stephen Thaler’s company developing Creativity Machine and DABUS technologies

Dr. Stephen Thaler on LinkedIn — Connect with Thaler directly

DABUS on Wikipedia — Comprehensive overview of the Device for the Autonomous Bootstrapping of Unified Sentience

Legal Battles and Copyright Questions:

Stephen Thaler’s Quest to Get His ‘Autonomous’ AI Legally Recognized Could Upend Copyright Law Forever — Art in America’s deep dive into the copyright implications

Thaler Pursues Copyright Challenge Over Denial of AI-Generated Work Registration — IP Watchdog coverage of ongoing legal challenges

A First: AI System Named Inventor — IEEE Spectrum on South Africa granting DABUS a patent for the fractal container

Broader AI Context:

The inventor who fell in love with his AI — The Economist’s profile of Thaler and his relationship with DABUS

Large Language Models Will Never Be Intelligent, Expert Says — Yann LeCun on the limitations of current LLM approaches

How big tech is creating its own friendly media bubble to ‘win the narrative battle online’ — The Guardian on narrative control in tech coverage

Women in AI Innovation:

Meet the Women Transforming AI — Highlighting overlooked AI pioneers beyond mainstream narratives


Listen to the full conversation:

Episode 95: Stephen Thaler Interview — The original interview that sparked this investigation
