
Baking in AI Copyright: The 3 C's Every Creator Needs to Protect Their Work

Creators: find AI that works for you, not one that takes from you. New, easy AI tools for creators stack the deck in your favor. Learn how Control, Consent, and Compensation give you protection and potential. EP #85

That buzz when you discover something that feels right - that's how I felt about the UK's 'Make it Fair' campaign for creators' rights with AI.


Then boom - nothing. Like creators trying to make up for lost time.

And what’s “AI fair” varies depending on the country you are in…

Like chasing a static object dancing in random neural networks.

Or eating over-ripe sourdough bread in a seedy San Francisco bus stop.

Sourdough bread and AI. What do they have in common?

Both begin with something small that keeps growing, learning, and evolving.

Some sourdough starters are over 100 years old, taking in new flour and water each time, growing into something new. AI does the same with creative work.

But while everybody loves the smell of fresh bread, we're less comfortable with AI consuming our creativity without clear boundaries.

At least Japan gives you rules you can understand. Pretty much every other country is sitting on the sidelines, waiting for courts to figure it out.

The battle for creative work isn't tech versus artists.

It's about whether we value both the starter and what grows from it.

Just as sourdough requires regular feeding, AI needs continuous training with fresh data - not just now, but 100 years from now.

In the future that won't be the Wild West AI of the past few years.

That's why the UK's "Make It Fair" campaign launched this week, another voice shouting the same message.

Meanwhile in the courts and regulatory agencies, many decisions will be made this year.

Creative industries in many countries are fighting back against rules letting AI use copyrighted work without permission, payment, or involvement from content owners.

They're not anti-tech, they're pro-creator, demanding the obvious: consent and compensation.

Bakers often share their sourdough starters as gifts.

AI companies in many countries took creative work without asking. We're not saying stop.

Act like a baker, not a unicorn.

We're saying think and ask before you act.

This battle between creators and AI isn't just playing out in studios and corporate boardrooms between lawyers.

It's happening at the international level, with two countries taking completely different approaches to the same problem.

"The battle for creative work isn't tech versus artists.

It's about whether we value both the starter and what grows from it."

And is what's "fair" in the UK the same in the US, or Japan?

Japan vs. US: Two Models For AI's Creative Future

In Japan, essentially all copyrighted data may be fed into AI training. That might sound like a free-for-all, but they've established rules, boundaries, and practices that make this work better than you might expect.

Japan's approach to AI is community-focused, viewing creativity as an evolutionary timeline.

What somebody creates now builds on what came before, like software development. They're encouraging learning and training while still providing protection for creators.

The Japanese model doesn't mean creators have no recourse. Even without lawyers, they offer mediation and arbitration so anyone can get their case heard.

It's not perfect, but it creates a framework where AI development and protection can coexist. And it’s also indicative of the culture of copyright in Japan.

This contrasts with the US approach, where we're still figuring things out case by case, waiting for the courts to establish precedents.

Our uncertainty creates anxiety for both creators and engineers. Both feel a negative outcome would be destructive to their careers and businesses.

Whatever happens, half will feel let down, and possibly left behind.

Japan's copyright rules for AI rest on two key pillars that balance progress with protection:

The Non-Enjoyment Purpose Requirement

Intention matters. Japan allows copyrighted content to train AI systems if the purpose isn't to recreate the original work. You're not bringing in Picasso’s paintings to create more Picassos - you're learning artistic styles, influences, and color theory.

This creates breathing room for AI development while maintaining boundaries around blatant copying. The focus isn't on inputs but outcomes.

Article 30-4 Proviso: Where Protection Lives

Japan doesn't leave creators without protection. The proviso to Article 30-4 of their Copyright Act acts as a safety net, establishing that if AI output causes actual harm to a creator's career, reputation, or financial status, normal copyright protections kick in.

This practical approach says: let data flow into AI systems, but watch carefully what comes out. If the output creates direct competition or damages creators, that's where the line gets drawn.

“Make it Fair” for who?

Meanwhile, the UK is having its own creator uprising. The "Make It Fair" campaign launched in February 2025 stands in stark contrast to their 2019 rules regarding AI and copyrightable data.

While the 2019 rules allowed AI to use copyrighted data without permission or payment, "Make It Fair" aims to protect creators' rights and ensure fair compensation.

The campaign addresses the broader impact on creative industries, which generate over £120 billion annually for the UK economy.

They're advocating for stronger copyright protection, opposing government proposals that would weaken existing laws.

Unlike the 2019 rules, the campaign emphasizes the need for AI companies to obtain permission and provide payment for using creative content.

"Japan's approach creates breathing room for AI development while maintaining boundaries around blatant copying. The focus isn't on inputs but outcomes."

Fair for creators versus Fair Use for AI is a defining point in legal decisions.

AI as Hero AND Villain: The Joseph Campbell Perspective

I'd love to ask mythologist Joseph Campbell about AI; he'd likely see our collective anxieties as a projection of our shadow selves.

Just as vampires emerged from three people creating horror stories in 1816 and evolved into a cultural phenomenon, AI represents our deepest fears and highest hopes.

Campbell would recognize AI playing dual roles in our modern mythology: hero and threshold guardian.

For AI enthusiasts and engineers, it's the hero's journey personified - overcoming odds, transforming challenges, and bringing about a whole new world.

It represents progress, potential, and possibility.

For creators whose work has been taken without permission, AI represents the opposite - a destroyer, not a builder of worlds.

It's the threshold guardian, that archetypal figure standing at the door between the old world and the new, challenging whether we're ready to proceed.

This tension isn't new. When electronic synthesizers entered music in the 1970s, many musicians feared the end of "real music."

When digital photography emerged in the 1990s, professional photographers predicted the death of their craft (though Kodak was crushed).

When CGI revolutionized filmmaking in the early 2000s, actors worried they'd be replaced by digital avatars (and that’s looking more possible with AI).

Each time, the technology became a tool that expands rather than replaces the human element. The initial fear gave way to integration - but not without establishing boundaries and rules.

Oh yeah, and don’t steal things first without asking.

Silicon Valley finds that a dated practice, and in this case it’s hard to stop the AI train.

That's why creators are so concerned with legacy. If they can't make a living from their work like Dr. Polidori (who wrote one of the first vampire stories), why create at all?

The fear isn't about AI creating - it's about AI regurgitating what humans have already done without adding value or respecting the source.

Our relationship with AI is complex. We’re both drawn to its potential and frightened by its impact. Like all threshold guardians in mythology, it forces us to confront hard questions about what we value and how we define creativity itself.

"AI plays dual roles in our modern mythology - both the hero on a journey to transform the world and the threshold guardian challenging whether we're ready to proceed."

The 3 C's: Creator Control Toolkit

The Creator Control Toolkit is a way to grab back ownership of your creative work in the AI age.

While many creators feel powerless against AI training on their content, there are practical steps you can take right now.

Control - Take Back Your Creative Power

Up until recently, creators have had virtually no control over how their work is used online. People can take, sample, and repurpose your creative materials with little recourse. But that's changing.

  1. Time/Date Stamp Everything - Whatever you create, get an official timestamp. Services like Credtent can provide certificates or blockchain records that can be updated.

  2. Set Up Robots.txt - This simple file on your website can block search engines and some AI crawlers from scraping your content. While not foolproof (many large language models ignore it), it establishes your intent.

  3. Update Your Copyright Notice - Ensure your website explicitly addresses AI training in your privacy policy and terms of service. (I've posted a template on my Substack that you can use - I got it from ChatGPT; it's that easy!)


    ChatGPT No AI Training language example:

    Not a legal contract - run this by an attorney. I am not one.

    No AI Training Clause

    1. Prohibition on AI Training Use
      You may not use, reproduce, or distribute any content from this website, including but not limited to text, images, videos, or other materials, for the purpose of training, fine-tuning, or enhancing any artificial intelligence (AI) models or machine learning algorithms without prior written consent from [Your Company Name].

    2. Licensing Requests
      We recognize that there may be legitimate uses of our content for AI development. If you are interested in licensing our content for such purposes, you must contact us directly at [Contact Email or Form] to discuss terms and obtain explicit permission.

    3. Breach and Consequences
      Unauthorized use of our content for AI training or related purposes is a violation of these Terms of Service and may result in legal action, including but not limited to injunctive relief and claims for damages.

    4. Monitoring and Enforcement
      We reserve the right to monitor usage of our content and take necessary action to protect our intellectual property. This includes contacting organizations or platforms that may misuse our content.

    5. Reservation of Rights
      All rights not expressly granted are reserved. This includes the right to refuse any licensing requests at our sole discretion.

    If you have questions about this policy or wish to request a license, please reach out to [Contact Information].

  4. Specify Your AI Training Policy - Clearly state whether you allow AI training on your work. If you do allow it, under what conditions? With licensing? For free? Make this clear.

  5. Run Anything Legal by an Attorney - I'm not an attorney, and this isn't legal advice; these are simply actions you can take to protect yourself.
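To illustrate the robots.txt step above, here's what a file blocking well-known AI crawlers might look like. The user-agent names shown (GPTBot, Google-Extended, CCBot, ClaudeBot) are ones those companies have published, but the list changes often and compliance is voluntary - check each crawler's current documentation before relying on it:

```text
# robots.txt - ask common AI-training crawlers to stay out
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

# Everyone else (including normal search engines) may continue
User-agent: *
Allow: /
```

Place the file at the root of your domain (yoursite.com/robots.txt). It's an honor system, but it documents your intent - which is exactly the point.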

Consent - Set Clear Boundaries

Consent means having a say in how your work is used. Without establishing boundaries, you're giving permission for your content to be used however AI companies see fit.

  1. Be Specific About Permissions - Don't leave it to interpretation. State clearly what is and isn't allowed with your content.

  2. Watch For Services That Assume Consent - Some platforms' terms of service include language that grants them rights to use your content for AI training. Read the fine print!

  3. Join Collective Licensing - There's strength in numbers. Publishers like Penguin are bringing authors together for collective AI licensing arrangements. Smaller networks are popping up for each creative niche.

  4. Document Everything - Keep records of your policies and when they were implemented. Screenshots saved to Google Drive or a similar service help.
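The "Time/Date Stamp Everything" and "Document Everything" steps can be combined into one simple habit: fingerprint each work and log when you recorded it. Here's a minimal Python sketch - the function names are mine, not from any particular service:

```python
import hashlib
import json
from datetime import datetime, timezone

def fingerprint_work(content: bytes) -> str:
    """Return a SHA-256 fingerprint that uniquely identifies this exact file."""
    return hashlib.sha256(content).hexdigest()

def record_entry(title: str, content: bytes) -> dict:
    """Build a log entry pairing the fingerprint with a UTC timestamp."""
    return {
        "title": title,
        "sha256": fingerprint_work(content),
        "recorded_utc": datetime.now(timezone.utc).isoformat(),
    }

# Example: log an essay before publishing it
entry = record_entry("My Essay", b"Full text of the essay goes here.")
print(json.dumps(entry, indent=2))
```

A SHA-256 hash proves the file hasn't changed since you logged it, but the timestamp is only as trustworthy as wherever the log lives - that's the gap third-party registration services like Credtent aim to fill.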

Compensation - Get Paid For Your Value

If your work is valuable enough to train AI on, it's valuable enough to be compensated for.

  1. Explore Licensing Options - Smart contracts on blockchain platforms can automate licensing for your work.

  2. Consider Collective Bargaining with Providers Who Can Scale - When thousands or millions of creators act together, you have far more leverage.

  3. Evaluate Compensation Models - Not all payment structures are equal. Consider whether the compensation offered is worth permanently allowing AI to learn from your work.

  4. Remember: You Can't Turn Back Time - Once an AI has trained on your work, you can't undo it. Be intentional about your decisions.

It's easy to find reasons things won't work, but finding reasons they will begins with protection and understanding the assets you have as a creator.

They say AI output is just a roulette wheel of randomness. That's only true if you let it be random.

Maybe AI gives us an ability to control the roulette - but only if we first control our relationship with AI.

Establish your presence and protection now, before regulations pile up. As AI gets better, you'll already be protected.

"The three C's for creators aren't just nice ideas - they're your lifeline in the AI era: Control, Consent, and Compensation."

Owning your creative future: making AI Sourdough with engineers

Ever taste 100-year-old AI? How about sourdough bread from Boudin Bakery in San Francisco with a starter from the Gold Rush Era?

Both sourdough and AI rose during a gold rush, are dynamic evolving entities, and need to be refreshed regularly - one with water and flour, the other with data.

AI has a lot in common with sourdough bread.

Sourdough has a starter that can be 100 years old, each time adding more as it keeps growing, learning, and evolving.

AI is similar, but while we all like sourdough bread (at least the smell of it), we don't like what AI's doing.

Part of the problem is we can't see, taste, feel, or touch AI - and we're told that our content fed into it is gone. The way it happened was such a slap in the face to creators.

What I like about Japan's model is they stated their approach clearly. I'm no lawyer and I'm not giving legal advice, but they took an interesting and intelligent approach that protects both AI and creators.

That's what I want to work on with you this year. I think both sides are heading toward the same goal and should work together.

This isn't about stopping AI - it's about making clear boundaries when AI is taking things, when it's replicating, and when it's intended to replace people.

The one thing Japan recognized that Silicon Valley missed is that AI isn't just about technology - it's about community. If you hurt individuals, you hurt the community.

What goes in is what comes out. Garbage in, garbage out. A relationship with your community is what will come out in your AI.

How can we find that copyright middle ground where creators and engineers work together?

Creators bring perspectives you don't see - they're the edge cases personified.

Engineers bring structure, coding, logic - everything. Together?

Smart software companies always rely on their users as a major source of growth, and AI won't be different.

They need creators. The challenge is acknowledging this opens the floodgates to lawsuits and compensation for everyone who's created content online.

It's the copyright version of mutually assured destruction - a zero-sum game.

If only one side wins, both sides lose.

What happens when countries take completely different approaches?

Japan is moving forward, allowing AI access to copyrightable content but with clear guidelines that make things easier to navigate than the US approach.

Currently, the US is still figuring out whether AI-generated content is copyrightable and what happens when copyrighted content goes into AI.

This is the core business practice, muddled by scraping, buying, and avoiding copyright until the lawyers come… which only happens when the money comes.

As we grow, each country thinks its AI approach is the right one, but Japan's model recognizes that the community - via its AI - needs to learn while protecting individual rights.

The US is making major copyright decisions around this time next year.

Do you think taking copyrighted content without permission should be allowed?

How do we handle permission at the scale of billions of documents?

Japan's model does something smart - it stands up for both AI's ability to use copyrighted materials to learn AND for the people who own the IP and content.

They're doing this case by case, and I think that's where the US is heading, just moving slower.

I don't want us to become exactly like Japan, but I'm keeping an eye on their approach. A lot of what engineers say makes sense - these models can get smarter and do things for us.

Not eliminating people, but handling tasks that aren't worthy of our time, freeing us to focus on what really matters.

Allowing people to do the work, not the busy work.

Lawsuits over IP scraped, bought, and ingested?
Busy work trying to avoid the inevitable and costly retrofit until the next round.

"This isn't about stopping AI - it's about making clear boundaries when AI is taking things, when it's replicating, and when it's intended to replace people."

RESOURCES

Make it fair: The government is siding with big tech over British creativity.

Report on AI and Copyright Issues by Japanese Government

NO&T IP Law Update

Artists release silent album in protest against AI using their work

Chinese Court Issues First Decision on AI Copyright Infringement

China’s First Case on AIGC Output Infringement--Ultraman

AI Copyright Tools

Pixsy - Fight Image Theft (founded 2014)

Created by Humans: Jan 2025

AI Rights Licensing Platform for books

Bernstein.io

Web3 solution to secure and leverage IP assets

MUSIC

SongSecure

https://songsecure.com/

CONTENT AND ART

Credtent.org

=============================

AI Optimist Playlist