In a dimly lit conference room in San Francisco, tech executives are huddled around a whiteboard, sketching out complex diagrams of AI safety protocols.
Across town, researchers are debugging thousands of lines of code meant to teach AI systems ethics. And in Washington, politicians debate new regulations for artificial intelligence.
They're all missing something hiding in plain sight.
Valory creates co-owned AI, a shift from traditional AI ownership models, which are typically centralized and controlled by a single entity (usually a large tech company such as OpenAI).
David uses a surprisingly obvious tool to regulate all this activity. We'll reveal it later, because many in AI never stop to consider how simple governing agents might be.
Co-ownership in AI involves multiple parties jointly holding rights to the intellectual property, decision-making authority, and potential profits generated by AI systems. Rules matter, and the hidden rulebook is waking up.
Follow Valory's work on X/Twitter @autonolas for Olas network and @ValoryAG for Valory, or visit their website at www.valory.xyz.
"I don't think it's helpful to think of the sort of singular AI system which will rule us all," says David Minarsch, CEO of Valory, leaning forward in his chair.
"I think we should design for a world where there's competition amongst AI, like there's competition amongst humans."
David Minarsch is the CEO of Valory and a pioneering force in the development of Multi-Agent Systems in the Distributed Ledger Technology (DLT) space.
With a PhD in Applied Game Theory from the University of Cambridge, David has extensive expertise in both theoretical and practical aspects of AI and blockchain technology.
He has founding experience through Entrepreneur First and has been instrumental in advancing the intersection of crypto and AI.
David's mission is to empower communities, organizations, and countries to co-own AI, fostering the creation of agent-based economies across major blockchains.
Through innovative projects like Olas Predict, David and his team are shaping the future of AI and blockchain integration.
The AI Agents Control Problem
Every day brings new headlines about AI systems going rogue - chatbots turning malicious, trading algorithms making catastrophic decisions, social media bots spreading misinformation.
The standard response is to try harder at programming ethics and safety directly into AI systems.
But there's a fundamental problem with this approach. As Minarsch explains,
"If I call into some centralized labs, I don't even actually know what's happening behind this API.
In the context of our stack, it's all open source. And the on-chain transactions are also trackable because it's a public blockchain."
This lack of transparency isn't just an academic concern. Take social media, where the line between human and AI-generated content has become increasingly blurred.
LinkedIn "pods" automatically generate engagement, while Twitter bots create a synthetic ecosystem that leaves users wondering if they're interacting with real people at all.
Valory, the Olas Protocol, and Smart Contracts May Bring Better AI Agents
Five years ago, Minarsch and his team at Valory began exploring a different approach.
"What we found is that you can actually benefit when you build these multi-agent systems when you give these agents wallets," he recalls.
This seemingly simple insight - giving AI agents their own crypto wallets - led to a much bigger revelation.
The team's first breakthrough came with prediction markets.
Their system, Olas Predict, has now processed over a million transactions, with AI agents participating in markets for everything from election outcomes to economic indicators.
But the real innovation wasn't in the predictions themselves - it was in how they controlled the agents' behavior.
The Technical Implementation of Rules
Rather than trying to program every possible ethical scenario into their AI agents, Valory created what Minarsch calls "rails" using smart contracts.
These contracts define clear boundaries for agent behavior while allowing flexibility within those boundaries.
"Rather than saying to the agent, 'Oh, here you have a bunch of funds and now go do it with it,' you can say, 'Here's a bunch of funds, but you're constrained to only do X, and it's constrained by cryptography so the agent can't work around it,'" Minarsch explains.
The technical implementation includes:
1. Wallet Integration: Each agent gets its own crypto wallet, allowing it to participate in transactions
2. State Machine Design: "On the outer control loop, we have this sort of finite state machine design... we effectively give the agent rails on which it can travel."
3. Smart Contract Rules: Contracts define permitted actions and consequences
4. Identity Verification: "What would be ideal is if I, as a user, can create a sort of set of credentials which define my online identity... and then that identity is cryptographically tied to my posts."
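The constraint idea behind points 1 and 3 can be sketched in a few lines. This is not Valory's code; it is a minimal Python illustration (all names invented) of the principle Minarsch describes: the agent may propose any transaction it likes, but the rules that decide whether it executes live outside the agent's control, just as a smart contract enforces them on-chain.

```python
# Illustrative sketch (not Valory's implementation): a wallet whose
# spending rules are enforced outside the agent, mirroring the way a
# smart contract constrains an agent on-chain.

from dataclasses import dataclass


@dataclass(frozen=True)
class Transaction:
    to: str
    amount: float


class ConstrainedWallet:
    """Executes only transactions to whitelisted recipients, under a
    per-transaction cap, and within the available balance."""

    def __init__(self, balance: float, allowed_recipients: set, max_per_tx: float):
        self.balance = balance
        self.allowed = set(allowed_recipients)
        self.max_per_tx = max_per_tx

    def execute(self, tx: Transaction) -> bool:
        # The rails: these checks are not something the agent can bypass.
        if tx.to not in self.allowed:
            return False
        if tx.amount > self.max_per_tx or tx.amount > self.balance:
            return False
        self.balance -= tx.amount
        return True


wallet = ConstrainedWallet(balance=100.0,
                           allowed_recipients={"prediction_market"},
                           max_per_tx=10.0)

assert wallet.execute(Transaction("prediction_market", 5.0))       # on the rails
assert not wallet.execute(Transaction("unknown_address", 5.0))     # blocked recipient
assert not wallet.execute(Transaction("prediction_market", 50.0))  # over the cap
```

In the real system these rules are cryptographically enforced on a public blockchain rather than in application code, which is what makes them auditable and tamper-proof.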
Co-owned AI: Real-World Applications
The applications already in production are impressive:
Prediction Markets
"These agents have done over a million transactions," Minarsch notes. "And hundreds of community members in our ecosystem run these agents... day in, day out, participating in prediction markets with your capital."
Trading Systems
Their trading agent allows users to give high-level instructions while the agent handles complex crypto trading operations autonomously - but always within pre-defined smart contract boundaries.
Governance
"You have this problem where people need to vote in these different protocols when they hold these tokens," Minarsch explains.
Their "Governator" agent handles voting rights while maintaining accountability through smart contracts.
Social Media Authentication
Perhaps most intriguingly, smart contracts could solve the bot crisis in social media by creating verifiable digital identities.
"What we really need is a link between any single post and who actually created it," Minarsch says.
The Revelation: Ancient Wisdom for Modern Problems
And here's the twist: the solution to controlling AI might not be in writing better code or creating more sophisticated neural networks.
Instead, it might be in using one of humanity's oldest tools for creating trust and enforcing behavior: contracts.
Just as human society uses contracts to define boundaries, establish trust, and create predictable outcomes, smart contracts could provide the same framework for AI systems.
They're transparent, immutable, and cryptographically enforced - exactly what we need for trustworthy AI.
Looking Forward: The Path Not Taken
While the tech world chases increasingly complex solutions to AI control, the answer might have been sitting in plain sight all along.
Smart contracts represent a bridge between human and artificial intelligence, allowing us to create systems that are both powerful and predictable.
The irony is striking: in our rush to create new solutions for AI control, we overlooked one of humanity's most successful inventions for governing behavior. Smart contracts don't just offer a technical solution - they offer a philosophical framework for thinking about AI governance. Instead of trying to program ethics into black boxes, we can create transparent systems of rules and incentives that both humans and AI can understand and trust.
As Minarsch puts it in his closing thoughts:
"Our mission is very long term. It's about giving everyone an autonomous agent that they can fully own and control that does arbitrarily useful things for them as they define."
In the end, the key to safe AI might not be in teaching machines to be more like humans, but in using human institutions to make machines more trustworthy.
The Multi-Agent Economy: A New Digital Society
What happens when AI agents become economic actors in their own right?
According to Minarsch, we're already seeing the emergence of what he calls "agent economies" - systems where AI agents interact, trade, and create value semi-autonomously.
"These multi-agent systems can be often understood like a mini economy," Minarsch explains.
"A business is one unit and then multiple of them becomes a small economy.
And so they become like their own users with their own desires and requirements."
This isn't just theoretical. Valory's prediction market agents have already conducted over a million transactions, demonstrating how AI systems can participate in complex economic activities while remaining within defined boundaries.
"Rather than you going and looking at some sort of event and saying, okay, I think this prediction market might not just reflect the reality, the agents do it," Minarsch notes.
Redefining Digital Identity and Trust
Perhaps the most profound possibility of smart contract-controlled AI agents lies in how they could reshape our digital society.
The current crisis of trust online - from deepfakes to bot armies - stems from a fundamental problem: we can't verify who (or what) we're interacting with.
"What we really need is a link between any single post and who actually created it," Minarsch emphasizes.
"The problem is that right now, that's just a trust assumption that we have to place on X, that it is with this kind of entity which is behind it."
Smart contracts offer a solution through cryptographic verification.
"What would be ideal is if I, as a user, can create a sort of set of credentials which define my online identity... and then that identity is cryptographically tied to my posts."
This could revolutionize social media in several ways:
1. Verifiable Content Origin
- Every post could be cryptographically linked to its creator
- Users could choose to reveal or conceal their identity
- Bot accounts would be clearly labeled as such
2. User-Controlled Algorithms
"If I could actually, as a user, select my own algorithm, that would be desirable," Minarsch explains. "We should not have to consume one version of the news, we should be able to compose that in the way we want and then have means to educate each other about maybe composing it in a different way."
3. Transparent AI Interaction
- Clear identification of AI-generated content
- Verifiable chains of content modification
- Traceable decision-making processes
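The core of point 1, binding a post to an identity credential, can be sketched briefly. A real deployment would use asymmetric signatures (for example ed25519) so that anyone can verify a post without holding the secret; the sketch below uses Python's standard-library HMAC as a self-contained stand-in, and every name in it is invented for illustration.

```python
# Sketch only: cryptographically binding a post to an identity.
# Real systems would use asymmetric signatures (e.g. ed25519) so
# verification needs no secret; HMAC is a stdlib stand-in here.

import hashlib
import hmac
import secrets


class Identity:
    def __init__(self, handle: str):
        self.handle = handle
        self._secret = secrets.token_bytes(32)  # the identity credential

    def sign_post(self, text: str) -> str:
        # Produce a tag that only the credential holder could have made.
        return hmac.new(self._secret, text.encode(), hashlib.sha256).hexdigest()

    def verify(self, text: str, tag: str) -> bool:
        # Constant-time comparison guards against timing attacks.
        return hmac.compare_digest(self.sign_post(text), tag)


alice = Identity("@alice")
post = "AI agents need rails, not just ethics modules."
tag = alice.sign_post(post)

assert alice.verify(post, tag)            # the post is provably Alice's
assert not alice.verify(post + "!", tag)  # any tampering breaks the link
```

The point is the property, not the primitive: once every post carries such a tag, the "trust assumption that we have to place on X" becomes a cryptographic check anyone can run.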
The Coming Crisis in AI Advertising
Minarsch raises a crucial warning about the future of AI without such controls:
"I'm 100% sure we're getting to this point where maybe we already are, where the bigger labs will recognize that their path to scale is... well, you don't serve the best response. You bring in your ads and you effectively have the agent actually purchase the outcomes or generate the outcomes which are most aligned with the highest paying bidder rather than with the end user."
This is exactly why user-owned and co-owned AI, controlled by smart contracts, becomes crucial.
Without it, we risk creating a digital landscape where AI serves advertisers rather than users.
Technical Implementation: Beyond the Basics
The technical implementation of these systems goes beyond simple smart contracts. Valory's approach includes:
1. Finite State Machine Design
"On the outer control loop, we have this sort of finite state machine design... we effectively give the agent rails on which it can travel.
And then within the states, the agent has sort of freedom to decide what to do."
2. Hybrid Systems
- Rules-based frameworks combined with learning systems
- Smart contract boundaries with AI flexibility within those boundaries
- Multiple layers of verification and control
3. Cross-Chain Integration
The system works across "all the big blockchains," allowing for maximum flexibility and interoperability.
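The outer-loop FSM idea from point 1 reduces to a simple invariant: the set of allowed transitions (the "rails") is fixed in advance, while the agent decides freely which permitted transition to take within each state. The following sketch illustrates that invariant; the state names and class are invented for this example, not taken from Valory's framework.

```python
# Illustrative sketch of an outer control-loop FSM (invented names,
# not Valory's framework): transitions are fixed in advance, and the
# agent chooses freely only among the permitted ones.

ALLOWED = {                        # the rails: the agent cannot add edges
    "observe":  {"decide"},
    "decide":   {"transact", "observe"},
    "transact": {"observe"},
}


class FSMAgent:
    def __init__(self):
        self.state = "observe"
        self.history = [self.state]

    def step(self, next_state: str) -> None:
        # A transition is taken only if it lies on the rails.
        if next_state not in ALLOWED[self.state]:
            raise ValueError(f"{self.state} -> {next_state} is off the rails")
        self.state = next_state
        self.history.append(next_state)


agent = FSMAgent()
agent.step("decide")     # permitted: observe -> decide
agent.step("observe")    # the agent freely chose to loop back
try:
    agent.step("transact")   # observe -> transact is not on the rails
except ValueError:
    pass
assert agent.history == ["observe", "decide", "observe"]
```

The learning system can be arbitrarily sophisticated inside each state; the FSM only constrains which states are reachable from where, which is what makes the agent's overall behavior predictable.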
The Path Forward: Co-owned AI
The vision extends far beyond current applications.
"Our mission is very long term," Minarsch states.
"It's about giving everyone an autonomous agent that they can fully own and control that does arbitrarily useful things for them as they define."
The implications of this approach extend far beyond technical implementation. Smart contracts could provide:
1. Democratic Control of AI
- Community governance of AI systems
- Transparent decision-making processes
- Shared ownership of outcomes
2. Economic Empowerment
- Direct user ownership of AI agents
- Fair distribution of AI-generated value
- Protection against centralized control
3. Social Trust Reconstruction
- Verifiable digital identities
- Transparent content origins
- Clear boundaries between human and AI interaction
The Revelation: Simple Solutions to Complex Problems
While companies race to create ever-more-sophisticated AI control systems, the answer might lie in one of humanity's oldest inventions:
the contract.
The future of AI safety might not lie in breakthrough algorithms or novel neural architectures, but in applying time-tested human institutions to this new frontier.
Maybe it's time to look back to move forward, using the wisdom of centuries of human cooperation to guide our creation of artificial intelligence.