Watch now | It’s the end of the AI world as we know it. What’s next, post-hype? The answer will shape how you use AI: practical, not future-guessing. The crash will open up the real power of AI; here’s how. EP #52
Declan, your piece “The AI Bubble Burst Wakes Up” really resonated. The way you cut through the noise and focus on practical, value-driven use cases is exactly what’s needed as the hype deflates.
At Qognetix, we’ve deliberately taken that route. Rather than chasing abstract promises of AGI, we’re working with research partners to validate synthetic intelligence against scientific standards first. We see credibility as the foundation for adoption — if systems can’t be measured, tested, and trusted, they won’t survive once the bubble clears.
Like you, we think the next chapter belongs to those who ground their work in outcomes, not optics. For us, that means designing neuromorphic platforms where fidelity and transparency come first, so businesses and labs can build real solutions on top.
Your warning about “social media tricks” being the default example of AI hit the nail on the head. That gap between hype and substance is where we’re putting in the work — making sure our technology solves real problems and sets new standards for evaluation.
Thanks for calling this out. If more of the conversation shifts toward evidence, measurement, and tailored value, the AI field can finally evolve into something sustainable and transformative. That’s the path we’re trying to walk at Qognetix.
Nic, thanks for the kind words, and for what you're working on. Neuroscience, computing, and trust... really is the core of all this. We're applying how we think the brain works, and since it's all still theory, it's teaching us how little we know about our own brains.
Trust is the currency, and I'm working on an update to that piece this week. Not a major one, just looking at where we are and reviewing how well those ideas from a year ago have held up... though it's not that there aren't many who have understood this.
Cool part is that you're doing something about it. Share a link if you have one; I couldn't find it in a quick search, and I'd love to know more.
AI with substance, building trust. The first wave, in the US at least, broke that trust a bit by focusing on the tech and not the people using it. Common, and passing, plus there are so many cool projects that defy what the headlines say.
Thanks for the thoughtful response — I couldn’t agree more that trust is the real currency here. That’s exactly why we’ve been focused on grounding our work in science first, rather than hype.
We’ve just released our papers repository, starting with an open preprint on benchmarking and spiking fidelity, and we’ll be adding more in the coming weeks as we expand into network-level demonstrations and methodology standards. You can find it here: https://www.qognetix.com/papers/
Our aim is to show that synthetic intelligence can be built on biologically faithful principles and still deliver something practical, reproducible, and trustworthy. Excited to share more soon — and really appreciate you highlighting the human side of this conversation.
Sounds fascinating. A startup I'm helping based its SLM on biology, starting from the cells and building up to the body, for business: making data self-reliant, self-improving, and part of a cohesive whole.
Synthetic data is so core to what's happening. In a way, using biological principles is modeling the organic, not just replicating data. Psyched to see what you all have to share.
Look forward to seeing Qognetix grow!