Powering Proxi with a Brain-Based AI Platform
Numenta and Gallium Studios both share a vision that leverages neuroscience in unique and powerful ways. Their collaboration is driving forward new and exciting possibilities in gaming and AI.
Generative AI is an exciting and transformative technology that will continue to gain adoption across a wide range of use cases. However, the associated compute costs are significant. With Numenta's AI platform, which is deployed directly into customer infrastructure, these costs can be reduced by up to 60x, allowing enterprises of all sizes to fully exploit this game-changing technology.
This is a joint blog post co-authored by Numenta and Intel on accelerating Large Language Models with long sequence lengths. Numenta's technology running on the Intel Xeon CPU Max Series delivers 20x inference acceleration compared to other CPUs.
On January 10, as part of Intel’s 4th Gen Xeon Scalable processors launch, we announced that our technology improves low-latency BERT-Large inference throughput by over two orders of magnitude.
In this piece, originally written for Cheers Publishing, Jeff answers three questions about the book: the relationship between On Intelligence and A Thousand Brains, whether he recommends reading both, and whether the ideas proposed in the two books have been validated.
We highlight our top takeaways from the Brains@Bay meetup we hosted this spring, featuring Srikanth Ramaswamy, Jie Mei, and Thomas Miconi. Our speakers did not disappoint, and the meetup was jam-packed with insights on how neuromodulators can lead to more flexible and adaptive AI.
Over the last ten years, AI, and deep learning in particular, has yielded remarkable results. What is less often noticed is that these models churn away at a staggering cost, not just in dollars and cents, but also in energy consumed. This blog post explains what causes this outsized energy consumption and how brain-based techniques can address AI's incredibly high energy cost.