Our AI technology is based on two decades of neuroscience research and breakthrough advances in understanding what the neocortex does and how it does it.
We have developed a framework of intelligence called the Thousand Brains Theory and made discoveries about how neurons make predictions, the role of dendritic spikes in cortical processing, how cortical layers learn sequences, and how cortical columns learn to model objects through movement.
The challenge is to apply these discoveries to practical AI systems. By translating neuroscience theory into hardware architectures, data structures, and algorithms, we can deliver dramatic performance gains in today's deep learning networks and unlock new capabilities for future AI systems.
We investigated whether biologically inspired neurons can offer a general solution to catastrophic interference. We found that active dendrites and sparse representations work together to mitigate catastrophic interference in dynamic settings.
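The two ingredients can be sketched together in a few lines. The snippet below is a minimal illustration, not our implementation: it assumes a layer whose units receive a feedforward drive, are gated by the most active of several dendritic segments responding to a context vector, and are then sparsified with k-winner-take-all. All class and function names here (`ActiveDendriteLayer`, `kwta`) are hypothetical.

```python
import numpy as np

def kwta(x, k):
    """k-winner-take-all: keep the k largest activations, zero the rest."""
    out = np.zeros_like(x)
    top = np.argsort(x)[-k:]
    out[top] = x[top]
    return out

class ActiveDendriteLayer:
    """Sketch of a layer with active dendrites (hypothetical, simplified).

    Each output unit has several dendritic segments that respond to a
    context vector; the strongest segment gates that unit's feedforward
    response, and k-WTA keeps the layer's output sparse.
    """
    def __init__(self, in_dim, out_dim, ctx_dim, n_segments, k, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.1, (out_dim, in_dim))             # feedforward weights
        self.D = rng.normal(0.0, 0.1, (out_dim, n_segments, ctx_dim))  # dendritic segments
        self.k = k

    def forward(self, x, context):
        ff = self.W @ x                                  # feedforward drive, shape (out_dim,)
        seg = self.D @ context                           # segment responses, shape (out_dim, n_segments)
        gate = 1.0 / (1.0 + np.exp(-seg.max(axis=1)))    # strongest segment gates each unit
        return kwta(ff * gate, self.k)                   # sparse, context-modulated output
```

Because different contexts activate different dendritic segments, different sparse subsets of units respond to different tasks, which is the mechanism that limits interference between them.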
We introduce Complementary Sparsity, a novel technique that significantly improves the performance of networks that are sparse in both weights and activations on existing hardware. We demonstrate high performance when running weight-sparse networks, and show that incorporating activation sparsity multiplies those speedups.
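The core packing idea can be illustrated with a small sketch. The code below is an assumption-laden toy, not the hardware implementation: it builds several sparse kernels whose nonzero positions are complementary (they partition the index set), overlays them into one dense array so a single dense operation can stand in for several sparse ones, and shows that each kernel is recoverable from the packed array via its support mask. The function names are hypothetical.

```python
import numpy as np

def make_complementary_kernels(shape, n_kernels, seed=0):
    """Build n_kernels sparse kernels whose nonzero positions partition
    the index set, i.e. their supports never overlap (complementary)."""
    rng = np.random.default_rng(seed)
    flat = np.arange(int(np.prod(shape)))
    rng.shuffle(flat)
    kernels = []
    for part in np.array_split(flat, n_kernels):
        k = np.zeros(int(np.prod(shape)))
        k[part] = rng.normal(size=part.size)   # nonzeros only on this kernel's slots
        kernels.append(k.reshape(shape))
    return kernels

def pack(kernels):
    """Overlay complementary-sparse kernels into one dense array.
    Because supports are disjoint, a plain sum is lossless."""
    return sum(kernels)

def unpack(packed, kernel):
    """Recover one original kernel by masking the packed array
    with that kernel's known support."""
    return packed * (kernel != 0)
```

The point of the overlay is that the packed array is dense, so existing dense kernels and hardware run it at full utilization, while the known support structure lets each constituent sparse kernel's contribution be separated afterward.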
We maintain a growing collection of peer-reviewed journal papers, conference proceedings, white papers, research reports, and invited talks. You can browse our publications by category or by year.