How Can We Be So Dense? Sparsity in the Neocortex and Its Implications for AI
Most AI systems and deep learning networks today rely on dense representations. This is in stark contrast to our brains, where representations are extremely sparse. In this talk, I will first discuss the many ways sparsity is deeply ingrained in the brain. While some may be familiar with a few common types of sparsity, there are many more than you might think. In a neuroscience overview, I will go through what is known about the sparsity of activations and connectivity in the neocortex. In the second half of the talk, I will discuss how these insights from the brain can be applied to practical AI systems that exist today. I will show how sparse representations can give rise to improved robustness, continual learning, and greater computational efficiency. I'll make the case that sparsity in the neocortex has implications for hardware architectures and paves the way toward general intelligence.
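To make the core idea of sparse activations concrete, here is a minimal sketch of a k-winner-take-all nonlinearity, one common way to enforce activation sparsity in a network layer. The function name, NumPy implementation, and example values are illustrative assumptions, not material from the talk itself.

```python
import numpy as np

def k_winners(x, k):
    """Keep only the k largest activations; zero out the rest.

    A toy version of a k-winner-take-all nonlinearity used to
    enforce sparse activations (illustrative sketch only).
    """
    out = np.zeros_like(x)
    top_k = np.argsort(x)[-k:]  # indices of the k largest values
    out[top_k] = x[top_k]
    return out

# A dense 10-unit activation vector becomes 20% sparse with k=2.
dense = np.array([0.1, 0.9, 0.3, 0.7, 0.2, 0.05, 0.6, 0.4, 0.8, 0.15])
sparse = k_winners(dense, k=2)
print(np.count_nonzero(sparse))  # → 2
```

With only a small fraction of units active, two random sparse vectors are very unlikely to overlap, which is one intuition behind the robustness benefits discussed in the talk.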