How Can We Be So Dense? The Benefits of Using Highly Sparse Representations
The human brain is able to learn continuously from experience, predict sensations and events, and rapidly adapt to new situations. Current machine learning systems cannot come close to this level of flexibility and generality. Is it possible to learn from the brain and improve today’s learning systems? A key starting point is sparsity. Most artificial networks today rely on dense representations, in stark contrast to our brains, whose representations are extremely sparse. In this talk, I will summarize what is known about sparsity in the brain. I will then discuss the advantages of sparse representations in practical applications. Deep learning networks that mimic the brain and contain both sparse weights and sparse activations are simple to implement, more robust to noise and interference, and can be extremely computationally efficient. I will then describe neuroscience-inspired models that show how sparsity can lead to systems that adapt continuously using powerful unsupervised learning rules.
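To make the idea concrete, here is a minimal NumPy sketch of the two kinds of sparsity the abstract mentions: sparse weights (most connections are zero) and sparse activations (only the top-k units fire, a k-winner-take-all rule). The function names, layer sizes, and the 30% weight density are illustrative assumptions, not details from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_weights(n_in, n_out, density, rng):
    """Weight matrix in which only a fraction `density` of entries are nonzero.
    (Illustrative: real systems would also learn/prune this mask.)"""
    w = rng.normal(0.0, 1.0, size=(n_in, n_out))
    mask = rng.random((n_in, n_out)) < density
    return w * mask

def k_winners(x, k):
    """Sparse activation: keep the k largest values, zero out the rest."""
    out = np.zeros_like(x)
    top = np.argpartition(x, -k)[-k:]
    out[top] = x[top]
    return out

# One sparse layer: sparse connectivity followed by a k-winner activation.
w = sparse_weights(128, 64, density=0.3, rng=rng)  # ~30% of weights nonzero
x = rng.random(128)                                # dense input
y = k_winners(x @ w, k=8)                          # only 8 of 64 units active

print(np.count_nonzero(y))            # number of active output units
print(np.count_nonzero(w) / w.size)   # realized weight density
```

Because only k units are active and most weights are zero, both the forward pass and any Hebbian-style update touch a small fraction of the parameters, which is the source of the robustness and efficiency claims.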