Sparsity in the neocortex, and its implications for machine learning

Mon, Nov 18, 2019 4:00 PM — 5:00 PM

Most deep learning networks today rely on dense representations. This is in stark contrast to our brains, which are extremely sparse. In this talk, I will first discuss what is known about the sparsity of activations and connectivity in the neocortex. I will also summarize new experimental data on active dendrites, branch-specific plasticity, and structural plasticity, each of which has surprising implications for how we think about sparsity. In the second half of the talk, I will discuss how these insights from the brain can be applied to practical machine learning applications. I will show how sparse representations can give rise to improved robustness, continuous learning, powerful unsupervised learning rules, and improved computational efficiency.
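
To make the contrast between dense and sparse representations concrete, here is a minimal sketch (not the speaker's implementation, and not tied to any specific result in the talk) of one common way to enforce sparse activations: a k-winners-take-all step that keeps only the k largest activations in a layer output and zeroes the rest. The layer size and sparsity level are illustrative assumptions.

    import numpy as np

    def k_winners(x, k):
        """Return a copy of x with all but the k largest entries set to zero."""
        out = np.zeros_like(x)
        top = np.argpartition(x, -k)[-k:]   # indices of the k largest activations
        out[top] = x[top]
        return out

    rng = np.random.default_rng(0)
    dense = rng.standard_normal(128)        # a dense layer output (illustrative)
    sparse = k_winners(dense, k=13)         # roughly 10% of units stay active

    print(f"nonzero units: {np.count_nonzero(sparse)} of {sparse.size}")

In a scheme like this, only a small fraction of units are active for any given input, which is one of the properties of neocortical representations the talk examines.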

Authors

Subutai Ahmad • CEO
