Abstract
Most deep learning networks today rely on dense representations. This is in stark contrast to our brains, whose activity and connectivity are extremely sparse. In this talk, Subutai will first discuss what is known about the sparsity of activations and connectivity in the neocortex. He will also summarize new experimental data on active dendrites, branch-specific plasticity, and structural plasticity, each of which has surprising implications for how we think about sparsity. In the second half of the talk, Subutai will discuss how these insights from the brain can be applied to practical machine learning problems. He will show how sparse representations can give rise to improved robustness, continual learning, powerful unsupervised learning rules, and greater computational efficiency.
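To make the idea of sparse activations concrete, the sketch below shows a k-winner-take-all step, one common way to enforce activation sparsity in an otherwise dense layer. This is an illustrative example, not the speaker's exact method; the function name `k_winners` and its parameters are assumptions for the sketch.

```python
import numpy as np

def k_winners(x, k):
    """Keep only the k largest activations per row; zero out the rest.

    A minimal k-winner-take-all sketch: each input vector in the batch
    is forced to have at most k nonzero (active) units.
    """
    out = np.zeros_like(x)
    # Indices of the top-k entries in each row.
    top_k = np.argpartition(x, -k, axis=1)[:, -k:]
    rows = np.arange(x.shape[0])[:, None]
    out[rows, top_k] = x[rows, top_k]
    return out

# Example: a batch of two dense activation vectors, kept 2-sparse.
x = np.array([[0.1, 0.9, 0.3, 0.7],
              [0.5, 0.2, 0.8, 0.4]])
print(k_winners(x, k=2))
```

With only a small fraction of units active for any input, different inputs tend to activate nearly non-overlapping sets of units, which is one intuition behind the robustness and efficiency benefits the talk discusses.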