NAISys 2020: Sparsity in the Neocortex, and its Implications for Machine Learning


Abstract:

Most deep learning networks today rely on dense representations. This is in stark contrast to our brains, in which both activity and connectivity are extremely sparse. Why is this? Are there benefits to sparsity? In this poster, we review how sparsity is deeply ingrained in the brain. We then show how insights from the brain can be applied to practical AI systems. We show that sparse representations are generally not subject to interference and are extremely robust, as long as the underlying dimensionality is sufficiently high. A key property is that the ratio of the operable volume around a sparse vector to the volume of the full representational space decreases exponentially with dimensionality. We then analyze computationally efficient sparse networks containing both sparse weights and sparse activations. Through simulations on popular benchmark datasets, we show that sparse networks are more robust than their dense counterparts and run more than 50 times faster on FPGA platforms.
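To make the scaling claim concrete, the short sketch below estimates how likely a random sparse binary vector is to falsely match a fixed one as dimensionality grows. It is only an illustrative calculation in Python; the roughly 5% activity level and the half-overlap matching threshold are assumptions chosen for the example, not parameters taken from the poster.

```python
from math import comb

def false_match_probability(n, a, theta):
    """Probability that a random binary vector with `a` active bits out of `n`
    dimensions overlaps a fixed such vector in at least `theta` bits."""
    total = comb(n, a)
    matches = sum(comb(a, b) * comb(n - a, a - b) for b in range(theta, a + 1))
    return matches / total

# Hold activity at roughly 5% and require half of the active bits to overlap;
# the false-match ratio shrinks by many orders of magnitude as n grows.
for n in (128, 256, 512, 1024, 2048):
    a = max(1, n // 20)
    print(n, false_match_probability(n, a, theta=max(1, a // 2)))
```

The printed ratios fall off rapidly with n, which is the exponential shrinkage of the matchable volume relative to the whole representational space described in the abstract.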
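The pairing of sparse weights with sparse activations can also be sketched in a few lines. The layer below, written in PyTorch, applies a fixed random mask to its weights and keeps only the k largest activations per sample; the class name, the 50% weight density, and the roughly 10% activation density are illustrative assumptions, not the configuration used in the reported benchmarks.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseLinear(nn.Module):
    """Illustrative linear layer with a fixed sparse weight mask and k-winner activations."""

    def __init__(self, in_features, out_features, weight_density=0.5, k=None):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        # Fixed binary mask: keep roughly `weight_density` of the weights, zero the rest.
        mask = (torch.rand(out_features, in_features) < weight_density).float()
        self.register_buffer("mask", mask)
        # Number of units allowed to fire per sample (~10% by default).
        self.k = k if k is not None else max(1, out_features // 10)

    def forward(self, x):
        # Sparse weights: masked matrix multiply.
        out = F.linear(x, self.linear.weight * self.mask, self.linear.bias)
        # Sparse activations (k-winners-take-all): keep the k largest values per row.
        topk = torch.topk(out, self.k, dim=1)
        sparse_out = torch.zeros_like(out)
        sparse_out.scatter_(1, topk.indices, topk.values)
        return sparse_out
```

A call such as `SparseLinear(784, 256)(torch.rand(32, 784))` returns activations in which at most k units per row are nonzero, giving both the sparse connectivity and the sparse activity the abstract refers to.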

Poster Walkthrough:

You can find additional resources on this topic from NAISys 2020 here.

You can also watch a re-recording of Jeff’s NAISys talk and download the slides from his presentation.