Jeff Hawkins and Subutai Ahmad presented the keynote “From Brains to Silicon — Applying lessons from neuroscience to machine learning” on March 17th, 2021 at the virtual Neuro-Inspired Computational Elements (NICE) workshop.
Keynote Abstract
In this talk we will review some of the latest neuroscience discoveries and suggest how they describe a roadmap to achieving true machine intelligence. We will then describe our progress in applying one neuroscience principle, sparsity, to existing deep learning networks. We show that sparse networks are significantly more resilient and robust than traditional dense networks. With the right hardware substrate, sparsity can also lead to significant performance improvements. On an FPGA platform, our sparse convolutional network runs inference 50X faster than the equivalent dense network on a speech dataset. In addition, we show that sparse networks can run efficiently on small power-constrained embedded chips that cannot run equivalent dense networks. We conclude our talk by proposing that neuroscience principles implemented on the right hardware substrate offer the only feasible path to scalable intelligent systems.
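To make the idea of sparsity in deep learning networks concrete, here is a minimal PyTorch sketch of the two kinds of sparsity the abstract refers to: sparse weights (a fixed binary mask on the weight matrix) and sparse activations (keeping only the top-k units per sample). This is an illustrative assumption, not the networks from the talk; the layer names (`SparseLinear`, `KWinners`), sizes, and sparsity levels are all hypothetical.

```python
# Illustrative sketch of weight and activation sparsity (not Numenta's code).
import torch
import torch.nn as nn


class KWinners(nn.Module):
    """Keep the k largest activations per sample; zero out the rest."""

    def __init__(self, k: int):
        super().__init__()
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Indices of the top-k values along the feature dimension.
        topk = torch.topk(x, self.k, dim=-1).indices
        mask = torch.zeros_like(x).scatter_(-1, topk, 1.0)
        return x * mask


class SparseLinear(nn.Linear):
    """Linear layer whose weights are masked so only a fraction are nonzero."""

    def __init__(self, in_features: int, out_features: int, weight_density: float = 0.1):
        super().__init__(in_features, out_features)
        # Fixed random binary mask, stored as a buffer so it moves with the module.
        mask = (torch.rand(out_features, in_features) < weight_density).float()
        self.register_buffer("weight_mask", mask)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return nn.functional.linear(x, self.weight * self.weight_mask, self.bias)


if __name__ == "__main__":
    # Hypothetical sizes: 10% weight density, ~10% activation density (25 of 256 units).
    layer = nn.Sequential(SparseLinear(128, 256, weight_density=0.1), KWinners(k=25))
    out = layer(torch.randn(4, 128))
    print("nonzero activations per sample:", (out != 0).sum(dim=-1))  # roughly 25 each
```

On hardware that can exploit it, this combination of sparse weights and sparse activations is what allows most multiply-accumulate operations to be skipped, which is the source of the speedups described in the abstract.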
Video
Slides
For more information, you can find our papers at https://numenta.com/papers.