NICE Workshop 2021
Conventional, stored-program architectures are designed for algorithmic, exact calculations.
However, the highest-impact problems involve large, noisy, and incomplete data sets that do not lend themselves to convenient solutions by current systems. The task is to build on the convergence of neuroscience, microelectronics, and computational systems to develop new architectures and approaches designed to handle the hardest challenges.
The 8th Annual Neuro-Inspired Computational Elements (NICE) workshop takes place 16 – 19 March 2021. Jeff Hawkins and Subutai Ahmad will be presenting a keynote on March 17th, 6 – 6:40 AM PST. To register, click here.
In this talk we will review some of the latest neuroscience discoveries and suggest how they outline a roadmap to true machine intelligence. We will then describe our progress in applying one neuroscience principle, sparsity, to existing deep learning networks. We show that sparse networks are significantly more resilient and robust than traditional dense networks. With the right hardware substrate, sparsity can also yield significant performance improvements: on an FPGA platform, our sparse convolutional network runs inference 50X faster than the equivalent dense network on a speech dataset. In addition, we show that sparse networks can run efficiently on small, power-constrained embedded chips that cannot run the equivalent dense networks. We conclude by proposing that neuroscience principles implemented on the right hardware substrate offer the only feasible path to scalable intelligent systems.
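To make the sparsity idea concrete, here is a minimal sketch (not the implementation discussed in the talk) of one common way to enforce static weight sparsity in a linear layer: a fixed binary mask zeros out most connections, and reapplying the mask on every forward pass guarantees pruned weights stay exactly zero. All function names and the 10% density value are illustrative assumptions.

```python
# Hypothetical sketch of static weight sparsity via a fixed binary mask.
# Not Numenta's actual implementation; names and density are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def make_sparse_weights(n_in, n_out, density=0.1):
    """Weight matrix where only ~`density` of entries are allowed to be nonzero."""
    w = rng.standard_normal((n_in, n_out)) * 0.1
    mask = rng.random((n_in, n_out)) < density  # fixed connectivity pattern
    return w * mask, mask

def sparse_linear(x, w, mask):
    # Reapplying the mask ensures any later weight update cannot
    # reintroduce connections that were pruned away.
    return x @ (w * mask)

w, mask = make_sparse_weights(64, 32, density=0.1)
x = rng.standard_normal((1, 64))
y = sparse_linear(x, w, mask)  # output shape (1, 32)
```

On dense hardware this multiplication does no less work than the dense version; the speedups described in the talk come from hardware (e.g. the FPGA platform) that can skip the zeroed entries entirely.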
You can find the video recording and slides here.