In this month’s newsletter, we have an exciting announcement about a new technology demonstration, along with upcoming events and more.
Technology Demonstration: Numenta Sparse Algorithms Achieve 50x Performance Acceleration in Deep Learning Networks
Last week we announced a technology demonstration showing that networks built with our brain-derived sparse algorithms run 50 times faster than dense networks on an inference task, with competitive accuracy. These dramatic performance gains validate our ongoing effort to apply our neuroscience research to machine learning challenges. As the machine learning field continues to progress, its growing reliance on ever-larger datasets and ever more compute poses a significant obstacle to future advances. Sparsity, a fundamental component of our Thousand Brains Theory, can unlock previously unattainable efficiencies.
For this demonstration, we ran our algorithms on Xilinx FPGAs (Field Programmable Gate Arrays) using the Google Speech Commands (GSC) dataset. The dataset consists of 65,000 one-second audio clips of spoken words, and the task is to recognize the word spoken in any given clip. Measured in words processed per second, our results show that sparse networks achieve more than 50x acceleration over dense networks, with no loss of accuracy.
Beyond the performance speed-up, we demonstrated that sparse networks can run on smaller chips where dense networks do not fit. Lastly, we showed that sparse networks use significantly less power than dense networks.
This performance improvement suggests several important benefits:
- Implementation of larger or more complex networks using the same resources
- Implementation of more copies of a network on the same resources
- Implementation of networks on smaller edge platforms with limited resources
- Significant energy and cost savings
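The core idea behind these gains — that dense layers spend most of their work multiplying by values that could be zero — can be sketched in a few lines of NumPy. This is an illustration only: the sparsity percentage, layer sizes, and the `k_winners` helper below are hypothetical stand-ins, not Numenta’s published network configuration or its FPGA kernels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dense layer: every one of the 256*128 weights joins the matrix multiply.
dense_w = rng.normal(size=(256, 128))

# Sparse weights (illustrative): keep roughly 5% of weights, zero the rest.
mask = rng.random(dense_w.shape) < 0.05
sparse_w = dense_w * mask

def k_winners(a, k):
    """Sparse activations: keep the k largest values, zero out the others."""
    out = np.zeros_like(a)
    top = np.argpartition(a, -k)[-k:]
    out[top] = a[top]
    return out

x = rng.normal(size=128)
activations = k_winners(sparse_w @ x, k=20)  # at most 20 of 256 units fire

# A kernel that skips zeros needs multiply-accumulates only for the nonzero
# weights, which is where the compute and power savings come from.
dense_macs = dense_w.size
sparse_macs = int(mask.sum())
print(f"dense MACs: {dense_macs}, sparse MACs: {sparse_macs}")
```

A fixed dense kernel still iterates over every weight regardless of its value; hardware that can skip the zeros entirely, as a programmable FPGA design can, turns this reduction in operations into real throughput and energy savings.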
We plan to build on this work in several ways. We expect further performance improvements on the GSC dataset by fitting more sparse networks on a single chip. We are also applying these same techniques to more challenging datasets, such as image data. While the current work focuses on inference, we plan to apply these techniques to training to demonstrate that sparsity can reduce the need for large training sets and long training times.
We are excited about these results and the potential benefits of applying sparsity to neural networks, but sparsity is only the beginning. As we add more elements of the Thousand Brains Theory, we expect to see even more benefits, like continual learning without batch training, unsupervised learning, and robustness.
We have several additional resources if you’d like to learn more:
- Press Release: Numenta Demonstrates 50x Speed Improvements on Deep Learning Networks By Applying Brain-like Sparse Algorithms
- White Paper: Technology Validation: Sparsity Enables 50x Performance Acceleration in Deep Learning Networks
- Visualizations Video: Technology Demonstration Visualization
- Blog: How Sparsity Enables Energy Efficient Deep Learning Networks
Brains@Bay Meetup: Alternatives to Backpropagation in Neural Networks – Nov 18
Are you interested in brain-inspired machine learning algorithms? Brains@Bay is designed to foster the study and development of machine learning algorithms heavily inspired by empirical and theoretical neuroscience research. For our next meetup, on Wednesday, November 18 at 9:30am PST, we have invited three fantastic speakers to discuss Alternatives to Backpropagation in Neural Networks. From the neuroscience side, Professor Rafal Bogacz of the University of Oxford will discuss the viability of backpropagation in the brain. From the machine learning side, Sindy Löwe of the University of Amsterdam and Jack Kendall of RAIN Neuromorphics will present their latest machine learning approaches. Reserve your spot today here.
Stay tuned for further updates on events and visit our YouTube channel for videos of past meetups, events, and research meetings.
Thank you for continuing to follow Numenta.
VP of Marketing
Note: Numenta, Google, Xilinx, Alveo and Zynq are trademarks of their respective owners