In a recent technology demonstration, we showed that brain-derived sparse network algorithms were 50 times faster than dense networks and used significantly less power. In this blog post, we walk through visualizations of our results, which demonstrate how sparsity can enable massive energy savings and lower costs.
Numenta Research Staff Member Lucas Souza continues his series on sparse neural networks. He reviews current techniques for training sparse networks from scratch and gives updates on dynamic sparsity, that is, sparse networks that learn their structure dynamically during training. Finally, he walks through the implications for hardware.
Numenta Research Staff Members Lucas Souza and Michaelangelo Carporale attended ICLR 2020 and share their takeaways from the best sessions on neuroscience, deep learning theory, and pruning and sparsity.
Led by Jeff Hawkins, the Numenta Research team has been livestreaming its research meetings for a year. This post highlights how opening research meetings to the public has led to important collaborations with other researchers.
In this blog post, Numenta employees share their recommendations for brain-related movies, books, podcasts, and more. If you’re looking for new content to consume, read this post to get the top picks from the Numenta team.