We’ve been getting a lot of questions lately about the differences between Hinton’s GLOM model and Numenta’s Thousand Brains Theory. In this post, we outline the commonalities and main differences between the two models at a high level.
In this post, Karan explains the technical reasons why neural networks fail to learn continually, briefly discusses how the brain is thought to succeed at learning task after task, and highlights some exciting work in the machine learning community that builds on fundamental principles of neural computation to alleviate catastrophic forgetting.
Designed to promote collaboration, our Visiting Scholar Program lets researchers and professors spend time at our offices and learn about Numenta’s work in depth while continuing their normal research. I asked Niels Leadholm, one of Numenta’s first “virtual” interns, to share his work and experience interning at Numenta.
In a recent technology demonstration, we showed that brain-derived sparse network algorithms were 50 times faster than dense networks and used significantly less power. In this blog post, we walk through visualizations of our results that demonstrate how sparsity can enable massive energy savings and lower costs.
Numenta Research Staff Member Lucas Souza continues his series on sparse neural networks. He reviews current techniques for training sparse networks from scratch and gives updates on dynamic sparsity, in which sparse networks learn their structure during training. Finally, he walks through the implications for hardware.