As Numenta followers know, this year we have focused on how best to apply neuroscience principles to machine intelligence. Today’s machine learning systems are solving many problems, but they are not intelligent the way humans are. We’ve asked ourselves this question: How can we apply what we’ve learned about the brain to make machine learning more flexible, efficient, generalizable, and robust, in other words, more intelligent?
In this month’s newsletter, I’m pleased to share some new resources and upcoming events for those interested in learning more about our efforts to apply neuroscience principles towards machine intelligence.
Applying HTM ideas to deep learning
In the latest episode of the Numenta On Intelligence podcast, VP of Research Subutai Ahmad sat down with Open Source Community Manager Matt Taylor to discuss how we’re applying HTM ideas to deep learning. In their 30-minute conversation they cover continuous learning, catastrophic forgetting, and the different types and benefits of sparsity. You can subscribe to the podcast series, and download, stream, or watch the episode here.
A machine learning guide to HTM (Hierarchical Temporal Memory)
Our Visiting Research Scientist Vincenzo Lomonaco recently created a guide to HTM “for people who have never been exposed to Numenta research but have a basic machine learning background.” In his blog post, he discusses Numenta research from a machine learning perspective, covering our overall theory and then diving into the details of our algorithms. He concludes by creating a curriculum that’s designed for anyone in the field of machine learning who wants to learn about HTM starting from scratch.
We have details below on past and future events that feature talks from Numenta researchers.
Brains@Bay October Meetup: Hebbian Learning in Neural Networks
The October Brains@Bay meetup focused on Hebbian learning in neural networks, with two presentations: one from the perspective of neuroscience and one focused on machine learning. Florian Fiebig, PhD, Visiting Scientist at Numenta, presented “Bayesian-Hebbian learning in spike-based cortical network simulations,” and Thomas Miconi, PhD, presented “Differentiable plasticity: training self-modifying networks for fast learning with Hebbian plasticity.” You can watch the recording on our YouTube channel.
Cornell Silicon Valley Presents: Past, Present & Future of AI
For Cornell alums in the Bay Area, on November 7 at 6:30pm, VP of Research Subutai Ahmad will join a panel of Cornell industry leaders in artificial intelligence at the Intel Executive Briefing Center. They will discuss the latest advances in the field, AI research topics, and the impact on our future.
Yale University seminar: Sparsity in the neocortex, and its implications for machine learning
On November 18 at 4pm, Subutai will give a talk at the Yale University School of Engineering & Applied Science in New Haven, CT, on “Sparsity in the neocortex, and its implications for machine learning.” Subutai will discuss how sparsity works in the brain, and how applying sparsity to machine learning applications can lead to improved robustness, continuous learning, powerful unsupervised learning rules, and improved computational efficiency. The talk is open to the public, and you can find more details on the website.
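To give a flavor of the kind of activation sparsity the talk covers: one simple way to enforce sparsity in a neural network layer is a k-winner-take-all step, where only the k most active units stay on and the rest are zeroed. The sketch below is a minimal illustration of that general idea, not Numenta’s actual implementation; the function name and parameters are our own for this example.

```python
import numpy as np

def k_winners(x, k):
    """Keep only the k largest activations in x; zero out the rest.

    A simple k-winner-take-all step, one common way to enforce
    activation sparsity in a neural network layer.
    """
    x = np.asarray(x, dtype=float)
    if k >= x.size:
        return x.copy()
    # Threshold at the k-th largest value; everything below it is zeroed.
    threshold = np.partition(x, -k)[-k]
    return np.where(x >= threshold, x, 0.0)

# Example: a dense 10-unit activation vector becomes 80% sparse with k = 2.
dense = np.array([0.1, 0.9, 0.3, 0.7, 0.2, 0.05, 0.4, 0.6, 0.15, 0.8])
sparse = k_winners(dense, k=2)
print(sparse)  # only the two largest activations (0.9 and 0.8) survive
```

Because most units are silent on any given input, overlapping representations become rare, which is one intuition for why sparse activations can improve robustness to noise.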
Partner update: Cortical.io announces collaboration with Xilinx
At the Xilinx Developer Forum (XDF) in San Jose, October 1-2, Cortical.io announced a strategic relationship with Xilinx, Inc. The company unveiled its Natural Language Understanding technology running on Xilinx Alveo accelerator cards, and showed orders-of-magnitude performance increases over standard computing platforms. This month, Cortical.io will be at XDF Europe at the World Forum in The Hague. CEO and Co-founder Francisco Webber will give a presentation titled “Semantic Supercomputing: Implementing Semantic Folding Technology on Xilinx Alveo FPGA Accelerator Hardware” on November 13 at 11:30.