Are you a machine learning researcher looking for better learning algorithms? Interested in how neuroscience research can help inform the development of artificial intelligence systems?
Brains@Bay may be the Meetup group for you!
Brains@Bay is a meetup hosted by Numenta with the goal of bringing together experts and practitioners at the intersection of neuroscience and AI.
A Meetup on a Mission
Machine learning and deep learning networks have revolutionized the world in recent years. However, the majority of these successes boil down to large amounts of compute power and training data. As most scientists will tell you, we are still decades away from building machines that can solve problems as efficiently as humans.
What new lessons from the brain can help us move beyond the limitations of machine learning?
That’s what Brains@Bay is all about. The mission is to foster the study and development of machine learning algorithms heavily inspired by empirical and theoretical neuroscience research.
How It All Began
Brains@Bay started as a journal club by our Senior Researcher Lucas Souza in an effort to explore topics in the field of neuroscience and artificial intelligence. The name Brains@Bay is a play on the phrase “Brain-Inspired” – shortened to BraIns, and then eventually to Brains. And the Bay, of course, refers to the Bay Area.
It’s been two years since we held our first Brains@Bay Meetup in July 2019. Lucas and Sam Heiserman from our HTM community kicked off the first meetup at Numenta HQ, where 30 people, both in person and online (via our live stream), came together to learn about continuous learning.
Our next meetup focused on sparsity in neural networks. Attendance more than doubled, prompting us to look for bigger venues to accommodate the growing group. In October 2019, we hosted our meetup in partnership with UCSC at their Silicon Valley Campus and talked about Hebbian learning in neural networks over pizza. Each meetup ended with plenty of time for discussion and Q&A, which turned out to be the perfect atmosphere for one-on-one and group conversations with passionate individuals from different fields.
With the pandemic on our hands, we made our Brains@Bay meetup virtual. Fast forward a year and this local Bay Area meetup group is now an international community with more than 1000 members. We were able to invite data scientists and researchers from around the world to unpack topics, ranging from grid cells to alternatives to backpropagation in neural networks, that blew even the sturdiest of minds.
Review of Continuous Learning
- Lucas Souza from Numenta reviewed existing literature and common approaches to continuous learning.
- Sam Heiserman from our HTM Community gave an overview of Numenta’s HTM algorithms and discussed why continuous learning is desirable in machine learning systems.
Sparsity in Neural Networks
- Subutai Ahmad, Numenta VP of Research, reviewed sparsity in the neocortex and emphasized that dynamic sparsity, like in the neocortex, is required for building intelligent systems.
- Numenta Senior Researcher Lucas Souza covered the story and impact of sparsity in neural networks with a list of curated papers.
- Hattie Zhou from Uber AI Labs talked about the lottery ticket hypothesis and showed how pruned models can outperform their dense counterparts and even achieve high levels of accuracy without any re-training.
- Gordon Wilson, CEO and Co-Founder of RAIN Neuromorphics, dove into how paradigms from the brain, and the role of sparsity in particular, can be applied to hardware.
Hebbian Learning in Neural Networks
- Florian Fiebig from Numenta explored Bayesian-Hebbian learning in spike-based cortical network simulations.
- Thomas Miconi from Uber AI Labs showed that plastic networks can be trained to learn quickly and efficiently with Hebbian plasticity.
Predictive Processing in Brains and Machines
- Georg Keller, Professor at Friedrich Miescher Institute for Biomedical Research, discussed the framework of predictive processing and a possible implementation in cortical circuits based on prediction errors.
- Avi Pfeffer, Chief Scientist at Charles River Analytics, introduced Scruff, a new probabilistic programming language based on the cognitive principle of predictive processing designed for long-lived AI systems that interact with their environment and improve over time.
Lateral Connections in the Neocortex
- Ramakrishnan Iyer, Brian Hu, and Stefan Mihalas from the Allen Institute for Brain Science showed that adding lateral connections to deep convolutional networks makes models more robust to noise in the input image and leads to better classification accuracy under noise.
The Role of Active Dendrites in Learning
- Professor Matthew Larkum from the Larkum Lab (Humboldt University of Berlin) touched on the main advances in active dendrites and learning over the last two decades – NMDA spikes and behavioral time scale plasticity.
- Ilenna Jones, PhD candidate in the Kording Lab (University of Pennsylvania), discussed the computational power of biological dendritic trees and how repeated inputs to a dendritic tree could improve its ability to perform complex computations.
- Blake Richards from Linc Lab (McGill University) suggested that dendrites have the most potential for chip development but are not necessarily key to algorithmic problems.
Alternatives to Backpropagation in Neural Networks
- Professor Rafal Bogacz from the University of Oxford discussed the viability of backpropagation in the brain, highlighting how predictive coding networks differ from artificial neural networks.
- Sindy Löwe from the University of Amsterdam shared her latest research on Greedy InfoMax – a local, self-supervised representation learning approach inspired by the brain.
- Jack Kendall, co-founder of RAIN Neuromorphics, showed how equilibrium propagation can be used to train end-to-end analog networks, which can guide the development of a new generation of ultra-fast, compact, and low-power neural networks supporting on-chip learning.
A Thousand Brains: A Fireside Chat with Jeff Hawkins
- Numenta Co-founder Jeff Hawkins covered the main aspects of the Thousand Brains Theory of Intelligence, what it represents for neuroscience and machine learning, and how we can incorporate these breakthrough ideas into our existing learning algorithms.
An Exploration of Grid Cells in Machine Learning
- Marcus Lewis, Senior Researcher at Numenta, gave us a quick introduction to grid cells and suggested that the hippocampal formation uses memory associations to create graph- or tree-like composite maps that may support an independent mapping system.
- James Whittington from the University of Oxford talked about his work – the Tolman-Eichenbaum Machine – and how this model learns and generalizes relational knowledge, similar to the hippocampus.
- Kimberly Stachenfeld, Research Scientist at DeepMind, presented a relational view of grid cells in which grid cells represent geometry over graphs and showed how these cells could support sophisticated reasoning.
What’s Next for Brains@Bay?
In many ways, I feel like we’re just getting started. There are still plenty of immensely interesting topics and research we haven’t covered. Possible topics include:
- Structural Plasticity
- Sensorimotor Integration
- Behavior Generation
- Object Representation and Composition
A special thank you to our Brains@Bay community members for the continual support and for always bringing the most brain-tickling questions to the stage. We are also grateful for all our incredible speakers.
It’s been a pretty exciting ride so far, and it’s only going to get better. If you have any suggestions, feel free to contact me through the meetup page – we are always looking for interesting ideas. If you’re interested in joining the next event, join our Brains@Bay community or follow us on Twitter for updates.