How can we take a step toward the brain’s efficiency without sacrificing accuracy? One strategy is to leverage sparsity. Today I’m excited to share a step in that direction: a 10x parameter reduction in BERT with no loss of accuracy on the GLUE benchmark.
In our new preprint, “Going Beyond the Point Neuron: Active Dendrites and Sparse Representations for Continual Learning”, we investigated how to augment neural networks with properties of real neurons, specifically active dendrites and sparse representations.
Our research meetings are the cornerstone of everything we do. They’re where we share hypotheses, review papers, and often invite other researchers to present their work. Here are our most popular research meetings from the past 12 months – just in case you missed them!
Are you a machine learning researcher looking for better learning algorithms? Interested in how neuroscience research can help inform the development of artificial intelligence systems? Brains@Bay may be the Meetup group for you! Brains@Bay is a meetup hosted by Numenta with the goal of bringing together experts and practitioners at the intersection of neuroscience and AI.
Can CPUs leverage sparsity too? In this blog post, our Director of ML Architecture, Lawrence Spracklen, compares Numenta’s FPGA sparse model performance with that of sparse models running on modern CPUs.