A Thousand Brains is one of Bill Gates’ 5 Books to Read in 2021, Research Highlights, and More | December 2021

Christy Maver • VP of Marketing

As we close out the year, I’m pleased to share some exciting news, including A Thousand Brains being named one of Bill Gates’ 5 Books to Read in 2021, along with new research demonstrating our progress in applying the principles of the Thousand Brains Theory to artificial intelligence and machine learning.

A Thousand Brains is one of Bill Gates’ 5 Books to Read in 2021

It’s filled with fascinating insights into the architecture of the brain and tantalizing clues about the future of intelligent machines. – Bill Gates

We were thrilled to see Bill Gates name A Thousand Brains as one of his five favorite books of the year when he released his annual holiday reading list last month. As he explains in his review, “If you’re interested in learning more about what it might take to create a true AI, this book offers a fascinating theory.” To hear more of his impressions of the book, you can read the full review on his website. Be sure not to miss his fun 3-minute movie review, where he offers highlights and Easter eggs from each of his five favorite books.

Research Updates: Applying Sparsity to Deep Learning

This year we’ve made great progress in demonstrating the benefits of applying brain-based sparsity to deep learning. As we look ahead to 2022, I’m happy to share two exciting new results to close out the year.

➤  Sparsity Without Sacrifice: Accurate BERT with 10x Fewer Parameters

As deep learning continues to generate massive power bills due to the rapidly rising number of parameters in today’s models, companies everywhere are looking for ways to increase efficiency without sacrificing accuracy. We have been studying how applying the sparse properties of the brain can enable fast and accurate sparse networks in machine learning. I’m pleased to share that our research team achieved a 10x parameter reduction in Google AI’s BERT, a large transformer model, with no loss of accuracy on the popular GLUE performance benchmark. Read more about this important milestone in Senior Research Engineer Ben Cohen’s new blog post.
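To make the idea of weight sparsity concrete, here is a minimal, hypothetical sketch of magnitude-based pruning, where all but the largest weights are zeroed out. This is illustrative only, not Numenta’s actual method; the function name and the density value are assumptions chosen to loosely mirror a 10x parameter reduction:

```python
import random

def sparsify_weights(weights, density=0.1):
    """Zero out all but the largest-magnitude weights.

    density=0.1 keeps ~10% of the weights nonzero, loosely
    mirroring a 10x reduction in effective parameters.
    (Illustrative sketch only, not Numenta's implementation.)
    """
    k = max(1, round(len(weights) * density))
    # The magnitude of the k-th largest weight becomes the cutoff.
    threshold = sorted((abs(w) for w in weights), reverse=True)[k - 1]
    return [w if abs(w) >= threshold else 0.0 for w in weights]

random.seed(0)
dense = [random.gauss(0.0, 1.0) for _ in range(1000)]
sparse = sparsify_weights(dense, density=0.1)
kept = sum(1 for w in sparse if w != 0.0)
print(f"{kept} of {len(sparse)} weights remain nonzero")
```

Because most entries are exactly zero, a sparse layer like this can skip the corresponding multiplications entirely, which is where the efficiency gains come from.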

➤  Using Active Dendrites and Sparse Representations to Enable Continual Learning

In addition to our sparse transformer networks, we recently found that sparse activations can help mitigate catastrophic forgetting, another key problem in deep learning today. Our new pre-print, “Going Beyond the Point Neuron: Active Dendrites and Sparse Representations for Continual Learning,” outlines how we augmented neural networks with two key properties of real neurons: active dendrites and sparse representations. Our team hypothesized that these two properties would permit knowledge retention over time, and we found that adding them to neural networks significantly enhanced the networks’ ability to learn tasks sequentially, much as humans do. First author Karan Grewal explains why this discovery is important and what it means in the companion blog post, Can Active Dendrites Mitigate Catastrophic Forgetting?
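As a rough illustration of those two properties (a sketch under simplifying assumptions, not the pre-print’s actual implementation; all names and shapes here are hypothetical), sparse representations can be enforced with a k-winners-take-all step, while dendritic segments gate each unit’s feedforward activity based on a context signal:

```python
import math

def k_winners(activations, k):
    """Sparse representation: only the k most active units stay on."""
    threshold = sorted(activations, reverse=True)[k - 1]
    return [a if a >= threshold else 0.0 for a in activations]

def dendritic_gate(feedforward, context, segments):
    """Active dendrites (hypothetical sketch): scale each unit's
    feedforward activity by a sigmoid of its best-matching dendritic
    segment's response to the context vector."""
    gated = []
    for act, unit_segments in zip(feedforward, segments):
        best = max(sum(w * c for w, c in zip(seg, context))
                   for seg in unit_segments)
        gated.append(act * (1.0 / (1.0 + math.exp(-best))))
    return gated

# Two units, each with one dendritic segment over a 2-d context vector.
context = [1.0, 0.0]
segments = [[[4.0, 0.0]],   # unit 0's segment matches this context
            [[-4.0, 0.0]]]  # unit 1's segment does not
gated = dendritic_gate([1.0, 1.0], context, segments)
sparse = k_winners(gated, k=1)  # only the context-matched unit survives
```

The intuition, per the pre-print, is that context-dependent gating routes different tasks through different sparse subsets of units, so learning something new overwrites less of what was learned before.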

Brains@Bay MeetUp December 15, 10am PST: Sensorimotor Learning in AI

Our final Brains@Bay MeetUp of the year will take place Wednesday, December 15 at 10am PST. This month, our topic is sensorimotor learning in AI, and we’re thrilled to have three distinguished speakers joining us. Rich Sutton, often considered the father of modern computational reinforcement learning, will kick things off with a talk on The Increasing Role of Sensorimotor Experience in Artificial Intelligence. See below for the full lineup and register for your spot today. If you can’t attend live, you can watch the replay on our YouTube channel.

  • Richard Sutton, DeepMind and University of Alberta
    The Increasing Role of Sensorimotor Experience in Artificial Intelligence
  • Clément Moulin-Frier, Flowers Laboratory
    Open-ended Skill Acquisition in Humans and Machines: An Evolutionary and Developmental Perspective
  • Viviane Clay, Numenta and University of Osnabrück
    The Effect of Sensorimotor Learning on the Learned Representations in Deep Neural Networks

RSVP to Brains@Bay here

Thank you for your continued support and interest in Numenta throughout the year.  I look forward to bringing you even more exciting news in the coming year as we continue to work on implementing the ideas in the Thousand Brains Theory. Stay tuned and don’t forget to follow us on Twitter to make sure you don’t miss any updates.

Christy Maver
VP of Marketing
