BAAI Conference 2021: The Thousand Brains Theory: A Roadmap to Machine Intelligence

Jeff Hawkins • Co-Founder

Jeff Hawkins presented a talk on “The Thousand Brains Theory: A Roadmap to Machine Intelligence” at the Beijing Academy of Artificial Intelligence Conference on 1st June 2021. In this talk, he discussed the key components of The Thousand Brains Theory, outlined in his recent book A Thousand Brains, and Numenta’s recent work.

The BAAI Conference 2021 had over 70,000 attendees. The conference is committed to promoting international exchange and cooperation in academia and the AI industry. It also aims to cultivate a community and nurture technological research and breakthroughs across theory, methods, tools, and systems.

Video

This video is provided by BAAI.

Slides

Follow Up Q&A

1) Does the brain use many learning paradigms? If it does, how?

JH: The brain as a whole uses several different learning paradigms. The neocortex is more uniform. The basic way it learns a model of the world is via sensing and moving and sensing and moving. This is the core of the Thousand Brains Theory. The neural mechanisms required to do this are complex and varied. Most learning is via forming new synapses using typical associative/Hebbian methods, but there are other neural mechanisms as well.
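The associative/Hebbian learning mentioned above can be sketched in a few lines. This is a generic textbook-style Hebbian update ("neurons that fire together wire together"), not Numenta's specific mechanism, which involves active dendrites and far richer synapse dynamics:

```python
import numpy as np

def hebbian_update(weights, pre, post, lr=0.01):
    """One associative (Hebbian) step: strengthen connections whose
    pre- and post-synaptic units are co-active. Illustrative only."""
    return weights + lr * np.outer(post, pre)

# Toy example: 3 presynaptic units, 2 postsynaptic units.
pre = np.array([1.0, 0.0, 1.0])
post = np.array([0.0, 1.0])
w = np.zeros((2, 3))
w = hebbian_update(w, pre, post)
# Only synapses between co-active units (pre 0 & 2, post 1) strengthen.
```

Because the update is purely local (each synapse depends only on its own pre- and post-synaptic activity), no global error signal or labeled data is needed, which is consistent with the self-supervised learning the answer describes.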

2) How do cortical columns transform and align various reference frames through mutual recurrent connections?

JH: This is a complex question. Transforming reference frames, say from allocentric to egocentric, occurs in several parts of each column. We have a theory of how this occurs at the neural level, but we have not published it yet. As a general rule, the reference frames in each column are independently created; they are not aligned. There are some exceptions to this.
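For readers unfamiliar with the terminology, the allocentric-to-egocentric transform mentioned above is, geometrically, a translation plus rotation. The sketch below shows the 2D version; it illustrates only the kind of computation a column would need, not the unpublished neural mechanism the answer refers to:

```python
import numpy as np

def allocentric_to_egocentric(p_world, sensor_pos, sensor_angle):
    """Express a world-frame (allocentric) 2D point in a sensor's
    (egocentric) frame: translate by the sensor position, then rotate
    by the negative of the sensor's heading angle (radians)."""
    c, s = np.cos(-sensor_angle), np.sin(-sensor_angle)
    rotation = np.array([[c, -s],
                         [s,  c]])
    return rotation @ (np.asarray(p_world) - np.asarray(sensor_pos))

# A point one unit east of a sensor facing north (+y) lies to the
# sensor's own right and behind its heading direction.
p = allocentric_to_egocentric([1.0, 0.0], [0.0, 0.0], np.pi / 2)
```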

3) Do we know the mechanism of how the human brain encodes and retrieves knowledge?

JH: The Thousand Brains Theory is the answer to this question. Knowledge about the world is stored in a structured way using a type of reference frame. This is explained in my book.

4) How can you build a neural network that learns without explicitly giving a task, or by imagining/creating its own task?

JH: The key to intelligence is learning a model of the world. The model is learned by moving our sensors. For example, as we visually attend to something and then move our eyes to attend to something else, the neocortex stores the relative position and orientation of the two attended objects. Just by sequentially moving our sensors, we build a model. Of course we may be motivated to explore and learn different parts of the world, but the process itself does not require labeled data or a specific task. Again, I explain this in my book.
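The sense-then-move loop described above can be sketched as a toy data structure: each fixation stores the observed feature together with the sensor location obtained by integrating the movements so far. The class, its names, and the feature strings are all hypothetical illustrations of the idea, not Numenta's algorithm:

```python
import numpy as np

class ObjectModel:
    """Toy sketch of learning an object model by sensing and moving:
    features are stored at movement-integrated locations, so the model
    captures the relative positions of attended features without any
    labels or task."""

    def __init__(self):
        self.features = []            # list of (feature, location) pairs
        self._location = np.zeros(2)  # current sensor location

    def sense_and_move(self, feature, movement):
        # Record the feature at the current location...
        self.features.append((feature, self._location.copy()))
        # ...then path-integrate the movement to get the next location.
        self._location = self._location + np.asarray(movement, float)

# Hypothetical example: two fixations on a coffee cup.
model = ObjectModel()
model.sense_and_move("handle", [2.0, 0.0])  # sense, then move right
model.sense_and_move("rim", [0.0, 0.0])     # sense at the new location
```

The key property, matching the answer, is that the learning signal is just the sequence of sensations and movements; no label or task objective appears anywhere.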
