HTM School

HTM School is a series of educational videos created by former Numenta Open Source Community Manager Matt Taylor (1978-2020). Watch these videos to see detailed visualizations of HTM systems running, and thoughtful breakdowns of the biological algorithms involved.

This series was designed for a general audience and is best viewed in order, but feel free to jump into any episode. You do not need a background in neuroscience, mathematics, or computer science to understand HTM Theory. For those of you wanting more detailed resources on this subject, please have a look at Numenta’s technical papers.

Since 2016, both the framework and our terminology have evolved. Notably, HTM Theory has evolved into the Thousand Brains Theory of Intelligence. While HTM School is not a complete guide, it covers many of the fundamental concepts of our theory as of 2016.

Read below for a description of each video.

Overview

In this introductory episode of HTM School, Matt walks you through the high-level theory of Hierarchical Temporal Memory in less than 15 minutes.

Hierarchical Temporal Memory is a theory of intelligence based on neuroscience research. The neocortex is the seat of intelligence in the brain, and it is structurally homogeneous throughout. This means a common algorithm processes all of your sensory input, no matter which sense it comes from. We believe we have discovered some of the foundational algorithms of the neocortex, and we’ve implemented them in software.

SDRs

Sparse Distributed Representations (SDRs) are a fundamental aspect of HTM systems. Before we talk about neurons and dendrites, we need to establish the communications medium of the brain. Each neuron could be connected to thousands of other neurons, and each of those synapses could activate at any time. In the brain, only about 2% of your neurons are active at any given time. Watch the videos below to better understand why this is important, and why this type of medium lends the brain so much flexibility.

Bit Arrays

An SDR is simply a list of bits, each bit being 0 or 1. The brain performs a lot of binary operations on these long bit arrays as it is trying to predict future input. This episode introduces bit arrays and some basic binary operations like OR and AND. We will also introduce the idea of semantic data storage within SDRs.
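
To make this concrete, here is a minimal Python sketch of these operations on two toy bit arrays (the sizes and bit positions are made up for illustration; real SDRs are much longer and far sparser):

    # Two tiny example SDRs, represented as lists of 0/1 bits.
    a = [0, 1, 0, 0, 1, 0, 1, 0]
    b = [0, 1, 0, 1, 0, 0, 1, 0]

    # OR: a bit is on if it is on in either array.
    print([x | y for x, y in zip(a, b)])  # [0, 1, 0, 1, 1, 0, 1, 0]

    # AND: a bit is on only if it is on in both arrays.
    print([x & y for x, y in zip(a, b)])  # [0, 1, 0, 0, 0, 0, 1, 0]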

Capacity and Comparison

In this episode, we talk about the massive amount of data that can be represented in typical SDR structures. We also show how different SDRs can be compared to identify how similar they are. Of particular interest is the overlap score between two SDRs as a measure of their similarity.
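
As a rough sketch, the overlap score is simply a count of the bits that are on in both SDRs, and the capacity of an SDR with n total bits and w on bits is "n choose w" (the parameter values below are illustrative):

    from math import comb

    def overlap(a, b):
        """Overlap score: the number of positions on in both SDRs."""
        return sum(x & y for x, y in zip(a, b))

    print(overlap([0, 1, 1, 0], [1, 1, 0, 0]))  # 1

    # Capacity: with n = 2048 bits and w = 40 on bits, the number of
    # unique SDRs is astronomically large (on the order of 10^84).
    print(comb(2048, 40))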

Overlap Sets and Sampling

How many SDRs can be expressed in different input spaces? What are the chances of false positive collisions? What happens if we only compare a sample of the on bits in different SDRs? Believe it or not, HTM systems prove to be extremely fault-tolerant. Matt demonstrates this fault tolerance in this episode by sampling SDRs instead of storing every on bit.
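
Here is a toy Python sketch of the sampling idea, with made-up sizes and a made-up match threshold; the point is that remembering only a fraction of an SDR's on bits is usually enough to recognize it later:

    import random

    n, w = 2048, 40
    stored = set(random.sample(range(n), w))         # an SDR as on-bit indices
    sample = set(random.sample(sorted(stored), 10))  # remember only 10 of 40 bits

    def matches(candidate_bits, threshold=8):
        """True if enough of the sampled bits appear in the candidate."""
        return len(sample & candidate_bits) >= threshold

    print(matches(stored))                           # True: the original matches
    print(matches(set(random.sample(range(n), w))))  # almost surely False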

Sets and Unions

We can collect sets of SDRs over time. As we see new SDRs, we can compare them to our sets using the binary comparison operations described earlier. Even in the presence of large amounts of noise, Matt shows how SDRs can still be dependably classified. If we squash the sets into unions, we can still tell whether we have seen an SDR before while performing far fewer operations.
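
A minimal sketch of the union trick (the membership rule below, which requires every on bit to be in the union, is a simplification; a noise-tolerant match would use a threshold instead):

    def union(sdrs):
        """OR together a collection of SDRs (as sets of on-bit indices)."""
        out = set()
        for sdr in sdrs:
            out |= sdr
        return out

    stored = union([{1, 5, 9}, {2, 5, 7}, {3, 8, 9}])

    # One subset test per candidate, no matter how many SDRs were stored.
    print({2, 5, 7} <= stored)   # True: probably seen before
    print({4, 6, 11} <= stored)  # False: definitely new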

Encoders

Encoding real-world data into SDRs is a very important process to understand in HTM. Semantic meaning within the input data must be encoded into a binary representation. These videos show some examples of encoding data into binary arrays so they can be processed by the Spatial Pooler.

Scalar Encoding

So how can data be translated into Sparse Distributed Representations? In this episode, Matt introduces some encoding concepts and talks about encoding scalar values. These examples are very simple, but widely used in HTM systems.

How many ways can scalar data be encoded into a binary input space? You’ll find out two ways we do it, but there are countless other ways to semantically encode data.
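
One of the approaches covered amounts to sliding a block of contiguous on bits across the array. This Python sketch uses illustrative parameter values, not the ones from any particular implementation:

    def encode_scalar(value, min_val=0.0, max_val=100.0, n=100, w=21):
        """Naive scalar encoder: a block of w contiguous on bits whose
        position reflects the value, so nearby values share on bits."""
        value = max(min_val, min(value, max_val))  # clip to the encoder's range
        i = round((n - w) * (value - min_val) / (max_val - min_val))
        bits = [0] * n
        bits[i:i + w] = [1] * w
        return bits

    # Encodings of nearby values overlap heavily; distant values share nothing.
    print(sum(a & b for a, b in zip(encode_scalar(50), encode_scalar(52))))  # 20
    print(sum(a & b for a, b in zip(encode_scalar(50), encode_scalar(90))))  # 0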

Datetime Encoding

If we want an HTM system to comprehend the passage of time as we humans do (minutes, hours, days, months), that data should be encoded into a semantic representation and included alongside any other data in an input row. In this episode, Matt explains how a Date-Time Encoder works by joining together several periodic scalar encodings.
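
A toy sketch of the idea, assuming just two time fields (the real encoder combines more fields, such as time of day, weekend, and season, with carefully chosen parameters):

    def encode_periodic(value, period, n=40, w=9):
        """Periodic scalar encoder: the block of on bits wraps around,
        so the end of a cycle encodes close to its beginning."""
        start = int(n * (value % period) / period)
        return [1 if (i - start) % n < w else 0 for i in range(n)]

    def encode_datetime(hour, weekday):
        """Concatenate periodic encodings of several time fields."""
        return encode_periodic(hour, 24) + encode_periodic(weekday, 7)

    print(sum(encode_datetime(23, 6)))  # 18 on bits out of 80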

After this episode, you might have some ideas about your own encoders. This space has endless potential. If you’re interested in writing your own encoder, be sure to check out the extra resources below.

Spatial Pooling

Input coming from the senses or other parts of the brain is messy and irregular. The Spatial Pooler’s job is to normalize the sparsity of the input while retaining its semantically encoded information.

Input Space and Connections

An input space is like a fiber optic cable. The Spatial Pooler needs to map its columns to the input space in a way that allows them to learn as patterns in the space change. Watch this video to find out how the Spatial Pooler’s columns are initialized onto the input space, and how random connections are established.
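
In code, that initialization might look roughly like this sketch (all names and parameter values here are illustrative, not the actual implementation):

    import random

    INPUT_SIZE = 400           # bits in the input space
    NUM_COLUMNS = 100          # Spatial Pooler columns
    POTENTIAL_PCT = 0.5        # fraction of input bits a column might connect to
    CONNECTED_THRESHOLD = 0.2  # permanence needed for a "connected" synapse

    columns = []
    for _ in range(NUM_COLUMNS):
        # Each column gets a random potential pool of input bits, and each
        # potential synapse gets a random initial permanence.
        pool = random.sample(range(INPUT_SIZE), int(INPUT_SIZE * POTENTIAL_PCT))
        columns.append({i: random.uniform(0.0, 0.4) for i in pool})

    # Only synapses whose permanence crosses the threshold count as connected.
    connected = [{i for i, p in col.items() if p >= CONNECTED_THRESHOLD}
                 for col in columns]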

Learning

Now we are going to start feeding real data into the Spatial Pooler and watching as different columns learn to recognize different characteristics of the input space.

Matt will show you how each column becomes active depending on its connections to the input space, and he’ll show you some learning rules columns use. You will also see how a “random” Spatial Pooler compares to an SP with learning turned on.
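
The core learning rule is a Hebbian-style permanence adjustment; here is a minimal sketch with illustrative increment values:

    INC, DEC = 0.03, 0.015  # illustrative permanence increments

    def learn(permanences, active_input_bits):
        """For a winning column, strengthen synapses to active input bits
        and weaken synapses to inactive ones, clamped to [0, 1]."""
        for i, p in permanences.items():
            if i in active_input_bits:
                permanences[i] = min(1.0, p + INC)
            else:
                permanences[i] = max(0.0, p - DEC)

    column = {0: 0.19, 1: 0.21, 2: 0.30}
    learn(column, active_input_bits={0, 2})
    print(column)  # synapses 0 and 2 strengthened, synapse 1 weakened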

Boosting and Inhibition

Today’s topic is “Homeostatic Regulation of Neuronal Excitability”, or boosting. Learn about what this is, why it’s necessary, and how it works by watching this episode of HTM School.

You’ll learn about active duty cycles and see how some columns can become much more active than others, limiting the total capacity and efficiency of the Spatial Pooler. After boost factors are calculated, watch as cellular activity spreads more evenly.
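
One common formulation computes a multiplier from how far a column's active duty cycle sits from the target activation density (the constants in this sketch are illustrative):

    from math import exp

    BOOST_STRENGTH = 3.0
    TARGET_DENSITY = 0.02  # aim for each column being active ~2% of the time

    def boost_factor(active_duty_cycle):
        """Under-active columns get a factor above 1.0, giving them an edge
        in the competition; over-active columns are handicapped below 1.0."""
        return exp(BOOST_STRENGTH * (TARGET_DENSITY - active_duty_cycle))

    print(boost_factor(0.0))  # ~1.06: a silent column gets boosted
    print(boost_factor(0.1))  # ~0.79: a hyperactive column is suppressed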

Topology

In this episode, we’re traveling into another dimension… the 2nd dimension. We describe why topology is important in HTM and how it is implemented today.

Topology matters when there are strong spatial relationships between the bits of the input pattern streaming into the Spatial Pooler. With topology enabled, the Spatial Pooler’s behavior changes to better capture these localized relationships.
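
A sketch of what topology changes, assuming a 2D input: each column's potential pool is restricted to a local patch instead of being drawn from the entire input space:

    def local_pool(cx, cy, radius, width, height):
        """Indices of input bits within a square neighborhood of a column's
        center, clipped at the edges of the input space."""
        return [y * width + x
                for y in range(max(0, cy - radius), min(height, cy + radius + 1))
                for x in range(max(0, cx - radius), min(width, cx + radius + 1))]

    print(len(local_pool(10, 10, 2, 32, 32)))  # 25 bits: a 5x5 patch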

Temporal Memory

The Temporal Memory component of HTM recognizes sequences of incoming spatial patterns from the Spatial Pooler by activating individual cells within each active column to indicate a temporal context for each input.

Temporal Memory Part 1

This episode offers a detailed introduction to a key component of HTM theory and describes how neurons in the neocortex can remember sequences of spatial patterns within the context of previous inputs by activating specific cells within each column.

Using detailed examples, drawings, and computer-animated visualizations, we walk through how cells are put into predictive states in response to new stimuli, and how segments and synapses connect cells within the columnar structure.
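
The prediction rule itself is simple; here is a minimal sketch (the threshold and the cell and segment structures are illustrative):

    ACTIVATION_THRESHOLD = 3  # real systems use larger values

    def predictive_cells(distal_segments, active_cells):
        """A cell enters the predictive state when any of its distal
        segments has enough synapses onto currently active cells.
        `distal_segments` maps a cell to a list of segments, each a set
        of presynaptic cells."""
        predicted = set()
        for cell, segments in distal_segments.items():
            if any(len(seg & active_cells) >= ACTIVATION_THRESHOLD
                   for seg in segments):
                predicted.add(cell)
        return predicted

    segs = {"c1": [{"a", "b", "c", "d"}], "c2": [{"x", "y", "z"}]}
    print(predictive_cells(segs, active_cells={"a", "b", "c"}))  # {'c1'}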

Temporal Memory Part 2

We start off this episode by explaining the puzzler question from the last episode, introducing the concepts of “first order” and “high order” memory systems.

Next, we dive into the mechanics of bursting mini-columns, and how winner cells are chosen to learn brand new transitions within sequences.
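
A sketch of the bursting rule, with a simplified winner-cell choice (the real algorithm prefers the best matching segment and breaks ties more carefully):

    def activate_column(column_cells, previously_predictive, distal_segments):
        """If any cell in a newly active mini-column was predictive, only
        those cells fire. Otherwise the whole column bursts, and a winner
        cell (here: the one with the fewest segments) learns the new
        transition."""
        predicted = [c for c in column_cells if c in previously_predictive]
        if predicted:
            return predicted, predicted      # active cells, learning cells
        winner = min(column_cells,
                     key=lambda c: len(distal_segments.get(c, [])))
        return list(column_cells), [winner]  # burst: every cell fires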

Cortical Circuitry

In this episode, we’ll walk through concepts introduced in A Theory of How Columns in the Neocortex Enable Learning the Structure of the World. We talk about larger structures in the cortex that contain neurons, like layers and columns.

Grid Cells

In this video, we explore grid cells: how they and other location-tracking cells in the brain were discovered, how they project onto space to represent locations, and how they can be interpreted as SDRs within HTM systems.

A Framework for Intelligence

In this video, we walk through concepts introduced in A Framework for Intelligence and Cortical Function Based on Grid Cells in the Neocortex, which proposes that grid cell-like location signals allow each cortical column to learn models of entire objects. This framework was an early articulation of what became the Thousand Brains Theory of Intelligence.
