HTM School is a series of educational videos created by former Numenta Open Source Community Manager Matt Taylor (1978-2020). Watch these videos to see detailed visualizations of HTM systems running, and thoughtful breakdowns of the biological algorithms involved.

This series was designed for a general audience to be viewed in order, but feel free to jump into any episode. There is no need to have a background in neuroscience, mathematics, or computer science to understand HTM Theory. For those of you wanting more detailed resources on this subject, please have a look at Numenta’s technical papers.

Since 2016, both the framework and our terminology have evolved. Notably, HTM Theory has evolved into *The Thousand Brains Theory of Intelligence*. While HTM School is not a complete guide, it covers many of the fundamental concepts of our theory as of 2016.

Read below for a description of the videos, and click the video icons to watch.

In this introductory episode of HTM School, Matt walks you through the high-level theory of **Hierarchical Temporal Memory** in less than 15 minutes.

Hierarchical Temporal Memory is a theory of intelligence based upon neuroscience research. The neocortex is the seat of intelligence in the brain, and it is structurally homogeneous throughout. This means a common algorithm is processing all your sensory input, no matter which sense. We believe we have discovered some of the foundational algorithms of the neocortex, and we’ve implemented them in software.

Play Video

**Sparse Distributed Representations (SDRs)** are a fundamental aspect of HTM systems. Before we talk about neurons and dendrites, we need to establish the communications medium of the brain. Each neuron could be connected to thousands of other neurons, and each of those synapses could activate at any time. In the brain, only about 2% of your neurons are in an active state at any given time. Watch the videos below to better understand why this is important, and why this type of medium lends the brain so much flexibility.

An SDR is simply a list of bits, each bit being 0 or 1. The brain performs a lot of binary operations on these long bit arrays as it is trying to predict future input. This episode introduces bit arrays and some basic binary operations like OR and AND. We will also introduce the idea of semantic data storage within SDRs.
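
As a rough illustration (just a sketch in Python with NumPy, not NuPIC's actual API), here is what those binary operations look like on a pair of toy SDRs:

```python
import numpy as np

# Two tiny SDRs (real HTM SDRs are thousands of bits with ~2% on).
a = np.zeros(16, dtype=np.uint8)
b = np.zeros(16, dtype=np.uint8)
a[[1, 4, 9]] = 1    # the "on" bits of SDR a
b[[4, 9, 12]] = 1   # the "on" bits of SDR b

print(a | b)  # OR: bits on in either SDR (a union)
print(a & b)  # AND: bits on in both SDRs (the overlap)
```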

Play Video

In this episode, we talk about the massive amount of data that can be represented in typical SDR structures. We also show how different SDRs can be compared to identify how similar they are. Of particular interest is the overlap score between two SDRs as a measure of their similarity.
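
Here's a minimal sketch of an overlap score, assuming SDRs are stored as NumPy boolean arrays (the sizes below are illustrative):

```python
import numpy as np

def overlap_score(x, y):
    # Overlap score: the number of positions where both SDRs are on.
    return int(np.count_nonzero(x & y))

rng = np.random.default_rng(1)
n, w = 2048, 40
x = np.zeros(n, dtype=bool)
x[rng.choice(n, size=w, replace=False)] = True
y = x.copy()                       # start with an identical SDR...
y[np.flatnonzero(y)[:10]] = False  # ...then turn off 10 of its on bits

print(overlap_score(x, x))  # 40: identical SDRs overlap completely
print(overlap_score(x, y))  # 30: still a very strong match
```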

Play Video

How many SDRs can be expressed in different input spaces? What are the chances of false positive collisions? What happens if we only compare a sample of the on bits in different SDRs? Believe it or not, HTM systems prove to be extremely fault-tolerant. Matt demonstrates this in this episode by sampling SDRs instead of storing every on bit.
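
You can check the scale of these numbers yourself. This back-of-the-envelope sketch uses typical HTM dimensions (2048 bits, 40 on) and the standard counting argument: the number of SDRs containing all b sampled bits is C(n−b, w−b), out of C(n, w) possible SDRs:

```python
from math import comb

n, w = 2048, 40        # input space size, number of on bits
print(f"{comb(n, w):.3e} distinct SDRs fit in this input space")

# If we store only a random sample of b of the 40 on bits, and declare a
# match whenever all b sampled bits are on, the chance that an unrelated
# random SDR triggers a false positive is C(n-b, w-b) / C(n, w).
b = 10
print(f"false positive probability: {comb(n - b, w - b) / comb(n, w):.3e}")
```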

Play Video

We can collect sets of SDRs over time. As we see new SDRs, we can compare them to our sets using the binary comparison operations described earlier. Even in the presence of large amounts of noise, Matt shows how SDRs can still be dependably classified. If we squash the sets into unions, we can still tell whether we've seen a given SDR before while performing exponentially fewer operations.
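
A toy sketch of the idea (a hypothetical example, not library code): OR a set of SDRs into one union, then test noisy membership with a single AND.

```python
import numpy as np

rng = np.random.default_rng(0)
n, w = 1024, 20

def random_sdr():
    s = np.zeros(n, dtype=bool)
    s[rng.choice(n, size=w, replace=False)] = True
    return s

stored = [random_sdr() for _ in range(50)]
union = np.logical_or.reduce(stored)    # squash the whole set into one SDR

probe = stored[7].copy()
probe[np.flatnonzero(probe)[:5]] = False  # noisy copy: 5 on bits dropped

# One AND against the union replaces 50 separate comparisons.
print(np.count_nonzero(probe & union), "of", np.count_nonzero(probe),
      "on bits fall inside the union")
```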

Play Video

- Modeling Data Streams Using Sparse Distributed Representations
- Sparse Distributed Representations: Our Brain’s Data Structure
- How Do Neurons Operate on Sparse Distributed Representations? A Mathematical Theory of Sparsity, Neurons and Active Dendrites
- Properties of Sparse Distributed Representations and their Application To Hierarchical Temporal Memory

So how can data be translated into Sparse Distributed Representations? In this episode, Matt introduces some encoding concepts and talks about encoding scalar values. These examples are very simple, but widely used in HTM systems.

How many ways can scalar data be encoded into a binary input space? You’ll find out two ways we do it, but there are countless other ways to semantically encode data.
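
Here's a minimal sketch of one of those ways, a simplified version of the classic sliding-window scalar encoder (the parameter values are illustrative):

```python
import numpy as np

def scalar_encode(value, min_val, max_val, n=64, w=9):
    """Encode a scalar as n bits with a block of w contiguous on bits.
    Nearby values share on bits, so semantic similarity is preserved."""
    value = max(min_val, min(max_val, value))   # clamp to the valid range
    buckets = n - w + 1
    bucket = int((value - min_val) / (max_val - min_val) * (buckets - 1))
    sdr = np.zeros(n, dtype=np.uint8)
    sdr[bucket:bucket + w] = 1
    return sdr

print(scalar_encode(3.0, 0, 10))
print(scalar_encode(3.5, 0, 10))  # overlaps heavily with the encoding of 3.0
```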

Play Video

If we want an HTM system to comprehend the passage of time as we humans do (minutes, hours, days, months), that data should be encoded into a semantic representation and included along with any other data in an input row. In this episode, Matt explains how a Date-Time Encoder works by joining together several periodic scalar encodings.
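
Here's a hedged sketch of the idea, using a hypothetical `periodic_encode` helper rather than the real Date-Time Encoder implementation. The on-bit block wraps around, so the end of a cycle is semantically close to its beginning:

```python
import numpy as np

def periodic_encode(value, period, n=24, w=5):
    # Like a scalar encoder, but the block of on bits wraps around,
    # so the end of the cycle overlaps the start (23:00 is near 00:00).
    start = int(round(value / period * n)) % n
    sdr = np.zeros(n, dtype=np.uint8)
    for i in range(w):
        sdr[(start + i) % n] = 1
    return sdr

def datetime_encode(hour, weekday):
    # Concatenate the periodic encodings of each component into one row.
    return np.concatenate([periodic_encode(hour, 24), periodic_encode(weekday, 7)])

print(datetime_encode(hour=23, weekday=6))
print(datetime_encode(hour=0,  weekday=0))   # shares bits with 23:00 Sunday
```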

After this episode, you might have some ideas about your own encoders. This space has endless potential. If you’re interested in writing your own encoder, be sure to check out the extra resources below.

Play Video

An input space is like a fiber optic cable. The Spatial Pooler needs to map its columns to the input space in a way that allows them to learn as patterns in the space change. Watch this video to find out how the Spatial Pooler’s columns are initialized onto the input space, and how random connections are established.
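
A toy sketch of that initialization (the parameter values here are made up for illustration): each column gets a random potential pool of input bits, and each potential synapse gets a random permanence near the connection threshold, so a little learning can connect or disconnect it.

```python
import numpy as np

rng = np.random.default_rng(42)
input_size, num_columns = 400, 64
potential_pct, threshold = 0.5, 0.2

# Each column is assigned a random "potential pool" of input bits...
potential = rng.random((num_columns, input_size)) < potential_pct
# ...and every potential synapse gets a random permanence near the
# threshold, so small permanence changes can flip its connected state.
permanence = np.where(potential,
                      rng.uniform(0.0, 0.4, (num_columns, input_size)), 0.0)
connected = potential & (permanence >= threshold)
print(connected.sum(axis=1)[:5], "connected synapses for the first 5 columns")
```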

Play Video

Now we are going to start feeding real data into the Spatial Pooler and watching as different columns learn to recognize different characteristics of the input space.

Matt will show you how each column becomes active depending on its connections to the input space, and he’ll show you some learning rules columns use. You will also see how a “random” Spatial Pooler compares to an SP with learning turned on.
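
Here's a simplified sketch of one compute-and-learn step, assuming global inhibition and ignoring the potential pool and boosting for brevity (this is not NuPIC's actual API):

```python
import numpy as np

rng = np.random.default_rng(0)
num_columns, input_size = 128, 200
permanence = rng.uniform(0.0, 0.4, (num_columns, input_size))

def sp_step(input_bits, permanence, threshold=0.2, num_active=8,
            inc=0.05, dec=0.02):
    connected = permanence >= threshold          # which synapses count
    overlaps = connected @ input_bits            # per-column overlap score
    active = np.argsort(overlaps)[-num_active:]  # top-k: global inhibition
    # Hebbian learning, active columns only: grow permanences toward
    # active input bits, shrink them toward inactive ones.
    delta = np.where(input_bits == 1, inc, -dec)
    permanence[active] += delta
    np.clip(permanence, 0.0, 1.0, out=permanence)
    return active

x = (rng.random(input_size) < 0.1).astype(np.int64)  # a sparse input pattern
print(sp_step(x, permanence))
```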

Play Video

Today’s topic is “Homeostatic Regulation of Neuronal Excitability”, or boosting. Learn about what this is, why it’s necessary, and how it works by watching this episode of HTM School.

You’ll learn about active duty cycles and see how some columns can become much more active than others, limiting the total capacity and efficiency of the Spatial Pooler. After boost factors are calculated, watch as cellular activity spreads more evenly.
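
The exponential boosting rule used in modern HTM implementations looks roughly like this (parameter names and values here are illustrative):

```python
import numpy as np

def update_duty_cycles(duty, active_now, period=1000):
    # Moving average of how often each column has been active recently.
    return (duty * (period - 1) + active_now) / period

def boost_factors(duty, target_density=0.02, strength=3.0):
    # Under-active columns get a factor above 1.0 (a louder voice in the
    # next inhibition round); over-active columns get damped below 1.0.
    return np.exp(strength * (target_density - duty))

duty = np.array([0.00, 0.01, 0.02, 0.10])   # active duty cycles
print(boost_factors(duty))  # quiet columns boosted, the busy one damped
```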

Play Video

In this episode, we’re traveling into another dimension… the 2nd dimension. We describe why topology in HTM is important and how it is implemented today.

Topology indicates strong spatial relationships between the bits within the input pattern streaming into the Spatial Pooler. With topology engaged, the behavior of the Spatial Pooler changes to better identify localized relationships.
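
A toy sketch of what "engaging topology" means for a column's potential pool, assuming a 2D input space (the shapes and radius are made up for illustration):

```python
import numpy as np

def local_potential_pool(col_x, col_y, input_shape, radius):
    # With topology on, a column connects only to input bits near its own
    # (x, y) position, instead of sampling the entire input space.
    ys, xs = np.mgrid[0:input_shape[0], 0:input_shape[1]]
    return (np.abs(xs - col_x) <= radius) & (np.abs(ys - col_y) <= radius)

mask = local_potential_pool(8, 8, (16, 16), radius=3)
print(mask.sum(), "input bits in this column's neighborhood")  # 49
```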

Play Video

- The HTM Spatial Pooler—A Neocortical Algorithm for Online Sparse Distributed Coding
- Why Neurons Have Thousands of Synapses, A Theory of Sequence Memory in Neocortex
- Spatial Pooling Forum Discussions
- SP in Biological and Machine Intelligence

This episode offers a detailed introduction to a key component of HTM theory and describes how neurons in the neocortex can remember sequences of spatial patterns within the context of previous inputs by activating specific cells within each column.

Using detailed examples, drawings, and computer-animated visualizations, we walk through how cells are put into predictive states in response to new stimuli, and how segments and synapses connect between cells in the columnar structure.
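
A hypothetical toy model of the predictive state, nothing like the real implementation's data structures: cells are (column, cell) pairs, and each distal segment is a set of presynaptic cells it has synapses to.

```python
def predictive_cells(prev_active, segments, activation_threshold=3):
    predicted = set()
    for column, cell, synapses in segments:
        # A segment that sees enough previously-active cells puts its
        # cell into the predictive state for the next time step.
        if len(synapses & prev_active) >= activation_threshold:
            predicted.add((column, cell))
    return predicted

prev_active = {(0, 1), (3, 2), (5, 0), (7, 3)}
segments = [
    (2, 0, {(0, 1), (3, 2), (5, 0)}),  # 3 matches -> cell (2, 0) predicted
    (4, 1, {(0, 1), (9, 9), (8, 8)}),  # only 1 match -> no prediction
]
print(predictive_cells(prev_active, segments))
```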

Play Video

We start off this episode by explaining the puzzler question from the last episode, introducing the concepts of “first order” and “high order” memory systems.

Next, we dive into the mechanics of bursting mini-columns, and how winner cells are chosen to learn brand new transitions within sequences.
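
A sketch of that choice, using a hypothetical `segment_counts` table in place of real segment bookkeeping: if a column had a predicted cell, only the predicted cells fire; otherwise the whole mini-column bursts and the least-used cell becomes the winner that learns the new transition.

```python
def activate_column(column, predicted, segment_counts, cells_per_column=4):
    cells = [(column, i) for i in range(cells_per_column)]
    matching = [c for c in cells if c in predicted]
    if matching:
        # The prediction was confirmed: only predicted cells fire,
        # preserving the sequence context.
        return matching, matching[0]
    # No cell was predicted: the whole mini-column bursts, and the
    # least-used cell is chosen to learn the brand-new transition.
    winner = min(cells, key=lambda c: segment_counts.get(c, 0))
    return cells, winner

print(activate_column(2, predicted={(2, 1)}, segment_counts={}))
print(activate_column(6, predicted=set(), segment_counts={(6, 0): 2}))
```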

Play Video
