Posters
CNS 2018: Sparse Distributed Representations
This poster highlights one of the foundational topics of Numenta research: sparse distributed representations, or SDRs for short. SDRs are how the brain represents information. The mathematical properties of SDRs are essential components of biological intelligence. This poster examines how accurately neurons can recognize sparse patterns.
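To make the underlying math concrete, here is a minimal Python sketch, not taken from the poster itself, that computes the exact probability of a false match between two random SDRs using the standard combinatorial argument; the parameter values (n=2048, w=40, theta=20) are illustrative.

```python
from math import comb

def false_match_probability(n, w, theta):
    """Probability that a random SDR with w active bits out of n overlaps
    a fixed SDR (also with w active bits) in at least theta bits."""
    total = comb(n, w)  # number of possible SDRs with w active bits
    matches = sum(comb(w, b) * comb(n - w, w - b) for b in range(theta, w + 1))
    return matches / total

# With illustrative cortical-scale parameters, the false-match probability
# is vanishingly small, which is why recognition of sparse patterns is
# so reliable even when only a subset of the bits is checked.
print(false_match_probability(n=2048, w=40, theta=20))
```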
CNS 2018: Learning Relative Landmark Locations
This poster introduces a proposal that the brain uses grid cells to perform unsupervised learning of landmark locations. It shows the results of a network model trained on 1,000 environments and compares its performance to a bag-of-features model. It also lays out discussion topics for future extensions of this work.
Grid Cell Meeting 2018: Using Grid Cells for Coordinate Transforms
In this poster, we show how the brain might use a grid cell code to represent 1) sensed structures at locations in viewer-centric coordinates and 2) sensed features and locations in object-centric coordinates. We lay out a mechanism for routing transforms between these grid cell codes, enabling object recognition.
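As a toy illustration of the idea, under the simplifying assumptions that each grid cell module encodes 2D location as a phase modulo its scale and that the object is not rotated relative to the viewer, a coordinate transform reduces to per-module phase subtraction. The module scales below are invented for the example.

```python
import numpy as np

# Each grid cell module encodes a 2D location as a phase in [0, 1)^2,
# i.e. location modulo the module's scale. Modules with different scales
# together give a code that is unique over a large range.
SCALES = [10.0, 14.0, 19.6]  # hypothetical module scales (arbitrary units)

def encode(location):
    """Per-module phase code for a 2D location (viewer- or object-centric)."""
    return [np.mod(np.asarray(location) / s, 1.0) for s in SCALES]

def transform(sensed_phases, object_origin_phases):
    """Viewer-centric -> object-centric: subtract the phase of the object's
    origin from the phase of the sensed feature, per module.
    (This toy version ignores rotation between the two frames.)"""
    return [np.mod(p - q, 1.0) for p, q in zip(sensed_phases, object_origin_phases)]

# A feature sensed at (7, 3) in viewer coordinates, object origin at (5, 1):
feature = transform(encode([7.0, 3.0]), encode([5.0, 1.0]))
print(feature)  # equals encode([2.0, 2.0]), the object-centric location
```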
Cosyne 2018: Determining Allocentric Locations of Sensed Features
In this poster, we propose a neural mechanism for determining allocentric locations of sensed features. We show how cortical columns can use multiple independent moving sensors to identify and locate objects. We lay out a model inspired by grid cell modules that describes how the brain computes and represents locations.
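One way to picture the multi-sensor aspect: each column narrows down a set of candidate objects from its own sequence of sensed features and locations, and the columns then vote. The sketch below is a deliberately simplified illustration with made-up object names, not the poster's model.

```python
# Each cortical column maintains the set of objects consistent with the
# (feature, location) pairs its own sensor has observed so far; lateral
# connections let columns intersect these sets. Hypothetical candidates:
column_candidates = [
    {"mug", "bowl", "can"},   # column 1, after one touch
    {"mug", "can"},           # column 2, after one touch
    {"mug", "plate"},         # column 3, after one touch
]

# Voting: only objects consistent with every sensor survive, so several
# independently moving sensors identify an object with far fewer
# movements than a single sensor would need.
identified = set.intersection(*column_candidates)
print(identified)  # {'mug'}
```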
Cosyne 2018: Sparse Distributed Representations
This poster highlights sparse distributed representations, a method the brain uses to represent information. Sparse distributed representations and their mathematical properties are essential components of biological intelligence. This poster examines how sparse distributed representations enable robust dendritic computations in the neocortex.
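To illustrate the kind of robustness meant here, the following sketch, with illustrative numbers rather than anything taken from the poster, models a dendritic segment that stores a small subsample of an SDR and fires on partial overlap, so recognition survives substantial noise.

```python
import numpy as np

rng = np.random.default_rng(0)
n, w = 2048, 40            # population size and active cells (illustrative)
pattern = rng.choice(n, size=w, replace=False)

# A dendritic segment stores only a subsample of the pattern's cells and
# uses a low threshold; this makes recognition cheap and noise-tolerant.
segment = set(rng.choice(pattern, size=15, replace=False))
theta = 10

def segment_fires(active_cells):
    return len(segment & set(active_cells)) >= theta

# Corrupt 25% of the pattern's active cells and test recognition.
noisy = pattern.copy()
flip = rng.choice(w, size=w // 4, replace=False)
noisy[flip] = rng.choice(n, size=w // 4, replace=False)  # replacements may collide; fine for a demo
print(segment_fires(pattern))  # True
print(segment_fires(noisy))    # usually still True despite the noise
```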
CNS 2017: A Neural Mechanism for Sequence Learning – HTM Sequence Memory
In this poster, we propose a neural mechanism for sequence learning, HTM Sequence Memory, in which 1) neurons learn to recognize hundreds of patterns; 2) recognition of a pattern acts as a prediction; 3) a network of such neurons forms a powerful sequence memory; and 4) sparse representations lead to highly robust recognition.
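A heavily condensed sketch of points 1 and 2, with all sizes and thresholds invented for the example: each neuron carries many independent dendritic segments, each segment subsamples one pattern, and a sufficient match on any segment puts the neuron into a predictive state.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1024                    # cells in the network (illustrative)

def sdr(w=20):
    """A random sparse pattern: w active cells out of n."""
    return set(rng.choice(n, size=w, replace=False).tolist())

class Neuron:
    """A point neuron with many independent dendritic segments. Each
    segment stores a subsample of one previously seen pattern and makes
    the neuron 'predictive' when enough of that pattern is active again."""
    def __init__(self, theta=8, sample=12):
        self.segments = []
        self.theta, self.sample = theta, sample

    def learn(self, pattern):
        subsample = rng.choice(sorted(pattern), size=self.sample, replace=False)
        self.segments.append(set(subsample.tolist()))

    def predictive(self, active):
        return any(len(seg & active) >= self.theta for seg in self.segments)

# One neuron can learn to recognize hundreds of distinct patterns, each on
# its own segment, and recognition of any of them acts as a prediction.
neuron = Neuron()
patterns = [sdr() for _ in range(300)]
for p in patterns:
    neuron.learn(p)
print(all(neuron.predictive(p) for p in patterns))  # True
print(neuron.predictive(sdr()))                     # almost surely False
```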