Neuroscience 2018 (SfN)

Wed, Nov 07, 2018 8:00 AM — 12:00 PM

Posters:

Numenta researchers will present three posters at Neuroscience 2018, hosted by the Society for Neuroscience (SfN). All three posters are part of the same session:

  • Session Number: 702
  • Session Title: Computation, Modeling, and Simulation: Network Models: Theory II
  • Date and Time: Wednesday Nov 7, 2018 8:00 AM – 12:00 PM
  • Location: San Diego Convention Center: SDCC Halls B-H | LLL54-56

Poster 1: Grid cells in the neocortex, a framework for cortical computation, abstract number 9704

Authors: Subutai Ahmad, Jeff Hawkins, Marcus Lewis, Scott Purdy

Abstract:
Recent evidence suggests that grid cell-like mechanisms may be present in the neocortex. In this paper we propose that cells that behave similarly to grid cells exist in every cortical column and play an essential role in all cortical function. We present a theoretical framework for understanding how the neocortex operates based on cortical grid cells. Grid cells in the medial entorhinal cortex represent the location of an animal in various environments. We propose that cortical grid cells, in the lower layers of the neocortex, also represent a location. Whereas grid cells in the entorhinal cortex represent the location of one thing, the body, grid cells in the cortex simultaneously represent the location of hundreds of things. Cortical columns that receive input from different parts of the body track the location of each body part relative to external reference frames. Similarly, cortical columns that receive input from the retina track the location of visual features relative to external reference frames. Including a representation of location in each column provides a framework for understanding how the cortex learns the structure and behavior of objects (“what” regions) and how the cortex maps the space around our bodies (“where” regions). The similarity of circuitry observed in all cortical regions suggests that even high-level concepts are learned and represented in a location-based framework. Cortical grid cells suggest a new way of thinking about cortical function, one that is based on the interplay of sensory input and location processing. In this paper we describe this idea and explore some of its implications.
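To give a flavor of the core idea, here is a minimal, illustrative sketch (not code from the poster, and all names and parameter values are my own assumptions) of how several grid-cell-like modules, each with its own spatial scale and orientation, can jointly encode a 2D location: any single module is ambiguous because its code repeats periodically, but the combination of module phases is unique over a much larger space.

```python
# Illustrative sketch only: how multiple grid-like modules jointly encode location.
import numpy as np

class GridModule:
    """One grid-cell module: encodes position as a phase within its spatial period."""
    def __init__(self, scale, orientation_deg, resolution=10):
        self.scale = scale                      # spatial period of this module
        theta = np.radians(orientation_deg)     # module orientation
        self.rot = np.array([[np.cos(theta), -np.sin(theta)],
                             [np.sin(theta),  np.cos(theta)]])
        self.resolution = resolution            # discrete cells per axis in the module

    def encode(self, position):
        """Map a 2D position to a discrete phase (cell index) within this module."""
        phase = (self.rot @ position) % self.scale / self.scale   # in [0, 1)^2
        return tuple((phase * self.resolution).astype(int))

modules = [GridModule(scale=s, orientation_deg=o)
           for s, o in [(10.0, 0), (14.0, 20), (19.0, 45)]]

def location_code(position):
    """The joint code across all modules; unique over a much larger space."""
    return tuple(m.encode(np.asarray(position, dtype=float)) for m in modules)

# Two points that alias the first module (separated by exactly its scale of 10)
# still receive different joint codes because the other modules disagree.
print(location_code([3.0, 4.0]))
print(location_code([13.0, 4.0]))
```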


Poster 2: A mechanism for sensorimotor object recognition using cortical grid cells, abstract number 9340

Authors: Marcus Lewis, Scott Purdy, Subutai Ahmad, Jeff Hawkins

Abstract:
The neocortex is capable of modeling complex objects through sensorimotor interaction but the neural mechanisms are poorly understood. Previously we have proposed that grid cell-like neurons exist in every cortical column. In this paper, we expand on this idea and describe a two-layer network model that uses cortical grid cells and path integration to learn and recognize objects through movement. Grid cells exhibit regular tiling over environments and are organized into modules, each with a common scale and orientation. A single module encodes position within the spatial scale of the module but is ambiguous over larger spaces. A set of modules can uniquely encode many large spaces. In our model, one layer contains several grid cell-like modules. This layer provides a location signal for each learned object such that features can be associated with a specific location in the reference frame of that object. A second layer, a sensory input layer, receives the location representation as context, and uses it to encode the sensory input in the context of a location in the object’s reference frame. Projections from the input layer to the location layer invoke possible locations that are consistent with the current input. Movement of the sensor updates the locations via path integration. Projections from the location layer back to the input layer predict the next input. A series of sensations followed by movements quickly results in the unique object identity that is consistent with the series of sensations and movements. Simulations show that the model can learn thousands of objects with high noise tolerance. We characterize the convergence time for object recognition, which is dependent on the number of unique features, the number of unique locations, and the total number of objects stored in the network. We compare our model to experimental data and propose testable predictions made by the model. We discuss the relationship to cortical circuitry and suggest that the reciprocal connections between layers 4 and 6 fit the requirements of the model.
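The following is a deliberately simplified sketch of the recognition logic described above, not the poster's network model: objects are stored as sets of (location, feature) pairs in each object's reference frame, and at inference time a set of candidate (object, location) hypotheses is narrowed by alternating sensations (which must match the stored feature) and movements (path integration shifts every surviving hypothesis). The object names and features are made up for illustration; convergence to a single surviving object corresponds to the convergence behavior the abstract describes.

```python
# Illustrative sketch only: feature-at-location object recognition via path integration.

# Learned objects: location -> feature, with locations in each object's reference frame.
objects = {
    "cup":  {(0, 0): "rim", (0, -2): "handle", (0, -4): "base"},
    "bowl": {(0, 0): "rim", (0, -2): "curve",  (0, -4): "base"},
}

def recognize(sensations_and_movements):
    """sensations_and_movements: list of (feature, movement) pairs; movement follows sensing."""
    # Start with every stored location of every object as a candidate hypothesis.
    hypotheses = {(name, loc) for name, feats in objects.items() for loc in feats}
    for feature, (dx, dy) in sensations_and_movements:
        # Sensation: keep only hypotheses whose stored feature matches the input.
        hypotheses = {(name, loc) for name, loc in hypotheses
                      if objects[name].get(loc) == feature}
        # Movement: path-integrate every surviving hypothesis to its new location.
        hypotheses = {(name, (loc[0] + dx, loc[1] + dy)) for name, loc in hypotheses}
        candidates = {name for name, _ in hypotheses}
        if len(candidates) == 1:
            return candidates.pop()
    return {name for name, _ in hypotheses}

# Sensing "rim" alone is ambiguous between cup and bowl; moving down by 2 and then
# sensing "handle" uniquely identifies the cup.
print(recognize([("rim", (0, -2)), ("handle", (0, -2))]))   # -> cup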


Poster 3: The predictive neuron, how active dendrites enable spatiotemporal computation in the neocortex, abstract number 2852

Authors: Subutai Ahmad, Jeff Hawkins

Abstract:
Pyramidal neurons receive input from thousands of synapses spread throughout dendritic branches with diverse integration properties. The majority of these synapses have negligible impact on the soma. It is therefore a mystery how pyramidal neurons integrate the input from all these synapses, and what kind of network behavior this enables in cortical tissue. It has been previously proposed that active dendrites enable neurons to recognize multiple independent patterns. In this paper we extend this idea. We propose a model where patterns detected on active basal dendrites act as predictions by slightly depolarizing the soma without generating an action potential. A single neuron can then predict its activation in hundreds of independent contexts. We show how a network of pyramidal neurons combined with fast local inhibition and branch specific plasticity mechanisms can learn complex time-based sequences and form precise predictive codes. The algorithm scales well, learns continuously and demonstrates excellent performance on real-world data. We then extend the idea to handle sensorimotor sequences. Sensory inputs can change due to external factors or they can change due to our own behavior. Interpreting behavior-generated changes requires knowledge of how the body is moving, whereas interpreting externally-generated changes relies solely on the temporal sequence of input patterns. We show that our predictive network mechanism can learn both pure external temporal sequences as well as sensorimotor sequences. When the contextual input includes information derived from efference motor copies, the cells learn sensorimotor sequences. If the contextual input consists of nearby cellular activity, the cells learn temporal sequences. Through simulation we show that a network containing both types of contextual input can automatically separate and learn sensory sequences containing a blended mixture of both types of input patterns. We discuss the relationship to experimental data and testable predictions made by the model. Given the prevalence of pyramidal neurons throughout the neocortex and the importance of prediction in inference and behavior, we propose that this form of sequence memory may be a universal property of neocortical tissue.
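As a rough illustration of the "predictive neuron" idea (my own toy simplification, not the poster's model), the sketch below gives each cell several independent dendritic segments; a segment that sees enough active presynaptic cells depolarizes the cell, marking it as predicted without making it fire. Two cells that respond to the same feedforward input but carry different contexts then produce a context-dependent code, which is the basis of the sequence memory described above.

```python
# Illustrative sketch only: dendritic segments acting as predictions (depolarization).

ACTIVATION_THRESHOLD = 2   # active synapses on one segment needed to depolarize the cell

class PredictiveNeuron:
    def __init__(self, name):
        self.name = name
        self.segments = []          # each segment is a set of presynaptic cell names

    def add_segment(self, presynaptic_cells):
        self.segments.append(set(presynaptic_cells))

    def is_predicted(self, active_cells):
        """Depolarized if any single segment sees enough active presynaptic cells."""
        return any(len(segment & active_cells) >= ACTIVATION_THRESHOLD
                   for segment in self.segments)

# Two cells that both respond to feedforward input "B", but in different contexts:
# one predicts B after the context {A1, A2}, the other after {C1, C2}.
b_after_a = PredictiveNeuron("B-after-A")
b_after_a.add_segment({"A1", "A2"})
b_after_c = PredictiveNeuron("B-after-C")
b_after_c.add_segment({"C1", "C2"})

context = {"A1", "A2"}              # cells active at the previous time step
predicted = [n.name for n in (b_after_a, b_after_c) if n.is_predicted(context)]
print(predicted)                    # only "B-after-A" is depolarized in this context
```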

Authors

Jeff Hawkins, Subutai Ahmad, Marcus Lewis and Mirko Klukas
