In this poster, we review how sparsity is deeply ingrained in the brain. We then show how insights from the brain can be applied to practical AI systems. Numenta VP Research Subutai Ahmad presented this poster at the NAISys 2020 conference.
This poster, which is based on a paper of the same name, proposes that sparsity should be a key design principle for robustness. Numenta VP Research Subutai Ahmad presented this poster and gave a presentation on the topic at the ICML 2019 workshop on Uncertainty & Robustness in Deep Learning.
There is currently no consensus in the neuroscience literature on how grid cells are involved in the representation of 3D location, and their contribution to coding variables beyond two or three dimensions is completely uncharted territory. This poster explores how grid cells can encode N-dimensional variables using random velocity projections. The poster covers path integration, the relation to band cells, and capacity and tuning curves.
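The core idea can be illustrated with a toy sketch: each grid-cell-like module keeps a 2D phase on a torus and updates it by projecting the N-dimensional velocity through its own random matrix. The parameter values, function names, and module count below are illustrative assumptions, not the poster's exact model.

```python
import numpy as np

rng = np.random.default_rng(42)

N_DIM = 5      # dimensionality of the encoded variable (assumption)
N_MODULES = 8  # number of grid-cell-like modules (assumption)

# Each module projects N-d velocity onto its own 2D toroidal phase space.
projections = [rng.standard_normal((2, N_DIM)) for _ in range(N_MODULES)]

def encode(displacements):
    """Integrate a path of N-d displacement vectors and return the
    concatenated module phases (each 2D phase wraps mod 1, like a torus)."""
    code = []
    for i in range(N_MODULES):
        p = np.zeros(2)
        for v in displacements:
            p = (p + projections[i] @ v) % 1.0  # path integration step
        code.append(p)
    return np.concatenate(code)
```

Because each phase update is linear (up to wrap-around), the resulting code depends only on net displacement, not on the particular path taken, which is the defining property of path integration.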
This poster shows a model where patterns detected on active basal dendrites act as predictions by slightly depolarizing the soma without generating an action potential. A single neuron can then predict its activation in hundreds of independent contexts. The predictive network mechanism can learn both pure external temporal sequences as well as sensorimotor sequences. When the contextual input includes information derived from efference motor copies, the cells learn sensorimotor sequences. If the contextual input consists of nearby cellular activity, the cells learn temporal sequences.
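A minimal toy version of the dendritic-prediction idea: each basal segment stores a sparse set of contextual synapses, and the cell enters a depolarized, predictive state when any segment's pattern is sufficiently matched. The class name, threshold, and segment counts are illustrative assumptions, not Numenta's full HTM implementation.

```python
import numpy as np

class PredictiveNeuron:
    """Toy neuron whose basal dendritic segments detect contextual
    patterns and depolarize the cell without generating a spike."""

    def __init__(self, n_context, n_segments, theta=3, seed=0):
        rng = np.random.default_rng(seed)
        # Each segment samples a sparse subset of contextual inputs.
        self.segments = [set(rng.choice(n_context, size=theta, replace=False))
                         for _ in range(n_segments)]
        self.theta = theta  # coincidence-detection threshold

    def predictive(self, active_context):
        """True if any basal segment's synapses are matched by at least
        theta active contextual inputs (depolarized, predictive state)."""
        active = set(active_context)
        return any(len(seg & active) >= self.theta for seg in self.segments)
```

With many independent segments, a single cell can recognize its activation in many distinct contexts, and the same mechanism works whether the context is nearby cellular activity (temporal sequences) or motor-derived signals (sensorimotor sequences).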
This poster describes a two-layer network model that uses cortical grid cells and path integration to learn and recognize objects through movement. In our model, one layer contains several grid cell-like modules and provides a location signal for each learned object, so that features can be associated with a specific location in the reference frame of that object. A second, sensory input layer receives the location representation as context and uses it to encode the sensory input at that location in the object’s reference frame.
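The two-layer binding can be caricatured in a few lines: the location layer supplies an object-centric location code, the sensory layer stores each feature at that location, and recognition keeps only the objects consistent with every observation made so far. All names here are illustrative; this is a sketch of the idea, not the model's actual network dynamics.

```python
learned = {}  # object name -> set of (location, feature) pairs

def learn_object(name, feature_at_location):
    """Associate each feature with a location in the object's reference frame."""
    learned[name] = set(feature_at_location)

def recognize(observations):
    """Return objects consistent with every (location, feature) sensed so far;
    each movement adds an observation and narrows the candidate set."""
    return {name for name, pairs in learned.items()
            if set(observations) <= pairs}
```

For example, two objects that share a feature at one location remain ambiguous until a movement reveals a distinguishing feature at another location.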