The Technology

Numenta’s research mission is to be a catalyst for machine intelligence. By reverse engineering the neocortex and building products that work on neocortical principles, we hope to accelerate the creation of machines that learn, and to create the foundation for machine intelligence. Grok is our first product based on neocortical principles; its unique abilities are derived from a deep understanding of neuroscience.

To use Grok you do not need to know neuroscience or even machine learning principles. However, by understanding the technology used in Grok you can better understand what Grok can do and how it is different from other approaches to anomaly detection and machine learning.

Numenta is open about its technology. Not only can you learn how Grok works, but the source code for the learning algorithms used in Grok is available in the NuPIC open source project. The home for NuPIC is www.numenta.org. There you will find the latest information and developments on the technology used by Numenta.

This web page provides introductory material and links to help you get started.

The Machine Intelligence Behind Grok

Grok uses sophisticated machine learning techniques to analyze streaming data and detect unusual behavior. Our Science of Anomaly Detection white paper provides an overview of how Grok does this. This short paper is the best place to start if you are new to Numenta and want a quick introduction to how Grok works.

Comparison to Other Machine Learning Algorithms

There are many machine learning techniques. Typically, a data scientist will look at a problem and choose one or more of these techniques to address the specific problem. Every machine learning technique has its strengths and weaknesses.

The technology used by Grok is a model of a slice of the neocortex. It exhibits many of the capabilities we find in brains, making it best suited for streaming data applications where the underlying patterns in the data are continually changing.

The following attributes differentiate Grok’s machine learning technology from other techniques:

  • Grok is a memory-based system; it learns the patterns in the data. Techniques such as linear regression use formulae to model data. Formulaic systems can learn quickly but only work on specific types of patterns. Memory-based systems like Grok may take more data to train, but they can learn patterns that have no compact mathematical expression. Servers and server-based applications generate data with patterns of this kind, which call for memory-based models.
  • Grok is an online learning system. Online systems learn continuously, so they are better suited for applications where the patterns in the data change over time. As Grok learns, new patterns replace old patterns, in the same way you remember recent events better than old ones.
  • Grok learns time-based patterns. These are patterns that exist over time, like a melody. Most machine learning techniques cannot learn time-based patterns. ARIMA and ARMA do work with time series, but they are based on averages and have limited applicability to the types of patterns exhibited by servers and other fast-changing data sources. Grok learns high-order time-based patterns. This capability dramatically improves its prediction and anomaly detection because Grok automatically uses as much temporal context as it can to improve predictions.
  • Grok uses Sparse Distributed Representations (SDRs) to represent data. SDRs are the language of the brain. They are a kind of universal representation: they allow Grok to handle almost any kind of data, whereas some machine learning techniques are restricted in the kinds of data they can use or predict, and you don’t have to tell Grok anything about what the data you send it represents. SDRs also give Grok the ability to generalize across different but similar patterns.
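The sparse-representation idea above can be illustrated with a toy sketch (a simplification for intuition, not Numenta’s implementation; the vector size and sparsity below are illustrative assumptions): model an SDR as a large binary vector with only a few active bits, and measure similarity between two SDRs as the overlap of their active bits.

```python
import random

SDR_SIZE = 2048      # total number of bits (illustrative size)
ACTIVE_BITS = 40     # number of 1 bits, roughly 2% sparsity (assumed)

def random_sdr(rng):
    """Create a sparse binary representation as a set of active bit indices."""
    return set(rng.sample(range(SDR_SIZE), ACTIVE_BITS))

def overlap(a, b):
    """Similarity between two SDRs: the number of active bits they share."""
    return len(a & b)

rng = random.Random(42)
a = random_sdr(rng)
b = random_sdr(rng)

# Two unrelated SDRs share almost no active bits.
print("unrelated overlap:", overlap(a, b))

# A noisy copy of an SDR (a few bits replaced) still overlaps strongly,
# which is what lets a memory-based system generalize across similar patterns.
noisy = set(list(a)[: ACTIVE_BITS - 5]) | set(rng.sample(range(SDR_SIZE), 5))
print("noisy-copy overlap:", overlap(a, noisy))
```

Because the representation is large and sparse, two random SDRs are very unlikely to overlap much, so small overlaps reliably indicate different inputs and large overlaps indicate similar ones.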

Technology Resources

CLA White Paper

The neocortical model used by Grok is called the “Cortical Learning Algorithm”, or CLA. It is described in a white paper, which was written before Grok was envisioned and so does not mention Grok or how the CLA can be applied to analytics. The paper describes how the learning algorithms work and their biological mapping. It is available in several languages thanks to the generosity of the translators listed below (Numenta has not verified the accuracy of these translations).

In addition to the white paper you can watch videos of talks about the CLA given by Jeff Hawkins. Search for “Jeff Hawkins” on YouTube.

  • English
    Numenta
  • Chinese
    Translated by Yu Tianxiang
  • German
    Translated by Ingmar Baetge
  • Japanese
    Translated by Akihiro Yoshikawa
  • Korean
    Translated by Jihoon Oh
  • Portuguese
    Translated by David Ragazzi
  • Russian
    Translated by Mikhaile Netov
  • Spanish
    Translated by Garikoitz Lerma Usabiaga

NuPIC

NuPIC, the Numenta Platform for Intelligent Computing, is an open source project created by Numenta in 2013. The code in NuPIC includes the CLA algorithms described in the white paper and used in Grok. The CLA learning algorithms faithfully capture how a layer of neurons in the neocortex, arranged in columns, learns a predictive model from a stream of sensory data.
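As a loose illustration of what "learning a predictive model from a stream" means (a toy sketch, not the CLA itself and not NuPIC’s API; the class and parameter names are invented for this example), a model can continuously memorize which values have followed each short context and flag values never before seen in that context as anomalous:

```python
from collections import defaultdict

class ToySequenceMemory:
    """Illustrative online sequence model (NOT the CLA): it memorizes which
    values have followed each short context and scores how surprising each
    new value is given that memory."""

    def __init__(self, context_len=2):
        self.context_len = context_len
        self.successors = defaultdict(set)  # context tuple -> values seen next
        self.context = ()

    def step(self, value):
        """Consume one value from the stream; return 1.0 if it was never
        seen after the current context (anomalous), else 0.0."""
        seen = self.successors[self.context]
        anomaly = 0.0 if value in seen else 1.0
        seen.add(value)  # learn online: memory is updated on every step
        self.context = (self.context + (value,))[-self.context_len:]
        return anomaly

model = ToySequenceMemory()
stream = ["A", "B", "C", "A", "B", "C", "A", "B", "C"]
scores = [model.step(v) for v in stream]

# Early values are surprising; once the repeating pattern is learned,
# anomaly scores drop to zero.
print(scores)

# A deviation from the learned pattern is flagged immediately.
print(model.step("X"))
```

Using a multi-value context rather than just the previous value is a crude analogue of high-order temporal memory: the prediction for the next value depends on the sequence that led up to it, not only on the current value.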

We created the NuPIC open source project because outside developers read the white paper and wanted to work with the algorithms. They asked us to make them available in an open source project. For a detailed explanation of our motivations and goals for this project, see Jeff Hawkins’ introduction to NuPIC.

NuPIC is home to an active community of developers. Some are interested in applications using the CLA and others are interested in neocortical theory. We welcome you to join the NuPIC community. There are opportunities to contribute on many different levels.

Check the events section for hackathons, meetups, and other events related to NuPIC. Learn more at www.numenta.org.


On Intelligence

The Cortical Learning Algorithm used by Grok is part of an overall theory of neocortex called Hierarchical Temporal Memory, or HTM. The basics of HTM theory are described in the CLA white paper mentioned above. The core concepts in HTM theory were first described in the book titled On Intelligence, which was written by Jeff Hawkins with the help of Sandra Blakeslee. On Intelligence is available in the following languages.

  • English
    Jeff Hawkins with Sandra Blakeslee
  • Portuguese
    Translated by David Ragazzi
  • Spanish
    Translated by Espasa Calpe

On Intelligence is also available in the following languages (with publishers):

  • Chinese (Complex) - Yuan-Liou
  • Chinese (Simplified) - Shaanxi Science & Technology
  • Finnish - Edita Publishing Oy
  • French - Campus Presse / Village Mondial
  • German - Rowohlt
  • Hebrew - Aryeh Nir Publishing House Ltd
  • Indonesian - PT Bhuana Ilmu Populer
  • Italian - Feltrinelli
  • Korean - Sejong
  • Japanese - Random House Kodansha
  • Polish - Helion Press
  • Russian - Dialektika-Williams Publishing Group
  • Spanish - Espasa Calpe
  • Vietnamese - Tre Publishing