A Mathematical Formalization of Hierarchical Temporal Memory's Spatial Pooler for use in Machine Learning

Abstract

Hierarchical temporal memory (HTM) is an emerging machine learning algorithm with the potential to perform predictions on spatiotemporal data. The algorithm, inspired by the neocortex, consists of two primary components, namely the spatial pooler (SP) and the temporal memory (TM). The SP is utilized to map similar inputs into generalized sparse distributed representations (SDRs). Those SDRs are then utilized by the TM, which performs sequence learning and prediction. One challenge with HTM is ensuring that proper SDRs are generated by the SP. If the SDRs are not generalizable, the TM will not be able to make proper predictions. This work focuses on the SP and its corresponding output SDRs. A single unifying mathematical framework was created for the SP. The primary learning mechanism was explored, and a maximum likelihood estimator for determining the degree of permanence update was proposed. The boosting mechanisms were studied and found to be relevant only during the initial few iterations of the network. Observations were made relating HTM to well-known algorithms such as competitive learning and attribute bagging. Methods were provided for using the SP for classification as well as dimensionality reduction. Empirical evidence verified that, given the proper parameterizations, the SP may be used for feature learning. Similarity metrics were created for scoring the SDRs produced by the SP. The overlap metric showed that the SP is extremely robust to noise: the SP was able to produce similar outputs for a given input, provided the noise did not cause the input to change classes. This overlap metric was further utilized to create a classifier for novelty detection. The SP proved able to withstand more noise than the well-known support vector machine (SVM).
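The overlap metric mentioned above compares two SDRs by counting their shared active bits. The following is a minimal sketch, not the paper's implementation: it assumes SDRs are represented as binary NumPy vectors, and the function name is illustrative.

```python
import numpy as np

def overlap(sdr_a, sdr_b):
    # Overlap between two binary SDRs: the number of bits
    # that are active (1) in both representations.
    return int(np.sum(np.logical_and(sdr_a, sdr_b)))

# Two hypothetical 8-bit SDRs differing in a few bits (noise).
a = np.array([1, 0, 1, 1, 0, 0, 1, 0])
b = np.array([1, 0, 0, 1, 0, 1, 1, 0])
print(overlap(a, b))  # prints 3
```

A high overlap relative to the number of active bits indicates that the two SDRs likely encode the same class, which is what makes the metric usable for noise-robustness scoring and novelty detection.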
