
    Design and Analysis of a Reconfigurable Hierarchical Temporal Memory Architecture

    Self-learning hardware systems with a high degree of plasticity are critical for performing spatio-temporal tasks in next-generation computing systems. To this end, hierarchical temporal memory (HTM) offers time-based online-learning algorithms that store and recall temporal and spatial patterns. In this work, a reconfigurable and scalable HTM architecture is designed with unique pooling realizations. A virtual synapse design is proposed to address the dynamic interconnections that arise during learning. The architecture is interwoven with parallel cells and columns that enable high processing speed for the cortical learning algorithm. HTM has two core operations: spatial and temporal pooling. These operations are verified on two datasets: MNIST and the European number plate font. The spatial pooling operation is independently verified for classification with and without noise, and temporal pooling is verified for simple prediction. The spatial pooler architecture is ported onto an Altera Cyclone II fabric, and the entire architecture is synthesized for Xilinx Virtex IV. The results show 91% classification accuracy on the MNIST database and 90% accuracy on the European number plate font numbers in the presence of Gaussian and salt-and-pepper noise. For prediction, first- and second-order predictions are observed for a five-number sequence generated from the European number plate font, with ~95% accuracy. Moreover, the proposed hardware architecture offers a 3902X speedup over the software realization. These results indicate that the proposed architecture can serve as a core for building HTM in hardware and, eventually, a standalone self-learning hardware system.
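    As a rough illustration of the spatial pooling operation mentioned above, the sketch below shows the generic HTM flow of computing column overlaps against a binary input and selecting the top columns as the sparse output. The input size, column count, sparsity, and permanence threshold are assumptions for illustration, not parameters from the paper.

```python
import numpy as np

# Minimal, generic spatial-pooler sketch (illustrative only, not the paper's
# architecture): columns connect to the input through random synapses, and the
# columns with the highest overlap form the sparse output.
rng = np.random.default_rng(0)

INPUT_BITS = 784        # assumed: a flattened 28x28 binary MNIST image
NUM_COLUMNS = 256       # assumed column count
ACTIVE_COLUMNS = 10     # assumed sparsity (~4% of columns active)

# Random potential synapses and permanence values per column.
potential = rng.random((NUM_COLUMNS, INPUT_BITS)) < 0.5
permanence = rng.random((NUM_COLUMNS, INPUT_BITS)) * potential
connected = (permanence > 0.5).astype(np.int32)    # synapses above the permanence threshold

def spatial_pool(input_bits: np.ndarray) -> np.ndarray:
    """Return indices of the winning columns for one binary input vector."""
    overlap = connected @ input_bits                # active, connected synapses per column
    return np.argsort(overlap)[-ACTIVE_COLUMNS:]    # k-winners-take-all

sample = (rng.random(INPUT_BITS) < 0.1).astype(np.int32)
print(spatial_pool(sample))
```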

    A Novel FPGA Implementation of Hierarchical Temporal Memory Spatial Pooler

    There is currently a strong focus across the technological landscape on creating machines capable of performing complex, objective-based tasks in a manner similar, or superior, to a human. Many of the methods being explored in the machine intelligence space require large sets of labeled data to first train and then classify inputs. Hierarchical Temporal Memory (HTM) is a biologically inspired machine intelligence framework that aims to classify and interpret streaming unlabeled data without supervision and to detect anomalies in such data. In software HTM models, increasing the number of “columns” or processing elements to the level required to make meaningful predictions on complex data can make real-time analysis prohibitive, so there is a need to improve the throughput of such systems. HTMs require large amounts of data that can be accessed randomly and then processed independently, and FPGAs provide a reconfigurable, easily scalable platform ideal for these types of operations. One of the two main components of the HTM architecture is the “spatial pooler”. This thesis explores a novel hardware implementation of an HTM spatial pooler, with a boosting algorithm to increase homeostasis and a novel classification algorithm to interpret input data in real time. This implementation shows a significant speedup in data processing and provides a framework for scaling the implementation based on the available hardware resources of the FPGA.
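    The boosting idea referenced in this abstract can be sketched generically as follows; the exact rule used in the thesis is not given here, so the duty-cycle window, target activity, and boost gain below are assumptions based on the common HTM formulation in which rarely active columns have their overlap scores multiplied upward.

```python
import numpy as np

# Generic HTM-style boosting sketch (assumed formulation, not the thesis's exact rule):
# each column tracks how often it has been active; columns active less often than the
# target rate have their overlap scores boosted so they re-enter the competition.
NUM_COLUMNS = 256        # assumed column count
TARGET_DUTY = 0.04       # assumed target fraction of cycles a column is active
BOOST_STRENGTH = 2.0     # assumed boost gain
PERIOD = 1000            # assumed duty-cycle averaging window

duty_cycle = np.zeros(NUM_COLUMNS)

def boosted_overlap(raw_overlap: np.ndarray, last_active: np.ndarray) -> np.ndarray:
    """Update per-column duty cycles and return boosted overlap scores."""
    global duty_cycle
    duty_cycle = (duty_cycle * (PERIOD - 1) + last_active) / PERIOD
    boost = np.exp(BOOST_STRENGTH * (TARGET_DUTY - duty_cycle))
    return raw_overlap * boost
```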

    Introduction to Memristive HTM Circuits

    Hierarchical temporal memory (HTM) is a cognitive learning algorithm intended to mimic the working principles of the neocortex, the part of the human brain thought to be responsible for data classification, learning, and prediction. Built on a combination of concepts from neuroscience, software realizations of HTM have already been shown to be effective on various recognition, detection, and prediction tasks. However, its distinctive features of hierarchy, modularity, and sparsity suggest that a hardware realization of HTM can be attractive, offering faster processing as well as small memory requirements, on-chip area, and total power consumption. Although only a few works address hardware realization of HTM, promising results illustrate the effectiveness of incorporating emerging memristor device technology to tackle this open research problem. Hence, this chapter reviews hardware designs for HTM with a specific focus on memristive HTM circuits.
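    As a loose illustration of why memristors map naturally onto HTM synapses, the behavioral sketch below treats a device conductance as a permanence value that is nudged up or down by programming pulses and thresholded to decide whether a synapse is connected. The conductance range, step size, and threshold are assumed values for illustration, not figures from the chapter.

```python
# Behavioral memristive-synapse sketch (illustrative, not a circuit model from the
# chapter): the device conductance plays the role of an HTM permanence, adjusted by
# SET/RESET pulses and compared against a threshold to decide connectivity.
G_MIN, G_MAX = 1e-6, 1e-4        # assumed conductance bounds (siemens)
G_STEP = 5e-6                    # assumed conductance change per programming pulse
G_THRESHOLD = 5e-5               # assumed "connected synapse" threshold

def program(conductance: float, potentiate: bool) -> float:
    """Apply one SET (potentiate) or RESET (depress) pulse and clamp to device limits."""
    conductance += G_STEP if potentiate else -G_STEP
    return min(G_MAX, max(G_MIN, conductance))

def is_connected(conductance: float) -> bool:
    """A synapse counts as connected when its conductance exceeds the threshold."""
    return conductance > G_THRESHOLD
```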

    A Scalable Flash-Based Hardware Architecture for the Hierarchical Temporal Memory Spatial Pooler

    Hierarchical temporal memory (HTM) is a biomimetic machine learning algorithm focused on modeling the structural and algorithmic properties of the neocortex. It comprises two components, realizing pattern recognition of spatial and temporal data, respectively. HTM research has gained momentum in recent years, leading to both hardware and software exploration of its algorithmic formulation. Previous work on HTM has centered on addressing performance concerns; however, the memory-bound operation of HTM presents significant challenges to scalability. In this work, a scalable flash-based storage processor unit, Flash-HTM (FHTM), is presented along with a detailed analysis of its potential scalability. FHTM leverages SSD flash technology to implement the spatial pooler of the HTM cortical learning algorithm. The ability of FHTM to scale with increasing model complexity is addressed with respect to design footprint, memory organization, and power efficiency. Additionally, a mathematical model of the hardware is evaluated against the MNIST dataset, yielding 91.98% classification accuracy. A fully custom layout is developed to validate the design in a TSMC 180 nm process. The area and power footprints of the spatial pooler are 30.538 mm2 and 5.171 mW, respectively. Storage processor units have the potential to be viable platforms for supporting implementations of HTM at scale.
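    The abstract does not state how the spatial pooler's outputs were labeled for the MNIST evaluation, so the sketch below shows one simple way such an evaluation could be framed: keep a prototype SDR per digit class and assign a new spatial-pooler output to the class whose prototype it overlaps with most. The overlap-based scheme and function names are assumptions for illustration.

```python
import numpy as np

# Illustrative SDR overlap classifier (assumed scheme, not necessarily the paper's):
# each class keeps a prototype binary SDR; a test SDR takes the label of the
# prototype with which it shares the most active bits.
def overlap(a: np.ndarray, b: np.ndarray) -> int:
    """Number of active bits shared by two binary SDRs."""
    return int(np.sum(a & b))

def classify(sdr: np.ndarray, prototypes: dict) -> int:
    """Return the label whose prototype SDR best overlaps the input SDR."""
    return max(prototypes, key=lambda label: overlap(sdr, prototypes[label]))
```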

    Scalable Digital Architecture of Hierarchical Temporal Memory Spatial Pooler

    Hierarchical temporal memory is an unsupervised machine learning algorithm. Inspired by the structural and functional properties of the human brain, it is capable of processing spatio-temporal signals, which are used for data storage and predictions. The algorithm is composed of two main components: the spatial pooler and the temporal memory. The spatial pooler produces a sparse distributed representation for a given pattern, and these generalized representations are used by the temporal memory to make predictions. It is therefore important to ensure that well-generalized sparse distributed representations are obtained for the spatio-temporal data patterns. This work presents a digital design of the spatial pooler for an existing mathematical formulation of the algorithm, along with an analysis of its scalability on the target FPGA device. The digital design is implemented in two ways: conventional and parallel architectures. The architectures are compared in terms of speedup, area, and power consumption. Based on the analysis of results, the parallel approach is more efficient in terms of speed and power, with a negligible increase in device utilization. The spatial pooler design is evaluated against the standard MNIST dataset, obtaining up to 90% and 88% classification accuracy for the training and test data, respectively. Additionally, the designs are tested on the MNIST dataset in the presence of noise to determine their robustness; fluctuations of up to 10% of the peak accuracy are observed in the classification accuracy plots for the noisy dataset. The design is synthesized for the Xilinx Virtex 7 family with a total power consumption of up to 260 mW.
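    The noise test described above can be pictured with the small sketch below, which binarizes an MNIST image and flips a fraction of its pixels with salt-and-pepper noise before the image would be fed to the spatial pooler; the binarization threshold and flip probabilities are assumptions, not the paper's settings.

```python
import numpy as np

# Sketch of a salt-and-pepper robustness check (assumed threshold and noise levels):
# binarize an MNIST image, flip a fraction of its bits, and feed the result to the
# spatial-pooler pipeline while recording classification accuracy per noise level.
def binarize(image: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Convert an 8-bit grayscale image to binary pixels."""
    return (image > threshold).astype(np.uint8)

def salt_and_pepper(bits: np.ndarray, flip_prob: float, rng=None) -> np.ndarray:
    """Flip each bit independently with probability flip_prob."""
    rng = rng or np.random.default_rng()
    flips = rng.random(bits.shape) < flip_prob
    return np.where(flips, 1 - bits, bits)

# Typical usage: sweep flip_prob (e.g. 0.05 to 0.25) and record accuracy at each level.
```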