
    Investigation and Modelling of a Cortical Learning Algorithm in the Neocortex

    Many algorithms today provide a good machine learning solution in a specific problem domain, such as pattern recognition, clustering, classification, sequence learning, or image recognition. They are all suitable for solving a particular problem but are limited in flexibility. For example, an algorithm that plays Go cannot do image classification, anomaly detection, or sequence learning. Inspired by the functioning of the neocortex, this work investigates whether it is possible to design and implement a universal algorithm that can solve more complex tasks in the way the neocortex does. Motivated by the remarkable replication of the same and similar circuitry structures throughout the neocortex, this work focuses on the generality of the neocortical algorithm and suggests the existence of canonical cortical units that can solve more complex tasks when combined in the right way inside a neural network. Unlike traditional neural networks, the algorithms used and created in this work rely only on findings from neuroscience. Initially inspired by the concept of Hierarchical Temporal Memory (HTM), this work demonstrates how sparse encoding, spatial learning, and sequence learning can be used to model an artificial cortical area with a cortical algorithm called the Neural Association Algorithm (NAA). The proposed algorithm generalises the HTM, forms canonical units consisting of biologically inspired neurons, synapses, and dendrite segments, and explains how interconnected canonical units can build semantic meaning. Results demonstrate how such units can store a large amount of information, learn sequences, build contextual associations that create meaning, and remain robust to noise with high spatial similarity. Inspired by findings in neuroscience, this work also improves some aspects of the existing HTM and introduces a newborn stage of the algorithm. The extended algorithm takes control of the homeostatic plasticity mechanism and ensures that learned patterns remain stable. Finally, this work delivers an algorithm for computation over distributed mini-columns that can be executed in parallel using the Actor Programming Model.
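    To make the idea of a canonical unit of mini-columns concrete, the following is a minimal sketch only: a toy unit that learns sparse spatial patterns via overlap scoring and Hebbian-style permanence updates. The class name, parameters, and update constants are illustrative assumptions, not the NAA implementation described above.

    # Minimal sketch (not the authors' NAA code): a toy "canonical unit" of
    # mini-columns with k-winners-take-all activation and permanence learning.
    import numpy as np

    class CanonicalUnit:
        def __init__(self, n_inputs, n_columns, sparsity=0.02, threshold=0.5, seed=0):
            rng = np.random.default_rng(seed)
            # Each mini-column holds one permanence value per input bit.
            self.permanences = rng.uniform(0.0, 1.0, size=(n_columns, n_inputs))
            self.threshold = threshold                          # permanence needed to count as connected
            self.n_active = max(1, int(sparsity * n_columns))   # winners per step

        def compute(self, sdr, learn=True):
            """sdr: binary input vector; returns indices of active mini-columns."""
            connected = self.permanences >= self.threshold
            overlaps = connected @ sdr                           # overlap score per column
            active = np.argsort(overlaps)[-self.n_active:]       # k winners take all
            if learn:
                # Reinforce synapses to active input bits, weaken the rest.
                self.permanences[active] += np.where(sdr > 0, 0.05, -0.03)
                np.clip(self.permanences, 0.0, 1.0, out=self.permanences)
            return active

    A unit would be used as, for example, unit = CanonicalUnit(1024, 256) followed by active = unit.compute(encoded_input); repeated presentations of similar inputs settle the same small set of columns.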

    Handwritten digit recognition by bio-inspired hierarchical networks

    The human brain processes information with learning and prediction abilities, but the underlying neuronal mechanisms still remain unknown. Recently, many studies have shown that neuronal networks are capable of both generalization and association of sensory inputs. In this paper, following a set of neurophysiological findings, we propose a learning framework with strong biological plausibility that mimics prominent functions of cortical circuitries. We developed the Inductive Conceptual Network (ICN), a hierarchical bio-inspired network able to learn invariant patterns by means of Variable-order Markov Models implemented in its nodes. The outputs of the top-most node of the ICN hierarchy, representing the highest input generalization, allow for automatic classification of inputs. We found that the ICN clustered MNIST images with an error of 5.73% and USPS images with an error of 12.56%.
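    As a rough illustration of what a Variable-order Markov Model inside one node could look like, here is a tiny back-off predictor over symbol sequences. The maximum order, class name, and back-off rule are assumptions for the sketch, not the ICN paper's implementation.

    # Illustrative sketch only: a small variable-order Markov predictor of the
    # kind that could sit in one node of a hierarchical network such as the ICN.
    from collections import defaultdict

    class VOMMNode:
        def __init__(self, max_order=3):
            self.max_order = max_order
            # counts[context][symbol] = occurrences of symbol after that context
            self.counts = defaultdict(lambda: defaultdict(int))

        def train(self, sequence):
            for i, sym in enumerate(sequence):
                for k in range(1, self.max_order + 1):
                    if i - k < 0:
                        break
                    context = tuple(sequence[i - k:i])
                    self.counts[context][sym] += 1

        def predict(self, history):
            # Back off from the longest matching context to shorter ones.
            for k in range(min(self.max_order, len(history)), 0, -1):
                context = tuple(history[-k:])
                if context in self.counts:
                    dist = self.counts[context]
                    return max(dist, key=dist.get)
            return None  # no matching context seen yet

    node = VOMMNode(max_order=2)
    node.train(list("abcabcabd"))
    print(node.predict(list("ab")))  # most frequent successor of "ab"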

    A Mathematical Formalization of Hierarchical Temporal Memory's Spatial Pooler

    Hierarchical temporal memory (HTM) is an emerging machine learning algorithm with the potential to perform predictions on spatiotemporal data. The algorithm, inspired by the neocortex, currently does not have a comprehensive mathematical framework. This work brings together all aspects of the spatial pooler (SP), a critical learning component in HTM, under a single unifying framework. The primary learning mechanism is explored, and a maximum likelihood estimator for determining the degree of permanence update is proposed. The boosting mechanisms are studied and found to be relevant only during the initial few iterations of the network. Observations are made relating HTM to well-known algorithms such as competitive learning and attribute bagging. Methods are provided for using the SP for classification as well as dimensionality reduction. Empirical evidence verifies that, given the proper parameterizations, the SP may be used for feature learning. Comment: This work was submitted for publication and is currently under review. For associated code, see https://github.com/tehtechguy/mHT
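    For readers unfamiliar with the boosting mechanism discussed above, the following is a hedged sketch written from public HTM spatial pooler descriptions, not from the paper's exact formalization: columns with low recent activity get their overlap boosted so they can compete early on, and once duty cycles even out the boost factors approach one.

    # Assumed form of the SP boosting rule (illustrative, not the paper's estimator).
    import numpy as np

    def update_boosts(duty_cycle, target_density, strength=2.0):
        """Exponential boost: under-active columns get boost > 1, over-active < 1."""
        return np.exp(-strength * (duty_cycle - target_density))

    def update_duty_cycle(duty_cycle, active_mask, period=1000.0):
        """Moving average of how often each column has been active recently."""
        return ((period - 1.0) * duty_cycle + active_mask) / period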

    Gait recognition and understanding based on hierarchical temporal memory using 3D gait semantic folding

    Gait recognition and understanding systems have a wide range of prospective applications. However, their use of unstructured data from images and video has affected their performance; for example, they are easily influenced by multiple views, occlusion, clothing, and object-carrying conditions. This paper addresses these problems using realistic 3-dimensional (3D) human structural data and a sequential pattern learning framework with a top-down attention modulating mechanism based on Hierarchical Temporal Memory (HTM). First, an accurate 2-dimensional (2D) to 3D human body pose and shape semantic parameter estimation method is proposed, which exploits the advantages of an instance-level body parsing model and a virtual dressing method. Second, by using gait semantic folding, the estimated body parameters are encoded using a sparse 2D matrix to construct the structural gait semantic image. In order to achieve time-based gait recognition, an HTM network is constructed to obtain the sequence-level gait sparse distribution representations (SL-GSDRs). A top-down attention mechanism is introduced to deal with various conditions, including multiple views, by refining the SL-GSDRs according to prior knowledge. The proposed gait learning model not only helps gait recognition tasks overcome the difficulties in real application scenarios but also provides structured gait semantic images for visual cognition. Experimental analyses on the CMU MoBo, CASIA B, TUM-IITKGP, and KY4D datasets show a significant performance gain in terms of accuracy and robustness.
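    The general idea of folding a vector of body parameters into a sparse 2D binary matrix can be illustrated with a toy bucketing scheme; this is only a sketch under assumed parameter ranges and bucket counts, not the paper's gait semantic folding method.

    # Toy illustration: each body parameter becomes a one-hot row, so a parameter
    # vector turns into a sparse 2D binary matrix an HTM-style network can consume.
    import numpy as np

    def fold_parameters(params, low, high, n_buckets=64):
        """params, low, high: 1-D arrays of equal length (assumed value ranges)."""
        params = np.asarray(params, dtype=float)
        norm = (params - low) / (high - low)                       # scale to [0, 1]
        idx = np.clip((norm * (n_buckets - 1)).astype(int), 0, n_buckets - 1)
        matrix = np.zeros((len(params), n_buckets), dtype=np.uint8)
        matrix[np.arange(len(params)), idx] = 1                    # one active bit per row
        return matrix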

    ACCURACY AND MULTI-CORE PERFORMANCE OF MACHINE LEARNING ALGORITHMS FOR HANDWRITTEN CHARACTER RECOGNITION

    There have been considerable developments in the quest for intelligent machines since the beginning of the cybernetics revolution and the advent of computers. In the last two decades, with the onset of the internet, the developments have been extensive. This quest for building intelligent machines has led to research on the working of the human brain, which has in turn led to the development of pattern recognition models that take inspiration in their structure and performance from biological neural networks. Research in creating intelligent systems poses two main problems. The first is to develop algorithms that can generalize and predict accurately based on previous examples. The second is to make these algorithms run fast enough to perform real-time tasks. The aim of this thesis is to study and compare the accuracy and multi-core performance of some of the best learning algorithms on the task of handwritten character recognition. Seven algorithms are compared for their accuracy on the MNIST database, and the test-set accuracy (generalization) of the different algorithms is compared. The second task is to implement and compare the performance of two hierarchical Bayesian cortical algorithms, Hierarchical Temporal Memory (HTM) and Hierarchical Expectation Refinement Algorithm (HERA), on multi-core architectures. The results indicate that the HTM and HERA algorithms can make use of the parallelism in multi-core architectures.
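    One reason HTM-like algorithms map well to multi-core hardware is that the per-column computations are independent. The sketch below shows one generic way to split column overlap computation across worker processes; it is an assumption-laden illustration, not the thesis' implementation, and the chunking scheme and parameter names are invented for the example.

    # Hedged sketch: spreading column overlap computation over cores.
    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def chunk_overlap(args):
        perm_chunk, x, theta = args
        return (perm_chunk >= theta) @ x          # overlap for this slice of columns

    def parallel_overlap(perm, x, theta=0.5, workers=4):
        chunks = np.array_split(perm, workers, axis=0)
        with ProcessPoolExecutor(max_workers=workers) as pool:
            parts = pool.map(chunk_overlap, [(c, x, theta) for c in chunks])
        return np.concatenate(list(parts))

    if __name__ == "__main__":
        perm = np.random.rand(2048, 1024)                     # toy permanence matrix
        x = (np.random.rand(1024) < 0.02).astype(np.uint8)    # sparse binary input
        print(parallel_overlap(perm, x)[:10])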

    HTM approach to image classification, sound recognition and time series forecasting

    Master's dissertation in Biomedical Engineering. The introduction of Machine Learning (ML) into the resolution of problems typically associated with human behaviour has brought great expectations for the future. In fact, the possible development of machines capable of learning, in a way similar to humans, could bring great prospects to diverse areas like healthcare, the banking sector, retail, and any other area in which the constant attention of a person dedicated to solving a problem could be avoided; furthermore, there are problems that are still beyond human reach, and these are now at the disposal of intelligent machines, bringing new possibilities for human development. ML algorithms, specifically Deep Learning (DL) methods, still lack broader acceptance by the community, even though they are present in many systems we use daily. This lack of confidence, which is mandatory if systems are to make big, important decisions with great impact on everyday life, is due to the difficulty of understanding the learning mechanisms and the predictions that result from them: some algorithms present themselves as "black boxes", translating an input into an output while not being fully transparent to the outside. Another complication arises when it is taken into account that these algorithms are trained for a specific task and according to the training cases found during their development, making them more susceptible to error in a real environment; one can argue that they do not constitute a true Artificial Intelligence (AI). Following this line of thought, this dissertation studies a new theory, Hierarchical Temporal Memory (HTM), which can be placed in the area of Machine Intelligence (MI), an area that studies how software systems can learn in a way identical to the learning of a human being. HTM is still a fresh theory that rests on the present understanding of the functioning of the human neocortex and is under constant development; at the moment, the theory states that the neocortical zones are organized in a hierarchical structure, forming a memory system capable of recognizing spatial and temporal patterns. In the course of this project, an analysis was made of the functioning of the theory and of its applicability to various tasks typically solved with ML algorithms, such as image classification, sound recognition, and time series forecasting. At the end of this dissertation, after evaluating the different results obtained with the various approaches, it was possible to conclude that even though these results were positive, the theory still needs to mature, not only in its theoretical basis but also in the development of software libraries and frameworks, in order to capture the attention of the AI community.
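    A common first step when feeding a time series into an HTM-style system is a scalar encoder. The following is a minimal sketch of that idea; the bit counts and function name are illustrative assumptions rather than the dissertation's setup.

    # Minimal sketch of a scalar encoder for time series input to an HTM-style system.
    import numpy as np

    def encode_scalar(value, min_val, max_val, n_bits=400, n_active=21):
        """Map a scalar to a contiguous block of n_active bits in an n_bits SDR."""
        value = np.clip(value, min_val, max_val)
        span = n_bits - n_active
        start = int(round(span * (value - min_val) / (max_val - min_val)))
        sdr = np.zeros(n_bits, dtype=np.uint8)
        sdr[start:start + n_active] = 1
        return sdr

    # Nearby values share active bits, which is what lets the network generalize:
    a, b = encode_scalar(20.0, 0, 100), encode_scalar(21.0, 0, 100)
    print(int((a & b).sum()), "overlapping bits")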

    Temporal-spatial recognizer for multi-label data

    Pattern recognition is an important artificial intelligence task with practical applications in many fields, such as medicine and species distribution. Such applications involve overlapping data points, which appear in multi-label datasets. Hence, there is a need for a recognition algorithm that can separate the overlapping data points in order to recognize the correct pattern. Existing recognition methods suffer from sensitivity to noise and overlapping points, as they cannot recognize a pattern when there is a shift in the position of the data points. Furthermore, these methods do not incorporate temporal information in the recognition process, which leads to low-quality data clustering. In this study, an improved pattern recognition method based on Hierarchical Temporal Memory (HTM) is proposed to solve the overlapping of data points in multi-label datasets. The imHTM (Improved HTM) method includes improvements in two of its components: feature extraction and data clustering. The first improvement is realized as the TS-Layer Neocognitron algorithm, which solves the shift-in-position problem in the feature extraction phase. The data clustering step has two improvements, TFCM and cFCM (TFCM with limit-Chebyshev distance metric), which allow the overlapped data points that occur in patterns to be separated correctly into the relevant clusters by temporal clustering. Experiments on five datasets were conducted to compare the proposed method (imHTM) against statistical, template, and structural pattern recognition methods. The results showed that the percentage of success in recognition accuracy is 99%, compared with template matching methods (Featured-Based Approach, Area-Based Approach), statistical methods (Principal Component Analysis, Linear Discriminant Analysis, Support Vector Machines, and Neural Network), and a structural method (the original HTM). The findings indicate that the improved HTM can give optimal pattern recognition accuracy, especially on multi-label datasets.
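    To make the clustering component above concrete, here is a plain fuzzy c-means loop using a Chebyshev (L-infinity) distance. This is a generic FCM sketch for illustration only; it is not the thesis' TFCM/cFCM code and it ignores the temporal component entirely.

    # Hedged sketch: fuzzy c-means with Chebyshev distance.
    import numpy as np

    def fcm_chebyshev(X, n_clusters=3, m=2.0, n_iter=100, seed=0):
        rng = np.random.default_rng(seed)
        U = rng.random((len(X), n_clusters))
        U /= U.sum(axis=1, keepdims=True)             # fuzzy memberships sum to 1
        for _ in range(n_iter):
            W = U ** m
            centers = (W.T @ X) / W.sum(axis=0)[:, None]
            # Chebyshev distance from every point to every cluster center.
            d = np.max(np.abs(X[:, None, :] - centers[None, :, :]), axis=2) + 1e-9
            # Standard FCM membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
            U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
        return U, centers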