
    Hardware-Amenable Structural Learning for Spike-based Pattern Classification using a Simple Model of Active Dendrites

    This paper presents a spike-based model which employs neurons with functionally distinct dendritic compartments for classifying high-dimensional binary patterns. The synaptic inputs arriving on each dendritic subunit are nonlinearly processed before being linearly integrated at the soma, giving the neuron the capacity to perform a large number of input-output mappings. The model utilizes sparse synaptic connectivity, in which each synapse takes a binary value. The optimal connection pattern of a neuron is learned by using a simple, hardware-friendly, margin-enhancing learning algorithm inspired by the mechanism of structural plasticity in biological neurons. The learning algorithm groups correlated synaptic inputs on the same dendritic branch. Since the learning results in modified connection patterns, it can be incorporated into current event-based neuromorphic systems with little overhead. This work also presents a branch-specific spike-based version of this structural plasticity rule. The proposed model is evaluated on benchmark binary classification problems and its performance is compared against that achieved using Support Vector Machine (SVM) and Extreme Learning Machine (ELM) techniques. Our proposed method attains comparable performance while utilizing 10 to 50% fewer computational resources than the other reported techniques. Comment: Accepted for publication in Neural Computation
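
    A minimal sketch, not the authors' implementation, of the two-layer dendritic neuron the abstract describes: binary synaptic inputs are grouped onto dendritic branches, each branch applies a nonlinearity to the sum of its inputs (a quadratic subunit function is assumed here purely for illustration), and the soma sums the branch outputs linearly. All names and parameters are illustrative.

```python
import numpy as np

def dendritic_neuron_output(x, branch_connections, nonlinearity=lambda v: v ** 2):
    """Two-layer dendritic neuron: each branch nonlinearly transforms the sum
    of its binary synaptic inputs; the soma adds the branch outputs linearly.

    x                  : 1-D binary input pattern (0/1 values)
    branch_connections : list of index arrays, one per dendritic branch,
                         giving which input lines connect to that branch
    nonlinearity       : per-branch subunit function (quadratic is assumed
                         here; the paper's exact choice may differ)
    """
    branch_outputs = [nonlinearity(x[idx].sum()) for idx in branch_connections]
    return sum(branch_outputs)

# Example: 16 binary inputs, 4 branches with 3 sparse binary synapses each.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=16)
branches = [rng.choice(16, size=3, replace=False) for _ in range(4)]
print(dendritic_neuron_output(x, branches))
```

    The structural plasticity rule itself (swapping synapses between branches to improve the classification margin) is not reproduced here; the sketch only shows the forward computation that such connection changes would modify.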

    Computational physics of the mind

    In the 19th century and earlier, physicists such as Newton, Mayer, Hooke, Helmholtz and Mach were actively engaged in research on psychophysics, trying to relate psychological sensations to the intensities of physical stimuli. Computational physics makes it possible to simulate complex neural processes, offering a chance to answer not only the original psychophysical questions but also to create models of mind. In this paper several approaches relevant to the modeling of mind are outlined. Since direct modeling of brain functions is rather limited due to the complexity of such models, a number of approximations are introduced. The path from the brain, or computational neuroscience, to the mind, or cognitive science, is sketched, with emphasis on higher cognitive functions such as memory and consciousness. No fundamental problems in understanding the mind seem to arise. From a computational point of view, realistic models require massively parallel architectures.

    Astrophysical Data Analytics based on Neural Gas Models, using the Classification of Globular Clusters as Playground

    In Astrophysics, the identification of candidate Globular Clusters through deep, wide-field, single-band HST images is a typical data analytics problem in which methods based on Machine Learning have shown high efficiency and reliability, demonstrating the capability to improve on the traditional approaches. Here we experimented with some variants of the known Neural Gas model, exploring both supervised and unsupervised paradigms of Machine Learning, on the classification of Globular Clusters extracted from the NGC1399 HST data. The main focus of this work was to use a well-tested playground to scientifically validate such models for further extended experiments in astrophysics, and to use other standard Machine Learning methods (for instance Random Forest and Multi-Layer Perceptron neural networks) for a comparison of performance in terms of purity and completeness. Comment: Proceedings of the XIX International Conference "Data Analytics and Management in Data Intensive Domains" (DAMDID/RCDL 2017), Moscow, Russia, October 10-13, 2017, 8 pages, 4 figures
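
    As a point of reference for the method named above, the sketch below shows the plain unsupervised Neural Gas update rule (prototypes ranked by distance to each sample and pulled toward it with a rank-dependent step). It is a generic illustration under assumed, fixed learning parameters, not the specific supervised or annealed variants evaluated in the paper.

```python
import numpy as np

def neural_gas(data, n_prototypes=8, n_epochs=20, eps=0.3, lam=2.0, seed=0):
    """Plain (unsupervised) Neural Gas: for every sample, all prototypes are
    ranked by distance and moved toward the sample with a step size that
    decays exponentially with rank (0 = closest)."""
    rng = np.random.default_rng(seed)
    W = data[rng.choice(len(data), n_prototypes, replace=False)].astype(float)
    for _ in range(n_epochs):
        for x in data[rng.permutation(len(data))]:
            dists = np.linalg.norm(W - x, axis=1)
            ranks = np.argsort(np.argsort(dists))
            W += eps * np.exp(-ranks / lam)[:, None] * (x - W)
    return W

# Toy usage: two Gaussian blobs standing in for photometric feature vectors.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
print(neural_gas(data).round(2))
```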

    Temporal album

    Transient synchronization has been used as a mechanism for recognizing auditory patterns using integrate-and-fire neural networks. We first extend the mechanism to vision tasks and investigate the role of spike-dependent learning. We show that such a temporal Hebbian learning rule significantly improves the accuracy of detection. We demonstrate how multiple patterns can be identified by a single pattern-selective neuron and how a temporal album can be constructed. This principle may lead to multidimensional memories, where the capacity per neuron is considerably increased with accurate detection of spike synchronization.
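
    For context, a minimal sketch of a pair-based temporal Hebbian (spike-timing-dependent) weight update of the kind the abstract refers to: pre-before-post spike pairings potentiate, post-before-pre pairings depress, each within an exponential time window. The constants are illustrative assumptions, not the paper's values.

```python
import numpy as np

def stdp_delta_w(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based temporal Hebbian (STDP-like) weight change.

    t_pre, t_post : spike times in ms.
    Pre-before-post (dt > 0) potentiates; post-before-pre depresses,
    both with an exponential window of width tau.
    """
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * np.exp(-dt / tau)
    return -a_minus * np.exp(dt / tau)

# Example: a 5 ms pre->post pairing strengthens, the reverse weakens.
print(stdp_delta_w(t_pre=10.0, t_post=15.0))   # positive change
print(stdp_delta_w(t_pre=15.0, t_post=10.0))   # negative change
```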

    Some thoughts on neural network modelling of micro-abrasion-corrosion processes

    There is increasing interest in the interactions of micro-abrasion, involving small particles of less than 10 μm in size, with corrosion. This is because such interactions occur in many environments, ranging from the offshore to health care sectors. In particular, micro-abrasion-corrosion can occur in oral processing, where the abrasive components of food, interacting with the acidic environment, can lead to degradation of the surface dentine of teeth. Artificial neural networks (ANNs) are computing mechanisms based on the biological brain. They are very effective in areas such as modelling, classification and pattern recognition, and they have been successfully applied in almost all areas of engineering and in many practical industrial applications. Hence, in this paper an attempt has been made to model the data obtained in micro-abrasion-corrosion experiments on a polymer/steel couple and a ceramic/lasercarb coating couple using ANNs. A multilayer perceptron (MLP) neural network is applied, and the results obtained from modelling the tribocorrosion processes will be compared with those obtained from a relatively new class of neural networks, namely the resource allocation network.
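
    A brief sketch of the general workflow the abstract describes, fitting an MLP regressor to tabular wear/corrosion measurements. The feature and target variables below are placeholders, not the paper's experimental quantities, and scikit-learn is assumed only as a convenient stand-in.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Placeholder data standing in for experimental inputs (e.g. load, particle
# concentration, pH), with a synthetic wear-rate-like target.
rng = np.random.default_rng(0)
X = rng.uniform(size=(60, 3))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 - X[:, 2] + rng.normal(0, 0.05, 60)

# Standardize inputs, then fit a single-hidden-layer MLP regressor.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0),
)
model.fit(X, y)
print(model.predict(X[:3]))
```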

    Iterative Application of the aiNET Algorithm in the Construction of a Radial Basis Function Neural Network

    This paper presents some of the procedures adopted in the construction of a Radial Basis Function Neural Network by iteratively applying aiNET, an Artificial Immune Systems algorithm. These procedures have been shown to be effective in terms of i) the free determination of centroids inspired by an immune heuristic; and ii) the achievement of suitably small squared errors after a number of iterations. Experimental and empirical results are compared with the aim of confirming (or refuting) some hypotheses.
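
    A minimal sketch of the RBF-network side of this construction, assuming the centroids are supplied externally (in the paper they come from the aiNET iterations, which are not reproduced here; randomly chosen training points are used below as a stand-in): Gaussian hidden-layer activations on the centroids, followed by a least-squares fit of the linear output weights.

```python
import numpy as np

def rbf_design_matrix(X, centroids, sigma=1.0):
    """Gaussian hidden-layer activations for an RBF network."""
    d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_rbf_output_layer(X, y, centroids, sigma=1.0):
    """Solve the linear output weights by least squares."""
    Phi = rbf_design_matrix(X, centroids, sigma)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

# Toy usage: centroids would come from aiNET; here a few random training
# points play that role purely for illustration.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (50, 1))
y = np.sin(3 * X[:, 0])
centroids = X[rng.choice(50, 6, replace=False)]
w = fit_rbf_output_layer(X, y, centroids)
print(rbf_design_matrix(X[:3], centroids) @ w)
```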

    NMDA-based pattern discrimination in a modeled cortical neuron

    Compartmental simulations of an anatomically characterized cortical pyramidal cell were carried out to study the integrative behavior of a complex dendritic tree. Previous theoretical (Feldman and Ballard 1982; Durbin and Rumelhart 1989; Mel 1990; Mel and Koch 1990; Poggio and Girosi 1990) and compartmental modeling (Koch et al. 1983; Shepherd et al. 1985; Koch and Poggio 1987; Rall and Segev 1987; Shepherd and Brayton 1987; Shepherd et al. 1989; Brown et al. 1991) work had suggested that multiplicative interactions among groups of neighboring synapses could greatly enhance the processing power of a neuron relative to a unit with only a single global firing threshold. This issue was investigated here, with a particular focus on the role of voltage-dependent N-methyl-D-aspartate (NMDA) channels in the generation of cell responses. First, it was found that when a large proportion of the excitatory synaptic input to dendritic spines is carried by NMDA channels, the pyramidal cell responds preferentially to spatially clustered, rather than random, distributions of activated synapses. Second, based on this mechanism, the NMDA-rich neuron is shown to be capable of solving a nonlinear pattern discrimination task. We propose that manipulation of the spatial ordering of afferent synaptic connections onto the dendritic arbor is a possible biological strategy for pattern information storage during learning.
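
    The voltage dependence that underlies the clustering effect can be illustrated with the widely used Jahr-Stevens form of the NMDA magnesium block; this is a standard phenomenological expression, not necessarily the exact parameterization used in the simulations above, and the conductance scale is arbitrary.

```python
import numpy as np

def nmda_conductance(v_mV, g_max=1.0, mg_mM=1.0):
    """Voltage-dependent NMDA conductance with magnesium block
    (Jahr-Stevens form). The block relaxes as the membrane depolarizes,
    which is why clustered, mutually depolarizing synapses are more
    effective than the same synapses scattered across the tree."""
    block = 1.0 / (1.0 + (mg_mM / 3.57) * np.exp(-0.062 * v_mV))
    return g_max * block

# Near rest the conductance is mostly blocked; depolarization unblocks it.
for v in (-70.0, -40.0, 0.0):
    print(v, round(nmda_conductance(v), 3))
```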

    Can we identify non-stationary dynamics of trial-to-trial variability?

    Identifying the sources of apparent variability in non-stationary scenarios is a fundamental problem in many biological data analysis settings. For instance, neurophysiological responses to the same task often vary from one repetition of the same experiment (trial) to the next. The origin and functional role of this observed variability is one of the fundamental questions in neuroscience, yet the nature of such trial-to-trial dynamics remains largely elusive to current data analysis approaches. A range of strategies has been proposed in modalities such as electroencephalography, but gaining a fundamental insight into the latent sources of trial-to-trial variability in neural recordings is still a major challenge. In this paper, we present a proof-of-concept study of the analysis of trial-to-trial variability dynamics founded on non-autonomous dynamical systems. At this initial stage, we evaluate the capacity of a simple statistic based on the behaviour of trajectories in classification settings, the trajectory coherence, to identify trial-to-trial dynamics. First, we derive the conditions leading to observable changes in datasets generated by a compact dynamical system (the Duffing equation). This canonical system plays the role of a ubiquitous model of non-stationary supervised classification problems. Second, we estimate the coherence of class trajectories in the empirically reconstructed space of system states. We show how this analysis can discern variations attributable to non-autonomous deterministic processes from stochastic fluctuations. The analyses are benchmarked using simulated data and two different real datasets which have been shown to exhibit attractor dynamics. As an illustrative example, we focus on the analysis of rat frontal cortex ensemble dynamics during a decision-making task. The results suggest that, in line with recent hypotheses, it is the deterministic trend rather than internal noise that most likely underlies the observed trial-to-trial variability. Thus, the empirical tool developed within this study potentially allows us to infer the source of variability in in vivo neural recordings.
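
    A small sketch of the forced Duffing oscillator, the canonical non-autonomous system the abstract uses to generate non-stationary test data. The coefficients below are illustrative, and the trajectory-coherence statistic itself is not reproduced; the sketch only shows how one simulated "trial" trajectory could be produced.

```python
import numpy as np
from scipy.integrate import solve_ivp

def duffing(t, state, delta=0.2, alpha=-1.0, beta=1.0, gamma=0.3, omega=1.2):
    """Forced Duffing oscillator: x'' + delta*x' + alpha*x + beta*x**3
    = gamma*cos(omega*t). The explicit time dependence of the forcing term
    makes it a simple non-autonomous test bed."""
    x, v = state
    return [v, -delta * v - alpha * x - beta * x ** 3 + gamma * np.cos(omega * t)]

# Integrate one trajectory from an illustrative initial condition.
sol = solve_ivp(duffing, (0.0, 100.0), [0.1, 0.0], max_step=0.05)
print(sol.y[:, -1])   # final (x, x') state of one simulated "trial"
```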