
    Input-driven unsupervised learning in recurrent neural networks

    Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is an attractor neural network with Hebbian learning (e.g. the Hopfield model). The model's simplicity and the locality of the synaptic update rules come at the cost of a limited storage capacity, compared with the capacity achieved with supervised learning algorithms, whose biological plausibility is questionable. Here, we present an on-line learning rule for a recurrent neural network that achieves near-optimal performance without an explicit supervisory error signal and using only locally accessible information, and which is therefore biologically plausible. The fully connected network consists of excitatory units with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the patterns to be memorized are presented on-line as strong afferent currents, producing a bimodal distribution for the neurons' synaptic inputs ('local fields'). Synapses corresponding to active inputs are modified as a function of the position of the local field with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. An additional parameter of the model allows one to trade storage capacity for robustness, i.e. an increased size of the basins of attraction. We simulated a network of 1001 excitatory neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction: our results show that, for any given basin size, our network more than doubles the storage capacity compared with a standard Hopfield network. Our learning rule is consistent with available experimental data documenting how plasticity depends on firing rate. It predicts that, at high enough firing rates, no potentiation should occur.
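    The three-threshold update described in the abstract can be sketched in a few lines of NumPy. This is an illustrative reconstruction only, not the authors' implementation: the function name, the learning-rate parameter `lr`, and the threshold variable names are assumptions.

    ```python
    import numpy as np

    def plasticity_update(W, x, theta_low, theta_mid, theta_high, lr=0.01):
        """One on-line update of a three-threshold plasticity rule (sketch).

        W: (N, N) recurrent weight matrix; x: (N,) binary activity pattern.
        """
        h = W @ x  # local fields: summed synaptic input to each neuron
        # plasticity occurs only when the field lies between the outer thresholds
        in_window = (h >= theta_low) & (h <= theta_high)
        # potentiation above the intermediate threshold, depression below it
        sign = np.where(h > theta_mid, 1.0, -1.0)
        # only synapses from active presynaptic inputs are modified
        dW = lr * np.outer(in_window * sign, x)
        np.fill_diagonal(dW, 0.0)  # no self-connections
        return W + dW
    ```

    Presenting each pattern as a strong afferent current and applying this update once per presentation would implement the on-line protocol the abstract describes.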

    Multifeatural shape processing in rats engaged in invariant visual object recognition

    The ability to recognize objects despite substantial variation in their appearance (e.g., because of position or size changes) represents such a formidable computational feat that it is widely assumed to be unique to primates. Such an assumption has restricted the investigation of its neuronal underpinnings to primate studies, which allow only a limited range of experimental approaches. In recent years, the increasingly powerful array of optical and molecular tools that has become available in rodents has spurred a renewed interest in rodent models of visual functions. However, evidence of primate-like visual object processing in rodents is still very limited and controversial. Here we show that rats are capable of an advanced recognition strategy, which relies on extracting the most informative object features across the variety of viewing conditions the animals may face. The rats' visual strategy was uncovered by applying an image masking method that revealed the features used by the animals to discriminate two objects across a range of sizes, positions, and in-depth and in-plane rotations. Notably, rat recognition relied on a combination of multiple features that were mostly preserved across the transformations the objects underwent, and that largely overlapped with the features that a simulated ideal observer deemed optimal for the discrimination task. These results indicate that rats are able to process and efficiently use shape information, in a way that is largely tolerant to variation in object appearance. This suggests that their visual system may serve as a powerful model to study the neuronal substrates of object recognition.

    Object similarity affects the perceptual strategy underlying invariant visual object recognition in rats

    In recent years, a number of studies have explored the possible use of rats as models of high-level visual functions. One central question at the root of such an investigation is to understand whether rat object vision relies on the processing of visual shape features or, rather, on lower-order image properties (e.g., overall brightness). In a recent study, we have shown that rats are capable of extracting multiple features of an object that are diagnostic of its identity, at least when those features are, structure-wise, distinct enough to be parsed by the rat visual system. In the present study, we have assessed the impact of object structure on rat perceptual strategy. We trained rats to discriminate between two structurally similar objects, and compared their recognition strategies with those reported in our previous study. We found that, under conditions of lower stimulus discriminability, rat visual discrimination strategy becomes more view-dependent and subject-dependent. Rats were still able to recognize the target objects, in a way that was largely tolerant (i.e., invariant) to object transformation; however, the larger structural and pixel-wise similarity affected the way objects were processed. Compared to the findings of our previous study, the patterns of diagnostic features were: (i) smaller and more scattered; (ii) only partially preserved across object views; and (iii) only partially reproducible across rats. On the other hand, rats were still found to adopt a multi-featural processing strategy and to make use of part of the optimal discriminatory information afforded by the two objects. Our findings suggest that, as in humans, rat invariant recognition can flexibly rely on either view-invariant representations of distinctive object features or view-specific object representations, acquired through learning.

    Shape similarity, better than semantic membership, accounts for the structure of visual object representations in a population of monkey inferotemporal neurons

    The anterior inferotemporal cortex (IT) is the highest stage along the hierarchy of visual areas that, in primates, processes visual objects. Although several lines of evidence suggest that IT primarily represents visual shape information, some recent studies have argued that neuronal ensembles in IT code the semantic membership of visual objects (i.e., represent conceptual classes such as animate and inanimate objects). In this study, we investigated to what extent semantic, rather than purely visual, information is represented in IT by performing a multivariate analysis of IT responses to a set of visual objects. By relying on a variety of machine-learning approaches (including a cutting-edge clustering algorithm that has been recently developed in the domain of statistical physics), we found that, in most instances, IT representation of visual objects is accounted for by their similarity at the level of shape or, more surprisingly, low-level visual properties. Only in a few cases did we observe IT representations of semantic classes that were not explainable by the visual similarity of their members. Overall, these findings reassert the primary function of IT as a conveyor of explicit visual shape information, and reveal that low-level visual properties are represented in IT to a greater extent than previously appreciated. In addition, our work demonstrates how combining a variety of state-of-the-art multivariate approaches, and carefully estimating the contribution of shape similarity to the representation of object categories, can substantially advance our understanding of neuronal coding of visual objects in cortex.

    GLS Optimization Algorithm for Solving the Travelling Salesman Problem (2009 Second International Conference on Computer and Electrical Engineering)

    The travelling salesman problem (TSP) is a well-known combinatorial optimization problem, and many approaches exist for solving it. In this paper we use a combination of local search heuristics and a genetic algorithm (GLS), which has been shown to be an efficient method for finding near-optimal solutions to the TSP. We also evaluate the run-time behavior and solution fitness of our approach and compare it with other methods. Reasonable results are obtained, and the proposed algorithm is able to reach a better solution in less time.
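    A hybrid of this kind interleaves a genetic algorithm with local search on every individual. The sketch below assumes 2-opt as the local search and order crossover (OX) as the genetic operator; the paper's exact operators and parameters are not given here, so these choices, and all function names, are illustrative.

    ```python
    import random

    def tour_length(tour, dist):
        """Total length of a closed tour under a distance matrix."""
        return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

    def two_opt(tour, dist):
        """2-opt local search: reverse segments while the tour gets shorter."""
        improved = True
        while improved:
            improved = False
            for i in range(1, len(tour) - 1):
                for j in range(i + 2, len(tour) + 1):
                    cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                    if tour_length(cand, dist) < tour_length(tour, dist):
                        tour, improved = cand, True
        return tour

    def order_crossover(p1, p2):
        """OX: copy a random slice from p1, fill remaining cities in p2's order."""
        n = len(p1)
        a, b = sorted(random.sample(range(n), 2))
        child = [None] * n
        child[a:b] = p1[a:b]
        rest = [c for c in p2 if c not in child[a:b]]
        for k in range(n):
            if child[k] is None:
                child[k] = rest.pop(0)
        return child

    def gls_tsp(dist, pop_size=20, generations=50, seed=0):
        """Genetic algorithm + local search hybrid (illustrative, not optimized)."""
        random.seed(seed)
        n = len(dist)
        # every individual is locally optimized before entering the population
        pop = [two_opt(random.sample(range(n), n), dist) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=lambda t: tour_length(t, dist))
            parents = pop[: pop_size // 2]  # elitist selection of the fitter half
            children = [two_opt(order_crossover(random.choice(parents),
                                                random.choice(parents)), dist)
                        for _ in range(pop_size - len(parents))]
            pop = parents + children
        return min(pop, key=lambda t: tour_length(t, dist))
    ```

    On a tiny instance (e.g., four cities at the corners of a unit square) the hybrid recovers the optimal perimeter tour immediately, since 2-opt alone suffices there; the genetic layer matters on larger instances with many local optima.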

    Wavelet Transform and Fusion of Linear and Nonlinear Methods for Face Recognition

    This work presents a method to increase face recognition accuracy using a combination of wavelet transform, PCA, KPCA, and RBF neural networks. Preprocessing, feature extraction, and classification are three crucial issues for face recognition, and this paper presents a hybrid approach that addresses all three. For the preprocessing and feature extraction steps, we apply a combination of wavelet transform, PCA, and KPCA: a feature vector is first derived from a set of downsampled wavelet representations of the face images; then PCA-based linear features and KPCA-based nonlinear features are extracted from the wavelet feature vector to reduce its dimensionality. During the classification stage, an RBF neural network is used to achieve a robust decision in the presence of wide facial variations. The computational load of the proposed method is greatly reduced compared with the original PCA-, KPCA-, ICA-, and LDA-based methods on the ORL, Yale, and AR face databases. Moreover, the accuracy of the proposed method is improved.
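    The pipeline the abstract describes (wavelet low-pass features, then dimensionality reduction, then an RBF-style decision) can be illustrated with a compact NumPy sketch. This is a simplified stand-in, not the paper's method: it uses one level of Haar averaging for the wavelet step, plain PCA only (the KPCA branch is omitted), and a Gaussian-weighted vote in place of a trained RBF network; all function names are hypothetical.

    ```python
    import numpy as np

    def haar_approx(img):
        # one level of Haar wavelet decomposition, keeping only the
        # low-pass (approximation) band, i.e. 2x2 block averages
        return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                       + img[0::2, 1::2] + img[1::2, 1::2])

    def pca_fit(X, k):
        # linear PCA on row vectors: center, keep top-k principal directions
        mu = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
        return mu, Vt[:k].T  # (mean, projection matrix)

    def rbf_classify(train_feats, train_labels, query, sigma=1.0):
        # toy RBF decision: Gaussian-weighted vote over training features
        w = np.exp(-np.sum((train_feats - query) ** 2, axis=1) / (2 * sigma ** 2))
        labels = np.unique(train_labels)
        scores = [w[train_labels == c].sum() for c in labels]
        return labels[int(np.argmax(scores))]
    ```

    A query image is then processed exactly like the training set: `haar_approx`, flatten, subtract the stored mean, project with the PCA matrix, and classify with `rbf_classify`.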

    From luminance to semantics: how natural objects are represented in monkey inferotemporal cortex

    In primates, visual object information is processed through a hierarchy of cortico-cortical stages that culminates with the inferotemporal cortex (IT). Although the nature of visual processing in IT is still poorly understood, several lines of evidence suggest that IT conveys an explicit object representation that can directly serve as a basis for decision, action and memory - e.g., it can support flexible formation of semantic categories in downstream areas, such as prefrontal and perirhinal cortex. However, some recent studies (Kiani et al., 2007; Kriegeskorte et al., 2008) have argued that IT neuronal ensembles may themselves code the semantic membership of visual objects (i.e., represent abstract conceptual classes such as animate and inanimate objects, animals, etc.). In this study, we have applied an array of multivariate computational approaches to investigate the nature of visual object representation in IT. Our results show that IT neuronal ensembles represent a surprisingly broad spectrum of visual feature complexity, ranging from low-level visual properties (e.g., brightness), to visual patterns of intermediate complexity (e.g., star-like shapes), to complex objects (e.g., four-legged animals) that appear to be coded so invariantly that their clustering in the IT neuronal space is not easily accounted for by any similarity metric we used. On the one hand, these findings show that IT supports recognition of low-level properties of the visual input that are typically thought to be extracted by lower-level visual areas. On the other hand, IT appears to convey such an explicit representation of some object classes that coding of semantic membership in IT (at least for a few categories) cannot be excluded. Overall, these results shed new light on IT's remarkable pluripotency in supporting recognition tasks as diverse as detection of brightness and categorization of complex shapes.