
    Real-time computational attention model for dynamic scenes analysis: from implementation to evaluation.

    Providing real-time analysis of the huge amount of data generated by computer vision algorithms in interactive applications is still an open problem, and solving it promises advances across a wide variety of fields. When using dynamic scene analysis algorithms for computer vision, a trade-off must be found between the expected quality of the results and the amount of computer resources allocated to each task. This is usually a design-time decision, implemented through the choice of predefined algorithms and parameters; however, this approach limits the generality of the system. An adaptive vision system provides a more flexible solution, as its analysis strategy can be changed according to newly available information. As a consequence, such a system requires some kind of guiding mechanism to explore the scene faster and more efficiently. We propose a visual attention system that adapts its processing according to the interest (or salience) of each element of the dynamic scene. Positioned between hierarchical salience-based models and competitive distributed ones, our model is hierarchical yet competitive and not salience-based. This original approach allows the generation of attentional focus points without needing either a saliency map or an explicit inhibition-of-return mechanism. The new real-time computational model is based on a preys/predators system. The use of this kind of dynamical system is justified by an adjustable trade-off between nondeterministic attentional behaviour and properties of stability, reproducibility, and reactiveness.
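The dynamical substrate the abstract refers to can be illustrated with the classic two-variable Lotka-Volterra preys/predators equations; this is a minimal sketch of that kind of system (parameter values are illustrative, not taken from the paper, which couples many such maps over the image):

```python
import numpy as np

def lotka_volterra(prey, pred, alpha=1.0, beta=0.5, delta=0.3, gamma=0.8,
                   dt=0.01, steps=3000):
    """Euler-integrate dprey/dt = alpha*prey - beta*prey*pred,
    dpred/dt = delta*prey*pred - gamma*pred."""
    traj = []
    for _ in range(steps):
        dprey = alpha * prey - beta * prey * pred
        dpred = delta * prey * pred - gamma * pred
        prey += dt * dprey
        pred += dt * dpred
        traj.append((prey, pred))
    return np.array(traj)

traj = lotka_volterra(prey=2.0, pred=1.0)
# The two populations oscillate while staying positive and bounded,
# the kind of stable-yet-reactive behaviour the abstract appeals to.
print(traj[:, 0].min(), traj[:, 0].max())
```

The adjustable trade-off mentioned in the abstract would correspond to tuning such coupling parameters.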

    Structure Learning in Coupled Dynamical Systems and Dynamic Causal Modelling

    Identifying a coupled dynamical system out of many plausible candidates, each of which could serve as the underlying generator of some observed measurements, is a profoundly ill-posed problem that commonly arises when modelling real-world phenomena. In this review, we detail a set of statistical procedures for inferring the structure of nonlinear coupled dynamical systems (structure learning), which has proved useful in neuroscience research. A key focus here is the comparison of competing models of (i.e., hypotheses about) network architectures and implicit coupling functions in terms of their Bayesian model evidence. These methods are collectively referred to as dynamic causal modelling (DCM). We focus on a relatively new approach that is proving remarkably useful; namely, Bayesian model reduction (BMR), which enables rapid evaluation and comparison of models that differ in their network architecture. We illustrate the usefulness of these techniques through modelling neurovascular coupling (cellular pathways linking neuronal and vascular systems), whose function is an active focus of research in neurobiology and the imaging of coupled neuronal systems.
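The evidence-based comparison described above can be sketched in miniature. This is not DCM or BMR themselves (which score variational free energy for full dynamical models); it only illustrates scoring a "full" versus a "reduced" network structure by an approximate log model evidence (here the BIC proxy), on hypothetical toy data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: y depends on coupling x1 only; x2 is a redundant connection.
n = 200
X = rng.normal(size=(n, 2))
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=n)

def approx_log_evidence(X, y):
    """Approximate log model evidence with -BIC/2 for a least-squares fit."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    sigma2 = rss / n
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return loglik - 0.5 * k * np.log(n)

full = approx_log_evidence(X, y)           # both couplings present
reduced = approx_log_evidence(X[:, :1], y) # redundant coupling pruned
print(reduced - full)  # > 0 favours the reduced architecture
```

BMR's practical appeal, as the abstract notes, is that evidence for many such reduced architectures can be obtained from a single inversion of the full model, rather than refitting each candidate as done here.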

    Evolutionary Robotics: a new scientific tool for studying cognition

    We survey developments in Artificial Neural Networks, Behaviour-based Robotics and Evolutionary Algorithms that set the stage for Evolutionary Robotics in the 1990s. We examine the motivations for using ER as a scientific tool for studying minimal models of cognition, with the advantage of being capable of generating integrated sensorimotor systems with minimal (or controllable) prejudices. These systems must act as a whole in close coupling with their environments, which is an essential aspect of real cognition that is often either bypassed or modelled poorly in other disciplines. We demonstrate with three example studies: homeostasis under visual inversion; the origins of learning; and the ontogenetic acquisition of entrainment.

    Brain Dynamics across levels of Organization

    After presenting evidence that the electrical activity recorded from the brain surface can reflect metastable state transitions of neuronal configurations at the mesoscopic level, I will suggest that their patterns may correspond to the distinctive spatio-temporal activity in the Dynamic Core (DC) and the Global Neuronal Workspace (GNW), respectively, in the models of the Edelman group on the one hand, and of Dehaene-Changeux on the other. In both cases, the recursively reentrant activity flow in intra-cortical and cortical-subcortical neuron loops plays an essential and distinct role. Reasons will be given for viewing the temporal characteristics of this activity flow as a signature of Self-Organized Criticality (SOC), notably in reference to the dynamics of neuronal avalanches. This point of view enables the use of statistical physics approaches for exploring phase transitions, scaling and universality properties of DC and GNW, with relevance to the macroscopic electrical activity in EEG and EMG.
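The neuronal-avalanche statistics invoked above are often illustrated with a critical branching process (mean offspring number 1), a standard minimal model in this literature; this toy sketch is illustrative and is not one of the models the abstract reviews:

```python
import random

random.seed(1)

def avalanche_size(p_child=0.5, max_children=2, cap=10_000):
    """One avalanche: each active unit tries to activate up to
    `max_children` others, each with probability p_child, so the mean
    offspring number is p_child * max_children = 1 (critical)."""
    active, size = 1, 0
    while active and size < cap:
        size += active
        active = sum(1 for _ in range(active * max_children)
                     if random.random() < p_child)
    return size

sizes = [avalanche_size() for _ in range(2000)]
# Heavy-tailed distribution: most avalanches are tiny, a few very large
print(min(sizes), max(sizes))
```

At criticality the avalanche-size distribution follows a power law, which is the scaling signature SOC accounts of brain dynamics appeal to.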

    Learning to Skim Text

    Recurrent Neural Networks are showing much promise in many sub-areas of natural language processing, ranging from document classification to machine translation to automatic question answering. Despite their promise, many recurrent models have to read the whole text word by word, making it slow to handle long documents. For example, it is difficult to use a recurrent network to read a book and answer questions about it. In this paper, we present an approach of reading text while skipping irrelevant information if needed. The underlying model is a recurrent network that learns how far to jump after reading a few words of the input text. We employ a standard policy gradient method to train the model to make discrete jumping decisions. In our benchmarks on four different tasks, including number prediction, sentiment analysis, news article classification and automatic Q&A, our proposed model, a modified LSTM with jumping, is up to 6 times faster than the standard sequential LSTM, while maintaining the same or even better accuracy.
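The read-then-jump control loop the abstract describes can be sketched as follows; the jump "policy" here is a fixed-size stub standing in for the trained LSTM plus policy-gradient component, and the names and sizes are illustrative:

```python
def skim_read(tokens, read_size=2, jump_policy=lambda context: 3):
    """Read `read_size` tokens, then skip the number of tokens the
    policy chooses, until the text is exhausted. Returns the tokens
    actually read (a trained policy would condition the jump on the
    LSTM state after reading `context`)."""
    seen = []
    i = 0
    while i < len(tokens):
        chunk = tokens[i:i + read_size]
        seen.extend(chunk)
        i += read_size
        i += jump_policy(chunk)
    return seen

text = "the model reads a few words then jumps over the rest".split()
read = skim_read(text)
print(len(read), "of", len(text), "tokens read")  # prints "5 of 11 tokens read"
```

Because jump decisions are discrete and non-differentiable, the paper trains the policy with REINFORCE-style policy gradients rather than backpropagation through the jumps.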