    Asynchronous Spiking Neurons, the Natural Key to Exploit Temporal Sparsity

    Inference of deep neural networks for stream-signal (video/audio) processing on edge devices is still challenging. Unlike most state-of-the-art inference engines, which are efficient for static signals, our brain is optimized for real-time dynamic signal processing. We believe one important feature of the brain, asynchronous stateful processing, is the key to its excellence in this domain. In this work, we show how asynchronous processing with stateful neurons allows exploitation of the sparsity present in natural signals. This paper explains three different types of sparsity and proposes an inference algorithm that exploits all of them in the execution of already-trained networks. Our experiments in three different applications (handwritten digit recognition, autonomous steering, and hand-gesture recognition) show that this model of inference reduces the number of required operations for sparse input data by one to two orders of magnitude. Additionally, due to its fully asynchronous processing, this type of inference can run on fully distributed and scalable neuromorphic hardware platforms.
    Funding: European Union's Horizon 2020 No 687299 NeuRAM; European Union's Horizon 2020 No 824164 HERMES; Ministerio de Economía y Competitividad TEC2015-63884-C2-1-
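
    A minimal sketch of the idea in Python (the class name, threshold parameter, and update rule are illustrative assumptions, not the authors' implementation): a stateful layer caches its previous input and spends compute only on the entries that changed, so the work scales with temporal activity rather than input size.

        import numpy as np

        class DeltaLayer:
            """Stateful layer that only processes significant input changes."""

            def __init__(self, weights, threshold=0.01):
                self.weights = weights                    # (out, in) weight matrix
                self.threshold = threshold                # minimum change worth propagating
                self.last_input = np.zeros(weights.shape[1])
                self.state = np.zeros(weights.shape[0])   # persistent membrane state

            def step(self, x):
                delta = x - self.last_input
                active = np.abs(delta) > self.threshold   # sparse mask of changed inputs
                if not active.any():
                    return self.state                     # unchanged input: zero work
                # Update the state using only the columns whose inputs changed.
                self.state += self.weights[:, active] @ delta[active]
                self.last_input[active] = x[active]
                return self.state

    Fed consecutive video frames, such a layer performs a number of multiply-accumulates proportional to the number of changed pixels, illustrating how sparse input data can cut the operation count.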

    Motion-based position coding in the visual system: a computational study

    Coding the position of moving objects is an essential ability of the visual system in fulfilling precise and robust tracking tasks. This thesis focuses on the question: how does the visual system efficiently encode the position of moving objects despite various sources of uncertainty? This study deploys the hypothesis that the visual system uses prior knowledge of the temporal coherence of motion (Burgi et al., 2000; Yuille and Grzywacz, 1989). We implemented this prior by extending the modeling framework previously proposed to explain the aperture problem (Perrinet and Masson, 2012), so-called motion-based prediction (MBP), a Bayesian motion estimation framework implemented by particle filtering. On this basis, we introduce a theory of motion-based position coding and investigate how neural mechanisms encoding the instantaneous position of moving objects might be affected by the motion signal along a trajectory. The results of this thesis suggest that motion-based position coding may be a generic neural computation across all stages of the visual system. This mechanism may partially compensate for the cumulative effects of neural delays in position coding, and it may account for motion-based position shifts such as the flash-lag effect. As a specific case, the diagonal MBP model reproduced the anticipatory response of neural populations in the primary visual cortex of the macaque monkey. Our results imply that efficient and robust position coding may be highly dependent on trajectory integration and that it constitutes a key neural signature for studying the more general problem of predictive coding in sensory areas.
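
    To illustrate the modeling ingredients, here is a toy bootstrap particle filter with a constant-velocity (temporal-coherence) motion prior; all function names and parameter values are assumptions for the sketch, not the thesis's MBP code.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 1000                                  # number of particles
        particles = rng.normal(0.0, 1.0, (N, 2))  # columns: position, velocity
        weights = np.ones(N) / N

        def predict(particles, dt=1.0, sigma_v=0.05):
            """Propagate particles under the coherent-motion (constant-velocity) prior."""
            particles[:, 0] += particles[:, 1] * dt
            particles[:, 1] += rng.normal(0.0, sigma_v, len(particles))
            return particles

        def update(weights, particles, z, sigma_obs=0.5):
            """Reweight particles by the likelihood of a noisy position observation z."""
            lik = np.exp(-0.5 * ((particles[:, 0] - z) / sigma_obs) ** 2)
            weights = weights * lik + 1e-300      # guard against total weight collapse
            return weights / weights.sum()

        def resample(particles, weights):
            """Multinomial resampling to concentrate particles on likely states."""
            idx = rng.choice(len(particles), size=len(particles), p=weights)
            return particles[idx], np.ones(len(particles)) / len(particles)

        # One filtering step: predict under the motion prior, correct with z = 0.3.
        particles = predict(particles)
        weights = update(weights, particles, z=0.3)
        particles, weights = resample(particles, weights)
        position_estimate = np.average(particles[:, 0], weights=weights)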

    The Flash-Lag Effect as a Motion-Based Predictive Shift

    Due to its inherent neural delays, the visual system has only outdated access to sensory information about the current position of moving objects. In contrast, living organisms are remarkably able to track and intercept moving objects under a large range of challenging environmental conditions. Physiological, behavioral, and psychophysical evidence strongly suggests that position coding is extrapolated using an explicit and reliable representation of the object's motion, but it is still unclear how these two representations interact. For instance, the so-called flash-lag effect supports the idea of a differential processing of position between moving and static objects. Although elucidating such mechanisms is crucial to our understanding of the dynamics of visual processing, a theory is still missing to explain the different facets of this visual illusion. Here, we reconsider several of the key aspects of the flash-lag effect in order to explore the role of motion in the neural coding of objects' position. First, we formalize the problem using a Bayesian modeling framework that includes a graded representation of the degree of belief about visual motion. We introduce a motion-based prediction model as a candidate explanation for the perception of coherent motion. By including the knowledge of a fixed delay, we can model the dynamics of sensory information integration by extrapolating the information acquired at previous instants in time. Next, we simulate the optimal estimation of object position with and without delay compensation and compare it with human perception under a broad range of different psychophysical conditions. Our computational study suggests that the explicit, probabilistic representation of velocity information is crucial in explaining position coding, and therefore the flash-lag effect. We discuss these theoretical results in light of the putative corrective mechanisms that can be used to cancel out the detrimental effects of neural delays, and we illuminate the more general question of the dynamic representation of spatial information at the present time in the visual pathways.
    Author Summary: Visual illusions are powerful tools to explore the limits and constraints of human perception. One of them has received considerable empirical and theoretical interest: the so-called "flash-lag effect". When a visual stimulus moves along a continuous trajectory, it may be seen ahead of its veridical position with respect to an unpredictable event such as a flash.
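
    The core computational claim, extrapolating delayed sensory information with an explicit velocity estimate, fits in a few lines. The sketch below is a deliberate simplification (linear extrapolation with a known, fixed delay; all names and values are hypothetical), not the paper's probabilistic model.

        def compensate_delay(x_delayed, v_hat, delay):
            """Extrapolate a delayed position estimate to the present time."""
            return x_delayed + v_hat * delay

        # A moving object carries a velocity estimate and is shifted forward;
        # a flashed object has none, so its representation stays delayed.
        moving_now = compensate_delay(x_delayed=10.0, v_hat=2.0, delay=0.1)  # 10.2
        flash_now = 10.0                        # no motion signal, no extrapolation
        perceived_lag = moving_now - flash_now  # 0.2: the flash appears to lag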

    SpArNet: Sparse Asynchronous Neural Network execution for energy efficient inference

    Biological neurons are known to communicate sparsely and asynchronously using spikes. Despite our incomplete understanding of the brain's processing strategies, its low energy consumption in fulfilling delicate tasks suggests the existence of energy-efficient mechanisms. Inspired by these key factors, we introduce SpArNet, a bio-inspired quantization scheme that converts a pre-trained convolutional neural network into a spiking neural network, with the aim of minimizing the computational load of execution on neuromorphic processors. The proposed scheme significantly reduces the number of synaptic operations compared to the reference CNN and can be used for frequent executions of inference tasks. The computational load of SpArNet adjusts to the spatio-temporal dynamics of the input data. We have tested the converted network on two applications (autonomous steering and hand-gesture recognition), demonstrating a significant reduction in the number of required synaptic operations.
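
    A plausible sketch of one such conversion step, sigma-delta quantization of activations, is shown below; the class name, step size, and rounding rule are assumptions for illustration, not SpArNet's published scheme.

        import numpy as np

        class SigmaDeltaEncoder:
            """Transmits integer spike counts only for changes in activation."""

            def __init__(self, n, step=0.1):
                self.step = step              # value carried by a single spike
                self.recon = np.zeros(n)      # receiver-side reconstruction

            def encode(self, activation):
                err = activation - self.recon        # drift since last transmission
                spikes = np.round(err / self.step)   # quantized delta, mostly zeros
                self.recon += spikes * self.step     # keep sender and receiver in sync
                return spikes.astype(int)            # zero entries cost no synaptic ops

    For a static input, such an encoder emits spikes only on the first frame; afterwards, synaptic operations track the spatio-temporal dynamics of the input, matching the behavior the abstract describes.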