17 research outputs found

    Real-time 6-DoF Pose Estimation by an Event-based Camera using Active LED Markers

    Full text link
    Real-time applications for autonomous operations depend largely on fast and robust vision-based localization systems. Since image-processing tasks require handling large amounts of data, the available computational resources often limit the performance of other processes. To overcome this limitation, traditional marker-based localization systems are widely used, since they are easy to integrate and achieve reliable accuracy. However, classical marker-based localization systems depend on standard cameras with low frame rates, which often lose accuracy due to motion blur. In contrast, event-based cameras provide high temporal resolution and a high dynamic range, which can be exploited for fast localization even under challenging visual conditions. This paper proposes a simple but effective event-based pose estimation system using active LED markers (ALM) for fast and accurate pose estimation. The proposed algorithm operates in real time with a latency below 0.5 ms while maintaining output rates of 3 kHz. Experimental results in static and dynamic scenarios demonstrate the computational speed and absolute accuracy of the proposed approach, using the OptiTrack system as the measurement reference. Comment: 14 pages, 12 figures; this paper has been accepted to WACV 202
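    The abstract above does not spell out how the active LED markers are detected in the event stream; a common approach is to identify each marker by its blink frequency. The sketch below illustrates that idea under stated assumptions — the event tuple format, the microsecond timestamps, and the frequency-matching rule are all illustrative and are not taken from the paper.

    ```python
    """Hedged sketch: identifying active LED markers in an event stream by
    their blink frequency. Event format (t_us, x, y, polarity) and the
    matching scheme are assumptions for illustration only."""
    from collections import defaultdict

    def classify_markers(events, marker_freqs_hz, tol_hz=50.0):
        """Estimate a blink frequency per pixel from ON-event timestamps
        (integer microseconds) and match it to the closest known marker
        frequency within tol_hz."""
        on_times = defaultdict(list)
        for t, x, y, pol in events:
            if pol > 0:                        # rising edge of the LED blink
                on_times[(x, y)].append(t)
        labels = {}
        for px, ts in on_times.items():
            if len(ts) < 3:                    # need a few cycles to estimate
                continue
            period_us = (ts[-1] - ts[0]) / (len(ts) - 1)
            freq = 1e6 / period_us             # Hz
            best = min(marker_freqs_hz, key=lambda f: abs(f - freq))
            if abs(best - freq) <= tol_hz:
                labels[px] = best
        return labels

    # Synthetic stream: one marker blinking at 1 kHz, another at 2 kHz.
    events = [(i * 1000, 10, 10, 1) for i in range(5)]
    events += [(i * 500, 20, 20, 1) for i in range(5)]
    print(classify_markers(events, [1000.0, 2000.0]))
    ```

    Because each event carries its own microsecond timestamp, the frequency estimate needs only a handful of blink cycles, which is what makes sub-millisecond marker identification plausible on this kind of sensor.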

    Live Demo: E2P–Events to Polarization Reconstruction from PDAVIS Events

    Get PDF
    This demonstration shows live operation of PDAVIS polarization event camera reconstruction by the E2P DNN reported in the main CVPR conference paper Deep Polarization Reconstruction with PDAVIS Events (paper 9149 [7]). Demo code: github.com/SensorsINI/e2

    Neuromorphic computing using event-based sensors: algorithms and hardware implementations

    No full text
    This thesis concerns the implementation of neuromorphic algorithms, using, as a first step, data from a silicon retina that mimics the behavior of the human eye, and then evolving towards all kinds of event-based signals. These event-based signals stem from a paradigm shift in data representation that allows a high dynamic range, precise temporal resolution and sensor-level data compression. In particular, we study the development of a high-frequency monocular depth-map generator, a real-time spike-sorting algorithm for intelligent brain-machine interfaces, and an unsupervised learning algorithm for pattern recognition. Some of these algorithms (optical flow detection, depth-map construction from stereovision) are in the meantime developed on available neuromorphic platforms (SpiNNaker, TrueNorth), enabling a fully neuromorphic pipeline, from sensing to computing, with a low power budget.

    An error-propagation spiking neural network compatible with neuromorphic processors

    Full text link
    Spiking neural networks have shown great promise for the design of low-power sensory-processing and edge-computing hardware platforms. However, implementing on-chip learning algorithms on such architectures is still an open challenge, especially for multi-layer networks that rely on the back-propagation algorithm. In this paper, we present a spike-based learning method that approximates back-propagation using local weight-update mechanisms and is compatible with mixed-signal analog/digital neuromorphic circuits. We introduce a network architecture that enables synaptic weight-update mechanisms to back-propagate error signals across layers, and present a network that can be trained to distinguish between two spike-based patterns that have identical mean firing rates but different spike timings. This work represents a first step towards the design of ultra-low-power mixed-signal neuromorphic processing systems with on-chip learning circuits that can be trained to recognize different spatio-temporal patterns of spiking activity (e.g. produced by event-based vision or auditory sensors)
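    The key idea above — approximating back-propagation with weight updates that use only locally available signals plus an error delivered through a dedicated feedback pathway — can be illustrated with a rate-based stand-in. The sketch below uses fixed random feedback weights (feedback-alignment style); the network sizes, nonlinearity, and learning rate are illustrative assumptions, not the paper's spike-based circuit.

    ```python
    """Hedged sketch of a local, backprop-like learning rule: the output
    error reaches the hidden layer through fixed random feedback weights B,
    so every weight change uses only pre-/post-synaptic activity and a
    locally delivered error. Rate-based illustration, not the paper's
    spiking implementation."""
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hid, n_out = 4, 8, 2
    W1 = rng.normal(0.0, 0.5, (n_hid, n_in))
    W2 = rng.normal(0.0, 0.5, (n_out, n_hid))
    B = rng.normal(0.0, 0.5, (n_hid, n_out))   # fixed feedback pathway

    def step(x, target, lr=0.05):
        h = np.tanh(W1 @ x)                     # hidden activity
        y = W2 @ h                              # linear readout
        err = target - y                        # output error signal
        # Local updates: outer products of locally available quantities.
        W2 += lr * np.outer(err, h)
        h_err = (B @ err) * (1.0 - h ** 2)      # error routed back via B
        W1 += lr * np.outer(h_err, x)
        return float(err @ err)

    x = np.array([1.0, 0.0, -1.0, 0.5])
    target = np.array([0.5, -0.5])
    losses = [step(x, target) for _ in range(200)]
    print(round(losses[0], 3), round(losses[-1], 6))
    ```

    The point of the fixed feedback matrix is that no weight transport is needed: the forward weights W2 are never read out by the backward path, which is what makes such a rule plausible in analog neuromorphic hardware.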

    Online Detection of Vibration Anomalies Using Balanced Spiking Neural Networks

    Full text link
    Vibration patterns yield valuable information about the health state of a running machine, which is commonly exploited in predictive-maintenance tasks for large industrial systems. However, the overhead, in terms of size, complexity and power budget, required by classical methods to exploit this information is often prohibitive for smaller-scale applications such as autonomous cars, drones or robotics. Here we propose a neuromorphic approach to vibration analysis using spiking neural networks that can be applied to a wide range of scenarios. We present a spike-based end-to-end pipeline able to detect system anomalies from vibration data, using building blocks that are compatible with analog/digital neuromorphic circuits. This pipeline operates in an online, unsupervised fashion, and relies on a cochlea model, feedback adaptation and a balanced spiking neural network. We show that the proposed method matches or exceeds state-of-the-art performance on two publicly available data sets. Further, we demonstrate a working proof of concept implemented on an asynchronous neuromorphic processor device. This work represents a significant step towards the design and implementation of autonomous low-power edge-computing devices for online vibration monitoring
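    The pipeline described above — encode the vibration signal into spikes, then flag deviations from a learned baseline online — can be sketched in a few lines. The send-on-delta encoder, window length, and scoring rule below are illustrative assumptions; the paper's actual pipeline uses a cochlea model and a balanced spiking network rather than these simplifications.

    ```python
    """Hedged sketch of an online spike-based vibration-anomaly detector:
    send-on-delta spike encoding followed by an anomaly score that compares
    the current spike rate against a slowly adapting baseline. All
    parameters are illustrative assumptions."""
    import math

    def encode_spikes(signal, threshold=0.3):
        """Emit a spike index whenever the signal moves more than
        `threshold` away from the value at the last spike (event-style
        send-on-delta encoding)."""
        spikes, ref = [], signal[0]
        for i, v in enumerate(signal[1:], 1):
            if abs(v - ref) >= threshold:
                spikes.append(i)
                ref = v
        return spikes

    def anomaly_scores(signal, win=50, alpha=0.01, threshold=0.3):
        """Online score: spike rate in a sliding window minus an
        exponentially adapting baseline rate."""
        spikes = set(encode_spikes(signal, threshold))
        baseline, scores, recent = 0.0, [], []
        for i in range(len(signal)):
            recent.append(1 if i in spikes else 0)
            if len(recent) > win:
                recent.pop(0)
            rate = sum(recent) / win
            scores.append(rate - baseline)
            baseline += alpha * (rate - baseline)  # slow adaptation
        return scores

    # Healthy low-amplitude hum, then a high-amplitude "fault" section.
    sig = [0.1 * math.sin(0.3 * i) for i in range(400)]
    sig += [1.0 * math.sin(0.9 * i) for i in range(200)]
    s = anomaly_scores(sig)
    print(max(s[:400]) < max(s[400:]))  # fault section should score higher
    ```

    Because the baseline adapts only slowly, a sudden change in vibration amplitude produces a sharp, transient excursion of the score — the unsupervised, online behavior the abstract describes.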

    Neuromorphic networks on the SpiNNaker platform

    Full text link
    This paper describes spike-based neural networks for optical flow and stereo estimation from Dynamic Vision Sensor data. These methods combine the Asynchronous Time-based Image Sensor with the SpiNNaker platform. The sensor generates spikes with sub-millisecond resolution in response to scene illumination changes. These spikes are processed by a spiking neural network running on SpiNNaker with a 1 millisecond resolution to accurately determine the order and time difference of spikes from neighboring pixels, and therefore infer the velocity, direction or depth. The spiking neural networks are variants of the Barlow & Levick method for optical flow estimation, and of the Marr & Poggio algorithm for stereo matching
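    The core computation described above — inferring motion from the order and time difference of spikes at neighboring pixels — can be sketched directly on event data. The sketch below is a Barlow-Levick-style illustration only: the event format, microsecond timestamps, and coincidence window are assumptions, whereas the paper implements this with spiking neurons on SpiNNaker at 1 ms resolution.

    ```python
    """Hedged sketch of Barlow-Levick-style direction selectivity: for each
    pair of horizontally adjacent pixels, the sign of the spike-time
    difference gives the motion direction and its magnitude the speed.
    Event format (t_us, x, y) and window size are illustrative."""

    def horizontal_flow(events, window_us=10_000):
        """events: list of (t_us, x, y) with integer microsecond
        timestamps. Returns per-pixel-pair velocity estimates in
        pixels per microsecond (positive = rightward motion)."""
        last = {}                                  # most recent spike per pixel
        flows = []
        for t, x, y in sorted(events):
            for nx, sign in ((x - 1, 1.0), (x + 1, -1.0)):
                if (nx, y) in last:
                    dt = t - last[(nx, y)]
                    if 0 < dt <= window_us:        # coincidence window
                        flows.append(((min(x, nx), y), sign / dt))
            last[(x, y)] = t
        return flows

    # An edge moving rightward: each pixel fires 2 ms after its left
    # neighbor, so both pairs report a positive (rightward) velocity.
    events = [(0, 0, 0), (2000, 1, 0), (4000, 2, 0)]
    print(horizontal_flow(events))
    ```

    The coincidence window plays the role of the delayed inhibition in the original Barlow-Levick circuit: spike pairs that arrive too far apart in time are simply not treated as evidence of motion.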