18 research outputs found

    Asynchronous, Photometric Feature Tracking using Events and Frames

    We present a method that leverages the complementarity of event cameras and standard cameras to track visual features with low latency. Event cameras are novel sensors that output pixel-level brightness changes, called "events". They offer significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and latency on the order of microseconds. However, because the same scene pattern can produce different events depending on the motion direction, establishing event correspondences across time is challenging. By contrast, standard cameras provide intensity measurements (frames) that do not depend on motion direction. Our method extracts features on frames and subsequently tracks them asynchronously using events, thereby exploiting the best of both types of data: the frames provide a photometric representation that does not depend on motion direction, and the events provide low-latency updates. In contrast to previous works, which are based on heuristics, this is the first principled method that uses raw intensity measurements directly, based on a generative event model within a maximum-likelihood framework. As a result, our method produces feature tracks that are both more accurate (subpixel accuracy) and longer than the state of the art, across a wide variety of scenes.
    Comment: 22 pages, 15 figures, Video: https://youtu.be/A7UfeUnG6c
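    The generative event model mentioned above can be illustrated with a minimal, idealized sketch: an ideal event pixel emits an event each time its log-intensity moves a fixed contrast threshold away from the level at the previous event. Function names and the threshold value below are illustrative assumptions, not the paper's implementation.

```python
def simulate_events(log_intensity, timestamps, threshold=0.25):
    """Idealized generative event model (sketch): emit (t, polarity)
    whenever the pixel's log-intensity moves +/- threshold away from the
    reference level set at the previous event."""
    events = []
    ref = log_intensity[0]
    for t, level in zip(timestamps, log_intensity):
        while level - ref >= threshold:   # positive contrast crossing -> ON
            ref += threshold
            events.append((t, +1))
        while ref - level >= threshold:   # negative contrast crossing -> OFF
            ref -= threshold
            events.append((t, -1))
    return events

# A log-intensity ramp up then down yields ON events, then OFF events:
# the same edge produces different events depending on motion direction.
ts = [i / 100 for i in range(101)]
log_I = [t if t < 0.5 else 1.0 - t for t in ts]
events = simulate_events(log_I, ts)
```

    Note how the event stream encodes only changes: a static pattern produces no events at all, which is why frames are still needed for a motion-invariant photometric template.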

    Inceptive Event Time-Surfaces for Object Classification Using Neuromorphic Cameras

    This paper presents a novel fusion of low-level dimensionality-reduction approaches into an effective approach for high-level object representation in neuromorphic camera data, called Inceptive Event Time-Surfaces (IETS). IETSs overcome several limitations of conventional time-surfaces by increasing robustness to noise, promoting spatial consistency, and improving the temporal localization of (moving) edges. Combining IETS with transfer learning improves state-of-the-art performance on the challenging problem of object classification using event camera data.
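    A conventional exponential-decay time-surface (the baseline that IETS improves on) can be sketched as follows; the array layout, decay constant `tau`, and the `-1` sentinel for "no event yet" are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def time_surface(last_event_t, t_now, tau=0.05):
    """Conventional time-surface (sketch): each pixel's value decays
    exponentially with the time elapsed since its most recent event;
    pixels that have never fired (sentinel -1) stay at 0."""
    elapsed = t_now - last_event_t
    return np.where(last_event_t >= 0.0, np.exp(-elapsed / tau), 0.0)

# Per-pixel timestamps of the most recent event (-1.0 = no event yet).
last_event_t = np.array([[0.00, 0.09, -1.0],
                         [0.05, 0.10, 0.02]])
surface = time_surface(last_event_t, t_now=0.10)
# The pixel that just fired (t = 0.10) has value 1.0; older events decay.
```

    Noise events overwrite `last_event_t` and corrupt such a surface directly, which is one limitation the inceptive-event filtering in IETS is designed to address.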

    Efficient aberrations pre-compensation and wavefront correction with a deformable mirror in the middle of a petawatt-class CPA laser system

    In this paper, we describe the experimental validation of a technique for correcting wavefront aberration in the middle of the laser amplifying chain. This technique allows the correction of the aberrations from the first part of the laser system and the pre-compensation of the aberrations built up in the second part. This approach allows effective aberration management in the laser chain, to protect the optical surfaces and optimize performance, and is the only viable approach for multi-petawatt laser systems from a technical and economic point of view. It has become possible with the introduction of new deformable mirrors with lower static aberrations and greater dynamics than standard devices.
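    The compensation scheme can be stated compactly: the mid-chain deformable mirror must remove the wavefront error accumulated upstream and additionally apply the negative of the error the downstream amplifiers will add. A minimal conceptual sketch, with hypothetical names and wavefront maps as plain nested lists:

```python
def dm_command(upstream_aberration, downstream_aberration):
    """Mid-chain DM setting (conceptual sketch, not the paper's method):
    cancel the wavefront error already accumulated and pre-compensate the
    error still to come, so the output wavefront is flat. Inputs are
    same-shaped 2-D wavefront maps (e.g. in waves)."""
    return [[-(u + d) for u, d in zip(row_u, row_d)]
            for row_u, row_d in zip(upstream_aberration, downstream_aberration)]

# Output wavefront = upstream + DM + downstream = 0 everywhere.
cmd = dm_command([[0.10, -0.05]], [[0.20, 0.05]])
```

    The practical constraint motivating the new mirrors is visible here: the DM must have enough stroke (dynamics) to carry both error terms at once.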

    An embedded human motion capture system for an assistive walking robot


    Real-time tracking using Wavelets Representation


    TRACKING WITH A PAN-TILT-ZOOM CAMERA FOR AN ACC SYSTEM

    In this paper, visual perception of the frontal view in intelligent cars is considered. A Pan-Tilt-Zoom (PTZ) camera is used to track preceding vehicles. The aim of this work is to keep the rear-view image of the target vehicle stable in scale and position. An efficient real-time tracking algorithm is integrated. It is a generic and robust approach, particularly well suited to detecting scale changes. The camera rotations and zoom are controlled by visual servoing. The methods presented here were tested on real road sequences within the VELAC demonstration vehicle, and experimental results show the effectiveness of the approach. Future work lies in the development of a visual sensor combining a PTZ camera and a standard camera: the standard camera has a short focal length and is devoted to analysis of the whole frontal scene, while the PTZ camera gives a local view of this scene to increase sensor range and precision.
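    The servoing objective above (keep the target stable in position and scale) can be sketched as a proportional control law; the gains, image size, and function names are assumptions for illustration, not the paper's controller.

```python
def ptz_command(cx, cy, scale, img_w=640, img_h=480,
                target_scale=1.0, gain=0.5):
    """Proportional visual servoing (sketch): pan/tilt rates are driven
    by the target's offset from the image centre, zoom by its scale error.
    cx, cy: tracked target centre (pixels); scale: current/desired size."""
    pan = gain * (cx - img_w / 2) / img_w    # > 0: target right of centre
    tilt = gain * (cy - img_h / 2) / img_h   # > 0: target below centre
    zoom = gain * (target_scale - scale)     # > 0: target too small, zoom in
    return pan, tilt, zoom

# A centred target at the desired scale needs no camera motion.
cmd = ptz_command(320, 240, 1.0)
```

    Driving zoom from the tracker's scale estimate is what keeps the rear view of the preceding vehicle at a roughly constant apparent size as the inter-vehicle distance changes.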

    What can neuromorphic event-driven precise timing add to spike-based pattern recognition?

    This letter introduces a study to precisely measure what an increase in spike timing precision can add to spike-driven pattern recognition algorithms. The concept of generating spikes from images by converting gray levels into spike timings is currently at the basis of almost every spike-based model of biological visual systems. The use of images naturally leads to generating incorrect, artificial, and redundant spike timings and, more importantly, contradicts biological findings indicating that visual processing is massively parallel and asynchronous, with high temporal resolution. A new concept for acquiring visual information through pixel-individual asynchronous level-crossing sampling has been proposed in a recent generation of asynchronous neuromorphic visual sensors. Unlike conventional cameras, these sensors acquire data not at fixed points in time for the entire array but at fixed amplitude changes of their input, resulting in data that are optimally sparse in space and time: pixel-individual, precisely timed, and generated only if new (previously unknown) information is available (event-based). This letter uses the high-temporal-resolution spiking output of neuromorphic event-based visual sensors to show that lowering time precision degrades performance on several recognition tasks, specifically when reaching the conventional range of machine vision acquisition frequencies (30–60 Hz). The use of information theory to characterize separability between classes at each temporal resolution shows that high-temporal-resolution acquisition provides up to 70% more information than conventional spikes generated from frame-based acquisition as used in standard artificial vision, thus drastically increasing the separability between classes of objects. Experiments on real data show that the amount of information loss is correlated with temporal precision.
Our information-theoretic study highlights the potential of neuromorphic asynchronous visual sensors for both practical applications and theoretical investigations. Moreover, it suggests that representing visual information as a precise sequence of spike times, as reported in the retina, offers considerable advantages for neuro-inspired visual computations.
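    The two ideas measured in the letter, coding gray levels as spike latencies and degrading temporal precision toward frame-based rates, can be sketched as follows; the function names and the linear latency code are illustrative assumptions.

```python
def gray_to_spike_time(gray, t_max=1.0):
    """Latency coding (sketch): brighter pixels spike earlier; gray level
    255 spikes at t = 0, gray level 0 at t = t_max."""
    return t_max * (1.0 - gray / 255.0)

def quantize_spike_time(t, rate_hz):
    """Degrade temporal precision by snapping a spike time to the nearest
    tick of a frame-based acquisition running at rate_hz."""
    period = 1.0 / rate_hz
    return round(t / period) * period

# Two distinct spike times become indistinguishable at 30 Hz precision,
# illustrating the information loss studied in the letter.
t_a = gray_to_spike_time(200)
t_b = gray_to_spike_time(204)
same_at_30hz = (quantize_spike_time(t_a, 30.0)
                == quantize_spike_time(t_b, 30.0))
```

    Classes whose spike patterns differ only at sub-frame timescales collapse together after quantization, which is one intuition behind the reported drop in class separability at 30–60 Hz.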