
    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location, and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have large potential for robotics and computer vision in scenarios that challenge traditional cameras, such as those requiring low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available, and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
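    The event stream described in this abstract can be pictured as a sequence of (timestamp, pixel location, polarity) records. The following is a minimal illustrative sketch, not drawn from the survey itself: the `Event` class and the polarity-accumulation step are assumptions about one common way such data is represented and visualized as a simple "event frame".

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Event:
    """One asynchronous brightness-change event (illustrative representation)."""
    t_us: int      # timestamp in microseconds
    x: int         # pixel column
    y: int         # pixel row
    polarity: int  # +1 for a brightness increase, -1 for a decrease


def accumulate(events: List[Event], width: int, height: int) -> List[List[int]]:
    """Sum signed polarities per pixel over a time window,
    producing a simple 2D event frame."""
    frame = [[0] * width for _ in range(height)]
    for ev in events:
        frame[ev.y][ev.x] += ev.polarity
    return frame
```

Unlike a conventional image, such a frame is sparse: pixels that saw no brightness change in the window stay at zero.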

    Vision-based analysis of pedestrian traffic data

    Reducing traffic congestion has become a major issue within urban environments. Traditional approaches, such as increasing road sizes, may prove impossible in certain scenarios, such as city centres, or ineffectual if current predictions of large growth in world traffic volumes hold true. An alternative approach lies in increasing the management efficiency of pre-existing infrastructure and public transport systems through the use of Intelligent Transportation Systems (ITS). In this paper, we focus on the requirement of obtaining robust pedestrian traffic flow data within these areas. We propose the use of a flexible and robust stereo-vision pedestrian detection and tracking approach as a basis for obtaining this information. Given this framework, we propose the use of a pedestrian indexing scheme and a suite of tools that facilitate the declaration of user-defined pedestrian events or requests for specific statistical traffic flow data. The detection of the required events or the constant flow of statistical information can be incorporated into a variety of ITS solutions for applications in traffic management, public transport systems, and urban planning.

    Independent Motion Detection with Event-driven Cameras

    Unlike standard cameras that send intensity images at a constant frame rate, event-driven cameras asynchronously report pixel-level brightness changes, offering low latency and high temporal resolution (both on the order of microseconds). As such, they have great potential for fast and low-power vision algorithms for robots. Visual tracking, for example, is easily achieved even for very fast stimuli, as only moving objects cause brightness changes. However, cameras mounted on a moving robot are typically non-stationary, and the same tracking problem becomes confounded by background clutter events due to the robot ego-motion. In this paper, we propose a method for segmenting the motion of an independently moving object for event-driven cameras. Our method detects and tracks corners in the event stream and learns the statistics of their motion as a function of the robot's joint velocities when no independently moving objects are present. During robot operation, independently moving objects are identified by discrepancies between the corner velocities predicted from ego-motion and the measured corner velocities. We validate the algorithm on data collected from the neuromorphic iCub robot. We achieve a precision of ~90% and show that the method is robust to changes in speed of both the head and the target. Comment: 7 pages, 6 figures
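    The core detection step in this abstract, comparing ego-motion-predicted corner velocities against measured ones, can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name, the (vx, vy) tuple representation, and the fixed threshold are assumptions; the paper learns the prediction from joint-velocity statistics, which is abstracted away here.

```python
import math
from typing import List, Tuple

Velocity = Tuple[float, float]  # (vx, vy) image velocity of a tracked corner


def flag_independent(predicted: List[Velocity],
                     measured: List[Velocity],
                     threshold: float = 2.0) -> List[bool]:
    """Flag corners whose measured image velocity deviates from the
    ego-motion prediction by more than `threshold` (same units as the
    velocities); such corners are candidates for independent motion."""
    flags = []
    for (pvx, pvy), (mvx, mvy) in zip(predicted, measured):
        discrepancy = math.hypot(mvx - pvx, mvy - pvy)
        flags.append(discrepancy > threshold)
    return flags
```

A corner consistent with ego-motion produces a small discrepancy and is treated as background; a large discrepancy marks a likely independently moving object.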

    Overcoming the Challenges Associated with Image-based Mapping of Small Bodies in Preparation for the OSIRIS-REx Mission to (101955) Bennu

    The OSIRIS-REx Asteroid Sample Return Mission is the third mission in NASA's New Frontiers Program and is the first U.S. mission to return samples from an asteroid to Earth. The most important decision ahead of the OSIRIS-REx team is the selection of a prime sample-site on the surface of asteroid (101955) Bennu. Mission success hinges on identifying a site that is safe and has regolith that can readily be ingested by the spacecraft's sampling mechanism. To inform this mission-critical decision, the surface of Bennu is mapped using the OSIRIS-REx Camera Suite, and the images are used to develop several foundational data products. Acquiring the necessary inputs to these data products requires observational strategies that are defined specifically to overcome the challenges associated with mapping a small irregular body. We present these strategies in the context of assessing candidate sample-sites at Bennu according to a framework of decisions regarding the relative safety, sampleability, and scientific value across the asteroid's surface. To create data products that aid these assessments, we describe the best practices developed by the OSIRIS-REx team for image-based mapping of irregular small bodies. We emphasize the importance of using 3D shape models and the ability to work in body-fixed rectangular coordinates when dealing with planetary surfaces that cannot be uniquely addressed by body-fixed latitude and longitude. Comment: 31 pages, 10 figures, 2 tables