    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location, and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as low-latency, high-speed, and high-dynamic-range applications. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available, and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
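
    The event-generation model the survey describes is simple enough to sketch: a pixel emits an event whenever its log-brightness has changed by more than a contrast threshold since that pixel's last event. Below is a minimal, illustrative Python emulation of this principle on frame data; the names (Event, emulate_events) and the threshold value are assumptions for illustration, not part of any camera SDK.

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class Event:
        t: float       # timestamp in seconds (real sensors resolve microseconds)
        x: int         # pixel column
        y: int         # pixel row
        polarity: int  # +1 for a brightness increase, -1 for a decrease

    def emulate_events(frames, timestamps, contrast=0.2):
        """Convert a stack of intensity frames into an asynchronous event list."""
        log_ref = np.log(frames[0].astype(np.float64) + 1e-6)  # per-pixel reference
        events = []
        for frame, t in zip(frames[1:], timestamps[1:]):
            log_cur = np.log(frame.astype(np.float64) + 1e-6)
            diff = log_cur - log_ref
            ys, xs = np.nonzero(np.abs(diff) >= contrast)
            for y, x in zip(ys, xs):
                events.append(Event(t, int(x), int(y), int(np.sign(diff[y, x]))))
                log_ref[y, x] = log_cur[y, x]  # reset the reference after firing
        return events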

    Event-Based Noise Filtration with Point-of-Interest Detection and Tracking for Space Situational Awareness

    This thesis explores an asynchronous noise-suppression technique to be used in conjunction with asynchronous, Gaussian-blob tracking on dynamic vision sensor (DVS) data. This type of sensor is a member of a relatively new class of neuromorphic sensing devices that emulate the change-based detection properties of the human eye. By leveraging a biologically inspired mode of operation, these sensors can achieve significantly higher sampling rates than conventional cameras, while also eliminating redundant data generated by static backgrounds. The resulting high dynamic range and fast acquisition time of DVS recordings enable the imaging of high-velocity targets despite ordinarily problematic lighting conditions. The technique presented here relies on treating each pixel of the sensor as a spiking cell that keeps track of its own activity over time; events can then be filtered out of the sensor event stream by user-configurable threshold values that form a temporal bandpass filter. In addition, asynchronous blob tracking is supplemented with double-exponential-smoothing prediction and Bézier curve fitting, in order to smooth tracker movement and interpolate the target trajectory, respectively. This overall scheme is intended to achieve asynchronous point-source tracking using a DVS for space-based applications, particularly tracking distant, dim satellites. In the space environment, radiation effects are expected to introduce transient, and possibly persistent, noise into the asynchronous event stream of the DVS. Given the large distances between objects in space, targets of interest may be no larger than a single pixel and can therefore appear similar to such noise-induced events. In this thesis, the asynchronous approach is experimentally compared against a more traditional approach applied to reconstructed frame data, on both performance and accuracy metrics. The results show that the asynchronous approach can produce comparable or even better tracking accuracy while reducing the execution time of the process roughly sevenfold on average.
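
    A minimal sketch of the two core mechanisms described above, under the assumption that per-pixel "activity" is tracked as the inter-event interval: events whose interval falls outside a configurable [t_min, t_max] band are treated as noise, and tracker coordinates are smoothed with Holt-style double-exponential smoothing. All names and default values are illustrative, not taken from the thesis.

    import numpy as np

    def temporal_bandpass(events, width, height, t_min=1e-4, t_max=1e-1):
        """events: (t, x, y, polarity) tuples in time order; returns kept events."""
        last_seen = np.full((height, width), -np.inf)  # per-pixel activity record
        kept = []
        for t, x, y, p in events:
            dt = t - last_seen[y, x]   # time since this pixel last fired
            last_seen[y, x] = t        # update the pixel's activity either way
            if t_min <= dt <= t_max:   # inside the band: plausibly real signal
                kept.append((t, x, y, p))
        return kept

    def double_exp_smooth(z, state, alpha=0.5):
        """One Holt-style double-exponential smoothing step for one coordinate.
        state is (level, trend); the one-step-ahead prediction is level + trend."""
        level, trend = state
        new_level = alpha * z + (1 - alpha) * (level + trend)
        new_trend = alpha * (new_level - level) + (1 - alpha) * trend
        return new_level, new_trend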

    Inceptive Event Time-Surfaces for Object Classification Using Neuromorphic Cameras

    This paper presents a novel fusion of low-level approaches for dimensionality reduction into an effective approach for high-level object recognition in neuromorphic camera data, called Inceptive Event Time-Surfaces (IETS). IETSs overcome several limitations of conventional time-surfaces by increasing robustness to noise, promoting spatial consistency, and improving the temporal localization of (moving) edges. Combining IETS with transfer learning improves state-of-the-art performance on the challenging problem of object classification using event camera data.
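
    For readers unfamiliar with the representation IETS builds on, the following sketch shows a conventional time-surface (an exponentially decayed map of each pixel's most recent event time) together with a rough paraphrase of the inceptive-event idea of keeping only the event that leads a same-pixel burst. Function names and constants are illustrative assumptions, not the paper's implementation.

    import numpy as np

    def time_surface(events, width, height, t_now, tau=0.05):
        """Conventional time-surface: exponential decay of last-event times."""
        last = np.full((height, width), -np.inf)
        for t, x, y, p in events:
            last[y, x] = t                   # most recent event per pixel
        return np.exp((last - t_now) / tau)  # 1.0 at t_now, decaying toward 0

    def inceptive_mask(events, gap=0.005):
        """Flag events with no same-pixel predecessor within `gap` seconds,
        i.e. events that lead a burst (a rough paraphrase of inceptive events)."""
        prev, mask = {}, []
        for t, x, y, p in events:
            mask.append(t - prev.get((x, y), -np.inf) > gap)
            prev[(x, y)] = t
        return mask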

    A Comparative Evaluation of the Detection and Tracking Capability Between Novel Event-Based and Conventional Frame-Based Sensors

    Traditional frame-based technology continues to suffer from motion blur, low dynamic range, speed limitations, and high data storage requirements. Event-based sensors offer a potential solution to these challenges. This research centers on a comparative assessment of frame-based and event-based object detection and tracking. A basic frame-based algorithm is compared against two different event-based algorithms: first, event-based pseudo-frames were parsed through standard frame-based algorithms (as sketched below), and second, target tracks were constructed directly from filtered events. The findings show there is significant value in pursuing the technology further.
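
    The first of the two event-based approaches, accumulating events into pseudo-frames that conventional algorithms can consume, can be sketched as follows; the binning window and normalization are illustrative assumptions rather than the thesis's exact procedure.

    import numpy as np

    def accumulate_pseudo_frame(events, width, height, t0, window=0.01):
        """Bin events from [t0, t0 + window) into an 8-bit image that standard
        frame-based detection and tracking pipelines can consume."""
        frame = np.zeros((height, width), dtype=np.int32)
        for t, x, y, p in events:
            if t0 <= t < t0 + window:
                frame[y, x] += 1             # event count per pixel, polarity ignored
        peak = max(int(frame.max()), 1)      # avoid division by zero on empty windows
        return (255 * frame // peak).astype(np.uint8)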

    Real-time event-based unsupervised feature consolidation and tracking for space situational awareness

    Earth orbit is a limited natural resource that hosts a vast range of vital space-based systems supporting the international community's national, commercial, and defence interests. This resource is rapidly becoming depleted, with overcrowding in high-demand orbital slots and a growing presence of space debris. We propose the Fast Iterative Extraction of Salient targets for Tracking Asynchronously (FIESTA) algorithm as a robust, real-time, and reactive approach to optical Space Situational Awareness (SSA) using Event-Based Cameras (EBCs) to detect, localize, and track Resident Space Objects (RSOs) accurately and promptly. We address the challenges posed by the asynchronous nature and high temporal resolution of EBC output accurately, without supervision, and with few tunable parameters, using concepts established in the neuromorphic and conventional tracking literature. We show that this algorithm is capable of highly accurate in-frame RSO velocity estimation and average sub-pixel localization in a simulated test environment, which distinguishes the capabilities of the EBC and optical setup from those of the proposed tracking system. This work is a fundamental step toward accurate end-to-end real-time optical event-based SSA and develops the foundation for robust closed-form tracking evaluated using standardized tracking metrics.
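
    The sub-pixel localization and in-frame velocity estimation that the evaluation measures can be illustrated with a simple event-cluster centroid and a least-squares fit; the clustering itself is omitted, and the function below is a hedged stand-in rather than FIESTA's actual estimator.

    import numpy as np

    def subpixel_centroid(cluster_events):
        """cluster_events: (t, x, y, polarity) rows for one tracked object."""
        ev = np.asarray(cluster_events, dtype=np.float64)
        x_hat, y_hat = ev[:, 1].mean(), ev[:, 2].mean()  # sub-pixel position
        if len(ev) > 1:
            vx = np.polyfit(ev[:, 0], ev[:, 1], 1)[0]    # least-squares slope of
            vy = np.polyfit(ev[:, 0], ev[:, 2], 1)[0]    # position vs. time = velocity
        else:
            vx = vy = 0.0
        return (x_hat, y_hat), (vx, vy)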

    Exploring space situational awareness using neuromorphic event-based cameras

    The orbits around Earth are a limited natural resource, one that hosts a vast range of vital space-based systems supporting commercial industries, civil organisations, and national defence. The availability of this space resource is rapidly depleting due to the ever-growing presence of space debris and rampant overcrowding, especially in the limited and highly desirable slots in geosynchronous orbit. The field of Space Situational Awareness encompasses tasks aimed at mitigating these hazards to on-orbit systems through the monitoring of satellite traffic. Essential to this task is the collection of accurate and timely observation data. This thesis explores the use of a novel sensor paradigm to optically collect and process sensor data to enhance and improve space situational awareness tasks. Solving this issue is critical to ensure that we can continue to utilise the space environment in a sustainable way. However, these tasks pose significant engineering challenges involving the detection and characterisation of faint, highly distant, and high-speed targets. Recent advances in neuromorphic engineering have led to the availability of high-quality neuromorphic event-based cameras that provide a promising alternative to the conventional cameras used in space imaging. These cameras offer the potential to improve the capabilities of existing space tracking systems and have been shown to detect and track satellites, or ‘Resident Space Objects’, at low data rates, at high temporal resolutions, and in conditions typically unsuitable for conventional optical cameras. This thesis presents a thorough exploration of neuromorphic event-based cameras for space situational awareness tasks and establishes a rigorous foundation for event-based space imaging. The work conducted in this project demonstrates how to enable event-based space imaging systems that serve the goals of space situational awareness by providing accurate and timely information on the space domain. By developing and implementing event-based processing techniques, the asynchronous operation, high temporal resolution, and dynamic range of these novel sensors are leveraged to provide low-latency target acquisition and rapid reaction to challenging satellite tracking scenarios. The algorithms and experiments developed in this thesis study the properties and trade-offs of event-based space imaging and provide comparisons with traditional observing methods and conventional frame-based sensors. The outcomes of this thesis demonstrate the viability of event-based cameras for use in tracking and space imaging tasks, and thereby contribute to the growing efforts of the international space situational awareness community and to the development of event-based technology in astronomy and space science applications.

    Event Probability Mask (EPM) and Event Denoising Convolutional Neural Network (EDnCNN) for Neuromorphic Cameras

    This paper presents a novel method for labeling real-world neuromorphic camera sensor data by calculating the likelihood of generating an event at each pixel within a short time window, which we refer to as the "event probability mask" or EPM. Its applications include (i) objective benchmarking of event denoising performance, (ii) training convolutional neural networks for noise removal, called the "event denoising convolutional neural network" (EDnCNN), and (iii) estimating internal neuromorphic camera parameters. We provide the first dataset (DVSNOISE20) of real-world labeled neuromorphic camera events for noise removal. (Comment: submitted to CVPR 2020.)
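
    The EPM itself is derived from internal camera parameters and intensity measurements, but its core quantity, the likelihood of an event at each pixel within a short time window, can be approximated empirically. The following frequency-count stand-in is purely illustrative and is not the paper's method.

    import numpy as np

    def empirical_event_probability(events, width, height, t_start, t_end, window=0.005):
        """Fraction of short windows in which each pixel fired at least once."""
        n_windows = max(int((t_end - t_start) / window), 1)
        fired = np.zeros((n_windows, height, width), dtype=bool)
        for t, x, y, p in events:
            idx = int((t - t_start) / window)
            if 0 <= idx < n_windows:
                fired[idx, y, x] = True     # at least one event in this window
        return fired.mean(axis=0)           # per-pixel probability in [0, 1]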