
    Event-based Camera Tracker by ∇t NeRF

    When a camera travels through a 3D world, only a fraction of the pixel values change; an event-based camera observes these changes as sparse events. How can we exploit sparse events to recover the camera pose efficiently? We show that the camera pose can be recovered by minimizing the error between the sparse events and the temporal gradient of the scene represented as a neural radiance field (NeRF). To enable the computation of the scene's temporal gradient, we make the camera pose input to the NeRF a function of time. When the input pose coincides with the actual pose, the temporal gradient of the NeRF's output equals the observed intensity changes at the event locations. Using this principle, we propose an event-based camera pose tracking framework, TeGRA, which updates the pose from sparse event observations. To the best of our knowledge, this is the first camera pose estimation algorithm that uses an implicit scene representation and the sparse intensity changes from events.
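
    As a rough illustration of the stated principle (not the authors' implementation), the sketch below forms the residual that such a tracker would minimize: the NeRF's temporal gradient at the event pixels, approximated by a finite difference of renderings along the assumed camera trajectory, should match the brightness changes reported by the events. The renderer name and signature are placeholders.

    ```python
    import numpy as np

    # `render_log_intensity(pose, pixels)` is a stand-in for a trained NeRF
    # renderer returning per-pixel log intensity; it is not from the paper.
    def event_residual(render_log_intensity, pose_t, pose_t_dt, events, dt):
        """Residual between the scene's temporal gradient and observed events.

        events: array of shape (N, 3) with rows (x, y, delta_log_intensity),
                i.e. the brightness change each event reports over the window dt.
        """
        pixels = events[:, :2].astype(int)
        observed_change = events[:, 2]
        i0 = render_log_intensity(pose_t, pixels)      # log intensity at time t
        i1 = render_log_intensity(pose_t_dt, pixels)   # log intensity at t + dt
        predicted_change = i1 - i0                     # ≈ dt * d(log I)/dt
        return predicted_change - observed_change      # minimized w.r.t. the pose
    ```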

    Application of an event-based camera for real-time velocity resolved kinetics

    We describe here the application of an inexpensive event-based/neuromorphic camera in an ion imaging experiment operated at a 1 kHz detection rate to study real-time velocity-resolved kinetics of thermal desorption. Such measurements involve a single gas pulse to initiate a time-dependent desorption process and a high repetition-rate laser, where each laser pulse is used to produce an ion image. The sequence of ion images allows the time dependence of the desorption flux to be followed in real time. In previous work, where a conventional framing camera was used, the large number of megapixel-sized images required data transfer and storage rates of up to 16 GB/s. This necessitated a large onboard memory that was quickly filled and limited continuous measurement to only a few seconds; read-out of the memory became the bottleneck to the rate of data acquisition. We show here that, since most pixels in each ion image contain no data, the data rate can be dramatically reduced by using an event-based/neuromorphic camera. The data stream is thus reduced to the intensity and location of the pixels lit up by each ion event, together with a time stamp indicating the arrival time of the ion at the detector. This dramatically increases the duty cycle of the method and provides insights for the execution of other high repetition-rate ion imaging experiments.
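
    To make the data-rate argument concrete, here is a toy sketch (numbers and record format are illustrative, not from the paper) that converts a dense ion image into a sparse list of (x, y, intensity, timestamp) records, which is essentially what an event-based read-out delivers.

    ```python
    import numpy as np

    # Only pixels lit by an ion event are kept, each as a compact record,
    # instead of storing every pixel of every frame.
    def frame_to_events(frame, timestamp_us, threshold=0):
        """Convert one dense ion image into a sparse event list."""
        ys, xs = np.nonzero(frame > threshold)
        records = np.zeros(len(xs), dtype=[("x", "u2"), ("y", "u2"),
                                           ("intensity", "u2"), ("t_us", "u8")])
        records["x"], records["y"] = xs, ys
        records["intensity"] = frame[ys, xs]
        records["t_us"] = timestamp_us
        return records

    # Hypothetical numbers: a 1-megapixel, 16-bit frame occupies 2 MB regardless
    # of content, while ~100 ion hits occupy roughly 1.4 kB as event records.
    frame = np.zeros((1000, 1000), dtype=np.uint16)
    hits = np.random.default_rng(0).integers(0, 1000, size=(100, 2))
    frame[hits[:, 0], hits[:, 1]] = 500
    events = frame_to_events(frame, timestamp_us=42)
    print(frame.nbytes, events.nbytes)   # dense vs. sparse storage per frame
    ```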

    Event-based Camera Simulation using Monte Carlo Path Tracing with Adaptive Denoising

    This paper presents an algorithm for obtaining an event-based video from the noisy frames produced by physics-based Monte Carlo path tracing of a synthetic 3D scene. Given the nature of the dynamic vision sensor (DVS), rendering event-based video can be viewed as a process of detecting changes in noisy brightness values. Rather than denoising every pixel, we extend a denoising method based on weighted local regression (WLR) to detect the brightness changes directly. Specifically, we derive a threshold that determines the likelihood of an event occurring, which reduces the number of times the regression must be performed. Our method is robust to noisy video frames obtained from only a few path-traced samples. Despite its efficiency, it performs comparably to, or even better than, an approach that exhaustively denoises every frame.
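
    For context, the baseline event-generation model that such a simulator builds on can be written in a few lines: an event fires at a pixel once its log brightness drifts from the last reference level by more than a contrast threshold. This is the generic DVS model, not the paper's WLR-based detector, and the threshold value is arbitrary.

    ```python
    import numpy as np

    def generate_events(frames, timestamps, contrast_threshold=0.2, eps=1e-6):
        """frames: (T, H, W) linear intensities; returns (t, x, y, polarity) tuples."""
        log_ref = np.log(frames[0] + eps)          # per-pixel reference level
        events = []
        for frame, t in zip(frames[1:], timestamps[1:]):
            log_now = np.log(frame + eps)
            diff = log_now - log_ref
            fired = np.abs(diff) >= contrast_threshold
            ys, xs = np.nonzero(fired)
            for x, y in zip(xs, ys):
                events.append((t, x, y, int(np.sign(diff[y, x]))))
            log_ref[fired] = log_now[fired]        # reset reference where events fired
        return events
    ```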

    eWand: A calibration framework for wide baseline frame-based and event-based camera systems

    Accurate calibration is crucial for using multiple cameras to triangulate the position of objects precisely. However, it is also a time-consuming process that needs to be repeated whenever the cameras are displaced. The standard approach is to use a printed pattern with known geometry to estimate the intrinsic and extrinsic parameters of the cameras. The same idea can be applied to event-based cameras, though it requires extra work: a printed pattern can be detected by reconstructing frames from events, or a blinking pattern can be displayed on a screen and detected directly from the events. Such calibration methods can provide accurate intrinsic calibration for both frame- and event-based cameras. However, 2D patterns have several limitations for multi-camera extrinsic calibration when the cameras have very different viewpoints and a wide baseline: the pattern can only be detected from one direction and needs to be of significant size to compensate for its distance to the camera, which makes extrinsic calibration time-consuming and cumbersome. To overcome these limitations, we propose eWand, a new method that uses blinking LEDs inside opaque spheres instead of a printed or displayed pattern. Our method provides a faster, easier-to-use extrinsic calibration approach that maintains high accuracy for both event- and frame-based cameras.
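
    One plausible way to pick out the wand's blinking LEDs in an event stream (a hedged sketch, not the eWand implementation; the frequency and tolerance values are invented) is to test whether ON events in a pixel cluster recur at a known blink frequency:

    ```python
    import numpy as np

    def matches_led(event_times_us, led_freq_hz, tolerance=0.1):
        """event_times_us: sorted ON-event timestamps (microseconds) from one cluster."""
        if len(event_times_us) < 3:
            return False
        periods = np.diff(np.asarray(event_times_us, dtype=float)) * 1e-6   # seconds
        estimated_freq = 1.0 / np.median(periods)
        return abs(estimated_freq - led_freq_hz) <= tolerance * led_freq_hz

    # Example: a cluster blinking every 2 ms matches a hypothetical 500 Hz LED.
    times = np.arange(0, 20_000, 2_000)
    print(matches_led(times, led_freq_hz=500))   # True
    ```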

    Real-time 6-DoF Pose Estimation by an Event-based Camera using Active LED Markers

    Real-time applications for autonomous operations depend largely on fast and robust vision-based localization systems. Since image processing tasks require handling large amounts of data, the available computational resources often limit the performance of other processes. To overcome this limitation, traditional marker-based localization systems are widely used, since they are easy to integrate and achieve reliable accuracy. However, classical marker-based localization systems depend heavily on standard cameras with low frame rates, which often lack accuracy due to motion blur. In contrast, event-based cameras provide high temporal resolution and a high dynamic range, which can be exploited for fast localization tasks even under challenging visual conditions. This paper proposes a simple but effective event-based pose estimation system using active LED markers (ALM) for fast and accurate pose estimation. The proposed algorithm operates in real time with a latency below 0.5 ms while maintaining output rates of 3 kHz. Experimental results in static and dynamic scenarios demonstrate the performance of the proposed approach in terms of computational speed and absolute accuracy, using the OptiTrack system as the measurement reference.
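
    Once the LED markers have been located in the event stream, the 6-DoF pose follows from a standard perspective-n-point solve. The sketch below uses OpenCV's solvePnP as a generic stand-in; the marker layout, camera intrinsics, and detected coordinates are made-up example values, not those of the paper.

    ```python
    import numpy as np
    import cv2

    # Known 3D layout of four LED markers (metres) and their detected 2D
    # positions in the event camera's image plane (pixels) -- example values.
    marker_points_3d = np.array([[0.0, 0.0, 0.0],
                                 [0.1, 0.0, 0.0],
                                 [0.1, 0.1, 0.0],
                                 [0.0, 0.1, 0.0]], dtype=np.float64)
    detected_points_2d = np.array([[320.0, 240.0],
                                   [420.0, 238.0],
                                   [422.0, 338.0],
                                   [318.0, 340.0]], dtype=np.float64)
    camera_matrix = np.array([[800.0, 0.0, 320.0],
                              [0.0, 800.0, 240.0],
                              [0.0, 0.0, 1.0]])
    dist_coeffs = np.zeros(5)

    ok, rvec, tvec = cv2.solvePnP(marker_points_3d, detected_points_2d,
                                  camera_matrix, dist_coeffs)
    if ok:
        rotation_matrix, _ = cv2.Rodrigues(rvec)   # camera pose w.r.t. the marker
    ```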

    Fusing Event-based Camera and Radar for SLAM Using Spiking Neural Networks with Continual STDP Learning

    This work proposes a first-of-its-kind SLAM architecture that fuses an event-based camera and a Frequency Modulated Continuous Wave (FMCW) radar for drone navigation. Each sensor is processed by a bio-inspired Spiking Neural Network (SNN) with continual Spike-Timing-Dependent Plasticity (STDP) learning, as observed in the brain. In contrast to most learning-based SLAM systems, which (a) require the acquisition of a representative dataset of the environment in which navigation must be performed and (b) require an offline training phase, our method needs no offline training; instead, the SNN continuously learns features from the input data on the fly via STDP. At the same time, the SNN outputs are used as feature descriptors for loop closure detection and map correction. We conduct numerous experiments to benchmark our system against state-of-the-art RGB methods and demonstrate the robustness of our DVS-Radar SLAM approach under strong lighting variations.
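
    For readers unfamiliar with STDP, the textbook pair-based update rule (a generic formulation with arbitrary constants, not the paper's continual-learning scheme) strengthens a synapse when the presynaptic spike precedes the postsynaptic spike and weakens it otherwise:

    ```python
    import numpy as np

    def stdp_delta_w(t_pre_ms, t_post_ms, a_plus=0.01, a_minus=0.012,
                     tau_plus=20.0, tau_minus=20.0):
        """Weight change for one pre/post spike pair (pair-based STDP)."""
        dt = t_post_ms - t_pre_ms
        if dt >= 0:                                    # pre before post: potentiation
            return a_plus * np.exp(-dt / tau_plus)
        return -a_minus * np.exp(dt / tau_minus)       # post before pre: depression

    print(stdp_delta_w(10.0, 15.0))   # small positive weight change
    print(stdp_delta_w(15.0, 10.0))   # small negative weight change
    ```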