13 research outputs found

    High Speed and High Dynamic Range Video with an Event Camera

    Event cameras are novel sensors that report brightness changes in the form of a stream of asynchronous "events" instead of intensity frames. They offer significant advantages with respect to conventional cameras: high temporal resolution, high dynamic range, and no motion blur. While the stream of events encodes in principle the complete visual signal, the reconstruction of an intensity image from a stream of events is an ill-posed problem in practice. Existing reconstruction approaches are based on hand-crafted priors and strong assumptions about the imaging process as well as the statistics of natural images. In this work we propose to learn to reconstruct intensity images from event streams directly from data instead of relying on any hand-crafted priors. We propose a novel recurrent network to reconstruct videos from a stream of events, and train it on a large amount of simulated event data. During training we propose to use a perceptual loss to encourage reconstructions to follow natural image statistics. We further extend our approach to synthesize color images from color event streams. Our quantitative experiments show that our network surpasses state-of-the-art reconstruction methods by a large margin in terms of image quality (>20%), while comfortably running in real-time. We show that the network is able to synthesize high framerate videos (> 5,000 frames per second) of high-speed phenomena (e.g. a bullet hitting an object) and is able to provide high dynamic range reconstructions in challenging lighting conditions. As an additional contribution, we demonstrate the effectiveness of our reconstructions as an intermediate representation for event data. We show that off-the-shelf computer vision algorithms can be applied to our reconstructions for tasks such as object classification and visual-inertial odometry and that this strategy consistently outperforms algorithms that were specifically designed for event data. We release the reconstruction code, a pre-t..
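
    The abstract describes binning the asynchronous event stream into a tensor and passing it through a recurrent network that outputs intensity frames. The following is a minimal PyTorch sketch of that idea only, assuming a voxel-grid event representation and a toy convolutional recurrent cell; all module names, shapes, and the binning scheme are illustrative assumptions, not the authors' released architecture.

```python
# Minimal sketch of event-to-video reconstruction (illustrative, not the paper's code).
# Events are assumed to be (x, y, t, p) tuples; all names and shapes are hypothetical.
import torch
import torch.nn as nn

def events_to_voxel_grid(events, num_bins, height, width):
    """Accumulate signed event polarities into a (num_bins, H, W) voxel grid."""
    grid = torch.zeros(num_bins, height, width)
    if len(events) == 0:
        return grid
    t0, t1 = events[0][2], events[-1][2]
    scale = (num_bins - 1) / max(t1 - t0, 1e-9)
    for x, y, t, p in events:
        b = min(int((t - t0) * scale), num_bins - 1)
        grid[b, int(y), int(x)] += 1.0 if p > 0 else -1.0
    return grid

class RecurrentReconstructor(nn.Module):
    """Tiny convolutional recurrent net: voxel grid + previous state -> intensity frame."""
    def __init__(self, num_bins=5, hidden=32):
        super().__init__()
        self.encode = nn.Conv2d(num_bins + hidden, hidden, 3, padding=1)
        self.head = nn.Conv2d(hidden, 1, 3, padding=1)
        self.hidden = hidden

    def forward(self, voxel, state=None):
        # voxel: (B, num_bins, H, W); state carries information across event windows.
        if state is None:
            state = voxel.new_zeros(voxel.shape[0], self.hidden, *voxel.shape[-2:])
        state = torch.tanh(self.encode(torch.cat([voxel, state], dim=1)))
        image = torch.sigmoid(self.head(state))  # reconstructed frame in [0, 1]
        # During training, a perceptual (feature-space) loss would compare `image`
        # to a ground-truth frame, per the abstract.
        return image, state
```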

    A higher-order MRF based variational model for multiplicative noise reduction

    The Fields of Experts (FoE) image prior model, a filter-based higher-order Markov Random Field (MRF) model, has been shown to be effective for many image restoration problems. Motivated by the successes of FoE-based approaches, in this letter we propose a novel variational model for multiplicative noise reduction based on the FoE image prior. The resulting model corresponds to a non-convex minimization problem, which can be solved by a recently published non-convex optimization algorithm. Experimental results on synthetic speckle noise and real synthetic aperture radar (SAR) images suggest that the performance of our proposed method is on par with the best published despeckling algorithms. In addition, our model offers the advantage of highly efficient inference: our GPU-based implementation takes less than 1 s to produce state-of-the-art despeckling results. Comment: 5 pages, 5 figures, to appear in IEEE Signal Processing Letters.
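
    As a concrete (and only illustrative) instance of such a model, an FoE-regularized despeckling energy can be sketched as an FoE prior plus a standard multiplicative-noise data term; the log-based fidelity below is a common choice for speckle and is an assumption here, not necessarily the exact term used in the letter.

```latex
% Sketch of an FoE-regularized despeckling energy (illustrative form).
% u: latent image, f: observed speckled image, k_i: learned FoE filters,
% \phi_i: non-convex penalty functions, \lambda: data-term weight, p: pixel index.
\min_{u > 0} \;
  \sum_{i=1}^{N} \sum_{p} \phi_i\!\left( (k_i * u)_p \right)
  \;+\; \lambda \sum_{p} \left( \log u_p + \frac{f_p}{u_p} \right)
```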

    Deep Drone Acrobatics

    Performing acrobatic maneuvers with quadrotors is extremely challenging. Acrobatic flight requires high thrust and extreme angular accelerations that push the platform to its physical limits. Professional drone pilots often measure their level of mastery by flying such maneuvers in competitions. In this paper, we propose to learn a sensorimotor policy that enables an autonomous quadrotor to fly extreme acrobatic maneuvers with only onboard sensing and computation. We train the policy entirely in simulation by leveraging demonstrations from an optimal controller that has access to privileged information. We use appropriate abstractions of the visual input to enable transfer to a real quadrotor. We show that the resulting policy can be directly deployed in the physical world without any fine-tuning on real data. Our methodology has several favorable properties: it does not require a human expert to provide demonstrations, it cannot harm the physical system during training, and it can be used to learn maneuvers that are challenging even for the best human pilots. Our approach enables a physical quadrotor to fly maneuvers such as the Power Loop, the Barrel Roll, and the Matty Flip, during which it incurs accelerations of up to 3g.
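
    The training scheme described above, learning a sensorimotor policy by imitating an expert controller that has privileged access to the simulator state, is essentially simulation-based imitation learning. Below is a minimal, hypothetical sketch of such a loop; the simulator, privileged expert, and input abstraction are dummy stand-ins, not the paper's implementation.

```python
# Hypothetical sketch of privileged-expert imitation learning in simulation
# (stand-in dynamics, expert, and abstraction; not the Deep Drone Acrobatics code).
import torch
import torch.nn as nn

STATE_DIM, OBS_DIM, ACT_DIM = 12, 16, 4   # illustrative sizes

def simulate_step(state, action):
    """Dummy stand-in for one quadrotor simulator step."""
    return state + 0.01 * torch.randn(STATE_DIM)

def privileged_expert(state):
    """Dummy stand-in for an optimal controller with full-state access."""
    return torch.tanh(state[:ACT_DIM])

def abstract_observation(state):
    """Dummy stand-in for the abstracted visual/inertial input fed to the policy."""
    return torch.cat([state, torch.zeros(OBS_DIM - STATE_DIM)])

policy = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(), nn.Linear(64, ACT_DIM))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

for rollout in range(100):                 # collect rollouts and imitate the expert
    state = torch.zeros(STATE_DIM)
    for t in range(200):
        obs = abstract_observation(state)
        expert_action = privileged_expert(state)
        loss = ((policy(obs) - expert_action) ** 2).mean()
        optimizer.zero_grad(); loss.backward(); optimizer.step()
        # Roll the simulator forward with the student's own action (DAgger-style).
        state = simulate_step(state, policy(obs).detach())
```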

    Insights Into Analysis Operator Learning: From Patch-Based Sparse Models to Higher Order MRFs


    Multi-Modality Depth Map Fusion using Primal-Dual Optimization

    We present a novel fusion method that combines complementary 3D and 2D imaging techniques. Consider a Time-of-Flight sensor that acquires a dense depth map over a wide depth range but at comparably low resolution. Complementarily, a stereo sensor generates a disparity map at high resolution but with occlusions and outliers. In our method, we fuse depth data, and optionally also intensity data, using a primal-dual optimization with an energy functional designed to compensate for missing parts, filter strong outliers, and reduce acquisition noise. The numerical algorithm is efficiently implemented on a GPU and achieves a processing speed of 10 to 15 frames per second. Experiments on synthetic, real, and benchmark datasets show that the results are superior to those of each sensor alone and of competing optimization techniques. In a practical example, we fuse a Kinect triangulation sensor and a small Time-of-Flight camera to create a gaming sensor with superior resolution, acquisition range, and accuracy.
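
    For readers unfamiliar with primal-dual optimization in this context, here is a minimal sketch of fusing two depth maps with per-pixel confidence weights under a total-variation regularizer, using Chambolle-Pock updates. The energy and weighting are assumptions for illustration; the paper's exact functional is not reproduced.

```python
# Minimal primal-dual (Chambolle-Pock) sketch for TV-regularized fusion of two
# depth maps with per-pixel confidences. Illustrative only, not the paper's energy.
import numpy as np

def grad(u):
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):
    d = np.zeros_like(px)
    d[:, :-1] += px[:, :-1]; d[:, 1:] -= px[:, :-1]
    d[:-1, :] += py[:-1, :]; d[1:, :] -= py[:-1, :]
    return d

def fuse_depth(d_tof, d_stereo, w_tof, w_stereo, lam=10.0, iters=200):
    """Minimize TV(u) + lam/2 * sum_k w_k * (u - d_k)^2 over the fused map u."""
    u = d_tof.copy(); u_bar = u.copy()
    px = np.zeros_like(u); py = np.zeros_like(u)
    tau = sigma = 1.0 / np.sqrt(8.0)          # step sizes with tau*sigma*L^2 <= 1
    w_sum = w_tof + w_stereo
    w_data = w_tof * d_tof + w_stereo * d_stereo
    for _ in range(iters):
        gx, gy = grad(u_bar)                  # dual ascent on the TV term
        px, py = px + sigma * gx, py + sigma * gy
        norm = np.maximum(1.0, np.sqrt(px**2 + py**2))
        px, py = px / norm, py / norm         # project onto the unit ball
        u_old = u                             # primal descent + closed-form data prox
        v = u + tau * div(px, py)
        u = (v + tau * lam * w_data) / (1.0 + tau * lam * w_sum)
        u_bar = 2.0 * u - u_old               # over-relaxation step
    return u
```

    Zero confidence at a pixel (e.g. a stereo occlusion or a missing ToF return) simply drops that sensor's data term there, so the regularizer fills the gap from its neighborhood.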

    Trajectory Optimization for Legged Robots With Slipping Motions

    The dynamics of legged systems are characterized by under-actuation, instability, and contact state switching. We present a trajectory optimization method for generating physically consistent motions under these conditions. By integrating a custom solver for hard contact forces in the system dynamics model, the optimal control algorithm has the authority to freely transition between open, closed, and sliding contact states along the trajectory. Our method can discover stepping motions without a predefined contact schedule. Moreover, the optimizer makes use of slipping contacts if a no-slip condition is too restrictive for the task at hand. Additionally, we show that new behaviors like skating over slippery surfaces emerge automatically, which would not be possible with classical methods that assume stationary contact points. Experiments in simulation and on hardware confirm the physical consistency of the generated trajectories. Our solver achieves iteration rates of 40 Hz for a 1 s horizon and is therefore fast enough to run in a receding-horizon setting.
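
    As a very rough illustration of the "hard contact" idea only: the sketch below resolves a single 2D point contact with Coulomb friction, switching between open, sticking, and sliding states. It is a deliberately simplified toy, not the paper's solver, which operates on full multi-body legged-robot dynamics inside the trajectory optimization.

```python
# Toy hard-contact force resolution for a 2D point mass on a horizontal surface.
# Illustrative only; the paper's custom solver handles full multi-body dynamics.
import numpy as np

def contact_force(f_ext, v_t, mu, in_contact, eps=1e-9):
    """Return contact force (f_t, f_n) given external force f_ext = (f_x, f_y) and
    tangential velocity v_t, enforcing f_n >= 0 and the friction cone |f_t| <= mu*f_n."""
    if not in_contact or f_ext[1] >= 0.0:
        return np.zeros(2)                      # open contact: no contact force
    f_n = -f_ext[1]                             # normal force cancels the push into the ground
    if abs(v_t) < eps and abs(f_ext[0]) <= mu * f_n:
        f_t = -f_ext[0]                         # sticking: cancel the tangential force
    else:
        direction = v_t if abs(v_t) >= eps else f_ext[0]
        f_t = -mu * f_n * np.sign(direction)    # sliding: force on the friction-cone boundary
    return np.array([f_t, f_n])

# Example: pushed sideways harder than friction can resist -> sliding contact.
print(contact_force(np.array([5.0, -9.81]), v_t=0.0, mu=0.3, in_contact=True))
```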

    Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer
