Learning to Super Resolve Intensity Images from Events
An event camera detects per-pixel intensity changes and produces an
asynchronous event stream with low latency, high dynamic range, and low power
consumption. As a trade-off, the event camera has low spatial resolution. We
propose an end-to-end network to reconstruct high resolution, high dynamic
range (HDR) images directly from the event stream. We evaluate our algorithm on
both simulated and real-world sequences and verify that it captures fine
details of a scene and outperforms the combination of the state-of-the-art
event to image algorithms with the state-of-the-art super resolution schemes in
many quantitative measures by large margins. We further extend our method by
using active pixel sensor (APS) frames or by reconstructing images
iteratively. Comment: To appear in CVPR 2020 as an oral presentation
Lossy Event Compression based on Image-derived Quad Trees and Poisson Disk Sampling
With several advantages over conventional RGB cameras, event cameras have
provided new opportunities for tackling visual tasks under challenging
scenarios with fast motion, high dynamic range, and/or power constraints. Yet
unlike image/video compression, the performance of event compression algorithms
is far from satisfactory or practical. The main challenge in compressing events
is the unique event data form, i.e., a stream of asynchronously fired event
tuples each encoding the 2D spatial location, timestamp, and polarity (denoting
an increase or decrease in brightness). Since events only encode temporal
variations, they lack spatial structure which is crucial for compression. To
address this problem, we propose a novel event compression algorithm based on a
quad tree (QT) segmentation map derived from the adjacent intensity images. The
QT informs 2D spatial priority within the 3D space-time volume. In the event
encoding step, events are first aggregated over time to form polarity-based
event histograms. The histograms are then variably sampled via Poisson Disk
Sampling prioritized by the QT based segmentation map. Next, differential
encoding and run length encoding are employed for encoding the spatial and
polarity information of the sampled events, respectively, followed by Huffman
encoding to produce the final encoded events. Our Poisson Disk Sampling based
Lossy Event Compression (PDS-LEC) algorithm performs rate-distortion based
optimal allocation. On average, our algorithm achieves greater than 6x
compression compared to the state of the art. Comment: 8 main pages
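The encoding pipeline described above begins by aggregating events into polarity-based histograms and ends with run-length coding of polarity. The sketch below illustrates those two generic steps only; the function names, the event tuple layout `(x, y, t, p)`, and all details are illustrative assumptions, not the paper's PDS-LEC implementation (which additionally uses QT-guided Poisson Disk Sampling, differential coding, and Huffman coding).

```python
import numpy as np

def polarity_histograms(events, height, width):
    """Aggregate a window of events into two per-pixel count maps,
    one per polarity. Hypothetical helper: events are assumed to be
    (x, y, t, p) tuples with p in {+1, -1}."""
    pos = np.zeros((height, width), dtype=np.int32)
    neg = np.zeros((height, width), dtype=np.int32)
    for x, y, t, p in events:
        if p > 0:
            pos[y, x] += 1
        else:
            neg[y, x] += 1
    return pos, neg

def run_length_encode(bits):
    """Run-length encode a 1D sequence of polarity symbols into
    (symbol, run_length) pairs, as a stand-in for the RLE stage."""
    runs = []
    if len(bits) == 0:
        return runs
    current, count = bits[0], 1
    for b in bits[1:]:
        if b == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = b, 1
    runs.append((current, count))
    return runs
```

In a full codec the run-length pairs would then be entropy-coded (e.g. with Huffman coding) to produce the final bitstream.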
Event Enhanced High-Quality Image Recovery
With extremely high temporal resolution, event cameras have a large potential
for robotics and computer vision. However, their asynchronous imaging mechanism
often aggravates measurement sensitivity to noise and makes it physically
burdensome to increase the image spatial resolution. To recover high-quality
intensity images, one must address both the denoising and super-resolution
problems for event cameras. Since events depict brightness changes, a
degeneration model enhanced by the events allows clear, sharp high-resolution
latent images to be recovered from noisy, blurry, and low-resolution
intensity observations. Within a sparse learning framework, the events
and the low-resolution intensity observations can be considered jointly. Based
on this, we propose an explainable network, an event-enhanced sparse learning
network (eSL-Net), to recover the high-quality images from event cameras. After
training with a synthetic dataset, the proposed eSL-Net can largely improve the
performance of the state of the art by 7-12 dB. Furthermore, without any
additional training, the proposed eSL-Net can be easily extended to generate
continuous frames at a frame rate as high as that of the events
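Networks like eSL-Net are built by unfolding iterations of a sparse-coding solver into learnable layers. The sketch below shows the generic machinery being unfolded, ISTA for the lasso problem min_z 0.5||y - Dz||^2 + lam||z||_1; the dictionary `D`, observation `y`, and all parameters are illustrative assumptions, not the eSL-Net architecture itself.

```python
import numpy as np

def soft_threshold(x, theta):
    """Soft-thresholding: the proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def ista(y, D, lam=0.1, n_iter=100):
    """Iterative Shrinkage-Thresholding Algorithm for sparse coding.
    Each iteration is a gradient step on the data term followed by
    soft-thresholding; learned sparse-coding networks replace the
    fixed D and thresholds with trainable layer parameters."""
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = soft_threshold(z + D.T @ (y - D @ z) / L, lam / L)
    return z
```

With `D` set to the identity, ISTA reduces to elementwise soft-thresholding of `y`, which makes the shrinkage behavior easy to verify by hand.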