CED: Color Event Camera Dataset
Event cameras are novel, bio-inspired visual sensors whose pixels output
asynchronous, independent timestamped spikes in response to local intensity
changes, called 'events'. Event cameras offer advantages over conventional frame-based
cameras in terms of latency, high dynamic range (HDR) and temporal resolution.
Until recently, event cameras have been limited to outputting events in the
intensity channel; however, recent advances have resulted in the development of
color event cameras, such as the Color-DAVIS346. In this work, we present and
release the first Color Event Camera Dataset (CED), containing 50 minutes of
footage with both color frames and events. CED features a wide variety of
indoor and outdoor scenes, which we hope will help drive forward event-based
vision research. We also present an extension of the event camera simulator
ESIM that enables simulation of color events. Finally, we present an evaluation
of three state-of-the-art image reconstruction methods that can be used to
convert the Color-DAVIS346 into a continuous-time, HDR, color video camera to
visualise the event stream, and for use in downstream vision applications.
Comment: Conference on Computer Vision and Pattern Recognition Workshop
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(in the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz) resulting in
reduced motion blur. Hence, event cameras have a large potential for robotics
and computer vision in challenging scenarios for traditional cameras, such as
low-latency, high speed, and high dynamic range. However, novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present event cameras from their working principle, the actual sensors that are
available and the tasks that they have been used for, from low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
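The event encoding described above (time, location, and sign of each brightness change) is simple to work with in code. The following is a minimal, illustrative Python sketch that accumulates event polarities into a 2D frame; the tiny sensor size and the hand-made events are assumptions for illustration, not part of any specific camera API.

```python
import numpy as np

# Hypothetical tiny sensor for illustration; real sensors such as the
# DAVIS346 have resolutions like 346x260.
H, W = 4, 4

# Each event encodes (timestamp_us, x, y, polarity), matching the
# description above: time, pixel location, and sign of the brightness change.
events = [
    (10, 1, 2, +1),
    (15, 1, 2, +1),
    (20, 3, 0, -1),
]

def accumulate(events, shape):
    """Sum event polarities per pixel to build a simple 2D event frame."""
    frame = np.zeros(shape, dtype=np.int32)
    for t, x, y, p in events:
        frame[y, x] += p
    return frame

frame = accumulate(events, (H, W))
```

Accumulated frames like this are among the simplest event representations; many of the learning-based techniques surveyed operate on richer encodings that also retain timestamps.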
EV-FlowNet: Self-Supervised Optical Flow Estimation for Event-based Cameras
Event-based cameras have shown great promise in a variety of situations where
frame-based cameras suffer, such as high-speed motions and high-dynamic-range
scenes. However, processing event measurements requires a new class of
hand-crafted algorithms. Deep learning has shown great success in
providing model free solutions to many problems in the vision community, but
existing networks have been developed with frame-based images in mind, and
labeled event data does not exist on the scale of the labeled image data
available for supervised training. To address these points, we present EV-FlowNet, a novel
self-supervised deep learning pipeline for optical flow estimation for
event-based cameras. In particular, we introduce an image-based representation of a
given event stream, which is fed into a self-supervised neural network as the
sole input. The corresponding grayscale images captured from the same camera at
the same time as the events are then used as a supervisory signal to provide a
loss function at training time, given the estimated flow from the network. We
show that the resulting network is able to accurately predict optical flow from
events only in a variety of different scenes, with performance competitive to
image-based networks. This method not only allows for accurate estimation of
dense optical flow, but also provides a framework for the transfer of other
self-supervised methods to the event-based domain.
Comment: 9 pages, 5 figures, 1 table. Accompanying video:
https://youtu.be/eMHZBSoq0sE. Dataset:
https://daniilidis-group.github.io/mvsec/, Robotics: Science and Systems 201
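As a rough illustration of the kind of image-based event representation the abstract mentions, the sketch below builds a four-channel image from an event stream: per-pixel counts and most-recent timestamps for each polarity. The channel layout and names are assumptions for illustration and may differ from the paper's exact representation.

```python
import numpy as np

def events_to_image(events, shape):
    """Encode a list of (t, x, y, polarity) events as a 4-channel image:
    channels 0/1 hold per-pixel counts of positive/negative events, and
    channels 2/3 hold the most recent timestamp seen for each polarity.
    This channel layout is illustrative, not the paper's exact definition."""
    H, W = shape
    img = np.zeros((4, H, W), dtype=np.float32)
    for t, x, y, p in events:
        if p > 0:
            img[0, y, x] += 1
            img[2, y, x] = max(img[2, y, x], t)
        else:
            img[1, y, x] += 1
            img[3, y, x] = max(img[3, y, x], t)
    return img

# Example with three hand-made events on a 2x2 sensor.
img = events_to_image([(5, 0, 0, +1), (7, 0, 0, +1), (3, 1, 1, -1)], (2, 2))
```

A fixed-size tensor like this is what lets a standard convolutional network consume an otherwise asynchronous, variable-length event stream.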
The Event-Camera Dataset and Simulator: Event-based Data for Pose Estimation, Visual Odometry, and SLAM
New vision sensors, such as the Dynamic and Active-pixel Vision sensor
(DAVIS), incorporate a conventional global-shutter camera and an event-based
sensor in the same pixel array. These sensors have great potential for
high-speed robotics and computer vision because they allow us to combine the
benefits of conventional cameras with those of event-based sensors: low
latency, high temporal resolution, and very high dynamic range. However, new
algorithms are required to exploit the sensor characteristics and cope with its
unconventional output, which consists of a stream of asynchronous brightness
changes (called "events") and synchronous grayscale frames. For this purpose,
we present and release a collection of datasets captured with a DAVIS in a
variety of synthetic and real environments, which we hope will motivate
research on new algorithms for high-speed and high-dynamic-range robotics and
computer-vision applications. In addition to global-shutter intensity images
and asynchronous events, we provide inertial measurements and ground-truth
camera poses from a motion-capture system. The latter allows comparing the pose
accuracy of ego-motion estimation algorithms quantitatively. All the data are
released both as standard text files and binary files (i.e., rosbag). This
paper provides an overview of the available data and describes a simulator that
we release open-source to create synthetic event-camera data.
Comment: 7 pages, 4 figures, 3 tables
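For the plain-text release mentioned above, a minimal parser might look like the sketch below. The field order `timestamp x y polarity` is an assumption about the export layout; check the dataset's own documentation before relying on it.

```python
def parse_event_line(line):
    """Parse one line of a plain-text event file assumed to have the form
    't x y polarity' (field order is an assumption, not a spec)."""
    t_str, x_str, y_str, p_str = line.split()
    return float(t_str), int(x_str), int(y_str), int(p_str)

# Example line in the assumed format.
t, x, y, p = parse_event_line("0.003811 96 133 0")
```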
In vivo volumetric imaging of human retinal circulation with phase-variance optical coherence tomography
We present in vivo volumetric images of human retinal micro-circulation using Fourier-domain optical coherence tomography (Fd-OCT) with the phase-variance based motion contrast method. Currently, fundus fluorescein angiography (FA) is the standard technique in clinical settings for visualizing blood circulation of the retina. High contrast imaging of retinal vasculature is achieved by injection of a fluorescein dye into the systemic circulation. We previously reported phase-variance optical coherence tomography (pvOCT) as an alternative, non-invasive technique to image human retinal capillaries. In contrast to FA, pvOCT allows not only noninvasive visualization of a two-dimensional retinal perfusion map but also of the volumetric morphology of the retinal microvasculature, with high sensitivity. In this paper we report high-speed acquisition at a 125 kHz A-scan rate with pvOCT to reduce motion artifacts and increase the scanning area when compared with previous reports. Two scanning schemes with different sampling densities and scanning areas are evaluated to find optimal parameters for high acquisition speed in vivo imaging. In order to evaluate this technique, we compare pvOCT capillary imaging at 3x3 mm^2 and 1.5x1.5 mm^2 with fundus FA for a normal human subject. Additionally, a volumetric view of retinal capillaries and a stitched image acquired with ten 3x3 mm^2 pvOCT sub-volumes are presented. Visualization of retinal vasculature with pvOCT has potential for the diagnosis of retinal vascular diseases.
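At its core, the phase-variance contrast described above measures how much the OCT phase fluctuates between repeated B-scans at the same location: static tissue yields low variance, flowing blood yields high variance. The sketch below illustrates that idea only; it omits bulk-motion correction and other practical steps of the actual method.

```python
import numpy as np

def phase_variance(phases):
    """Motion contrast as the variance of phase differences between
    consecutive B-scans acquired at the same location.
    phases: array of shape (n_bscans, depth, width), in radians.
    Simplified sketch: bulk-motion correction is omitted."""
    dphi = np.diff(phases, axis=0)
    # Wrap phase differences into (-pi, pi] before taking the variance.
    dphi = np.angle(np.exp(1j * dphi))
    return np.var(dphi, axis=0)

# Static tissue (constant phase over time) should give zero contrast.
pv_static = phase_variance(np.zeros((4, 2, 2)))
```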
HST NICMOS Images of the HH 7/11 Outflow in NGC1333
We present near infrared images in H2 at 2.12 µm of the HH 7/11 outflow and
its driving source SVS 13 taken with HST NICMOS 2 camera, as well as archival
Ha and [SII] optical images obtained with the WFPC2 camera. The NICMOS high
angular resolution observations confirm the nature of a small scale jet arising
from SVS 13, and resolve a structure in the HH 7 working surface that could
correspond to Mach disk H2 emission. The H2 jet has a length of 430 AU (at a
distance of 350 pc), an aspect ratio of 2.2 and morphologically resembles the
well-known DG Tau optical micro-jet. The kinematical age of the jet (approx. 10
yr) coincides with the time since the last outburst from SVS 13. If we
interpret the observed H2 flux density with molecular shock models of 20-30
km/s, then the jet has a density as high as 10^5 cm^-3. The presence of this
small jet warns that studies searching for H2 in circumstellar disks may be
contaminated by H2 emission from an outflow. At the working surface,
the smooth H2 morphology of the HH 7 bowshock indicates that the magnetic field
is strong, playing a major role in stabilizing this structure. The H2 flux
density of the Mach disk, when compared with that of the bowshock, suggests
that its emission is produced by molecular shocks of less than 20 km/s. The
WFPC2 optical images display several of the global features already inferred
from ground-based observations, like the filamentary structure in HH 8 and HH
10, which suggests a strong interaction of the outflow with its cavity. The H2
jet is not detected in [SII] or Ha; however, there is a small clump at approx.
5'' NE of SVS 13 that could indicate the presence of either a different
outburst event or the north edge of the outflow cavity.
Comment: 13 pages, 5 figures (JPEGs
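As a back-of-envelope check on the kinematical age quoted in the abstract (age = length / flow speed), the arithmetic below assumes a jet speed of about 200 km/s, a typical Herbig-Haro jet velocity that is not stated in the abstract itself.

```python
# Unit constants: kilometers per astronomical unit, seconds per year.
AU_KM = 1.495978707e8
YEAR_S = 3.156e7

length_km = 430 * AU_KM   # jet length of 430 AU, from the abstract
speed_km_s = 200.0        # assumed typical HH jet speed (not from the abstract)

age_yr = length_km / speed_km_s / YEAR_S  # comes out near the quoted ~10 yr
```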