Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz), resulting in
reduced motion blur. Hence, event cameras have great potential for robotics and
computer vision in scenarios that are challenging for traditional cameras, such
as those requiring low latency, high speed, and high dynamic range. However,
novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present event cameras from their working principle, the actual sensors that are
available and the tasks that they have been used for, from low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
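As a rough illustration of the event stream described above, the sketch below (Python, with made-up field names and toy data) sums the signed polarities of events into a 2D histogram, one simple way such asynchronous output is often turned into frame-like input for conventional vision algorithms; it is not any particular method from the survey.

```python
import numpy as np

# Hypothetical event stream: each event carries a timestamp (s),
# pixel coordinates (x, y) and a polarity (+1 brightness increase,
# -1 brightness decrease). Field names and values are illustrative only.
events = [
    (0.000012, 120, 45, +1),
    (0.000031, 121, 45, -1),
    (0.000047, 300, 200, +1),
]

HEIGHT, WIDTH = 260, 346  # DAVIS346-like resolution, for illustration

def accumulate_events(events, height=HEIGHT, width=WIDTH):
    """Sum signed event polarities per pixel to form a frame-like image."""
    frame = np.zeros((height, width), dtype=np.float32)
    for t, x, y, polarity in events:
        frame[y, x] += polarity
    return frame

event_frame = accumulate_events(events)
print(event_frame.shape, event_frame.sum())
```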
The Event-Camera Dataset and Simulator: Event-based Data for Pose Estimation, Visual Odometry, and SLAM
New vision sensors, such as the Dynamic and Active-pixel Vision sensor
(DAVIS), incorporate a conventional global-shutter camera and an event-based
sensor in the same pixel array. These sensors have great potential for
high-speed robotics and computer vision because they allow us to combine the
benefits of conventional cameras with those of event-based sensors: low
latency, high temporal resolution, and very high dynamic range. However, new
algorithms are required to exploit the sensor characteristics and cope with its
unconventional output, which consists of a stream of asynchronous brightness
changes (called "events") and synchronous grayscale frames. For this purpose,
we present and release a collection of datasets captured with a DAVIS in a
variety of synthetic and real environments, which we hope will motivate
research on new algorithms for high-speed and high-dynamic-range robotics and
computer-vision applications. In addition to global-shutter intensity images
and asynchronous events, we provide inertial measurements and ground-truth
camera poses from a motion-capture system. The latter allows comparing the pose
accuracy of ego-motion estimation algorithms quantitatively. All the data are
released both as standard text files and binary files (i.e., rosbag). This
paper provides an overview of the available data and describes a simulator that
we release open-source to create synthetic event-camera data.
Comment: 7 pages, 4 figures, 3 tables
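As a hedged sketch only: assuming the events are stored one per line in the plain-text files as `timestamp x y polarity` (the exact column order should be checked against the dataset documentation), a minimal Python loader could look like this.

```python
import numpy as np

def load_events_txt(path):
    """Load events from a plain-text file with one event per line.

    Assumes the common 'timestamp x y polarity' column order; verify
    this against the dataset's own documentation before relying on it.
    """
    data = np.loadtxt(path)
    timestamps = data[:, 0]                   # seconds
    xs = data[:, 1].astype(np.int32)          # pixel column
    ys = data[:, 2].astype(np.int32)          # pixel row
    polarities = data[:, 3].astype(np.int8)   # 0/1 or -1/+1 depending on release
    return timestamps, xs, ys, polarities
```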
Increasing trap stiffness with position clamping in holographic optical tweezers
We present a holographic optical tweezers system capable of position clamping multiple particles. Moving an optical trap in response to the trapped object's motion is a powerful technique for optical control and force measurement. We have now realised this experimentally using a Boulder Nonlinear Systems Spatial Light Modulator (SLM) with a refresh rate of 203 Hz. We obtain a reduction of 44% in the variance of the bead's position, corresponding to an increase in effective trap stiffness of 77%. This reduction relies on the generation of holograms at high speed. We present software capable of calculating holograms in under 1 ms using a graphics processing unit. © 2009 Optical Society of America
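The quoted figures are consistent with the equipartition estimate of trap stiffness, where stiffness is inversely proportional to positional variance (k = k_B·T / Var(x)): a 44% reduction in variance implies roughly 1/0.56 ≈ 1.79× the stiffness, i.e. an increase of about 77-79%. A quick, purely illustrative check in Python:

```python
# Equipartition estimate: trap stiffness k = k_B * T / Var(x),
# so stiffness scales inversely with positional variance.
variance_reduction = 0.44                       # 44% lower variance with clamping
stiffness_ratio = 1.0 / (1.0 - variance_reduction)
print(f"stiffness increase ≈ {100 * (stiffness_ratio - 1):.0f}%")  # ~79%, in line with the reported 77%
```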
CED: Color Event Camera Dataset
Event cameras are novel, bio-inspired visual sensors, whose pixels output
asynchronous and independent timestamped spikes at local intensity changes,
called 'events'. Event cameras offer advantages over conventional frame-based
cameras in terms of latency, high dynamic range (HDR) and temporal resolution.
Until recently, event cameras have been limited to outputting events in the
intensity channel; however, recent advances have resulted in the development of
color event cameras, such as the Color-DAVIS346. In this work, we present and
release the first Color Event Camera Dataset (CED), containing 50 minutes of
footage with both color frames and events. CED features a wide variety of
indoor and outdoor scenes, which we hope will help drive forward event-based
vision research. We also present an extension of the event camera simulator
ESIM that enables simulation of color events. Finally, we present an evaluation
of three state-of-the-art image reconstruction methods that can be used to
convert the Color-DAVIS346 into a continuous-time, HDR, color video camera to
visualise the event stream, and for use in downstream vision applications.
Comment: Conference on Computer Vision and Pattern Recognition Workshop
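As a purely illustrative sketch (not one of the evaluated reconstruction methods): assuming the Color-DAVIS346 places a Bayer-like 2×2 colour filter array over the event pixels, so that each pixel's events belong to a single colour channel determined by its position in the mosaic, a naive per-channel event accumulation for visualisation might look like the following. The channel map used here is a placeholder; the real mosaic layout should be taken from the sensor documentation.

```python
import numpy as np

HEIGHT, WIDTH = 260, 346  # DAVIS346 resolution

def accumulate_color_events(events, height=HEIGHT, width=WIDTH):
    """Sum signed event polarities into one image per colour channel.

    Assumes a Bayer-like 2x2 colour filter array in which a pixel's
    channel is given by (y % 2, x % 2); the real mosaic layout of the
    Color-DAVIS346 should be checked against the sensor documentation.
    """
    channel_of = {(0, 0): "R", (0, 1): "G", (1, 0): "G", (1, 1): "B"}  # placeholder layout
    frames = {c: np.zeros((height, width), dtype=np.float32) for c in "RGB"}
    for t, x, y, polarity in events:
        frames[channel_of[(y % 2, x % 2)]][y, x] += polarity
    return frames

# Toy events: (timestamp, x, y, polarity)
toy_events = [(0.001, 10, 20, +1), (0.002, 11, 20, -1), (0.003, 10, 21, +1)]
frames = accumulate_color_events(toy_events)
print({c: f.sum() for c, f in frames.items()})
```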
MilliSonic: Pushing the Limits of Acoustic Motion Tracking
Recent years have seen interest in device tracking and localization using
acoustic signals. However, state-of-the-art acoustic motion tracking systems do
not achieve millimeter accuracy and require a large separation between
microphones and speakers; as a result, they do not meet the requirements of
many VR/AR applications. Further, tracking multiple concurrent acoustic
transmissions from VR devices today requires sacrificing accuracy or frame
rate. We present MilliSonic, a novel system that pushes the limits of
acoustic-based motion tracking. Our core contribution is a novel localization algorithm
that can provably achieve sub-millimeter 1D tracking accuracy in the presence
of multipath, while using only a single beacon with a small 4-microphone
array. Further, MilliSonic enables concurrent tracking of up to four smartphones
without reducing frame rate or accuracy. Our evaluation shows that MilliSonic
achieves 0.7 mm median 1D accuracy and a 2.6 mm median 3D accuracy for
smartphones, which is 5x more accurate than state-of-the-art systems.
MilliSonic enables two previously infeasible interaction applications: a) 3D
tracking of VR headsets using the smartphone as a beacon and b) fine-grained 3D
tracking for the Google Cardboard VR system using a small microphone array.
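The abstract does not spell out the localization algorithm, so the sketch below only illustrates the general principle of phase-based acoustic ranging, which is why sub-millimeter resolution from acoustic signals is plausible; it is not MilliSonic's actual method, and the carrier frequency and phase resolution are assumed values.

```python
# Back-of-the-envelope illustration of phase-based acoustic ranging
# (general principle only, not MilliSonic's algorithm).
SPEED_OF_SOUND = 343.0      # m/s in air at roughly 20 C
carrier_freq = 20_000.0     # Hz, near-ultrasonic band playable by phone speakers (assumed)

wavelength = SPEED_OF_SOUND / carrier_freq            # ~17 mm
phase_resolution_cycles = 1.0 / 32.0                  # resolving 1/32 of a cycle (assumed)
displacement_resolution = wavelength * phase_resolution_cycles

print(f"wavelength ≈ {wavelength * 1e3:.1f} mm")
print(f"displacement resolution ≈ {displacement_resolution * 1e3:.2f} mm")  # ~0.5 mm, i.e. sub-millimeter
```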