Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz) resulting in
reduced motion blur. Hence, event cameras have large potential for robotics
and computer vision in scenarios that challenge traditional cameras, such as
those demanding low latency, high speed, or high dynamic range. However, novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present event cameras from their working principle, the actual sensors that are
available and the tasks that they have been used for, from low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
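The event stream described in the abstract can be illustrated with a minimal sketch (the class and function names, and the naive accumulation scheme, are illustrative assumptions rather than anything from the survey): each event carries a timestamp, pixel coordinates, and a polarity, and one simple way to visualise a time slice of the stream is to accumulate signed polarities into a 2-D frame.

```python
from dataclasses import dataclass

import numpy as np

@dataclass
class Event:
    t: float   # timestamp in seconds (real sensors resolve microseconds)
    x: int     # pixel column
    y: int     # pixel row
    p: int     # polarity: +1 brightness increase, -1 decrease

def accumulate(events, width, height, t0, t1):
    """Accumulate signed event polarities over [t0, t1) into a frame."""
    frame = np.zeros((height, width), dtype=np.int32)
    for e in events:
        if t0 <= e.t < t1:
            frame[e.y, e.x] += e.p
    return frame

# Toy stream: two ON events at one pixel, one OFF event at another.
events = [Event(0.0001, 3, 2, +1), Event(0.0002, 3, 2, +1), Event(0.0003, 5, 1, -1)]
frame = accumulate(events, width=8, height=4, t0=0.0, t1=0.001)
```

Accumulated frames like this are one of the simplest event representations; the survey discusses many richer ones (time surfaces, voxel grids, learned representations).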
DragonflEYE: a passive approach to aerial collision sensing
This dissertation describes the design, development and test of a passive wide-field optical aircraft collision sensing instrument titled 'DragonflEYE'. Such a "sense-and-avoid" instrument is desired for autonomous unmanned aerial systems operating in civilian airspace. The instrument was configured as a network of smart camera nodes and implemented using commercial, off-the-shelf components. An end-to-end imaging train model was developed and important figures of merit were derived. Transfer functions arising from intervening media were discussed and their impact assessed. Multiple prototypes were developed. The expected performance of the instrument was iteratively evaluated on the prototypes, beginning with modelling activities followed by laboratory tests, ground tests and flight tests. A prototype was mounted on a Bell 205 helicopter for flight tests, with a Bell 206 helicopter acting as the target. Raw imagery was recorded alongside ancillary aircraft data, and stored for the offline assessment of performance. The "range at first detection" (R0) is presented as a robust measure of sensor performance, based on a suitably defined signal-to-noise ratio. The analysis treats target radiance fluctuations, ground clutter, atmospheric effects, platform motion and random noise elements. Under the measurement conditions, R0 exceeded flight crew acquisition ranges. Secondary figures of merit are also discussed, including time to impact, target size and growth, and the impact of resolution on detection range. The hardware was structured to facilitate a real-time hierarchical image-processing pipeline, with selected image processing techniques introduced. In particular, the height of an observed event above the horizon compensates for angular motion of the helicopter platform.
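The "range at first detection" metric can be illustrated with a toy point-target model (the inverse-square signal falloff and all names below are assumptions for illustration, not the dissertation's actual end-to-end imaging-train model): if target signal falls roughly as 1/R² while the noise floor stays fixed, R0 is the largest range at which the SNR still meets the detection threshold.

```python
import math

def first_detection_range(signal_at_1m, noise_sigma, snr_threshold):
    """Toy point-target model: signal falls as 1/R^2, so
    SNR(R) = signal_at_1m / (R**2 * noise_sigma).
    R0 is the largest range where SNR(R) >= snr_threshold."""
    return math.sqrt(signal_at_1m / (noise_sigma * snr_threshold))

# Example: signal 1e6 (arbitrary units at 1 m), unit noise sigma,
# detection threshold SNR = 4  ->  R0 = 500 m.
r0 = first_detection_range(1e6, 1.0, 4.0)
```

The real analysis additionally treats clutter, atmospheric transmission, platform motion and radiance fluctuations, all of which push R0 below this idealised value.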
Mechanisms of place recognition and path integration based on the insect visual system
Animals are often able to solve complex navigational tasks in very challenging terrain,
despite using low resolution sensors and minimal computational power, providing
inspiration for robots. In particular, many species of insect are known to solve complex
navigation problems, often combining an array of different behaviours (Wehner
et al., 1996; Collett, 1996). Their nervous systems are also comparatively simple
relative to those of mammals and other vertebrates.
In the first part of this thesis, the visual input of a navigating desert ant, Cataglyphis
velox, was mimicked by capturing images in ultraviolet (UV) at similar wavelengths
to the ant’s compound eye. The natural segmentation of ground and sky led to
the hypothesis that skyline contours could be used by ants as features for navigation.
As proof of concept, sky-segmented binary images were used as input to an
established localisation algorithm, SeqSLAM (Milford and Wyeth, 2012), validating
the plausibility of this claim (Stone et al., 2014). A follow-up investigation sought to
determine whether using the sky as a feature would help overcome image matching
problems that the ant often faced, such as variance in tilt and yaw rotation. A robotic
localisation study showed that using spherical harmonics (SH), a representation in
the frequency domain, combined with extracted sky can greatly help robots localise
on uneven terrain. Results showed improved performance over state-of-the-art
point-feature localisation methods on fast, bumpy tracks (Stone et al., 2016a).
In the second part, the question of how insects perform a navigational task
called path integration was addressed by modelling part of the brain of the sweat
bee Megalopta genalis. A recent discovery that two populations of cells act as a celestial
compass and visual odometer, respectively, led to the hypothesis that circuitry at their
point of convergence in the central complex (CX) could give rise to path integration.
A firing rate-based model was developed with connectivity derived from the overlap
of observed neural arborisations of individual cells and successfully used to build up
a home vector and steer an agent back to the nest (Stone et al., 2016b). This approach
has the appeal that neural circuitry is highly conserved across insects, so findings
here could have wide implications for insect navigation in general. The developed
model is the first functioning path integrator based on individual cellular
connections.
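The home-vector idea behind path integration can be sketched abstractly (this is a generic vector-summation sketch under assumed names, not the firing-rate central-complex model of Stone et al., 2016b): integrating compass headings and odometric distances over the outbound journey yields a displacement whose negation points back to the nest.

```python
import math

def integrate_path(steps):
    """Accumulate (heading_radians, distance) steps into a displacement;
    the home vector is the negated accumulated displacement."""
    x = y = 0.0
    for heading, distance in steps:
        x += distance * math.cos(heading)
        y += distance * math.sin(heading)
    home_heading = math.atan2(-y, -x)   # direction back to the nest
    home_distance = math.hypot(x, y)    # distance back to the nest
    return home_heading, home_distance

# Out 3 m east, then 4 m north: the nest is 5 m away.
heading, distance = integrate_path([(0.0, 3.0), (math.pi / 2, 4.0)])
```

The modelled CX circuit effectively performs this summation with populations of neurons encoding heading and speed, rather than with explicit Cartesian coordinates.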