Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(in the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz) resulting in
reduced motion blur. Hence, event cameras have a large potential for robotics
and computer vision in challenging scenarios for traditional cameras, such as
low-latency, high speed, and high dynamic range. However, novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present event cameras from their working principle, the actual sensors that are
available and the tasks that they have been used for, from low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
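To make the event output format concrete, here is a minimal sketch of an event record and of accumulating events into a frame, one common way to feed event streams to frame-based algorithms. The field names and the windowing scheme are illustrative assumptions, not the interface of any particular sensor or SDK.

```python
import numpy as np
from dataclasses import dataclass

# Hypothetical minimal event record; field names are illustrative.
@dataclass
class Event:
    t: float  # timestamp in seconds (microsecond resolution in practice)
    x: int    # pixel column
    y: int    # pixel row
    p: int    # polarity: +1 brightness increase, -1 decrease

def accumulate_frame(events, width, height, dt):
    """Sum the signed polarities of events falling in the last dt
    seconds into a 2D image."""
    img = np.zeros((height, width), dtype=np.float32)
    if not events:
        return img
    t_end = events[-1].t
    for ev in events:
        if ev.t >= t_end - dt:
            img[ev.y, ev.x] += ev.p
    return img
```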
Statistical Mechanics and Visual Signal Processing
The nervous system solves a wide variety of problems in signal processing. In
many cases the performance of the nervous system is so good that it approaches
fundamental physical limits, such as the limits imposed by diffraction and
photon shot noise in vision. In this paper we show how to use the language of
statistical field theory to address and solve problems in signal processing,
that is problems in which one must estimate some aspect of the environment from
the data in an array of sensors. In the field theory formulation the optimal
estimator can be written as an expectation value in an ensemble where the input
data act as external field. Problems at low signal-to-noise ratio can be solved
in perturbation theory, while high signal-to-noise ratios are treated with a
saddle-point approximation. These ideas are illustrated in detail by an example
of visual motion estimation which is chosen to model a problem solved by the
fly's brain. In this problem the optimal estimator has a rich structure,
adapting to various parameters of the environment such as the mean-square
contrast and the correlation time of contrast fluctuations. This structure is
in qualitative accord with existing measurements on motion sensitive neurons in
the fly's brain, and we argue that the adaptive properties of the optimal
estimator may help resolve conflicts among different interpretations of these
data. Finally, we propose some crucial direct tests of the adaptive behavior.
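The central formal statement of the abstract can be written compactly; the notation below is a schematic reconstruction in our own symbols, not the paper's exact formulation. If $\phi$ denotes the unknown environmental signal, $I$ the sensor-array data, and $S[\phi; I]$ an effective action in which $I$ enters as an external field, then the optimal (least-mean-square) estimator of a quantity $\theta[\phi]$ is its expectation value in that ensemble:

$$
\hat{\theta}(I) \;=\; \langle \theta[\phi] \rangle_{P[\phi \mid I]}
\;=\; \frac{\int \mathcal{D}\phi \,\theta[\phi]\, e^{-S[\phi;\, I]}}{\int \mathcal{D}\phi \, e^{-S[\phi;\, I]}} .
$$

At low signal-to-noise ratio these integrals can be expanded in powers of the external field $I$ (perturbation theory); at high signal-to-noise ratio they are dominated by the saddle point of $S$.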
Towards automated visual flexible endoscope navigation
Background:
The design of flexible endoscopes has not changed significantly in the past 50 years. A trend is observed towards a wider application of flexible endoscopes, with an increasing role in complex intraluminal therapeutic procedures. The nonintuitive and nonergonomic steering mechanism now forms a barrier to extending flexible endoscope applications. Automating the navigation of endoscopes could be a solution to this problem. This paper summarizes the current state of the art in image-based navigation algorithms. The objectives are to find the most promising navigation system(s) to date and to indicate fields for further research.
Methods:
A systematic literature search was performed using three general search terms in two medical–technological literature databases. Papers were included according to the inclusion criteria. A total of 135 papers were analyzed. Ultimately, 26 were included.
Results:
Navigation is often based on visual information, which means steering the endoscope using the images that the endoscope produces. Two main techniques are described: lumen centralization and visual odometry. Although the research results are promising, no successful, commercially available automated flexible endoscopy system exists to date.
Conclusions:
Automated systems that employ conventional flexible endoscopes show the most promising prospects in terms of cost and applicability. To produce such a system, the research focus should lie on finding low-cost mechatronics and technologically robust steering algorithms. Additional functionality and increased efficiency can be obtained through software development. The first priority is to find real-time, robust steering algorithms. These algorithms need to handle bubbles, motion blur, and other image artifacts without disrupting the steering process.
Sparse Coding Predicts Optic Flow Specificities of Zebrafish Pretectal Neurons
Zebrafish pretectal neurons exhibit specificities for large-field optic flow
patterns associated with rotatory or translatory body motion. We investigate
the hypothesis that these specificities reflect the input statistics of natural
optic flow. Realistic motion sequences were generated using computer graphics
simulating self-motion in an underwater scene. Local retinal motion was
estimated with a motion detector and encoded in four populations of
directionally tuned retinal ganglion cells, represented as two signed input
variables. This activity was then used as input into one of two learning
networks: a sparse coding network (competitive learning) and a backpropagation
network (supervised learning). Both simulations develop specificities for optic
flow which are comparable to those found in a neurophysiological study (Kubo et
al. 2014), and relative frequencies of the various neuronal responses are best
modeled by the sparse coding approach. We conclude that the optic flow neurons
in the zebrafish pretectum do reflect the optic flow statistics. The predicted
vectorial receptive fields show typical optic flow fields but also "Gabor" and
dipole-shaped patterns that likely reflect difference fields needed for
reconstruction by linear superposition.
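For readers unfamiliar with sparse coding, the following minimal sketch learns a dictionary of basis fields from synthetic input vectors, using an ISTA-style soft-thresholding step for sparse inference and a gradient step on the reconstruction error for the dictionary. It is a generic illustration of the technique with made-up dimensions, not the network architecture used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_units, lam, lr = 64, 16, 0.1, 0.01

# Synthetic stand-in for the retinal motion inputs (in the paper these
# come from simulated optic flow encoded by ganglion-cell populations).
X = rng.standard_normal((2000, n_inputs))

# Dictionary of basis fields, columns normalized to unit length.
D = rng.standard_normal((n_inputs, n_units))
D /= np.linalg.norm(D, axis=0)

def sparse_codes(x, D, n_steps=50, step=0.1):
    """ISTA: find sparse coefficients a minimizing
    ||x - D a||^2 + lam * ||a||_1 by iterative soft-thresholding."""
    a = np.zeros(D.shape[1])
    for _ in range(n_steps):
        a = a - step * (D.T @ (D @ a - x))        # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)
    return a

for x in X:
    a = sparse_codes(x, D)
    D += lr * np.outer(x - D @ a, a)  # reduce reconstruction error
    D /= np.linalg.norm(D, axis=0)    # re-normalize basis fields
```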
Visual 3-D SLAM from UAVs
The aim of this paper is to present, test and discuss the implementation of Visual SLAM techniques on images taken from Unmanned Aerial Vehicles (UAVs) outdoors, in partially structured environments. Each stage of the process is discussed in order to obtain more accurate localization and mapping from UAV flights. Firstly, the issues related to the visual features of objects in the scene, their distance to the UAV, and the related image acquisition system and its calibration are evaluated for improving the whole process. Other important issues considered relate to the image processing techniques, such as interest point detection, the matching procedure, and the scaling factor. The whole system has been tested using the COLIBRI mini UAV in partially structured environments. The localization results, tested against the GPS information of the flights, show that Visual SLAM delivers reliable localization and mapping, making it suitable for some outdoor applications when flying UAVs.
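The interest-point detection and matching step mentioned above is the front end of most visual SLAM pipelines. The sketch below shows a generic version using ORB features from OpenCV; it illustrates the matching procedure in general and is not the specific detector or matcher used in this paper.

```python
import cv2

def match_features(img1, img2, max_matches=100):
    """Detect ORB keypoints in two grayscale frames and match their
    binary descriptors, keeping the best matches by Hamming distance."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    if des1 is None or des2 is None:
        return [], kp1, kp2  # too few features in one of the frames
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    return matches[:max_matches], kp1, kp2

# Matched point pairs like these feed the SLAM back end, e.g. for
# estimating the relative camera motion between consecutive frames.
```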
Forecasting People Trajectories and Head Poses by Jointly Reasoning on Tracklets and Vislets
In this work, we explore the correlation between people trajectories and
their head orientations. We argue that people trajectory and head pose
forecasting can be modelled as a joint problem. Recent approaches on trajectory
forecasting leverage short-term trajectories (aka tracklets) of pedestrians to
predict their future paths. In addition, sociological cues, such as expected
destination or pedestrian interaction, are often combined with tracklets. In
this paper, we propose MiXing-LSTM (MX-LSTM) to capture the interplay between
positions and head orientations (vislets) thanks to a joint unconstrained
optimization of full covariance matrices during the LSTM backpropagation. We
additionally exploit the head orientations as a proxy for the visual attention,
when modeling social interactions. MX-LSTM predicts pedestrians' future
locations and head poses, extending the standard capabilities of current
approaches to long-term trajectory forecasting. Compared to the state of the
art, our approach shows better performance on an extensive set of public benchmarks.
MX-LSTM is particularly effective when people move slowly, i.e. the most
challenging scenario for all other models. The proposed approach also allows
for accurate predictions on a longer time horizon.
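To make the joint-forecasting idea concrete, here is a minimal PyTorch sketch of an LSTM that consumes past positions and head-pose angles and emits, per step, a mean location, a full 2x2 covariance (via a Cholesky parametrization so it stays positive definite, echoing the full-covariance optimization described above), and a head orientation. The layer sizes and parametrization are our own illustrative assumptions, not the authors' MX-LSTM.

```python
import torch
import torch.nn as nn

class JointForecaster(nn.Module):
    """Sketch of joint trajectory + head-pose forecasting: input at each
    step is (x, y, cos(pose), sin(pose)); the output parametrizes a
    full-covariance 2D Gaussian over position plus a pose angle."""
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=4, hidden_size=hidden,
                            batch_first=True)
        # 2 mean + 3 Cholesky terms (l11, l21, l22) + 2 pose terms
        self.head = nn.Linear(hidden, 7)

    def forward(self, seq):
        out, _ = self.lstm(seq)   # (B, T, hidden)
        p = self.head(out)        # (B, T, 7)
        mu = p[..., :2]
        # Cholesky factor L with positive diagonal => Sigma = L @ L^T
        l11 = nn.functional.softplus(p[..., 2])
        l21 = p[..., 3]
        l22 = nn.functional.softplus(p[..., 4])
        pose = torch.atan2(p[..., 6], p[..., 5])
        return mu, (l11, l21, l22), pose

# Toy usage: batch of 8 tracklets, 10 time steps each.
model = JointForecaster()
mu, chol, pose = model(torch.randn(8, 10, 4))
print(mu.shape, pose.shape)  # torch.Size([8, 10, 2]) torch.Size([8, 10])
```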