Perception of Motion and Architectural Form: Computational Relationships between Optical Flow and Perspective
Perceptual geometry is an interdisciplinary research program that studies
geometry from the perspective of visual perception and, in turn, applies such
geometric findings to the ecological study of vision. Perceptual geometry
attempts to answer fundamental questions about the perception of form and the
representation of space through a synthesis of cognitive and biological
theories of visual perception with geometric theories of the physical world.
Perception of form, space, and motion is among the fundamental problems of
vision science. In cognitive and computational models of human perception,
theories for modeling motion are treated separately from models for the
perception of form.
Comment: 10 pages, 13 figures, submitted and accepted in DoCEIS'2012.
Conference: http://www.uninova.pt/doceis/doceis12/home/home.ph
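
A classical result helps make the title's "computational relationships"
concrete (this model is standard vision-science background, not taken from the
paper itself): the instantaneous motion-field equations of Longuet-Higgins and
Prazdny. For a camera translating with velocity (T_x, T_y, T_z) and rotating
with angular velocity (Omega_x, Omega_y, Omega_z), a scene point at depth Z
imaged at (x, y) under perspective projection with unit focal length induces
the optical flow

    u = \frac{-T_x + x\,T_z}{Z} + xy\,\Omega_x - (1 + x^2)\,\Omega_y + y\,\Omega_z ,
    v = \frac{-T_y + y\,T_z}{Z} + (1 + y^2)\,\Omega_x - xy\,\Omega_y - x\,\Omega_z .

The translational terms depend on depth Z while the rotational terms do not,
which is one precise sense in which perceived motion encodes perspective
structure.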
Learning to Extract Motion from Videos in Convolutional Neural Networks
This paper shows how to extract dense optical flow from videos with a
convolutional neural network (CNN). The proposed model constitutes a potential
building block for deeper architectures to allow using motion without resorting
to an external algorithm, e.g., for recognition in videos. We derive our network
architecture from signal processing principles to provide desired invariances
to image contrast, phase and texture. We constrain weights within the network
to enforce strict rotation invariance and substantially reduce the number of
parameters to learn. We demonstrate end-to-end training on only 8 sequences of
the Middlebury dataset, orders of magnitude fewer than competing CNN-based
motion estimation methods, and obtain comparable performance to classical
methods on the Middlebury benchmark. Importantly, our method outputs a
distributed representation of motion that allows representing multiple,
transparent motions, and dynamic textures. Our contributions on network design
and rotation invariance offer insights that are not specific to motion estimation.
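
For orientation, a hypothetical minimal sketch in PyTorch of the general setup
the abstract describes: a CNN mapping a stacked pair of frames to a dense
two-channel (u, v) flow field. TinyFlowNet and epe_loss are illustrative names,
and this omits the paper's contrast, phase, texture, and rotation invariances.

    import torch
    import torch.nn as nn

    class TinyFlowNet(nn.Module):
        """Hypothetical minimal CNN: two stacked grayscale frames in,
        dense 2-channel flow out."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(2, 32, kernel_size=7, padding=3),
                nn.ReLU(inplace=True),
                nn.Conv2d(32, 32, kernel_size=7, padding=3),
                nn.ReLU(inplace=True),
                nn.Conv2d(32, 2, kernel_size=7, padding=3),  # per-pixel (u, v)
            )

        def forward(self, frame_pair):  # (B, 2, H, W) -> (B, 2, H, W)
            return self.net(frame_pair)

    def epe_loss(pred, gt):
        """Average endpoint error, the standard Middlebury-style flow metric."""
        return torch.norm(pred - gt, dim=1).mean()

    model = TinyFlowNet()
    frames = torch.randn(1, 2, 128, 128)  # two consecutive frames, stacked on channels
    flow = model(frames)                  # dense flow estimate, shape (1, 2, 128, 128)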
Deep Lidar CNN to Understand the Dynamics of Moving Vehicles
Perception technologies in Autonomous Driving are experiencing their golden
age due to advances in Deep Learning. Yet most of these systems rely on the
semantically rich information of RGB images, and Deep Learning solutions
applied to the data of other sensors typically mounted on autonomous cars
(e.g., lidars or radars) remain far less explored. In this paper we propose a
novel solution for understanding the dynamics of the moving vehicles in a
scene from lidar information alone. The main challenge of this problem stems
from the fact that we need to disambiguate the ego-motion of the 'observer'
vehicle from the motion of the external 'observed' vehicles. For this purpose, we devise a CNN
architecture which at testing time is fed with pairs of consecutive lidar
scans. However, to properly learn the parameters of this network, during
training we introduce a series of so-called pretext tasks which also leverage
image data. These tasks include semantic information about
vehicleness and a novel lidar-flow feature which combines standard image-based
optical flow with lidar scans. We obtain very promising results and show that
including distilled image information only during training improves the
inference results of the network at test time, even when image data is no
longer used.
Comment: Presented at IEEE ICRA 2018. IEEE Copyright: Personal use of this
material is permitted. Permission from IEEE must be obtained for all other
uses. (V2 just corrected comments on the arXiv submission.)
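
To make the input representation concrete, a hedged sketch: the
spherical-projection parameters and the 2-channel stacking below are
assumptions for illustration, not details confirmed by the abstract.
Consecutive lidar scans are often rendered as range images before being fed
to a CNN.

    import numpy as np

    def range_image(points, h=64, w=512, fov_up=3.0, fov_down=-25.0):
        """Project an (N, 3) point cloud to an (h, w) range image by spherical
        projection. The vertical field of view is an assumption typical of a
        64-beam lidar."""
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        r = np.linalg.norm(points, axis=1)
        yaw = np.arctan2(y, x)                       # azimuth angle
        pitch = np.arcsin(z / np.maximum(r, 1e-8))   # elevation angle
        up, down = np.radians(fov_up), np.radians(fov_down)
        u = (((yaw + np.pi) / (2 * np.pi)) * w).astype(int) % w
        v = (((up - pitch) / (up - down)) * h).clip(0, h - 1).astype(int)
        img = np.zeros((h, w), dtype=np.float32)
        img[v, u] = r                                # keep range as the pixel value
        return img

    # Two consecutive scans (random stand-ins here) stacked as a 2-channel CNN input.
    scan_t0 = np.random.uniform(-50.0, 50.0, size=(100000, 3))
    scan_t1 = np.random.uniform(-50.0, 50.0, size=(100000, 3))
    pair = np.stack([range_image(scan_t0), range_image(scan_t1)])  # shape (2, 64, 512)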
Mapping the spatiotemporal dynamics of calcium signaling in cellular neural networks using optical flow
An optical flow gradient algorithm was applied to spontaneously forming
networks of neurons and glia in culture imaged by fluorescence optical
microscopy in order to map functional calcium signaling with single-pixel
resolution.
Optical flow estimates the direction and speed of motion of objects in an image
between subsequent frames in a recorded digital sequence of images (i.e. a
movie). The vector fields computed by the algorithm were able to track the
spatiotemporal dynamics of calcium signaling patterns. We begin by briefly
reviewing the mathematics of the optical flow algorithm, and then describe how
to solve for the displacement vectors and how to measure their reliability. We
then compare computed flow vectors with manually estimated vectors for the
progression of a calcium signal recorded from representative astrocyte
cultures. Finally, we apply the algorithm to preparations of primary
astrocytes and hippocampal neurons and to the rMC-1 Müller glial cell line in
order to illustrate the capability of the algorithm for capturing different
types of spatiotemporal calcium activity. We discuss the imaging requirements,
parameter selection and threshold selection for reliable measurements, and
offer perspectives on uses of the vector data.
Comment: 23 pages, 5 figures. Peer-reviewed accepted version, in press in
Annals of Biomedical Engineering.
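
For readers who want to try the general idea, a sketch only: the paper
develops its own gradient-based algorithm, whereas this uses OpenCV's
off-the-shelf Farnebäck method on placeholder data to compute dense flow
fields and convert them to per-pixel speed and direction maps.

    import numpy as np
    import cv2

    # Placeholder fluorescence movie: (T, H, W), rescaled to 8-bit for OpenCV.
    movie = (np.random.rand(10, 256, 256) * 255).astype(np.uint8)

    flows = []
    for prev, nxt in zip(movie[:-1], movie[1:]):
        # Positional arguments: prev, next, flow, pyr_scale, levels, winsize,
        #                       iterations, poly_n, poly_sigma, flags
        flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        flows.append(flow)  # (H, W, 2): per-pixel displacement (dx, dy)

    # Per-pixel speed (pixels/frame) and direction (radians) for the first pair.
    speed, direction = cv2.cartToPolar(flows[0][..., 0], flows[0][..., 1])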
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz) resulting in
reduced motion blur. Hence, event cameras have a large potential for robotics
and computer vision in scenarios that are challenging for traditional cameras,
such as those requiring low latency, high speed, or high dynamic range.
However, novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present event cameras starting from their working principle, the actual
sensors that are available, and the tasks they have been used for, from
low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
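
As a minimal illustration of the sensor output described above (synthetic
events; event_frame is an illustrative name for one common representation
among several the literature uses), an event stream is a list of
(t, x, y, polarity) tuples that can be accumulated into a signed histogram
for consumption by standard vision pipelines.

    import numpy as np

    # Synthetic event stream: timestamps (seconds), pixel coordinates, polarity.
    n = 10000
    t = np.sort(np.random.uniform(0.0, 0.01, n))   # microsecond-scale timing
    x = np.random.randint(0, 240, n)               # 240x180 is a common sensor size
    y = np.random.randint(0, 180, n)
    p = np.random.choice([-1, 1], n)               # sign of the brightness change

    def event_frame(x, y, p, h=180, w=240):
        """Accumulate signed events into an 'event frame' (2-D histogram)."""
        frame = np.zeros((h, w), dtype=np.int32)
        np.add.at(frame, (y, x), p)  # unbuffered scatter-add of polarities
        return frame

    frame = event_frame(x, y, p)  # positive where brightness rose, negative where it fell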