Flow Lookup and Biological Motion Perception
Optical flow in monocular video can serve as a key for recognizing and tracking the three-dimensional pose of human subjects. In comparison with prior work using silhouettes as a key for pose lookup, flow data contains richer information and in experiments can successfully track more difficult sequences. Furthermore, flow recognition is powerful enough to model human abilities in perceiving biological motion from sparse input. The experiments described herein show that a tracker using flow moment lookup can reconstruct a common biological motion (walking) from images containing only point light sources attached to the joints of the moving subject.
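As an illustration of the idea, a minimal sketch of moment-based flow lookup might compute low-order spatial moments of a dense flow field and use them as a nearest-neighbour key into a pose database. The feature set and function names here are illustrative assumptions, not the paper's exact tracker:

```python
import numpy as np

def flow_moments(flow, order=2):
    """Low-order spatial moments of a dense flow field as a lookup key.
    flow: (H, W, 2) array of per-pixel (u, v) displacements."""
    h, w, _ = flow.shape
    ys, xs = np.mgrid[0:h, 0:w] / max(h, w)  # normalised pixel coordinates
    feats = []
    for c in range(2):                        # u and v components
        f = flow[..., c]
        for p in range(order + 1):
            for q in range(order + 1 - p):
                feats.append(np.mean(f * xs**p * ys**q))
    return np.array(feats)

def nearest_pose(query_flow, database):
    """database: list of (moment_vector, pose) pairs; returns the pose
    whose stored moments are closest to the query's moments."""
    qm = flow_moments(query_flow)
    dists = [np.linalg.norm(qm - m) for m, _ in database]
    return database[int(np.argmin(dists))][1]
```

With `order=2` this yields a 12-dimensional key (6 moments per flow component), cheap enough to match against a large pose database.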
Vision-Based Navigation III: Pose and Motion from Omnidirectional Optical Flow and a Digital Terrain Map
An algorithm for pose and motion estimation using corresponding features in
omnidirectional images and a digital terrain map is proposed. In a previous
paper, such an algorithm was considered for a regular camera. Using a Digital
Terrain (or Digital Elevation) Map (DTM/DEM) as a global reference enables
recovering the absolute position and orientation of the camera. To do this,
the DTM is used to formulate a constraint between corresponding features
in two consecutive frames. In this paper, these constraints are extended to
handle non-central projection, as is the case with many omnidirectional
systems. The utilization of omnidirectional data is shown to improve the
robustness and accuracy of the navigation algorithm. The feasibility of this
algorithm is established through lab experimentation with two kinds of
omnidirectional acquisition systems: the first uses a polydioptric camera,
while the second uses a catadioptric camera.
Comment: 6 pages, 9 figures
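To illustrate how a terrain map can act as a global reference, here is a minimal, hypothetical sketch that intersects a viewing ray with a DTM grid by simple ray marching. The paper's actual constraint formulation is more involved; the function name and parameters here are assumptions for illustration only:

```python
import numpy as np

def ray_dtm_intersection(origin, direction, dtm, cell=1.0, t_max=100.0, dt=0.1):
    """March along a viewing ray until it first passes at or below the
    terrain surface. dtm: 2D array of heights indexed in units of `cell`.
    Returns the 3D intersection point, or None if the ray misses the map."""
    direction = direction / np.linalg.norm(direction)
    t = 0.0
    while t < t_max:
        p = origin + t * direction
        i, j = int(p[0] / cell), int(p[1] / cell)
        if not (0 <= i < dtm.shape[0] and 0 <= j < dtm.shape[1]):
            return None          # ray left the mapped area
        if p[2] <= dtm[i, j]:
            return p             # first sample at or below the terrain
        t += dt
    return None
```

Intersecting the rays of corresponding features from two consecutive frames with the same terrain model is what ties the relative camera motion to an absolute position and orientation.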
Flowing ConvNets for Human Pose Estimation in Videos
The objective of this work is human pose estimation in videos, where multiple
frames are available. We investigate a ConvNet architecture that is able to
benefit from temporal context by combining information across the multiple
frames using optical flow.
To this end we propose a network architecture with the following novelties:
(i) a deeper network than previously investigated for regressing heatmaps; (ii)
spatial fusion layers that learn an implicit spatial model; (iii) optical flow
is used to align heatmap predictions from neighbouring frames; and (iv) a final
parametric pooling layer which learns to combine the aligned heatmaps into a
pooled confidence map.
We show that this architecture outperforms a number of others, including one
that uses optical flow solely at the input layers, one that regresses joint
coordinates directly, and one that predicts heatmaps without spatial fusion.
The new architecture outperforms the state of the art by a large margin on
three video pose estimation datasets, including the very challenging Poses in
the Wild dataset, and outperforms other deep methods that don't use a graphical
model on the single-image FLIC benchmark (and also Chen & Yuille and Tompson et
al. in the high precision region).
Comment: ICCV'1
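A minimal sketch of the alignment-and-pooling idea (novelties iii and iv), assuming a per-pixel flow field that maps current-frame pixels into the neighbouring frame: warp the neighbour's joint heatmap into the current frame, then combine the aligned heatmaps with weighted (parametric) pooling. The nearest-neighbour warp and function names are illustrative assumptions, not the paper's layer implementation:

```python
import numpy as np

def warp_heatmap(neighbour_heatmap, flow):
    """Backward, nearest-neighbour warp: each current-frame pixel pulls its
    value from the neighbouring frame's heatmap, following the flow.
    flow[..., 0] is the horizontal, flow[..., 1] the vertical displacement."""
    h, w = neighbour_heatmap.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xt = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    yt = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return neighbour_heatmap[yt, xt]

def parametric_pool(heatmaps, weights):
    """Combine aligned heatmaps into one confidence map using per-frame
    weights (learned in the paper; fixed here for illustration)."""
    weights = np.asarray(weights, dtype=float) / np.sum(weights)
    return sum(w * h for w, h in zip(weights, heatmaps))
```

In the paper the warp uses dense optical flow predicted between video frames, and the pooling weights are learned end-to-end rather than fixed.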