Regularized pointwise map recovery from functional correspondence
The concept of using functional maps for representing dense correspondences between deformable shapes has proven to be extremely effective in many applications. However, despite the impact of this framework, the problem of recovering the point-to-point correspondence from a given functional map has received surprisingly little interest. In this paper, we analyse the aforementioned problem and propose a novel method for reconstructing pointwise correspondences from a given functional map. The proposed algorithm phrases the matching problem as a regularized alignment problem of the spectral embeddings of the two shapes. In contrast to established methods, our approach does not require the input shapes to be nearly isometric, and it easily extends to recovering the point-to-point correspondence in part-to-whole shape matching problems. Our numerical experiments demonstrate that the proposed approach leads to a significant improvement in accuracy in several challenging cases.
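The standard baseline that this recovery step starts from can be sketched in a few lines: given truncated spectral bases of the two shapes and a functional map between them, each point is matched to its nearest neighbour in the aligned spectral embedding. This is a minimal illustrative sketch of that baseline (the function name and brute-force search are my own choices, not the paper's regularized method):

```python
import numpy as np

def pointwise_from_functional_map(C, Phi_X, Phi_Y):
    """Baseline recovery of a point-to-point map from a functional map.

    C     : (k, k) functional map taking the basis of X to the basis of Y
    Phi_X : (n, k) spectral basis of shape X (one row per point)
    Phi_Y : (m, k) spectral basis of shape Y

    Returns, for each point of Y, the index of its match on X.
    """
    # Pull the spectral embedding of Y back into X's spectral domain.
    aligned_Y = Phi_Y @ C.T                          # (m, k)
    # Brute-force nearest neighbour between spectral embeddings.
    d2 = ((aligned_Y[:, None, :] - Phi_X[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)
```

With an identity map between identical (permuted) bases, this recovers the permutation exactly; the paper's contribution is to replace this plain nearest-neighbour step with a regularized alignment that also handles non-isometric and part-to-whole settings.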
Probabilistic Motion Estimation Based on Temporal Coherence
We develop a theory for the temporal integration of visual motion motivated by psychophysical experiments. The theory proposes that input data are temporally grouped and used to predict and estimate the motion flows in the image sequence. This temporal grouping can be considered a generalization of the data-association techniques used by engineers to study motion sequences. Our temporal-grouping theory is expressed in terms of the Bayesian generalization of standard Kalman filtering. To implement the theory we derive a parallel network which shares some properties of cortical networks. Computer simulations of this network demonstrate that our theory qualitatively accounts for psychophysical experiments on motion occlusion and motion outliers.
Comment: 40 pages, 7 figures
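The linear-Gaussian special case that the paper's Bayesian formulation generalizes is the standard Kalman filter. As a minimal illustrative sketch (not the paper's parallel network), a scalar filter that temporally integrates noisy velocity measurements under a random-walk state model looks like this:

```python
import numpy as np

def kalman_velocity(measurements, q=1e-3, r=0.1):
    """Scalar Kalman filter integrating noisy velocity measurements.

    q : process-noise variance (how fast the true velocity may drift)
    r : measurement-noise variance
    Returns the sequence of filtered velocity estimates.
    """
    v, p = 0.0, 1.0                  # state estimate and its variance
    estimates = []
    for z in measurements:
        p = p + q                    # predict: uncertainty grows over time
        k = p / (p + r)              # Kalman gain
        v = v + k * (z - v)          # update: blend prediction and data
        p = (1.0 - k) * p
        estimates.append(v)
    return np.array(estimates)
```

Fed a constant measurement, the estimate converges toward it as evidence accumulates, which is exactly the temporal-integration behaviour the theory builds on and extends beyond the linear-Gaussian case.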
Integrated 2-D Optical Flow Sensor
I present a new focal-plane analog VLSI sensor that estimates optical flow in two visual dimensions. The chip significantly improves on previous approaches both in the applied model of optical flow estimation and in the actual hardware implementation. Its distributed computational architecture consists of an array of locally connected motion units that collectively solve for the unique optimal optical flow estimate. The novel gradient-based motion model assumes visual motion to be translational, smooth and biased. The model guarantees that the estimation problem is computationally well-posed regardless of the visual input. Model parameters can be globally adjusted, leading to a rich output behavior. Varying the smoothness strength, for example, can provide a continuous spectrum of motion estimates, ranging from normal to global optical flow. Unlike approaches that rely on the explicit matching of brightness edges in space or time, the applied gradient-based model assures spatiotemporal continuity of visual information. The non-linear coupling of the individual motion units improves the resulting optical flow estimate because it reduces spatial smoothing across large velocity differences. Extended measurements of a 30x30 array prototype sensor under real-world conditions demonstrate the validity of the model and the robustness and functionality of the implementation.
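The family of gradient-based smooth-flow models the chip implements in analog hardware can be illustrated in software by Horn–Schunck-style iterations: each motion unit trades off brightness constancy against agreement with its neighbours. This is an illustrative sketch of that class of model under periodic boundary conditions, not the chip's exact circuit dynamics or its bias term:

```python
import numpy as np

def nbr_avg(f):
    """Average of the four nearest neighbours (periodic boundaries)."""
    return 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                   np.roll(f, 1, 1) + np.roll(f, -1, 1))

def smooth_flow(Ix, Iy, It, alpha=1.0, iters=200):
    """Horn-Schunck-style smooth optical flow from image gradients.

    Ix, Iy, It : spatial and temporal brightness derivatives
    alpha      : smoothness strength (larger -> more global flow)
    Returns the flow components (u, v).
    """
    u = np.zeros_like(Ix, dtype=float)
    v = np.zeros_like(Ix, dtype=float)
    denom = alpha**2 + Ix**2 + Iy**2
    for _ in range(iters):
        ub, vb = nbr_avg(u), nbr_avg(v)
        # Brightness-constancy residual at the neighbourhood average.
        r = (Ix * ub + Iy * vb + It) / denom
        u = ub - Ix * r
        v = vb - Iy * r
    return u, v
```

Raising `alpha` couples the units more strongly and pushes the output toward a single global flow estimate, mirroring the continuous normal-to-global spectrum the abstract describes.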