22,691 research outputs found

    SO(3)-invariant asymptotic observers for dense depth field estimation based on visual data and known camera motion

    In this paper, we use the known camera motion associated with a video sequence of a static scene to estimate and incrementally refine the surrounding depth field. We exploit the SO(3)-invariance of the brightness and depth field dynamics to customize standard image processing techniques. Inspired by the Horn-Schunck method, we propose an SO(3)-invariant cost to estimate the depth field. At each time step, this yields a diffusion equation on the unit Riemannian sphere that is solved numerically to obtain a real-time depth field estimate over the entire field of view. Two asymptotic observers are derived from the governing dynamics, based respectively on optical flow and depth estimates: implemented on noisy sequences of synthetic images as well as on real data, they perform a more robust and accurate depth estimation. This approach is complementary to most methods employing state observers for range estimation, which concern only single or isolated feature points. Comment: Submitted.
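    The Horn-Schunck structure that this abstract builds on can be illustrated on a flat image. The sketch below is the classical planar update (a brightness-constancy data term plus an alpha-weighted smoothness term, solved by Jacobi iterations), not the SO(3)-invariant spherical version the paper derives; boundary handling and the demo inputs are illustrative choices.

```python
import numpy as np

def horn_schunck(Ix, Iy, It, alpha=1.0, n_iter=100):
    """Classical planar Horn-Schunck optical flow via Jacobi iterations.

    Minimizes a brightness-constancy data term plus an alpha-weighted
    smoothness (diffusion) term; periodic boundaries for brevity.
    """
    u = np.zeros_like(Ix, dtype=float)
    v = np.zeros_like(Ix, dtype=float)

    def avg(f):
        # 4-neighbour average approximating the smoothing term.
        return 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                       np.roll(f, 1, 1) + np.roll(f, -1, 1))

    denom = alpha ** 2 + Ix ** 2 + Iy ** 2
    for _ in range(n_iter):
        ubar, vbar = avg(u), avg(v)
        common = (Ix * ubar + Iy * vbar + It) / denom
        u = ubar - Ix * common
        v = vbar - Iy * common
    return u, v

# Uniform-flow demo: brightness constancy Ix*u + Iy*v + It = 0 with
# Ix = 1, Iy = 0, It = -1 implies u = 1, v = 0 everywhere.
u, v = horn_schunck(np.ones((8, 8)), np.zeros((8, 8)), -np.ones((8, 8)))
```

    The SO(3)-invariant version replaces the planar neighbour average with the Laplace-Beltrami smoothing on the unit sphere, but the fixed-point structure of the update is the same.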

    Reducing "Structure From Motion": a General Framework for Dynamic Vision - Part 1: Modeling

    The literature on recursive estimation of structure and motion from monocular image sequences comprises a large number of different models and estimation techniques. We propose a framework that allows us to derive and compare all of them by following the idea of dynamical system reduction. The "natural" dynamic model, derived from the rigidity constraint and the perspective projection, is first reduced by explicitly decoupling structure (depth) from motion. Then implicit decoupling techniques are explored, which consist of imposing that some function of the unknown parameters be held constant. By appropriately choosing such a function, not only can we account for all models seen so far in the literature, but we can also derive novel ones.
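    The "natural" dynamic model referred to here couples rigid motion with perspective projection: a static point P in the camera frame evolves as dP/dt = -omega x P - v when the camera moves with linear velocity v and angular velocity omega. The sketch below integrates that model for one point under one common sign convention (an assumption, since conventions vary across the literature).

```python
import numpy as np

def point_dynamics(P, v, omega):
    """Rigid static-scene point dynamics in the camera frame:
    dP/dt = -omega x P - v, for camera linear velocity v and
    angular velocity omega (one common sign convention)."""
    return -np.cross(omega, P) - v

def project(P):
    """Perspective projection onto normalized image coordinates."""
    return P[:2] / P[2]

# Euler-integrate one point for 1 s while the camera translates
# forward along the optical axis: depth decreases, the projection looms.
P = np.array([0.5, 0.2, 4.0])
v = np.array([0.0, 0.0, 1.0])
omega = np.zeros(3)
dt = 1e-3
for _ in range(1000):
    P = P + dt * point_dynamics(P, v, omega)
```

    Decoupling structure from motion, as the framework describes, amounts to rewriting these dynamics so that the depth component P[2] appears as a separate state.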

    A sliding mode approach to visual motion estimation

    The problem of estimating motion from a sequence of images has been a major research theme in machine vision for many years and remains one of the most challenging ones. In this work, we use sliding mode observers to estimate the motion of a moving body with the aid of a CCD camera. We consider a variety of dynamical systems that arise in machine vision applications and develop a novel identification procedure for the estimation of both constant and time-varying parameters. The basic procedure introduced for parameter estimation is to recast the image feature dynamics linearly in terms of the unknown parameters, construct a sliding mode observer that produces asymptotically correct estimates of the observed image features, and then use the “equivalent control” to explicitly compute the parameters. Much of our analysis has been substantiated by computer simulations and real experiments.
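    The equivalent-control idea can be shown on a scalar toy problem (a hypothetical example, not the paper's image-feature dynamics): for y' = theta * phi(t) with unknown constant theta, a switching observer drives the estimation error onto the sliding surface e = 0, and low-pass filtering the switching term recovers theta * phi, from which theta is read off.

```python
import numpy as np

def sliding_mode_parameter_estimate(theta=2.0, M=5.0, dt=1e-4, T=2.0, tau=0.02):
    """Sliding mode observer + equivalent control on a scalar toy plant.

    Plant:    y' = theta * phi(t), theta unknown and constant.
    Observer: yhat' = M * sign(y - yhat), with M > |theta * phi|.
    Once the error e = y - yhat slides on e = 0, the low-pass-filtered
    switching term z approximates theta * phi (the equivalent control).
    """
    phi = 1.0                      # known regressor (constant here)
    y, yhat, z = 1.0, 0.0, 0.0     # plant, observer, filtered switching term
    for _ in range(int(T / dt)):
        e = y - yhat
        s = M * np.sign(e)
        y += dt * theta * phi      # true feature dynamics
        yhat += dt * s             # sliding mode observer
        z += dt * (s - z) / tau    # low-pass filter -> equivalent control
    return z / phi                 # parameter estimate

theta_hat = sliding_mode_parameter_estimate()
```

    The filter time constant tau trades chattering ripple in the estimate against convergence speed, mirroring the usual equivalent-control trade-off.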

    Inertial-sensor bias estimation from brightness/depth images and based on SO(3)-invariant integro/partial-differential equations on the unit sphere

    Constant biases associated with the measured linear and angular velocities of a moving object can be estimated from measurements of a static scene by embedded brightness and depth sensors. We propose here a Lyapunov-based observer that takes advantage of the SO(3)-invariance of the partial differential equations satisfied by the measured brightness and depth fields. The resulting asymptotic observer is governed by a nonlinear integro/partial differential system in which the two independent scalar variables indexing the pixels live on the unit sphere of 3D Euclidean space. The observer design and analysis are greatly simplified by coordinate-free differential calculus on the unit sphere equipped with its natural Riemannian structure. The observer convergence is investigated under C^1 regularity assumptions on the object motion and the scene. It relies on the Ascoli-Arzelà theorem and pre-compactness of the observer trajectories. It is proved that the estimated biases converge to the true ones if and only if the scene admits no cylindrical symmetry. The observer design can be adapted to realistic sensors where brightness and depth data are available only on a subset of the unit sphere. Preliminary simulations with synthetic brightness and depth images (corrupted by around 10% noise) indicate that such Lyapunov-based observers should be robust and convergent under much weaker regularity assumptions. Comment: 30 pages, 6 figures, submitted.
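    A scalar analogue conveys the bias-observer mechanism (an illustrative toy, not the paper's SO(3)-invariant PDE observer on the sphere): with a rate measurement corrupted by a constant bias and a direct angle measurement, a standard Lyapunov-motivated observer drives both the state and bias estimates to the truth.

```python
def bias_observer(omega=0.5, b=0.3, k1=2.0, k2=1.0, dt=1e-3, T=20.0):
    """Scalar bias observer (hypothetical toy example).

    Plant:    theta' = omega, measured rate omega_m = omega + b
              (constant bias b), measured angle y = theta.
    Observer: theta_hat' = omega_m - b_hat + k1 * (y - theta_hat)
              b_hat'     = -k2 * (y - theta_hat)
    The error dynamics are linear with characteristic polynomial
    s^2 + k1*s + k2, stable for k1, k2 > 0.
    """
    theta, theta_hat, b_hat = 0.0, 0.0, 0.0
    for _ in range(int(T / dt)):
        omega_m = omega + b
        e = theta - theta_hat          # measured innovation y - theta_hat
        theta += dt * omega
        theta_hat += dt * (omega_m - b_hat + k1 * e)
        b_hat += dt * (-k2 * e)
    return b_hat

b_hat = bias_observer()
```

    The paper's observability condition plays the role of the persistency requirement here: in this scalar toy the bias is always observable, whereas on the sphere convergence fails exactly when the scene is cylindrically symmetric.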

    MOMA: Visual Mobile Marker Odometry

    In this paper, we present a cooperative odometry scheme based on the detection of mobile markers, in line with the idea of cooperative positioning for multiple robots [1]. To this end, we introduce a simple optimization scheme that realizes visual mobile marker odometry via accurate fixed-marker-based camera positioning, and we analyse the characteristics of the errors inherent in the method compared to classical fixed-marker-based navigation and visual odometry. In addition, we provide a specific UAV-UGV configuration that allows the UAV to move continuously without stopping, and a minimal caterpillar-like configuration that works with a single UGV. Finally, we present a real-world implementation and evaluation of the proposed UAV-UGV configuration.
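    The fixed-marker positioning building block that the mobile-marker scheme chains together is just a composition of rigid transforms: given the marker's known world pose and its detected pose in the camera frame, the camera's world pose follows. The helper names and numeric poses below are hypothetical.

```python
import numpy as np

def se3(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def inv(T):
    """Closed-form inverse of a rigid transform."""
    R, t = T[:3, :3], T[:3, 3]
    return se3(R.T, -R.T @ t)

# Known (calibrated) world pose of a fixed marker.
T_world_marker = se3(np.eye(3), np.array([2.0, 0.0, 0.0]))
# Marker pose detected in the camera frame (output of a marker detector).
T_cam_marker = se3(np.eye(3), np.array([0.0, 0.0, 1.5]))
# Camera pose in the world by chaining: world<-marker<-camera.
T_world_cam = T_world_marker @ inv(T_cam_marker)
```

    Mobile marker odometry alternates the roles: a platform localizes against a marker carried by the other platform, which in turn was localized while stationary, so errors accumulate per hand-off rather than per frame.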

    A Neural Model of How the Brain Computes Heading from Optic Flow in Realistic Scenes

    Animals avoid obstacles and approach goals in novel cluttered environments using visual information, notably optic flow, to compute heading, or direction of travel, with respect to objects in the environment. We present a neural model of how heading is computed that describes interactions among neurons in several visual areas of the primate magnocellular pathway, from the retina through V1, MT+, and MSTd. The model produces outputs that are qualitatively and quantitatively similar to human heading estimation data in response to complex natural scenes. The model estimates heading to within 1.5° in random-dot or photo-realistically rendered scenes and within 3° in video streams from driving in real-world environments. Simulated rotations of less than 1° per second do not affect model performance, but faster simulated rotation rates degrade performance, as in humans. The model is part of a larger navigational system that identifies and tracks objects while navigating in cluttered environments. National Science Foundation (SBE-0354378, BCS-0235398); Office of Naval Research (N00014-01-1-0624); National Geospatial-Intelligence Agency (NMA201-01-1-2016)
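    The geometric core of heading estimation under pure translation is locating the focus of expansion (FOE) of the flow field, since every flow vector points away from it. The least-squares sketch below is a purely geometric baseline, not the neural MT+/MSTd circuit the abstract describes; the synthetic flow field is an assumed test input.

```python
import numpy as np

def estimate_foe(points, flows):
    """Least-squares focus-of-expansion estimate from translational flow.

    For pure translation, flow (u, v) at point (x, y) is parallel to
    (x - fx, y - fy), giving the linear constraint per point:
        v * fx - u * fy = v * x - u * y
    """
    x, y = points[:, 0], points[:, 1]
    u, v = flows[:, 0], flows[:, 1]
    A = np.column_stack([v, -u])
    b = v * x - u * y
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe

# Synthetic radial flow expanding from a known FOE.
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(50, 2))
true_foe = np.array([0.2, -0.1])
flw = 0.8 * (pts - true_foe)
foe = estimate_foe(pts, flw)
```

    Rotation adds a depth-independent flow component that breaks the radial pattern, which is one reason the model's performance degrades at faster simulated rotation rates.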

    Controlled Interaction: Strategies For Using Virtual Reality To Study Perception

    Immersive virtual reality systems employing head-mounted displays offer great promise for the investigation of perception and action, but there are well-documented limitations to most virtual reality systems. In the present article, we suggest strategies for studying perception/action interactions that rely both on scale-invariant metrics (such as power function exponents) and on careful consideration of the requirements of the interactions under investigation. New data concerning the effect of pincushion distortion on the perception of surface orientation are presented, as well as data documenting the perception of dynamic distortions associated with head movements under uncorrected optics. A review of several successful uses of virtual reality to study the interaction of perception and action emphasizes scale-free analysis strategies that can achieve theoretical goals while minimizing assumptions about the accuracy of virtual simulations.
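    Why power function exponents are scale-invariant is worth making concrete: fitting response = k * stimulus**n in log-log coordinates, a rescaling of either axis only shifts the intercept, leaving the exponent n untouched. The sketch below (with made-up data) illustrates that property.

```python
import numpy as np

def power_exponent(stimulus, response):
    """Fit the exponent n of response = k * stimulus**n by linear
    regression in log-log coordinates; the slope is the exponent."""
    slope, _intercept = np.polyfit(np.log(stimulus), np.log(response), 1)
    return slope

# Synthetic psychophysical data with exponent 0.67 and gain 3.
phi = np.array([1.0, 2.0, 4.0, 8.0])
n = power_exponent(phi, 3.0 * phi ** 0.67)
# Rescaling the stimulus axis (e.g. a miscalibrated display) changes
# only the gain, not the fitted exponent.
n_scaled = power_exponent(2.5 * phi, 3.0 * phi ** 0.67)
```

    This is the sense in which exponent-based analyses tolerate the calibration errors that plague head-mounted displays: a multiplicative distortion of the simulated scale leaves the measured exponent intact.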

    Cortical Dynamics of Navigation and Steering in Natural Scenes: Motion-Based Object Segmentation, Heading, and Obstacle Avoidance

    Visually guided navigation through a cluttered natural scene is a challenging problem that animals and humans accomplish with ease. The ViSTARS neural model proposes how primates use motion information to segment objects and determine heading for purposes of goal approach and obstacle avoidance in response to video inputs from real and virtual environments. The model produces trajectories similar to those of human navigators. It does so by predicting how computationally complementary processes in cortical areas MT-/MSTv and MT+/MSTd compute object motion for tracking and self-motion for navigation, respectively. The model retina responds to transients in the input stream. Model V1 generates a local speed and direction estimate. This local motion estimate is ambiguous due to the neural aperture problem. Model MT+ interacts with MSTd via an attentive feedback loop to compute accurate heading estimates in MSTd that quantitatively simulate properties of human heading estimation data. Model MT- interacts with MSTv via an attentive feedback loop to compute accurate estimates of the speed, direction, and position of moving objects. This object information is combined with heading information to produce steering decisions in which goals behave like attractors and obstacles behave like repellers. These steering decisions lead to navigational trajectories that closely match human performance. National Science Foundation (SBE-0354378, BCS-0235398); Office of Naval Research (N00014-01-1-0624); National Geospatial-Intelligence Agency (NMA201-01-1-2016)
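    The attractor/repeller steering idea can be sketched with first-order heading dynamics: the goal direction attracts the heading and an obstacle direction repels it with a strength that decays with angular offset. This is a hand-written toy in the spirit of the model's goal/obstacle behaviour, not the ViSTARS circuit, and all gains are assumed values.

```python
import math

def steer(heading, goal_dir, obstacle_dir=None, k_g=2.0, k_o=1.5,
          dt=0.01, T=10.0):
    """First-order attractor/repeller steering dynamics (toy sketch).

    heading' = -k_g * sin(heading - goal_dir)            # goal attractor
             + k_o * sign(off) * exp(-|off|)             # obstacle repeller
    where off = heading - obstacle_dir.
    """
    for _ in range(int(T / dt)):
        d = -k_g * math.sin(heading - goal_dir)
        if obstacle_dir is not None:
            off = heading - obstacle_dir
            d += k_o * math.copysign(math.exp(-abs(off)), off)
        heading += dt * d
    return heading

# With no obstacle the heading relaxes onto the goal direction; with an
# obstacle in the goal direction it settles at a deflected equilibrium.
h_goal = steer(0.5, 0.0)
h_obstacle = steer(0.5, 0.0, obstacle_dir=0.0)
```

    The attractor/repeller balance yields the qualitative behaviour the abstract reports: trajectories that curve around obstacles while still converging on the goal.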