6 research outputs found

    A bio-plausible design for visual attitude stabilization

    We consider the problem of attitude stabilization using exclusively visual sensory input, and we look for a solution that satisfies the constraints of a "bio-plausible" computation. We obtain a PD controller that is a bilinear form of the goal image and of the current and delayed visual input. Moreover, this controller can be learned using classic neural network algorithms. The structure of the resulting computation, derived from general principles by imposing a bilinear computation, bears striking resemblances to existing models of visual information processing in insects (Reichardt correlators and lobula plate tangential cells). We validate the algorithms using faithful simulations of the fruit fly's visual input.
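    As a rough illustration of the bilinear structure described in this abstract, here is a hypothetical sketch in which the control output is a bilinear form pairing the goal image with the current input (proportional part) and with a finite-difference of the delayed input (derivative part). The matrices `A` and `B` stand in for whatever the learned networks encode; none of these names or shapes come from the paper.

    ```python
    import numpy as np

    def bilinear_pd_control(y_goal, y_now, y_prev, A, B, dt=0.01):
        """Hypothetical sketch of a PD controller that is bilinear in the
        goal image and the current/delayed visual input.
        y_goal, y_now, y_prev: flattened images of length n.
        A, B: learned (n, n) matrices for one scalar control channel."""
        error_term = y_goal @ A @ y_now                  # ~ proportional part
        rate_term = y_goal @ B @ (y_now - y_prev) / dt   # ~ derivative part
        return -(error_term + rate_term)
    ```

    With `A` the identity and `B` zero this reduces to a pure correlation between goal and current image, which makes the bilinear-form reading of the controller easy to check.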

    SO(3)-invariant asymptotic observers for dense depth field estimation based on visual data and known camera motion

    In this paper, we use the known camera motion associated with a video sequence of a static scene to estimate and incrementally refine the surrounding depth field. We exploit the SO(3)-invariance of the brightness and depth field dynamics to adapt standard image processing techniques. Inspired by the Horn-Schunck method, we propose an SO(3)-invariant cost to estimate the depth field. At each time step, this yields a diffusion equation on the unit Riemannian sphere that is solved numerically to obtain a real-time depth field estimate over the entire field of view. Two asymptotic observers are derived from the governing equations of the dynamics, based respectively on optical flow and depth estimates; implemented on noisy sequences of synthetic images as well as on real data, they perform a more robust and accurate depth estimation. This approach is complementary to most methods employing state observers for range estimation, which deal only with single or isolated feature points.
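    The Horn-Schunck-style refinement described above can be caricatured on a flat pixel grid (the paper works on the unit Riemannian sphere; this planar Jacobi iteration with a linearized data term is purely illustrative, and none of the variable names come from the paper):

    ```python
    import numpy as np

    def refine_depth(d0, residual, jac, lam=1.0, iters=100):
        """Hypothetical planar analogue of a Horn-Schunck-style depth
        refinement: minimize sum (residual + jac*(d - d0))^2 + lam*|grad d|^2
        by Jacobi iteration. d0 is the current depth estimate, residual the
        brightness mismatch at d0, jac its sensitivity to depth (per-pixel
        arrays or scalars); boundaries are treated as periodic via np.roll."""
        d = np.array(d0, dtype=float, copy=True)
        for _ in range(iters):
            # 4-neighbour average implements the Laplacian smoothing term
            avg = (np.roll(d, 1, 0) + np.roll(d, -1, 0) +
                   np.roll(d, 1, 1) + np.roll(d, -1, 1)) / 4.0
            # pointwise minimizer of the quadratic cost given the neighbours
            d = avg - jac * (residual + jac * (avg - d0)) / (jac ** 2 + 4.0 * lam)
        return d
    ```

    With zero residual the iteration leaves a constant field unchanged; with a nonzero residual it relaxes toward the depth that cancels the data term, smoothed by the diffusion term, mimicking the role of the diffusion equation in the abstract.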

    A hovercraft robot that uses insect-inspired visual autocorrelation for motion control in a corridor

    In this paper we are concerned with the challenge of flight control of computationally constrained micro-aerial vehicles that must rely primarily on vision to navigate confined spaces. We turn to insects for inspiration. We demonstrate that it is possible to control a robot with inertial, flight-like dynamics in the plane using insect-inspired visual autocorrelators, or "elementary motion detectors" (EMDs), to detect patterns of visual optic flow. The controller, which requires minimal computation, receives visual information from a small omnidirectional array of visual sensors and computes thrust outputs for a fan pair to stabilize motion along the centerline of a corridor. To design the controller, we provide a frequency-domain analysis of the response of an array of correlators to a flat moving wall. The model incorporates the effects of motion parallax and perspective and provides a means for computing appropriate inter-sensor angular spacing and visual blurring. The controller estimates the state of robot motion by decomposing the correlator response into harmonics, an operation analogous to that performed by tangential cells in the fly. This work constitutes the first known demonstration of control of non-kinematic inertial dynamics using correlators alone.
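    A Reichardt-style elementary motion detector of the kind this abstract relies on correlates each photoreceptor's delayed signal with its neighbour's current signal and subtracts the mirrored arm. A minimal NumPy sketch (the one-sample delay and array layout are illustrative assumptions, not the paper's implementation):

    ```python
    import numpy as np

    def reichardt_emd(signal, delay=1):
        """Array of Reichardt-style EMDs over adjacent receptor pairs.
        signal: array of shape (time, n_receptors).
        Returns shape (time, n_receptors - 1); with this sign convention
        the output is positive for motion toward increasing receptor index."""
        delayed = np.roll(signal, delay, axis=0)
        delayed[:delay] = 0.0  # zero the wrapped-around samples
        # one arm: delayed(i) * current(i+1); mirrored arm: current(i) * delayed(i+1)
        left = delayed[:, :-1] * signal[:, 1:]
        right = signal[:, :-1] * delayed[:, 1:]
        return left - right
    ```

    Feeding the detector a sinusoidal grating drifting one way or the other flips the sign of the time-averaged response, which is the direction-selective signal the corridor controller decomposes into harmonics.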
