24 research outputs found

    Estimation of General Rigid Body Motion From a Long Sequence of Images

    In estimating 3-D rigid body motion and structure from time-varying images, most previous approaches that exploit a large number of frames assume that the rotation, and in some cases the translation, are constant. For a long sequence of images this assumption is in general not valid. In this paper, we propose a new state estimation formulation for general motion in which the 3-D translation and rotation are modeled as polynomials of arbitrary order. An extended Kalman filter is used to find the estimates recursively from noisy images. A number of simulations, including a Monte Carlo analysis, are conducted to illustrate the performance of the proposed formulation.
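Not the paper's implementation, but a minimal sketch of the underlying idea: model the rotation angle as a second-order polynomial in time (constant angular acceleration) and estimate the polynomial coefficients recursively with a Kalman filter from noisy angle measurements. Since this toy measurement is linear, a plain Kalman filter suffices where the paper needs an extended one; all noise values and the 200-step horizon are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.1
# Constant-acceleration model: the rotation rate is a 1st-order polynomial
# in time, so the angle is a 2nd-order polynomial.
F = np.array([[1.0, dt, 0.5 * dt**2],
              [0.0, 1.0, dt],
              [0.0, 0.0, 1.0]])
H = np.array([[1.0, 0.0, 0.0]])   # only the angle is observed
Q = 1e-6 * np.eye(3)              # small process noise (model is exact here)
R = np.array([[0.01]])            # measurement noise variance (std 0.1 rad)

x = np.zeros(3)                   # estimate of [theta, omega, alpha]
P = np.eye(3)
true = np.array([0.0, 0.2, 0.05]) # true initial angle, rate, acceleration

for _ in range(200):
    true = F @ true                        # simulate the true motion
    z = true[0] + rng.normal(0.0, 0.1)     # noisy angle measurement
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(3) - K @ H) @ P
```

After 200 measurements the filter tracks the polynomial motion closely even though it never observes the rate or acceleration directly.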

    Tracking Moving Objects by a Mobile Camera

    A system for video tracking of a moving object by a robot-held camera is presented, and efficient tracking methods are proposed. We describe our initial implementation of a system capable of tracking a single moving object against a highly textured background. A pyramid-based image processor, the PVM-1, is employed to support fast algorithms for locating the moving object from the difference image. Object tracking is accomplished in the static look-and-move mode by translational motion of a CCD camera mounted on the robot arm. We discuss the implementation of tracking filters and the effective use of multi-resolution processing for object search. Finally, a method for dynamic look-and-move tracking is proposed as a future improvement to tracking performance.
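A minimal numpy sketch of the difference-image step described above, not the PVM-1 pipeline itself: subtract consecutive frames, threshold, and take the centroid of the changed pixels. The threshold and frame sizes are illustrative assumptions; note that the raw difference image highlights both the vacated and the newly occupied regions, which is why a real system refines the result, e.g. with coarse-to-fine pyramid search.

```python
import numpy as np

def locate_moving_object(prev, curr, thresh=30):
    """Centroid (row, col) of pixels that changed between two frames,
    or None if nothing moved. Threshold is an illustrative assumption."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    mask = diff > thresh
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    # NOTE: the centroid lies between the object's old and new positions,
    # since the difference image marks both regions.
    return rows.mean(), cols.mean()

# Synthetic textured background with a bright square that moves.
rng = np.random.default_rng(1)
bg = rng.integers(0, 100, size=(64, 64))
f0, f1 = bg.copy(), bg.copy()
f0[10:14, 10:14] = 255   # object in frame 0
f1[30:34, 40:44] = 255   # object in frame 1

r, c = locate_moving_object(f0, f1)
```

Because the background is identical in both frames, only the two square regions survive the threshold, and the centroid falls exactly midway between them.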

    Implementing Tracking Error Control for Quadrotor UAV


    A Forward Model of Optic Flow for Detecting External Forces

    Robot positioning is an important function of autonomous intelligent robots. However, the application of external forces to a robot can disrupt its normal operation and cause localisation errors. We present a novel approach for detecting external disturbances based on optic flow, without the use of egomotion information. Although this research only moderately validates the efficacy of the model, we argue that it is plausibly applicable to a large number of robotic systems.
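A hedged sketch of the forward-model idea, not the paper's method: a forward model predicts the optic flow expected from the commanded motion, and a large residual between predicted and observed flow flags an external disturbance. The function name, residual statistic, and threshold are all illustrative assumptions.

```python
import numpy as np

def external_force_detected(predicted_flow, observed_flow, thresh=0.5):
    """Flag a disturbance when observed optic flow deviates from the
    forward model's prediction. Arrays are (N, 2) flow vectors;
    the mean-residual statistic and threshold are assumptions."""
    residual = np.linalg.norm(observed_flow - predicted_flow, axis=-1)
    return bool(residual.mean() > thresh)

# Predicted flow for commanded forward motion: 100 vectors, all (0, 1).
pred = np.tile([0.0, 1.0], (100, 1))
obs_ok = pred + 0.05                       # small sensor noise only
obs_pushed = pred + np.array([1.5, 0.0])   # lateral component from a push
```

With only sensor noise the detector stays quiet; a lateral push shifts every flow vector and trips the threshold.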

    Visual navigation using a single camera

    We assess the usefulness of monocular recursive motion estimation techniques for vehicle navigation in the absence of a model of the environment. For this purpose we extend a recently proposed recursive motion estimator, the Essential filter, to handle scale estimation. We examine experimentally the accuracy with which the motion and position of the vehicle can be computed on an 8000-frame indoor sequence. The issues of sampling frequency and the number of features required in the environment are addressed systematically.
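A small numpy sketch of why the scale extension is needed at all, not of the Essential filter itself: the essential matrix determines the translation direction (its left null vector) but not the translation magnitude, since E and any scalar multiple of E have the same null direction. The motion values below are illustrative.

```python
import numpy as np

def skew(v):
    """Cross-product matrix [v]_x such that skew(v) @ u == cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

# Illustrative ground-truth motion: small rotation about z plus translation.
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([1.0, 0.5, 0.2])
t_unit = t / np.linalg.norm(t)

E = skew(t) @ R   # essential matrix: encodes the motion only up to scale

# The translation direction is the left singular vector of E with zero
# singular value (up to sign). The magnitude ||t|| is not recoverable.
U, S, Vt = np.linalg.svd(E)
t_est = U[:, 2]
if np.dot(t_est, t_unit) < 0:
    t_est = -t_est
```

The recovered direction matches the true one exactly, which is precisely why a monocular recursive estimator must carry the scale as extra state.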

    Understanding the Behavior of SFM Algorithms: A Geometric Approach (© 2003 Kluwer Academic Publishers, The Netherlands)

    Abstract. We put forth in this paper a geometrically motivated motion error analysis capable of supporting the investigation of global effects such as inherent ambiguities. This is in contrast with the usual statistical motion error analyses, which can only deal with local effects such as noise perturbations, and in which most results regarding global ambiguities are empirical in nature. The error expression we derive allows us to predict the exact conditions likely to cause ambiguities and how these ambiguities vary with motion type, such as lateral or forward motion. Given the erroneous 3-D motion estimates caused by the inherent ambiguities, it is also important to study the behavior of the resulting distortion in the recovered depth under different motion-scene configurations. Such an investigation may alert us to the occurrence of ambiguities under different conditions and allow us to be more careful in picking the solution. Our formulation, though geometrically motivated, is also put to use in modeling the effect of noise and in revealing the strong influence of feature distribution. Experiments on both synthetic and real image sequences were conducted to verify the various theoretical predictions. Keywords: structure from motion, error analysis, epipolar constraint, inherent ambiguity, depth distortion.
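A small numpy sketch of the epipolar constraint this error analysis builds on, not the paper's derivation: correspondences from the true motion satisfy x2ᵀ E x1 = 0 exactly, while an erroneous motion hypothesis leaves a systematic, nonzero residual. The lateral-motion scenario and the 30°-off hypothesis are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def epipolar_residual(E, x1, x2):
    """Mean absolute epipolar residual |x2^T E x1| over correspondences."""
    return float(np.abs(np.einsum('ni,ij,nj->n', x2, E, x1)).mean())

# True motion between two views: pure lateral translation (X2 = R X1 + t).
R = np.eye(3)
t = np.array([1.0, 0.0, 0.0])
E_true = skew(t) @ R

# Random scene points in front of the camera, projected to normalized
# image coordinates in both views.
X1 = rng.uniform([-1.0, -1.0, 2.0], [1.0, 1.0, 6.0], size=(20, 3))
X2 = X1 @ R.T + t
x1 = X1 / X1[:, 2:]
x2 = X2 / X2[:, 2:]

r_true = epipolar_residual(E_true, x1, x2)
# A wrong hypothesis: translation direction off by about 30 degrees.
E_wrong = skew(np.array([np.cos(0.5), np.sin(0.5), 0.0])) @ R
r_wrong = epipolar_residual(E_wrong, x1, x2)
```

The true motion drives the residual to numerical zero, while the perturbed direction leaves a residual that depends on the scene depths, which is the kind of structured error the geometric analysis characterizes.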

    Collision Avoidance for UAVs Using Optic Flow Measurement with Line of Sight Rate Equalization and Looming

    A series of simplified scenarios is investigated in which an optical-flow-balancing guidance law is used to avoid obstacles by steering an air vehicle between fixed objects/obstacles. These obstacles are registered as specific points that can be representative of features in a scene, and they appear in the field of view of a single forward-looking camera. First, a 2-D analysis is presented in which the rate of the line of sight (LOS) from the vehicle to each obstacle is measured. The analysis proceeds by initially using no field-of-view (FOV) limitations, then applying FOV restrictions, and then adding features or obstacles to the scene. These analyses show that a guidance law that equalizes the LOS rates with no FOV limitations actually steers the vehicle into one of the objects for all initial conditions. The research next develops an obstacle avoidance strategy based on equilibrating the optic flow generated by the obstacles and presents an analysis leading to a different conclusion: balancing the optic flows does avoid the obstacles. The paper then describes a set of guidance methods that, with realistic FOV limitations, produce a favorable result. Finally, the looming of an object in the camera's FOV can be measured and used to synthesize a collision avoidance guidance law. For the simple 2-D case, looming is quantified as an increase in the LOS separation between two features on a wall in front of the air vehicle. The 2-D guidance law for equalizing the optic flow and detecting looming is then extended to the 3-D case, and a set of 3-D scenarios is further explored using a decoupled two-channel approach. In addition, a comparison of two image segmentation techniques used to find optic flow vectors is presented.
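A toy sketch of the two cues named above, not the paper's guidance laws: a turn-rate command proportional to the left/right optic flow imbalance, and a looming cue measured as the growth rate of the LOS separation between two features. The gains and sign conventions are illustrative assumptions.

```python
def balance_command(flow_left, flow_right, gain=1.0):
    """Turn-rate command that equalizes left/right optic flow magnitudes.
    Convention (an assumption): a nearer obstacle generates stronger flow,
    and a positive command steers away from the stronger-flow side."""
    return gain * (flow_left - flow_right)

def looming(sep_prev, sep_curr, dt):
    """Looming cue: rate of increase of the LOS separation (radians)
    between two features; a positive value signals an approaching surface."""
    return (sep_curr - sep_prev) / dt

# Obstacle closer on the left -> stronger left flow -> positive (rightward)
# command; features on an approaching wall spread apart -> positive looming.
cmd = balance_command(2.0, 1.0)
loom = looming(0.10, 0.16, 0.1)
```

As the 2-D analysis in the paper shows, the balancing cue alone fails under FOV limits, which is why the looming cue is added as an independent collision signal.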

    Computational Imaging for Shape Understanding

    Geometry is an essential property of real-world scenes, and understanding the shape of an object is critical to many computer vision applications. In this dissertation, we explore computational imaging approaches to recover the geometry of real-world scenes. Computational imaging is an emerging technique that uses the co-design of imaging hardware and computational software to expand the capabilities of traditional cameras. To tackle face recognition in uncontrolled environments, we study 2D color images and 3D shape to deal with body movement and self-occlusion. In particular, we use multiple RGB-D cameras to fuse varying poses and register the frontal face in a unified coordinate system; deep color features and geodesic distance features are then used to complete face recognition. For underwater imaging applications, we study the angular-spatial encoding and polarization-state encoding of light rays using computational imaging devices. Specifically, we use a light field camera to tackle the challenging problem of underwater 3D reconstruction, leveraging the angular sampling of the light field for robust depth estimation, and we develop a fast ray marching algorithm to improve efficiency. To deal with arbitrary reflectance, we investigate polarimetric imaging and develop polarimetric Helmholtz stereopsis, which uses reciprocal polarimetric image pairs for high-fidelity 3D surface reconstruction; we formulate new reciprocity and diffuse/specular polarimetric constraints to recover surface depths and normals in an optimization framework. To recover 3D shape under unknown and uncontrolled natural illumination, we use two circularly polarized spotlights to boost the polarization cues corrupted by environment lighting and to provide photometric cues. To mitigate the effect of uncontrolled environment light in the photometric constraints, we estimate a lighting proxy map and iteratively refine the normal and lighting estimates. Through extensive experiments on simulated and real images, we demonstrate that our proposed computational imaging methods outperform traditional imaging approaches.

    Depth perception from motion under viewpoint distortion

    Get PDF
    Master's thesis (Master of Engineering)