
    Euclidean position estimation of static features using a moving uncalibrated camera

    In this paper, a novel Euclidean position estimation technique using a single uncalibrated camera mounted on a moving platform is developed to asymptotically recover the 3-D Euclidean position of static object features. The position of the moving platform is assumed to be measurable, and a second object with known 3-D Euclidean coordinates relative to the world frame is considered to be available a priori. To account for the unknown camera calibration parameters and to estimate the unknown 3-D Euclidean coordinates, an adaptive least squares estimation strategy is employed based on prediction error formulations and a Lyapunov-type stability analysis. The developed estimator is shown to recover the 3-D Euclidean position of the unknown object features despite the lack of knowledge of the camera calibration parameters. Numerical simulation results along with experimental results are presented to illustrate the effectiveness of the proposed algorithm. © 2011 IEEE. DOE and Honda Corporation.
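
    The estimator's equations are not given in the abstract; the following is a minimal numerical sketch of the general idea of an adaptive least-squares update driven by a prediction error, assuming a generic linear-in-parameters model y = W theta (the regressor, dimensions, and toy data below are illustrative assumptions, not the paper's formulation).

        import numpy as np

        # Minimal recursive least-squares sketch: theta_hat collects the unknown
        # quantities, W is a regressor built from measurable signals, y is the
        # measured output, and the update is driven by the prediction error
        # e = y - W @ theta_hat.
        def rls_update(theta_hat, P, W, y, forgetting=1.0):
            """One recursive least-squares step with prediction-error feedback."""
            e = y - W @ theta_hat                        # prediction error
            S = forgetting * np.eye(len(y)) + W @ P @ W.T
            K = P @ W.T @ np.linalg.inv(S)               # estimator gain
            theta_hat = theta_hat + K @ e                # parameter update
            P = (P - K @ W @ P) / forgetting             # covariance update
            return theta_hat, P

        # Toy usage: recover a constant 3-vector from noisy scalar projections.
        rng = np.random.default_rng(0)
        theta_true = np.array([1.0, -2.0, 0.5])
        theta_hat, P = np.zeros(3), 1e3 * np.eye(3)
        for _ in range(200):
            W = rng.normal(size=(1, 3))                  # measurable regressor
            y = W @ theta_true + 0.01 * rng.normal(size=1)
            theta_hat, P = rls_update(theta_hat, P, W, y)
        print(theta_hat)                                 # close to theta_true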

    Nonlinear Control and Estimation Techniques with Applications to Vision-based and Biomedical Systems

    This dissertation is divided into four self-contained chapters. In Chapter 1, a new estimator using a single calibrated camera mounted on a moving platform is developed to asymptotically recover the range and the three-dimensional (3D) Euclidean position of a static object feature. The estimator also recovers the constant 3D Euclidean coordinates of the feature relative to the world frame as a byproduct. The position and orientation of the camera are assumed to be measurable, unlike in existing observers, where velocity measurements are assumed to be known. To estimate the unknown range variable, an adaptive least squares estimation strategy is employed based on a novel prediction error formulation. A Lyapunov stability analysis is used to prove the convergence properties of the estimator. The developed estimator has a simple mathematical structure and can be used to identify the range and 3D Euclidean coordinates of multiple features. These properties make the estimator suitable for use with robot navigation algorithms where position measurements are readily available. Numerical simulation results along with experimental results are presented to illustrate the effectiveness of the proposed algorithm.

    In Chapter 2, a novel Euclidean position estimation technique using a single uncalibrated camera mounted on a moving platform is developed to asymptotically recover the three-dimensional (3D) Euclidean position of static object features. The position of the moving platform is assumed to be measurable, and a second object with known 3D Euclidean coordinates relative to the world frame is considered to be available a priori. To account for the unknown camera calibration parameters and to estimate the unknown 3D Euclidean coordinates, an adaptive least squares estimation strategy is employed based on prediction error formulations and a Lyapunov-type stability analysis. The developed estimator is shown to recover the 3D Euclidean position of the unknown object features despite the lack of knowledge of the camera calibration parameters. Numerical simulation results along with experimental results are presented to illustrate the effectiveness of the proposed algorithm.

    In Chapter 3, a new range identification technique for a calibrated paracatadioptric system mounted on a moving platform is developed to recover the range information and the three-dimensional (3D) Euclidean coordinates of a static object feature. The position of the moving platform is assumed to be measurable. To identify the unknown range, a function of the projected pixel coordinates is first related to the unknown 3D Euclidean coordinates of an object feature. This function is nonlinearly parameterized (i.e., the unknown parameters appear nonlinearly in the parameterized model). An adaptive estimator based on a min-max algorithm is then designed to estimate the unknown 3D Euclidean coordinates of an object feature relative to a fixed reference frame, which facilitates the identification of range. A Lyapunov-type stability analysis is used to show that the developed estimator estimates the unknown parameters to within a desired precision. Numerical simulation results are presented to illustrate the effectiveness of the proposed range estimation technique.

    In Chapter 4, optimization of antiangiogenic therapy for tumor management is considered as a nonlinear control problem. A new technique is developed to optimize antiangiogenic therapy, minimizing the volume of a tumor and preventing it from growing using an optimum drug dose. To this end, an optimum desired trajectory is designed to minimize a performance index. Two controllers are then presented that drive the tumor volume to its optimum value. The first controller is proven to yield exponential results given exact model knowledge. The second controller is developed under the assumption of parametric uncertainties in the system model. A least-squares estimation strategy based on a prediction error formulation and a Lyapunov-type stability analysis is developed to estimate the unknown parameters of the performance index. An adaptive controller is then designed to track the desired optimum trajectory. The proposed tumor minimization scheme is shown to minimize the tumor volume with an optimum drug dose despite the lack of knowledge of the system parameters. Numerical simulation results are presented to illustrate the effectiveness of the proposed technique. An extension of the developed technique to a mathematical model that accounts for pharmacodynamics and pharmacokinetics is also presented. Furthermore, a technique for the estimation of the carrying capacity of endothelial cells is also presented.
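
    The dissertation's models are not reproduced in the abstract; as a hedged illustration of the kind of dynamics targeted in Chapter 4, the sketch below simulates a Hahnfeldt-type tumor/carrying-capacity model under a constant drug dose. The model form, parameter values, and dose are assumptions made only to give a concrete picture, not the dissertation's formulation.

        import numpy as np

        # Illustrative only: a Hahnfeldt-type model in which the tumor volume V
        # tracks a carrying capacity K that antiangiogenic dosing u suppresses.
        # Parameter values and the dose schedule are assumptions.
        def simulate(V0=1e4, K0=1e4, u=0.0, T=50.0, dt=1e-3,
                     lam=0.192, b=5.85, d=0.00873, e=0.66):
            V, K = V0, K0
            for _ in range(int(T / dt)):                           # forward Euler
                dV = -lam * V * np.log(V / K)                      # tumor volume
                dK = b * V - d * V ** (2.0 / 3.0) * K - e * u * K  # carrying capacity
                V, K = max(V + dt * dV, 1e-9), max(K + dt * dK, 1e-9)
            return V, K

        print(simulate(u=0.0))    # no therapy: V grows toward the untreated equilibrium
        print(simulate(u=10.0))   # constant dose: carrying capacity and tumor shrink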

    Robust visual servoing in 3d reaching tasks

    This paper describes a novel approach to the problem of reaching an object in space under visual guidance. The approach is characterized by great robustness to calibration errors, such that virtually no calibration is required. Servoing is based on binocular vision: a continuous measure of the end-effector motion field, derived from real-time computation of the binocular optical flow over the stereo images, is compared with the actual position of the target, and the relative error in the end-effector trajectory is continuously corrected. The paper outlines the general framework of the approach, shows how the visual measures are obtained, and discusses the synthesis of the controller along with its stability analysis. Real-time experiments are presented to show the applicability of the approach in real 3-D applications.
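
    The controller itself is not spelled out in the abstract; the following is a minimal sketch of the general closed-loop idea only: a Cartesian velocity command is corrected from stacked left/right image errors through deliberately rough image Jacobians, mimicking the near-calibration-free setting (the numbers and Jacobians below are illustrative assumptions, not the paper's controller).

        import numpy as np

        # One correction step: pixel errors between the tracked end-effector and
        # the target in the left and right views are mapped to a 3-D velocity
        # command through stacked (and here intentionally crude) image Jacobians.
        def servo_step(p_left, p_right, t_left, t_right, J_left, J_right, gain=0.5):
            e = np.concatenate([t_left - p_left, t_right - p_right])  # stacked errors
            J = np.vstack([J_left, J_right])                          # stacked 2x3 Jacobians
            return gain * np.linalg.pinv(J) @ e                       # velocity command

        # Toy usage with a rough Jacobian reused for both views.
        J_rough = np.array([[100.0, 0.0, -20.0], [0.0, 100.0, -20.0]])
        v = servo_step(np.array([310.0, 242.0]), np.array([290.0, 241.0]),
                       np.array([320.0, 240.0]), np.array([300.0, 240.0]),
                       J_rough, J_rough)
        print(v)   # drives the end-effector toward the target despite the crude model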

    A Comparative Study between Analytic and Estimated Image Jacobian by Using a Stereoscopic System of Cameras

    This paper describes a comparative study of performance between the estimated image Jacobian, obtained by taking the epipolar geometry of a two-camera system into account, and the well-known analytic image Jacobian that is used in most visual servoing applications. An Image Based Visual Servoing architecture is used for controlling a 3 DOF articular system with two cameras in an eye-to-hand configuration. Tests in static and dynamic cases were carried out and showed that the Jacobian estimated using the properties of the epipolar geometry is as good as, and as robust against noise as, the analytic Jacobian. This is an advantage because, in contrast to the analytic Jacobian, the estimated Jacobian does not require laborious preparatory work prior to the control task.
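
    The abstract does not describe the estimation procedure itself; as background, the sketch below shows one common way to estimate an image Jacobian online, a Broyden-style secant update, together with the classic image-based visual servoing velocity law (the paper's estimator additionally exploits the epipolar geometry of the two cameras, which is not reproduced here; dimensions and values are illustrative).

        import numpy as np

        def broyden_update(J_hat, dq, ds, alpha=0.2):
            """Rank-one correction so that J_hat @ dq better matches the
            observed image-feature displacement ds."""
            denom = float(dq @ dq)
            if denom < 1e-12:
                return J_hat
            return J_hat + alpha * np.outer(ds - J_hat @ dq, dq) / denom

        def ibvs_velocity(J_hat, s, s_star, gain=0.8):
            """Classic IBVS law: joint velocities driving the feature vector s
            toward the reference s_star."""
            return -gain * np.linalg.pinv(J_hat) @ (s - s_star)

        # Usage sketch: after commanding dq and observing the feature motion ds,
        # refine the Jacobian estimate and compute the next velocity command.
        J_hat = np.eye(4, 3)                       # 2 features x 2 coords, 3 joints
        J_hat = broyden_update(J_hat, dq=np.array([0.01, 0.0, 0.02]),
                               ds=np.array([1.5, -0.3, 0.9, 0.1]))
        dq_next = ibvs_velocity(J_hat, s=np.array([300.0, 210.0, 260.0, 190.0]),
                                s_star=np.array([320.0, 240.0, 280.0, 220.0]))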

    Image Based Visual Servoing: Estimated Image Jacobian by Using Fundamental Matrix VS Analytic Jacobian

    This paper describes a comparative study of performance between the estimated image Jacobian, obtained by taking the epipolar geometry of a two-camera system into account, and the well-known analytic image Jacobian that is used in most visual servoing applications. An Image Based Visual Servoing architecture is used for controlling a 3 d.o.f. articular system with two cameras in an eye-to-hand configuration. Tests in static and dynamic cases were carried out and showed that the Jacobian estimated using the properties of the epipolar geometry is as good as, and as robust against noise as, the analytic Jacobian. This is an advantage because, in contrast to the analytic Jacobian, the estimated Jacobian does not require laborious preparatory work prior to the control task.
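
    Here the estimated Jacobian is built from the fundamental matrix; as background, the sketch below is a textbook normalized 8-point estimate of F from matched pixel coordinates in the two cameras (a standard method rather than the paper's specific estimator; at least 8 correspondences are assumed and outlier handling is omitted).

        import numpy as np

        def normalize(pts):
            """Center the points and scale them to mean distance sqrt(2)."""
            c = pts.mean(axis=0)
            s = np.sqrt(2.0) / np.mean(np.linalg.norm(pts - c, axis=1))
            T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1.0]])
            return np.hstack([pts, np.ones((len(pts), 1))]) @ T.T, T

        def fundamental_matrix(x1, x2):
            """Normalized 8-point algorithm; x1, x2 are (N, 2) matched pixels."""
            p1, T1 = normalize(np.asarray(x1, float))
            p2, T2 = normalize(np.asarray(x2, float))
            # Each match gives one row of A f = 0 from the constraint x2' F x1 = 0.
            A = np.column_stack([p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
                                 p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
                                 p1[:, 0], p1[:, 1], np.ones(len(p1))])
            F = np.linalg.svd(A)[2][-1].reshape(3, 3)
            U, S, Vt = np.linalg.svd(F)
            F = U @ np.diag([S[0], S[1], 0.0]) @ Vt    # enforce rank 2
            F = T2.T @ F @ T1                          # undo the normalization
            return F / F[2, 2]                         # assumes F[2, 2] != 0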

    Monitoring Activities from Multiple Video Streams: Establishing a Common Coordinate Frame

    Passive monitoring of large sites typically requires coordination between multiple cameras, which in turn requires methods for automatically relating events between distributed cameras. This paper tackles the problem of self-calibration of multiple cameras which are very far apart, using feature correspondences to determine the camera geometry. The key problem is finding such correspondences. Since the camera geometry and photometric characteristics vary considerably between images, one cannot use brightness and/or proximity constraints. Instead we apply planar geometric constraints to moving objects in the scene in order to align the scene's ground plane across multiple views. We do not assume synchronized cameras, and we show that enforcing geometric constraints enables us to align the tracking data in time. Once we have recovered the homography which aligns the planar structure in the scene, we can compute from the homography matrix the 3D position of the plane and the relative camera positions. This in turn enables us to recover a homography matrix which maps the images to an overhead view. We demonstrate this technique in two settings: a controlled lab setting where we test the effects of errors in internal camera calibration, and an uncontrolled, outdoor setting in which the full procedure is applied to external camera calibration and ground plane recovery. In spite of noise in the internal camera parameters and image data, the system successfully recovers both planar structure and relative camera positions in both settings.
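
    The central geometric tool referred to here is a plane-induced homography between views; the sketch below is a plain DLT estimate of such a homography from corresponding ground-plane image points (for instance, tracked object footprints), plus a helper that maps points through it. Point normalization, the temporal alignment, and the decomposition of H into plane and relative camera pose described in the paper are not reproduced; at least 4 correspondences are assumed.

        import numpy as np

        def homography_dlt(src, dst):
            """DLT estimate of H with dst ~ H src; src, dst are (N, 2) arrays."""
            rows = []
            for (x, y), (u, v) in zip(src, dst):
                rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
                rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
            H = np.linalg.svd(np.asarray(rows, float))[2][-1].reshape(3, 3)
            return H / H[2, 2]

        def apply_homography(H, pts):
            """Map (N, 2) points through H, e.g. to synthesize an overhead view."""
            ph = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
            return ph[:, :2] / ph[:, 2:3]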

    Dynamic Estimation of Rigid Motion from Perspective Views via Recursive Identification of Exterior Differential Systems with Parameters on a Topological Manifold

    We formulate the problem of estimating the motion of a rigid object viewed under perspective projection as the identification of a dynamic model in Exterior Differential form with parameters on a topological manifold. We first describe a general method for recursive identification of nonlinear implicit systems using prediction error criteria. The parameters are allowed to move slowly on some topological (not necessarily smooth) manifold. The basic recursion is solved in two different ways: one is based on a simple extension of the traditional Kalman Filter to nonlinear and implicit measurement constraints, the other may be regarded as a generalized "Gauss-Newton" iteration, akin to traditional Recursive Prediction Error Method techniques in linear identification. A derivation of the "Implicit Extended Kalman Filter" (IEKF) is reported in the appendix. The identification framework is then applied to solving the visual motion problem: it is indeed possible to characterize it in terms of identification of an Exterior Differential System with parameters living on a C^0 topological manifold, called the "essential manifold". We consider two alternative estimation paradigms. The first is in the local coordinates of the essential manifold: we estimate the state of a nonlinear implicit model on a linear space. The second is obtained by a linear update on the (linear) embedding space followed by a projection onto the essential manifold. These schemes proved successful in performing the motion estimation task, as we show in experiments on real and noisy synthetic image sequences.
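
    As a small concrete illustration of the second estimation paradigm mentioned above, a linear update in the embedding space followed by a projection onto the essential manifold, the sketch below projects an arbitrary 3x3 matrix onto the closest essential matrix in the Frobenius sense (the toy perturbation is an assumption made only for demonstration).

        import numpy as np

        def project_to_essential(E_hat):
            """Closest essential matrix: equalize the two largest singular
            values and zero out the third."""
            U, S, Vt = np.linalg.svd(E_hat)
            sigma = 0.5 * (S[0] + S[1])
            return U @ np.diag([sigma, sigma, 0.0]) @ Vt

        def skew(t):
            return np.array([[0, -t[2], t[1]], [t[2], 0, -t[0]], [-t[1], t[0], 0.0]])

        # Toy usage: perturb a true essential matrix E = [t]_x R, project back,
        # and check that the singular values recover the (s, s, 0) pattern.
        E_true = skew(np.array([1.0, 0.2, -0.3])) @ np.eye(3)
        E_noisy = E_true + 0.05 * np.random.default_rng(1).normal(size=(3, 3))
        print(np.linalg.svd(project_to_essential(E_noisy))[1])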

    Constrained Bundle Adjustment for Structure From Motion Using Uncalibrated Multi-Camera Systems

    Structure from motion using uncalibrated multi-camera systems is a challenging task. This paper proposes a bundle adjustment solution that incorporates a baseline constraint reflecting the fact that the cameras are static relative to each other. We assume the cameras are mounted on a mobile platform, uncalibrated, and coarsely synchronized. The baseline constraint is formulated for the scenario in which the cameras have overlapping views, and it is incorporated in the bundle adjustment solution to keep the relative motion of the different cameras static. Experiments were conducted using video frames from two collocated GoPro cameras mounted on a vehicle with no system calibration; the two cameras were placed so that they captured overlapping content. We performed bundle adjustment with the proposed constraint and then produced 3D dense point clouds. Evaluations were performed by comparing these dense point clouds against LiDAR reference data. We showed that, compared to traditional bundle adjustment, our proposed method achieved an improvement of 29.38%.
    Comment: to be published in ISPRS Congress 202
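
    The exact form of the constraint is not given in the abstract; one plausible way to express a baseline constraint as an extra residual block appended to the usual reprojection residuals is sketched below, penalizing per-frame deviations of the relative pose between the two rig cameras from a single rig transform (R_rel, t_rel). The pose convention, weighting, and residual parameterization are assumptions, not the paper's formulation.

        import numpy as np

        def baseline_residuals(R_a, t_a, R_b, t_b, R_rel, t_rel, weight=1.0):
            """Per-frame lists of world-to-camera poses (x_cam = R x_world + t)
            for cameras A and B; returns stacked rotation/translation deviations
            from the nominal rig transform, to be appended to the BA residuals."""
            res = []
            for Ra, ta, Rb, tb in zip(R_a, t_a, R_b, t_b):
                R_ab = Rb @ Ra.T                 # relative rotation at this frame
                t_ab = tb - R_ab @ ta            # relative translation at this frame
                dR = R_rel.T @ R_ab              # rotation deviation from the rig
                res.append(weight * np.array([dR[2, 1] - dR[1, 2],
                                              dR[0, 2] - dR[2, 0],
                                              dR[1, 0] - dR[0, 1]]))  # ~2 sin(angle) * axis
                res.append(weight * (t_ab - t_rel))
            return np.concatenate(res)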