
    Conferring robustness to path-planning for image-based control

    Path-planning has been proposed in visual servoing for reaching the desired location while fulfilling various constraints. Unfortunately, the real trajectory can differ significantly from the reference trajectory due to uncertainties in the model used, with the consequence that some constraints may not be fulfilled, leading to a failure of the visual servoing task. This paper proposes a new strategy for addressing this problem, in which robustness is conferred on the path-planning scheme by considering families of admissible models. To obtain these families, uncertainty in the form of random variables is introduced on the available image points and intrinsic parameters. Two families are considered: one generated from a given number of admissible models corresponding to extreme values of the uncertainty, and one obtained by estimating the extreme values of the components of the admissible models. Each model in these families identifies a reference trajectory, which is parametrized by design variables common to all the models. The design variables are then determined by imposing that all the reference trajectories fulfill the required constraints. Discussions on the convergence and robustness of the proposed strategy are provided, in particular showing that satisfaction of the visibility and workspace constraints for the second family ensures satisfaction of these constraints for all models bounded by this family. The proposed strategy is illustrated through simulations and experiments. © 2011 IEEE.
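    The first family described above, built from extreme values of the bounded uncertainty, can be sketched in Python. This is an illustrative sketch only: the `extreme_value_models` helper and the half-pixel bound are assumptions, and the paper's models also perturb the intrinsic parameters.

    ```python
    import itertools
    import numpy as np

    def extreme_value_models(points, bound):
        """Generate admissible image-point sets at the extreme values of a
        bounded, per-coordinate uncertainty (vertices of the uncertainty box).
        Hypothetical helper for illustration only."""
        models = []
        flat = points.ravel()
        for signs in itertools.product((-1.0, 1.0), repeat=flat.size):
            models.append((flat + bound * np.asarray(signs)).reshape(points.shape))
        return models

    pts = np.array([[100.0, 120.0]])              # one image point (pixels)
    family = extreme_value_models(pts, bound=0.5)  # 2^2 = 4 extreme models
    ```

    Each member of `family` would then induce its own reference trajectory, with the design variables shared across all of them.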

    Designing image trajectories in the presence of uncertain data for robust visual servoing path-planning

    Path-planning allows one to steer a camera to a desired location while taking into account constraints such as visibility, workspace, and joint limits. Unfortunately, the planned path can differ significantly from the real path due to uncertainty in the available data, with the consequence that some constraints may not be fulfilled by the real path even if they are satisfied by the planned path. In this paper we address the problem of performing robust path-planning, i.e., computing a path that satisfies the required constraints not only for the nominal model, as in traditional path-planning, but for a family of admissible models. Specifically, we consider an uncertain model in which the point correspondences between the initial and desired views and the camera intrinsic parameters are affected by unknown random uncertainties with known bounds. The difficulty is that traditional path-planning schemes applied to different models lead to different paths rather than to a common, robust path. To solve this problem we propose a technique based on polynomial optimization in which the required constraints are imposed on a number of trajectories corresponding to admissible camera poses and parameterized by a common design variable. The planned image trajectory is then followed using an IBVS controller. Simulations carried out with all typical uncertainties that characterize a real experiment illustrate the proposed strategy and provide promising results. © 2009 IEEE.

    A visual servoing path-planning strategy for cameras obeying the unified model

    Part of the 2010 IEEE Multi-Conference on Systems and Control. Recently, a unified camera model has been introduced in visual control systems to describe, through a unique mathematical model, conventional perspective cameras, fisheye cameras, and catadioptric systems. In this paper, a path-planning strategy for visual servoing is proposed for any camera obeying this unified model. The proposed strategy is based on the projection of the available image projections onto a virtual plane. This has two benefits. First, it allows one to perform camera pose estimation and 3D object reconstruction using methods for conventional cameras that are not valid for other cameras. Second, it allows one to perform image path-planning for multi-constraint satisfaction using a simplified but equivalent projection model, which in this paper is addressed by introducing polynomial parametrizations of the rotation and translation. The planned image trajectory is then tracked using an IBVS controller. The proposed strategy is validated through simulations with image noise and calibration errors typical of real experiments. It is worth remarking that visual servoing path-planning for non-conventional perspective cameras has not yet been proposed in the literature. © 2010 IEEE. The 2010 IEEE International Symposium on Computer-Aided Control System Design (CACSD), Yokohama, Japan, 8-10 September 2010. In Proceedings of CACSD, 2010, p. 1795-180
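    The unified camera model referred to above projects a 3D point onto the unit sphere and then perspectively from a point offset along the optical axis by a mirror parameter ξ; ξ = 0 recovers the conventional perspective camera, while ξ > 0 covers catadioptric and fisheye geometries. A minimal sketch in normalised coordinates (intrinsics omitted; the function name is our own, not the paper's):

    ```python
    import numpy as np

    def unified_projection(X, xi):
        """Unified camera model: project X onto the unit sphere, then apply
        a perspective division shifted by the mirror parameter xi."""
        Xs = X / np.linalg.norm(X)   # point on the unit sphere
        return Xs[:2] / (Xs[2] + xi) # shifted perspective step

    X = np.array([1.0, 2.0, 2.0])
    m_persp = unified_projection(X, xi=0.0)   # conventional camera: X/Z
    m_cata = unified_projection(X, xi=1.0)    # a catadioptric-type camera
    ```

    The virtual-plane idea in the abstract exploits the fact that, once ξ is known, the spherical step can be undone, leaving the simpler perspective model for planning.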

    Deeper understanding of the homography decomposition for vision-based control

    The displacement of a calibrated camera between two images of a planar object can be estimated by decomposing a homography matrix. The aim of this document is to propose a new method for solving the homography decomposition problem. This method provides analytical expressions for the solutions of the problem, instead of the traditional numerical procedures. As a result, the translation vector, rotation matrix, and object-plane normal are explicitly expressed as functions of the entries of the homography matrix. The main advantage of this method is that it provides a deeper understanding of the homography decomposition problem. For instance, it allows one to obtain the relations among the possible solutions of the problem. Thus, new vision-based robot control laws can be designed. For example, the control schemes proposed in this report combine the two final solutions of the problem (only one of them being the true one), assuming that there is no a priori knowledge for discerning between them.
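    The forward model that this decomposition inverts is the standard Euclidean homography H = R + t·nᵀ/d for a plane nᵀX = d expressed in the first camera frame. A short sketch of the forward map (toy numbers of our choosing), useful for checking any decomposition candidate:

    ```python
    import numpy as np

    def homography_from_pose(R, t, n, d):
        """Euclidean homography H = R + t n^T / d; the decomposition
        problem recovers (R, t, n) from H up to the known ambiguities."""
        return R + np.outer(t, n) / d

    # toy example: pure x-translation, fronto-parallel plane z = 2
    R = np.eye(3)
    t = np.array([0.4, 0.0, 0.0])
    n = np.array([0.0, 0.0, 1.0])
    H = homography_from_pose(R, t, n, d=2.0)

    X1 = np.array([1.0, 1.0, 2.0])   # a point on the plane, view 1
    X2 = R @ X1 + t                  # same point in view 2
    x2 = H @ (X1 / X1[2])            # homography on normalised coords
    # x2 is proportional to X2 / X2[2]
    ```

    Running a candidate (R, t, n) through this map and comparing against H is a simple consistency test for the analytical solutions.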

    Theory, Design, and Implementation of Landmark Promotion Cooperative Simultaneous Localization and Mapping

    Simultaneous Localization and Mapping (SLAM) is a challenging problem in practice, and the use of multiple robots and inexpensive sensors poses even more demands on the designer. Cooperative SLAM poses specific challenges in the areas of computational efficiency, software/network performance, and robustness to errors. New methods in image processing, recursive filtering, and SLAM have been developed to implement practical algorithms for cooperative SLAM on a set of inexpensive robots. The Consolidated Unscented Mixed Recursive Filter (CUMRF) is designed to handle non-linear systems with non-Gaussian noise. This is accomplished using the Unscented Transform combined with Gaussian Mixture Models. The Robust Kalman Filter is an extension of the Kalman Filter algorithm that improves the ability to remove erroneous observations using Principal Component Analysis (PCA) and the X84 outlier rejection rule. Forgetful SLAM is a local SLAM technique that runs in nearly constant time relative to the number of visible landmarks and improves poorly performing sensors through sensor fusion and outlier rejection. Forgetful SLAM correlates all measured observations but stops the state from growing over time. Hierarchical Active Ripple SLAM (HAR-SLAM) is a new SLAM architecture that breaks the traditional state space of SLAM into a chain of smaller state spaces, allowing multiple robots, multiple sensors, and multiple updates to occur in linear time with linear storage with respect to the number of robots, landmarks, and robot poses. This dissertation presents explicit methods for closing the loop, joining multiple robots, and active updates. Landmark Promotion SLAM is a hierarchy of new SLAM methods using the Robust Kalman Filter, Forgetful SLAM, and HAR-SLAM. Practical aspects of SLAM are a focus of this dissertation. LK-SURF is a new image processing technique that combines Lucas-Kanade feature tracking with Speeded-Up Robust Features to perform spatial and temporal tracking.
Typical stereo correspondence techniques fail at providing descriptors for features or fail at temporal tracking. Several calibration and modeling techniques are also covered, including calibrating stereo cameras, aligning stereo cameras to an inertial system, and building neural-net system models. These methods are important for improving the quality of the data and images acquired for the SLAM process.
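The X84 rejection rule mentioned above keeps observations within k median absolute deviations (MAD) of the median residual. A minimal sketch, with k = 5.2 (roughly 3.5σ for Gaussian data, a common choice that may differ from the dissertation's setting):

```python
import numpy as np

def x84_inliers(residuals, k=5.2):
    """X84 rule: flag residuals within k MADs of the median as inliers.
    Robust to gross outliers because both centre and scale are medians."""
    med = np.median(residuals)
    mad = np.median(np.abs(residuals - med))
    return np.abs(residuals - med) <= k * mad

r = np.array([0.1, -0.2, 0.05, 0.0, 25.0])   # one gross outlier
mask = x84_inliers(r)                        # outlier flagged False
```

Because median and MAD have a 50% breakdown point, a single wild measurement cannot inflate the threshold the way it would with a mean/standard-deviation gate.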

    Vision-based control of a differential-algebraic quaternion camera model

    This work deals with the image-based feedback control of a camera, providing an optimization-based design method for a controller that positions the projection of an external point (feature) at a specified display coordinate. Working with a Differential-Algebraic Representation (DAR) of the camera dynamics modeled in terms of quaternions, a static output feedback (SOF) controller that uses the error between the desired and current image is determined to generate a torque input for the system. Using the Lyapunov method for stability analysis, the problem is converted into an optimization problem subject to constraints in the form of Bilinear Matrix Inequalities (BMI), which is solved through an iterative process. The results with the DAR are compared to a similar process using a Quasi-Linear Parameter-Varying (Quasi-LPV) representation, which is developed in parallel throughout the text. Numerical results are provided to demonstrate the practicability of the method and show that a feasible solution achieves the objective of making the error asymptotically approach zero.

    Methods, Models, and Datasets for Visual Servoing and Vehicle Localisation

    Machine autonomy has become a vibrant part of industrial and commercial aspirations. A growing demand exists for dexterous and intelligent machines that can work in unstructured environments without any human assistance. An autonomously operating machine should sense its surroundings, classify different kinds of observed objects, and interpret sensory information to perform necessary operations. This thesis summarizes original methods aimed at enhancing a machine's autonomous operation capability. These methods and the corresponding results are grouped into two main categories. The first category consists of research that focuses on improving visual servoing systems for robotic manipulators to accurately position workpieces. We start our investigation with the hand-eye calibration problem, which focuses on calibrating visual sensors with a robotic manipulator. We thoroughly investigate the problem from various perspectives and provide alternative formulations of the problem and error objectives. The experimental results demonstrate that the proposed methods are robust and yield accurate solutions when tested on real and simulated data. The work package is bundled as a toolkit and is available online for public use. In an extension, we propose a constrained multiview pose estimation approach for robotic manipulators. The approach exploits the available geometric constraints on the robotic system and infuses them directly into the pose estimation method. The empirical results demonstrate higher accuracy and significantly higher precision compared to other studies. In the second part of this research, we tackle problems pertaining to the field of autonomous vehicles and related applications. First, we introduce a pose estimation and mapping scheme to extend the application of visual Simultaneous Localization and Mapping to unstructured dynamic environments. We identify, extract, and discard dynamic entities from the pose estimation step.
Moreover, we track the dynamic entities and actively update the map based on changes in the environment. Having observed the limitations of existing datasets during our earlier work, we introduce FinnForest, a novel dataset for testing and validating the performance of visual odometry and Simultaneous Localization and Mapping methods in an unstructured environment. We explored an environment with a forest landscape and recorded data with multiple stereo cameras, an IMU, and a GNSS receiver. The dataset offers unique challenges owing to the nature of the environment, the variety of trajectories, and changes in season, weather, and daylight conditions. Building upon the future work proposed with the FinnForest dataset, we introduce a novel scheme that can localize an observer under extreme perspective changes. More specifically, we tailor the problem to autonomous vehicles so that they can recognize a previously visited place irrespective of the direction in which they previously traveled the route. To the best of our knowledge, this is the first study to accomplish bi-directional loop closure on monocular images with a nominal field of view. To solve the localisation problem, we separate place identification from pose regression by using deep learning in two steps. We demonstrate that bi-directional loop closure on monocular images is indeed possible when the problem is posed correctly and the training data are adequately leveraged. All methodological contributions of this thesis are accompanied by extensive empirical analysis and discussions demonstrating the need, novelty, and improvement in performance over existing methods for pose estimation, odometry, mapping, and place recognition.
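The hand-eye calibration problem mentioned above is classically posed as solving AX = XB for the sensor-to-gripper transform X, where A and B are paired relative motions of the robot flange and the camera. A minimal sketch that only scores a candidate X against one motion pair (the thesis studies alternative formulations of this objective; the residual below is just the plainest one):

```python
import numpy as np

def ax_xb_residual(A, B, X):
    """Frobenius-norm residual of the hand-eye equation AX = XB for one
    pair of 4x4 homogeneous relative motions (A: robot, B: camera)."""
    return np.linalg.norm(A @ X - X @ B)

# toy case: robot and camera report identical motions, so X = I fits
A = np.eye(4)
A[:3, 3] = [0.1, 0.0, 0.0]   # 10 cm translation along x
B = A.copy()
X = np.eye(4)
# ax_xb_residual(A, B, X) is zero for this pair
```

In practice one accumulates this residual over many motion pairs and minimises it over the rotation and translation parameters of X.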

    Simultaneous identification, tracking control and disturbance rejection of uncertain nonlinear dynamics systems: A unified neural approach

    Previous work on traditional zeroing neural networks (also termed Zhang neural networks, ZNN) shows great success in solving specific time-variant problems for known systems in an ideal environment. However, it remains challenging for the ZNN to effectively solve time-variant problems for uncertain systems without prior knowledge. Moreover, the involvement of external disturbances in the neural network model makes time-variant problem solving even harder due to the intensive computational burden and low accuracy. In this paper, a unified neural approach to simultaneous identification, tracking control, and disturbance rejection in the framework of the ZNN is proposed to address the time-variant tracking control of uncertain nonlinear dynamics systems (UNDS). The neural network model derived by the proposed approach captures hidden relations between the inputs and outputs of the UNDS. The proposed model shows outstanding tracking performance even under the influence of uncertainties and disturbances. The continuous-time model is then discretized via the Euler forward formula (EFF). The corresponding discrete algorithm and block diagram are also presented for convenience of implementation. Theoretical analyses of the convergence property and discretization accuracy are presented to verify the performance of the neural network model. Finally, numerical studies, robot applications, performance comparisons, and tests demonstrate the effectiveness and advantages of the proposed neural network model for the time-variant tracking control of UNDS.
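    The core ZNN recipe with the Euler forward discretization can be shown on a toy time-variant problem f(x, t) = x - sin(t) = 0: the design rule de/dt = -λe yields dx/dt = cos(t) - λ(x - sin(t)), which the EFF turns into a one-line update. The toy problem, step size, and gain are our own illustrative choices; the paper applies the same scheme to uncertain nonlinear systems.

    ```python
    import math

    def znn_track_sin(steps=2000, h=0.001, lam=10.0):
        """ZNN on f(x, t) = x - sin(t): the error e = f decays as e^(-lam*t)
        in continuous time; the Euler forward formula gives the discrete
        update x_{k+1} = x_k + h * (cos(t_k) - lam * (x_k - sin(t_k)))."""
        x = 0.5                      # deliberately wrong initial state
        for k in range(steps):
            t = k * h
            x = x + h * (math.cos(t) - lam * (x - math.sin(t)))
        return x, math.sin(steps * h)

    x_final, target = znn_track_sin()
    # x_final converges toward target = sin(2.0)
    ```

    The feed-forward term cos(t) is what distinguishes a ZNN from a plain gradient scheme: it lets the state track the moving root instead of lagging behind it.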