
    Conferring robustness to path-planning for image-based control

    Path-planning has been proposed in visual servoing for reaching the desired location while fulfilling various constraints. Unfortunately, the real trajectory can differ significantly from the reference trajectory because of uncertainties in the model used, with the consequence that some constraints may not be fulfilled, leading to failure of the visual servoing task. This paper proposes a new strategy for addressing this problem: the idea is to confer robustness on the path-planning scheme by considering families of admissible models. To obtain these families, uncertainty in the form of random variables is introduced on the available image points and intrinsic parameters. Two families are considered: one generated from a given number of admissible models corresponding to extreme values of the uncertainty, and one obtained by estimating the extreme values of the components of the admissible models. Each model in these families identifies a reference trajectory, parametrized by design variables that are common to all the models. The design variables are then determined by imposing that all the reference trajectories fulfill the required constraints. Discussions on the convergence and robustness of the proposed strategy are provided, showing in particular that satisfaction of the visibility and workspace constraints for the second family ensures satisfaction of these constraints for all models bounded by this family. The proposed strategy is illustrated through simulations and experiments. © 2011 IEEE.
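
    As a rough illustration of the family-of-models idea (not the paper's exact formulation), the sketch below perturbs the measured image points and intrinsic parameters to extreme values of an assumed uncertainty box and accepts a set of shared design variables only if a placeholder visibility check passes for every model in the family. The function names, bounds, and the reprojection stand-in are illustrative assumptions.

```python
import numpy as np

def admissible_models(image_points, K, dp=1.0, dK=0.02, n_models=16, rng=None):
    """Sample models at extreme values of the image-point and intrinsic uncertainty."""
    rng = np.random.default_rng(rng)
    models = []
    for _ in range(n_models):
        # +/- dp pixel perturbation at the corners of the uncertainty box
        pts = image_points + dp * rng.choice([-1.0, 1.0], size=image_points.shape)
        # +/- dK relative perturbation of the intrinsic parameters
        Kp = K * (1.0 + dK * rng.choice([-1.0, 1.0], size=K.shape))
        models.append((pts, Kp))
    return models

def visible_along_trajectory(model, design_vars, image_size=(640, 480)):
    """Placeholder visibility check: shifted features must stay inside the image."""
    pts, Kp = model
    w, h = image_size
    traj_pts = pts + design_vars.reshape(1, -1)   # stand-in for the real reprojection
    return np.all((traj_pts >= 0) & (traj_pts <= [w, h]))

def robust_design(image_points, K, candidates):
    """Return the first candidate design variable feasible for every model in the family."""
    family = admissible_models(image_points, K)
    for dv in candidates:
        if all(visible_along_trajectory(m, dv) for m in family):
            return dv
    return None

# Tiny demo with made-up feature points, intrinsics, and candidate design variables.
pts = np.array([[100.0, 120.0], [500.0, 120.0], [500.0, 360.0], [100.0, 360.0]])
K0 = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
candidates = [np.array([dx, dy]) for dx in (-50.0, 0.0, 50.0) for dy in (-30.0, 0.0, 30.0)]
print("robust design variables:", robust_design(pts, K0, candidates))
```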

    Simultaneous identification, tracking control and disturbance rejection of uncertain nonlinear dynamics systems: A unified neural approach

    Previous works on traditional zeroing neural networks (also termed Zhang neural networks, ZNN) show great success in solving specific time-variant problems for known systems in an ideal environment. However, it remains challenging for the ZNN to solve time-variant problems effectively for uncertain systems without prior knowledge. At the same time, the involvement of external disturbances in the neural network model makes time-variant problem solving even harder because of the heavy computational burden and low accuracy. In this paper, a unified neural approach for simultaneous identification, tracking control and disturbance rejection in the ZNN framework is proposed to address the time-variant tracking control of uncertain nonlinear dynamics systems (UNDS). The neural network model derived by the proposed approach captures hidden relations between inputs and outputs of the UNDS. The proposed model shows outstanding tracking performance even under the influence of uncertainties and disturbances. The continuous-time model is then discretized via the Euler forward formula (EFF). The corresponding discrete algorithm and block diagram are also presented for convenience of implementation. Theoretical analyses of the convergence property and discretization accuracy are presented to verify the performance of the neural network model. Finally, numerical studies, robot applications, performance comparisons and tests demonstrate the effectiveness and advantages of the proposed neural network model for the time-variant tracking control of UNDS.
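
    The design recipe behind the ZNN and its Euler-forward discretization can be illustrated on the classic time-varying linear system A(t)x(t) = b(t). The sketch below covers only that textbook case, not the paper's unified identification/tracking/disturbance-rejection model; the gain, step size, and test system are assumptions.

```python
import numpy as np

# ZNN recipe: define a zeroing error E(t) = A(t)x(t) - b(t), impose E'(t) = -lam*E(t),
# solve for x'(t), then discretize with the Euler forward formula (EFF).
lam, h, T = 10.0, 1e-3, 2.0          # ZNN gain, step size, horizon

def A(t):   return np.array([[2 + np.sin(t), 0.5], [0.5, 2 + np.cos(t)]])
def b(t):   return np.array([np.sin(2 * t), np.cos(2 * t)])
def dA(t):  return np.array([[np.cos(t), 0.0], [0.0, -np.sin(t)]])
def db(t):  return np.array([2 * np.cos(2 * t), -2 * np.sin(2 * t)])

x = np.linalg.solve(A(0.0), b(0.0))   # start on the solution manifold
for k in range(int(T / h)):
    t = k * h
    e = A(t) @ x - b(t)                                        # zeroing error E(t)
    xdot = np.linalg.solve(A(t), db(t) - dA(t) @ x - lam * e)  # from E'(t) = -lam*E(t)
    x = x + h * xdot                                           # Euler forward step
print("final residual:", np.linalg.norm(A(T) @ x - b(T)))
```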

    Theory, Design, and Implementation of Landmark Promotion Cooperative Simultaneous Localization and Mapping

    Simultaneous Localization and Mapping (SLAM) is a challenging problem in practice, and the use of multiple robots and inexpensive sensors places even more demands on the designer. Cooperative SLAM poses specific challenges in the areas of computational efficiency, software/network performance, and robustness to errors. New methods in image processing, recursive filtering, and SLAM have been developed to implement practical algorithms for cooperative SLAM on a set of inexpensive robots. The Consolidated Unscented Mixed Recursive Filter (CUMRF) is designed to handle non-linear systems with non-Gaussian noise. This is accomplished using the Unscented Transform combined with Gaussian Mixture Models. The Robust Kalman Filter is an extension of the Kalman Filter algorithm that improves the ability to remove erroneous observations using Principal Component Analysis (PCA) and the X84 outlier rejection rule. Forgetful SLAM is a local SLAM technique that runs in nearly constant time relative to the number of visible landmarks and improves poorly performing sensors through sensor fusion and outlier rejection. Forgetful SLAM correlates all measured observations but stops the state from growing over time. Hierarchical Active Ripple SLAM (HAR-SLAM) is a new SLAM architecture that breaks the traditional state space of SLAM into a chain of smaller state spaces, allowing multiple robots, multiple sensors, and multiple updates to occur in linear time with linear storage with respect to the number of robots, landmarks, and robot poses. This dissertation presents explicit methods for closing the loop, joining multiple robots, and active updates. Landmark Promotion SLAM is a hierarchy of new SLAM methods, using the Robust Kalman Filter, Forgetful SLAM, and HAR-SLAM. Practical aspects of SLAM are a focus of this dissertation. LK-SURF is a new image processing technique that combines Lucas-Kanade feature tracking with Speeded-Up Robust Features to perform spatial and temporal tracking. Typical stereo correspondence techniques fail at providing descriptors for features or fail at temporal tracking. Several calibration and modeling techniques are also covered, including calibrating stereo cameras, aligning stereo cameras to an inertial system, and building neural-network system models. These methods are important to improve the quality of the data and images acquired for the SLAM process.
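
    The X84 rule cited for the Robust Kalman Filter is a standard median/MAD test. A minimal sketch follows; the threshold k = 5.2 (the commonly quoted value, roughly 3.5 standard deviations for Gaussian noise) and the way residuals would be fed from the filter's innovations are assumptions, not the dissertation's exact implementation.

```python
import numpy as np

def x84_inliers(residuals, k=5.2):
    """Boolean mask of residuals accepted by the X84 rule (median +/- k * MAD)."""
    r = np.asarray(residuals, dtype=float)
    med = np.median(r)
    mad = np.median(np.abs(r - med))
    if mad == 0.0:                 # degenerate case: all residuals identical
        return np.abs(r - med) == 0.0
    return np.abs(r - med) <= k * mad

residuals = np.array([0.1, -0.2, 0.05, 0.15, 4.8, -0.1])
print(x84_inliers(residuals))      # the 4.8 residual is flagged as an outlier
```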

    Vision-based control for vertical take-off and landing UAVs (Commande référencée vision pour drones à décollages et atterrissages verticaux)

    The miniaturization of computers has paved the way for unmanned aerial vehicles (UAVs): flying vehicles able to move autonomously and to carry out services such as reaching poorly accessible places or replacing humans in hazardous or tedious missions. A key challenge in this setting is the information these vehicles need in order to move, and thus the sensors to be used to obtain that information. Many such sensors have drawbacks (in particular the risk of jamming or occlusion). In this context, the use of a video camera offers an interesting prospect. The goal of this PhD work was to study the use of such a camera in a minimal-sensor setting: essentially the use of visual and inertial data. The work focused on the development of control laws giving the closed-loop system stability and robustness properties. In particular, one of the major difficulties addressed comes from the very limited knowledge of the environment in which the UAV operates. The thesis first studied the stabilization of the UAV under a small-displacement (linearity) assumption, and a control law taking performance criteria into account was defined. It then showed how the small-displacement assumption can be relaxed through nonlinear control design. The case of trajectory following was then considered, relying on a generic formulation of the position error with respect to an unknown reference point. Finally, an experimental validation of these results was started during the thesis and made it possible to validate many of the steps and challenges associated with their implementation in real conditions. The thesis concludes with prospects for future work.
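
    As a loose illustration of stabilization under the small-displacement assumption (not the thesis's control design), the sketch below closes a PD loop on one lateral axis of a near-hover vehicle modelled as a double integrator, using a normalized image error scaled by an assumed feature depth and an inertially measured velocity. The gains and the depth value are made-up numbers.

```python
import numpy as np

h, depth = 0.01, 2.0                 # sample time [s], assumed feature depth [m]
kp, kd = 2.0, 2.5                    # PD gains (hand-tuned for this sketch)

pos, vel = 0.5, 0.0                  # initial lateral error [m] and velocity [m/s]
for _ in range(1000):
    pixel_error = pos / depth        # normalized image error (pinhole, small angles)
    accel_cmd = -kp * depth * pixel_error - kd * vel   # PD law on the reconstructed error
    vel += h * accel_cmd             # double-integrator dynamics, Euler integration
    pos += h * vel
print("final lateral error [m]:", round(pos, 4))
```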

    Vision-based control of a differential-algebraic quaternion camera model

    This work deals with image-based feedback control of a camera, providing an optimization-based design method for a controller that positions the projection of an external point (feature) at a specified display coordinate. Working with a Differential-Algebraic Representation (DAR) of the camera dynamics modeled in terms of quaternions, a static output feedback (SOF) controller that uses the error between the desired and current image is determined to generate a torque input for the system. Using the Lyapunov method for stability analysis, the problem is converted into an optimization problem subject to constraints in the form of Bilinear Matrix Inequalities (BMI), which is solved through an iterative process. The results with the DAR are compared to a similar design using a Quasi-Linear Parameter-Varying (Quasi-LPV) representation, which is developed in parallel throughout the text. Numerical results are provided to demonstrate the practicality of the method and show that a feasible solution achieves the objective of making the error approach zero asymptotically.
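
    The BMI-by-iteration step can be sketched generically: the SOF Lyapunov condition is bilinear in the Lyapunov matrix P and the gain K, so one common heuristic alternates two LMIs (fix K, solve for P; fix P, refine K). The toy system, the alternation scheme, and the use of CVXPY with SCS below are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0], [2.0, -1.0]])   # open-loop unstable toy plant
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
n, m, p = 2, 1, 1
eps = 1e-3

K = np.array([[-3.0]])                    # initial gain guess
for _ in range(20):
    # Step 1: with K fixed, find a Lyapunov matrix P (an LMI in P).
    P = cp.Variable((n, n), symmetric=True)
    Acl = A + B @ K @ C
    lyap = Acl.T @ P + P @ Acl
    cp.Problem(cp.Minimize(0),
               [P >> eps * np.eye(n), lyap << -eps * np.eye(n)]).solve(solver=cp.SCS)
    if P.value is None:
        break
    Pv = P.value
    # Step 2: with P fixed, refine the gain K (an LMI in K).
    Kv = cp.Variable((m, p))
    Acl = A + B @ Kv @ C
    lyap = Acl.T @ Pv + Pv @ Acl
    cp.Problem(cp.Minimize(cp.sum_squares(Kv)),
               [lyap << -eps * np.eye(n)]).solve(solver=cp.SCS)
    if Kv.value is None:
        break
    K = Kv.value

print("SOF gain K:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A + B @ K @ C))
```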

    Proceedings of the International Micro Air Vehicles Conference and Flight Competition 2017 (IMAV 2017)

    The IMAV 2017 conference was held at ISAE-SUPAERO, Toulouse, France, from Sept. 18 to Sept. 21, 2017. More than 250 participants from 30 countries presented their latest research activities in the field of drones. 38 papers were presented during the conference, covering topics such as Aerodynamics, Aeroacoustics, Propulsion, Autopilots, Sensors, Communication systems, Mission planning techniques, Artificial Intelligence, and Human-machine cooperation as applied to drones.

    Methods, Models, and Datasets for Visual Servoing and Vehicle Localisation

    Machine autonomy has become a vibrant part of industrial and commercial aspirations. A growing demand exists for dexterous and intelligent machines that can work in unstructured environments without any human assistance. An autonomously operating machine should sense its surroundings, classify different kinds of observed objects, and interpret sensory information to perform the necessary operations. This thesis summarizes original methods aimed at enhancing a machine's capability for autonomous operation. These methods and the corresponding results are grouped into two main categories. The first category consists of research works that focus on improving visual servoing systems so that robotic manipulators can accurately position workpieces. We start our investigation with the hand-eye calibration problem, which concerns calibrating visual sensors with a robotic manipulator. We thoroughly investigate the problem from various perspectives and provide alternative formulations of the problem and its error objectives. The experimental results demonstrate that the proposed methods are robust and yield accurate solutions when tested on real and simulated data. The work package is bundled as a toolkit and is available online for public use. As an extension, we propose a constrained multiview pose estimation approach for robotic manipulators. The approach exploits the available geometric constraints on the robotic system and infuses them directly into the pose estimation method. The empirical results demonstrate higher accuracy and significantly higher precision compared to other studies. In the second part of this research, we tackle problems pertaining to the field of autonomous vehicles and related applications. First, we introduce a pose estimation and mapping scheme to extend the application of visual Simultaneous Localization and Mapping to unstructured dynamic environments. We identify, extract, and discard dynamic entities from the pose estimation step. Moreover, we track the dynamic entities and actively update the map based on changes in the environment. Having observed the limitations of existing datasets during our earlier work, we introduce FinnForest, a novel dataset for testing and validating the performance of visual odometry and Simultaneous Localization and Mapping methods in an unstructured environment. We explored an environment with a forest landscape and recorded data with multiple stereo cameras, an IMU, and a GNSS receiver. The dataset offers unique challenges owing to the nature of the environment, the variety of trajectories, and changes in season, weather, and daylight conditions. Building upon the future work proposed for the FinnForest dataset, we introduce a novel scheme that can localize an observer under extreme perspective changes. More specifically, we tailor the problem for autonomous vehicles so that they can recognize a previously visited place irrespective of the direction in which the route was previously traveled. To the best of our knowledge, this is the first study that accomplishes bi-directional loop closure on monocular images with a nominal field of view. To solve the localisation problem, we separate place identification from pose regression by using deep learning in two steps. We demonstrate that bi-directional loop closure on monocular images is indeed possible when the problem is posed correctly and the training data is adequately leveraged.
    All methodological contributions of this thesis are accompanied by extensive empirical analysis and discussions demonstrating the need, novelty, and improvement in performance over existing methods for pose estimation, odometry, mapping, and place recognition.
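
    For the hand-eye calibration problem revisited in the first part, a minimal sketch of the standard problem setup is given below using OpenCV's cv2.calibrateHandEye on synthetic, noise-free poses; the thesis's own formulations, error objectives, and toolkit are not reproduced here, and the generative model for the poses is an assumption.

```python
import numpy as np
import cv2

def T(R, t):
    """Build a 4x4 homogeneous transform from a rotation matrix and translation."""
    M = np.eye(4)
    M[:3, :3], M[:3, 3] = R, np.ravel(t)
    return M

rng = np.random.default_rng(0)
# Ground-truth camera-to-gripper transform X and a fixed target pose in the base frame.
X = T(cv2.Rodrigues(np.array([0.1, -0.2, 0.3]))[0], np.array([0.05, 0.02, 0.10]))
target2base = T(cv2.Rodrigues(np.array([0.0, 0.3, -0.1]))[0], np.array([0.8, 0.1, 0.2]))

R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
for _ in range(10):
    g2b = T(cv2.Rodrigues(rng.uniform(-0.6, 0.6, 3))[0], rng.uniform(-0.3, 0.3, 3))
    # Target pose in the camera frame consistent with the chosen X:
    # target2base = gripper2base @ X @ target2cam
    t2c = np.linalg.inv(X) @ np.linalg.inv(g2b) @ target2base
    R_g2b.append(g2b[:3, :3]); t_g2b.append(g2b[:3, 3].reshape(3, 1))
    R_t2c.append(t2c[:3, :3]); t_t2c.append(t2c[:3, 3].reshape(3, 1))

R_est, t_est = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c,
                                    method=cv2.CALIB_HAND_EYE_TSAI)
print("rotation error:", np.linalg.norm(R_est - X[:3, :3]))
print("translation error:", np.linalg.norm(t_est.ravel() - X[:3, 3]))
```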