6 research outputs found

    Educational hands-on testbed using Lego robot for learning guidance, navigation, and control

    The aim of this paper is to propose an educational hands-on testbed built from inexpensive components, a Lego Mindstorms NXT robot and a webcam, together with easy-to-use tools, for learning and testing guidance, navigation, and control as well as search and obstacle mapping; the extendibility and applicability of the proposed approach are, however, not limited to educational purposes. To provide navigation information for the Lego robot in an indoor environment, a vision navigation system is proposed based on colour marker detection robust to brightness changes and an extended Kalman filter. Furthermore, a spiral-like search, command-to-line-of-sight guidance, motor control, and a two-dimensional Splinegon approximation are applied to the sensing and mapping of a complex-shaped obstacle. The experimental results show that the proposed testbed is an efficient tool for teaching image processing and estimation as well as guidance, navigation, and control, with a minimal burden of time and cost. © 2011 IFAC
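    As a rough illustration of the estimation side described above (a minimal sketch of my own, not the authors' code: the unicycle motion model, noise values, and variable names are assumptions), the following fuses wheel-odometry predictions with planar position fixes from an overhead webcam using an extended Kalman filter:

```python
# Minimal sketch (not the paper's implementation): an extended Kalman filter that
# fuses wheel-odometry predictions with planar position fixes from an overhead
# webcam, as one might do for the Lego robot above. State: [x, y, heading].
# The motion model and all noise values are illustrative assumptions.
import numpy as np

def ekf_predict(x, P, v, omega, dt, Q):
    """Propagate the pose with a unicycle model driven by speed v and turn rate omega."""
    theta = x[2]
    x_pred = x + dt * np.array([v * np.cos(theta), v * np.sin(theta), omega])
    F = np.array([[1.0, 0.0, -dt * v * np.sin(theta)],
                  [0.0, 1.0,  dt * v * np.cos(theta)],
                  [0.0, 0.0,  1.0]])          # Jacobian of the motion model
    return x_pred, F @ P @ F.T + Q

def ekf_update(x, P, z, R):
    """Correct the pose with a camera-derived (x, y) marker position measurement."""
    H = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])           # measurement picks out position only
    y = z - H @ x                             # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    return x + K @ y, (np.eye(3) - K @ H) @ P

# Example: one predict/update cycle with made-up numbers.
x, P = np.zeros(3), np.eye(3) * 0.1
Q, R = np.diag([1e-3, 1e-3, 1e-4]), np.diag([5e-3, 5e-3])
x, P = ekf_predict(x, P, v=0.2, omega=0.1, dt=0.1, Q=Q)
x, P = ekf_update(x, P, z=np.array([0.021, 0.001]), R=R)
```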

    A Localized Autonomous Control Algorithm For Robots With Heterogeneous Capabilities In A Multi-Tier Architecture

    This dissertation makes two contributions to the use of the Blackboard Architecture for command: the use of boundary nodes for data abstraction is introduced, and a solver-based blackboard system with pruning is proposed. It also advances the engineering design process in the area of command-system selection for heterogeneous robotic systems: it presents and analyzes data informing the choice between centralized and distributed command systems, and characterizes the efficacy of pruning across different experimental scenarios, demonstrating when it is and is not effective. Finally, it demonstrates the operation of the system, raising its technology readiness level (TRL) towards a level suitable for actual mission use. The context for this work is a multi-tier mission architecture based on prior work by Fink on a "tier scalable" architecture. That work took a top-down approach in which the superior tiers (in terms of scope of visibility) send specific commands to craft in lower tiers; while benefitting from a large centralized processing center, this approach is limited in responding to failures and interference. The work presented herein develops and comparatively characterizes centralized and decentralized (where superior nodes provide information and goals to the lower-level craft, but decisions are made locally) Blackboard Architecture-based command systems. The Blackboard Architecture advancements (a solver, pruning, boundary nodes) have been implemented and tested under multiple experimental conditions.
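    As a loose illustration of a solver-based blackboard with pruning (a toy sketch of my own, not the dissertation's implementation; the fact and rule names are invented), the following keeps a set of boolean facts, fires rules toward a goal, and prunes rules whose preconditions can never become true:

```python
# Toy sketch only: a blackboard of boolean facts, rules with pre-/post-conditions,
# a simple forward-chaining "solver", and a pruning pass that drops rules whose
# preconditions can never be satisfied. Names below are made up for illustration.
from collections import namedtuple

Rule = namedtuple("Rule", ["name", "pre", "post"])

def prune(rules, facts):
    """Remove rules whose preconditions are neither initial facts nor producible."""
    producible = set(facts)
    changed = True
    while changed:                      # fixed point over everything that can ever be asserted
        changed = False
        for r in rules:
            if r.pre <= producible and not (r.post <= producible):
                producible |= r.post
                changed = True
    return [r for r in rules if r.pre <= producible]

def solve(rules, facts, goal):
    """Fire applicable rules until the goal fact appears (or nothing changes)."""
    facts = set(facts)
    fired = []
    while goal not in facts:
        applicable = [r for r in rules if r.pre <= facts and not (r.post <= facts)]
        if not applicable:
            return None                 # goal unreachable
        r = applicable[0]
        facts |= r.post
        fired.append(r.name)
    return fired

rules = [
    Rule("map_area",   {"camera_ok"},          {"area_mapped"}),
    Rule("plan_route", {"area_mapped"},        {"route_planned"}),
    Rule("dead_rule",  {"nonexistent_sensor"}, {"unused_fact"}),   # pruned away
]
rules = prune(rules, {"camera_ok"})
print(solve(rules, {"camera_ok"}, "route_planned"))   # ['map_area', 'plan_route']
```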

    Indoor UAV Control Using Multi-Camera Visual Feedback

    This paper presents the control of an indoor unmanned aerial vehicle (UAV) using multi-camera visual feedback. For autonomous flight of the indoor UAV, a visual-feedback concept is employed instead of onboard sensor information, through the development of an indoor flight test-bed. The test-bed consists of four major components: the multi-camera system, a ground computer, an onboard color marker set, and a quad-rotor UAV. Since the onboard markers are attached at pre-defined locations, the position and attitude of the UAV can be estimated with a marker detection algorithm and a triangulation method. Additionally, this study introduces a filter algorithm to obtain a full six-degree-of-freedom (6-DOF) pose estimate, including velocities and angular rates. The filter also enhances the performance of the vision system by compensating for the weaknesses of low-cost cameras, such as poor resolution and large noise. Moreover, for the pose estimation of multiple vehicles, a data association algorithm using the geometric relations between the cameras is proposed. The control system is designed around classical proportional-integral-derivative (PID) control, using the position, velocity, and attitude from the vision system and the angular rates from the rate gyro. The paper concludes with both ground and flight test results illustrating the performance and properties of the proposed indoor flight test-bed and the control system using multi-camera visual feedback.
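    As a sketch of the triangulation step only (a generic linear method under assumed calibration values, not necessarily the paper's exact algorithm), the following recovers a marker's 3D position from its pixel coordinates in two calibrated cameras:

```python
# Generic linear (DLT) triangulation of one marker seen by two calibrated cameras,
# each described by a 3x4 projection matrix P = K[R|t]. All numbers are invented.
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Return the 3D marker position that best explains pixel observations uv1, uv2."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)         # least-squares solution of A X = 0
    X = Vt[-1]
    return X[:3] / X[3]                 # de-homogenise

# Example with two synthetic cameras one metre apart along x (illustrative numbers).
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
point = np.array([0.3, 0.1, 4.0, 1.0])                  # true marker position
uv1 = (P1 @ point)[:2] / (P1 @ point)[2]
uv2 = (P2 @ point)[:2] / (P2 @ point)[2]
print(triangulate(P1, P2, uv1, uv2))                    # ~ [0.3, 0.1, 4.0]
```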

    Vision-based control for vertical take-off and landing drones (Commande référencée vision pour drones à décollages et atterrissages verticaux)

    The miniaturisation of computers has paved the way for unmanned aerial vehicles (UAVs): flying vehicles that embed computers to make them partially or fully autonomous, able to reach places with poor accessibility or to replace humanly piloted vehicles on hazardous or tedious missions. A key design challenge is the information such a vehicle needs in order to move, and therefore the sensors to be used to obtain that information; many of these sensors have drawbacks, notably the risk of jamming or masking. In this context, the use of a video camera offers an interesting prospect. The purpose of this thesis was to study the use of such a camera in a minimal-sensor setting, relying essentially on visual and inertial data. The work focused on developing control laws that give the closed-loop system stability and robustness properties; one of the major difficulties addressed comes from the very limited knowledge of the environment in which the drone operates. The thesis first studied the stabilisation problem under a small-displacement (linearity) assumption, and a control law taking performance criteria into account was defined. It then showed how the small-displacement assumption can be relaxed through nonlinear control design. Trajectory tracking was considered next, based on a generic formulation of the position error with respect to an unknown reference point. Finally, the experimental validation of these results was begun during the thesis and allowed many of the steps and challenges of real-world implementation to be validated. The thesis concludes with prospects for future work.
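    As a hedged illustration of the small-displacement setting only (a toy example of my own, not the thesis' control law; the gains, limits, and simplified dynamics are assumptions), each horizontal axis near hover behaves approximately like a double integrator driven by the tilt angle, which a PD law on the vision-derived position error can stabilise:

```python
# Toy sketch: under a small-displacement assumption, one horizontal axis near hover
# is roughly a double integrator driven by the tilt angle; a PD law on the
# vision-derived position error stabilises it. All constants are assumptions.
import numpy as np

g, dt = 9.81, 0.01
kp, kd = 2.0, 1.5                       # illustrative PD gains

def pd_tilt_command(pos_error, vel):
    """Commanded tilt angle (rad) from the visual position error and velocity."""
    return np.clip(-(kp * pos_error + kd * vel) / g, -0.3, 0.3)

# Simulate one axis: the drone starts 1 m from the visually measured reference
# point and should converge toward it.
pos, vel = 1.0, 0.0
for _ in range(1000):
    theta = pd_tilt_command(pos, vel)   # vision supplies pos as the relative error
    acc = g * theta                     # linearised translational dynamics
    vel += acc * dt
    pos += vel * dt
print(round(pos, 3))                    # close to 0 after 10 s
```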