71 research outputs found

    Vision-based control for vertical take-off and landing UAVs (Commande référencée vision pour drones à décollages et atterrissages verticaux)

    Get PDF
    The miniaturization of computers has paved the way for unmanned aerial vehicles (UAVs): flying vehicles with embedded computers that make them partially or fully autonomous, for missions such as exploring cluttered or hard-to-reach environments, or replacing human-piloted vehicles in hazardous or tedious tasks. A key challenge in designing such vehicles is the information they need in order to move, and thus the sensors to be used to obtain that information. Many of these sensors have drawbacks (in particular, the risk of jamming or occlusion). In this context, the use of a video camera offers an interesting prospect. The goal of this PhD work was to study the use of such a camera in a minimal-sensor setting, relying essentially on visual and inertial data. The work focused on the development of control laws giving the closed-loop system stability and robustness properties. One of the major difficulties addressed came from the very limited knowledge of the environment in which the UAV operates. The thesis first studied the stabilization problem under a small-displacement (linearity) assumption; a control law was designed that takes performance criteria into account. It then showed how the small-displacement assumption can be relaxed through nonlinear control design. The case of trajectory tracking was then considered, based on a generic formulation of the position error with respect to an unknown reference point. Finally, experimental validation of these results was begun during the thesis and confirmed many of the steps and challenges involved in bringing them to real-world conditions. The thesis concludes with perspectives for future work.
    TOULOUSE-ISAE (315552318) / Sudoc
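
    The small-displacement stabilization step lends itself to a compact illustration. The sketch below is only a rough approximation under stated assumptions, not the control law developed in the thesis: one lateral axis of a VTOL UAV near hover is modeled as a double integrator driven by the tilt angle, and the loop is closed with a PD law on a vision-based position error (relative to a fixed but unknown reference point) and an inertial velocity estimate. The model, gains, and time step are illustrative.

```python
import numpy as np

# Minimal hover-stabilization sketch under the small-displacement (linearized)
# assumption; an illustrative approximation, not the thesis's control law.
# Near hover, one lateral axis of a VTOL UAV behaves roughly as a double
# integrator driven by the tilt angle; a PD law on the vision-based position
# error (reference point fixed but unknown) and an inertial velocity estimate
# stabilizes it.

g = 9.81           # gravity [m/s^2]
dt = 0.01          # control period [s]
kp, kd = 2.0, 2.5  # illustrative PD gains

def pd_tilt_command(visual_error, velocity_estimate):
    """Commanded tilt angle [rad] from the vision-based error and an inertial velocity estimate."""
    return -(kp * visual_error + kd * velocity_estimate) / g

# Simulate one lateral axis: state = [position error, velocity].
x = np.array([0.5, 0.0])  # start 0.5 m away from the (unknown) reference point
for _ in range(1000):
    theta = pd_tilt_command(x[0], x[1])
    accel = g * theta                      # small-angle approximation: a ~ g * theta
    x = x + dt * np.array([x[1], accel])   # explicit Euler step

print(f"position error after 10 s: {x[0]:.4f} m")
```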

    Asymptotic Vision-Based Tracking Control of the Quadrotor Aerial Vehicle

    Get PDF
    This paper proposes an image-based visual servo (IBVS) controller for the 3D translational motion of the quadrotor unmanned aerial vehicle (UAV). The main purpose of this paper is to provide asymptotic stability for vision-based tracking control of the quadrotor in the presence of uncertainty in the dynamic model of the system. The paper also aims to use the flow of image features as the velocity information, to compensate for the unreliable linear velocity data measured by accelerometers. For this purpose, the mathematical model of the quadrotor is presented in terms of the optic flow of image features, which makes it possible to design a velocity-free IBVS controller while considering the dynamics of the robot. The image features are defined from a suitable combination of perspective image moments without using a model of the object. This property allows the proposed controller to be applied in unknown environments. The controller is robust with respect to the uncertainties in the translational dynamics of the system associated with target motion, image depth, and external disturbances. Simulation results and a comparison study are presented, which demonstrate the effectiveness of the proposed approach.
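
    As a concrete reference point for the IBVS setting, the sketch below implements the classical image-based law that maps the feature error through the pseudoinverse of the interaction matrix to a camera velocity command, using point features rather than the paper's moment-based features. The constant depth Z, the gain, and the feature geometry are illustrative assumptions.

```python
import numpy as np

# Classical IBVS sketch for point features (not the paper's moment-based
# features): camera velocity v = -gain * pinv(L) @ (s - s*), with the
# interaction matrix L built from an assumed constant depth Z. Depth, gain,
# and feature geometry are illustrative assumptions.

def interaction_matrix(points, Z):
    """Stack the 2x6 interaction matrices of normalized image points (x, y) at depth Z."""
    rows = []
    for x, y in points:
        rows.append([-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y])
        rows.append([0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x])
    return np.array(rows)

def ibvs_velocity(points, desired_points, Z=1.0, gain=0.5):
    """Camera velocity twist (vx, vy, vz, wx, wy, wz) driving the feature error to zero."""
    error = (np.asarray(points) - np.asarray(desired_points)).reshape(-1)
    L = interaction_matrix(points, Z)
    return -gain * np.linalg.pinv(L) @ error

# Example: four coplanar points slightly displaced from the desired view.
desired = [(-0.1, -0.1), (0.1, -0.1), (0.1, 0.1), (-0.1, 0.1)]
current = [(x + 0.05, y + 0.02) for x, y in desired]
print(ibvs_velocity(current, desired))
```

    In a velocity-free scheme such as the one proposed, the measured optic flow of the features would stand in for the velocity information this classical law implicitly assumes, and the depth Z used here is exactly the kind of uncertain quantity the paper's controller is designed to tolerate.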

    Perception Based Navigation for Underactuated Robots.

    Full text link
    Robot autonomous navigation is a very active field of robotics. In this thesis we propose a hierarchical approach to a class of underactuated robots by composing a collection of local controllers with well understood domains of attraction. We start by addressing the problem of robot navigation with nonholonomic motion constraints and perceptual cues arising from onboard visual servoing in partially engineered environments. We propose a general hybrid procedure that adapts to the constrained motion setting the standard feedback controller arising from a navigation function in the fully actuated case. This is accomplished by switching back and forth between moving "down" and "across" the associated gradient field toward the stable manifold it induces in the constrained dynamics. The new procedure is guaranteed to avoid obstacles in all cases, and we provide conditions under which it brings initial configurations to within an arbitrarily small neighborhood of the goal. We summarize with simulation results on a sample of visual servoing problems with a few different perceptual models. We document the empirical effectiveness of the proposed algorithm by reporting the results of its application to outdoor autonomous visual registration experiments with the robot RHex guided by engineered beacons. Next we explore the possibility of adapting the resulting first-order hybrid feedback controller to its dynamical counterpart by introducing tunable damping terms in the control law. Just as gradient controllers for standard quasi-static mechanical systems give rise to generalized "PD-style" controllers for dynamical versions of those standard systems, we show that it is possible to construct similar "lifts" in the presence of nonholonomic constraints notwithstanding the necessary absence of point attractors. Simulation results corroborate the proposed lift. Finally we present an implementation of a fully autonomous navigation application for a legged robot. The robot adapts its leg trajectory parameters by recourse to a discrete gradient descent algorithm, while managing its experiments and outcome measurements autonomously via the navigation and visual servoing algorithms proposed in this thesis.
    Ph.D. Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/58412/1/glopes_1.pd
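
    As a rough illustration of gradient-field navigation for a nonholonomic platform, and not the hybrid "down/across the gradient" procedure proposed in the thesis, the sketch below steers a unicycle's heading toward the negative gradient of a simple attractive-plus-repulsive potential and drives forward when roughly aligned. The potential, gains, and obstacle geometry are assumptions made for the example, and none of the thesis's guarantees carry over to it.

```python
import numpy as np

# Illustrative sketch only, not the thesis's hybrid procedure: a unicycle robot
# following the negative gradient of a simple attractive/repulsive potential.
# The thesis composes navigation-function gradient fields with a switching rule
# for the nonholonomic constraint; here we simply steer the heading toward the
# negative gradient and drive forward, which conveys the "move down the gradient
# field" idea without the guarantees of the full construction.

goal = np.array([5.0, 0.0])
obstacle, obs_radius = np.array([2.5, 0.2]), 0.8

def neg_gradient(p):
    """Negative gradient of an attractive quadratic plus a short-range repulsive term."""
    attract = goal - p
    d = np.linalg.norm(p - obstacle)
    repulse = (p - obstacle) / d**3 if d < 3 * obs_radius else np.zeros(2)
    return attract + 2.0 * repulse

# Unicycle state: (x, y, heading). Controls: forward speed v, turn rate w.
state = np.array([0.0, 0.0, 0.0])
dt, k_turn = 0.05, 2.0
for _ in range(400):
    g_vec = neg_gradient(state[:2])
    desired_heading = np.arctan2(g_vec[1], g_vec[0])
    heading_error = np.arctan2(np.sin(desired_heading - state[2]),
                               np.cos(desired_heading - state[2]))
    v = min(1.0, np.linalg.norm(g_vec)) * np.cos(heading_error)  # slow down when misaligned
    w = k_turn * heading_error
    state += dt * np.array([v * np.cos(state[2]), v * np.sin(state[2]), w])

print(f"final position: {state[:2].round(3)}")
```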

    Review of sliding mode control application in autonomous underwater vehicles

    Get PDF
    This paper presents a review of sliding mode control for autonomous underwater vehicles (AUVs). AUVs operate underwater in the presence of uncertainties (due to hydrodynamic coefficients) and external disturbances (due to water currents, waves, etc.). The sliding mode controller is a nonlinear controller that is robust to uncertainties, parameter variations, and external disturbances. The evolution of sliding mode control in motion control studies of autonomous underwater vehicles over the last three decades is summarized. Controller performance is examined in terms of chattering reduction, accuracy (steady-state error reduction), and robustness against perturbations. The review of sliding mode control for AUVs provides insights for readers to design new techniques and algorithms, to extend the existing family of sliding mode control strategies, or to combine these techniques with other control strategies, with the aim of obtaining controller designs for AUVs with good performance, stability, and robustness.
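
    A minimal single-axis example of the reviewed technique is sketched below: a sliding surface combining the error and its rate, with a boundary-layer (saturated) switching term, the standard chattering-reduction device mentioned in the abstract. The plant, disturbance, and gains are illustrative assumptions and are not taken from any AUV model covered by the review.

```python
import numpy as np

# Generic single-axis sliding mode control sketch (not a specific AUV design from
# the review): sliding surface s = de + lam*e, control u = u_eq - K*sat(s/phi).
# The saturated term (boundary layer of width phi) replaces sign(s) to reduce
# chattering. The plant, disturbance, and gains below are illustrative assumptions.

lam, K, phi = 2.0, 8.0, 0.05   # surface slope, switching gain, boundary-layer width
m_nom = 10.0                   # nominal inertia used by the controller

def smc_force(e, de):
    """Sliding mode force regulating the error e (and its rate de) to zero."""
    s = de + lam * e
    u_eq = -m_nom * lam * de                       # equivalent control from the nominal model
    return u_eq - K * np.clip(s / phi, -1.0, 1.0)  # saturated switching term

# Closed-loop test: true inertia differs from the nominal one, plus a bounded disturbance.
m_true, dt = 12.0, 0.001
e, de = 0.5, 0.0                                   # initial regulation error and its rate
for k in range(10000):
    d = 1.5 * np.sin(np.pi * k * dt)               # bounded external disturbance
    u = smc_force(e, de)
    dde = (u + d) / m_true
    e, de = e + dt * de, de + dt * dde

print(f"regulation error after 10 s: {e:.4f}")
```

    Widening the boundary layer phi trades steady-state accuracy for less chattering, which reflects the chattering/accuracy trade-off the review examines across the surveyed designs.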

    Design and modeling of a stair climber smart mobile robot (MSRox)

    Full text link

    Robust Image-Based Visual Servo Control of an Uncertain Missile Airframe

    Get PDF
    A nonlinear vision-based guidance law is presented for a missile-target scenario in the presence of model uncertainty and unknown target evasive maneuvers. To ease the readability of this thesis, detailed explanations of the relevant mathematical tools are provided, including stability definitions, the procedure of Lyapunov-based stability analysis, sliding mode control fundamentals, the basics of visual servo control, and other basic nonlinear control tools. To develop the vision-based guidance law, projective geometric relationships are utilized to combine the image kinematics with the missile dynamics in an integrated visual dynamic system. The guidance law is designed using an image-based visual servo control method in conjunction with a sliding-mode control strategy, which is shown to achieve asymptotic target interception in the presence of the aforementioned uncertainties. A Lyapunov-based stability analysis is presented to prove the theoretical result, and numerical simulation results are provided to demonstrate the performance of the proposed robust controller for both stationary and non-stationary targets.
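
    The sketch below gives only a toy planar flavor of pairing a camera-derived error with a saturated sliding-mode term; it is not the guidance law developed in this work. The target's bearing in a body-fixed camera is treated as the sliding variable, and the switching term absorbs the unknown line-of-sight rate induced by target motion; the speeds, gains, geometry, and the absence of a turn-rate limit are all illustrative assumptions.

```python
import numpy as np

# Toy planar pursuit illustration, not the guidance law developed in the work:
# the target's bearing in a body-fixed camera is treated as the sliding variable,
# and a saturated switching term dominates the unknown line-of-sight rate caused
# by target motion. Speeds, gains, and geometry are illustrative assumptions, and
# no turn-rate limit is modeled.

dt = 0.01
pm, heading, speed = np.array([0.0, 0.0]), 0.0, 300.0         # pursuer position, heading, speed
pt, vt = np.array([3000.0, 800.0]), np.array([-60.0, -20.0])  # target position and velocity

k_p, K_sw, phi = 2.0, 1.5, 0.02   # proportional gain, switching gain, boundary-layer width

def turn_rate_command(bearing):
    """Heading-rate command [rad/s] that nulls the camera bearing despite target motion."""
    return k_p * bearing + K_sw * np.clip(bearing / phi, -1.0, 1.0)

for step in range(6000):
    rel = pt - pm
    if np.linalg.norm(rel) < 5.0:                              # interception threshold
        break
    bearing = np.arctan2(rel[1], rel[0]) - heading             # target bearing in the camera frame
    bearing = np.arctan2(np.sin(bearing), np.cos(bearing))     # wrap to (-pi, pi]
    heading += dt * turn_rate_command(bearing)
    pm = pm + dt * speed * np.array([np.cos(heading), np.sin(heading)])
    pt = pt + dt * vt

print(f"final range: {np.linalg.norm(pt - pm):.1f} m at t = {step * dt:.2f} s")
```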