13 research outputs found

    Planning and Control of Mobile Robots in Image Space from Overhead Cameras

    In this work, we present a framework for designing a planar mobile robot controller based on image-plane feedback. We show that such a motion controller can be designed in the image plane using only a subset of the parameters that relate the image plane to the ground plane, while still leveraging the simplifications offered by modeling the system as differentially flat. Our method relies on a waypoint-based trajectory generator, with all waypoints specified in the image as seen by an overhead observer. We present results from simulation and from experiments that validate these ideas, and discuss directions for future work.
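The planning idea rests on the homography relating the overhead image plane to the ground plane. A minimal sketch of mapping image-space waypoints to ground coordinates follows; the homography values and waypoints are invented for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical homography relating the overhead image plane (pixels) to the
# ground plane (metres); in practice it would be calibrated, not hand-picked.
H = np.array([[0.01, 0.0,  -3.2],
              [0.0,  0.01, -2.4],
              [0.0,  0.0,   1.0]])

def image_to_ground(H, pts):
    """Map Nx2 image-plane waypoints to ground-plane coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
    mapped = (H @ pts_h.T).T
    return mapped[:, :2] / mapped[:, 2:3]             # dehomogenize

# Waypoints clicked in the overhead image (pixels).
waypoints_px = np.array([[320.0, 240.0], [400.0, 300.0]])
waypoints_ground = image_to_ground(H, waypoints_px)
```

A trajectory generator can then interpolate between the mapped waypoints, with the flatness of the unicycle model supplying the corresponding velocity commands.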

    Research on Visual Servo Grasping of Household Objects for Nonholonomic Mobile Manipulator

    This paper focuses on the problem of visual servo grasping of household objects for a nonholonomic mobile manipulator. First, a new kind of artificial object mark based on QR (Quick Response) Code is designed, which can be affixed to the surface of household objects. Second, after summarizing the vision-based autonomous mobile manipulation system as a generalized manipulator, the generalized manipulator's kinematic model is established, its analytical inverse kinematic solutions are derived, and a novel active-vision camera calibration method is proposed to determine the hand-eye relationship. Finally, a visual servo switching control law is designed to control the service robot to complete the object-grasping operation. Experimental results show that the QR Code-based artificial object mark overcomes the difficulties posed by the variety of household objects and the complexity of the operation, and that the proposed visual servo scheme enables the service robot to grasp and deliver objects efficiently.
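The paper's switching control law is not reproduced here, but the standard image-based visual servoing (IBVS) building block it rests on can be sketched: for point features, the camera velocity is v = -λ L⁺ e, where L is the interaction matrix and e the feature error. The four corner points below stand in for a detected QR mark and are purely illustrative:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Image Jacobian of a normalized point feature at depth Z."""
    return np.array([
        [-1.0/Z,  0.0,   x/Z,  x*y,      -(1+x*x),  y],
        [ 0.0,   -1.0/Z, y/Z,  1+y*y,    -x*y,     -x],
    ])

def ibvs_velocity(s, s_star, Z, lam=0.5):
    """Camera velocity screw (vx,vy,vz,wx,wy,wz) driving features s -> s*."""
    L = np.vstack([interaction_matrix(x, y, Z) for x, y in s])
    e = (s - s_star).ravel()
    return -lam * np.linalg.pinv(L) @ e

# Four hypothetical QR-mark corners (normalized image coordinates).
s      = np.array([[-0.1, -0.1], [0.1, -0.1], [0.1, 0.1], [-0.1, 0.1]])
s_star = s * 0.5          # desired view: the mark appears smaller (farther)
v = ibvs_velocity(s, s_star, Z=1.0)
```

For this symmetric scaling error the computed screw is a pure backward translation along the optical axis, which matches the intuition that the camera must retreat for the mark to shrink in the image.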

    Robotic manipulation: planning and control for dexterous grasp


    Commande référencée vision pour drones à décollages et atterrissages verticaux

    The miniaturization of computers has paved the way for unmanned aerial vehicles (UAVs): flying vehicles with embedded computers that make them partially or fully autonomous, for missions such as exploring cluttered environments or replacing human-piloted vehicles on hazardous or arduous missions. A key design challenge is the information such vehicles need in order to move, and thus the sensors used to obtain it. Many of these sensors have flaws (e.g. the risk of being jammed or masked). In this context, the use of a video camera offers an interesting prospect. The goal of this PhD work was to study the use of such a camera in a minimal-sensor setting: essentially the use of visual and inertial data alone. The work focused on the development of control laws giving the closed-loop system stability and robustness properties. In particular, one of the major difficulties addressed comes from the very limited knowledge of the environment in which the UAV operates. The thesis first studied the stabilization of the UAV under a small-displacements (linearity) assumption, defining a control law that takes performance criteria into account. Second, it showed how the small-displacements assumption can be relaxed through nonlinear control design. The case of trajectory tracking was then considered, building on a generic framework for measuring the position error with respect to an unknown reference point. Finally, experimental validation of these results was begun during the thesis and validated many of the steps and challenges involved in deploying them under real conditions. The thesis concludes with prospects for future work.

    Map-Based Localization for Unmanned Aerial Vehicle Navigation

    Unmanned Aerial Vehicles (UAVs) require precise pose estimation when navigating in indoor and GNSS-denied / GNSS-degraded outdoor environments. The possibility of crashing in these environments is high, as spaces are confined, with many moving obstacles. There are many solutions for localization in GNSS-denied environments, and many different technologies are used. Common solutions involve setting up or using existing infrastructure, such as beacons, Wi-Fi, or surveyed targets. These solutions were avoided because the cost should be proportional to the number of users, not the coverage area. Heavy and expensive sensors, for example a high-end IMU, were also avoided. Given these requirements, a camera-based localization solution was selected for the sensor pose estimation. Several camera-based localization approaches were investigated. Map-based localization methods were shown to be the most efficient because they close loops using a pre-existing map, thus the amount of data and the amount of time spent collecting data are reduced as there is no need to re-observe the same areas multiple times. This dissertation proposes a solution to address the task of fully localizing a monocular camera onboard a UAV with respect to a known environment (i.e., it is assumed that a 3D model of the environment is available) for the purpose of navigation for UAVs in structured environments. Incremental map-based localization involves tracking a map through an image sequence. When the map is a 3D model, this task is referred to as model-based tracking. A by-product of the tracker is the relative 3D pose (position and orientation) between the camera and the object being tracked. State-of-the-art solutions advocate that tracking geometry is more robust than tracking image texture because edges are more invariant to changes in object appearance and lighting. However, model-based trackers have been limited to tracking small simple objects in small environments. 
An assessment was performed in tracking larger, more complex building models in larger environments. A state-of-the-art model-based tracker called ViSP (Visual Servoing Platform) was applied to tracking outdoor and indoor buildings using a UAV's low-cost camera. The assessment revealed weaknesses at large scales. Specifically, ViSP failed when tracking was lost and needed to be manually re-initialized. Failures occurred when there was a lack of model features in the camera's field of view, and because of rapid camera motion. Experiments revealed that ViSP achieved positional accuracies similar to single-point positioning solutions obtained from single-frequency (L1) GPS observations, with standard deviations around 10 metres. These errors were considered large, given that the geometric accuracy of the 3D model used in the experiments was 10 to 40 cm. The first contribution of this dissertation proposes to increase the performance of the localization system by combining ViSP with map-building incremental localization, also referred to as simultaneous localization and mapping (SLAM). Experimental results in both indoor and outdoor environments show that sub-metre positional accuracies were achieved while reducing the number of tracking losses throughout the image sequence. It is shown that by integrating model-based tracking with SLAM, not only does SLAM improve model-tracking performance, but the model-based tracker alleviates the computational expense of SLAM's loop-closing procedure, improving runtime performance. Experiments also revealed that ViSP was unable to handle occlusions when a complete 3D building model was used, resulting in large errors in its pose estimates. The second contribution of this dissertation is a novel map-based incremental localization algorithm that improves tracking performance and increases pose-estimation accuracy relative to ViSP.
The novelty of this algorithm is an efficient matching process that identifies corresponding linear features between the UAV's RGB image data and a large, complex, untextured 3D model. The proposed model-based tracker improved positional accuracies from 10 m (obtained with ViSP) to 46 cm in outdoor environments, and from an unattainable result with ViSP to 2 cm in large indoor environments. The main disadvantage of any incremental algorithm is that it requires the camera pose of the first frame; initialization is often a manual process. The third contribution of this dissertation is a map-based absolute localization algorithm that automatically estimates the camera pose when no prior pose information is available. The method uses vertical line matching to register the reference model views against a set of initial input images via geometric hashing. Results demonstrate that sub-metre positional accuracies were achieved, and that a proposed enhancement of conventional geometric hashing produced more correct matches: 75% of the correct matches were identified, compared to 11%. Furthermore, the number of incorrect matches was reduced by 80%.
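The common core of these map-based methods is recovering the camera pose from correspondences between the 3D model and the image. As a generic illustration (the dissertation matches linear features; points and direct linear transform resectioning are used here only as the simplest stand-in), the projection matrix can be estimated from six or more synthetic 3D-2D matches:

```python
import numpy as np

def dlt_camera_matrix(X, x):
    """Estimate the 3x4 projection matrix P from n >= 6 3D-2D matches (DLT)."""
    A = []
    for Xw, (u, v) in zip(X, x):
        Xh = np.append(Xw, 1.0)                       # homogeneous 3D point
        A.append(np.hstack([Xh, np.zeros(4), -u * Xh]))
        A.append(np.hstack([np.zeros(4), Xh, -v * Xh]))
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)                       # null-space solution

def project(P, X):
    """Project Nx3 points to pixel coordinates."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:3]

# Synthetic building-corner coordinates (metres) and their pixel projections
# under an assumed camera; in the real system these come from model matching.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
Rt = np.hstack([np.eye(3), np.array([[0.5], [0.2], [5.0]])])
P_true = K @ Rt
X = np.array([[0, 0, 0], [4, 0, 0], [4, 3, 0], [0, 3, 0], [0, 0, 2], [4, 3, 2.0]])
x = project(P_true, X)

P_est = dlt_camera_matrix(X, x)
reproj_err = np.max(np.abs(project(P_est, X) - x))    # pixels
```

With noise-free correspondences the recovered matrix reprojects the model points essentially exactly; real image data would require a robust estimator around this core.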

    Distributed scene reconstruction from multiple mobile platforms

    Recent research on mobile robotics has produced new designs that provide household robots with omnidirectional motion. The image sensor embedded in these devices motivates the application of 3D vision techniques for navigation and mapping. In addition, distributed cheap-sensing systems acting as a unitary entity have recently emerged as an efficient alternative to expensive mobile equipment. In this work we present an implementation of a visual reconstruction method, structure from motion (SfM), on a low-budget omnidirectional mobile platform, and extend this method to distributed 3D scene reconstruction with several instances of such a platform. Our approach overcomes the challenges posed by the platform. The unprecedented levels of noise produced by the platform's typical image compression are handled by our feature-filtering methods, which ensure feature-match populations suitable for epipolar geometry estimation by means of a strict quality-based feature selection. The robust pose-estimation algorithms implemented, along with a novel feature-tracking system, enable our incremental SfM approach to deal with the ill-conditioned inter-image configurations provoked by the omnidirectional motion. The feature-tracking system efficiently manages the feature scarcity produced by noise and outputs quality feature tracks, which allow robust 3D mapping of a given scene even if, due to noise, their length is shorter than is usually assumed necessary for stable 3D reconstruction. The distributed reconstruction from multiple instances of SfM is attained by applying loop-closing techniques. Our multiple-reconstruction system merges individual 3D structures and resolves the global-scale problem with minimal overlaps, whereas in the literature 3D mapping is obtained by overlapping stretches of sequences. The performance of this system is demonstrated in the two-session case.
The management of noise, the stability against ill-conditioned configurations, and the robustness of our SfM system are validated in a number of experiments and compared with state-of-the-art approaches. Possible future research areas are also discussed.
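The strict quality gate that precedes epipolar geometry estimation can be sketched as thresholding the epipolar residual |x₂ᵀ E x₁| of each candidate match. The camera motion, 3D points, and threshold below are invented for illustration, not taken from the thesis:

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]x of a 3-vector."""
    return np.array([[0, -t[2], t[1]], [t[2], 0, -t[0]], [-t[1], t[0], 0]])

def epipolar_residuals(E, x1, x2):
    """|x2^T E x1| for each match; rows of x1, x2 are normalized (x, y)."""
    x1h = np.hstack([x1, np.ones((len(x1), 1))])
    x2h = np.hstack([x2, np.ones((len(x2), 1))])
    return np.abs(np.sum(x2h * (x1h @ E.T), axis=1))

# Synthetic camera motion: sideways translation, no rotation.
R, t = np.eye(3), np.array([1.0, 0.0, 0.0])
E = skew(t) @ R                                    # essential matrix

# Two correct matches and one noise-corrupted outlier.
X = np.array([[0.0, 0.0, 4.0], [0.5, -0.2, 5.0]])  # 3D points
x1 = X[:, :2] / X[:, 2:3]                          # projections in view 1
x2 = (X - t)[:, :2] / X[:, 2:3]                    # projections in view 2
x1 = np.vstack([x1, [0.3, 0.1]])
x2 = np.vstack([x2, [0.9, 0.4]])

res = epipolar_residuals(E, x1, x2)
inliers = res < 1e-6                               # strict quality gate
```

In the noisy compressed-image setting described above, the threshold would be tuned to the measured residual distribution rather than set near machine precision.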

    Mobile Robots Navigation

    Mobile robot navigation includes several interrelated activities: (i) perception, obtaining and interpreting sensory information; (ii) exploration, the strategy that guides the robot in selecting the next direction to go; (iii) mapping, the construction of a spatial representation from the sensory information perceived; (iv) localization, the strategy for estimating the robot's position within the spatial map; (v) path planning, the strategy for finding a path towards a goal location, optimal or not; and (vi) path execution, where motor actions are determined and adapted to environmental changes. The book addresses these activities by integrating results from the research of several authors around the world. Research cases are documented in 32 chapters organized into seven categories, described next.
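Activity (v), path planning, can be illustrated in its simplest form: a breadth-first search over an occupancy grid, which returns a shortest 4-connected path when one exists. The grid below is a made-up example, not from the book:

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid (0 = free, 1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}                      # visited set + back-pointers
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:                       # reconstruct path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cur
                queue.append((nr, nc))
    return None                               # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = bfs_path(grid, (0, 0), (2, 0))         # route around the wall
```

Real planners layer cost functions, robot kinematics, and replanning on top of this skeleton, but the search structure is the same.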

    3D reconstruction of coronary arteries from angiographic sequences for interventional assistance

    Introduction -- Review of literature -- Research hypothesis and objectives -- Methodology -- Results and discussion -- Conclusion and future perspectives