96 research outputs found

    Survey of Visual and Force/Tactile Control of Robots for Physical Interaction in Spain

    Sensors provide robotic systems with the information required to perceive the changes that happen in unstructured environments and to modify their actions accordingly. The robotic controllers which process and analyze this sensory information are usually based on three types of sensors (visual, force/torque and tactile), which identify the most widespread robotic control strategies: visual servoing control, force control and tactile control. This paper presents a detailed review of the sensor architectures, algorithmic techniques and applications developed by Spanish researchers to implement these mono-sensor controllers as well as multi-sensor controllers that combine several of these sensors.
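
    As a concrete anchor for the visual servoing strategy the survey covers, below is a minimal sketch of the classic image-based law v = -lambda * L^+ * e for point features. It is textbook material (Chaumette-Hutchinson style), not an implementation from any of the reviewed Spanish systems, and the feature values in the example are made up.

        import numpy as np

        def interaction_matrix(x, y, Z):
            """Interaction matrix of a normalized image point (x, y) at depth Z."""
            return np.array([
                [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
                [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
            ])

        def ibvs_velocity(features, desired, depths, lam=0.5):
            """Camera twist [vx vy vz wx wy wz] from v = -lambda * L^+ * e."""
            L = np.vstack([interaction_matrix(x, y, Z)
                           for (x, y), Z in zip(features, depths)])
            e = (np.asarray(features) - np.asarray(desired)).ravel()
            return -lam * np.linalg.pinv(L) @ e

        # Example: four points slightly off their desired positions
        feats = [(0.11, 0.10), (-0.09, 0.10), (-0.10, -0.11), (0.10, -0.10)]
        goal  = [(0.10, 0.10), (-0.10, 0.10), (-0.10, -0.10), (0.10, -0.10)]
        v = ibvs_velocity(feats, goal, depths=[1.0] * 4)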

    Trajectory optimization and motion planning for quadrotors in unstructured environments

    Coming out of university labs, robots typically perform tasks while navigating through unstructured environments. Realizing autonomous motion in such environments poses a number of challenges compared to highly controlled laboratory spaces. In unstructured environments robots cannot rely on complete knowledge of their surroundings and have to continuously acquire information for decision making. These challenges are a consequence of the high dimensionality of the state space and of the uncertainty introduced by modeling and perception. This is even more true for aerial robots, which have complex nonlinear dynamics and can move freely in 3D space. To manage this complexity a robot has to select a small set of relevant features, reason on a reduced state space and plan trajectories over a short time horizon. This thesis is a contribution towards the autonomous navigation of aerial robots (quadrotors) in real-world unstructured scenarios. The first three chapters present a contribution towards an implementation of receding time horizon optimal control. The optimization problem for model-based trajectory generation in environments with obstacles is formulated using an approach based on variational calculus, modeling the robots in SE(3), the Lie group of 3D rigid-body transformations. The fourth chapter explores the problem of using minimal information and sensing to generate motion towards a goal in an indoor building-like scenario. The fifth chapter investigates the problem of extracting visual features from the environment to control motion in an indoor corridor-like scenario. The last chapter deals with the problem of spatial reasoning and motion planning using atomic propositions in multi-robot environments with obstacles.
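
    To make the receding-horizon idea of the first chapters concrete, here is a heavily simplified sketch: a short-horizon trajectory is re-optimized at each step with a smoothness cost and a soft obstacle penalty, and only the first waypoint is executed. It works in flat 2D space rather than on SE(3), and every function name and constant is a hypothetical illustration, not the thesis's formulation.

        import numpy as np
        from scipy.optimize import minimize

        def plan_horizon(start, goal, obstacles, n=10, r_safe=0.5):
            """Plan n waypoints over a short horizon: smoothness + obstacle penalty."""
            def cost(flat):
                pts = np.vstack([start, flat.reshape(n, 2), goal])
                smooth = np.sum(np.diff(pts, axis=0) ** 2)        # penalize path length
                pen = 0.0
                for c in obstacles:                               # soft barrier per obstacle
                    d = np.linalg.norm(pts - c, axis=1)
                    pen += np.sum(np.maximum(0.0, r_safe - d) ** 2)
                return smooth + 100.0 * pen
            x0 = np.linspace(start, goal, n + 2)[1:-1].ravel()    # straight-line initial guess
            return minimize(cost, x0, method="L-BFGS-B").x.reshape(n, 2)

        # Receding horizon: replan, execute the first waypoint, repeat
        pose, goal = np.array([0.0, 0.0]), np.array([5.0, 0.0])
        obstacles = [np.array([2.5, 0.1])]
        for _ in range(3):
            pose = plan_horizon(pose, goal, obstacles)[0]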

    Task space control for on-orbit space robotics using a new ROS-based framework

    This paper proposes several task space control approaches for complex on-orbit robots with high degrees of freedom. These approaches include redundancy resolution and take the non-linear dynamic model of the on-orbit robotic systems into account. The suitability of the proposed task space control approaches is explored in several on-orbit servicing operations requiring visual servoing tasks of complex humanoid robots. A unified open-source framework for space-robotics simulations, called OnOrbitROS, is used to evaluate the proposed control systems and compare their behaviour with existing state-of-the-art ones. The adopted framework is based on ROS and includes and reproduces the principal environmental conditions that space robots and manipulators can experience in an on-orbit servicing scenario. The architecture of the different software modules developed and their application to complex space robotic systems is presented. Efficient real-time implementations are achieved using the proposed OnOrbitROS framework. The proposed controllers are applied to perform the guidance of a humanoid robot. The robot dynamics are integrated into the definition of the controllers, and an analysis of the results and their practical properties is described in the results section.
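
    A standard building block behind task space control with redundancy resolution, as discussed above, is the null-space projection law qdot = J^+ xdot + (I - J^+ J) qdot0. The sketch below shows that law with a damped pseudoinverse; it is a generic textbook formulation, not the OnOrbitROS controllers themselves.

        import numpy as np

        def task_space_step(J, xdot_des, qdot0, damping=1e-2):
            """Joint velocities that track a task velocity xdot_des while pushing a
            secondary motion qdot0 (e.g., a posture or joint-limit gradient) into
            the task null space. The damped pseudoinverse keeps the law well
            conditioned near singularities."""
            m, n = J.shape
            Jp = J.T @ np.linalg.inv(J @ J.T + damping * np.eye(m))
            return Jp @ xdot_des + (np.eye(n) - Jp @ J) @ qdot0

        # Example: 2-task-DoF Jacobian of a 4-DoF arm, secondary motion toward q = 0
        J = np.array([[1.0, 0.5, 0.2, 0.1], [0.0, 1.0, 0.4, 0.2]])
        qdot = task_space_step(J, xdot_des=np.array([0.1, 0.0]),
                               qdot0=-0.5 * np.ones(4))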

    Visual guidance of unmanned aerial manipulators

    The ability to fly has greatly expanded the possibilities for robots to perform surveillance, inspection or map generation tasks. Yet it was only in recent years that research in aerial robotics was mature enough to allow active interactions with the environment. The robots responsible for these interactions are called aerial manipulators and usually combine a multirotor platform and one or more robotic arms. The main objective of this thesis is to formalize the concept of an aerial manipulator and to present guidance methods that use visual information to provide such vehicles with autonomous functionalities. A key competence for controlling an aerial manipulator is the ability to localize it in the environment. Traditionally, this localization has required an external sensor infrastructure (e.g., GPS or IR cameras), restricting real-world applications. Furthermore, localization methods with on-board sensors, exported from other robotics fields such as simultaneous localization and mapping (SLAM), require large computational units, a handicap for vehicles where size, payload, and power consumption are tightly constrained. In this regard, this thesis proposes a method to estimate the state of the vehicle (i.e., position, orientation, velocity and acceleration) by means of on-board, low-cost, light-weight and high-rate sensors. Given the physical complexity of these robots, advanced control techniques are required during navigation. Thanks to their redundant degrees of freedom, they can satisfy not only mobility requirements but also other tasks simultaneously and hierarchically, prioritized according to their impact on overall mission success. In this work we present such control laws and define a number of these tasks to drive the vehicle using visual information, guarantee the robot's integrity during flight, improve platform stability, and increase arm operability. The main contributions of this research work are threefold: (1) a localization technique enabling autonomous navigation, specifically designed for aerial platforms with size, payload and computational restrictions; (2) control commands that drive the vehicle using visual information (visual servoing); and (3) the integration of the visual servo commands into a hierarchical control law that exploits the redundancy of the robot to accomplish secondary tasks during flight; these tasks, specific to aerial manipulators, are also provided. All the techniques presented in this document have been validated through extensive experimentation with real robotic platforms.
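
    As a minimal illustration of state estimation from low-cost, high-rate on-board sensors, here is a sketch of a roll/pitch complementary filter fusing gyro integration with accelerometer tilt. It is a generic technique offered for intuition only; the thesis's estimator, which also recovers position, velocity and acceleration, is more elaborate.

        import numpy as np

        def complementary_filter(roll, pitch, gyro, accel, dt, alpha=0.98):
            """One step of a roll/pitch complementary filter: integrate the
            high-rate gyro and correct its low-frequency drift with the
            accelerometer's gravity direction. alpha sets the crossover."""
            # Gyro propagation (rad): accurate short-term, drifts over time
            roll_g = roll + gyro[0] * dt
            pitch_g = pitch + gyro[1] * dt
            # Accelerometer tilt (rad): noisy but drift-free near hover
            roll_a = np.arctan2(accel[1], accel[2])
            pitch_a = np.arctan2(-accel[0], np.hypot(accel[1], accel[2]))
            return (alpha * roll_g + (1 - alpha) * roll_a,
                    alpha * pitch_g + (1 - alpha) * pitch_a)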

    Vision-based methods for state estimation and control of robotic systems with application to mobile and surgical robots

    For autonomous systems that need to perceive the surrounding environment to accomplish a given task, vision is a highly informative exteroceptive sensory source. When gathering information from the available sensors, the richness of visual data makes it possible to build a complete description of the environment, collecting geometrical and semantic information (e.g., object pose, distances, shapes, colors, lights). The large amount of collected data allows for methods that exploit either the totality of the data (dense approaches) or a reduced set obtained from feature extraction procedures (sparse approaches). This manuscript presents dense and sparse vision-based methods for control and sensing of robotic systems. First, a safe navigation scheme for mobile robots moving in unknown environments populated by obstacles is presented. For this task, dense visual information is used to perceive the environment (i.e., detect the ground plane and obstacles) and, in combination with other sensory sources, to estimate the robot motion with a linear observer. In parallel, sparse visual data are extracted as geometric primitives in order to implement a visual servoing control scheme satisfying proper navigation behaviours. This controller relies on visually estimated information and is designed to guarantee safety during navigation. In addition, redundant structures are taken into account to re-arrange the internal configuration of the robot and reduce its encumbrance when the workspace is highly cluttered. Vision-based estimation methods are relevant also in other contexts. In the field of surgical robotics, having reliable data about unmeasurable quantities is of great importance and critical at the same time. In this manuscript, we present a Kalman-based observer to estimate the 3D pose of a suturing needle held by a surgical manipulator for robot-assisted suturing. The method exploits images acquired by the endoscope of the robotic platform to extract relevant geometrical information and obtain projected measurements of the tool pose. This method has also been validated with a novel simulator designed for the da Vinci robotic platform, built to ease interfacing and use under ideal conditions for testing and validation. The Kalman-based observers mentioned above are classical passive estimators, whose system inputs are theoretically arbitrary; this leaves no possibility of actively adapting the input trajectories to optimize specific requirements on the estimation performance. For this purpose, the active estimation paradigm is introduced and some related strategies are presented. More specifically, a novel active sensing algorithm employing dense visual information is described for a typical Structure-from-Motion (SfM) problem. The algorithm generates an optimal estimation of a scene observed by a moving camera while minimizing the maximum uncertainty of the estimation. This approach can be applied to any robotic platform and has been validated with a manipulator arm equipped with a monocular camera.
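
    The Kalman-based observers described above build on the standard predict/update cycle. Below is a generic linear Kalman filter step for intuition; the matrices F, H, Q, R are placeholders, and the actual needle-pose observer uses projected endoscope measurements rather than this toy model.

        import numpy as np

        def kf_predict(x, P, F, Q):
            """Propagate the state estimate and covariance through the motion model."""
            return F @ x, F @ P @ F.T + Q

        def kf_update(x, P, z, H, R):
            """Correct the prediction with a (projected) measurement z = H x + noise."""
            S = H @ P @ H.T + R                    # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
            x = x + K @ (z - H @ x)
            P = (np.eye(len(x)) - K @ H) @ P
            return x, P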

    Robot Assisted Object Manipulation for Minimally Invasive Surgery

    Robotic systems have an increasingly important role in facilitating minimally invasive surgical treatments. In robot-assisted minimally invasive surgery, surgeons remotely control instruments from a console to perform operations inside the patient. However, despite the advanced technological status of surgical robots, fully autonomous systems with decision-making capabilities are not yet available. In 2017, Yang et al. proposed a structure to classify the research efforts toward autonomy achievable with surgical robots, identifying six levels: no autonomy, robot assistance, task autonomy, conditional autonomy, high autonomy, and full autonomy. All the commercially available platforms in robot-assisted surgery are still at level 0 (no autonomy). Although increasing the level of autonomy remains an open challenge, its adoption could potentially introduce multiple benefits, such as decreasing surgeons' workload and fatigue and pursuing a consistent quality of procedures. Ultimately, allowing surgeons to interpret the ample and intelligent information coming from the system will enhance the surgical outcome and reflect positively on both patients and society. Three main capabilities are required to introduce automation into surgery: the surgical robot must move with high precision, have motion planning capabilities, and understand the surgical scene. Depending on the type of surgery, other aspects may also play a fundamental role, such as compliance and stiffness. This thesis addresses three technological challenges encountered when trying to achieve these goals in the specific case of robot-object interaction. First, how to overcome the inaccuracy of cable-driven systems when executing fine and precise movements. Second, how to plan different tasks in dynamically changing environments. Lastly, how the understanding of a surgical scene can be used to solve more than one manipulation task. To address the first challenge, a control scheme relying on accurate calibration is implemented to execute the pick-up of a surgical needle. Regarding the planning of surgical tasks, two approaches are explored: learning from demonstration to pick and place a surgical object, and a gradient-based approach to trigger a smoother object repositioning phase during intraoperative procedures. Finally, to improve scene understanding, this thesis focuses on developing a simulation environment where multiple tasks can be learned based on the surgical scene and then transferred to the real robot. Experiments proved that automation of the pick-and-place task for different surgical objects is possible. The robot successfully managed to autonomously pick up a suturing needle, position a surgical device for intraoperative ultrasound scanning, and manipulate soft tissue for intraoperative organ retraction. Although automation of surgical subtasks has been demonstrated in this work, several challenges remain open, such as the ability of the developed algorithms to generalise across different environmental conditions and different patients.
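
    One common representation for the learning-from-demonstration approach mentioned above is a dynamic movement primitive (DMP), which encodes a demonstrated trajectory as a stable attractor plus a learned forcing term and can replay it toward a new goal. The sketch below is a minimal one-DoF DMP for illustration, not the method implemented in the thesis; the demonstration signal is made up.

        import numpy as np

        class DMP1D:
            """Minimal one-DoF dynamic movement primitive."""
            def __init__(self, n_basis=20, az=25.0, ax=3.0):
                self.az, self.bz, self.ax = az, az / 4.0, ax
                self.c = np.exp(-ax * np.linspace(0.0, 1.0, n_basis))  # basis centers
                h = 1.0 / np.square(np.diff(self.c))                   # widths from spacing
                self.h = np.append(h, h[-1])

            def fit(self, y, dt):
                """Invert the transformation system along the demo to get the
                forcing target, then fit one weight per basis by weighted LS."""
                T = len(y)
                self.tau, self.y0, self.g = (T - 1) * dt, y[0], y[-1]
                yd = np.gradient(y, dt)
                ydd = np.gradient(yd, dt)
                x = np.exp(-self.ax * np.linspace(0.0, 1.0, T))        # canonical phase
                f_t = self.tau**2 * ydd - self.az * (self.bz * (self.g - y) - self.tau * yd)
                s = x * (self.g - self.y0)
                psi = np.exp(-self.h[:, None] * (x[None, :] - self.c[:, None])**2)
                self.w = (psi @ (s * f_t)) / (psi @ (s * s) + 1e-10)

            def rollout(self, g=None, dt=0.01):
                """Replay the learned motion, optionally toward a new goal g."""
                g = self.g if g is None else g
                y, z, x, out = self.y0, 0.0, 1.0, []
                for _ in range(int(self.tau / dt)):
                    psi = np.exp(-self.h * (x - self.c)**2)
                    f = (psi @ self.w) / (psi.sum() + 1e-10) * x * (g - self.y0)
                    z += dt * (self.az * (self.bz * (g - y) - z) + f) / self.tau
                    y += dt * z / self.tau
                    x += dt * (-self.ax * x) / self.tau
                    out.append(y)
                return np.array(out)

        # Learn a reach from one demonstration, replay it toward a shifted goal
        demo = np.sin(np.linspace(0.0, np.pi / 2.0, 200))   # hypothetical recording
        dmp = DMP1D()
        dmp.fit(demo, dt=0.005)
        path = dmp.rollout(g=1.2)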

    Vision-Based Control of Flexible Robot Systems

    This thesis addresses the control of flexible robot systems using a camera as a measurement device. To this end, the estimation of the dynamic state variables of a flexible-link robot from camera measurements is examined: an estimation algorithm for the dynamic state variables is proposed and tested on two flexible-link application examples. Flexible robots can exhibit very complex dynamic behavior during operation, which can lead to induced vibrations. Since the vibrations and their derivatives are not all measurable, the estimation of state variables plays a significant role in the state-feedback control of flexible-link robots. A vision sensor (i.e., a camera) offers contact-less measurement and can be used to measure the deflection of a flexible robot arm. Using a vision sensor, however, introduces new effects such as limited accuracy and time delay, the main inherent problems of applying vision sensors in this context. These effects and related compensation approaches are studied in this thesis. An indirect method for sensing link deflection (i.e., the system states) is presented; it uses a vision system consisting of a CCD camera and an image processing unit. The main purpose of this thesis is to develop an estimation approach combining suitable measurement devices that are easy to realize, with improved reliability. It includes designing two state estimators: the first for the traditional sensor type (negligible noise and time delay) and the second for the camera measurement, which accounts for the dynamic error due to the time delay. The estimation approach is first applied to a single-link flexible robot, whose dynamic model is derived using a finite element method. Based on the suggested estimation approach, the first observer estimates the vibrations using a strain gauge (fast and complete dynamics), and the second observer estimates the vibrations using vision data (slow dynamical parts). To achieve an optimal estimation, a proper combination of the two estimated dynamical parts of the system dynamics is described. The simulation results for the vision-based estimates show that the slow dynamical states can be estimated and that the observer can compensate for the time-delay dynamic errors. It is also observed that an optimal estimation can be attained by combining the slow estimated states with those of the fast observer based on strain-gauge measurements. Based on the suggested estimation approach, a vision-based controller for an elastic ship-mounted crane is designed to regulate the motion of the payload. For the observer and controller design, a linear dynamic model of the elastic ship-mounted crane is employed, incorporating a finite element technique for modeling the flexible link. To estimate the dynamic state variables and the unknown disturbance, two state observers are designed: the first estimates the state variables using camera measurements (an augmented Kalman filter), the second using potentiometer measurements (a PI-observer). To realize a multi-model approach for the elastic ship-mounted crane, a variable-gain controller and variable-gain observers are designed. The variable-gain controller generates the damping required to control the system based on the estimated states and the roll angle. Simulation results show that the variable-gain observers can adequately estimate the states and the unknown disturbance acting on the payload. It is further observed that the variable-gain controller can effectively reduce the payload pendulations. Experiments are conducted using the camera to measure the link deflection of a scaled elastic ship-mounted crane system. The results show that the variable-gain controller based on the combined state observers mitigated the vibrations of the system and the swinging of the payload. The material above is embedded into an interrelated thesis. A concise introduction to vision-based control and state estimation problems is given in the first chapter, together with an extensive survey of available visual servoing algorithms covering both rigid and flexible robot systems. Conclusions and suggestions for future research are provided in the last chapter.
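
    A minimal sketch of the delay-compensation idea discussed above: when a camera sample arrives d steps late, correct the estimate stored at that past instant and re-propagate it to the present with the buffered inputs. This is a generic Luenberger-style illustration with a toy single-mode model, not the thesis's augmented Kalman filter or PI-observer.

        import numpy as np

        def make_mode(omega, zeta, dt):
            """Euler-discretized single flexible mode; state is [deflection, rate]."""
            A = np.eye(2) + dt * np.array([[0.0, 1.0],
                                           [-omega**2, -2.0 * zeta * omega]])
            B = dt * np.array([0.0, 1.0])
            C = np.array([1.0, 0.0])        # the camera measures deflection only
            return A, B, C

        def delayed_update(x_hist, u_hist, y_cam, d, A, B, C, L):
            """Fold in a camera sample that is d >= 1 steps old: apply the
            Luenberger correction to the estimate stored at time k-d, then
            re-propagate to 'now' with the buffered inputs u[k-d..k-1]."""
            x = x_hist[-1 - d] + L * (y_cam - C @ x_hist[-1 - d])
            for u in u_hist[-d:]:
                x = A @ x + B * u
            return x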

    Autonomous Target Tracking Of A Quadrotor UAV Using Monocular Visual-Inertial Odometry

    Unmanned Aerial Vehicle (UAV) has been finding its ways into different applications. Hence, recent years witness extensive research towards achieving higher autonomy in UAV. Computer Vision (CV) algorithms replace Global Navigation Satellite System (GNSS), which is not reliable when the weather is bad, inside buildings or at secluded areas in performing real-time pose estimation. Thecontroller later uses the pose to navigate the UAV. This project presents a simulation of UAV, in MATLAB & SIMULINK, capable of autonomously detecting and tracking a designed visual marker. Referring to and improving the state-of-the-art CV algorithms, there is a newly formulated approach to detect the designed visual marker. The combination of data from the monocular camera with that from Inertial Measurement Unit (IMU) and sonar sensor enables the pose estimation of the UAV relative to the designed visual marker. A Proportional-Integral-Derivative (PID) controller later uses the pose of the UAV to navigate itself to be always following the target of interest
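
    A plain PID loop of the kind described is sketched below, one instance per controlled axis; the gains and the error signal (marker position relative to the UAV) are hypothetical placeholders, not the project's tuned values.

        class PID:
            """Textbook PID with output saturation; one instance per axis."""
            def __init__(self, kp, ki, kd, out_limit=1.0):
                self.kp, self.ki, self.kd = kp, ki, kd
                self.limit, self.i, self.prev_e = out_limit, 0.0, 0.0

            def step(self, error, dt):
                self.i += error * dt                       # integral of the error
                d = (error - self.prev_e) / dt             # derivative of the error
                self.prev_e = error
                u = self.kp * error + self.ki * self.i + self.kd * d
                return max(-self.limit, min(self.limit, u))

        # e.g., keep the marker centered along x: error = marker x relative to UAV
        pid_x = PID(kp=0.8, ki=0.05, kd=0.3)
        # cmd = pid_x.step(rel_pose_x, dt=0.02)   # feeds the velocity setpoint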

    Vision based navigation in a dynamic environment

    This thesis addresses the autonomous long-range navigation of wheeled robots in dynamic environments. It takes place within the FUI Air-Cobot project. Led by Akka Technologies, this project brought together several companies (Akka, Airbus, 2MORROW, Sterela) and two research laboratories, LAAS and Mines Albi, with the aim of designing a collaborative robot (cobot) able to perform the pre-flight inspection of an aircraft, either before take-off or in a hangar. The considered environment is highly structured (airport runways and hangars) and subject to strict traffic rules (forbidden zones, etc.); it may be cluttered with both static and dynamic unknown obstacles (luggage or refueling trucks, pedestrians, etc.) that must be avoided to guarantee the safety of people and equipment. Our navigation framework relies on previous works and is based on switching between different control laws (go-to-goal controller, visual servoing, obstacle avoidance) depending on the context. Our contribution is twofold. First, we have designed a visual servoing controller able to make the robot move over long distances, around the aircraft or in a hangar, thanks to a topological map and the choice of suitable targets; in addition, multi-camera visual servoing control laws have been built to exploit the image data provided by all the cameras embedded on the Air-Cobot system. The second contribution is related to safety and obstacle avoidance. A control law based on equiangular spirals, relying only on the data provided by the embedded lasers, has been designed to guarantee non-collision. It is thus fully sensor-based and allows the robot to avoid any obstacle, static or dynamic, providing a general solution to the collision problem. Finally, experimental results, obtained both at LAAS and on the Airbus site at Blagnac, show the efficiency of the developed strategy.
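
    For intuition about the equiangular-spiral avoidance law: a logarithmic spiral keeps a constant angle between the line of sight to a point and the path tangent, so holding the heading at a fixed offset from the obstacle bearing makes the robot circle around, and gradually away from, the obstacle. The sketch below is a toy rendering of that geometry, not the controller developed in the thesis.

        import numpy as np

        def spiral_heading(bearing_to_obstacle, phi=0.2, side=1):
            """Desired heading holding a constant angle (pi/2 + phi) between the
            velocity and the line of sight to the obstacle, so the path traces an
            equiangular (logarithmic) spiral. side = +1/-1 picks the bypass
            direction; phi > 0 opens the spiral outward, growing the clearance."""
            return bearing_to_obstacle + side * (np.pi / 2.0 + phi)

        def heading_rate(theta, theta_des, k=1.5):
            """P-law on the heading error, wrapped to (-pi, pi], as angular rate."""
            e = np.arctan2(np.sin(theta_des - theta), np.cos(theta_des - theta))
            return k * e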