
    Haptic Guidance with a Soft Exoskeleton Reduces Error in Drone Teleoperation

    Haptic guidance has been shown to improve performance in many fields, as it can give additional information without overloading other sensory channels such as vision or audition. Our group is investigating new, intuitive ways to interact with robots, and we developed a suit, called the FlyJacket, to control drones with upper-body movement. In this paper, we present the integration of cable-driven haptic guidance into the FlyJacket. The aim of the device is to apply a force that depends on the distance between the drone and a predetermined trajectory in order to correct the user's torso orientation and improve flight precision. Participants (n = 10) flying a simulated fixed-wing drone controlled with torso movements tested four different guidance profiles (three linear profiles with different stiffnesses and one quadratic). Our results show that quadratically shaped guidance, which gives a weak force when the error is small and a strong force when the error becomes significant, was the most effective at improving performance. All participants also reported through questionnaires that the haptic guidance was useful for flight control.
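
    As a rough illustration of the profiles compared in this study, the sketch below (Python; the stiffness value, saturation limit, and function name are illustrative assumptions, not parameters reported in the paper) computes a guidance force from the trajectory error with either a linear or a quadratic shape. The quadratic shape stays weak for small errors and grows quickly once the error becomes large.

    def guidance_force(error, profile="quadratic", k=2.0, f_max=5.0):
        """Guidance force magnitude (N) as a function of trajectory error (m)."""
        if profile == "linear":
            force = k * error            # proportional to the error (spring-like)
        elif profile == "quadratic":
            force = k * error ** 2       # weak for small errors, strong for large ones
        else:
            raise ValueError(f"unknown profile: {profile}")
        return min(force, f_max)         # saturate so the cue stays comfortable

    # Compare how the two shapes respond to small vs. large deviations.
    for e in (0.1, 0.5, 1.0):
        print(e, guidance_force(e, "linear"), guidance_force(e, "quadratic"))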

    Haptic Guidance for Teleoperation: Optimizing Performance and User Experience

    Haptic guidance in teleoperation (e.g., of robotic systems) is a pioneering approach to successfully combining automation and human competencies. In the current user study, various forms of haptic guidance were evaluated in terms of user performance and experience. Twenty-six participants completed an obstacle-avoidance task and a peg-in-hole task in a virtual environment using a seven-DoF force-feedback device. Three types of haptic guidance (translational, rotational, and a combination of both, i.e., 6 DoF) and three levels of guidance forces and torques (stiffnesses) were compared. Moreover, a secondary-task paradigm was used to explore the effects of additional cognitive load. The results show that haptic guidance significantly improves performance (i.e., completion times and collision forces). The best results were obtained when the guidance forces were set to a medium or high value. Additionally, feelings of control were significantly higher under higher cognitive load when participants were supported by translational haptic guidance.
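
    The sketch below (Python) is a simplified illustration of the three guidance types compared in the study; the stiffness values and the rotation-vector orientation error are assumptions made here for brevity, not the study's implementation. A spring-like force pulls the device toward a reference position, a spring-like torque pulls it toward a reference orientation, and the 6-DoF condition applies both.

    import numpy as np

    def guidance_wrench(pos, rotvec, ref_pos, ref_rotvec, k_trans=50.0, k_rot=1.0):
        """Spring-like guidance toward a reference pose.

        Returns (force, torque). Using only the force corresponds to translational
        guidance, only the torque to rotational guidance, and both to 6-DoF guidance.
        Orientation error is approximated as a rotation-vector difference.
        """
        force = k_trans * (np.asarray(ref_pos) - np.asarray(pos))
        torque = k_rot * (np.asarray(ref_rotvec) - np.asarray(rotvec))
        return force, torque

    f, t = guidance_wrench(pos=[0.10, 0.02, 0.00], rotvec=[0.0, 0.0, 0.1],
                           ref_pos=[0.12, 0.00, 0.00], ref_rotvec=[0.0, 0.0, 0.0])
    print(f, t)   # force pulls toward the reference position, torque untwists the tool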

    The use of modern tools for modelling and simulation of UAV with Haptic

    Unmanned Aerial Vehicles (UAVs) are a robotics research field in high demand in recent years, although many questions remain unanswered. Compared with human-operated aerial vehicles, UAVs are still far less used because people are dubious about flying in, or flying, an unmanned vehicle: handing control to a computer that makes decisions based on the situation, as a human would, has made it difficult to convince people that the technology is safe and to sustain its development. Many types of UAVs are now available on the consumer market for applications such as photography, gaming, route mapping, building monitoring, and security, and they are also widely used by the military for surveillance and security purposes. One of the most common consumer products is the quadcopter, or quadrotor. The research presented here used modern tools (SolidWorks, Java NetBeans, and MATLAB/Simulink) to model a control system for a quadcopter UAV together with a haptic control system, controlling the quadcopter both in a virtual simulation environment and in real time. A mathematical model for controlling the quadcopter in simulation and in real-time environments was introduced, and a design methodology for the quadcopter was defined. This methodology was then extended to develop virtual-simulation and real-time environments for simulations and experiments. Haptic control was then combined with the designed control system to control the quadcopter in virtual simulations and real-time experiments. Using the mathematical model of the quadcopter, PID and PD control techniques were applied to the quadcopter's altitude and motion controls as the work progressed. The dynamic model was first developed from a simple set of equations and then refined into a more complex control and mathematical model with precise actuator functions and aerodynamic coefficients (Figures 5-7). The presented results are satisfactory and show that flight experiments and simulations of quadcopter control using haptics form a novel area of research that helps operations succeed and gives the operator more control in difficult environments. With haptics, accidents can be minimised and the functional performance of both the operator and the UAV can be significantly enhanced. This area of haptic control research can be developed further according to the needs of specific applications.
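
    As a minimal illustration of the PID altitude loop mentioned above, the sketch below is in Python rather than the MATLAB/Simulink models used in the work; the mass, gains, and simplified vertical dynamics are assumptions chosen only for the example. It regulates the altitude of a point-mass quadcopter toward a 2 m reference.

    class PID:
        """Minimal PID controller used here for the altitude loop (illustrative gains)."""
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, setpoint, measurement):
            error = setpoint - measurement
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # Very simplified vertical dynamics: total thrust against gravity, no drag.
    dt, mass, g = 0.01, 1.2, 9.81
    altitude_pid = PID(kp=8.0, ki=0.5, kd=4.0, dt=dt)
    z, vz, z_ref = 0.0, 0.0, 2.0                            # start on the ground, climb to 2 m
    for _ in range(int(5.0 / dt)):
        thrust = mass * g + altitude_pid.update(z_ref, z)   # hover feedforward + PID correction
        vz += (thrust / mass - g) * dt
        z += vz * dt
    print(f"altitude after 5 s: {z:.2f} m")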

    The Shape of Damping: Optimizing Damping Coefficients to Improve Transparency on Bilateral Telemanipulation

    This thesis presents a novel optimization-based passivity control algorithm for haptic-enabled bilateral teleoperation systems involving multiple degrees of freedom. In particular, in the context of energy-bounding control, the contribution focuses on the implementation of a passivity layer for an existing time-domain scheme, ensuring optimal transparency of the interaction along the subsets of the environment space that are most important for the given task, while preserving the energy bounds required for passivity. The resulting optimization problem is convex and amenable to real-time implementation. The effectiveness of the proposed design is validated in an experiment performed on a virtual teleoperated environment. The interplay between transparency and stability is a critical aspect of haptic-enabled bilateral teleoperation control. While it is important to present the user with the true impedance of the environment, destabilizing factors such as time delays, stiff environments, and a relaxed grasp on the master device may compromise the stability and safety of the system. Passivity has been exploited as one of the main tools for providing sufficient conditions for stable teleoperation in several controller design approaches, such as the scattering algorithm, time-domain passivity control, the energy-bounding algorithm, and passive set-position modulation. This work presents an innovative energy-based approach that builds upon existing time-domain passivity controllers, improving and extending their effectiveness and functionality. The damping coefficients are prioritized in each degree of freedom, so that the resulting transparency provides more realistic force feedback along the prioritized directions than along the others; the prioritization is carried out with a quadratic programming algorithm that finds the optimal damping values. Finally, the energy-tank approach to passivity control is used to ensure stability of the bilateral robotic manipulation system, which must remain passive at all times to preserve its stability. This work also gives a brief introduction to haptic devices as the master component of the telemanipulation chain; on the slave side, the end effector is represented as an interactive object within an environment, with a force sensor providing the feedback signal. The whole interface is built on the cross-platform framework ROS, through which the user interacts with the system. Experimental results are presented.
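
    For context, the sketch below (Python) shows a single-DoF version of the time-domain passivity observer/controller that such schemes build upon: the energy exchanged at the haptic port is monitored and, whenever it would become negative, a variable damper absorbs the excess. The thesis's actual contribution, distributing and prioritizing the required damping across degrees of freedom with a quadratic program, is not reproduced here; the sign convention and sample values are assumptions.

    def passivate(forces, velocities, dt):
        """Single-DoF time-domain passivity observer/controller (illustrative).

        forces/velocities: sampled force commands and device velocities at the port.
        Whenever the accumulated port energy would go negative (active behavior),
        a variable damping term is added so that the output remains passive.
        """
        energy = 0.0
        out = []
        for f, v in zip(forces, velocities):
            energy += f * v * dt                     # energy flowing into the port this step
            if energy < 0.0 and abs(v) > 1e-9:
                alpha = -energy / (v * v * dt)       # damping needed to absorb the excess
                f = f + alpha * v
                energy = 0.0                         # excess energy has been dissipated
            out.append(f)
        return out

    print(passivate(forces=[1.0, -2.0, -2.0], velocities=[0.1, 0.1, 0.1], dt=0.001))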

    Safe local aerial manipulation for the installation of devices on power lines: Aerial-core first year results and designs

    The power grid is an essential infrastructure in any country, comprising thousands of kilometers of power lines that require periodic inspection and maintenance, carried out nowadays by human operators in risky conditions. To increase safety and reduce time and cost with respect to conventional solutions involving manned helicopters and heavy vehicles, the AERIAL-CORE project proposes the development of aerial robots capable of performing aerial manipulation operations to assist human operators in power line inspection and maintenance, allowing the installation of devices, such as bird flight diverters or electrical spacers, and the fast delivery and retrieval of tools. This manuscript describes the goals and functionalities to be developed for safe local aerial manipulation, presenting the preliminary designs and experimental results obtained in the first year of the project. (Article number 6220. Funding: European Union H2020 871479; Ministerio de Ciencia, Innovación y Universidades de España, FPI 201.)

    Aerial Robotics for Inspection and Maintenance

    Aerial robots with perception, navigation, and manipulation capabilities are extending the range of applications of drones, allowing the integration of different sensor devices and robotic manipulators to perform inspection and maintenance operations on infrastructures such as power lines, bridges, viaducts, or walls, typically involving physical interaction in flight. New research and technological challenges arise from applications demanding the benefits of aerial robots, particularly in outdoor environments. This book collects eleven papers from research groups in Spain, Croatia, Italy, Japan, the USA, the Netherlands, and Denmark, focused on the design, development, and experimental validation of methods and technologies for inspection and maintenance using aerial robots.

    Télé-opération Corps Complet de Robots Humanoïdes (Whole-Body Teleoperation of Humanoid Robots)

    This thesis aims to investigate systems and tools for teleoperating a humanoid robot. Robot teleoperation is crucial for sending and controlling robots in environments that are dangerous or inaccessible for humans (e.g., disaster-response scenarios, contaminated environments, or extraterrestrial sites). The term teleoperation most commonly refers to direct and continuous control of a robot. In this case, the human operator guides the motion of the robot with her/his own physical motion or through some physical input device. One of the main challenges is to control the robot in a way that guarantees its dynamical balance while trying to follow the human references. In addition, the human operator needs feedback about the state of the robot and its work site through remote sensors in order to comprehend the situation or feel physically present at the site, producing effective robot behaviors. Complications arise when the communication network is non-ideal. In this case the commands from human to robot, together with the feedback from robot to human, can be delayed. These delays can be very disturbing for the human operator, who then cannot teleoperate the robot avatar effectively. Another crucial point to consider when setting up a teleoperation system is the large number of parameters that have to be tuned to effectively control the teleoperated robots. Machine learning approaches and stochastic optimizers can be used to automate the learning of some of these parameters. In this thesis, we propose a teleoperation system that has been tested on the humanoid robot iCub. We used an inertial-technology-based motion capture suit as the input device to control the humanoid and a virtual reality headset connected to the robot cameras to provide visual feedback. We first translated the human movements into equivalent robot ones by developing a motion retargeting approach that achieves human-likeness while trying to ensure the feasibility of the transferred motion. We then implemented a whole-body controller to enable the robot to track the retargeted human motion. The controller was later optimized in simulation to achieve good tracking of the whole-body reference movements, using a multi-objective stochastic optimizer, which allowed us to find robust solutions that work on the real robot in a few trials. To teleoperate walking motions, we implemented a higher-level teleoperation mode in which the user can use a joystick to send reference commands to the robot. We integrated this setting into the teleoperation system, allowing the user to switch between the two different modes. A major problem preventing the deployment of such systems in real applications is the presence of communication delays between the human input and the feedback from the robot: even a few hundred milliseconds of delay can irremediably disturb the operator, let alone a few seconds. To overcome these delays, we introduced a system in which a humanoid robot executes commands before it actually receives them, so that the visual feedback appears synchronized to the operator, whereas the robot actually executed the commands in the past. To do so, the robot continuously predicts future commands by querying a machine learning model that is trained on past trajectories and conditioned on the last received commands.
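
    To make the delay-compensation idea concrete, the sketch below (Python) is a toy stand-in for the command predictor: it linearly extrapolates the most recently received command samples a fixed number of steps ahead. The thesis instead queries a machine-learning model trained on past trajectories and conditioned on the last received commands; the window length and delay used here are illustrative assumptions.

    import numpy as np

    def predict_future_commands(history, delay_steps, window=5):
        """Extrapolate the last `window` received command samples `delay_steps` ahead.

        history: list of command vectors in the order they were received.
        Returns the predicted command vector the robot should execute now, even
        though the corresponding operator command has not yet arrived.
        """
        hist = np.asarray(history[-window:], dtype=float)        # (n_samples, n_dims)
        t = np.arange(len(hist))
        preds = []
        for d in range(hist.shape[1]):
            slope, intercept = np.polyfit(t, hist[:, d], deg=1)  # straight-line fit per dimension
            preds.append(intercept + slope * (len(hist) - 1 + delay_steps))
        return np.array(preds)

    # Delayed joint reference received so far, drifting upward at 0.02 rad per sample.
    received = [[0.00], [0.02], [0.04], [0.06], [0.08]]
    print(predict_future_commands(received, delay_steps=3))      # ~[0.14]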