    Robustness analysis and controller synthesis for bilateral teleoperation systems via IQCs

    Synthesis and control of haptic devices for remote communication: application to an anthropomorphic robotic interface for handshaking

    Remote communication systems have improved significantly in recent years, alongside the technological innovations that characterize our society. To achieve realistic and intuitive communication, a system must stimulate the senses that are normally involved in an interaction between two people, namely hearing, vision, and touch. The telephone was a major innovation: it finally allowed people to talk to each other directly, without resorting to a coded signal such as Morse code. Remote communication was further improved by the introduction of video calls, which let people not only hear but also see each other. Several studies have nevertheless shown that the sense of touch also plays a crucial role in interactions between individuals. A relatively recent technology, known as haptic technology, addresses the problem of perceiving and transmitting touch at a distance, with the aim of making remote communication more complete and realistic. Haptic technology also has other equally important applications; for instance, it is used in rehabilitation and in guided learning for people with motor impairments.
    This thesis concerns the development of haptic technology for remote communication between two people. The final objective is a teleoperation system that allows two users to shake hands remotely. Achieving this objective requires addressing two distinct problems: the design of a haptic interface capable of reproducing the required movement, and the implementation of a control law that guarantees the proper behaviour of that interface. Still within the framework of remote interaction through a haptic device, an interface for training and assessing handwriting skills is also presented. This latter application demonstrates, among other things, the importance of a haptic signal in human-human interaction and its influence on the users.

    Investigating User Experiences Through Animation-based Sketching

    Robotic learning of force-based industrial manipulation tasks

    Even with rapid technological advancements, robots are still not the most comfortable machines to work with: first, because the separation of robot and human workspaces imposes an additional financial burden; and second, because of the significant re-programming cost whenever products change, especially in Small and Medium-sized Enterprises (SMEs). There is therefore a significant need to reduce the programming effort required for robots to perform various tasks while sharing the same space with a human operator. The robot must hence be equipped with cognitive and perceptual capabilities that facilitate human-robot interaction. Humans use various senses to perform tasks, such as vision, smell, and taste. One sense that plays a significant role in human activity is touch, or force. For example, holding a cup of tea, or making fine adjustments while inserting a key, requires haptic information to achieve the task successfully. In all these examples, force and torque data are crucial for the successful completion of the activity, and this information implicitly conveys data about contact force, object stiffness, and much more. A deep understanding of the execution of such events can therefore help bridge the gap between humans and robots. This thesis aims to equip an industrial robot with the ability to handle force perception and then learn force-based tasks using Learning from Demonstration (LfD).
    To learn force-based tasks with LfD, it is essential to extract task-relevant features from the force information; knowledge must then be extracted and encoded from these task-relevant features so that the captured skill can be reproduced in a new scenario. In this thesis, these elements of LfD were achieved using different approaches depending on the demonstrated task. Four robotics problems were addressed within the LfD framework. The first challenge was to filter out the robot's internal forces (irrelevant signals) using a data-driven approach. The second challenge was the recognition of the Contact State (CS) during assembly tasks; to tackle it, a symbolic approach was proposed in which the force/torque signals recorded during a demonstrated assembly are encoded as a sequence of symbols. The third challenge was to learn a human-robot co-manipulation task from demonstration, for which an ensemble machine learning approach was proposed to capture the skill. The last challenge was to learn an assembly task by demonstration in the presence of geometrical variation of the parts; here, a new learning approach based on an Artificial Potential Field (APF) was proposed to learn a Peg-in-Hole (PiH) assembly task comprising non-contact and contact phases. To sum up, this thesis focuses on data-driven approaches to learning force-based tasks in an industrial context: different machine learning approaches were implemented, developed, and evaluated in different scenarios, and their performance was compared with approaches based on mathematical modelling.
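    As a rough illustration of the symbolic encoding idea mentioned above, the sketch below discretizes a force/torque stream into a sequence of symbols that could feed a contact-state recognizer. The clustering method, the number of symbols, and the synthetic data are illustrative assumptions, not details taken from the thesis.

```python
# Hedged sketch: one possible way to turn force/torque streams into a symbol
# sequence for contact-state recognition. Cluster count and the synthetic
# demonstration are illustrative assumptions, not values from the thesis.
import numpy as np
from sklearn.cluster import KMeans

def encode_ft_sequence(ft_samples: np.ndarray, n_symbols: int = 8) -> np.ndarray:
    """Map a (T, 6) force/torque stream to a sequence of discrete symbols.

    Each sample [Fx, Fy, Fz, Tx, Ty, Tz] is assigned to one of `n_symbols`
    clusters; the cluster index plays the role of a symbol.
    """
    km = KMeans(n_clusters=n_symbols, n_init=10, random_state=0)
    labels = km.fit_predict(ft_samples)
    # Collapse consecutive repeats so the sequence reflects symbol *changes*,
    # which is typically what distinguishes one contact state from the next.
    changes = np.concatenate(([True], labels[1:] != labels[:-1]))
    return labels[changes]

# Toy usage with synthetic data standing in for a demonstrated assembly.
demo = np.random.randn(500, 6)   # hypothetical 6-axis F/T log
symbols = encode_ft_sequence(demo)
print(symbols[:10])
```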

    Generative Models for Learning Robot Manipulation Skills from Humans

    A long-standing goal in artificial intelligence is to make robots seamlessly interact with humans in performing everyday manipulation skills. Learning from demonstrations, or imitation learning, provides a promising route to bridge this gap. In contrast to direct trajectory learning from demonstrations, many interactive robotic applications require a higher, contextual-level understanding of the environment. This requires learning invariant mappings in the demonstrations that can generalize across different environmental situations such as the size, position, and orientation of objects, the viewpoint of the observer, etc. In this thesis, we address this challenge by encapsulating invariant patterns in the demonstrations using probabilistic learning models for acquiring dexterous manipulation skills. We learn the joint probability density function of the demonstrations with a hidden semi-Markov model, and smoothly follow the generated sequence of states with a linear quadratic tracking controller. The model exploits the invariant segments (also termed sub-goals, options, or actions) in the demonstrations and adapts the movement to external environmental situations such as the size, position, and orientation of objects in the environment using a task-parameterized formulation. We incorporate high-dimensional sensory data for skill acquisition by parsimoniously representing the demonstrations with statistical subspace clustering methods and exploiting the coordination patterns in latent space. To adapt the models on the fly and/or teach new manipulation skills online from streaming data, we formulate a scalable online sequence clustering algorithm with Bayesian non-parametric mixture models, avoiding the model selection problem while ensuring tractability under small-variance asymptotics.
    We exploit the developed generative models to perform manipulation skills with remotely operated vehicles over satellite communication in the presence of communication delays and limited bandwidth. A set of task-parameterized generative models is learned from the demonstrations of different manipulation skills provided by the teleoperator. The model captures the intention of the teleoperator on the one hand and, on the other, provides assistance in performing remote manipulation tasks under varying environmental situations. The assistance is formulated as time-independent shared control, where the model continuously corrects the remote arm movement based on the current state of the teleoperator, and/or time-dependent autonomous control, where the model synthesizes the movement of the remote arm for autonomous skill execution. Using the proposed methodology with the two-armed Baxter robot as a mock-up for semi-autonomous teleoperation, we are able to learn manipulation skills such as opening a valve, pick-and-place of an object with obstacle avoidance, hot-stabbing (a specialized underwater task akin to a peg-in-a-hole task), screwdriver target snapping, and tracking a carabiner in as few as 4-8 demonstrations. Our study shows that the proposed manipulation assistance formulations improve the performance of the teleoperator by reducing task errors and execution time, while catering for environmental differences in performing remote manipulation tasks with limited bandwidth and communication delays.
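    To make the reproduction step above more concrete, the sketch below follows a sequence of sub-goals with a linear quadratic regulator on a simple double-integrator model, standing in for the hidden semi-Markov model plus linear quadratic tracking pipeline described in the abstract. The sub-goal values, durations, system matrices, and cost weights are illustrative assumptions rather than the thesis' actual formulation.

```python
# Hedged sketch: tracking a learned sequence of sub-goals with an LQR gain,
# a simplification of the HSMM + linear quadratic tracking idea above.
# The 1-D double integrator, sub-goals, and weights are illustrative only.
import numpy as np
from scipy.linalg import solve_discrete_are

dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])   # state: [position, velocity]
B = np.array([[0.0], [dt]])
Q = np.diag([100.0, 1.0])               # penalize tracking error
R = np.array([[0.01]])                  # penalize control effort

# Infinite-horizon LQR gain (a simplification of finite-horizon LQT).
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.inv(R + B.T @ P @ B) @ (B.T @ P @ A)

# Stepwise reference: hypothetical sub-goal positions with durations in steps,
# playing the role of the state sequence generated by the learned model.
subgoals = [(0.0, 100), (0.5, 150), (0.2, 100)]
reference = np.concatenate([np.full(d, g) for g, d in subgoals])

x = np.zeros(2)
trajectory = []
for ref in reference:
    x_ref = np.array([ref, 0.0])
    u = -K @ (x - x_ref)                # regulate toward the current sub-goal
    x = A @ x + B @ u
    trajectory.append(x[0])

print(f"final position: {trajectory[-1]:.3f} (target {subgoals[-1][0]})")
```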