
    Early Turn-taking Prediction with Spiking Neural Networks for Human Robot Collaboration

    Turn-taking is essential to the structure of human teamwork. Humans are typically aware of team members' intention to keep or relinquish their turn before a turn switch, where the responsibility for working on a shared task is shifted. Future co-robots are expected to provide the same competence. To that end, this paper proposes the Cognitive Turn-taking Model (CTTM), which leverages cognitive models (i.e., Spiking Neural Networks) to achieve early turn-taking prediction. The CTTM framework can process multimodal human communication cues (both implicit and explicit) and predict human turn-taking intentions at an early stage. The proposed framework is tested on a simulated surgical procedure, where a robotic scrub nurse predicts the surgeon's turn-taking intention. The CTTM framework was found to outperform state-of-the-art turn-taking prediction algorithms by a large margin. It also outperforms humans when presented with partial observations of communication cues (i.e., less than 40% of full actions). This early prediction capability enables robots to initiate turn-taking actions at an early stage, which facilitates collaboration and increases overall efficiency.
    Comment: Submitted to IEEE International Conference on Robotics and Automation (ICRA) 201
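    As a rough illustration of the spiking-neuron machinery the abstract refers to, the sketch below runs a leaky integrate-and-fire (LIF) layer over a toy multimodal cue stream and declares a turn-taking prediction as soon as enough output spikes have accumulated, i.e. before the full action has been observed. This is not the authors' CTTM; the cue encoding, weights and spike-count threshold are assumptions made only for the example.

```python
# Minimal, illustrative sketch (not the authors' CTTM): a leaky integrate-and-fire
# layer accumulates evidence from multimodal cue streams and emits a turn-taking
# prediction once the output spike count crosses a (hypothetical) threshold.
import numpy as np

rng = np.random.default_rng(0)

def lif_layer(inputs, weights, tau=20.0, v_thresh=1.0, dt=1.0):
    """Run a LIF layer over a (T, n_in) cue stream; return (T, n_out) output spikes."""
    T, _ = inputs.shape
    n_out = weights.shape[1]
    v = np.zeros(n_out)
    spikes = np.zeros((T, n_out))
    for t in range(T):
        v += dt * (-v / tau + inputs[t] @ weights)  # leaky integration of weighted input
        fired = v >= v_thresh
        spikes[t, fired] = 1.0
        v[fired] = 0.0                               # reset membrane after a spike
    return spikes

# Hypothetical multimodal cue stream (T x 3), e.g. gaze, speech energy, hand motion.
T = 100
cues = (rng.random((T, 3)) < 0.15).astype(float)
w = rng.random((3, 1)) * 0.8                          # toy weights; a real model is trained

out_spikes = lif_layer(cues, w)
cum = np.cumsum(out_spikes[:, 0])
decision_t = int(np.argmax(cum >= 3)) if cum[-1] >= 3 else T
print(f"turn-taking intention predicted at t={decision_t} of {T} steps "
      f"({100 * decision_t / T:.0f}% of the observation)")
```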

    Dyadic behavior in co-manipulation: from humans to robots

    To both decrease the physical toll on a human worker and increase a robot's perception of its environment, a human-robot dyad may be used to co-manipulate a shared object. Starting from the premise that humans are efficient when working together, the approach of this work is to investigate human-human dyads co-manipulating an object. The co-manipulation is evaluated from motion capture data, surface electromyography (EMG) signals, and custom contact sensors for qualitative performance analysis. A human-human dyadic co-manipulation experiment is designed in which each human is instructed to behave as a leader, as a follower, or simply to act as naturally as possible. The analysis of the experimental data revealed that humans modulate their arm mechanical impedance depending on their role during the co-manipulation. In order to emulate this human behavior during a co-manipulation task, an admittance controller with varying stiffness is presented. The desired stiffness is continuously varied based on a smooth scalar function that assigns a degree of leadership to the robot. Furthermore, the controller is analyzed through simulations, and its stability is analyzed using Lyapunov theory. The resulting object trajectories closely resemble the patterns seen in the human-human dyad experiment.
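    To make the varying-stiffness idea concrete, the following is a minimal sketch of a one-degree-of-freedom admittance law whose stiffness is scheduled by a smooth "degree of leadership" function. It is not the controller from the thesis; the smoothstep mapping, gains and leadership schedule are illustrative assumptions only.

```python
# Illustrative sketch (not the thesis' controller): a 1-DoF admittance law
#   M*a + D*v + K(alpha)*(x - x_ref) = f_ext
# whose stiffness K is scheduled by a smooth leadership degree alpha in [0, 1]:
# alpha -> 1 makes the robot stiff (leader), alpha -> 0 makes it compliant (follower).
import numpy as np

def leadership_stiffness(alpha, k_min=50.0, k_max=800.0):
    """Smoothly map a leadership degree alpha in [0, 1] to a stiffness value."""
    alpha = np.clip(alpha, 0.0, 1.0)
    s = alpha * alpha * (3 - 2 * alpha)          # smoothstep: C1-continuous blend
    return k_min + s * (k_max - k_min)

def admittance_step(x, v, f_ext, alpha, x_ref=0.0, M=5.0, D=40.0, dt=0.002):
    """Integrate one step of the admittance dynamics; return updated (x, v)."""
    K = leadership_stiffness(alpha)
    a = (f_ext - D * v - K * (x - x_ref)) / M
    v = v + a * dt
    x = x + v * dt
    return x, v

# Toy interaction: a constant 10 N human push while the robot's leadership ramps up.
x, v = 0.0, 0.0
for step in range(1000):
    alpha = step / 1000.0                        # hypothetical leadership schedule
    x, v = admittance_step(x, v, f_ext=10.0, alpha=alpha)
print(f"final displacement under 10 N: {x:.4f} m (a stiffer robot deviates less)")
```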

    Sensory Communication

    Contains table of contents for Section 2 and reports on five research projects. Sponsors: National Institutes of Health Contract 2 R01 DC00117; National Institutes of Health Contract 1 R01 DC02032; National Institutes of Health Contract 2 P01 DC00361; National Institutes of Health Contract N01 DC22402; National Institutes of Health Grant R01-DC001001; National Institutes of Health Grant R01-DC00270; National Institutes of Health Grant 5 R01 DC00126; National Institutes of Health Grant R29-DC00625; U.S. Navy - Office of Naval Research Grant N00014-88-K-0604; U.S. Navy - Office of Naval Research Grant N00014-91-J-1454; U.S. Navy - Office of Naval Research Grant N00014-92-J-1814; U.S. Navy - Naval Air Warfare Center Training Systems Division Contract N61339-94-C-0087; U.S. Navy - Naval Air Warfare Center Training System Division Contract N61339-93-C-0055; U.S. Navy - Office of Naval Research Grant N00014-93-1-1198; National Aeronautics and Space Administration/Ames Research Center Grant NCC 2-77

    Robotic learning of force-based industrial manipulation tasks

    Even with rapid technological advancements, robots are still not the most comfortable machines to work with: firstly, because of the separation of the robot and human workspaces, which imposes an additional financial burden; secondly, because of the significant re-programming cost when products change, especially in Small and Medium-sized Enterprises (SMEs). Therefore, there is a significant need to reduce the programming effort required to enable robots to perform various tasks while sharing the same space with a human operator. Hence, the robot must be equipped with cognitive and perceptual capabilities that facilitate human-robot interaction. Humans use various senses, such as vision, smell and taste, to perform tasks. One sense that plays a significant role in human activity is 'touch' or 'force'. For example, holding a cup of tea or making fine adjustments while inserting a key requires haptic information to achieve the task successfully. In all these examples, force and torque data are crucial for the successful completion of the activity. This information also implicitly conveys data about contact force, object stiffness, and many other properties. Hence, a deep understanding of the execution of such events can bridge the gap between humans and robots. This thesis is directed at equipping an industrial robot with the ability to deal with force perception and then to learn force-based tasks using Learning from Demonstration (LfD). To learn force-based tasks using LfD, it is essential to extract task-relevant features from the force information. Knowledge must then be extracted and encoded from the task-relevant features, so that the captured skills can be reproduced in a new scenario. In this thesis, these elements of LfD were achieved using different approaches depending on the demonstrated task, and four robotics problems were addressed within the LfD framework. The first challenge was to filter out the robot's internal forces (irrelevant signals) using a data-driven approach. The second challenge was the recognition of the Contact State (CS) during assembly tasks; to tackle it, a symbol-based approach was proposed in which the force/torque signals recorded during a demonstrated assembly task were encoded as a sequence of symbols. The third challenge was to learn a human-robot co-manipulation task based on LfD; in this case, an ensemble machine learning approach was proposed to capture the skill. The last challenge was to learn an assembly task by demonstration in the presence of geometrical variation of the parts; a new learning approach based on Artificial Potential Fields (APF) was therefore proposed to learn a Peg-in-Hole (PiH) assembly task that includes both contact-free and contact phases. To sum up, this thesis focuses on the use of data-driven approaches to learning force-based tasks in an industrial context. Different machine learning approaches were implemented, developed and evaluated in different scenarios, and their performance was compared with mathematical-modelling-based approaches.
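    The contact-state recognition step can be illustrated with a small sketch that quantizes a force signal into discrete symbols and collapses repeats into a symbol sequence. This is only a generic stand-in for the symbol-based approach described above; the bin edges and the synthetic insertion profile are assumptions made for the example.

```python
# Illustrative sketch (not the thesis' method): encode a demonstrated force signal
# as a sequence of discrete symbols by quantizing each sample, then collapsing
# repeats. Such symbol strings can be matched against templates to recognise
# contact states during an assembly.
import numpy as np

def encode_symbols(ft, bins):
    """Quantize a 1-D force signal into integer symbols using bin edges."""
    return np.digitize(ft, bins)

def collapse(seq):
    """Collapse consecutive repeated symbols into a compact sequence."""
    out = [seq[0]]
    for s in seq[1:]:
        if s != out[-1]:
            out.append(s)
    return out

# Hypothetical insertion force profile: free motion, first contact, sliding, seated.
force_z = np.concatenate([
    np.zeros(50),                                                     # no contact
    np.linspace(0, 8, 30),                                            # approach / first contact
    np.full(40, 8.0) + np.random.default_rng(1).normal(0, 0.3, 40),   # sliding
    np.full(30, 15.0),                                                # fully seated
])
symbols = collapse(encode_symbols(force_z, bins=[1.0, 5.0, 12.0]).tolist())
print("contact-state symbol sequence:", symbols)   # e.g. [0, 1, 2, 3]
```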

    Towards an Architecture for Semiautonomous Robot Telecontrol Systems.

    The design and development of a computational system to support robot–operator collaboration is a challenging task, not only because of the overall system complexity, but also because of the involvement of different technical and scientific disciplines, namely Software Engineering, Psychology and Artificial Intelligence, among others. In our opinion, the approach generally used to face this type of project is based on system architectures inherited from the development of autonomous robots and therefore fails to incorporate the role of the operator explicitly, i.e. these architectures lack a view that helps the operator to see him/herself as an integral part of the system. The goal of this paper is to provide a human-centered paradigm that makes it possible to create this kind of view of the system architecture. This architectural description includes the definition of the role of the operator and of the autonomous behaviour of the robot, it identifies the shared knowledge, and it helps the operator to see the robot as an intentional being like himself/herself.

    Human-Machine Interfaces using Distributed Sensing and Stimulation Systems

    As technology moves towards more natural human-machine interfaces (e.g. bionic limbs, teleoperation, virtual reality), it is necessary to develop a sensory feedback system in order to foster embodiment and achieve better immersion in the control system. Contemporary feedback interfaces presented in research use few sensors and stimulation units and feed back at most two discrete variables (e.g. grasping force and aperture), whereas the human sense of touch relies on a distributed network of mechanoreceptors providing a wide bandwidth of information. To provide this type of feedback, it is necessary to develop a distributed sensing system that can extract a wide range of information during the interaction between the robot and the environment. In addition, a distributed feedback interface is needed to deliver such information to the user. This thesis proposes the development of a distributed sensing system (e-skin) to acquire tactile sensation, a first integration of the distributed sensing system on a robotic hand, the development of a sensory feedback system that comprises the distributed sensing system and a distributed stimulation system, and finally the implementation of deep learning methods for the classification of tactile data. Its core focus is the development and testing of a sensory feedback system based on the latest distributed sensing and stimulation techniques. To this end, the thesis comprises two introductory chapters that describe the state of the art in the field, the objectives, the methodology used and the contributions, as well as six studies that tackled the development of human-machine interfaces.
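    As an illustration of deep-learning classification of tactile data, the sketch below defines a small convolutional network over a 16x16 "taxel" pressure map. The grid size, number of contact classes and layer sizes are assumptions; this is not the network used in the thesis.

```python
# Illustrative sketch (not the thesis' model): a small CNN that classifies a 16x16
# taxel pressure map from a distributed tactile sensor (e-skin) into a handful of
# hypothetical contact classes.
import torch
import torch.nn as nn

class TactileCNN(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 4 * 4, n_classes)

    def forward(self, x):                     # x: (batch, 1, 16, 16) pressure maps
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TactileCNN()
dummy = torch.rand(2, 1, 16, 16)              # two synthetic taxel frames
logits = model(dummy)
print("predicted contact classes:", logits.argmax(dim=1).tolist())
```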

    Image-Guided Robot-Assisted Techniques with Applications in Minimally Invasive Therapy and Cell Biology

    There are several situations where tasks can be performed better robotically than manually, among them situations (a) where high accuracy and robustness are required, (b) where difficult or hazardous working conditions exist, and (c) where very large or very small motions or forces are involved. Recent advances in technology have resulted in smaller robots with higher accuracy and reliability. As a result, robotics is finding more and more applications in Biomedical Engineering. Medical Robotics and Cell Micro-Manipulation are two of these applications, involving interaction with delicate living organs at very different scales. The availability of a wide range of imaging modalities, from ultrasound and X-ray fluoroscopy to high-magnification optical microscopes, makes it possible to use imaging as a powerful means to guide and control robot manipulators. This thesis comprises three parts, each focusing on an application of image-guided robotics in biomedical engineering. Vascular Catheterization: a robotic system was developed to insert a catheter through the vasculature and guide it to a desired point via visual servoing. The system provides shared control with the operator to perform a task semi-automatically or through master-slave control, and allows control of the catheter tip with high accuracy while reducing X-ray exposure to the clinicians and providing a more ergonomic situation for the cardiologists. Cardiac Catheterization: a master-slave robotic system was developed to accurately control a steerable catheter to touch and ablate faulty regions on the inner walls of a beating heart in order to treat arrhythmia. The system facilitates making contact with a target point in a beating heart chamber through master-slave control with coordinated visual feedback. Live Neuron Micro-Manipulation: a microscope image-guided robotic system was developed to provide shared control over multiple micro-manipulators to touch cell membranes in order to perform patch-clamp electrophysiology. In each case, image-guided robot-assisted techniques with master-slave control were implemented to provide shared control between a human operator and a robot. The results show increased accuracy and reduced operation time in all three cases.
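    A minimal sketch of the image-guided control idea follows: a proportional image-based visual-servoing loop that drives a tracked tip toward an operator-selected target in the image. The tracker, target point and gain are hypothetical stand-ins for the imaging and actuation pipeline described above.

```python
# Illustrative sketch (not the systems described above): a proportional image-based
# visual-servoing loop that reduces the pixel error between a tracked tip and a target.
import numpy as np

def visual_servo_step(tip_px, target_px, gain=0.4):
    """Return an image-space velocity command that reduces the pixel error."""
    error = target_px - tip_px
    return gain * error, error

tip = np.array([120.0, 80.0])       # current tip position in the image (pixels)
target = np.array([200.0, 150.0])   # desired position selected by the operator
for i in range(30):
    cmd, err = visual_servo_step(tip, target)
    tip = tip + cmd                  # in a real system this command drives the actuators
    if np.linalg.norm(target - tip) < 1.0:
        break
print(f"stopped after {i + 1} iterations, residual error "
      f"{np.linalg.norm(target - tip):.2f} px")
```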

    Human-Robot Collaborations in Industrial Automation

    Technology is changing the manufacturing world. For example, sensors are being used to track inventories from the manufacturing floor up to a retail shelf or a customer's door. These types of interconnected systems have been called the fourth industrial revolution, also known as Industry 4.0, and are projected to lower manufacturing costs. As industry moves toward these integrated technologies and lower costs, engineers will need to connect these systems via the Internet of Things (IoT). These engineers will also need to design how the connected systems interact with humans. The focus of this Special Issue is the smart sensors used in these human–robot collaborations.

    Haptics: Science, Technology, Applications

    This open access book constitutes the proceedings of the 12th International Conference on Human Haptic Sensing and Touch Enabled Computer Applications, EuroHaptics 2020, held in Leiden, The Netherlands, in September 2020. The 60 papers presented in this volume were carefully reviewed and selected from 111 submissions. They were organized in topical sections on haptic science, haptic technology, and haptic applications. This year's focus is on accessibility.