
    Using the Waseda Bioinstrumentation System WB-1R to analyze Surgeon’s performance during laparoscopy - towards the development of a global performance index -

    Minimally invasive surgery (MIS) has become very common in recent years, thanks to the many advantages it provides for patients. Since it is difficult for surgeons to learn and master this technique, several training methods and metrics have been proposed, both to improve the surgeon's abilities and to assess his/her skills. This paper presents the use of the WB-1R (Waseda Bioinstrumentation system no. 1 Refined), which was developed at Waseda University, Tokyo, to investigate and analyze a surgeon's movements and performance. Specifically, the system can measure the movements of the head, the arms, and the hands, as well as several physiological parameters. In this paper we present our experiment to evaluate a surgeon's ability to handle surgical instruments and his/her depth perception using a laparoscopic view. Our preliminary analysis of a subset of the acquired data (i.e. comfort of the subjects, the amount of time it took to complete each exercise, and respiration) clearly shows that the expert surgeon and the group of medical students perform very differently. Therefore, WB-1R (or, better, a newer version tailored specifically for use in the operating room) could provide important additional information to help assess the experience and performance of surgeons, thus leading to the development of a Global Performance Index for surgeons during MIS. These analyses and models are, moreover, an important step towards the automation and robotic assistance of surgical gestures.

    Construction of a memory model using neural networks and its application to a humanoid robot

    Degree system: new; Ministry of Education report number: Kou 2424; Degree type: Doctor of Engineering; Date conferred: 2007/3/1; Waseda degree record number: Shin 451

    Applications of Affective Computing in Human-Robot Interaction: state-of-art and challenges for manufacturing

    The introduction of collaborative robots aims to make production more flexible, promoting greater interaction between humans and robots, also from a physical point of view. However, working closely with a robot may lead to the creation of stressful situations for the operator, which can negatively affect task performance. In Human-Robot Interaction (HRI), robots are expected to be socially intelligent, i.e., capable of understanding and reacting appropriately to human social and affective cues. This ability can be exploited by implementing affective computing, which concerns the development of systems able to recognize, interpret, process, and simulate human affects. Social intelligence is essential for robots to establish a natural interaction with people in several contexts, including the manufacturing sector with the emergence of Industry 5.0. In order to take full advantage of human-robot collaboration, the robotic system should be able to perceive the psycho-emotional and mental state of the operator through different sensing modalities (e.g., facial expressions, body language, voice, or physiological signals) and to adapt its behaviour accordingly. The development of socially intelligent collaborative robots in the manufacturing sector can lead to a symbiotic human-robot collaboration, raising several research challenges that still need to be addressed. The goals of this paper are the following: (i) providing an overview of affective computing implementation in HRI; (ii) analyzing the state of the art on this topic in different application contexts (e.g., healthcare, service applications, and manufacturing); (iii) highlighting research challenges for the manufacturing sector.
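The adaptation loop the abstract describes — sense the operator's state, then adjust robot behaviour — can be sketched minimally. Everything here (the heart-rate-based stress estimate, the thresholds, the speed-scaling rule) is an illustrative assumption, not the paper's method:

```python
# Hypothetical sketch: a collaborative robot that slows down as the
# operator's estimated stress rises. Sensing model and constants are
# illustrative assumptions only.

def estimate_stress(heart_rate_bpm: float, baseline_bpm: float) -> float:
    """Map heart-rate elevation above a resting baseline to a 0..1 score."""
    elevation = max(0.0, heart_rate_bpm - baseline_bpm)
    return min(1.0, elevation / 40.0)  # saturate at +40 bpm above baseline

def adapt_robot_speed(nominal_speed: float, stress: float,
                      min_factor: float = 0.4) -> float:
    """Scale the robot's nominal speed down linearly with stress."""
    factor = 1.0 - (1.0 - min_factor) * stress
    return nominal_speed * factor

# Operator at 95 bpm with a 70 bpm baseline -> moderate stress,
# so the robot runs below its nominal 0.5 m/s.
speed = adapt_robot_speed(nominal_speed=0.5,
                          stress=estimate_stress(95, 70))
```

A real system would replace the single heart-rate feature with the multimodal cues the paper lists (facial expressions, body language, voice, physiological signals).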

    Accessible Integration of Physiological Adaptation in Human-Robot Interaction

    Technological advancements in creating and commercializing novel unobtrusive wearable physiological sensors have generated new opportunities to develop adaptive human-robot interaction (HRI). Detecting complex human states such as engagement and stress when interacting with social agents could bring numerous advantages to creating meaningful interactive experiences. Bodily signals have classically been used for post-interaction analysis in HRI. Despite this, real-time measurements of autonomic responses have been used in other research domains to develop physiologically adaptive systems with great success: increasing user experience and task performance while reducing cognitive workload. This thesis presents the HRI Physio Lib, a conceptual framework and open-source software library to facilitate the development of physiologically adaptive HRI scenarios. Both the framework and the architecture of the library are described in depth, along with descriptions of additional software tools that were developed to make the inclusion of physiological signals easier for robotics frameworks. The framework is structured around four main components for designing physiologically adaptive experimental scenarios: signal acquisition; processing and analysis; social robot and communication; and scenario and adaptation. Open-source software tools have been developed to assist in the individual creation of each described component. To showcase our framework and test the software library, we developed, as a proof of concept, a simple scenario revolving around a physiologically aware exercise coach that modulates the speed and intensity of the activity to promote effective cardiorespiratory exercise. We employed the socially assistive QT robot for our exercise scenario, as it provides a comprehensive ROS interface, making prototyping of behavioral responses fast and simple. Our exercise routine was designed following guidelines by the American College of Sports Medicine.
    We describe our physiologically adaptive algorithm and propose a second, alternative one with stochastic elements. Finally, a discussion of other HRI domains where the addition of a physiologically adaptive mechanism could result in novel advances in interaction quality is provided as future extensions of this work. From the literature, we identified improving engagement, providing deeper social connections, health-care scenarios, and applications for self-driving vehicles as promising avenues for future research where a physiologically adaptive social robot could improve user experience.
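The exercise-coach idea — keep the user's heart rate inside a target cardiorespiratory zone by nudging intensity up or down — can be sketched as a simple feedback loop. The zone bounds (64–76% of an estimated HRmax from the common 220 − age rule of thumb) and the step size are illustrative assumptions, not the thesis's actual algorithm:

```python
# Illustrative sketch of a physiologically adaptive exercise loop:
# raise intensity when heart rate is below the target zone, lower it
# when above, hold it when inside. Constants are assumptions.

def target_zone(age: int, low: float = 0.64, high: float = 0.76):
    """Moderate-intensity heart-rate zone from the 220 - age HRmax estimate."""
    hr_max = 220 - age
    return low * hr_max, high * hr_max

def adjust_intensity(intensity: float, heart_rate: float,
                     zone: tuple, step: float = 0.05) -> float:
    """One control step: move intensity toward the target zone, clamped to 0..1."""
    low, high = zone
    if heart_rate < low:
        intensity += step
    elif heart_rate > high:
        intensity -= step
    return max(0.0, min(1.0, intensity))

zone = target_zone(30)                    # roughly 122..144 bpm
new = adjust_intensity(0.5, 110, zone)    # below zone -> intensity increases
```

In the thesis's architecture this logic would sit in the "scenario and adaptation" component, fed by the signal acquisition and processing components.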

    Using reinforcement learning for optimizing the reproduction of tasks in robot programming by demonstration

    As robots start pervading human environments, the need for new interfaces that would simplify human-robot interaction has become more pressing. Robot Programming by Demonstration (RbD) develops intuitive ways of programming robots, taking inspiration from strategies used by humans to transmit knowledge to apprentices. The user-friendliness of RbD is meant to allow lay users with no prior knowledge of computer science, electronics, or mechanics to train robots to accomplish tasks the same way as they would with a co-worker. When a trainer teaches a task to a robot, he/she shows a particular way of fulfilling the task. For a robot to be able to learn from observing the trainer, it must be able to learn what the task entails (i.e. answer the so-called "What-to-imitate?" question) by inferring the user's intentions. But most importantly, the robot must be able to adapt its own controller to best fit the demonstration (the so-called "How-to-imitate?" question) despite different setups and embodiments. The latter is the question that interested us in this thesis. It relates to the problem of optimizing the reproduction of the task under environmental constraints. The "How-to-imitate?" question is subdivided into two problems. The first problem, also known as the "correspondence problem", relates to resolving the discrepancies between the human demonstrator's body and the robot's body that prevent the robot from producing an identical reproduction of the task. Even though we simplified the problem by considering solely humanoid platforms, that is, platforms that have a joint configuration similar to that of the human, discrepancies in the number of degrees of freedom and range of motion remained. We resolved these by exploiting the redundant information conveyed through the demonstrations, collecting data in different frames of reference.
    By exploiting these redundancies in an algorithm comparable to the damped least squares algorithm, we are able to reproduce a trajectory that minimizes the error between the desired trajectory and the reproduced trajectory across each frame of reference. The second problem consists in reproducing a trajectory in an unknown setup while respecting the task constraints learned during training. When the information learned from the demonstration no longer suffices to generalize the task constraints to a new setup, the robot must re-learn the task, this time through trial and error. Here we considered trial-and-error learning as a complement to RbD. By adding a trial-and-error module to the original Imitation Learning algorithm, the robot can find a solution that is more adapted to the context and to its embodiment than the solution found using RbD alone. Specifically, we compared Reinforcement Learning (RL) to other classical optimization techniques. We show that the system is advantageous in that: a) learning is more robust to unexpected events that have not been encountered during the demonstrations, and b) the robot is able to optimize its own model of the task according to its own embodiment.
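The damped least squares step the abstract alludes to has a standard closed form: given a task Jacobian J and a task-space error e, the joint update dq = Jᵀ(JJᵀ + λ²I)⁻¹e trades tracking accuracy against step size and stays stable near singularities. A minimal sketch with toy values (the thesis's actual algorithm only resembles this; the matrices and damping factor below are assumptions):

```python
# Minimal damped least squares (DLS) step, the standard technique the
# abstract's algorithm is compared to. Toy Jacobian and error values.
import numpy as np

def dls_step(J: np.ndarray, e: np.ndarray, lam: float = 0.1) -> np.ndarray:
    """dq = J^T (J J^T + lam^2 I)^{-1} e  -- damped pseudoinverse update."""
    m = J.shape[0]
    return J.T @ np.linalg.solve(J @ J.T + (lam ** 2) * np.eye(m), e)

# Toy 2-DOF example: as lam -> 0 this approaches the plain
# pseudoinverse solution; larger lam shrinks the step.
J = np.array([[1.0, 0.5],
              [0.0, 1.0]])
e = np.array([0.2, -0.1])
dq = dls_step(J, e)
```

The damping term λ²I is what makes the update well behaved when JJᵀ is ill-conditioned, which is exactly the regime where embodiment discrepancies between demonstrator and robot bite.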