206 research outputs found

    Early Turn-taking Prediction with Spiking Neural Networks for Human Robot Collaboration

    Turn-taking is essential to the structure of human teamwork. Humans are typically aware of team members' intention to keep or relinquish their turn before a turn switch, where the responsibility for working on a shared task is handed over. Future co-robots are expected to provide the same competence. To that end, this paper proposes the Cognitive Turn-taking Model (CTTM), which leverages cognitive models (i.e., Spiking Neural Networks) to achieve early turn-taking prediction. The CTTM framework can process multimodal human communication cues (both implicit and explicit) and predict human turn-taking intentions at an early stage. The proposed framework is tested on a simulated surgical procedure, in which a robotic scrub nurse predicts the surgeon's turn-taking intention. The proposed CTTM framework was found to outperform state-of-the-art turn-taking prediction algorithms by a large margin. It also outperforms humans when presented with partial observations of communication cues (i.e., less than 40% of the full action). This early prediction capability enables robots to initiate turn-taking actions at an early stage, which facilitates collaboration and increases overall efficiency. Comment: Submitted to the IEEE International Conference on Robotics and Automation (ICRA) 201
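The abstract does not detail the CTTM internals, but the core idea of accumulating multimodal cue evidence in a spiking-neuron style until a turn-switch prediction fires before the full action completes can be sketched as a leaky integrate-and-fire accumulator. The function name, leak constant and threshold below are illustrative assumptions, not taken from the paper:

```python
def lif_accumulate(cue_stream, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire style evidence accumulator.

    cue_stream: per-timestep evidence values in [0, 1] derived from
    multimodal cues (e.g., gaze shift, hand retraction, speech onset).
    Returns the index of the first timestep at which the accumulated
    potential crosses `threshold` (a "spike", i.e., a predicted turn
    switch), or None if the threshold is never reached.
    """
    potential = 0.0
    for t, cue in enumerate(cue_stream):
        potential = leak * potential + cue  # integrate new cue, decay old evidence
        if potential >= threshold:
            return t  # fire early, before the observed action completes
    return None
```

With a strong run of cues such as `[0.6, 0.6, 0.6]` the accumulator fires at the second sample, while sparse, weak cues never trigger a prediction; this mirrors the early-prediction behaviour the abstract describes, though the real CTTM operates on far richer multimodal inputs.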

    A gaze-contingent framework for perceptually-enabled applications in healthcare

    Patient safety and quality of care remain the focus of the smart operating room of the future. Some of the most influential factors with a detrimental effect are related to suboptimal communication among the staff, poor flow of information, staff workload and fatigue, and ergonomics and sterility in the operating room. While technological developments constantly transform the operating room layout and the interaction between surgical staff and machinery, a vast array of opportunities arises for the design of systems and approaches that can enhance patient safety and improve workflow and efficiency. The aim of this research is to develop a real-time gaze-contingent framework for a "smart" operating suite that will enhance operator ergonomics by allowing perceptually-enabled, touchless and natural interaction with the environment. The main feature of the proposed framework is the ability to acquire and utilise the plethora of information provided by the human visual system to allow touchless interaction with medical devices in the operating room. In this thesis, a gaze-guided robotic scrub nurse, a gaze-controlled robotised flexible endoscope and a gaze-guided assistive robotic system are proposed. Firstly, the gaze-guided robotic scrub nurse is presented: surgical teams performed a simulated surgical task with the assistance of a robotic scrub nurse, which complements the human scrub nurse in the delivery of surgical instruments, following gaze selection by the surgeon. Then, the gaze-controlled robotised flexible endoscope is introduced: experienced endoscopists and novice users performed a simulated examination of the upper gastrointestinal tract using predominantly their natural gaze. Finally, a gaze-guided assistive robotic system is presented, which aims to facilitate activities of daily living.
The results of this work provide valuable insights into the feasibility of integrating the developed gaze-contingent framework into clinical practice without significant workflow disruptions.

    Robotics and IoT: Interdisciplinary Applied Research in the RIoT Zone

    Robotics and the Internet of Things are intrinsically multi-disciplinary subjects that investigate the interaction between the physical and the cyber worlds and how they impact society. As a result, they not only demand careful consideration of digital and analog technologies, but also of the human element. The "RIoT Zone" brings together disparate people and ideas to address a human-centric form of intelligence we call "intuitive autonomy". This talk will describe human/robot interaction and the programming of robots by human demonstration from the perspectives of Engineering Technology, Computer Information Technology, Industrial Engineering and Psychology.

    Operating at a Distance: How a Teleoperated Surgical Robot Reconfigures Teamwork in the Operating Room

    This paper investigates how a teleoperated surgical robot reconfigures teamwork in the operating room by spatially redistributing team members. We report on findings from two years of fieldwork at two hospitals, including interviews and video data. We find that while in non-robotic cases team members huddle together, physically touching, the introduction of a surgical robot increases physical and sensory distance between team members. This spatial rearrangement has implications for both cognitive and affective dimensions of collaborative surgical work. Cognitive distance is increased, necessitating new efforts to maintain situation awareness and common ground. Moreover, affective distance is introduced, decreasing sensitivity to shared and non-shared affective states and leading to new practices aimed at restoring affective connection within the team. We describe new forms of physical, cognitive, and affective distance associated with teleoperated robotic surgery, and the effects these have on power distribution, practice, and collaborative experience within the surgical team.

    An eye-tracking based robotic scrub nurse: proof of concept

    Background: Within surgery, assistive robotic devices (ARD) have been reported to improve patient outcomes. ARD can offer the surgical team a "third hand" to perform wider tasks and provide more degrees of motion in comparison with conventional laparoscopy. We test an eye-tracking based robotic scrub nurse (RSN) in a simulated operating room, based on a novel real-time framework for theatre-wide 3D gaze localization in a mobile fashion. Methods: Surgeons performed segmental resection of pig colon and handsewn end-to-end anastomosis while wearing eye-tracking glasses (ETG) assisted by distributed RGB-D motion sensors. To select instruments, surgeons (ST) fixed their gaze on a screen, initiating the RSN to pick up and transfer the item. Comparison was made between the task performed with the assistance of a human scrub nurse (HSNt) and with the assistance of a robotic and human scrub nurse (R&HSNt). Task load (NASA-TLX), technology acceptance (Van der Laan's), metric data on performance and team communication were measured. Results: Overall, 10 ST participated. NASA-TLX feedback for ST on HSNt vs R&HSNt usage revealed no significant difference in mental, physical or temporal demands and no change in task performance. ST reported a significantly higher frustration score with R&HSNt. Van der Laan's scores showed positive usefulness and satisfaction scores for the RSN. No significant difference in operating time was observed. Conclusions: We report initial findings of our eye-tracking based RSN, which enables mobile, unrestricted, hands-free human–robot interaction intra-operatively. Importantly, this platform is deemed non-inferior to HSNt and was accepted by ST and HSN test users.
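The abstract does not specify how a gaze fixation on the screen triggers the robot's pick-up action. A common mechanism in gaze-contingent interfaces is dwell-time selection: an instrument is selected once the gaze stays inside its screen region for a minimum number of consecutive samples. The sketch below illustrates that idea; the region layout, dwell length and names are illustrative assumptions, not details from the paper:

```python
def dwell_select(gaze_points, regions, dwell_samples=30):
    """Select an instrument once gaze dwells in its screen region.

    gaze_points: iterable of (x, y) gaze samples from the eye tracker.
    regions: dict mapping instrument name -> (x0, y0, x1, y1) rectangle.
    dwell_samples: consecutive in-region samples needed to trigger
    (e.g., 30 samples at a 30 Hz tracker = a one-second dwell).
    Returns the first selected instrument name, or None.
    """
    current, count = None, 0
    for x, y in gaze_points:
        hit = None
        for name, (x0, y0, x1, y1) in regions.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                hit = name
                break
        if hit is not None and hit == current:
            count += 1  # gaze is still dwelling on the same region
        else:
            # gaze moved to a new region (or off all regions): restart the dwell
            current, count = hit, (1 if hit is not None else 0)
        if current is not None and count >= dwell_samples:
            return current
    return None
```

Requiring consecutive samples makes the trigger robust to brief glances across a region, which matters in an operating room where the surgeon's gaze sweeps the screen between instrument selections.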

    Embodied interaction with visualization and spatial navigation in time-sensitive scenarios

    Paraphrasing the theory of embodied cognition, all aspects of our cognition are determined primarily by contextual information and the means of physical interaction with data and information. In hybrid human-machine systems involving complex decision making, continuously maintaining a high level of attention while employing a deep understanding of the task performed, as well as its context, is essential. Utilizing embodied interaction to interact with machines has the potential to promote thinking and learning, according to the theory of embodied cognition proposed by Lakoff. Additionally, a hybrid human-machine system utilizing natural and intuitive communication channels (e.g., gestures, speech, and body stances) should afford an array of cognitive benefits outstripping more static forms of interaction (e.g., a computer keyboard). This research proposes such a computational framework based on a Bayesian approach; the framework infers the operator's focus of attention from the operator's physical expressions. Specifically, this work aims to assess the effect of embodied interaction on attention during the solution of complex, time-sensitive, spatial navigational problems. Toward the goal of assessing the level of the operator's attention, we present a method linking the operator's interaction utility, inference, and reasoning. The level of attention was inferred through networks coined Bayesian Attentional Networks (BANs). BANs are structures describing cause-effect relationships between the operator's attention, physical actions and decision-making. The proposed framework also generated a representative BAN, called the Consensus (Majority) Model (CMM); the CMM consists of an iteratively derived and agreed graph among candidate BANs obtained from experts and from the automatic learning process. Finally, the best combinations of interaction modalities and feedback were determined by the use of particular utility functions.
This methodology was applied to a spatial navigational scenario, wherein the operators interacted with dynamic images through a series of decision-making processes. Real-world experiments were conducted to assess the framework's ability to infer the operator's levels of attention. Users were instructed to complete a series of spatial-navigational tasks using an assigned pairing of an interaction modality out of five categories (vision-based gesture, glove-based gesture, speech, feet, or body balance) and a feedback modality out of two (visual-based or auditory-based). Experimental results confirmed that physical expressions are a determining factor in the quality of the solutions to a spatial navigational problem. Moreover, it was found that the combination of foot gestures with visual feedback resulted in the best task performance (p < .001). Results also showed that the embodied interaction-based multimodal interface decreased execution errors in the cyber-physical scenarios (p < .001). We therefore conclude that appropriate use of interaction and feedback modalities allows operators to maintain their focus of attention, reduce errors, and enhance task performance in solving decision-making problems.
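The BAN structures themselves are not given in the abstract, but the underlying inference step, updating a belief about the operator's attention from an observed physical action, reduces to a single Bayesian update in the simplest one-cue case. The sketch below shows that update; all probability values and parameter names are illustrative assumptions:

```python
def posterior_attention(prior_attentive, p_cue_given_attentive,
                        p_cue_given_distracted, cue_observed):
    """P(attentive | cue) via Bayes' rule for one binary physical cue.

    prior_attentive: prior probability the operator is attentive.
    p_cue_given_attentive / p_cue_given_distracted: likelihoods of
    observing the cue (e.g., a deliberate foot gesture) in each state.
    cue_observed: whether the cue was seen at this timestep.
    """
    if cue_observed:
        num = prior_attentive * p_cue_given_attentive
        den = num + (1 - prior_attentive) * p_cue_given_distracted
    else:
        num = prior_attentive * (1 - p_cue_given_attentive)
        den = num + (1 - prior_attentive) * (1 - p_cue_given_distracted)
    return num / den
```

Feeding the posterior back in as the next prior chains the update over a stream of cues; a full BAN generalizes this to a graph of cause-effect links between attention, physical actions and decision-making, as the abstract describes.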

    A realist process evaluation of robot-assisted surgery: integration into routine practice and impacts on communication, collaboration and decision-making

    Background: The implementation of robot-assisted surgery (RAS) can be challenging, with reports of surgical robots being underused. This raises questions about differences compared with open and laparoscopic surgery and how best to integrate RAS into practice. Objectives: To (1) contribute to the reporting of the ROLARR (RObotic versus LAparoscopic Resection for Rectal cancer) trial by investigating how variations in the implementation of RAS and its context impact outcomes; (2) produce guidance on factors likely to facilitate successful implementation; (3) produce guidance on how to ensure effective teamwork; and (4) provide data to inform the development of tools for RAS. Design: Realist process evaluation alongside ROLARR. Phase 1 – a literature review identified theories concerning how RAS becomes embedded into practice and impacts teamwork and decision-making. These were refined through interviews with theatre teams across nine NHS trusts. Phase 2 – a multisite case study was conducted across four trusts to test the theories. Data were collected using observation, video recording, interviews and questionnaires. Phase 3 – interviews were conducted in other surgical disciplines to assess the generalisability of the findings. Findings: The introduction of RAS is surgeon led but dependent on support at multiple levels. There is significant variation in the training provided to theatre teams. Contextual factors supporting the integration of RAS include the provision of whole-team training, the presence of handpicked dedicated teams and the availability of suitably sized operating theatres. RAS introduces challenges for teamwork that can impact operation duration, but, over time, teams develop strategies to overcome these challenges. Working with an experienced assistant supports teamwork, but experience of the procedure is insufficient for competence in RAS, and experienced scrub practitioners are important in supporting inexperienced assistants.
RAS can result in reduced distraction and increased concentration for the surgeon when he or she is supported by an experienced assistant or scrub practitioner. Conclusions: Our research suggests a need to pay greater attention to the training and skill mix of the team. To support effective teamwork, our research suggests that it is beneficial for surgeons to (1) encourage the team to communicate actions and concerns; (2) alert the attention of the assistant before issuing a request; and (3) acknowledge the scrub practitioner's role in supporting inexperienced assistants. It is beneficial for the team to provide oral responses to the surgeon's requests. Limitations: This study started after the trial, limiting its impact on analysis of the trial. The small number of operations observed may mean that less frequent impacts of RAS were missed. Future work: Future research should include (1) exploring the transferability of guidance for effective teamwork to other surgical domains in which technology leads to the physical or perceptual separation of surgeon and team; (2) exploring the benefits and challenges of including realist methods in feasibility and pilot studies; (3) assessing the feasibility of using routine data to understand the impact of RAS on rare end points associated with patient safety; (4) developing and evaluating methods for whole-team training; and (5) evaluating the impact of different physical configurations of the robotic console and team members on teamwork. Funding: National Institute for Health Research (NIHR).

    Eliciting context-mechanism-outcome configurations: Experiences from a realist evaluation investigating the impact of robotic surgery on teamwork in the operating theatre

    This article recounts our experience of eliciting, cataloguing and prioritizing conjectured Context-Mechanism-Outcome configurations at the outset of a realist evaluation, to provide new insight into how Context-Mechanism-Outcome configurations can be generated and theorized. Our construction of Context-Mechanism-Outcome configurations centred on how, why and in what circumstances teamwork was impacted by robotic surgery, rather than on how and why this technology improved surgical outcomes as intended. We found that, as well as offering resources, robotic surgery took resources away from the theatre team, by physically reconfiguring the operating theatre and redistributing the surgical task load, essentially changing the context in which teamwork was performed. We constructed Context-Mechanism-Outcome configurations that explain how teamwork mechanisms were both constrained by the contextual changes and triggered in the new context through the use of informal strategies. We conclude by reflecting on our application of realist evaluation to understand the potential impacts of robotic surgery on teamwork.