
    Integrating Constrained Experiments in Long-term Human-Robot Interaction using Task– and Scenario–based Prototyping

    © 2015 The Author(s). Published with license by Taylor & Francis. © Dag Sverre Syrdal, Kerstin Dautenhahn, Kheng Lee Koay, and Wan Ching Ho. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0/). In order to investigate how the use of robots may impact everyday tasks, 12 participants interacted with a University of Hertfordshire Sunflower robot over a period of 8 weeks in the university’s Robot House. Participants performed two constrained tasks, one physical and one cognitive, 4 times over this period. Participant responses were recorded using a variety of measures, including the System Usability Scale and the NASA Task Load Index. The use of the robot affected the participants’ experienced workload differently for the two tasks, and this effect changed over time. In the physical task, there was evidence of adaptation to the robot’s behaviour. For the cognitive task, the use of the robot was experienced as more frustrating in the later weeks. Peer reviewed. Final Published version.

    Towards long-term social child-robot interaction: using multi-activity switching to engage young users

    Social robots have the potential to provide support in a number of practical domains, such as learning and behaviour change. This potential is particularly relevant for children, who have proven receptive to interactions with social robots. To reach learning and therapeutic goals, a number of issues need to be investigated, notably the design of an effective child-robot interaction (cHRI) to ensure the child remains engaged in the relationship and that educational goals are met. Typically, current cHRI research experiments focus on a single type of interaction activity (e.g. a game). However, these can suffer from a lack of adaptation to the child, or from an increasingly repetitive nature of the activity and interaction. In this paper, we motivate and propose a practicable solution to this issue: an adaptive robot able to switch between multiple activities within single interactions. We describe a system that embodies this idea, and present a case study in which diabetic children collaboratively learn with the robot about various aspects of managing their condition. We demonstrate the ability of our system to induce a varied interaction and show the potential of this approach both as an educational tool and as a research method for long-term cHRI.

    Views from within a narrative: Evaluating long-term human-robot interaction in a naturalistic environment using open-ended scenarios

    Open Access. This article is distributed under the terms of the Creative Commons Attribution License, which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited. Date of acceptance: 16/06/2014. This article describes the prototyping of human–robot interactions in the University of Hertfordshire (UH) Robot House. Twelve participants took part in a long-term study in which they interacted with robots in the UH Robot House once a week for a period of 10 weeks. A prototyping method using the narrative framing technique allowed participants to engage with the robots in episodic interactions that were framed using narrative to convey the impression of a continuous long-term interaction. The goal was to examine how participants responded to the scenarios and the robots, as well as to specific robot behaviours such as agent migration and expressive behaviours. Evaluations of the robots and the scenarios were elicited using several measures, including the standardised System Usability Scale, an ad hoc Scenario Acceptance Scale, single-item Likert scales, open-ended questionnaire items and a debriefing interview. Results suggest that participants felt that the use of this prototyping technique gave them insight into the use of the robot, and that they accepted the use of the robot within the scenario. Peer reviewed.

    Robotic Assisted Fracture Surgery


    A Midsummer Night’s Dream (with flying robots)

    Seven flying robot “fairies” joined human actors in the Texas A&M production of William Shakespeare’s A Midsummer Night’s Dream. The production was a collaboration between the departments of Computer Science and Engineering, Electrical and Computer Engineering, and Theater Arts. The collaboration was motivated by two assertions. First, that the performing arts have principles for creating believable agents that will transfer to robots. Second, that the theater is a natural testbed for evaluating the response of untrained human groups (both actors and the audience) to robots interacting with humans in shared spaces, i.e., were believable agents created? The production used two types of unmanned aerial vehicles: an AirRobot 100-b quadrotor platform about the size of a large pizza pan, and six E-flite Blade MCX palm-sized toy helicopters. The robots were used as alter egos for fairies in the play; the robots did not replace any actors, but were instead paired with them. The insertion of robots into the production was not widely advertised, so the audience was the typical theatergoing demographic, not one consisting of people solely interested in technology. The use of radio-controlled unmanned aerial vehicles provides insights into what types of autonomy are needed to create appropriate affective interactions with untrained human groups. The observations from the four weeks of practice and eight performances contribute (1) a taxonomy and methods for creating affect exchanges between robots and untrained human groups, (2) the importance of improvisation within robot theater, (3) insights into how untrained human groups form expectations about robots, and (4) awareness of the importance of safety and reliability as a design constraint for public engagement with robot platforms. The taxonomy captures that apparent affect can be created without explicit affective behaviors by the robot, but requires talented actors to convey the situation or express reactions.
The audience’s response to robot crashes was a function of whether they had the opportunity to observe how the actors reacted to robot crashes on stage, suggesting that pre-existing expectations must be taken into account in the design of autonomy. Furthermore, it appears that the public expect robots to be more reliable (an expectation of consumer product hardening) and safer (an expectation from product liability) than current capabilities allow, and this may be a major challenge or even legal barrier for introducing robots into shared public spaces. These contributions are expected to inform design strategies for increasing public engagement with robot platforms through affect, and show the value of arts-based approaches to public encounters with robots, both for generating design strategies and for evaluation.

    A gaze-contingent framework for perceptually-enabled applications in healthcare

    Patient safety and quality of care remain the focus of the smart operating room of the future. Some of the most influential detrimental factors relate to suboptimal communication among the staff, poor flow of information, staff workload and fatigue, and ergonomics and sterility in the operating room. While technological developments constantly transform the operating room layout and the interaction between surgical staff and machinery, a vast array of opportunities arises for the design of systems and approaches that can enhance patient safety and improve workflow and efficiency. The aim of this research is to develop a real-time gaze-contingent framework towards a "smart" operating suite that will enhance the operator's ergonomics by allowing perceptually-enabled, touchless and natural interaction with the environment. The main feature of the proposed framework is the ability to acquire and utilise the plethora of information provided by the human visual system to allow touchless interaction with medical devices in the operating room. In this thesis, a gaze-guided robotic scrub nurse, a gaze-controlled robotised flexible endoscope and a gaze-guided assistive robotic system are proposed. Firstly, the gaze-guided robotic scrub nurse is presented: surgical teams performed a simulated surgical task with the assistance of a robot scrub nurse, which complements the human scrub nurse in the delivery of surgical instruments, following gaze selection by the surgeon. Then, the gaze-controlled robotised flexible endoscope is introduced: experienced endoscopists and novice users performed a simulated examination of the upper gastrointestinal tract using predominantly their natural gaze. Finally, a gaze-guided assistive robotic system is presented, which aims to facilitate activities of daily living.
The results of this work provide valuable insights into the feasibility of integrating the developed gaze-contingent framework into clinical practice without significant workflow disruptions. Open Access.
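    Dwell-time triggering is one common way gaze-contingent interfaces turn raw gaze samples into discrete selections, such as the instrument selection described above. A minimal sketch of the idea, assuming rectangular screen regions, a fixed dwell threshold and (x, y, timestamp) gaze samples; none of these details are taken from the thesis:

    ```python
    import time

    class DwellSelector:
        """Fires a selection when gaze stays inside one region long enough."""

        def __init__(self, regions, dwell=0.8):
            self.regions = regions      # name -> (x0, y0, x1, y1), an assumed layout
            self.dwell = dwell          # assumed dwell threshold in seconds
            self._current = None        # region currently being fixated
            self._since = 0.0           # timestamp at which the fixation began

        def _hit(self, x, y):
            # Return the name of the region containing the gaze point, if any.
            for name, (x0, y0, x1, y1) in self.regions.items():
                if x0 <= x <= x1 and y0 <= y <= y1:
                    return name
            return None

        def update(self, x, y, t=None):
            """Feed one gaze sample; returns a region name when a selection fires."""
            t = time.monotonic() if t is None else t
            region = self._hit(x, y)
            if region != self._current:
                # Gaze moved to a new region (or off all regions): restart the timer.
                self._current, self._since = region, t
                return None
            if region is not None and t - self._since >= self.dwell:
                self._since = t  # re-arm so the same fixation does not fire repeatedly
                return region
            return None
    ```

    Feeding the selector a stream of samples yields a selection only after the gaze has rested in one region for the full dwell period, which is what makes the interaction touchless yet deliberate.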

    Augmented reality for computer assisted orthopaedic surgery

    In recent years, computer-assistance and robotics have established their presence in operating theatres and found success in orthopaedic procedures. Benefits of computer assisted orthopaedic surgery (CAOS) have been thoroughly explored in research, finding improvements in clinical outcomes through increased control and precision over surgical actions. However, human-computer interaction in CAOS remains an evolving field, through emerging display technologies including augmented reality (AR) – a fused view of the real environment with virtual, computer-generated holograms. Interactions between clinicians and patient-specific data generated during CAOS are limited to basic 2D interactions on touchscreen monitors, potentially creating clutter and cognitive challenges in surgery. Work described in this thesis sought to explore the benefits of AR in CAOS through: an integration between commercially available AR and CAOS systems, creating a novel AR-centric surgical workflow to support various tasks of computer-assisted knee arthroplasty; and three pre-clinical studies exploring the impact of the new AR workflow on both existing and newly proposed quantitative and qualitative performance metrics. Early research focused on cloning the (2D) user-interface of an existing CAOS system onto a virtual AR screen and investigating any resulting impacts on usability and performance. An infrared-based registration system is also presented, describing a protocol for calibrating commercial AR headsets with optical trackers, calculating a spatial transformation between surgical and holographic coordinate frames. The main contribution of this thesis is a novel AR workflow designed to support computer-assisted patellofemoral arthroplasty. The reported workflow provided 3D in-situ holographic guidance for CAOS tasks including patient registration, pre-operative planning, and assisted-cutting.
Pre-clinical experimental validation on a commercial system (NAVIO®, Smith & Nephew) for these contributions demonstrates encouraging early-stage results, showing successful deployment of AR to CAOS systems and promising indications that AR can enhance the clinician’s interactions in the future. The thesis concludes with a summary of achievements, corresponding limitations and future research opportunities. Open Access.
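    The calibration step described above amounts to estimating a rigid transform between two coordinate frames from paired fiducial measurements. One standard way to do this is the Kabsch/SVD method; the sketch below is illustrative (the function name and point data are assumptions, and the thesis's actual protocol is not reproduced here):

    ```python
    import numpy as np

    def rigid_transform(src, dst):
        """Return R (3x3) and t (3,) such that dst ≈ R @ src_i + t, in a
        least-squares sense, given paired 3D points from the two frames."""
        src, dst = np.asarray(src, float), np.asarray(dst, float)
        cs, cd = src.mean(axis=0), dst.mean(axis=0)      # centroids of each cloud
        H = (src - cs).T @ (dst - cd)                    # cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = cd - R @ cs
        return R, t
    ```

    Given fiducials measured once by the optical tracker and once in the headset's holographic frame, the recovered (R, t) maps any tracked point into the display's coordinate system, which is the essence of the spatial transformation mentioned above.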

    Contextualized Robot Navigation

    In order to improve the interaction between humans and robots, robots need to be able to move about in a way that is appropriate to the complex environments around them. One way to investigate how the robots should move is through the lens of theatre, which provides us with ways to analyze the robot's movements and the motivations for moving in particular ways. In particular, this has proven useful for improving robot navigation. By altering the costmaps used for path planning, robots can navigate around their environment in ways that incorporate additional contexts. Experimental results with user studies have shown altered costmaps to have a significant effect on the interaction, although the costmaps must be carefully tuned to get the desired effect. The new layered costmap algorithm builds on the established open-source navigation platform, creating a robust system that can be extended to handle a wide range of contextual situations.
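    The layered-costmap idea can be sketched in a few lines: each layer writes its costs into a shared master grid and can only raise a cell's cost, so contextual layers compose predictably on top of the static map. The toy illustration below makes simplifying assumptions (a sparse dictionary grid, square "personal space" regions, invented layer names); the actual ROS navigation implementation is considerably more involved:

    ```python
    LETHAL = 254  # conventional "definitely in collision" cost value

    class StaticLayer:
        """Marks known obstacles from a prebuilt map."""
        def __init__(self, obstacles):
            self.obstacles = obstacles          # set of (x, y) cells
        def update(self, master):
            for cell in self.obstacles:
                master[cell] = max(master.get(cell, 0), LETHAL)

    class ProxemicLayer:
        """Raises cost near a person so plans keep out of personal space."""
        def __init__(self, person, radius=2, cost=100):
            self.person, self.radius, self.cost = person, radius, cost
        def update(self, master):
            px, py = self.person
            for x in range(px - self.radius, px + self.radius + 1):
                for y in range(py - self.radius, py + self.radius + 1):
                    # Take the max so this layer never lowers an existing cost.
                    master[(x, y)] = max(master.get((x, y), 0), self.cost)

    def build_costmap(layers):
        """Run every layer over a shared master grid and return it."""
        master = {}                             # sparse grid: (x, y) -> cost
        for layer in layers:
            layer.update(master)
        return master
    ```

    Because every layer only raises costs, adding a new contextual layer (a doorway layer, a conversation-space layer) cannot accidentally erase an obstacle another layer has marked, which is what makes the approach extensible.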