
    Telerobotic Pointing Gestures Shape Human Spatial Cognition

    This paper explores whether human beings can understand gestures produced by telepresence robots and, if so, derive the meaning conveyed by telerobotic gestures when processing spatial information. We conducted two experiments over Skype. Participants were presented with a robotic interface that had arms, which were teleoperated by an experimenter. The robot could point to virtual locations that represented certain entities. In Experiment 1, the experimenter described the spatial locations of fictitious objects sequentially in two conditions: a speech condition (SO; verbal descriptions clearly indicated the spatial layout) and a speech-and-gesture condition (SR; verbal descriptions were ambiguous but accompanied by robotic pointing gestures). Participants were then asked to recall the objects' spatial locations. We found that the number of spatial locations recalled in the SR condition was on par with that in the SO condition, suggesting that telerobotic pointing gestures compensated for ambiguous speech during the processing of spatial information. In Experiment 2, the experimenter described spatial locations non-sequentially in the SR and SO conditions. Surprisingly, the number of spatial locations recalled in the SR condition was even higher than that in the SO condition, suggesting that telerobotic pointing gestures were more powerful than speech in conveying spatial information when information was presented in an unpredictable order. The findings provide evidence that human beings are able to comprehend telerobotic gestures and, importantly, integrate these gestures with co-occurring speech. This work promotes engaging remote collaboration among humans through a robot intermediary. Comment: 27 pages, 7 figures

    Multimodality with Eye tracking and Haptics: A New Horizon for Serious Games?

    The goal of this review is to illustrate the emerging use of multimodal virtual reality that can benefit learning-based games. The review begins with an introduction to multimodal virtual reality in serious games, followed by a brief discussion of why the cognitive processes involved in learning and training are enhanced in immersive virtual environments. We first outline studies that have used eye tracking and haptic feedback independently in serious games, and then review innovative applications that have combined eye tracking and haptic devices to provide applicable multimodal frameworks for learning-based games. Finally, some general conclusions are identified and clarified in order to advance current understanding of multimodal serious game production and to explore possible areas for new applications.

    Efficiency of speech and iconic gesture integration for robotic and human communicators - a direct comparison

    © 2015 IEEE. Co-verbal gestures are an important part of human communication, improving its efficiency for information conveyance. A key component of such improvement is the observer's ability to integrate information from the two communication channels, speech and gesture. Whether such integration also occurs when the multimodal communication is produced by a humanoid robot, and whether it is as efficient as for a human communicator, is an open question. Here, we present an experiment which, using a fully within-subjects design, shows that for a range of iconic gestures, speech and gesture integration occurs with similar efficiency for human and robot communicators. The gestures for this study were produced on an Aldebaran Robotics NAO robot platform with a Kinect-based teleoperation system. We also show that our system is able to produce a range of iconic gestures that are understood by participants in unimodal (gesture-only) communication, as well as being efficiently integrated with speech. Hence, we demonstrate the utility of iconic gestures for robotic communicators.

    Influence of the shape and mass of a small robot when thrown to a dummy human head

    Social robots have shown some efficacy in assisting children with autism and are now being considered as assistive tools for therapy. The physical proximity of a small companion social robot could become a source of harm to children with autism during aggressive physical interactions. A child exhibiting challenging behaviors could throw a small robot that could harm another child's head upon impact. In this paper, we investigate the effects of the mass and shape of thrown objects, at different impact velocities, on the linear acceleration of a developed dummy head. This dummy head could be the head of another child or a caregiver in the room. A total of 27 main experiments were conducted based on Taguchi's orthogonal array design. The data were then analyzed using ANOVA and signal-to-noise (S/N) ratios. Our results revealed that the two design factors considered (i.e. mass and shape) and the noise factor (i.e. impact velocity) affected the resultant response. Finally, confirmation runs at the optimal identified shape and mass (i.e. a mass of 0.3 kg and a shape of either cube or wedge) showed an overall reduction in the resultant peak linear acceleration of the dummy head as compared to the other conditions. These results have implications for the design and manufacturing of small social robots, whereby minimizing the mass of the robot can help mitigate harm to the head due to impact.
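
    As a rough illustration of how a "smaller-is-better" signal-to-noise ratio is typically computed in Taguchi-style analyses like the one described above, here is a minimal Python sketch; the formula is the standard textbook one, and the acceleration values are hypothetical rather than taken from the paper.

```python
import numpy as np

def sn_smaller_is_better(responses):
    """Smaller-is-better signal-to-noise ratio used in Taguchi analysis:
    S/N = -10 * log10(mean(y^2)). Higher values indicate a lower, more
    robust response across the noise-factor runs."""
    y = np.asarray(responses, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical peak linear accelerations (in g) of the dummy head for one
# mass/shape combination, measured at three impact velocities (the noise factor).
example_runs = [42.0, 55.3, 61.8]
print(f"S/N = {sn_smaller_is_better(example_runs):.2f} dB")
```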

    Haptic Media Scenes

    The aim of this thesis is to apply new media phenomenological and enactive embodied cognition approaches to explain the role of haptic sensitivity and communication in personal computer environments for productivity. Prior theory has given little attention to the role of the haptic senses in influencing cognitive processes, and does not frame the richness of haptic communication in interaction design, as haptic interactivity in HCI has historically tended to be designed and analyzed from a perspective on communication as transmission, the sending and receiving of haptic signals. The haptic sense may mediate not only contact confirmation and affirmation, but also rich semiotic and affective messages; yet there is a strong contrast between this inherent ability of haptic perception and current support for such haptic communication interfaces. I therefore ask: How do the haptic senses (touch and proprioception) impact our cognitive faculties when mediated through digital and sensor technologies? How may these insights be employed in interface design to facilitate rich haptic communication? To answer these questions, I use theoretical close readings that embrace two research fields, new media phenomenology and enactive embodied cognition. The theoretical discussion is supported by neuroscientific evidence, and tested empirically through case studies centered on digital art. I use these insights to develop the concept of the haptic figura, an analytical tool to frame the communicative qualities of haptic media. The concept gauges rich machine-mediated haptic interactivity and communication in systems with a material solution supporting active haptic perception, and the mediation of semiotic and affective messages that are understood and felt. As such, the concept may function as a design tool for developers, but also for media critics evaluating haptic media. The tool is used to frame a discussion on the opportunities and shortcomings of haptic interfaces for productivity, differentiating between media systems for the hand and for the full body. The significance of this investigation lies in demonstrating that haptic communication is an underutilized element in personal computer environments for productivity, and in providing an analytical framework for a more nuanced understanding of haptic communication as enabling the mediation of a range of semiotic and affective messages, beyond notification and confirmation interactivity.

    Human Machine Interfaces for Teleoperators and Virtual Environments

    In March 1990, a meeting was held on the general theme of teleoperation research into virtual environment display technology. This is a collection of conference-related fragments that gives a glimpse of the potential of the following fields and how they interact: sensorimotor performance; human-machine interfaces; teleoperation; virtual environments; performance measurement and evaluation methods; and design principles and predictive models.

    Telepresence and Transgenic Art


    Task Analysis, Modeling, And Automatic Identification Of Elemental Tasks In Robot-Assisted Laparoscopic Surgery

    Robotic microsurgery provides many advantages for surgical operations, including tremor filtration, increased dexterity, and smaller incisions. There is a growing need for task analyses of robotic laparoscopic operations to better understand the tasks involved in robotic microsurgery cases. A few research groups have conducted task observations to help systems automatically identify surgeon skill based on task execution. Their gesture analyses, however, lacked depth, and their class libraries were composed of ambiguous groupings of gestures that did not share contextual similarities. A Hierarchical Task Analysis was performed on a four-throw suturing task using a robotic microsurgical platform. Three skill levels were studied: attending surgeons, residents, and naïve participants. From this task analysis, a subtask library was created. Using the Hierarchical Task Analysis subtask library, a computer system was created that accurately identified surgeon subtasks based on surgeon hand gestures. An automatic classifier was trained on the subtasks identified during the Hierarchical Task Analysis of the four-throw suturing task and the motion signature recorded during task performance. Using principal component analysis and a J48 decision tree classifier, an average individual classification accuracy of 94.56% was achieved. This research lays the foundation for accurate and meaningful autonomous computer assistance in a surgical arena by creating a gesture library from a detailed Hierarchical Task Analysis. The results of this research will improve the surgeon-robot interface and enhance surgical performance. The classes used will eliminate human-machine miscommunication by using an understandable and structured class library based on a Hierarchical Task Analysis. By enabling a robot to understand surgeon actions, intelligent context-based assistance could be provided to the surgeon by the robot. Limitations of this research included the small participant sample size, which resulted in high subtask execution variability. Future work will include a larger participant population to address this limitation. Additionally, a Hidden Markov Model will be incorporated into the classification process to help increase classification accuracy. Finally, a closer investigation of vestigial techniques will be conducted to study the effect of previously learned laparoscopic techniques that are no longer necessary in the robot-assisted laparoscopic surgery arena.
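
    As an illustration of the kind of classification pipeline the abstract describes (dimensionality reduction followed by a decision tree), here is a minimal Python sketch. It uses scikit-learn's CART tree as a rough stand-in for Weka's J48 (C4.5), and the feature matrix, labels, and component count are hypothetical placeholders rather than the study's actual data or settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Hypothetical stand-ins: X holds windowed kinematic features extracted from
# the surgeon's hand-motion signature, y holds the subtask label of each window.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 24))    # 300 windows x 24 motion features
y = rng.integers(0, 6, size=300)  # 6 subtask classes from the HTA library

# PCA for dimensionality reduction, then a decision tree classifier
# (scikit-learn implements CART rather than J48/C4.5).
clf = make_pipeline(PCA(n_components=10), DecisionTreeClassifier(random_state=0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.3f}")
```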