85 research outputs found

    Telerobotic Pointing Gestures Shape Human Spatial Cognition

    Full text link
    This paper explored whether human beings can understand gestures produced by telepresence robots and, if so, derive the meaning conveyed by telerobotic gestures when processing spatial information. We conducted two experiments over Skype. Participants were presented with a robotic interface that had arms, which were teleoperated by an experimenter. The robot could point to virtual locations that represented certain entities. In Experiment 1, the experimenter described the spatial locations of fictitious objects sequentially in two conditions: a speech condition (SO, verbal descriptions clearly indicated the spatial layout) and a speech-and-gesture condition (SR, verbal descriptions were ambiguous but accompanied by robotic pointing gestures). Participants were then asked to recall the objects' spatial locations. We found that the number of spatial locations recalled in the SR condition was on par with that in the SO condition, suggesting that telerobotic pointing gestures compensated for ambiguous speech during the processing of spatial information. In Experiment 2, the experimenter described spatial locations non-sequentially in the SR and SO conditions. Surprisingly, the number of spatial locations recalled in the SR condition was even higher than that in the SO condition, suggesting that telerobotic pointing gestures were more powerful than speech in conveying spatial information when information was presented in an unpredictable order. The findings provide evidence that human beings are able to comprehend telerobotic gestures and, importantly, integrate these gestures with co-occurring speech. This work promotes engaging remote collaboration among humans through a robot intermediary. Comment: 27 pages, 7 figures.

    Multimodality with Eye tracking and Haptics: A New Horizon for Serious Games?

    Get PDF
    The goal of this review is to illustrate the emerging use of multimodal virtual reality that can benefit learning-based games. The review begins with an introduction to multimodal virtual reality in serious games, followed by a brief discussion of why cognitive processes involved in learning and training are enhanced in immersive virtual environments. We first outline studies that have used eye tracking and haptic feedback independently in serious games, and then review some innovative applications that have combined eye tracking and haptic devices in order to provide applicable multimodal frameworks for learning-based games. Finally, some general conclusions are identified and clarified in order to advance current understanding of multimodal serious game production and to explore possible areas for new applications.

    Efficiency of speech and iconic gesture integration for robotic and human communicators - a direct comparison

    Get PDF
    © 2015 IEEE. Co-verbal gestures are an important part of human communication, improving its efficiency for information conveyance. A key component of such improvement is the observer's ability to integrate information from the two communication channels, speech and gesture. Whether such integration also occurs when the multi-modal communication is produced by a humanoid robot, and whether it is as efficient as for a human communicator, is an open question. Here, we present an experiment which, using a fully within-subjects design, shows that for a range of iconic gestures, speech and gesture integration occurs with similar efficiency for human and robot communicators. The gestures for this study were produced on an Aldebaran Robotics NAO robot platform with a Kinect-based tele-operation system. We also show that our system is able to produce a range of iconic gestures that are understood by participants in unimodal (gesture-only) communication, as well as being efficiently integrated with speech. Hence, we demonstrate the utility of iconic gestures for robotic communicators.
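
    As a rough illustration of the kind of joint retargeting a Kinect-based tele-operation system performs, here is a minimal sketch that derives an elbow flexion angle from three tracked 3-D joint positions and clamps it to an assumed robot joint range. The joint limits and function names are illustrative placeholders, not the actual NAO control pipeline used in the study.

    ```python
    # Hypothetical Kinect-to-robot retargeting step: compute the elbow
    # angle from tracked shoulder/elbow/wrist positions, then clamp it
    # into an assumed robot joint range before sending it as a command.
    import numpy as np

    def elbow_angle(shoulder, elbow, wrist):
        """Elbow flexion angle (radians) from three 3-D joint positions."""
        u = np.asarray(shoulder) - np.asarray(elbow)
        v = np.asarray(wrist) - np.asarray(elbow)
        cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return float(np.arccos(np.clip(cos_a, -1.0, 1.0)))

    def to_joint_command(angle, lo=0.03, hi=1.54):
        """Clamp to an assumed safe joint range (limits are placeholders)."""
        return min(max(angle, lo), hi)

    # Example with made-up tracker coordinates (metres):
    a = elbow_angle([0.0, 0.0, 0.0], [0.25, 0.0, 0.0], [0.25, -0.2, 0.1])
    print(to_joint_command(a))
    ```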

    Securing teleoperated robot: Classifying human operator identity and emotion through motion-controlled robotic behaviors

    Get PDF
    Teleoperated robotic systems allow human operators to control robots from a distance, which mitigates the constraints of physical distance between operators and robots and offers invaluable applications in the real world. However, the security of these systems is a critical concern. System attacks and the potential impact of operators' inappropriate emotions can result in misbehavior of the remote robots, which poses risks to the remote environment. These concerns become particularly serious when performing mission-critical tasks, such as nuclear cleaning. This thesis explored innovative security methods for teleoperated robotic systems. Common security methods for teleoperated robots include encryption, robot misbehavior detection, and user authentication, but each has limitations in this setting: encryption adds communication overhead, misbehavior detection can only flag unusual signals on robot devices, and user authentication secures the system primarily at the access point. To address this, we built motion-controlled robot platforms that allow for robot teleoperation and proposed methods for performing user classification directly on remote-controlled robotic behavioral data, enhancing security integrity throughout the operation. Chapter 3 presents four experiments. Experiments 1 and 2 demonstrated the effectiveness of our approach, achieving user classification accuracies of 95% and 93% on the two platforms, respectively, using motion-controlled robotic end-effector trajectories. Experiment 3 further indicated that control system performance directly impacts user classification efficacy. Additionally, in Experiment 4 we deployed an AI agent to protect user biometric identities, ensuring the robot's actions do not compromise user privacy in the remote environment. This chapter provided the methodological and experimental foundation for the subsequent work. Operators' emotions can also pose a security threat to the robot system: a remote operator's emotions can significantly impact the resulting robot motions, leading to unexpected consequences, even when the user follows protocol and performs permitted tasks. The recognition of an operator's emotions in remote robot control scenarios is, however, under-explored. Emotion signals mainly comprise physiological signals, semantic information, facial expressions, and bodily movements. However, most physiological signals are electrical and vulnerable to motion artifacts, which prevents accurate acquisition and makes them unsuitable for teleoperated robot systems. Semantic information and facial expressions are not always accessible, raise serious privacy issues, and require additional sensors in the teleoperated systems. Chapter 4 proposes methods for emotion recognition through motion-controlled robotic behaviors. This work demonstrated for the first time that a motion-controlled robotic arm can inherit human operators' emotions and that these emotions can be classified from robotic end-effector trajectories, achieving 83.3% accuracy. We developed two emotion recognition algorithms using Dynamic Time Warping (DTW) and a Convolutional Neural Network (CNN), deriving unique emotional features from the avatar's end-effector motions and joint spatial-temporal characteristics. Additionally, we demonstrated through direct comparison that our approach is more appropriate for motion-based telerobotic applications than traditional ECG-based methods. Furthermore, we discussed the implications of this system for prominent current and future remote robot operations and emotional robotic contexts. By integrating user classification and emotion recognition into teleoperated robotic systems, this thesis lays the groundwork for a new security paradigm that enhances the safety of remote operations. Recognizing users and their emotions allows for more contextually appropriate robot responses, potentially preventing harm and improving the overall quality of teleoperated interactions. These advancements contribute significantly to the development of more adaptive, intuitive, and human-centered HRI applications, setting a precedent for future research in the field.
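
    As a minimal sketch of the trajectory-based classification idea described above, the snippet below implements plain Dynamic Time Warping with a 1-nearest-neighbour decision rule over (x, y, z) end-effector sequences. The trajectory format, label set, and decision rule are illustrative assumptions; the thesis's actual DTW features and CNN pipeline are not reproduced here.

    ```python
    # Illustrative 1-NN classification of end-effector trajectories under
    # Dynamic Time Warping (DTW); applicable to identity or emotion labels,
    # assuming trajectories are (T, 3) arrays of x/y/z samples.
    import numpy as np

    def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
        """Classic DTW with Euclidean step cost between two (T, 3) arrays."""
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(a[i - 1] - b[j - 1])
                cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                     cost[i, j - 1],      # deletion
                                     cost[i - 1, j - 1])  # match
        return float(cost[n, m])

    def classify(query: np.ndarray, templates: list) -> str:
        """Return the label of the nearest (trajectory, label) template."""
        return min(templates, key=lambda t: dtw_distance(query, t[0]))[1]
    ```

    In practice richer spatio-temporal features would be derived per the thesis, but the nearest-template rule above conveys the core matching step.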

    Influence of the shape and mass of a small robot when thrown to a dummy human head

    Get PDF
    Social robots have shown some efficacy in assisting children with autism and are now being considered as assistive tools for therapy. The physical proximity of a small companion social robot could become a source of harm to children with autism during aggressive physical interactions. A child exhibiting challenging behaviors could throw a small robot that could harm another child's head upon impact. In this paper, we investigate the effects of the mass and shape of thrown objects, at different impact velocities, on the linear acceleration of a developed dummy head. This dummy head could be the head of another child or a caregiver in the room. A total of 27 main experiments were conducted based on Taguchi's orthogonal array design. The data were then analyzed using ANOVA and signal-to-noise (S/N) ratio analysis. Our results revealed that the two design factors considered (i.e. mass and shape) and the noise factor (i.e. impact velocity) affected the resultant response. Finally, confirmation runs at the optimal identified shape and mass (i.e. a mass of 0.3 kg and a shape of either cube or wedge) showed an overall reduction in the resultant peak linear acceleration of the dummy head as compared to the other conditions. These results have implications for the design and manufacturing of small social robots, whereby minimizing the mass of the robots can aid in mitigating harm to the head due to impact.
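
    For readers unfamiliar with the Taguchi analysis mentioned above, here is a hedged sketch of the smaller-the-better signal-to-noise ratio, the variant that fits an objective of minimizing peak head acceleration; the exact S/N formulation used in the paper is an assumption, and the sample values are invented.

    ```python
    # Taguchi smaller-the-better S/N ratio: S/N = -10 * log10(mean(y^2)).
    # A larger (less negative) S/N means lower and more consistent peak
    # accelerations across the noise factor (impact velocity).
    import numpy as np

    def sn_smaller_the_better(y):
        return -10.0 * np.log10(np.mean(np.square(np.asarray(y))))

    # Hypothetical peak linear accelerations (g) for one mass/shape setting
    # measured at three impact velocities:
    print(sn_smaller_the_better([42.0, 55.0, 61.0]))  # about -34.5 dB
    ```

    Comparing such S/N values across the 27 orthogonal-array runs is what identifies the most robust mass/shape setting.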

    Haptic Media Scenes

    Get PDF
    The aim of this thesis is to apply new media phenomenological and enactive embodied cognition approaches to explain the role of haptic sensitivity and communication in personal computer environments for productivity. Prior theory has given little attention to the role of the haptic senses in influencing cognitive processes, and does not frame the richness of haptic communication in interaction design, as haptic interactivity in HCI has historically tended to be designed and analyzed from a perspective on communication as transmission: the sending and receiving of haptic signals. The haptic sense may mediate not only contact confirmation and affirmation, but also rich semiotic and affective messages; yet there is a strong contrast between this inherent ability of haptic perception and present-day support for such haptic communication interfaces. I therefore ask: How do the haptic senses (touch and proprioception) impact our cognitive faculty when mediated through digital and sensor technologies? How may these insights be employed in interface design to facilitate rich haptic communication? To answer these questions, I use theoretical close readings that embrace two research fields, new media phenomenology and enactive embodied cognition. The theoretical discussion is supported by neuroscientific evidence and tested empirically through case studies centered on digital art. I use these insights to develop the concept of the haptic figura, an analytical tool to frame the communicative qualities of haptic media. The concept gauges rich machine-mediated haptic interactivity and communication in systems with a material solution supporting active haptic perception and the mediation of semiotic and affective messages that are understood and felt. As such, the concept may function as a design tool for developers, but also for media critics evaluating haptic media. The tool is used to frame a discussion of the opportunities and shortcomings of haptic interfaces for productivity, differentiating between media systems for the hand and for the full body. The significance of this investigation lies in demonstrating that haptic communication is an underutilized element in personal computer environments for productivity, and in providing an analytical framework for a more nuanced understanding of haptic communication as enabling the mediation of a range of semiotic and affective messages, beyond notification and confirmation interactivity.

    Immersive Robotic Telepresence for Remote Educational Scenarios

    Get PDF
    Social robots have an enormous potential for educational applications and allow for cognitive outcomes that are similar to those with human involvement. Remotely controlling a social robot to interact with students and peers in an immersive fashion opens up new possibilities for instructors and learners alike. Using immersive approaches can promote engagement and have beneficial effects on remote lesson delivery and participation. However, the performance and power consumption associated with the involved devices are often not sufficiently contemplated, despite being particularly important in light of sustainability considerations. The contributions of this research are thus twofold. On the one hand, we present telepresence solutions for a social robot's location-independent operation using (a) a virtual reality headset with controllers and (b) a mobile augmented reality application. On the other hand, we perform a thorough analysis of their power consumption and system performance, discussing the impact of employing the various technologies. Using the QTrobot as a platform, direct and immersive control via different interaction modes, including motion, emotion, and voice output, is possible. By focusing not on individual subsystems or motor chains but on the cumulative energy consumption of an unaltered robot performing remote tasks, this research provides orientation regarding the actual cost of deploying immersive robotic telepresence solutions.
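
    As a minimal sketch of the kind of cumulative-energy analysis described above, the snippet below integrates sampled power draw over a remote-operation session; the sampling setup, function name, and figures are assumptions for illustration rather than the paper's measurement pipeline.

    ```python
    # Trapezoidal integration of power samples (W) into session energy (Wh),
    # the whole-robot quantity this kind of analysis reports.
    import numpy as np

    def session_energy_wh(power_w, dt_s):
        p = np.asarray(power_w, dtype=float)
        joules = float(np.sum((p[:-1] + p[1:]) * 0.5 * dt_s))  # trapezoid rule
        return joules / 3600.0

    # e.g. one reading per second over a 10-minute lesson segment at a
    # constant (hypothetical) 28.5 W draw:
    print(session_energy_wh(np.full(600, 28.5), 1.0))  # about 4.74 Wh
    ```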

    Human Machine Interfaces for Teleoperators and Virtual Environments

    Get PDF
    In March 1990, a meeting was held around the general theme of teleoperation research into virtual environment display technology. This is a collection of conference-related fragments that gives a glimpse of the potential of the following fields and how they interplay: sensorimotor performance; human-machine interfaces; teleoperation; virtual environments; performance measurement and evaluation methods; and design principles and predictive models.