9 research outputs found

    Immersive Robotic Telepresence for Remote Educational Scenarios

    Get PDF
    Social robots have an enormous potential for educational applications and allow for cognitive outcomes similar to those achieved with human involvement. Remotely controlling a social robot to interact with students and peers in an immersive fashion opens up new possibilities for instructors and learners alike. Immersive approaches can promote engagement and have beneficial effects on remote lesson delivery and participation. However, the performance and power consumption of the devices involved are often not sufficiently considered, despite being particularly important in light of sustainability concerns. The contributions of this research are thus twofold. On the one hand, we present telepresence solutions for a social robot’s location-independent operation using (a) a virtual reality headset with controllers and (b) a mobile augmented reality application. On the other hand, we perform a thorough analysis of their power consumption and system performance, discussing the impact of employing the various technologies. Using the QTrobot as a platform, direct and immersive control via different interaction modes, including motion, emotion, and voice output, is possible. By focusing not on individual subsystems or motor chains but on the cumulative energy consumption of an unaltered robot performing remote tasks, this research provides orientation regarding the actual cost of deploying immersive robotic telepresence solutions.
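    The cumulative-energy perspective the abstract describes can be illustrated with a small sketch: sample the robot's total power draw at fixed intervals while a remote task runs, then integrate over time. The sampler callable and the numbers below are placeholders, not the paper's actual instrumentation.

```python
import time
from typing import Callable

def accumulate_energy(read_power_watts: Callable[[], float],
                      duration_s: float,
                      interval_s: float = 1.0) -> float:
    """Integrate sampled power over time (trapezoidal rule); returns Wh."""
    energy_j = 0.0                        # joules = watt-seconds
    prev = read_power_watts()
    steps = int(duration_s / interval_s)
    for _ in range(steps):
        time.sleep(interval_s)
        cur = read_power_watts()
        energy_j += (prev + cur) / 2.0 * interval_s
        prev = cur
    return energy_j / 3600.0              # joules -> watt-hours

# Dummy sampler standing in for a real power meter:
if __name__ == "__main__":
    wh = accumulate_energy(lambda: 35.0, duration_s=5, interval_s=1)
    print(f"{wh:.4f} Wh")                 # ~0.0486 Wh at a constant 35 W
```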

    MEMS sensor controlled haptic forefinger robotic aid

    Get PDF
    The ability to feel the world through the tools we hold is haptic touch. The concept of sensory elements transforming information into a touch experience by interacting with things remotely is both motivating and challenging. This paper deals with the design and implementation of a forefinger-direction-based robot for physically challenged people, which follows the direction of the forefinger. The path of the robot may be either point-to-point or continuous. A MEMS sensor detects the direction of the forefinger, and its output is transmitted via an RF transmitter to the receiver unit. At the receiver, the RF receiver picks up the corresponding signal and commands the microcontroller to move the robot in that direction. The design of the system comprises a microcontroller, a MEMS sensor, and RF technology. The robot receives its commands from the MEMS sensor placed on the forefinger at the transmitter section, yielding a simple control mechanism. Experimental results for the forefinger-based directional robot are presented.
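    A minimal sketch of the transmitter-side logic, assuming the MEMS sensor is a tilt-sensitive accelerometer (a common choice for such designs); the thresholds and command names are illustrative, not taken from the paper.

```python
def direction_from_tilt(ax_g: float, ay_g: float, thresh: float = 0.3) -> str:
    """Map accelerometer x/y readings (in g) to a movement command.

    The thresholded command would then be sent over the RF link to the
    receiver-side microcontroller driving the robot.
    """
    if ax_g > thresh:
        return "RIGHT"
    if ax_g < -thresh:
        return "LEFT"
    if ay_g > thresh:
        return "FORWARD"
    if ay_g < -thresh:
        return "BACKWARD"
    return "STOP"                          # finger level: no movement

# e.g. direction_from_tilt(0.05, 0.62) -> "FORWARD"
```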

    The More I Understand it, the Less I Like it: The Relationship Between Understandability and Godspeed Scores for Robotic Gestures

    Get PDF
    This work investigates the relationship between the perception people develop of a robot and the understandability of the gestures it displays. The experiments involved 30 human observers who rated 45 robotic gestures in terms of the Godspeed dimensions. At the same time, the observers assigned a score to 10 possible interpretations (the same interpretations for all gestures). The results show a statistically significant correlation between the understandability of the gestures, measured through an information-theoretic approach, and all Godspeed scores. However, the correlation is positive in some cases (Anthropomorphism, Animacy and Perceived Intelligence) but negative in others (Perceived Safety and Likeability). In other words, higher understandability is not necessarily associated with more positive perceptions.
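    The abstract does not spell out the information-theoretic measure, but one plausible reading can be sketched: treat each gesture's scores over the 10 candidate interpretations as a probability distribution, take low entropy (a peaked distribution) to indicate high understandability, and correlate that with a Godspeed dimension. All data below are toy values.

```python
import numpy as np
from scipy.stats import entropy, spearmanr

def understandability(scores: np.ndarray) -> float:
    """scores: nonnegative ratings over the candidate interpretations."""
    p = scores / scores.sum()
    # Max entropy is log(k); subtracting makes peaked distributions score high.
    return np.log(len(p)) - entropy(p)

# Toy data: 45 gestures x 10 interpretations, one Godspeed score per gesture.
rng = np.random.default_rng(0)
ratings = rng.random((45, 10)) + 0.01
godspeed = rng.random(45) * 4 + 1          # scores on a 1-5 scale
u = np.array([understandability(r) for r in ratings])
rho, pval = spearmanr(u, godspeed)
print(f"Spearman rho={rho:.2f}, p={pval:.3f}")
```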

    Efficiency of speech and iconic gesture integration for robotic and human communicators - a direct comparison

    Get PDF
    © 2015 IEEE. Co-verbal gestures are an important part of human communication, improving its efficiency for information conveyance. A key component of this improvement is the observer's ability to integrate information from the two communication channels, speech and gesture. Whether such integration also occurs when the multi-modal communication is produced by a humanoid robot, and whether it is as efficient as for a human communicator, is an open question. Here, we present an experiment which, using a fully within-subjects design, shows that for a range of iconic gestures, speech and gesture integration occurs with similar efficiency for human and robot communicators. The gestures for this study were produced on an Aldebaran Robotics NAO robot platform with a Kinect-based tele-operation system. We also show that our system is able to produce a range of iconic gestures that are understood by participants in unimodal (gesture-only) communication, as well as being efficiently integrated with speech. Hence, we demonstrate the utility of iconic gestures for robotic communicators.
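    A hedged sketch of the kind of within-subjects comparison such a study entails: compute, per communicator type, the comprehension gain of speech-plus-gesture over speech alone. The accuracies below are toy numbers, not the paper's results.

```python
import numpy as np

def integration_gain(multimodal_acc: np.ndarray,
                     speech_only_acc: np.ndarray) -> float:
    """Mean per-participant gain of speech+gesture over speech alone."""
    return float(np.mean(multimodal_acc - speech_only_acc))

# Toy within-subjects data: each entry is one participant's accuracy.
human = {"speech": np.array([0.70, 0.65, 0.72]),
         "both":   np.array([0.88, 0.80, 0.85])}
robot = {"speech": np.array([0.68, 0.66, 0.70]),
         "both":   np.array([0.84, 0.79, 0.86])}

print("human gain:", integration_gain(human["both"], human["speech"]))
print("robot gain:", integration_gain(robot["both"], robot["speech"]))
```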

    Language and cognition: an architecture of attributes in human-machine communication for generating empathy

    Get PDF
    More and more experts speak of the "Robot Society". Machines of this kind are increasingly present in our personal and professional lives, and they are also arriving, inevitably, in the world of journalism. Agencies such as Associated Press and outlets such as Forbes, the Los Angeles Times, ProPublica and the Finnish public broadcaster YLE are already using robots for automated content generation. But merely using artificial intelligence to generate content does not necessarily meet the expectations of professionals or audiences, a question that has so far not been addressed with the dedication that such an important aspect of robot design requires. In this research, we consider the possible future creation of a virtual news presenter, capable of gathering current events and "serving" them in the form of news. In short, if we had to choose a robot to deliver the news and media content to us, what characteristics should it have? Can we design and implement a cybernetic "Susana Grisso" or "Iñaki Gabilondo", capable of generating engagement from their audiences? Our analysis seeks to define the main attributes of the model of robot-human communication, through the example of cinema. The main objective is to identify which of those attributes are relevant to audiences, so that audience engagement with the machine and perceived empathy are greater.

    Self-adaptive structure semi-supervised methods for streamed emblematic gestures

    Get PDF
    Although many researchers are trying to improve the level of machine intelligence, there is still a long way to go before machines reach intelligence similar to that of humans. Scientists and engineers continuously try to increase the smartness of modern technology, e.g. smartphones and robots. Humans communicate with each other using voice and gestures; gestures are therefore essential for transferring information to a partner. To reach a higher level of intelligence, a machine should learn from and react to human gestures, which means learning from continuously streamed gestures. This task faces serious challenges, since processing streamed data suffers from several problems: besides being unlabelled, the stream is long, and "concept drift" and "concept evolution" are its main difficulties. Stream data have further problems worth mentioning: they change dynamically, are presented only once, arrive at high speed, and are non-linearly distributed. In addition to these general problems of data streams, gestures bring additional ones; for example, different techniques are required to handle the variety of gesture types. Available methods solve some of these problems individually, whereas we present a technique that solves them altogether.

    Unlabelled data may carry additional information that describes the labelled data more precisely, so semi-supervised learning is used to handle labelled and unlabelled data together. However, the data size increases continuously, which makes training classifiers hard; we therefore integrate incremental learning with semi-supervised learning, enabling the model to update itself on new data without needing the old data. We additionally integrate incremental class learning within the semi-supervised learning, since new concepts are highly likely to appear in streamed gestures. Moreover, the system should be able to distinguish among different concepts and to identify random movements. We therefore integrate novelty detection to distinguish gestures belonging to known concepts from those belonging to unknown ones. Extreme value theory is used for this purpose; it removes the need for additional labelled data to set the novelty threshold and has several other supportive features. Clustering algorithms are used to distinguish among different new concepts and to identify random movements. Furthermore, the system should update itself only on trusted assignments, since updating the classifier on a wrongly assigned gesture degrades performance; we therefore propose confidence measures for the assigned labels.

    We propose six types of semi-supervised algorithms that rely on different techniques to handle different types of gestures. The proposed classifiers are based on the Parzen window classifier, the support vector machine, a neural network (extreme learning machine), the polynomial classifier, the Mahalanobis classifier, and the nearest class mean classifier, all equipped with the features mentioned above. Additionally, we present a wrapper method that uses one of the proposed classifiers, or an ensemble of them, to autonomously issue new labels for new concepts and to update the classifiers on newly incoming information depending on whether it belongs to known or new classes. It can recognise different novel concepts and also identify random movements.

    To evaluate the system, we acquired gesture data with nine different gesture classes, each representing a different command to the machine, e.g. come, go, etc. The data were collected using the Microsoft Kinect sensor and contain 2878 gestures performed by ten volunteers. Different sets of features are computed and used in the evaluation. Additionally, we used real, synthetic and public data to support the evaluation process. All the features, incremental learning, incremental class learning, and novelty detection are evaluated individually, and the outputs of the classifiers are compared with the original classifier or with benchmark classifiers. The results show the high performance of the proposed algorithms.
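    Two of the ingredients described above, incremental updating without stored history and distance-based novelty detection, can be sketched with a nearest class mean classifier (one of the six base classifiers named). Note that the thesis sets its novelty threshold via extreme value theory; the fixed threshold below is a deliberate simplification for illustration.

```python
import numpy as np

class IncrementalNCM:
    """Nearest class mean classifier with incremental updates."""

    def __init__(self, novelty_threshold: float):
        self.means = {}                    # label -> running mean vector
        self.counts = {}                   # label -> number of samples seen
        self.threshold = novelty_threshold

    def predict(self, x: np.ndarray):
        """Return (label, is_novel). Novel if far from every class mean."""
        if not self.means:
            return None, True
        dists = {c: np.linalg.norm(x - m) for c, m in self.means.items()}
        label = min(dists, key=dists.get)
        return label, dists[label] > self.threshold

    def update(self, x: np.ndarray, label):
        """Incremental mean update: needs only the count, not old data."""
        n = self.counts.get(label, 0)
        m = self.means.get(label, np.zeros_like(x, dtype=float))
        self.means[label] = (m * n + x) / (n + 1)
        self.counts[label] = n + 1

# Usage: update on trusted assignments, flag far-away gestures as novel.
clf = IncrementalNCM(novelty_threshold=2.0)
clf.update(np.array([0.0, 0.0]), "come")
clf.update(np.array([5.0, 5.0]), "go")
print(clf.predict(np.array([0.2, -0.1])))   # ('come', False)
print(clf.predict(np.array([20.0, 20.0])))  # ('go', True) -> novel
```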

    Human-recognizable robotic gestures

    No full text
    DOI: 10.1109/TAMD.2012.2208962. IEEE Transactions on Autonomous Mental Development, 4(4), 305-314.

    Erratum: Human-recognizable robotic gestures (IEEE Transactions on Autonomous Mental Development (2012) 4:4 (305-314))

    No full text
    DOI: 10.1109/TAMD.2013.2251711. IEEE Transactions on Autonomous Mental Development, 5(1), 85.