51 research outputs found

    A truly human interface: interacting face-to-face with someone whose words are determined by a computer program

    We use speech shadowing to create situations wherein people converse in person with a human whose words are determined by a conversational agent computer program. Speech shadowing involves a person (the shadower) repeating vocal stimuli originating from a separate communication source in real time. Humans shadowing for conversational agent sources (e.g., chat bots) become hybrid agents (“echoborgs”) capable of face-to-face interlocution. We report three studies that investigated people’s experiences interacting with echoborgs and the extent to which echoborgs pass as autonomous humans. First, participants in a Turing Test spoke with a chat bot via either a text interface or an echoborg. Human shadowing did not improve the chat bot’s chance of passing but did increase interrogators’ ratings of how human-like the chat bot seemed. In our second study, participants had to decide whether their interlocutor produced words generated by a chat bot or simply pretended to be one. Compared to those who engaged a text interface, participants who engaged an echoborg were more likely to perceive their interlocutor as pretending to be a chat bot. In our third study, participants were naïve to the fact that their interlocutor produced words generated by a chat bot. Unlike those who engaged a text interface, the vast majority of participants who engaged an echoborg did not sense a robotic interaction. These findings have implications for android science, the Turing Test paradigm, and human–computer interaction. The human body, as the delivery mechanism of communication, fundamentally alters the social psychological dynamics of interactions with machine intelligence.

    Robotic Embodiment Developing a System for and Applications with Full Body Ownership of a Humanoid Robot

    It has been shown that with appropriate multisensory stimulation an illusion of owning an artificial object as part of one's own body can be induced in people. Such body ownership illusions have been shown to occur with artificial limbs, such as rubber hands, and even entire artificial or virtual bodies. Although extensive research has been carried out regarding full body ownership illusions with mannequins and virtual bodies, few studies exist that apply this concept to humanoid robots. On the other hand, extensive research has been carried out with robots in terms of telepresence and remote manipulation of the robot, known as teleoperation. Combining these concepts would give rise to a highly immersive, embodied experience in a humanoid robot located at a remote physical location, which holds great potential in terms of real-world applications. In this thesis, we aim to apply this phenomenon of full body ownership illusions in the context of humanoid robots, and to develop real-world applications where this technology could be beneficial. More specifically, by relying on knowledge gained from previous studies regarding body ownership illusions, we investigated whether it is possible to elicit this illusion with a humanoid robot. In addition, we developed a system in the context of telepresence robots, where the participant is embodied in a humanoid robot that is present in a different physical location, and can use this robotic body to interact with the remote environment. To test the functionality of the system and to gain an understanding of body ownership illusions with robots, we carried out two experimental studies and one case-study of a demonstration of the system as a real-world application. In the Brain-Computer Interface versus Eye Tracker study, we used our system to investigate whether it was possible to induce a full body ownership illusion over a humanoid robot with a highly ‘robotic’ appearance.
In addition, we compared two different abstract methods of control, a Steady-State Visually Evoked Potential (SSVEP) based Brain-Computer Interface and eye-tracking, in an immersive environment to drive the robot. This was done mainly as a motivation for developing a prototype of a system that could be used by disabled patients. Our results showed that a feeling of body ownership illusion and agency can be induced, even though the postures of the participant and the embodied robot were incongruent (the participant was sitting, while the robot was standing). Additionally, both BCI and eye tracking were reported to be suitable methods of control, although the degree of body ownership illusion was influenced by the control method, with higher scores of ownership reported for the BCI condition. In the Tele-Immersive Journalism case study, we used the same system as above, but with the added capability of letting the participant control the robot body by moving their own body. Since in this case we provided synchronous visuomotor correlations with the robotic body, we expected this to result in an even higher level of body ownership illusion. By making the robot body the source of their associated sensations we simulate a type of virtual teleportation. We applied this system successfully to the context of journalism, where a journalist could be embodied in a humanoid robot located in a remote destination and carry out interviews through their robotic body. We provide a case-study where the system was used by several journalists to report news about the system itself as well as for reporting other stories. In the Multi-Destination Beaming study, we extended the functionality of the system to include three destinations. The aim of the study was to investigate whether participants could cope with being in three places at the same time, and embodied in three different surrogate bodies.
We had two physical destinations with one robot in each, and a third virtual destination where the participant would be embodied in a virtual body. The results indicate that the system was physically and psychologically comfortable, and was rated highly by participants in terms of usability in the real world. Additionally, high feelings of body ownership illusion and agency were reported, which were not influenced by the robot type. This provides us with clues regarding body ownership illusion with humanoid robots of different dimensions, along with insight about self-localisation and multilocation. Overall, our results show that it is possible to elicit a full body ownership illusion over humanoid robotic bodies. The studies presented here advance the current theoretical framework of body representation, agency and self-perception by providing information about various factors that may affect the illusion of body ownership, such as a highly robotic appearance of the artificial body, having indirect methods of control, or even being simultaneously embodied in three different bodies. Additionally, the setup described can also be used to great effect for highly immersive remote robotic embodiment applications, such as the one demonstrated here in the field of journalism.
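The SSVEP-based control mentioned above amounts to a frequency-tagging classifier: each robot command is associated with a visual stimulus flickering at a distinct rate, and the controller issues the command whose frequency dominates the EEG spectrum. A minimal single-channel sketch of that idea, assuming hypothetical flicker frequencies and command names (the abstract does not state the actual ones, and real systems often use multi-channel methods such as canonical correlation analysis):

```python
import numpy as np

# Hypothetical flicker frequencies (Hz) mapped to robot commands;
# the frequencies and commands used in the study are not stated.
COMMANDS = {7.5: "walk_forward", 10.0: "turn_left", 12.0: "turn_right"}

def classify_ssvep(eeg: np.ndarray, fs: float) -> str:
    """Pick the command whose flicker frequency carries the most
    spectral power in a single-channel EEG epoch."""
    spectrum = np.abs(np.fft.rfft(eeg - eeg.mean())) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)

    def power_at(f: float) -> float:
        # Sum power in a narrow band around the stimulus frequency.
        band = (freqs > f - 0.25) & (freqs < f + 0.25)
        return spectrum[band].sum()

    return COMMANDS[max(COMMANDS, key=power_at)]

# Synthetic 4 s epoch: a 10 Hz oscillation buried in noise.
fs = 250.0
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
epoch = np.sin(2 * np.pi * 10.0 * t) + 0.5 * rng.standard_normal(t.size)
print(classify_ssvep(epoch, fs))  # prints: turn_left
```

A longer epoch sharpens the frequency resolution (here 0.25 Hz for a 4 s window), which is the usual trade-off between SSVEP decision speed and accuracy.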

    Beaming into the News: A System for and Case Study of Tele-Immersive Journalism

    We show how a combination of virtual reality and robotics can be used to beam a physical representation of a person to a distant location, and describe an application of this system in the context of journalism. Full body motion capture data of a person is streamed and mapped in real time onto the limbs of a humanoid robot present at the remote location. A pair of cameras in the robot's 'eyes' stream stereoscopic video back to the HMD worn by the visitor, and a two-way audio connection allows the visitor to talk to people in the remote destination. By fusing the multisensory data of the visitor with the robot, the visitor's 'consciousness' is transferred to the robot's body. This system was used by a journalist to interview a neuroscientist and a chef 900 miles distant, about food for the brain, resulting in an article published in the popular press.
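The motion-mapping step described above can be illustrated with a minimal retargeting function: each streamed frame of captured joint angles is mapped onto the robot's limbs, clamped to the robot's mechanical ranges so the stream can never command an unreachable posture. The joint names and limits below are hypothetical, since the abstract does not specify the robot's kinematics:

```python
from dataclasses import dataclass

# Hypothetical joint limits (radians) for the humanoid's arm;
# the actual robot and its ranges are not given in the abstract.
ROBOT_LIMITS = {
    "shoulder_pitch": (-2.0, 2.0),
    "shoulder_roll": (-0.3, 1.3),
    "elbow": (0.0, 1.5),
}

@dataclass
class Frame:
    """One frame of streamed motion-capture data: joint name -> angle."""
    angles: dict

def retarget(frame: Frame) -> dict:
    """Map a human pose frame onto the robot, clamping each angle
    to the robot's mechanical range."""
    targets = {}
    for joint, angle in frame.angles.items():
        lo, hi = ROBOT_LIMITS[joint]
        targets[joint] = min(max(angle, lo), hi)
    return targets

# A human elbow flexed past the robot's 1.5 rad limit gets clamped.
frame = Frame(angles={"shoulder_pitch": 0.4, "shoulder_roll": 0.1, "elbow": 2.1})
print(retarget(frame))  # elbow comes out as 1.5
```

In a real pipeline this per-frame mapping would run inside the streaming loop, alongside velocity limiting and smoothing, with the stereoscopic video and audio travelling in the opposite direction.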

    Sustaining Emotional Communication when Interacting with an Android Robot


    Effect of avatars and viewpoints on performance in virtual world: efficiency vs. telepresence

    An increasing number of our interactions are mediated through e-technologies. In order to enhance the human’s feeling of presence in these virtual environments, also known as telepresence, the individual is usually embodied in an avatar. The natural adaptation capabilities of the human being, underlain by the plasticity of the body schema, make body ownership of the avatar possible, in which the user feels more like his/her virtual alter ego than himself/herself. However, this phenomenon only occurs under specific conditions. Two experiments were designed to study the human’s feeling and performance according to a scale of natural relationship between the participant and the avatar. In both experiments, the human-avatar interaction was carried out through a Natural User Interface (NUI), and the individual’s performance was assessed through a behavioural index, based on the concept of affordances, and a questionnaire of presence. The first experiment shows that the feeling of telepresence and ownership seems to be greater when the avatar’s kinematics and proportions are close to those of the user. However, the efficiency to complete the task is higher for a more mechanical and stereotypical avatar. The second experiment shows that the manipulation of the viewpoint induces a similar difference across the sessions. Results are discussed in terms of the neurobehavioural processes underlying performance in virtual worlds, which seem to be based on ownership when the virtual artefact ensures a preservation of sensorimotor contingencies, and on simple geometrical mapping when the conditions become more artificial.

    Humanization of robots: is it really such a good idea?

    The aim of this review was to examine the pros and cons of humanizing social robots following a psychological perspective. As such, we had six goals. First, we defined what social robots are. Second, we clarified the meaning of humanizing social robots. Third, we presented the theoretical backgrounds for promoting humanization. Fourth, we conducted a review of empirical results of the positive effects and the negative effects of humanization on human–robot interaction (HRI). Fifth, we presented some of the political and ethical problems raised by the humanization of social robots. Lastly, we discussed the overall effects of the humanization of robots in HRI and suggested new avenues of research and development.

    Biological Plausibility of Arm Postures Influences the Controllability of Robotic Arm Teleoperation

    Objective: We investigated how participants controlling a humanoid robotic arm's 3D endpoint position by moving their own hand are influenced by the robot's postures. We hypothesized that control would be facilitated (impeded) by biologically plausible (implausible) postures of the robot. Background: Kinematic redundancy, whereby different arm postures achieve the same goal, is such that a robotic arm or prosthesis could theoretically be controlled with fewer signals than constitutive joints. However, congruency between a robot's motion and our own is known to interfere with movement production. Hence, we expect the human-likeness of a robotic arm's postures during endpoint teleoperation to influence controllability. Method: Twenty-two able-bodied participants performed a target-reaching task with a robotic arm whose endpoint's 3D position was controlled by moving their own hand. They completed a two-condition experiment corresponding to the robot displaying either biologically plausible or implausible postures. Results: Upon initial practice in the experiment's first part, endpoint trajectories were faster and shorter when the robot displayed human-like postures. However, these effects did not persist in the second part, where performance with implausible postures appeared to have benefited from initial practice with plausible ones. Conclusion: Humanoid robotic arm endpoint control is impaired by biologically implausible joint coordinations during initial familiarization but not afterwards, suggesting that the human-likeness of a robot's postures is more critical for control in this initial period. Application: These findings provide insight for the design of robotic arm teleoperation and prosthesis control schemes, in order to favor better familiarization and control from their users.
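The kinematic redundancy underlying this study, where distinct joint coordinations place the endpoint at the same position, can be seen even in a planar two-link arm, which admits an elbow-up and an elbow-down posture for most reachable targets. A self-contained sketch with illustrative link lengths (not taken from the paper):

```python
import math

L1, L2 = 0.3, 0.25  # link lengths (m); illustrative values only

def ik_2link(x: float, y: float, elbow_up: bool = True):
    """Closed-form inverse kinematics of a planar 2-link arm.
    Both the elbow-up and elbow-down postures place the endpoint
    at (x, y): two joint coordinations, one goal."""
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    q2 = math.acos(c2) * (1 if elbow_up else -1)
    q1 = math.atan2(y, x) - math.atan2(L2 * math.sin(q2),
                                       L1 + L2 * math.cos(q2))
    return q1, q2

def fk(q1: float, q2: float):
    """Forward kinematics: joint angles -> endpoint position."""
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
    return x, y

# Two different postures, one endpoint:
for up in (True, False):
    q1, q2 = ik_2link(0.35, 0.2, elbow_up=up)
    x, y = fk(q1, q2)
    print(round(x, 3), round(y, 3))  # 0.35 0.2 both times
```

With more joints than task dimensions, the solution set grows from two discrete postures to a continuum, which is exactly the freedom a teleoperation scheme can spend on either biologically plausible or implausible coordinations.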

    Human-robot interaction for telemanipulation by small unmanned aerial systems

    This dissertation investigated the human-robot interaction (HRI) for the Mission Specialist role in a telemanipulating unmanned aerial system (UAS). The emergence of commercial unmanned aerial vehicle (UAV) platforms transformed the civil and environmental engineering industries through applications such as surveying, remote infrastructure inspection, and construction monitoring, which normally use UAVs for visual inspection only. Recent developments, however, suggest that performing physical interactions in dynamic environments will be important tasks for future UAS, particularly in applications such as environmental sampling and infrastructure testing. In all domains, the availability of a Mission Specialist to monitor the interaction and intervene when necessary is essential for successful deployments. Additionally, manual operation is the default mode for safety reasons; therefore, understanding Mission Specialist HRI is important for all small telemanipulating UAS in civil engineering, regardless of system autonomy and application. A 5-subject exploratory study and a 36-subject experimental study were conducted to evaluate variations of a dedicated, mobile Mission Specialist interface for aerial telemanipulation from a small UAV. The Shared Roles Model was used to model the UAS human-robot team, and the Mission Specialist and Pilot roles were informed by the current state of practice for manipulating UAVs. Three interface camera view designs were tested using a within-subjects design, which included an egocentric view (perspective from the manipulator), exocentric view (perspective from the UAV), and mixed egocentric-exocentric view. The experimental trials required Mission Specialist participants to complete a series of tasks with physical, visual, and verbal requirements.
Results from these studies found that subjects who preferred the exocentric condition performed tasks 50% faster when using their preferred interface; however, interface preferences did not affect performance for participants who preferred the mixed condition. This result led to a second finding that participants who preferred the exocentric condition were distracted by the egocentric view during the mixed condition, likely caused by cognitive tunneling, and the data suggest tradeoffs between performance improvements and attentional costs when adding information in the form of multiple views to the Mission Specialist interface. Additionally, based on this empirical evaluation of multiple camera views, the exocentric view was recommended for use in a dedicated Mission Specialist telemanipulation interface. Contributions of this thesis include: i) conducting the first focused HRI study of aerial telemanipulation, ii) development of an evaluative model for telemanipulation performance, iii) creation of new recommendations for aerial telemanipulation interfacing, and iv) contribution of code, hardware designs, and system architectures to the open-source UAV community. The evaluative model provides a detailed framework, a complement to the abstraction of the Shared Roles Model, that can be used to measure the effects of changes in the system, environment, operators, and interfacing factors on performance. The practical contributions of this work will expedite the use of manipulating UAV technologies by scientists, researchers, and stakeholders, particularly those in civil engineering, who will directly benefit from improved manipulating UAV performance
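The three camera conditions compared above can be captured as a small interface configuration, which also makes the attentional-cost finding concrete: the mixed condition renders strictly more streams than either single view. The stream names are hypothetical; only the three conditions themselves come from the study:

```python
from enum import Enum, auto

class View(Enum):
    """The three Mission Specialist camera conditions in the study."""
    EGOCENTRIC = auto()  # perspective from the manipulator
    EXOCENTRIC = auto()  # perspective from the UAV
    MIXED = auto()       # both streams composited

def streams_for(view: View) -> list:
    """Camera feeds the interface renders for a given condition
    (stream names are made up for illustration)."""
    if view is View.EGOCENTRIC:
        return ["manipulator_cam"]
    if view is View.EXOCENTRIC:
        return ["uav_cam"]
    return ["uav_cam", "manipulator_cam"]

print(streams_for(View.MIXED))  # prints: ['uav_cam', 'manipulator_cam']
```

Per the study's recommendation, a dedicated Mission Specialist interface would default to the exocentric condition, with the extra egocentric stream treated as optional information rather than a free improvement.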

    Technological change, bargaining power, and wages

    Book synopsis: “The robots are taking our jobs!” Not long ago, this worry was the stuff of science fiction. Now, as self–driving cars take to the streets and robots fill our warehouses and factories, it is entering mainstream political debate around the world. This raises important questions for all of us. How society uses new technologies is not a foregone conclusion. It depends on political decisions, cultural norms and economic choices as much as on the technologies themselves. This book looks at the phenomenon of new robot technologies, asks what impact they might have on the economy, and considers how governments, businesses and individuals should respond to them. Because technological change is a complex business, it includes views from a range of disciplines, including economics, engineering, history, philosophy and innovation studies