
    A Comparison of Avatar-, Video-, and Robot-Mediated Interaction on Users’ Trust in Expertise

    Communication technologies are becoming increasingly diverse in form and functionality. A central concern is the ability to detect whether others are trustworthy. Judgments of trustworthiness rely, in part, on assessments of non-verbal cues, which are affected by media representations. In this research, we compared trust formation across three media representations. We presented 24 participants with advisors represented by two of three alternative formats: video, avatar, or robot. Unknown to the participants, one advisor was an expert and the other a non-expert. We observed participants' advice-seeking behaviour under risk as an indicator of their trust in the advisor. We found that most participants preferred seeking advice from the expert, but we also found a tendency to seek advice from the robot and video representations; advice from the avatar, in contrast, was sought more rarely. Users' self-reports support these findings. These results suggest that when users make trust assessments, the physical presence of the robot representation might compensate for the lack of identity cues.

    Makeup Lamps: Live Augmentation of Human Faces via Projection

    We propose the first system for live dynamic augmentation of human faces. Using projector-based illumination, we alter the appearance of human performers during novel performances. The key challenge of live augmentation is latency: an image is generated according to a specific pose, but is displayed on a different facial configuration by the time it is projected. Therefore, our system aims at reducing latency during every step of the process, from capture, through processing, to projection. Using infrared illumination, an optically and computationally aligned high-speed camera detects facial orientation as well as expression. The estimated expression blendshapes are mapped onto a lower-dimensional space, and the facial motion and non-rigid deformation are estimated, smoothed, and predicted through adaptive Kalman filtering. Finally, the desired appearance is generated by interpolating precomputed offset textures according to time, global position, and expression. We have evaluated our system through an optimized CPU and GPU prototype, and demonstrated successful low-latency augmentation for different performers and performances with varying facial play and motion speed. In contrast to existing methods, the presented system is the first to fully support dynamic facial projection mapping without requiring any physical tracking markers, while incorporating facial expressions.
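The abstract's latency-compensation idea, predicting facial motion forward by the system's capture-to-projection delay, can be illustrated with a simple Kalman predictor. This is only a sketch under a constant-velocity model for one pose parameter; the paper's adaptive filter, state model, and noise tuning are not specified here.

```python
import numpy as np

class LatencyPredictor:
    """Constant-velocity Kalman filter that forecasts a single pose
    parameter one projector-latency interval ahead.
    Illustrative only: state model and noise values are assumptions."""

    def __init__(self, dt, process_var=1e-2, meas_var=1e-3):
        self.x = np.zeros(2)                         # state: [position, velocity]
        self.P = np.eye(2)                           # state covariance
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
        self.Q = process_var * np.eye(2)             # process noise
        self.H = np.array([[1.0, 0.0]])              # we observe position only
        self.R = np.array([[meas_var]])              # measurement noise

    def update(self, z):
        # Predict forward to the current camera frame.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Fuse the new measurement (standard Kalman correction).
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P

    def forecast(self, latency):
        # Extrapolate position by the measured end-to-end latency,
        # so the projected image lands on the future facial pose.
        return self.x[0] + self.x[1] * latency
```

In a full pipeline, each blendshape coefficient and rigid-pose component would get such a predictor (or one joint multivariate filter), with `latency` measured for the capture-process-project loop.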

    Multi-destination beaming: apparently being in three places at once through robotic and virtual embodiment

    It has been shown that an illusion of ownership over an artificial limb, or even an entire body, can be induced in people through multisensory stimulation, creating the impression that the surrogate body is the person's own. Such body ownership illusions (BOIs) have been shown to occur with virtual bodies, mannequins, and humanoid robots. In this study, we show the possibility of eliciting a full-body ownership illusion over not one, but multiple artificial bodies concurrently. We demonstrate this by describing a system that allowed a participant to inhabit and fully control two different humanoid robots located in two distinct places and a virtual body in immersive virtual reality, using real-time full-body tracking and two-way audio communication, thereby giving them the illusion of ownership over each of them. We implemented this by allowing the participant to be embodied in any one surrogate body at a given moment and letting them instantaneously switch between them. While the participant was embodied in one of the bodies, a proxy system would track the locations currently unoccupied and would control their remote representation there, so that tasks in those locations continued in a logical fashion. To test the efficacy of this system, an exploratory study was carried out with a fully functioning setup with three destinations and a simplified version of the proxy for use in a social interaction. The results indicate that the system was physically and psychologically comfortable and was rated highly by participants in terms of usability. Additionally, feelings of body ownership and agency were reported, which were not influenced by the type of body representation. The results provide clues regarding body ownership illusions with humanoid robots of different dimensions, along with insight into self-localization and multilocation.
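The switching logic described above, one surrogate body under direct control while a proxy keeps the unoccupied bodies behaving sensibly, can be sketched as a small dispatcher. The names and structure here are assumptions for illustration, not the authors' implementation.

```python
class MultiDestinationController:
    """Routes one participant's tracking data to exactly one of several
    surrogate bodies; all other bodies fall back to proxy behaviour.
    Sketch only: destination names and command format are hypothetical."""

    def __init__(self, destinations):
        self.destinations = list(destinations)
        self.embodied = self.destinations[0]   # body currently under direct control

    def switch_to(self, destination):
        """Instantaneously transfer embodiment to another surrogate body."""
        if destination not in self.destinations:
            raise ValueError(f"unknown destination: {destination}")
        self.embodied = destination

    def step(self, tracked_pose):
        """Dispatch one tracking frame: full-body data to the embodied
        surrogate, autonomous proxy behaviour everywhere else."""
        commands = {}
        for d in self.destinations:
            if d == self.embodied:
                commands[d] = ("mirror", tracked_pose)   # slaved to participant
            else:
                commands[d] = ("proxy", None)            # continue local task
        return commands
```

A real system would also hand the proxy the task state of each unoccupied location so its behaviour stays contextually plausible, as the abstract describes.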

    Beaming into the Rat World: Enabling Real-Time Interaction between Rat and Human Each at Their Own Scale

    Immersive virtual reality (IVR) typically generates the illusion in participants that they are in the displayed virtual scene, where they can experience and interact with events as if these were really happening. Teleoperator (TO) systems place people at a remote physical destination embodied as a robotic device, where participants typically have the sensation of being at the destination, with the ability to interact with entities there. In this paper, we show how to combine IVR and TO to allow a new class of application. The participant in the IVR is represented at the destination by a physical robot (TO), and simultaneously the remote place and the entities within it are represented to the participant in the IVR. Hence, the IVR participant has a normal virtual reality experience, but one where his or her actions and behaviour control the remote robot and can therefore have physical consequences. Here, we show how such a system can be deployed to allow a human and a rat to operate together, with the human interacting with the rat at human scale and the rat interacting with the human at rat scale. The human is represented in a rat arena by a small robot that is slaved to the human's movements, whereas the tracked rat is represented to the human in the virtual reality by a humanoid avatar. We describe the system and also a study that was designed to test whether humans can successfully play a game with the rat. The results show that the system functioned well and that the humans were able to interact with the rat to fulfil the tasks of the game. This system opens up the possibility of new applications in the life sciences involving participant observation of, and interaction with, animals but at human scale.
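The core of the two-scale coupling above is a pair of coordinate mappings between the physical rat arena and the human-scale virtual room. A minimal sketch, with arena and room dimensions chosen purely for illustration (the paper's actual dimensions are not given here):

```python
# Illustrative side lengths, in metres; not the study's actual values.
RAT_ARENA = 1.0     # physical arena shared by the rat and the small robot
VIRTUAL_ROOM = 8.0  # human-scale virtual room shown in the IVR

def human_to_arena(x, y):
    """Map the tracked human's floor position in the virtual room to the
    small robot that represents them in the rat arena."""
    s = RAT_ARENA / VIRTUAL_ROOM
    return (x * s, y * s)

def rat_to_virtual(x, y):
    """Map the tracked rat's arena position to its humanoid avatar's
    position in the immersive virtual room."""
    s = VIRTUAL_ROOM / RAT_ARENA
    return (x * s, y * s)
```

Each side thus sees the other at its own scale: the human's metre-long stride becomes a centimetre-scale robot move, and the rat's small displacement becomes a human-scale step of the avatar.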

    Situated Displays in Telecommunication

    In face-to-face conversation, numerous cues of attention, eye contact, and gaze direction provide important channels of information. These channels carry cues that regulate turn-taking, establish a sense of engagement, and indicate the focus of conversation. However, some subtleties of gaze can be lost in common videoconferencing systems, because the single perspective view of the camera does not preserve the spatial characteristics of the face-to-face situation. In particular, in group conferencing, the 'Mona Lisa effect' makes all observers feel that they are being looked at when the remote participant looks at the camera. In this thesis, we present designs and evaluations of four novel situated teleconferencing systems which aim to improve the teleconferencing experience. Firstly, we demonstrate the effectiveness of a spherical video telepresence system in allowing a single observer at multiple viewpoints to accurately judge where the remote user is placing their gaze. Secondly, we demonstrate the gaze-preserving capability of a cylindrical video telepresence system, this time for multiple observers at multiple viewpoints. Thirdly, we demonstrate a further improvement in conveying gaze with a random-hole autostereoscopic multiview telepresence system that adds stereoscopic cues. Lastly, we investigate the influence of display type and viewing angle on how people place their trust during avatar-mediated interaction. The results show that the spherical avatar telepresence system can be viewed qualitatively similarly from all angles, and demonstrate how trust can be altered depending on how one views the avatar. Together these demonstrations motivate the further study of novel display configurations and suggest parameters for the design of future teleconferencing systems.

    Money & Trust in Digital Society, Bitcoin and Stablecoins in ML enabled Metaverse Telecollaboration

    We present a state-of-the-art and positioning book about digital-society tools, namely Web3, Bitcoin, the Metaverse, AI/ML, accessibility, safeguarding, and telecollaboration. A high-level overview of Web3 technologies leads to a description of blockchain, and the Bitcoin network is specifically selected for detailed examination. Suitable components of the extended Bitcoin ecosystem are described in more depth. Other mechanisms for native digital value transfer are described, with a focus on 'money'. Metaverse technology is surveyed, primarily from the perspective of Bitcoin and extended reality. Bitcoin is selected as the best contender for value transfer in metaverses because of its free and open-source nature and its network effect. Challenges and risks of this approach are identified. A cloud-deployable, virtual-machine-based technology-stack deployment guide, with a focus on cybersecurity best practice, can be downloaded from GitHub to experiment with the technologies. This deployable lab is designed to inform the development of secure value transaction for small and medium-sized companies.

    A Retro-Projected Robotic Head for Social Human-Robot Interaction

    As people respond strongly to faces and facial features, both consciously and subconsciously, faces are an essential aspect of social robots. Robotic faces and heads until recently belonged to one of the following categories: virtual, mechatronic, or animatronic. As an original contribution to the field of human-robot interaction, I present the R-PAF technology (Retro-Projected Animated Faces): a novel robotic head displaying a real-time, computer-rendered face, retro-projected from within the head volume onto a mask, as well as its driving software, designed with openness and portability to other hybrid robotic platforms in mind. The work constitutes the first implementation of a non-planar mask suitable for social human-robot interaction, comprising key elements of social interaction such as precise gaze-direction control, facial expressions, and blushing, and the first demonstration of an interactive video-animated facial mask mounted on a 5-axis robotic arm. The LightHead robot, an R-PAF demonstrator and experimental platform, has demonstrated robustness in both extended controlled and uncontrolled settings. The iterative hardware and facial design, details of the three-layered software architecture and tools, the implementation of life-like facial behaviours, and improvements in social-emotional robotic communication are reported. Furthermore, a series of evaluations presents the first study on human performance in reading robotic gaze and another first on users' ethnic preferences towards a robot face.

    Telethrone: a situated display using retro-reflection based multi-view toward remote collaboration in small dynamic groups

    This research identifies a gap in telecommunication technology. Several novel technology demonstrators are tested experimentally throughout the research. The final system presented allows a remote participant in a conversation to unambiguously address individual members of a group of 5 people using non-verbal cues. The capability to link less formal groups through technology is the primary contribution. Technology-mediated communication is first reviewed, with attention to the different styles of meetings supported. A gap is identified for small informal groups. Small dynamic groups which are convened on demand for the solution of specific problems may be called "ad-hoc". In these meetings it is possible to 'pull up a chair'. This is poorly supported by current telecommunication tools; that is, it is difficult for one or more members to join such a meeting from a remote location. It is also difficult for physically co-located parties to reorient themselves in the meeting as goals evolve. As the major contribution toward addressing this, the 'Telethrone' is introduced. Telethrone projects a remote user onto a chair, bringing them into the local space. The chair acts as a situated display which can support multi-party head gaze, eye gaze, and body torque, so that each observer knows where the projected user is looking. It is simpler to implement and cheaper than current comparable systems. The underpinning approach is technology and systems development, with regard to HCI and psychology throughout. Prototypes, refinements, and novel engineered systems are presented. Two experiments to test these systems are peer-reviewed, and further design and experimentation were undertaken based on the positive results. The final paper is pending. An initial version of the new technology approach combined retro-reflective material with aligned pairs of cameras and projectors, connected by IP video. A counterbalanced repeated-measures experiment to analyse gaze interactions was undertaken.
Results suggest that the remote user is not excluded from triadic poker game-play. Analysis of the multi-view aspect of the system was inconclusive as to whether it shows an advantage over a set-up which does not support multi-view. User impressions from the questionnaires suggest that the current implementation still gives the impression of being a display despite its situated nature, although participants did feel the remote user was in the space with them. A refinement of the system using models generated by visual hull reconstruction can better connect eye gaze. An exploration is made of its ability to allow chairs to be moved around the meeting, and of what this might enable for the participants of the meeting. The ability to move furniture was earlier identified as an aid to natural interaction, but may also affect highly correlated subgroups in an ad-hoc meeting. This is unsupported by current technologies. Repositioning of several onlooking chairs seems to support 'fault lines'. Performance constraints of the current system are explored. An experiment tests whether it is possible to judge remote-participant eye gaze as the viewer changes location, attempting to address concerns raised by the first experiment, in which the physical offsets of the IP camera lenses from the projected eyes of the remote participants (in both directions) may have influenced perception of attention. A third experiment shows that five participants viewing a remote recording, presented through the Telethrone, can judge the attention of the remote participant accurately when the viewpoint is correctly rendered for their location in the room. This is compared to a control in which spatial discrimination is impossible. A figure for how many optically separate retro-reflected segments can be supported is obtained through spatial analysis and testing. It is possible to render the optical maximum of 5 independent viewpoints, supporting an 'ideal' meeting of 6 people.
The tested system uses one computer at the meeting side of the exchange, making it potentially deployable from a small flight case. The thesis presents and tests the utility of elements toward a complete system, and finds that remote users are perceived as present in the conversation, spatially segmented with a view for each onlooker; that eye gaze can be reconnected through the system using 3D video; and that performance supports scalability up to the theoretical maximum for the material and an ideal meeting size.