
    A Comparison of Avatar-, Video-, and Robot-Mediated Interaction on Users’ Trust in Expertise

    Communication technologies are becoming increasingly diverse in form and functionality. A central concern is the ability to detect whether others are trustworthy. Judgments of trustworthiness rely, in part, on assessments of non-verbal cues, which are affected by media representations. In this research, we compared trust formation across three media representations. We presented 24 participants with advisors represented by two of the three alternative formats: video, avatar, or robot. Unknown to the participants, one advisor was an expert and the other a non-expert. We observed participants' advice-seeking behavior under risk as an indicator of their trust in the advisor. We found that most participants preferred seeking advice from the expert, but we also found a tendency to seek robot or video advice; avatar advice, in contrast, was sought more rarely. Users' self-reports support these findings. These results suggest that when users make trust assessments, the physical presence of the robot representation might compensate for the lack of identity cues.

    Beaming into the Rat World: Enabling Real-Time Interaction between Rat and Human Each at Their Own Scale

    Immersive virtual reality (IVR) typically generates the illusion in participants that they are in the displayed virtual scene, where they can experience and interact in events as if they were really happening. Teleoperator (TO) systems place people at a remote physical destination, embodied as a robotic device, where participants typically have the sensation of being at the destination with the ability to interact with entities there. In this paper, we show how to combine IVR and TO to allow a new class of application. The participant in the IVR is represented at the destination by a physical robot (TO), and simultaneously the remote place and the entities within it are represented to the participant in the IVR. Hence, the IVR participant has a normal virtual reality experience, but one where his or her actions and behaviour control the remote robot and can therefore have physical consequences. Here, we show how such a system can be deployed to allow a human and a rat to operate together, with the human interacting with the rat at human scale and the rat interacting with the human at rat scale. The human is represented in a rat arena by a small robot that is slaved to the human's movements, whereas the tracked rat is represented to the human in the virtual reality by a humanoid avatar. We describe the system and a study designed to test whether humans can successfully play a game with the rat. The results show that the system functioned well and that the humans were able to interact with the rat to fulfil the tasks of the game. This system opens up the possibility of new applications in the life sciences involving participant observation of, and interaction with, animals, but at human scale.
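    At the heart of such a system is a two-way mapping between the rat's arena coordinates and the human-scale virtual room. The sketch below illustrates the idea with a single uniform scale factor; the factor, the coordinate conventions, and the function names are assumptions for illustration, not values from the paper.

    ```python
    # Minimal sketch of the two-way scale mapping described above.
    # The scale factor is a hypothetical illustration.
    import numpy as np

    SCALE = 10.0  # assumed ratio: 1 m in the rat arena ~ 10 m in the human-scale VR

    def rat_to_avatar(rat_xy_arena: np.ndarray) -> np.ndarray:
        """Map the tracked rat's arena position (metres) to the humanoid
        avatar's position in the human-scale virtual room."""
        return rat_xy_arena * SCALE

    def human_to_robot(human_xy_vr: np.ndarray) -> np.ndarray:
        """Map the tracked human's position in the virtual room to a target
        position for the small robot that represents them in the arena."""
        return human_xy_vr / SCALE

    # Example: a rat at (0.3, 0.2) in the arena appears as an avatar at (3.0, 2.0).
    print(rat_to_avatar(np.array([0.3, 0.2])))
    ```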

    Multi-destination beaming: apparently being in three places at once through robotic and virtual embodiment

    It has been shown that an illusion of ownership over an artificial limb, or even an entire body, can be induced in people through multisensory stimulation, leading them to process the surrogate body as if it were their actual body. Such body ownership illusions (BOIs) have been shown to occur with virtual bodies, mannequins, and humanoid robots. In this study, we show the possibility of eliciting a full-body ownership illusion over not one but multiple artificial bodies concurrently. We demonstrate this by describing a system that allowed a participant to inhabit and fully control two different humanoid robots located in two distinct places and a virtual body in immersive virtual reality, using real-time full-body tracking and two-way audio communication, thereby giving them the illusion of ownership over each of them. We implemented this by allowing the participant to be embodied in any one surrogate body at a given moment and letting them instantaneously switch between them. While the participant was embodied in one of the bodies, a proxy system tracked the currently unoccupied locations and controlled the participant's remote representations there, so that they continued performing the tasks in those locations in a logical fashion. To test the efficacy of this system, an exploratory study was carried out with a fully functioning setup with three destinations and a simplified version of the proxy for use in a social interaction. The results indicate that the system was physically and psychologically comfortable and was rated highly by participants in terms of usability. Additionally, feelings of BOI and agency were reported, which were not influenced by the type of body representation. The results provide us with clues regarding BOI with humanoid robots of different dimensions, along with insight about self-localization and multilocation.
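    The switching logic can be pictured as routing live tracking data to exactly one surrogate while a proxy policy keeps the others active. Below is a minimal sketch of that arrangement; the class names and the trivial proxy behaviour are illustrative assumptions, not the paper's implementation.

    ```python
    # Minimal sketch: one embodied surrogate driven by live tracking,
    # the unoccupied ones handed to a proxy controller.
    class Surrogate:
        def __init__(self, name):
            self.name = name

        def apply_pose(self, pose):
            print(f"{self.name}: driven by live tracking -> {pose}")

        def run_proxy_step(self):
            print(f"{self.name}: proxy continues the local task")

    class MultiDestinationSession:
        def __init__(self, surrogates):
            self.surrogates = surrogates
            self.embodied = surrogates[0]

        def switch_to(self, surrogate):
            # Instantaneous switch: the newly embodied body takes live input,
            # the previous one is handed back to the proxy.
            self.embodied = surrogate

        def tick(self, tracked_pose):
            for s in self.surrogates:
                if s is self.embodied:
                    s.apply_pose(tracked_pose)
                else:
                    s.run_proxy_step()

    session = MultiDestinationSession(
        [Surrogate("robot_A"), Surrogate("robot_B"), Surrogate("virtual_body")])
    session.tick(tracked_pose={"head": (0, 1.7, 0)})
    session.switch_to(session.surrogates[1])
    session.tick(tracked_pose={"head": (0, 1.6, 0.1)})
    ```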

    Situated Displays in Telecommunication

    In face-to-face conversation, numerous cues of attention, eye contact, and gaze direction provide important channels of information. These channels create cues that include turn taking, establish a sense of engagement, and indicate the focus of conversation. However, some subtleties of gaze can be lost in common videoconferencing systems, because the single perspective view of the camera does not preserve the spatial characteristics of the face-to-face situation. In particular, in group conferencing, the 'Mona Lisa effect' makes all observers feel that they are being looked at when the remote participant looks at the camera. In this thesis, we present designs and evaluations of four novel situated teleconferencing systems that aim to improve the teleconferencing experience. Firstly, we demonstrate the effectiveness of a spherical video telepresence system in allowing a single observer at multiple viewpoints to accurately judge where the remote user is placing their gaze. Secondly, we demonstrate the gaze-preserving capability of a cylindrical video telepresence system, but for multiple observers at multiple viewpoints. Thirdly, we demonstrate the further improvement of a random-hole autostereoscopic multiview telepresence system in conveying gaze by adding stereoscopic cues. Lastly, we investigate the influence of display type and viewing angle on how people place their trust during avatar-mediated interaction. The results show that the spherical avatar telepresence system can be viewed qualitatively similarly from all angles, and demonstrate how trust can be altered depending on how one views the avatar. Together these demonstrations motivate the further study of novel display configurations and suggest parameters for the design of future teleconferencing systems.

    Makeup Lamps: Live Augmentation of Human Faces via Projection

    We propose the first system for live dynamic augmentation of human faces. Using projector-based illumination, we alter the appearance of human performers during novel performances. The key challenge of live augmentation is latency: an image is generated according to a specific pose, but is displayed on a different facial configuration by the time it is projected. Therefore, our system aims at reducing latency during every step of the process, from capture, through processing, to projection. Using infrared illumination, an optically and computationally aligned high-speed camera detects facial orientation as well as expression. The estimated expression blendshapes are mapped onto a lower-dimensional space, and the facial motion and non-rigid deformation are estimated, smoothed, and predicted through adaptive Kalman filtering. Finally, the desired appearance is generated by interpolating precomputed offset textures according to time, global position, and expression. We have evaluated our system through an optimized CPU and GPU prototype, and demonstrated successful low-latency augmentation for different performers and performances with varying facial play and motion speed. In contrast to existing methods, the presented system is the first that fully supports dynamic facial projection mapping without requiring any physical tracking markers, and it incorporates facial expressions.
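    The latency compensation hinges on predicting the face's state slightly ahead of the projection time. The sketch below shows the core idea with a plain constant-velocity Kalman filter on a single blendshape weight, extrapolating past the system lag; the paper uses an adaptive variant, and all rates and noise parameters here are assumptions for illustration.

    ```python
    # Minimal sketch of Kalman-based latency compensation: filter a noisy
    # blendshape weight, then extrapolate it forward by the projector lag.
    import numpy as np

    dt, latency = 1 / 500.0, 0.010   # illustrative: 500 Hz camera, 10 ms lag
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
    H = np.array([[1.0, 0.0]])               # we observe the weight only
    Q = np.eye(2) * 1e-5                      # process noise (assumed)
    R = np.array([[1e-3]])                    # measurement noise (assumed)

    x = np.zeros((2, 1))   # state: [weight, weight_velocity]
    P = np.eye(2)

    def step(z):
        """One predict/update cycle; returns the weight extrapolated
        ahead by the projection latency."""
        global x, P
        x = F @ x                             # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                   # update with measurement z
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        return float(x[0, 0] + latency * x[1, 0])  # look past the lag

    for z in [0.10, 0.12, 0.15, 0.19]:        # toy measurements
        print(step(z))
    ```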

    Remote Collaborative BIM-based Mixed Reality Approach for Supporting Facilities Management Field Tasks

    Facilities Management (FM) day-to-day tasks require suitable methods to facilitate work orders and improve performance through better collaboration between the office and the field. Building Information Modeling (BIM) provides opportunities to support collaboration and to improve the efficiency of Computerized Maintenance Management Systems (CMMSs) by sharing building information between different applications/users throughout the lifecycle of the facility. However, manual retrieval of building element information can be challenging and time-consuming for field workers during FM operations. Mixed Reality (MR) is a visualization technique that can be used to improve the visual perception of the facility by superimposing 3D virtual objects and textual information on top of the view of real-world building objects. The objectives of this research are: (1) investigating an automated method to capture and record task-related data (e.g., defects) with respect to a georeferenced BIM model and share them directly with the remote office based on the field worker's point of view in mobile situations; (2) investigating the potential of using MR, BIM, and sensory data for FM tasks to provide improved visualization and perception that satisfy the needs of the facility manager at the office and the field workers, with less visual and mental disturbance; and (3) developing an effective method for interactive visual collaboration to improve FM field tasks. This research discusses the development of a collaborative BIM-based MR approach to support facilities field tasks. The research framework integrates multisource facilities information, BIM models, and hybrid tracking in an MR-based setting to retrieve information based on time (e.g., inspection schedule) and the location of the field worker, visualize inspection and maintenance operations, and support remote collaboration and visual communication between the field worker and the manager at the office. The field worker uses an Augmented Reality (AR) application installed on his/her tablet. The manager at the office uses an Immersive Augmented Virtuality (IAV) application installed on a desktop computer. Based on the field worker's location, as well as the inspection or maintenance schedule, the field worker is assigned work orders and instructions from the office. Other sensory data (e.g., infrared thermography) can provide additional layers of information by augmenting the actual view of the field worker and supporting him/her in making effective decisions about existing and potential problems while communicating with the office in an Interactive Virtual Collaboration (IVC) mode. The contributions of this research are: (1) developing an MR framework for facilities management that has a field AR module and an office IAV module, which can be used independently or combined through remote IVC; (2) developing visualization methods for MR, including the virtual hatch and multilayer views, to enhance visual depth and context perception; (3) developing methods for AR and IAV modeling, including BIM-based data integration and customization suitable for each MR method; and (4) enhancing indoor tracking for AR FM systems by developing a hybrid tracking method. To investigate the applicability of the research method, a prototype system called Collaborative BIM-based Markerless Mixed Reality Facility Management System (CBIM3R-FMS) is developed and tested in a case study. The usability testing and validation show that the proposed methods have high potential to improve FM field tasks.
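    The retrieval step, filtering work orders by the field worker's current location and the inspection schedule, can be pictured as below. The data model is a hypothetical simplification of the BIM/CMMS integration; all names are illustrative.

    ```python
    # Minimal sketch of location- and schedule-based work-order retrieval.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class WorkOrder:
        element_id: str   # georeferenced BIM element the task refers to
        zone: str         # building zone containing the element
        due: date

    def orders_for_worker(orders, worker_zone: str, today: date):
        """Return the work orders the field worker should see: tasks in
        the worker's current zone that are due, nearest deadline first."""
        hits = [o for o in orders if o.zone == worker_zone and o.due <= today]
        return sorted(hits, key=lambda o: o.due)

    orders = [
        WorkOrder("AHU-03", zone="roof", due=date(2024, 5, 1)),
        WorkOrder("VAV-12", zone="floor2", due=date(2024, 4, 20)),
    ]
    print(orders_for_worker(orders, "floor2", date(2024, 4, 25)))
    ```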

    Deformable Beamsplitters: Enhancing Perception with Wide Field of View, Varifocal Augmented Reality Displays

    An augmented reality head-mounted display with full environmental awareness could present data in new ways and provide a new type of experience, allowing seamless transitions between real life and virtual content. However, creating a lightweight, optical see-through display providing both focus support and a wide field of view remains a challenge. This dissertation describes a new dynamic optical element, the deformable beamsplitter, and its applications for wide field of view, varifocal, augmented reality displays. Deformable beamsplitters combine a traditional deformable membrane mirror and a beamsplitter into a single element, allowing reflected light to be manipulated by the deforming membrane mirror while transmitted light remains unchanged. This research enables both single-element optical design and correct focus while maintaining a wide field of view, as demonstrated by the description and analysis of two prototype hardware display systems that incorporate deformable beamsplitters. As a user changes the depth of their gaze when looking through these displays, the focus of virtual content can quickly be altered to match the real world by simply modulating the air pressure in a chamber behind the deformable beamsplitter, thus ameliorating the vergence-accommodation conflict. Two user studies verify the display prototypes' capabilities and show the potential of the display in enhancing human performance at quickly perceiving visual stimuli. This work shows that near-eye displays built with deformable beamsplitters allow for simple optical designs that enable a wide field of view and comfortable viewing experiences, with the potential to enhance user perception.
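    The varifocal control loop reduces to two small mappings: gaze distance determines the required optical power, and optical power determines a target chamber pressure. The sketch below illustrates this; the linear pressure calibration and all constants are assumptions for illustration, where a real system would use a measured calibration curve.

    ```python
    # Minimal sketch of varifocal control: gaze depth -> dioptres -> pressure.
    def focus_power_dioptres(gaze_distance_m: float) -> float:
        """Optical power needed so virtual content focuses at the gaze depth."""
        return 1.0 / max(gaze_distance_m, 0.1)  # clamp to avoid divide-by-zero

    def target_pressure_kpa(power_d: float, k: float = 0.8, p0: float = 1.2) -> float:
        """Hypothetical linear calibration from dioptres to chamber pressure."""
        return p0 + k * power_d

    for d in (0.3, 1.0, 6.0):  # near, arm's length, far
        p = target_pressure_kpa(focus_power_dioptres(d))
        print(f"gaze at {d} m -> {p:.2f} kPa")
    ```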

    Ubiquitous interactive displays: magical experiences beyond the screen

    Ubiquitous interactive displays are interfaces that extend interaction beyond traditional flat screens. This thesis presents a series of proof-of-concept systems exploring three interactive displays. The first part of this thesis explores interactive projective displays, where the use of projected light transforms and enhances physical objects in our environment. The second part explores gestural displays, where traditional mobile devices such as our smartphones are equipped with depth sensors to enable input and output around the device. Finally, I introduce a new tactile display that imbues our physical spaces with a sense of touch in mid-air, without requiring the user to wear a physical device. These systems explore a future where interfaces are inherently everywhere, connecting our physical objects and spaces through visual, gestural, and tactile displays. I aim to demonstrate new technical innovations as well as compelling interactions between one or more users and their physical environment. These new interactive displays enable novel experiences beyond flat screens that blur the line between the physical and virtual worlds.
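    As an illustration of the around-device input idea, a depth frame from a sensor on the phone can be thresholded to find a hand beside the device, with its centroid acting as an off-screen cursor. The sketch below shows this under assumed thresholds and a toy frame; it is not the thesis's pipeline.

    ```python
    # Minimal sketch of around-device gestural input from a depth frame.
    import numpy as np

    NEAR_MM, FAR_MM = 100, 400  # assume the hand sits 10-40 cm from the device

    def hand_cursor(depth_mm: np.ndarray):
        """Return the (row, col) centroid of depth pixels in the hand
        range, or None if too few pixels look hand-like."""
        mask = (depth_mm > NEAR_MM) & (depth_mm < FAR_MM)
        if mask.sum() < 50:     # too few pixels to be a hand
            return None
        rows, cols = np.nonzero(mask)
        return rows.mean(), cols.mean()

    frame = np.full((120, 160), 1000)  # toy frame: background at 1 m
    frame[40:80, 90:130] = 250         # synthetic "hand" blob at 25 cm
    print(hand_cursor(frame))
    ```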