
    Agent mediation and management of virtual communities: a redefinition of the traditional community concept

    The paper explores the evolution of the concept of community in the light of computer-mediated immersive virtual environments. The traditional concept of community has become strained in its attempts to capture the evolving virtual community. We believe the concept of the virtual community is of paramount importance and examine the extent to which the traditional concept is being redefined to accommodate it. We examine the management and mediation of such environments, and specifically the social processes associated with their co-inhabiting users. We advocate the use of multi-agent systems to deliver this functionality.

    Energy-based control approaches in human-robot collaborative disassembly


    Expanding the bounds of seated virtual workspaces

    Mixed Reality (MR), Augmented Reality (AR) and Virtual Reality (VR) headsets can improve upon existing physical multi-display environments by rendering large, ergonomic virtual display spaces whenever and wherever they are needed. However, given the physical and ergonomic limitations of neck movement, users may need assistance to view these display spaces comfortably. Through two studies, we developed new ways of minimising the physical effort and discomfort of viewing such display spaces. We first explored how the mapping between gaze angle and display position could be manipulated, helping users view wider display spaces than is currently possible within an acceptable and comfortable range of neck movement. We then compared our implicit control of display position based on head orientation against explicit user control, finding significant benefits in terms of user preference, workload and comfort for implicit control. Our novel techniques create new opportunities for productive work by leveraging MR headsets to create interactive wide virtual workspaces with improved comfort and usability. These workspaces are flexible and can be used on the go, e.g., to improve remote working or make better use of commuter journeys.
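
    A minimal sketch of the kind of gaze-to-display mapping described above, assuming a simple gain amplification of head yaw; the comfort limit, display span and gain below are illustrative assumptions, not the paper's actual parameters (Python):

        import math

        # Assumed comfortable neck rotation and the wider virtual display span to cover.
        COMFORT_YAW_LIMIT = math.radians(30)
        DISPLAY_YAW_RANGE = math.radians(90)

        def display_yaw_for_head_yaw(head_yaw: float) -> float:
            """Map a small, comfortable head rotation onto a wider virtual display span."""
            clamped = max(-COMFORT_YAW_LIMIT, min(COMFORT_YAW_LIMIT, head_yaw))
            gain = DISPLAY_YAW_RANGE / COMFORT_YAW_LIMIT  # 3x amplification with these values
            return clamped * gain

        # Example: turning the head 20 degrees brings content 60 degrees away into view.
        print(round(math.degrees(display_yaw_for_head_yaw(math.radians(20))), 1))

    With implicit control of this kind, the virtual display position follows amplified head orientation, so the user never has to issue an explicit repositioning command.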

    Augmented Reality and Robotics: A Survey and Taxonomy for AR-enhanced Human-Robot Interaction and Robotic Interfaces

    This paper contributes to a taxonomy of augmented reality and robotics based on a survey of 460 research papers. Augmented and mixed reality (AR/MR) have emerged as a new way to enhance human-robot interaction (HRI) and robotic interfaces (e.g., actuated and shape-changing interfaces). Recently, an increasing number of studies in HCI, HRI, and robotics have demonstrated how AR enables better interactions between people and robots. However, research often remains focused on individual explorations, and key design strategies and research questions are rarely analyzed systematically. In this paper, we synthesize and categorize this research field along the following dimensions: 1) approaches to augmenting reality; 2) characteristics of robots; 3) purposes and benefits; 4) classification of presented information; 5) design components and strategies for visual augmentation; 6) interaction techniques and modalities; 7) application domains; and 8) evaluation strategies. We formulate key challenges and opportunities to guide and inform future research in AR and robotics.
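
    A minimal sketch of how the eight dimensions listed above could be encoded as a per-paper classification record; the field names paraphrase the dimensions and the example values are purely illustrative (Python):

        from dataclasses import dataclass

        @dataclass
        class ARRoboticsClassification:
            augmentation_approach: str   # 1) approach to augmenting reality
            robot_characteristics: str   # 2) characteristics of the robot
            purpose: str                 # 3) purpose and benefit
            information_presented: str   # 4) classification of presented information
            visual_design: str           # 5) design components and strategies
            interaction: str             # 6) interaction techniques and modalities
            application_domain: str      # 7) application domain
            evaluation: str              # 8) evaluation strategy

        # Illustrative entry for a hypothetical surveyed paper.
        example = ARRoboticsClassification(
            augmentation_approach="projector-based",
            robot_characteristics="industrial arm",
            purpose="communicate intent",
            information_presented="motion preview",
            visual_design="path overlay",
            interaction="gesture",
            application_domain="manufacturing",
            evaluation="user study",
        )
        print(example)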

    Facilitating Human-Robot Collaboration Using a Mixed-Reality Projection System

    Human-robot collaboration can be a challenging exercise, especially when both the human and the robot want to work simultaneously on a given task. It becomes difficult for the human to understand the intentions of the robot and vice versa. To overcome this problem, a novel approach using the concept of mixed reality has been proposed, which uses the surrounding space as the canvas on which to project information on and around 3D objects. A vision-based tracking algorithm precisely detects the pose and state of the 3D objects, and human-skeleton tracking is performed to create a system that is both human-aware and context-aware. Additionally, the system can warn humans about the intentions of the robot, thereby creating a safer environment to work in. An easy-to-use and universal visual language has been created which could form the basis for interaction in various human-robot collaborations in manufacturing industries. An objective and subjective user study was conducted to test the hypothesis that using this system to execute a human-robot collaborative task would result in higher performance than using traditional methods such as printed instructions or mobile devices. Multiple measuring tools were devised to analyze the data, which led to the conclusion that the proposed mixed-reality projection system does improve the human-robot team's efficiency and effectiveness and hence will be a better alternative in the future.
    Masters Thesis, Computer Science
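
    A minimal sketch of the human-aware warning logic described above, assuming hypothetical trackers that return 3D positions; the names, safety radius and projection cues are illustrative assumptions, not the thesis's actual implementation (Python):

        import math
        from dataclasses import dataclass

        @dataclass
        class Vec3:
            x: float
            y: float
            z: float

        def distance(a: Vec3, b: Vec3) -> float:
            return math.sqrt((a.x - b.x) ** 2 + (a.y - b.y) ** 2 + (a.z - b.z) ** 2)

        SAFETY_RADIUS_M = 0.5  # assumed minimum hand-to-target separation before warning

        def plan_projection(robot_target: Vec3, hand_position: Vec3) -> str:
            """Choose what to project around the robot's next target, given the tracked hand."""
            if distance(robot_target, hand_position) < SAFETY_RADIUS_M:
                return "warning_halo"   # warn the human that the robot intends to move here
            return "intent_marker"      # ordinary cue showing the robot's next action

        # Example: a hand 0.2 m from the robot's next grasp point triggers a warning cue.
        print(plan_projection(Vec3(0.0, 0.0, 0.0), Vec3(0.2, 0.0, 0.0)))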