
    Collaborative Gaze Channelling for Improved Cooperation During Robotic Assisted Surgery

    The use of multiple robots for performing complex tasks is becoming common practice in many robotic applications. When several operators are involved, effective cooperation with anticipated manoeuvres is important for seamless, synergistic control of all the end-effectors. In this paper, the concept of Collaborative Gaze Channelling (CGC) is presented for improved control of surgical robots during a shared task. Through eye tracking, the fixations of each operator are monitored and presented in a shared surgical workspace. CGC permits remote or physically separated collaborators to share their intentions by visualising the eye gaze of their counterparts, and thus recovers, to a certain extent, the information on mutual intent that we rely upon in a vis-à-vis working setting. In this study, the efficiency of surgical manipulation with and without CGC for controlling a pair of bimanual surgical robots is evaluated by analysing the level of coordination of two independent operators. Fitts' law is used to compare the quality of movement with and without CGC. A total of 40 subjects were recruited for this study, and the results show that the proposed CGC framework yields a significant improvement (p < 0.05) on all the motion indices used for quality assessment. This study demonstrates that visual guidance is an implicit yet effective way of communicating during collaborative tasks in robotic surgery. Detailed experimental validation results demonstrate the potential clinical value of the proposed CGC framework. © 2012 Biomedical Engineering Society.
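
    The movement-quality comparison rests on Fitts' law, which relates movement time to the difficulty of a pointing task. As a minimal sketch (not the authors' actual analysis pipeline), the Shannon formulation of the index of difficulty and the derived throughput can be computed as follows; the distances, widths, and times are hypothetical:

        import math

        def index_of_difficulty(distance: float, width: float) -> float:
            """Shannon formulation of Fitts' index of difficulty, in bits."""
            return math.log2(distance / width + 1.0)

        def throughput(distance: float, width: float, movement_time_s: float) -> float:
            """Throughput in bits/s: index of difficulty over movement time."""
            return index_of_difficulty(distance, width) / movement_time_s

        # Hypothetical example: a 120 mm reach to a 10 mm target in 0.85 s.
        print(f"ID = {index_of_difficulty(120.0, 10.0):.2f} bits")
        print(f"TP = {throughput(120.0, 10.0, 0.85):.2f} bits/s")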

    A review of the empirical studies of computer supported human-to-human communication

    This paper presents a review of empirical studies of human-to-human communication carried out over the last three decades. Although the review is primarily concerned with empirical studies of computer-supported human-to-human communication, it also discusses a number of studies of group work in non-computer-based collaborative environments, which form the basis of many recent empirical studies in the area of CSCW. The concept of person and task spaces is introduced and subsequently used to categorise the large volume of studies covered in this review. The paper also gives a comparative analysis of the findings of these studies and draws a number of general conclusions to guide the design and evaluation of future CSCW systems.

    Collaboration in Augmented Reality: How to establish coordination and joint attention?

    Schnier C, Pitsch K, Dierker A, Hermann T. Collaboration in Augmented Reality: How to establish coordination and joint attention? In: Boedker S, Bouvin NO, Lutters W, Wulf V, Ciolfi L, eds. Proceedings of the 12th European Conference on Computer Supported Cooperative Work (ECSCW 2011). Springer-Verlag London; 2011: 405-416.

    We present an initial investigation from a semi-experimental setting in which an HMD-based AR system has been used for real-time collaboration in a task-oriented scenario (the design of a museum exhibition). The analysis points out the specific conditions of interacting in an AR environment and focuses on one particular practical problem for the participants in coordinating their interaction: how to establish joint attention towards the same object or referent. The analysis offers insights into how the pair of users begins to familiarize with the environment and its limitations and opportunities, and how they establish new routines, e.g. for solving the 'joint attention' problem.

    An internet of laboratory things

    By creating "an Internet of Laboratory Things" we have built a blend of real and virtual laboratory spaces that enables students to gain the practical skills necessary for their professional science and engineering careers. All our students are distance learners, which provides them by default with the proving ground needed to develop their skills in remotely operating equipment and in collaborating with peers despite not being co-located. Our laboratories accommodate state-of-the-art research-grade equipment, as well as large class sets of off-the-shelf workstations and bespoke teaching apparatus. Distance to the student is no object, and the facilities are open at all hours. This approach is essential for STEM qualifications requiring the development of practical skills, and it offers higher efficiency and greater accessibility than is achievable in a solely residential programme.
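
    The paper does not specify a control protocol, but the core idea of remotely operable bench equipment can be sketched as a minimal HTTP endpoint that exposes an instrument's state to distant students. The instrument, its fields, and the port below are all hypothetical illustrations, not the authors' implementation:

        from http.server import BaseHTTPRequestHandler, HTTPServer
        import json

        # Hypothetical in-memory state standing in for a real bench instrument.
        instrument = {"power": "off", "wavelength_nm": 532}

        class InstrumentHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                # Report the current instrument state to a remote student.
                body = json.dumps(instrument).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)

            def do_POST(self):
                # Apply a remote command, e.g. {"power": "on"}.
                length = int(self.headers.get("Content-Length", 0))
                update = json.loads(self.rfile.read(length))
                instrument.update(update)
                self.send_response(204)
                self.end_headers()

        if __name__ == "__main__":
            HTTPServer(("", 8080), InstrumentHandler).serve_forever()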

    RealTimeChess: Lessons from a Participatory Design Process for a Collaborative Multi-Touch, Multi-User Game

    We report on a long-term participatory design process during which we designed and improved RealTimeChess, a collaborative but competitive game played using touch input by multiple people on a tabletop display. During the design process we integrated concurrent input from all players and pace control, allowing us to steer the interaction along a continuum between high-paced simultaneous and low-paced turn-based gameplay. In addition, we integrated tutorials for teaching interaction techniques, mechanisms to control territoriality, remote interaction, and alert feedback. Integrating these mechanisms during the participatory design process allowed us to examine their effects in detail, revealing, for instance, effects of the competitive setting on the perception of awareness as well as on territoriality. More generally, the resulting application provided us with a testbed for studying interaction on shared tabletop surfaces and yielded insights important for other time-critical or attention-demanding applications.
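
    The pace-control continuum described above can be pictured as a per-player cooldown on moves: a cooldown of zero approximates simultaneous real-time play, while a long cooldown approaches turn-taking. The following is our sketch of that general idea, not the game's actual implementation; all names are assumptions:

        import time

        class PaceController:
            """Gate moves per player: cooldown_s = 0 gives real-time play;
            a long cooldown_s pushes the game towards turn-based pacing."""

            def __init__(self, cooldown_s: float):
                self.cooldown_s = cooldown_s
                self.last_move: dict[str, float] = {}

            def try_move(self, player: str) -> bool:
                now = time.monotonic()
                if now - self.last_move.get(player, float("-inf")) < self.cooldown_s:
                    return False  # still within the cooldown window; reject
                self.last_move[player] = now
                return True

        pace = PaceController(cooldown_s=2.0)
        print(pace.try_move("alice"))  # True: first move is allowed
        print(pace.try_move("alice"))  # False: second move arrives too soon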

    Mixed reality participants in smart meeting rooms and smart home environments

    Human–computer interaction requires modeling of the user. A user profile typically contains preferences, interests, characteristics, and interaction behavior. However, in its multimodal interaction with a smart environment the user displays characteristics that show how the user, not necessarily consciously, verbally and nonverbally provides the smart environment with useful input and feedback. Especially in ambient intelligence environments we encounter situations where the environment supports interaction between the environment, smart objects (e.g., mobile robots, smart furniture), and human participants. It is therefore useful for the profile to contain a physical representation of the user obtained by multimodal capturing techniques. We discuss the modeling and simulation of interacting participants in a virtual meeting room, show how remote meeting participants can take part in meeting activities, and offer some observations on translating research results to smart home environments.
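
    A user profile extended with a captured physical representation, as the abstract suggests, might take roughly the following shape. Every field name here is an illustrative assumption of ours, not the authors' schema:

        from dataclasses import dataclass, field
        from typing import Optional

        @dataclass
        class PhysicalRepresentation:
            """Hypothetical embodiment data from multimodal capture."""
            head_pose: tuple = (0.0, 0.0, 0.0)   # yaw, pitch, roll in degrees
            position: tuple = (0.0, 0.0, 0.0)    # x, y, z within the room
            gaze_target: Optional[str] = None     # object currently looked at

        @dataclass
        class UserProfile:
            """Classic profile fields plus the captured physical state."""
            name: str
            preferences: dict = field(default_factory=dict)
            interests: list = field(default_factory=list)
            interaction_history: list = field(default_factory=list)
            embodiment: PhysicalRepresentation = field(
                default_factory=PhysicalRepresentation)

        profile = UserProfile(name="participant-01", interests=["robotics"])
        profile.embodiment.gaze_target = "whiteboard"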

    Virtual Meeting Rooms: From Observation to Simulation

    Virtual meeting rooms are used for the simulation of real meeting behavior and can show how people behave: how they gesture, move their heads and bodies, and direct their gaze during conversations. They are used for visualising models of meeting behavior and for evaluating those models. They are also used to show the effects of controlling certain parameters on behavior, and in experiments that examine the effect on communication when various channels of information (speech, gaze, gesture, posture) are switched off or manipulated in other ways. The paper presents the various stages in the development of a virtual meeting room and illustrates its uses by presenting results of experiments on whether human judges can infer conversational roles in a virtual meeting situation when they see only the head movements of the participants.
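
    The channel-manipulation experiments can be pictured as filtering which behavior streams the virtual room renders. The following sketch is our illustration of that idea, with hypothetical stream names and values:

        from dataclasses import dataclass

        @dataclass
        class ChannelConfig:
            """Toggles for the information channels a virtual room renders."""
            speech: bool = True
            gaze: bool = True
            gesture: bool = True
            posture: bool = True

        def render_frame(config: ChannelConfig, behavior: dict) -> dict:
            """Keep only the behavior streams whose channel is enabled."""
            enabled = {name for name, on in vars(config).items() if on}
            return {k: v for k, v in behavior.items() if k in enabled}

        frame = {"speech": "uh-huh", "gaze": (0.1, 0.3),
                 "gesture": "point", "posture": "lean-forward"}
        # Switch off gaze and gesture, as in a channel-ablation condition.
        print(render_frame(ChannelConfig(gaze=False, gesture=False), frame))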

    Virtual Meeting Rooms: From Observation to Simulation

    Much working time is spent in meetings and, as a consequence, meetings have become the subject of multidisciplinary research. Virtual Meeting Rooms (VMRs) are 3D virtual replicas of meeting rooms in which various modalities, such as speech, gaze, distance, gestures, and facial expressions, can be controlled. This allows VMRs to be used to improve remote meeting participation, to visualize multimedia data, and as an instrument for research into social interaction in meetings. This paper describes how these three uses can be realized in a VMR. We describe the process from observation through annotation to simulation, and a model that describes the relations between the annotated features of verbal and non-verbal conversational behavior. As an example of social perception research in the VMR, we describe an experiment to assess human observers' accuracy in judging head orientation.
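
    Observer accuracy for head orientation is naturally scored as the angular difference between a judged orientation and the annotated ground truth. A minimal sketch of such scoring, with made-up judgments (the paper's actual measure may differ):

        def angular_error_deg(judged_yaw: float, true_yaw: float) -> float:
            """Smallest absolute difference between two yaw angles, in degrees."""
            diff = (judged_yaw - true_yaw + 180.0) % 360.0 - 180.0
            return abs(diff)

        # Hypothetical (judged, annotated) head-yaw pairs in degrees.
        pairs = [(10.0, 5.0), (-30.0, -42.0), (170.0, -175.0)]
        errors = [angular_error_deg(j, t) for j, t in pairs]
        print(f"mean angular error = {sum(errors) / len(errors):.1f} degrees")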