85 research outputs found

    A Study on Body Mapping and Augmentation in Immersive Telepresence Environments

    Degree type: Doctorate by coursework. Dissertation committee: (Chair) Prof. Jun Rekimoto, Prof. Ken Sakamura, Prof. Noboru Koshizuka, Prof. Akihiro Nakao, and Prof. Yoichi Sato, all of the University of Tokyo. University of Tokyo.

    Design and Performance Analysis of a Skin-Stretcher Device for Urging Head Rotation

    This paper introduces a novel skin-stretcher device for gently urging head rotation. The device pulls and/or pushes the skin on the user's neck using servo motors. The user is induced to rotate his/her head by the sensation caused by the local stretching of skin. This mechanism informs the user when and by how much head rotation is requested; however, it does not force head rotation, i.e., it allows the user to ignore the stimuli and maintain voluntary movements. We implemented a prototype device and analyzed the performance of the skin stretcher as a human-in-the-loop system. Experimental results define its fundamental characteristics, such as input-output gain, settling time, and other dynamic behaviors. For example, the input-output gain is stable within a given installation condition but varies between users.
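    As a rough illustration of the reported analysis, the sketch below estimates two of the named characteristics, steady-state input-output gain and settling time, from a recorded step response. It is not the paper's code; the signal names, the synthetic first-order response, and the 5% settling band are assumptions.

```python
# Hedged sketch: estimating input-output gain and settling time from a step
# response of the human-in-the-loop system. Signal names, the synthetic data,
# and the 5% settling band are illustrative assumptions, not from the paper.
import numpy as np

def step_response_metrics(t, head_angle, step_input_deg, band=0.05):
    """Return (gain, settling_time) for a recorded step response.

    t              -- time stamps in seconds
    head_angle    -- measured head rotation in degrees
    step_input_deg -- magnitude of the commanded skin-stretch step
    band           -- settling band as a fraction of the final value
    """
    final_value = np.mean(head_angle[-10:])    # tail average as steady state
    gain = final_value / step_input_deg        # degrees out per degree in

    # Settling time: the last instant the response is outside the +/- band
    # around the final value; it stays inside the band afterwards.
    outside = np.nonzero(np.abs(head_angle - final_value)
                         > band * abs(final_value))[0]
    settling_time = t[outside[-1]] if outside.size else t[0]
    return gain, settling_time

# Synthetic first-order response (gain 1.5, time constant 0.4 s, 10-degree step):
t = np.linspace(0.0, 3.0, 300)
response = 1.5 * 10.0 * (1.0 - np.exp(-t / 0.4))
print(step_response_metrics(t, response, step_input_deg=10.0))  # ~ (1.5, 1.2 s)
```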

    The Effects of Sharing Awareness Cues in Collaborative Mixed Reality

    Augmented and Virtual Reality provide unique capabilities for Mixed Reality collaboration. This paper explores how different combinations of virtual awareness cues can provide users with valuable information about their collaborator's attention and actions. In a user study (n = 32, 16 pairs), we compared different combinations of three cues: Field-of-View (FoV) frustum, Eye-gaze ray, and Head-gaze ray against a baseline condition showing only virtual representations of each collaborator's head and hands. Through a collaborative object finding and placing task, the results showed that awareness cues significantly improved user performance, usability, and subjective preferences, with the combination of the FoV frustum and the Head-gaze ray performing best. This work establishes the feasibility of room-scale MR collaboration and the utility of providing virtual awareness cues.
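    Two of the study's cues, the FoV frustum and the head-gaze ray, reduce to simple pose geometry, as the hedged sketch below illustrates. The forward-axis convention and the 90-by-60-degree field of view are assumptions for illustration, not values from the paper.

```python
# Hedged sketch (not the study's code): a head-gaze ray is the head pose's
# forward vector; a FoV-frustum test asks whether a point lies within the
# horizontal/vertical half-angles of that pose.
import numpy as np

def head_gaze_ray(head_position, head_rotation):
    """Return (origin, direction): local forward (-Z here) rotated into world space."""
    forward_local = np.array([0.0, 0.0, -1.0])      # convention: -Z is forward
    direction = head_rotation @ forward_local        # 3x3 rotation matrix
    return head_position, direction / np.linalg.norm(direction)

def in_fov_frustum(head_position, head_rotation, point,
                   h_fov_deg=90.0, v_fov_deg=60.0):
    """True if `point` lies inside the head's field-of-view frustum."""
    local = head_rotation.T @ (point - head_position)  # world -> head frame
    if local[2] >= 0.0:                                # behind the head (-Z forward)
        return False
    h_angle = np.degrees(np.arctan2(abs(local[0]), -local[2]))
    v_angle = np.degrees(np.arctan2(abs(local[1]), -local[2]))
    return h_angle <= h_fov_deg / 2 and v_angle <= v_fov_deg / 2

# A target 2 m ahead and slightly left of an unrotated head is visible:
head_pos, rot = np.zeros(3), np.eye(3)
print(in_fov_frustum(head_pos, rot, np.array([-0.3, 0.0, -2.0])))  # True
```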

    Cognitive-Based Design of an HMI for Telenavigation of a Space Rover

    Human Machine Interface (HMI) design is a critical field of work because no general guidelines or rules have been established. To aid practitioners in designing effective HMIs, different methodologies have been studied. To understand task objectives and plan goal-oriented actions, human operators exploit specific cognitive processes that have to be supported by advanced interfaces. Including cognitive aspects in HMI design generates an information flow that reduces the user's mental workload and increases his/her situation awareness. This paper focuses on the design and testing of a Graphical User Interface (GUI) for the telenavigation of a space rover that prioritizes the cognitive process of the user over the other development guidelines. To achieve this, a Cognitive Task Analysis (CTA) technique, known as Applied Cognitive Work Analysis (ACWA), is combined with a multi-agent empirical test to ensure the GUI's effectiveness. The ACWA allows evaluating mission scenarios, i.e., piloting the rover on the Mars surface, in order to obtain a model of the human cognitive demands that arise in these complex work domains. These demands can be used to obtain an effective information flow between the GUI and the operator. The multi-agent empirical test, on the other hand, provides early feedback on the user's mental workload, aiming to validate the GUI. The result of the methodology is a GUI that eases the information flow through the interface, enhancing the operator's performance.
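    The practical output of an ACWA analysis is a traceable chain from cognitive demands to the information they require and on to presentation concepts. The sketch below models that chain with invented rover examples; it illustrates the traceability idea only and is not an artifact from the paper.

```python
# Illustrative sketch only: ACWA traces each cognitive work requirement to the
# information it needs and then to a GUI element that presents it. The rover
# examples below are invented to show the chain, not taken from the paper.
from dataclasses import dataclass, field

@dataclass
class CognitiveWorkRequirement:
    demand: str                                  # decision the operator must make
    information_needs: list = field(default_factory=list)

@dataclass
class PresentationConcept:
    gui_element: str                             # widget carrying the information
    serves: CognitiveWorkRequirement

# One traceable thread: demand -> information needs -> presentation concept.
cwr = CognitiveWorkRequirement(
    demand="Assess whether the rover can traverse the upcoming slope",
    information_needs=["pitch/roll attitude", "terrain gradient", "wheel slip"],
)
concept = PresentationConcept(
    gui_element="Attitude indicator overlaid on the navigation camera feed",
    serves=cwr,
)
print(f"{concept.gui_element!r} supports: {concept.serves.demand}")
```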

    Gaze and Peripheral Vision Analysis for Human-Environment Interaction: Applications in Automotive and Mixed-Reality Scenarios

    This thesis studies eye-based user interfaces which integrate information about the user's perceptual focus of attention into multimodal systems to enrich interaction with the surrounding environment. We examine two new modalities: gaze input and output in the peripheral field of view. All modalities are considered across the whole spectrum of the mixed-reality continuum. We show the added value of these new forms of multimodal interaction in two important application domains: Automotive User Interfaces and Human-Robot Collaboration. We present experiments that analyze gaze under various conditions and help to design a 3D model for peripheral vision. Furthermore, this work presents several new algorithms for eye-based interaction, such as deictic reference in mobile scenarios, non-intrusive user identification, and exploiting the peripheral field of view for advanced multimodal presentations. These algorithms have been integrated into a number of software tools for eye-based interaction, which are used to implement 15 use cases for intelligent environment applications. These use cases cover a wide spectrum of applications, from spatial interaction with a rapidly changing environment from within a moving vehicle, to mixed-reality interaction between teams of humans and robots.
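    Much of the peripheral-vision work rests on one geometric primitive: the eccentricity angle between the gaze direction and the direction to a stimulus. The sketch below computes it and classifies the result into coarse visual-field regions; the 2- and 8-degree foveal/parafoveal bounds are common approximations assumed here for illustration, not the thesis's 3D model.

```python
# A minimal sketch, not the thesis code: classify a stimulus as foveal,
# parafoveal, or peripheral by its eccentricity relative to the gaze ray.
# The region bounds are assumed rough values from the vision literature.
import numpy as np

def eccentricity_deg(gaze_dir, eye_pos, stimulus_pos):
    """Angle (degrees) between the gaze ray and the eye-to-stimulus direction."""
    to_stim = stimulus_pos - eye_pos
    cosine = np.dot(gaze_dir, to_stim) / (
        np.linalg.norm(gaze_dir) * np.linalg.norm(to_stim))
    return np.degrees(np.arccos(np.clip(cosine, -1.0, 1.0)))

def visual_field_region(ecc_deg):
    """Map eccentricity to a coarse visual-field region (assumed bounds)."""
    if ecc_deg <= 2.0:
        return "foveal"
    if ecc_deg <= 8.0:
        return "parafoveal"
    return "peripheral"

# A cue roughly 30 degrees off-axis lands in the periphery:
ecc = eccentricity_deg(np.array([0.0, 0.0, -1.0]),   # gaze straight ahead
                       np.zeros(3),                   # eye at the origin
                       np.array([-1.15, 0.0, -2.0]))
print(round(ecc, 1), visual_field_region(ecc))        # ~29.9 peripheral
```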