144 research outputs found

    A Study on Body Mapping and Augmentation in Immersive Telepresence Environments (没入型テレプレゼンス環境における身体のマッピングと拡張に関する研究)

    Degree type: Course-based doctorate (課程博士). Examination committee: Prof. Jun Rekimoto (chair), Prof. Ken Sakamura, Prof. Noboru Koshizuka, Prof. Akihiro Nakao, and Prof. Yoichi Sato, all of the University of Tokyo.

    Towards Naturalistic Interfaces of Virtual Reality Systems

    Interaction plays a key role in achieving a realistic experience in virtual reality (VR). Its realization depends on interpreting the intent of human motion to provide input to VR systems. Thus, understanding human motion from a computational perspective is essential to the design of naturalistic interfaces for VR. This dissertation studied three types of human motion in the context of VR: locomotion (walking), head motion, and hand motion. For locomotion, the dissertation presented a machine learning approach for developing a mechanical repositioning technique based on a 1-D treadmill for interacting with a unique new large-scale projective display, called the Wide-Field Immersive Stereoscopic Environment (WISE). The usability of the proposed approach was assessed through a novel user study that asked participants to pursue a rolling ball at variable speed in a virtual scene. In addition, the dissertation studied the role of stereopsis in avoiding virtual obstacles while walking by asking participants to step over obstacles and gaps under both stereoscopic and non-stereoscopic viewing conditions in VR experiments. In terms of head motion, the dissertation presented a head gesture interface for interaction in VR that recognizes head gestures in real time on head-mounted displays (HMDs) using Cascaded Hidden Markov Models. Two experiments were conducted to evaluate the proposed approach: the first assessed its offline classification performance, while the second estimated the latency with which the algorithm recognizes head gestures. The dissertation also conducted a user study that investigated the effects of visual and control latency on teleoperation of a quadcopter using head motion tracked by a head-mounted display. As part of the study, a method for objectively estimating the end-to-end latency in HMDs was presented. For hand motion, the dissertation presented an approach that recognizes dynamic hand gestures to implement a hand gesture interface for VR, based on a static head gesture recognition algorithm. The proposed algorithm was evaluated offline in terms of its classification performance, and a user study was conducted to compare the performance and usability of the head gesture interface, the hand gesture interface, and a conventional gamepad interface for answering Yes/No questions in VR. Overall, the dissertation makes two main contributions toward improving the naturalism of interaction in VR systems. First, the interaction techniques presented can be directly integrated into existing VR systems, offering end users of VR technology more choices for interaction. Second, the results of the user studies of the presented VR interfaces serve as guidelines for VR researchers and engineers designing future VR systems.
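    The cascaded-HMM recognizer described above suggests a compact illustration. The following sketch, which is not the dissertation's code, shows only the classification stage of such a cascade: one Gaussian HMM is fit per gesture class over sequences of per-frame yaw/pitch deltas, and a motion segment is labeled by the model assigning the highest log-likelihood. The use of the hmmlearn library, the two-feature encoding, and the state count are assumptions made for illustration.

```python
# Illustrative sketch (not the dissertation's implementation) of HMM-based
# head gesture classification. A separate segmentation stage, omitted here,
# would first detect candidate motion windows in the cascade.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_gesture_models(sequences_by_label, n_states=4):
    """sequences_by_label maps a gesture name (e.g., "nod", "shake") to a
    list of (T, 2) arrays of per-frame yaw/pitch deltas in radians."""
    models = {}
    for label, seqs in sequences_by_label.items():
        X = np.concatenate(seqs)          # stack all frames of all sequences
        lengths = [len(s) for s in seqs]  # per-sequence boundaries for fit()
        m = GaussianHMM(n_components=n_states, covariance_type="diag",
                        n_iter=100, random_state=0)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify(models, segment):
    """segment is a (T, 2) array for one detected motion window; returns the
    gesture whose HMM assigns it the highest log-likelihood."""
    return max(models, key=lambda label: models[label].score(segment))
```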

    Augmented Reality and Robotics: A Survey and Taxonomy for AR-enhanced Human-Robot Interaction and Robotic Interfaces

    This paper contributes to a taxonomy of augmented reality and robotics based on a survey of 460 research papers. Augmented and mixed reality (AR/MR) have emerged as a new way to enhance human-robot interaction (HRI) and robotic interfaces (e.g., actuated and shape-changing interfaces). Recently, an increasing number of studies in HCI, HRI, and robotics have demonstrated how AR enables better interactions between people and robots. However, this research often remains focused on individual explorations, and key design strategies and research questions are rarely analyzed systematically. In this paper, we synthesize and categorize this research field along the following dimensions: 1) approaches to augmenting reality; 2) characteristics of robots; 3) purposes and benefits; 4) classification of presented information; 5) design components and strategies for visual augmentation; 6) interaction techniques and modalities; 7) application domains; and 8) evaluation strategies. We formulate key challenges and opportunities to guide and inform future research in AR and robotics.
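    As a rough illustration of how these eight dimensions can support systematic coding of a corpus, the sketch below encodes each dimension as a field of a record; the example values are hypothetical and are not drawn from the survey.

```python
# Hypothetical coding schema mirroring the paper's eight taxonomy dimensions.
from dataclasses import dataclass

@dataclass
class ARRoboticsPaper:
    title: str
    ar_approach: str    # 1) approach to augmenting reality (e.g., HMD, projection)
    robot: str          # 2) characteristics of the robot
    purpose: str        # 3) purpose and benefit of the augmentation
    information: str    # 4) classification of presented information
    visual_design: str  # 5) design components/strategies for visual augmentation
    interaction: str    # 6) interaction techniques and modalities
    domain: str         # 7) application domain
    evaluation: str     # 8) evaluation strategy

# Example record with invented values, purely to show the schema in use.
example = ARRoboticsPaper(
    title="An AR teleoperation study (hypothetical)",
    ar_approach="optical see-through HMD", robot="mobile ground robot",
    purpose="improve situational awareness", information="robot motion intent",
    visual_design="path overlay", interaction="gaze and gesture",
    domain="teleoperation", evaluation="controlled user study")
```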

    Framework for autonomous navigation through MS HoloLenses

    In recent years, the immense development of virtual reality technologies has come to dominate the technological community. The possibilities that the virtual reality family brings to the table pose a life-changing experience for both daily and industrial life. More particularly, Augmented Reality (AR) is considered by a large portion of the scientific community to be the reigning technology of User Interfaces (UIs). The key feature of AR is that it adds digital content to the real environment without isolating the user from it, providing a very realistic interaction close to the user's perception. Given these features, AR technology can be used, for instance, for enhanced learning, machine control, and human/vehicle navigation. For example, an AR UI deployed on AR glasses can help an operator control a machine easily, and without risk, from a distance. In addition, this functionality can be enriched by using an unmanned vehicle, a robot, as the machine to be controlled. Robotics is a field of technology whose intervention in people's lives seems unstoppable in more and more aspects. Nowadays, unmanned vehicles are used in a majority of industrial operations and daily routines. Consider a situation in which harmful waste must be extracted from a specific area. The use of an unmanned vehicle is mandatory for the collection and removal of the waste, and an Augmented Reality UI for the remote control of the unmanned vehicle (UV) allows the operator to make the most of his or her skills without risking his or her life, offering very natural and intuitive control. In this thesis, we examine the scenario where the user controls and navigates an unmanned ground vehicle with the aid of an AR headset. The AR headset projects a specially designed UI for the robot's movement control. The vehicle's navigation depends solely on the user's perception and experience; this is where AR technology comes in handy, as it does not impair the user's vision or perception of the surroundings. More specifically, a series of experiments is carried out in which the user wears the AR headset and navigates the robot by giving a series of movement commands, with the robot always remaining in the user's field of view. Experiments were executed in both simulated and real-world settings. For the simulation, the Gazebo simulator was used with a virtual Turtlebot 2 running the ROS operating system, and Unity was used to simulate the AR headset. The real-world experiments were executed with a Turtlebot 2 running ROS and the Microsoft HoloLens AR headset, on which our AR application was deployed.
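    To make the control path concrete: on a ROS-based Turtlebot 2, discrete movement commands like those issued from an AR UI typically end up as velocity messages on a topic such as /cmd_vel. The sketch below is a minimal illustration of that bridge under ROS 1 (rospy), not the thesis's implementation; the topic name, speed constants, and command vocabulary are assumptions.

```python
# Minimal sketch: map discrete AR-UI commands to Twist velocity messages.
import rospy
from geometry_msgs.msg import Twist

LINEAR_SPEED = 0.2   # m/s, assumed safe default for a Turtlebot-class base
ANGULAR_SPEED = 0.5  # rad/s

def command_to_twist(command):
    """Translate one discrete command string into a Twist message."""
    twist = Twist()  # all fields default to zero, i.e., "stop"
    if command == "forward":
        twist.linear.x = LINEAR_SPEED
    elif command == "backward":
        twist.linear.x = -LINEAR_SPEED
    elif command == "left":
        twist.angular.z = ANGULAR_SPEED
    elif command == "right":
        twist.angular.z = -ANGULAR_SPEED
    return twist

if __name__ == "__main__":
    rospy.init_node("ar_teleop_bridge")
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    rate = rospy.Rate(10)  # republish at 10 Hz so the base keeps moving
    while not rospy.is_shutdown():
        # In a real system the command would come from the AR UI;
        # "forward" is a placeholder here.
        pub.publish(command_to_twist("forward"))
        rate.sleep()
```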

    Gesture Based Control of Semi-Autonomous Vehicles

    The objective of this investigation is to explore the use of hand gestures to control semi-autonomous vehicles, such as quadcopters, using realistic, physics-based simulations. This involves identifying natural gestures to control basic functions of a vehicle, such as maneuvering and onboard equipment operation, and building simulations using the Unity game engine to investigate preferred use of those gestures. In addition to creating a realistic operating experience, human factors associated with limitations on physical hand motion and information management are also considered in the simulation development process. Testing with external participants using a recreational quadcopter simulation built in Unity was conducted to assess the suitability of the simulation and preferences between a joystick approach and the gesture-based approach. Initial feedback indicated that the simulation represented the actual vehicle performance well and that the joystick was preferred over the gesture-based approach. Improvements to the gesture-based control are documented as additional features are added to the simulation, such as basic maneuver training and additional vehicle positioning information, to help users learn the gesture-based interface, together with active control concepts to interpret and apply vehicle forces and torques. Tests were also conducted with an actual ground vehicle to investigate whether knowledge and skill from the simulated environment transfer to a real-life scenario. To assess this, an immersive virtual reality (VR) simulation was built in Unity as a training environment for learning to control a remote-control car using gestures, followed by control of the actual ground vehicle. Observations and participant feedback indicated that range of hand movement and hand positions transferred well to the actual demonstration. This illustrated that the VR simulation environment provides a suitable learning experience and an environment from which to assess human performance, thus also validating the observations from earlier tests. Overall results indicate that the gesture-based approach holds promise given the emergence of new technology, but additional work needs to be pursued. This includes algorithms to process gesture data to provide more stable and precise vehicle commands, and training environments to familiarize users with this new interface concept.
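    One of the directions named in the conclusions, processing gesture data to yield more stable and precise vehicle commands, can be sketched with standard signal conditioning. The example below applies an exponential moving-average low-pass filter and a deadzone to a hand-tilt signal before mapping it to a speed command; the constants and the tilt-to-command mapping are illustrative assumptions, not the study's algorithm.

```python
# Illustrative stabilization of a noisy hand-tilt signal for vehicle control.
class GestureCommandFilter:
    def __init__(self, alpha=0.2, deadzone=0.05, gain=1.0):
        self.alpha = alpha        # smoothing factor, 0 < alpha <= 1
        self.deadzone = deadzone  # tilt magnitude (rad) treated as noise
        self.gain = gain          # tilt-to-command scaling
        self._state = 0.0         # last smoothed value

    def update(self, raw_tilt):
        # Exponential moving average: blend new sample with previous state.
        self._state += self.alpha * (raw_tilt - self._state)
        # Deadzone: small tilts near neutral produce no command.
        if abs(self._state) < self.deadzone:
            return 0.0
        return self.gain * self._state

# Example: per-frame hand pitch (radians) -> forward-speed command.
pitch_filter = GestureCommandFilter(alpha=0.3, deadzone=0.04, gain=2.0)
for sample in [0.00, 0.02, 0.10, 0.12, 0.11, 0.13]:
    print(round(pitch_filter.update(sample), 3))
```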

Human Factors: Sustainable life and mobility
