20 research outputs found

    Immersive Teleoperation of the Eye Gaze of Social Robots: Assessing Gaze-Contingent Control of Vergence, Yaw and Pitch of Robotic Eyes

    This paper presents a new teleoperation system, called stereo gaze-contingent steering (SGCS), able to seamlessly control the vergence, yaw and pitch of the eyes of a humanoid robot (here an iCub robot) from the actual gaze direction of a remote pilot. The video stream captured by the cameras embedded in the mobile eyes of the iCub is fed into an HTC Vive head-mounted display equipped with an SMI binocular eye tracker. The SGCS achieves an effective coupling between the eye-tracked gaze of the pilot and the robot's eye movements. SGCS both ensures a faithful reproduction of the pilot's eye movements, which is a prerequisite for the readability of the robot's gaze patterns by its interlocutor, and maintains the pilot's oculomotor visual cues, which avoids fatigue and sickness due to sensorimotor conflicts. We assess the precision of this servo-control by asking several pilots to gaze towards known objects positioned in the remote environment. We demonstrate that vergence can be controlled with a precision similar to that of the eyes' azimuth and elevation. This system opens the way for robot-mediated human interactions in personal space, notably when objects in the shared working space are involved.
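
    The binocular geometry behind this kind of coupling is compact enough to sketch. The snippet below is a rough illustration only (not the paper's SGCS controller; the function names, the leftward-positive angle convention and the default IPD are assumptions): it splits per-eye azimuths into version and vergence components, and computes the vergence angle a given fixation distance demands.

```python
import math

# Hypothetical helpers illustrating binocular vergence/version geometry;
# not the paper's SGCS servo loop.

def version_vergence(theta_left_deg, theta_right_deg):
    """Split per-eye azimuths (leftward-positive, degrees) into the
    conjugate (version) and disconjugate (vergence) components."""
    version = 0.5 * (theta_left_deg + theta_right_deg)
    vergence = theta_left_deg - theta_right_deg  # > 0 when converging
    return version, vergence

def vergence_for_distance(fixation_dist_m, ipd_m=0.063):
    """Vergence angle (degrees) needed to fixate a point straight ahead
    at a given distance, from the interpupillary distance (IPD)."""
    return math.degrees(2.0 * math.atan2(ipd_m / 2.0, fixation_dist_m))

# Example: eyes rotated 12 and 8 degrees leftward, i.e. converging.
version, vergence = version_vergence(12.0, 8.0)
print(f"version={version:.1f} deg, vergence={vergence:.1f} deg")
print(f"vergence at 0.5 m: {vergence_for_distance(0.5):.1f} deg")
```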

    A Study on Body Mapping and Augmentation in Immersive Telepresence Environments

    Degree type: doctoral degree by coursework. Examination committee: Professor Jun Rekimoto (chair), Professor Ken Sakamura, Professor Noboru Koshizuka, Professor Akihiro Nakao and Professor Yoichi Sato, the University of Tokyo.

    Towards Naturalistic Interfaces of Virtual Reality Systems

    Interaction plays a key role in achieving a realistic experience in virtual reality (VR). Its realization depends on interpreting the intents of human motions to give inputs to VR systems. Thus, understanding human motion from the computational perspective is essential to the design of naturalistic interfaces for VR. This dissertation studied three types of human motion in the context of VR: locomotion (walking), head motion and hand motion. For locomotion, the dissertation presented a machine learning approach for developing a mechanical repositioning technique based on a 1-D treadmill for interacting with a unique new large-scale projective display, called the Wide-Field Immersive Stereoscopic Environment (WISE). The usability of the proposed approach was assessed through a novel user study that asked participants to pursue a rolling ball at variable speed in a virtual scene. In addition, the dissertation studied the role of stereopsis in avoiding virtual obstacles while walking by asking participants to step over obstacles and gaps under both stereoscopic and non-stereoscopic viewing conditions in VR experiments. In terms of head motion, the dissertation presented a head gesture interface for interaction in VR that recognizes head gestures in real time on head-mounted displays (HMDs) using Cascaded Hidden Markov Models. Two experiments were conducted to evaluate the proposed approach: the first assessed its offline classification performance, while the second estimated the latency of the algorithm in recognizing head gestures. The dissertation also conducted a user study that investigated the effects of visual and control latency on teleoperation of a quadcopter using head motion tracked by a head-mounted display; as part of the study, a method for objectively estimating the end-to-end latency in HMDs was presented. For hand motion, the dissertation presented an approach that recognizes dynamic hand gestures to implement a hand gesture interface for VR, based on a static head gesture recognition algorithm. The proposed algorithm was evaluated offline in terms of its classification performance, and a user study was conducted to compare the performance and usability of the head gesture interface, the hand gesture interface and a conventional gamepad interface for answering Yes/No questions in VR. Overall, the dissertation makes two main contributions toward more naturalistic interaction in VR systems. First, the interaction techniques presented in the dissertation can be directly integrated into existing VR systems, offering end users of VR technology more choices for interaction. Second, the results of the user studies of the presented VR interfaces serve as guidelines for VR researchers and engineers designing future VR systems.
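
    The dissertation's recognizer uses Cascaded Hidden Markov Models; a full cascade is beyond an abstract, but a minimal single-stage variant conveys the core idea: train one HMM per gesture class on head-motion sequences and label a new sequence by maximum log-likelihood. The sketch below assumes the hmmlearn library and a hypothetical yaw/pitch/roll-velocity feature layout; it is not the dissertation's algorithm.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # pip install hmmlearn

def train_models(train_data, n_states=3):
    """train_data maps a gesture name (e.g. 'nod', 'shake'; names are
    assumptions) to a list of (T_i, 3) arrays of head yaw/pitch/roll
    velocities. Fits one HMM per gesture class."""
    models = {}
    for name, seqs in train_data.items():
        X = np.concatenate(seqs)          # stack sequences row-wise
        lengths = [len(s) for s in seqs]  # so hmmlearn can split them
        model = GaussianHMM(n_components=n_states,
                            covariance_type="diag", n_iter=50)
        model.fit(X, lengths)
        models[name] = model
    return models

def classify(models, seq):
    """Label a (T, 3) sequence with the gesture whose HMM assigns it
    the highest log-likelihood."""
    return max(models, key=lambda name: models[name].score(seq))
```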

    Visuo-haptic Command Interface for Control-Architecture Adaptable Teleoperation

    Robotic teleoperation is the commanding of a remote robot. Depending on the operator involvement a teleoperation task requires, the remote site is more or less autonomous. On the operator site, input devices record control-related information from the operator and display devices present it back. Kinaesthetic devices stimulate the haptic senses, conveying information through the sensing of displacement, velocity and acceleration within muscles, tendons and joints. These devices have been shown to excel in tasks with low autonomy, while touchscreen-based devices are beneficial in highly autonomous tasks; however, neither performs reliably over a broad range. This thesis examines the feasibility of the 'Motion Console Application for Novel Virtual, Augmented and Avatar Systems' (Motion CANVAAS), which unifies the input/display capabilities of kinaesthetic and visual touchscreen-based devices in order to bridge this gap. This work describes the design, construction and development of the Motion CANVAAS and conducts an initial validation. The Motion CANVAAS was evaluated via two pilot studies, each based on a different virtual environment: a modified Tetris application and a racing kart simulator. The target research variables were the coupling of input/display capabilities and the effect of the application-specific kinaesthetic feedback. Both studies proved the concept to be a viable solution as a haptic input/output device and indicated potential advantages over current solutions, while also exposing some of the system's limitations. With the insight gained from this work, both the benefits and the limitations will be addressed in future research. Additionally, a full user study will be conducted to shed light on the capabilities and performance of the device in teleoperation over a broad range of autonomy.
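
    Application-specific kinaesthetic feedback of the kind varied in these pilot studies is conventionally rendered with an impedance model. Below is a minimal sketch of a textbook 1-D virtual-wall law, not the Motion CANVAAS's actual controller; the gains and the device API in the usage comment are illustrative assumptions.

```python
def wall_force(pos_m, vel_mps, wall_pos_m=0.0, k=800.0, b=2.0):
    """1-D virtual wall at wall_pos_m. Returns the feedback force (N)
    to command on the device; k is stiffness (N/m), b damping (N*s/m).
    All gains are illustrative, not the Motion CANVAAS's."""
    penetration = pos_m - wall_pos_m
    if penetration <= 0.0:
        return 0.0                         # free space: render no force
    return -k * penetration - b * vel_mps  # spring + damper push-back

# Typical use inside a ~1 kHz haptic servo loop (device API hypothetical):
#   force = wall_force(device.position(), device.velocity())
#   device.apply_force(force)
```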

    Dynamic virtual reality user interface for teleoperation of heterogeneous robot teams

    This research investigates the possibility of improving current teleoperation control for heterogeneous robot teams using modern Human-Computer Interaction (HCI) techniques such as Virtual Reality. It proposes a dynamic teleoperation Virtual Reality User Interface (VRUI) framework to improve the current approach to teleoperating heterogeneous robot teams.

    Haptics: Science, Technology, Applications

    This open access book constitutes the proceedings of the 12th International Conference on Human Haptic Sensing and Touch Enabled Computer Applications, EuroHaptics 2020, held in Leiden, The Netherlands, in September 2020. The 60 papers presented in this volume were carefully reviewed and selected from 111 submissions. They were organized in topical sections on haptic science, haptic technology and haptic applications. This year's focus is on accessibility.

    Virtual reality and body rotation: 2 flight experiences in comparison

    Embodied interfaces, represented by devices that incorporate bodily motion and proprioceptive stimulation, are promising for Virtual Reality (VR) because they can improve immersion and user experience while reducing simulator sickness compared to more traditional handheld interfaces (e.g., gamepads). The aim of this study is to evaluate a novel embodied interface called VitruvianVR. The machine is composed of two separate rings that allow its users to bodily rotate about three different axes. The suitability of the VitruvianVR was tested in a Virtual Reality flight scenario. To this end, we compared the VitruvianVR to a gamepad using performance measures (i.e., accuracy, fails), head movements and body position. Furthermore, data from questionnaires on sense of presence, user experience, cognitive load, usability and cybersickness were collected.
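
    The abstract does not spell out the VitruvianVR's kinematics, but a two-ring gimbal's orientation is conventionally the composition of its ring rotations. The sketch below is an assumption-laden illustration (the axis assignments, function name and the idea that the third axis comes from rolling the body inside the inner ring are all guesses), composing an outer yaw ring with a carried inner pitch ring using SciPy.

```python
from scipy.spatial.transform import Rotation as R

def rig_orientation(outer_deg, inner_deg):
    """Compose the rig orientation from two ring angles: an outer ring
    spinning about the world vertical (yaw) carries an inner ring that
    spins about its own horizontal axis (pitch). Rolling the body inside
    the inner ring would contribute the third axis."""
    outer = R.from_euler('z', outer_deg, degrees=True)  # world yaw
    inner = R.from_euler('x', inner_deg, degrees=True)  # carried pitch
    return outer * inner  # apply inner first, in the outer ring's frame

print(rig_orientation(45.0, 30.0).as_quat())  # quaternion as [x, y, z, w]
```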