402 research outputs found

    Immersive Teleoperation of the Eye Gaze of Social Robots Assessing Gaze-Contingent Control of Vergence, Yaw and Pitch of Robotic Eyes

    This paper presents a new teleoperation system, called stereo gaze-contingent steering (SGCS), able to seamlessly control the vergence, yaw and pitch of the eyes of a humanoid robot (here an iCub robot) from the actual gaze direction of a remote pilot. The video stream captured by the cameras embedded in the mobile eyes of the iCub is fed into an HTC Vive head-mounted display equipped with an SMI binocular eye-tracker. The SGCS achieves an effective coupling between the eye-tracked gaze of the pilot and the robot's eye movements. SGCS both ensures a faithful reproduction of the pilot's eye movements, which is a prerequisite for the readability of the robot's gaze patterns by its interlocutor, and maintains the pilot's oculomotor visual cues, which avoids fatigue and sickness due to sensorimotor conflicts. We assess the precision of this servo-control by asking several pilots to gaze towards known objects positioned in the remote environment. We demonstrate that we can control vergence with precision similar to that of the eyes' azimuth and elevation. This system opens the way for robot-mediated human interactions in personal space, notably when objects in the shared working space are involved.
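    The geometry behind such a coupling can be sketched in a few lines: given a 3D fixation point estimated by a binocular eye-tracker, the yaw, pitch and vergence commands for the robot's eyes follow from elementary trigonometry. This is an illustrative sketch only; the coordinate conventions, the interocular distance and the function names are assumptions, not the SGCS implementation.

```python
import math

def _normalize(v):
    """Return v scaled to unit length."""
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def gaze_to_eye_angles(fixation, ipd=0.064):
    """Map a 3D fixation point (x right, y up, z forward, metres) to a
    cyclopean yaw, pitch, and a binocular vergence angle (radians).
    Hypothetical helper; not the authors' SGCS controller."""
    x, y, z = fixation
    yaw = math.atan2(x, z)                    # azimuth of the cyclopean gaze ray
    pitch = math.atan2(y, math.hypot(x, z))   # elevation of the gaze ray
    half = ipd / 2.0
    # Gaze rays from each eye (offset along x) towards the fixation point.
    left = _normalize((x + half, y, z))
    right = _normalize((x - half, y, z))
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(left, right))))
    vergence = math.acos(dot)                 # angle between the two gaze rays
    return yaw, pitch, vergence
```

    For a fixation point one metre straight ahead, yaw and pitch are zero and the vergence equals 2·atan(ipd/2), about 3.7 degrees for a 64 mm interocular distance.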

    A 360 VR and Wi-Fi Tracking Based Autonomous Telepresence Robot for Virtual Tour

    This study proposes a novel mobile robot teleoperation interface that demonstrates the applicability of a robot-aided remote telepresence system with a virtual reality (VR) device to a virtual tour scenario. To improve realism and provide an intuitive replica of the remote environment for the user interface, the implemented system automatically moves a mobile robot (viewpoint) while displaying a 360-degree live video streamed from the robot to a VR device (Oculus Rift). Upon the user choosing a destination location from a given set of options, the robot generates a route based on a shortest-path graph and travels along that route using a wireless signal tracking method that depends on measuring the direction of arrival (DOA) of radio signals. This paper presents an overview of the system and architecture, and discusses its implementation aspects. Experimental results show that the proposed system is able to move to the destination stably using the signal tracking method, and that at the same time, the user can remotely control the robot through the VR interface.
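    The route-generation step can be illustrated with a standard shortest-path computation over a waypoint graph. The sketch below uses Dijkstra's algorithm; the adjacency-list representation, node names and costs are assumptions for illustration, as the paper does not specify its exact data structures.

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra over a waypoint graph given as {node: [(neighbour, cost), ...]}.
    Returns (path, total_cost). Illustrative sketch of shortest-path routing;
    not the paper's implementation."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]                 # (distance so far, node)
    visited = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue                    # stale queue entry
        visited.add(u)
        if u == goal:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Walk predecessor links back from the goal to recover the route.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path)), dist[goal]
```

    In a deployed system each graph edge would correspond to a navigable corridor segment, with the DOA-based signal tracking handling the local motion along each edge.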

    Effectiveness of a Wii Balance Board as a locomotion control method for a virtual reality telepresence robot

    Abstract. While virtual reality can greatly contribute to the feeling of presence when operating a telepresence robot, it can be difficult to implement in a manner that keeps the user comfortable. One such task is choosing a locomotion control method. Traditional locomotion control methods for telepresence robots, such as joysticks, may be easy to use but lack immersion. Non-traditional locomotion control methods, such as treadmill-type systems, may increase immersion, but their equipment cost is too high for many users. In this study, we explored whether the Wii Balance Board could be a suitable locomotion control method for a virtual reality telepresence robot, as it might offer a low-cost and comfortable leaning-based control method. The Wii Balance Board was compared against joysticks, which were chosen because they are among the most common locomotion control methods in virtual reality. For the experiment, we created a simulated environment in which the subjects had to steer a virtual robot through an assigned path with various obstacles. A 3D model of the University of Oulu was used as the virtual environment, as it was readily available and represented a possible use-case environment for a telepresence robot. The experiment consisted of nine three-part runs. After each run, the subjects filled out a form about their preferences, and performance data was collected during each run. We had planned to run experiments with 40 people, but due to the COVID-19 outbreak, we were forced to conduct the tests with only two researchers instead. After analyzing the results, we conclude that the Wii Balance Board is not suitable for controlling virtual reality telepresence robots in the tested environments. The Wii Balance Board was fatiguing to use after moderate periods of time and did not offer accurate enough control to be used in scenarios other than open environments. For future studies, we suggest exploring alternatives to joysticks, such as a balance board better designed for leaning, to compensate for the fatigue caused by constant leaning.

    Anthropomorphic Robot Design and User Interaction Associated with Motion

    Though in its original concept a robot was conceived to have some human-like shape, most robots now in use have specific industrial purposes and do not closely resemble humans. Nevertheless, robots that resemble the human form in some way have continued to be introduced; they are called anthropomorphic robots. The fact that the user interface to all robots is now highly mediated means that the form of the user interface is not necessarily connected to the robot's form, human or otherwise. Consequently, the unique way the design of anthropomorphic robots affects their user interaction is through their general appearance and the way they move. These robots' human-like appearance acts as a kind of generalized predictor that gives their operators, and those with whom they may directly work, the expectation that they will behave to some extent like a human. This expectation is especially prominent for interactions with social robots, which are built to enhance it. Often interaction with them may be mainly cognitive, because they are not necessarily kinematically intricate enough for complex physical interaction; their body movement, for example, may be limited to simple wheeled locomotion. An anthropomorphic robot with human form, however, can be kinematically complex and designed, for example, to reproduce the details of human limb, torso, and head movement. Because of the mediated nature of robot control, there remains in general no necessary connection between the specific form of the user interface and the anthropomorphic form of the robot. But anthropomorphic kinematics and dynamics imply that the impact of the design shows up in the way the robot moves. The central finding of this report is that the control of this motion is a basic design element through which the anthropomorphic form can affect user interaction. 
In particular, designers of anthropomorphic robots can take advantage of the inherent human-like movement to 1) improve the user's direct manual control over robot limbs and body positions, 2) improve users' ability to detect anomalous robot behavior that could signal malfunction, and 3) enable users to better infer the intent of robot movement. These three benefits of anthropomorphic design are inherent implications of the anthropomorphic form, but they need to be recognized by designers as part of anthropomorphic design and explicitly enhanced to maximize their beneficial impact. Examples of such enhancements are provided in this report. If implemented, these benefits of anthropomorphic design can help reduce the risk of Inadequate Design of Human and Automation Robotic Integration (HARI) associated with the HARI-01 gap by providing efficient and dexterous operator control over robots and by improving operator ability to detect malfunctions and understand the intention of robot movement.

    Leaning-based control of an immersive telepresence robot

    Abstract. This thesis presents an implementation of a leaning-based control method that allows using the body to drive a telepresence robot. The implementation consists of a control mapping that drives a differential-drive telepresence robot using a Nintendo Wii Balance Board (Wiiboard). The motivation for using a balance board as a control device was to reduce Virtual Reality (VR) sickness by using small movements of the body that match the motions seen on the screen; matching body movement to the motion seen on the screen could mitigate the sensory conflict between the visual and vestibular organs, which is generally held to be one of the main causes of VR sickness. A user study (N=32) was conducted to compare the balance board to joysticks, in which the participants drove a simulated telepresence robot in a Virtual Environment (VE) along a marked path using both control methods. The results showed that the joystick did not cause any more VR sickness in the participants than the balance board, and the board proved to be statistically significantly more difficult to use, both subjectively and objectively. The balance board was unfamiliar to the participants, and it was reported as hard to control. Analyzing the open-ended questions revealed a potential relationship between perceived difficulty and VR sickness, meaning that difficulty possibly affects sickness. The balance board's potential to reduce VR sickness was held back by its difficulty of use; thus, making the board easier to use is the key to enabling its potential. A few suggestions were presented to achieve this goal.
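    A leaning-based control mapping of this kind can be sketched as follows: the four load cells of the balance board yield a centre of pressure, which is mapped to linear and angular velocity and then to the wheel speeds of a differential-drive robot. The sensor layout, gains and deadzone below are assumptions for illustration, not the values used in the thesis.

```python
def board_to_wheel_speeds(tl, tr, bl, br, max_speed=0.5, deadzone=0.05):
    """Map the four Wii Balance Board load-cell readings (kg) at the
    top-left, top-right, bottom-left and bottom-right corners to
    (left, right) wheel speeds of a differential-drive robot.
    Illustrative sketch; gains, deadzone and axis conventions are
    assumptions, not the thesis mapping."""
    total = tl + tr + bl + br
    if total <= 0:
        return 0.0, 0.0                        # nobody standing on the board
    # Centre of pressure in [-1, 1]: x > 0 means leaning right,
    # y > 0 means leaning forward (towards the top sensors).
    x = ((tr + br) - (tl + bl)) / total
    y = ((tl + tr) - (bl + br)) / total
    # Deadzone so that standing still does not drive the robot.
    if abs(x) < deadzone:
        x = 0.0
    if abs(y) < deadzone:
        y = 0.0
    linear = y * max_speed                     # lean forward/back -> drive
    angular = x * max_speed                    # lean sideways -> turn
    return linear + angular, linear - angular  # left wheel, right wheel
```

    With this convention, an even forward lean drives both wheels equally, while a sideways lean speeds up one wheel and slows the other, turning the robot in place when the user leans purely sideways.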

    A Study on Body Mapping and Augmentation in Immersive Telepresence Environments

    Degree type: Doctoral degree by coursework. Dissertation committee: (Chair) Professor Jun Rekimoto, Professor Ken Sakamura, Professor Noboru Koshizuka, Professor Akihiro Nakao, Professor Yoichi Sato, all of the University of Tokyo. University of Tokyo.

    A helmet mounted display to adapt the telerobotic environment to human vision

    A Helmet Mounted Display system has been developed. It provides the capability to display stereo images with the viewpoint tied to the subject's head orientation. This type of display might be useful in a telerobotic environment, provided the correct operating parameters are known. The effects of update frequency were tested using a 3D tracking task. The effects of blur were tested using both tracking and pick-and-place tasks. In both cases, researchers found that operator performance can be degraded if the correct parameters are not used. Researchers are also using the display to explore the use of head movements as part of gaze as subjects search their visual field for target objects.

    Perception-driven approaches to real-time remote immersive visualization

    In remote immersive visualization systems, real-time 3D perception through RGB-D cameras, combined with modern Virtual Reality (VR) interfaces, enhances the user's sense of presence in a remote scene through 3D reconstruction rendered in an immersive display, particularly in situations where there is a need to visualize, explore and perform tasks in environments that are inaccessible, too hazardous, or too distant. However, a remote visualization system requires that the entire pipeline, from 3D data acquisition to VR rendering, satisfy requirements on speed, throughput, and visual realism. Particularly when using point clouds, there is a fundamental quality difference between the acquired data of the physical world and the displayed data, because network latency and throughput limitations negatively impact the sense of presence and provoke cybersickness. This thesis presents state-of-the-art research addressing these problems by taking the human visual system as inspiration, from sensor data acquisition to VR rendering. The human visual system does not have uniform vision across the field of view: it has the sharpest visual acuity at the center of the field of view, and the acuity falls off towards the periphery. Peripheral vision provides lower resolution to guide eye movements so that central vision visits all the crucial, interesting parts. As a first contribution, the thesis develops remote visualization strategies that utilize this acuity fall-off to facilitate the processing, transmission, buffering, and rendering in VR of 3D reconstructed scenes while simultaneously reducing throughput requirements and latency. As a second contribution, the thesis looks into attentional mechanisms to select and draw user engagement to specific information from the dynamic spatio-temporal environment. It proposes a strategy to analyze the remote scene with respect to its 3D structure and layout, and the spatial, functional, and semantic relationships between objects in the scene. The strategy focuses on analyzing the scene with models of human visual perception, allocating a larger proportion of computational resources to objects of interest and creating a more realistic visualization. As a supplementary contribution, a new volumetric point-cloud density-based Peak Signal-to-Noise Ratio (PSNR) metric is proposed to evaluate the introduced techniques. An in-depth evaluation of the presented systems, a comparative examination of the proposed point-cloud metric, user studies, and experiments demonstrate that the methods introduced in this thesis are visually superior while significantly reducing latency and throughput.
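    The acuity fall-off idea can be illustrated with a simple per-point sampling rule: points near the gaze direction are always kept, while the keep probability decreases with angular eccentricity. The inverse-linear fall-off shape and the constants below are assumptions for illustration, not the model used in the thesis.

```python
import math

def _normalize(v):
    """Return v scaled to unit length."""
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def foveated_keep_probability(point, gaze_dir, fovea_deg=10.0, floor=0.1):
    """Probability of keeping a point when subsampling a cloud, as a
    function of its angular eccentricity from the gaze direction.
    The fall-off shape and constants are illustrative assumptions."""
    dot = sum(a * b for a, b in zip(_normalize(point), _normalize(gaze_dir)))
    ecc = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    if ecc <= fovea_deg:
        return 1.0                             # foveal region: keep everything
    return max(floor, fovea_deg / ecc)         # acuity-like fall-off, clamped
```

    A cloud can then be thinned by keeping each point with this probability (for example, `random.random() < p`), concentrating detail and bandwidth where the user is actually looking.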