
    Towards Intelligent Telerobotics: Visualization and Control of Remote Robot

    Human-machine cooperative robotics, or co-robotics, has been recognized as the next generation of robotics. In contrast to current systems, which use limited-reasoning strategies or address problems in narrow contexts, new co-robot systems will be characterized by their flexibility, resourcefulness, varied modeling and reasoning approaches, and use of real-world data in real time, demonstrating a level of intelligence and adaptability seen in humans and animals. This research focuses on two sub-fields of co-robotics: teleoperation and telepresence. We first explore teleoperation using mixed-reality techniques. We propose a new type of display, the hybrid-reality display (HRD), which uses a commodity projection device to project captured video frames onto a 3D replica of the actual target surface, providing a direct alignment between the frame of reference of the human subject and that of the displayed image. The advantage of this approach is that the user wears no device, so intrusiveness is minimal and the eyes can focus naturally; the field of view is also significantly increased. From a user-centered design standpoint, the HRD is motivated by teleoperation accidents and incidents and by user research in military reconnaissance and similar domains, where teleoperation is compromised by the keyhole effect arising from a limited field of view. The main technical contribution of the proposed HRD system is the multi-device calibration of the motion sensor, projector, cameras, and robotic arm; given the purpose of the system, calibration accuracy must be kept within the millimeter level. Follow-up HRD research focuses on high-accuracy 3D reconstruction of the replica with commodity devices for better alignment of the video frames. Conventional 3D scanners either lack depth resolution or are very expensive, so we propose a structured-light 3D sensing system with accuracy within 1 millimeter that is robust to global illumination and surface reflection; extensive user studies confirm the performance of the proposed algorithm. To compensate for the loss of synchronization between the local and remote stations caused by latency in data sensing and communication, a one-step-ahead predictive control algorithm is presented: the latency between human control and robot movement is formulated as a system of linear equations with a smoothing coefficient ranging from 0 to 1, and the control law is obtained by optimizing a cost function. We then explore telepresence. Many hardware designs allow a camera to be placed optically directly behind the screen, enabling two-way video teleconferencing that maintains eye contact; however, the image from the see-through camera usually exhibits imaging artifacts such as a low signal-to-noise ratio, incorrect color balance, and loss of detail. We therefore develop a novel image enhancement framework that uses an auxiliary color+depth camera mounted on the side of the screen. By fusing the information from both cameras, we significantly improve the quality of the see-through image. Experimental results demonstrate that our fusion method compares favorably against traditional image enhancement and warping methods that use only a single image.
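
    The abstract names a one-step-ahead predictor with a smoothing coefficient in [0, 1] and a cost-function formulation, but not the exact equations. The sketch below is therefore only a minimal Python illustration under assumed forms: the predictor is an exponential-smoothing update, and the coefficient is chosen by grid search over a quadratic one-step prediction-error cost. The names predict_one_step and fit_alpha, the candidate grid, and the synthetic joystick trace are illustrative, not the thesis's actual formulation.

```python
import numpy as np

def predict_one_step(history, alpha):
    """One-step-ahead prediction via exponential smoothing.

    alpha in [0, 1] is the smoothing coefficient: alpha -> 1 trusts the
    newest command, alpha -> 0 trusts the running estimate.
    """
    estimate = history[0]
    for sample in history[1:]:
        estimate = alpha * sample + (1.0 - alpha) * estimate
    return estimate

def fit_alpha(trace, candidates=np.linspace(0.0, 1.0, 101)):
    """Pick alpha by minimising a quadratic cost: the summed squared
    one-step prediction error over a recorded command trace."""
    def cost(alpha):
        return sum(
            (predict_one_step(trace[: t + 1], alpha) - trace[t + 1]) ** 2
            for t in range(1, len(trace) - 1)
        )
    return min(candidates, key=cost)

# Example: a noisy ramp of 1-D joystick commands (hypothetical data).
rng = np.random.default_rng(0)
trace = np.linspace(0.0, 1.0, 50) + 0.05 * rng.standard_normal(50)
alpha = fit_alpha(trace)
print(f"alpha={alpha:.2f}, predicted next={predict_one_step(trace, alpha):.3f}")
```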

    Immersive Telerobotic Modular Framework using stereoscopic HMD's

    Telepresence is the term used to describe the set of technologies that enable people to feel or appear as if they were present in a location where they are not physically. Immersive telepresence is the next step: the objective is to make the operator feel immersed in a remote location, engaging as many senses as possible through new technologies such as stereoscopic vision, panoramic vision, 3D audio, and head-mounted displays (HMDs). Telerobotics is a subfield of telepresence that merges it with robotics, providing the operator with the ability to control a robot remotely. In current state-of-the-art solutions there is a gap, since telerobotics has not, in general, benefited from recent developments in control and human-computer interface technology. Besides the lack of studies investing in immersive solutions such as stereoscopic vision, immersive telerobotics can also include more intuitive control capabilities, such as haptic controls or movement- and gesture-based controls, that feel more natural and translate more naturally into the system. In this paper we propose an alternative approach to common teleoperation methods, such as those found, for instance, in search and rescue (SAR) robots. Our main focus is to test the impact that immersive characteristics like stereoscopic vision and HMDs can bring to telepresence robots and telerobotics systems. Since this is a new and growing field, we also aim to build a modular framework capable of being extended with different robots, in order to test different case studies and provide researchers with an extensible platform. We claim that with immersive solutions the operator of a telerobotics system will have a more intuitive perception of the remote environment and will be less prone to errors induced by a wrong perception of, and interaction with, the teleoperation of the robot. We believe that the operator's depth perception and situational awareness are significantly improved when using immersive solutions, and that performance, both in operation time and in successful identification of particular objects in the remote environment, is also enhanced. We have developed a low-cost immersive telerobotic modular platform that can be extended with hardware-based Android applications on the slave (robot) side. This makes it possible to use the same platform in any type of case study by simply extending it with different robots. In addition to the modular and extensible framework, the project features three main modules of interaction: a module that supports a head-mounted display and head tracking in the operator environment; a stream of stereoscopic vision through Android with software synchronization; and a module that enables the operator to control the robot with positional tracking. On the hardware side, not only has the mobile area (e.g. smartphones, tablets, Arduino) expanded greatly in recent years, but we have also seen the rise of low-cost immersive technologies such as the Oculus Rift DK2, Google Cardboard, and Leap Motion. These cost-effective hardware solutions, combined with the advances in video and audio streaming provided by WebRTC technologies, achieved mostly by Google, make the development of a real-time software solution possible. Currently there is a lack of real-time software methods in stereoscopy, but the arrival of WebRTC technologies can be a game changer. We take advantage of this recent evolution in hardware and software to keep the platform economical and low cost, while at the same time raising the bar in terms of performance and technical specifications for this kind of platform.
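
    Since the abstract describes the framework only at the architecture level (a modular platform extended with different robots on the slave side), the following Python sketch illustrates one plausible shape for such an extension point. RobotModule, DifferentialDriveRobot, and the method names are hypothetical, not the project's actual API.

```python
from abc import ABC, abstractmethod

class RobotModule(ABC):
    """Hypothetical driver contract: each robot extends the framework
    by implementing this interface on the slave (robot) side."""

    @abstractmethod
    def drive(self, linear: float, angular: float) -> None:
        """Move the robot base; units are illustrative (m/s, rad/s)."""

    @abstractmethod
    def point_camera(self, yaw: float, pitch: float) -> None:
        """Aim the stereo camera rig, e.g. to follow HMD head tracking."""

class DifferentialDriveRobot(RobotModule):
    def drive(self, linear: float, angular: float) -> None:
        # Simple differential-drive mixing of the two commands.
        left, right = linear - angular, linear + angular
        print(f"wheel speeds: L={left:.2f} R={right:.2f}")

    def point_camera(self, yaw: float, pitch: float) -> None:
        print(f"pan/tilt -> yaw={yaw:.1f}, pitch={pitch:.1f}")

# Operator-side modules (HMD head tracking, positional tracking) would
# talk only to the RobotModule contract, so adding a new robot means
# implementing one class.
robot: RobotModule = DifferentialDriveRobot()
robot.point_camera(yaw=15.0, pitch=-5.0)
robot.drive(linear=0.4, angular=0.1)
```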

    Exploring Robot Teleoperation in Virtual Reality

    This thesis presents research on VR-based robot teleoperation, focusing on remote environment visualisation in virtual reality, the effects of remote-environment reconstruction scale in virtual reality on the human operator's ability to control the robot, and the human operator's visual attention patterns when teleoperating a robot from virtual reality. A VR-based robot teleoperation framework was developed; it is compatible with various robotic systems and cameras, allowing teleoperation and supervised control with any ROS-compatible robot and visualisation of the environment through any ROS-compatible RGB and RGBD cameras. The framework includes mapping, segmentation, tactile exploration, and non-physically-demanding VR interface navigation and controls through any Unity-compatible VR headset and controllers or haptic devices. Point clouds are a common way to visualise remote environments in 3D, but they often suffer from distortions and occlusions, making it difficult to represent objects' textures accurately; this can lead to poor decision-making during teleoperation if objects are inaccurately represented in the VR reconstruction. A study using an end-effector-mounted RGBD camera with OctoMap mapping of the remote environment was conducted to explore the remote environment with fewer point cloud distortions and occlusions while using relatively little bandwidth. Additionally, a tactile exploration study proposed a novel method for visually presenting information about objects' materials in the VR interface, to improve the operator's decision-making and address the challenges of point cloud visualisation. Two studies were conducted to understand the effect of dynamic virtual-world scaling on teleoperation flow. The first study investigated rate mode control with constant and variable mapping of the operator's joystick position to the speed (rate) of the robot's end-effector, depending on the virtual world scale; the results showed that variable mapping allowed participants to teleoperate the robot more effectively, but at the cost of increased perceived workload. The second study compared how operators used the virtual world scale in supervised control, comparing participants' chosen scale at the beginning and end of a three-day experiment; the results showed that as operators became better at the task they, as a group, used a different virtual world scale, and that participants' prior video-gaming experience also affected the scale they chose. Similarly, the visual attention study investigated how operators' visual attention changes as they become better at teleoperating a robot using the framework; the results revealed the most important objects in the VR-reconstructed remote environment, as indicated by operators' visual attention patterns, as well as shifts in their visual priorities as they became better at teleoperating the robot. The study also demonstrated that operators' prior video-gaming experience affects their ability to teleoperate the robot and their visual attention behaviours.
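
    The abstract does not give the exact control law, so the following is a minimal Python sketch of the rate-mode idea under stated assumptions: joystick deflection maps to end-effector speed through a gain that is either constant or scaled by the virtual world scale. The function name, the base gain, and the direction of the scaling are all assumptions, not the thesis's actual mapping.

```python
def rate_mode_velocity(joystick: float, world_scale: float,
                       variable: bool, base_gain: float = 0.10) -> float:
    """Map joystick deflection in [-1, 1] to end-effector speed (m/s).

    Constant mapping ignores the virtual world scale; variable mapping
    multiplies the gain by it, so the same deflection moves the robot
    faster when the reconstructed world is scaled up.
    """
    gain = base_gain * world_scale if variable else base_gain
    return gain * joystick

# At world_scale 2.0 the same half deflection is twice as fast
# under variable mapping.
print(rate_mode_velocity(0.5, world_scale=2.0, variable=True))   # 0.1
print(rate_mode_velocity(0.5, world_scale=2.0, variable=False))  # 0.05
```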

    Evoked Reality, from Dreams to Simulations (A Conceptual Framework of Reality in Relation to Presence)

    In this research, we introduce the concept of "Evoked Reality" in an attempt to bring together various ideas on presence and reality on a common platform. The concept we propose and illustrate is in fact an illusion of reality (Evoked Reality) that simply evokes a sense of presence (Evoked Presence) in our minds. We clearly define and differentiate between a Media-Evoked and a Self-Evoked Reality. This allows us to introduce a Three-Pole Reality Model that redefines the classical Two-Pole Reality Model. We also present a graphical model called the Reality-Presence Map, which helps us locate and analyse every possible cognitive experience relating to presence and reality. We further explore the qualia and subjectivity of our experiences of Evoked Reality. Two experiments were conducted, one in the area of Media-Evoked Reality and one in Self-Evoked Reality; they led to fruitful conclusions regarding our hypotheses and helped us understand what could be studied further empirically. Finally, we illustrate different implications and shed light on prospective applications and uses of our concept, especially in the area of presence research. In addition, we strongly suggest that presence research be opened up beyond the domain of virtual reality and communication media and examined from the broader perspective of cognitive science. We believe that the concept of Evoked Reality and the proposed model may have significant applications in the study of presence and in exploring possibilities beyond virtual reality.

    Guido and Am I Robot? A Case Study of Two Robotic Artworks Operating in Public Spaces

    This article is a case study of two artworks that were commissioned for and exhibited in art venues in 2016 and 2017. The first artwork, Guido the Robot Guide, guided visitors through an art-science exhibition, presenting the exhibits from a robot's perspective. Guido was the result of a collaboration between artists and engineers. The concept was an irreverent robot guide that could switch transparently from autonomous mode to operator control, allowing for seamless natural interaction. We examine how the project unfolded, its successes and limitations. Following on from Guido, the lead artist developed the robotic installation Am I Robot?, in which the idea of a hybrid autonomous/remote-manual mode was fully implemented in a non-utilitarian machine exhibited in several art galleries. The article provides a concise contextualisation and details technical and design aspects as well as observations of visitors' interactions with the artworks. We evaluate the hybrid system's potential for creative robotics applications and identify directions for future research.

    ISMCR 1994: Topical Workshop on Virtual Reality. Proceedings of the Fourth International Symposium on Measurement and Control in Robotics

    This symposium on measurement and control in robotics included sessions on: (1) rendering, including tactile perception and applied virtual reality; (2) applications in simulated medical procedures and telerobotics; (3) tracking sensors in a virtual environment; (4) displays for virtual reality applications; (5) sensory feedback including a virtual environment application with partial gravity simulation; and (6) applications in education, entertainment, technical writing, and animation

    Vision 21: Interdisciplinary Science and Engineering in the Era of Cyberspace

    The symposium Vision 21: Interdisciplinary Science and Engineering in the Era of Cyberspace was held at the NASA Lewis Research Center on March 30-31, 1993. The purpose of the symposium was to stimulate interdisciplinary thinking in the sciences and technologies that will be required for the exploration and development of space over the next thousand years. The keynote speakers were Hans Moravec, Vernor Vinge, Carol Stoker, and Myron Krueger. The proceedings consist of transcripts of the invited talks and the panel discussion by the invited speakers, summaries of workshop sessions, and contributed papers by the attendees.

    THE FUTURE OF DIGITAL WORK - USE CASES FOR AUGMENTED REALITY GLASSES

    Microsoft’s HoloLens enables true augmented reality (AR) by placing virtual objects within the real world. This paper aims to present trades (based on ISIC) that can benefit from AR, as well as possible use cases. First, the authors conducted a systematic literature search to identify relevant papers: six databases (including EBSCOhost, ScienceDirect and SpringerLink) were scanned for the term “HoloLens”, and out of 680 results, two researchers identified 150 articles as thematically relevant. Second, these papers were analysed using qualitative content analysis. The findings reveal 26 trades where AR glasses are in use for practice or research purposes; the most frequent are human health, education and research. In addition, we provide a catalogue of 7 main use cases, such as Process Guidance or Data Access and Visualisation, as well as 27 sub use cases addressing the corresponding functionalities in more detail. The results of this paper are trades and application scenarios for AR glasses. Thus, this article contributes to research in the field of service systems design, especially AR glasses-based service systems, and provides evidence for the future of digital work.

    Telepresence and Transgenic Art
