
    Virtual Reality based Telerobotics Framework with Depth Cameras

    This work describes a virtual reality (VR)-based robot teleoperation framework that relies on scene visualization from depth cameras and implements human-robot and human-scene interaction gestures. We suggest that mounting a camera on a slave robot's end-effector (an in-hand camera) allows the operator to achieve better visualization of the remote scene and improve task performance. We experimentally compared the operator's ability to understand the remote environment across four visualization modes: a single external static camera, the in-hand camera, the in-hand camera combined with an external static camera, and the in-hand camera with OctoMap occupancy mapping. The latter option provided the operator with a better understanding of the remote environment while requiring relatively little communication bandwidth. Consequently, we propose grasping methods compatible with VR-based teleoperation using the in-hand camera. Video demonstration: https://youtu.be/3vZaEykMS_E
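    As an illustration of how an in-hand RGBD camera's frames become a 3D remote-scene representation, the minimal sketch below back-projects a depth image into a camera-frame point cloud using the standard pinhole model; the intrinsics (fx, fy, cx, cy) and the synthetic depth frame are assumed values, not taken from the paper, and the resulting points are what an occupancy-mapping back end such as OctoMap would consume.

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) into camera-frame 3D points
    using the pinhole model. Invalid (zero) depths are dropped."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float32)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]

# Hypothetical 480x640 depth frame from an in-hand RGBD camera
depth = np.full((480, 640), 0.75, dtype=np.float32)   # 0.75 m everywhere
cloud = depth_to_pointcloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)  # (307200, 3) camera-frame points, ready for occupancy mapping
```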

    Digital Cognitive Companions for Marine Vessels: On the Path Towards Autonomous Ships

    As in the automotive industry, industry and academia are making extensive efforts to create autonomous ships. The solutions are very technology-intense: many building blocks, often relying on AI technology, need to work together to create a complete system that is safe and reliable to use. Even when ships are fully unmanned, humans are still expected to guide them when unknown situations arise, which will be done through teleoperation systems. In this thesis, methods are presented to enhance the capability of two building blocks that are important for autonomous ships: a positioning system and a system for teleoperation. The positioning system has been constructed not to rely on the Global Positioning System (GPS), as GPS can be jammed or spoofed. Instead, it uses Bayesian calculations to compare bottom-depth and magnetic-field measurements with known sea charts and magnetic field maps in order to estimate the position. State-of-the-art techniques for this method typically use high-resolution maps, but there are hardly any high-resolution terrain maps available in the world. Hence, we present a method using standard sea charts and compensate for their lower accuracy by using other domains, such as magnetic field intensity and bearings to landmarks. Using data from a field trial, we showed that the fusion method using multiple domains was more robust than using only one domain. For the second building block, we first investigated how 3D and VR approaches could support the remote operation of unmanned ships over a low-throughput data connection, by comparing the respective graphical user interfaces (GUIs) with a Baseline GUI that follows the interfaces currently applied in such contexts. Our findings show that both the 3D and VR approaches significantly outperform the traditional approach: 3D GUI and VR GUI users reacted better to potentially dangerous situations than Baseline GUI users and kept track of the surroundings more accurately. Building on this, we conducted a teleoperation user study using real-world data from a field trial in the archipelago, in which users assisted the positioning system with bearings to landmarks. The users found that the tool gave a good overview and, despite the low-throughput connection, they managed through the GUI to significantly improve the positioning accuracy.
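    As a rough illustration of the GPS-free positioning idea described above, the sketch below implements a particle-filter-style Bayesian update that weights candidate positions by how well charted bottom depth and magnetic intensity explain the current measurements; the gridded maps, noise levels, and measurement values are invented placeholders, not the thesis's actual data or algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical gridded sea chart (bottom depth, m) and magnetic-intensity map (nT)
depth_map = rng.uniform(10, 60, size=(200, 200))
mag_map = rng.uniform(49000, 52000, size=(200, 200))

# Particle filter state: candidate positions (row, col) on the grid
particles = rng.uniform(0, 200, size=(1000, 2))
weights = np.ones(len(particles)) / len(particles)

def lookup(grid, pts):
    """Read the map value under each candidate position."""
    r = np.clip(pts[:, 0].astype(int), 0, grid.shape[0] - 1)
    c = np.clip(pts[:, 1].astype(int), 0, grid.shape[1] - 1)
    return grid[r, c]

def update(particles, weights, meas_depth, meas_mag, sigma_d=1.0, sigma_m=50.0):
    """Weight each particle by how well the charted depth and magnetic
    intensity at its position explain the measurements (independent Gaussians)."""
    ll_d = np.exp(-0.5 * ((lookup(depth_map, particles) - meas_depth) / sigma_d) ** 2)
    ll_m = np.exp(-0.5 * ((lookup(mag_map, particles) - meas_mag) / sigma_m) ** 2)
    w = weights * ll_d * ll_m
    return w / w.sum()

weights = update(particles, weights, meas_depth=35.0, meas_mag=50500.0)
estimate = np.average(particles, axis=0, weights=weights)  # fused position estimate
print(estimate)
```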

    2D–3D spatial registration for remote inspection of power substations

    Remote inspection and supervisory control are critical features for smart factories, civilian surveillance, power systems, and other domains. To reduce decision-making time, operators must have high situation awareness, which implies a considerable amount of data to be presented, while keeping the sensory load minimal. Recent research suggests the adoption of computer vision techniques for automatic inspection, as well as virtual reality (VR) as an alternative to traditional SCADA interfaces. Nevertheless, although VR may provide a good representation of a substation's state, it lacks some real-time information available from online field cameras and microphones. Since these two sources of information (VR and field information) are not integrated into one single solution, we miss the opportunity of using VR as a SCADA-aware remote inspection tool during operation and disaster-response routines. This work discusses a method to augment virtual environments of power substations with field images, enabling operators to promptly see a virtual representation of the inspected area's surroundings. The resulting environment is integrated with an image-based state inference machine that continuously checks the inferred states against the ones reported by the SCADA database. Whenever a discrepancy is found, an alarm is triggered and the virtual camera can be immediately teleported to the affected region, speeding up system re-establishment. The solution is based on a client-server architecture and allows multiple cameras deployed in multiple substations. Our results concern the quality of the 2D–3D registration and the rendering framerate for a simple scenario. The collected quantitative metrics suggest good camera pose estimations and registrations, as well as an arguably optimal rendering framerate for substation equipment inspection.
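    As a hedged illustration of one common way to obtain the 2D–3D registration described above, the sketch below estimates a field camera's pose from a handful of correspondences between 3D points in the substation's virtual model and their 2D pixel locations, using OpenCV's solvePnP; the correspondences and intrinsics are made-up placeholders, and the dissertation's actual registration pipeline may differ.

```python
import numpy as np
import cv2

# Hypothetical correspondences: 3D points from the substation's virtual model (metres)
# and their observed 2D pixel locations in the field camera image.
object_pts = np.array([[0, 0, 0], [2, 0, 0], [2, 1.5, 0],
                       [0, 1.5, 0], [1, 0.75, 1.2], [2, 0.75, 1.2]], dtype=np.float64)
image_pts = np.array([[320, 410], [640, 405], [652, 250],
                      [310, 245], [480, 180], [655, 175]], dtype=np.float64)

# Assumed pinhole intrinsics of the field camera (no lens distortion)
K = np.array([[800.0, 0.0, 480.0],
              [0.0, 800.0, 300.0],
              [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)        # rotation: model frame -> camera frame
cam_pos = (-R.T @ tvec).ravel()   # camera position expressed in the model frame
print(ok, cam_pos)                # pose used to register the image into the VR scene
```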

    Exploring Robot Teleoperation in Virtual Reality

    This thesis presents research on VR-based robot teleoperation, focusing on remote-environment visualisation in virtual reality, the effects of the remote-environment reconstruction scale on the operator's ability to control the robot, and the operator's visual attention patterns when teleoperating a robot from virtual reality. A VR-based robot teleoperation framework was developed; it is compatible with various robotic systems and cameras, allowing teleoperation and supervised control with any ROS-compatible robot and visualisation of the environment through any ROS-compatible RGB and RGBD cameras. The framework includes mapping, segmentation, tactile exploration, and non-physically-demanding VR interface navigation and controls through any Unity-compatible VR headset and controllers or haptic devices. Point clouds are a common way to visualise remote environments in 3D, but they often suffer from distortions and occlusions, making it difficult to accurately represent objects' textures. This can lead to poor decision-making during teleoperation if objects are inaccurately represented in the VR reconstruction. A study using an end-effector-mounted RGBD camera with OctoMap mapping of the remote environment was conducted to explore the remote environment with fewer point cloud distortions and occlusions while using relatively little bandwidth. Additionally, a tactile exploration study proposed a novel method for visually presenting information about objects' materials in the VR interface, to improve the operator's decision-making and address the challenges of point cloud visualisation. Two studies were conducted to understand the effect of dynamic virtual-world scaling on teleoperation. The first study investigated rate-mode control with constant and variable mapping of the operator's joystick position to the speed (rate) of the robot's end-effector, depending on the virtual-world scale. The results showed that variable mapping allowed participants to teleoperate the robot more effectively, but at the cost of increased perceived workload. The second study compared how operators used the virtual-world scale in supervised control, comparing the scales chosen by participants at the beginning and end of a three-day experiment. The results showed that as operators became better at the task they, as a group, used a different virtual-world scale, and that participants' prior video-gaming experience also affected the scale they chose. Similarly, the visual attention study investigated how operators' visual attention changes as they become better at teleoperating a robot using the framework. The results revealed the most important objects in the VR-reconstructed remote environment, as indicated by operators' visual attention patterns, and showed that their visual priorities shifted as they became better at teleoperating the robot. The study also demonstrated that operators' prior video-gaming experience affects their ability to teleoperate the robot and their visual attention behaviours.
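    To make the constant-versus-variable rate-mode mapping concrete, the minimal sketch below maps a joystick deflection to an end-effector speed, optionally scaling the gain with the current virtual-world scale; the parameter names and gain value are illustrative assumptions, not the thesis's implementation.

```python
def rate_command(joystick, world_scale, base_gain=0.10, variable=True):
    """Map a joystick deflection in [-1, 1] to an end-effector speed (m/s).

    Constant mapping uses a fixed gain; variable mapping scales the gain
    with the current virtual-world scale, so a zoomed-out scene yields
    faster motion and a zoomed-in scene yields finer control."""
    gain = base_gain * world_scale if variable else base_gain
    return gain * max(-1.0, min(1.0, joystick))

# Same stick deflection, different virtual-world scales
print(rate_command(0.5, world_scale=2.0))   # coarse, fast motion: 0.10 m/s
print(rate_command(0.5, world_scale=0.5))   # fine, slow motion: 0.025 m/s
```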

    Haptics: Science, Technology, Applications

    This open access book constitutes the proceedings of the 12th International Conference on Human Haptic Sensing and Touch Enabled Computer Applications, EuroHaptics 2020, held in Leiden, The Netherlands, in September 2020. The 60 papers presented in this volume were carefully reviewed and selected from 111 submissions. They were organized in topical sections on haptic science, haptic technology, and haptic applications. This year's focus is on accessibility.

    Unmanned Robotic Systems and Applications

    This book presents recent studies of unmanned robotic systems and their applications. With its five chapters, the book brings together important contributions from renowned international researchers. Unmanned autonomous robots are ideal candidates for applications such as rescue missions, especially in areas that are difficult to access. Swarm robotics (multiple robots working together) is another exciting application of unmanned robotic systems: for example, coordinated search by an interconnected group of moving robots to find a source of hazardous emissions. These robots can behave like individuals working in a group without centralized control.