
    Large-scale environment mapping and immersive human-robot interaction for agricultural mobile robot teleoperation

    Remote operation is a crucial solution to problems encountered in agricultural machinery operations. However, traditional video-streaming control methods fall short in overcoming the challenges of single-perspective views and the inability to obtain 3D information. In light of these issues, our research proposes a large-scale digital map reconstruction and immersive human-machine remote control framework for agricultural scenarios. In our methodology, a DJI unmanned aerial vehicle (UAV) was utilized for data collection, and a novel video segmentation approach based on feature points was introduced. To tackle variability in texture richness, an enhanced Structure from Motion (SfM) pipeline using superpixel segmentation was implemented. This method integrates the open Multiple View Geometry (openMVG) framework with Local Features from Transformers (LoFTR). The enhanced SfM produces a point cloud map, which is further processed through Multi-View Stereo (MVS) to generate a complete map model. For control, a closed-loop system utilizing TCP for VR control and positioning of agricultural machinery was introduced. Our system offers a fully visual immersive control method: once connected to the local area network, operators can use VR for immersive remote control. The proposed method enhances both the robustness and convenience of the reconstruction process, significantly helping operators acquire more comprehensive on-site information and engage in immersive remote control operations. The code is available at: https://github.com/LiuTao1126/Enhance-SF
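    The abstract's closed-loop TCP link between the VR operator and the machinery can be sketched as a simple request/acknowledge round trip. The message format (JSON pose commands) and the echo-style server are illustrative assumptions, not the paper's actual protocol:

    ```python
    import json
    import socket
    import threading

    def vehicle_server(sock):
        """Accept one connection and echo the commanded pose back as the
        acknowledged vehicle state, closing the control loop."""
        conn, _ = sock.accept()
        with conn:
            cmd = json.loads(conn.recv(4096).decode())
            state = {"ack": True, "x": cmd["x"], "y": cmd["y"], "heading": cmd["heading"]}
            conn.sendall(json.dumps(state).encode())

    def send_command(port, x, y, heading):
        """VR-side client: send a pose command, block until the state echo returns."""
        with socket.create_connection(("127.0.0.1", port)) as c:
            c.sendall(json.dumps({"x": x, "y": y, "heading": heading}).encode())
            return json.loads(c.recv(4096).decode())

    # Bind to an ephemeral port so the sketch is self-contained.
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    t = threading.Thread(target=vehicle_server, args=(srv,))
    t.start()
    state = send_command(srv.getsockname()[1], 2.0, 3.5, 90.0)
    t.join()
    srv.close()
    ```

    In the real system the acknowledgement would carry the vehicle's measured pose rather than an echo, letting the operator verify each command before issuing the next.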

    RealTHASC—a cyber-physical XR testbed for AI-supported real-time human autonomous systems collaborations

    Today’s research on human-robot teaming requires the ability to test artificial intelligence (AI) algorithms for perception and decision-making in complex real-world environments. Field experiments, also referred to as experiments “in the wild,” do not provide the level of detailed ground truth necessary for thorough performance comparisons and validation. Experiments on pre-recorded real-world data sets are also significantly limited in their usefulness because they do not allow researchers to test the effectiveness of active robot perception, control, or decision strategies in the loop. Additionally, research on large human-robot teams requires tests and experiments that are too costly even for industry and may result in considerable time losses when experiments go awry. The novel Real-Time Human Autonomous Systems Collaborations (RealTHASC) facility at Cornell University interfaces real and virtual robots and humans with photorealistic simulated environments by implementing new concepts for the seamless integration of wearable sensors, motion capture, physics-based simulations, robot hardware, and virtual reality (VR). The result is an extended reality (XR) testbed in which real robots and humans in the laboratory can experience virtual worlds, inclusive of virtual agents, through real-time visual feedback and interaction. VR body tracking by DeepMotion is employed in conjunction with the OptiTrack motion capture system to transfer every human subject and robot in the real physical laboratory space into a synthetic virtual environment. The resulting human/robot avatars not only mimic the behaviors of the real agents but also experience the virtual world through virtual sensors and transmit the sensor data back to the real human/robot agent, all in real time.
    New cross-domain synthetic environments are created in RealTHASC using Unreal Engine™, bridging the simulation-to-reality gap and allowing for the inclusion of underwater/ground/aerial autonomous vehicles, each equipped with a multi-modal sensor suite. The experimental capabilities offered by RealTHASC are demonstrated through three case studies showcasing mixed real/virtual human/robot interactions in diverse domains, leveraging and complementing the benefits of experimentation in simulation and in the real world.
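    The real-to-virtual mirroring loop described above can be sketched as a per-tick transform from the motion-capture frame into the engine frame, followed by a virtual sensor reading returned to the real agent. The coordinate conventions (OptiTrack metres vs. Unreal centimetres with a flipped Y axis) and the toy range sensor are assumptions for illustration; the actual RealTHASC interfaces are not described in the abstract:

    ```python
    def mocap_to_unreal(p):
        """Map a mocap point (metres, right-handed) into an assumed Unreal
        frame (centimetres, left-handed: flip Y, scale by 100)."""
        x, y, z = p
        return (x * 100.0, -y * 100.0, z * 100.0)

    def unreal_to_mocap(p):
        """Inverse transform, back to the mocap frame."""
        x, y, z = p
        return (x / 100.0, -y / 100.0, z / 100.0)

    def mirror_step(real_pose, virtual_sensor):
        """One tick of the loop: drive the avatar from the tracked pose,
        then return the avatar's virtual sensor reading to the real agent."""
        avatar_pose = mocap_to_unreal(real_pose)
        return avatar_pose, virtual_sensor(avatar_pose)

    def range_sensor(avatar_pose):
        """Toy virtual sensor: range to the world origin, in metres."""
        x, y, z = unreal_to_mocap(avatar_pose)
        return (x * x + y * y + z * z) ** 0.5

    pose, rng = mirror_step((3.0, 4.0, 0.0), range_sensor)
    ```

    In the facility this tick runs continuously, with OptiTrack/DeepMotion supplying `real_pose` and the Unreal simulation supplying the sensor models.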

    Control and communication systems for automated vehicles cooperation and coordination

    The technological advances in Intelligent Transportation Systems (ITS) have grown exponentially over the last century. The objective is to provide intelligent and innovative services for the different modes of transportation, towards better, safer, more coordinated, and smarter transport networks. The ITS focus is divided into two main categories: the first is to improve existing components of transport networks, while the second is to develop intelligent vehicles that facilitate the transportation process. Different research efforts have been exerted to tackle various aspects of automated vehicles. Accordingly, this thesis addresses the problem of cooperation and coordination among multiple automated vehicles. First, the 3DCoAutoSim driving simulator was developed in the Unity game engine and connected to the Robot Operating System (ROS) framework and Simulation of Urban Mobility (SUMO). 3DCoAutoSim is an abbreviation for "3D Simulator for Cooperative Advanced Driver Assistance Systems (ADAS) and Automated Vehicles Simulator". 3DCoAutoSim was tested under different circumstances and conditions and was then validated by carrying out several controlled experiments and comparing the results against their real-world counterparts. The obtained results showed the efficiency of the simulator in handling different situations, emulating real-world vehicles. Next is the development of the iCab platforms, an abbreviation for "Intelligent Campus Automobile". The platforms are two electric golf carts that were modified mechanically, electronically, and electrically towards the goal of automated driving. Each iCab was equipped with several on-board embedded computers, perception sensors, and auxiliary devices in order to execute the necessary actions for self-driving.
    Moreover, the platforms are capable of several Vehicle-to-Everything (V2X) communication schemes, applying three layers of control, utilizing a cooperative platooning architecture, and executing localization, mapping, perception, and planning systems. Hundreds of experiments were carried out to validate each system on the iCab platform. The results proved the platform's ability to drive itself from one point to another with minimal human intervention.
    Thesis awarded the International Mention in the doctoral degree. Doctoral programme in Electrical, Electronic and Automatic Engineering. Committee: President, Francisco Javier Otamendi Fernández de la Puebla; Secretary, Hanno Hildmann; Member, Pietro Cerr
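    The cooperative platooning architecture mentioned in the abstract amounts, in its simplest form, to a follower vehicle regulating its gap to the leader. A minimal sketch, assuming a PD gap-keeping law and point-mass kinematics (the iCab control stack itself is not detailed in the abstract):

    ```python
    def follower_accel(gap, gap_rate, desired_gap=5.0, kp=0.8, kd=1.2):
        """PD law: accelerate in proportion to the gap error and its rate.
        Gains and desired gap are illustrative assumptions."""
        return kp * (gap - desired_gap) + kd * gap_rate

    def simulate(steps=400, dt=0.05):
        """Leader cruises at 5 m/s; follower starts 15 m behind at rest.
        Returns the final inter-vehicle gap after `steps` Euler updates."""
        lead_x, lead_v = 15.0, 5.0
        fol_x, fol_v = 0.0, 0.0
        for _ in range(steps):
            gap = lead_x - fol_x
            gap_rate = lead_v - fol_v
            a = follower_accel(gap, gap_rate)
            fol_v += a * dt          # follower accelerates toward the gap setpoint
            fol_x += fol_v * dt
            lead_x += lead_v * dt
        return lead_x - fol_x

    final_gap = simulate()
    ```

    The closed-loop gap error obeys a damped second-order equation, so the follower settles at the desired 5 m spacing; a real platoon would add V2X-communicated leader acceleration as a feed-forward term.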

    SLAV-Sim: A Framework for Self-Learning Autonomous Vehicle Simulation

    With the advent of autonomous vehicles, sensor and algorithm testing have become crucial parts of the autonomous vehicle development cycle. Having access to real-world sensors and vehicles is a dream for researchers and small-scale original equipment manufacturers (OEMs) due to the software and hardware development life-cycle duration and high costs. Therefore, simulator-based virtual testing has gained traction over the years as the preferred testing method due to its low cost, efficiency, and effectiveness in executing a wide range of testing scenarios. Companies like ANSYS and NVIDIA have come up with robust simulators, and open-source simulators such as CARLA have also populated the market. However, there is a lack of lightweight and simple simulators catering to specific test cases. In this paper, we introduce SLAV-Sim, a lightweight simulator that specifically trains the behaviour of a self-learning autonomous vehicle. This simulator has been created using the Unity engine and provides an end-to-end virtual testing framework for different reinforcement learning (RL) algorithms in a variety of scenarios using camera sensors and raycasts.
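    The kind of RL loop such a simulator hosts can be sketched with tabular Q-learning on a one-dimensional track, where the agent's distance along the track stands in for a raycast observation. The environment, rewards, and hyper-parameters are illustrative assumptions, not SLAV-Sim's actual API:

    ```python
    import random

    random.seed(0)
    TRACK_LEN, GOAL = 6, 5
    ACTIONS = (-1, +1)                    # step back / step forward
    Q = {(s, a): 0.0 for s in range(TRACK_LEN) for a in ACTIONS}

    def step(state, action):
        """Move along the track; reward +1 at the goal, small cost otherwise."""
        nxt = min(max(state + action, 0), TRACK_LEN - 1)
        done = nxt == GOAL
        return nxt, (1.0 if done else -0.01), done

    for episode in range(300):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection
            if random.random() < 0.2:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: Q[(s, act)])
            s2, r, done = step(s, a)
            # standard Q-learning update (alpha=0.1, gamma=0.9)
            best_next = max(Q[(s2, b)] for b in ACTIONS)
            Q[(s, a)] += 0.1 * (r + 0.9 * best_next - Q[(s, a)])
            s = s2

    # Greedy policy extracted after training.
    policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(TRACK_LEN)}
    ```

    In SLAV-Sim the table would be replaced by a function approximator over camera and raycast inputs, but the interaction loop (observe, act, reward, update) is the same.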

    Review of three-dimensional human-computer interaction with a focus on the Leap Motion Controller

    Modern hardware and software development has led to an evolution of user interfaces from command-line to natural user interfaces for virtual immersive environments. Gestures imitating real-world interaction tasks increasingly replace classical two-dimensional interfaces based on Windows/Icons/Menus/Pointers (WIMP) or touch metaphors. The purpose of this paper is therefore to survey state-of-the-art Human-Computer Interaction (HCI) techniques with a focus on the special field of three-dimensional interaction. This includes an overview of currently available interaction devices, their areas of application, and underlying methods for gesture design and recognition. The focus is on interfaces based on the Leap Motion Controller (LMC) and corresponding methods of gesture design and recognition. Further, a review of evaluation methods for the proposed natural user interfaces is given.
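    One of the simplest gesture-recognition schemes such surveys cover is nearest-neighbour matching of a normalised fingertip path against stored templates. The template names and paths below are made up for illustration; a device like the LMC would supply the raw trajectory:

    ```python
    import math

    def normalise(path):
        """Translate the path to its centroid and scale it to unit size,
        so position and magnitude do not affect matching."""
        n = len(path)
        cx = sum(p[0] for p in path) / n
        cy = sum(p[1] for p in path) / n
        scale = max(math.hypot(p[0] - cx, p[1] - cy) for p in path) or 1.0
        return [((p[0] - cx) / scale, (p[1] - cy) / scale) for p in path]

    def distance(a, b):
        """Mean point-to-point distance between two equal-length paths."""
        return sum(math.hypot(p[0] - q[0], p[1] - q[1]) for p, q in zip(a, b)) / len(a)

    def classify(path, templates):
        """Return the name of the closest template gesture."""
        cand = normalise(path)
        return min(templates, key=lambda name: distance(cand, normalise(templates[name])))

    # Hypothetical gesture vocabulary (paths sampled at four points).
    templates = {
        "swipe_right": [(0, 0), (1, 0), (2, 0), (3, 0)],
        "swipe_up": [(0, 0), (0, 1), (0, 2), (0, 3)],
    }
    label = classify([(5, 5.0), (6, 5.1), (7, 4.9), (8, 5.0)], templates)
    ```

    Production recognisers add resampling to a fixed point count and rotation invariance (as in the $1 recognizer family), but the normalise-then-match structure is the same.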

    Surgical Subtask Automation for Intraluminal Procedures using Deep Reinforcement Learning

    Intraluminal procedures have opened up a new sub-field of minimally invasive surgery that uses flexible instruments to navigate through complex luminal structures of the body, resulting in reduced invasiveness and improved patient benefits. One of the major challenges in this field is the accurate and precise control of the instrument inside the human body. Robotics has emerged as a promising solution to this problem. However, to achieve successful robotic intraluminal interventions, the control of the instrument needs to be automated to a large extent. The thesis first examines the state of the art in intraluminal surgical robotics and identifies the key challenges in this field, which include the need for safe and effective tool manipulation and the ability to adapt to unexpected changes in the luminal environment. To address these challenges, the thesis proposes several levels of autonomy that enable the robotic system to perform individual subtasks autonomously, while still allowing the surgeon to retain overall control of the procedure. The approach facilitates the development of specialized algorithms, such as Deep Reinforcement Learning (DRL), for subtasks like navigation and tissue manipulation to produce robust surgical gestures. Additionally, the thesis proposes a safety framework that provides formal guarantees to prevent risky actions. The presented approaches are evaluated through a series of experiments using simulation and robotic platforms. The experiments demonstrate that subtask automation can improve the accuracy and efficiency of tool positioning and tissue manipulation, while also reducing the cognitive load on the surgeon. The results of this research have the potential to improve the reliability and safety of intraluminal surgical interventions, ultimately leading to better outcomes for patients and surgeons.
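    A safety framework of the kind the thesis proposes can be sketched as a shield that checks each action from a learned policy against a formal constraint before execution and substitutes a safe fallback on violation. The constraint (a workspace limit on tool displacement) and the stand-in policy are illustrative assumptions:

    ```python
    WORKSPACE = (-1.0, 1.0)   # permitted tool displacement range, arbitrary units

    def violates(position, action):
        """Would applying this displacement leave the permitted workspace?"""
        nxt = position + action
        return not (WORKSPACE[0] <= nxt <= WORKSPACE[1])

    def shield(position, action, fallback=0.0):
        """Pass safe actions through; substitute the fallback otherwise.
        The guarantee holds as long as the fallback itself is safe."""
        return action if not violates(position, action) else fallback

    def risky_policy(position):
        """Stand-in for a DRL policy: always push forward by 0.4."""
        return 0.4

    pos, trace = 0.0, []
    for _ in range(5):
        a = shield(pos, risky_policy(pos))
        pos += a
        trace.append(round(pos, 2))
    ```

    Because every executed action is filtered, the tool position provably never leaves the workspace regardless of what the policy proposes, which is the shape of "formal guarantee" such shields provide.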