    Augmented Reality guidance for fusion plant maintenance

    Recent Advancements in Augmented Reality for Robotic Applications: A Survey

    Robots are expanding from industrial applications into daily life, in areas such as medical robotics, rehabilitative robotics, social robotics, and mobile/aerial robotic systems. In recent years, augmented reality (AR) has been integrated into many robotic applications, including medical, industrial, human–robot interaction, and collaboration scenarios. In this work, AR for both medical and industrial robot applications is reviewed and summarized. For medical robot applications, we investigated the integration of AR in (1) preoperative and surgical task planning; (2) image-guided robotic surgery; (3) surgical training and simulation; and (4) telesurgery. AR for industrial scenarios is reviewed in (1) human–robot interaction and collaboration; (2) path planning and task allocation; (3) training and simulation; and (4) teleoperation control/assistance. In addition, the limitations and challenges are discussed. Overall, this article serves as a valuable resource for those working in the field of AR and robotics research, offering insights into the recent state of the art and prospects for improvement.

    Augmented reality (AR) for surgical robotic and autonomous systems: State of the art, challenges, and solutions

    Despite the substantial progress achieved in the development and integration of augmented reality (AR) in surgical robotic and autonomous systems (RAS), the focus of most devices remains on improving end-effector dexterity and precision, as well as on improving access to minimally invasive surgeries. This paper aims to provide a systematic review of different types of state-of-the-art surgical robotic platforms while identifying areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human-robot collaboration, with AR technology to perform complex surgical interventions with increased user perception of the augmented world. Researchers in the field have long faced numerous issues with low accuracy in tool placement around complex trajectories, pose estimation, and difficulty in depth perception during two-dimensional medical imaging. A number of robots described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, computer vision systems (such as deep learning algorithms), and the clinical relevance of the literature. We attempt to outline the shortcomings in current optimization algorithms for surgical robots (such as YOLO and LSTM) while providing mitigating solutions for internal tool-to-organ collision detection and image reconstruction. The accuracy of results in robot end-effector collisions and reduced occlusion remains promising within the scope of our research, validating the propositions made for the surgical clearance of ever-expanding AR technology in the future.
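
    The tool-to-organ collision detection mentioned above can be made concrete with a simple geometric check. The sketch below is an illustrative assumption, not any reviewed platform's implementation: it approximates the tool tip as a sphere and an organ surface as a point cloud, and flags a collision when the closest surface point falls inside the sphere plus a safety margin. All names and values are hypothetical.

```python
import numpy as np

def tool_organ_collision(tool_tip, organ_points, tool_radius, safety_margin=2.0):
    """Flag a potential collision between a tool tip and an organ surface.

    tool_tip      -- (3,) xyz position of the tool tip, in mm
    organ_points  -- (N, 3) point cloud sampled from a segmented organ surface
    tool_radius   -- radius of the sphere approximating the tool tip, in mm
    safety_margin -- extra clearance required before raising a warning, in mm
    """
    # Distance from the tool tip to every sampled surface point.
    dists = np.linalg.norm(organ_points - tool_tip, axis=1)
    closest = dists.min()
    # Collision if the surface intrudes into the tool sphere plus margin.
    return closest <= tool_radius + safety_margin, closest

# Hypothetical usage: a flat organ patch at z = 0, tool tip 5 mm above it.
xy = np.stack(np.meshgrid(np.linspace(-20, 20, 41),
                          np.linspace(-20, 20, 41)), axis=-1).reshape(-1, 2)
organ = np.hstack([xy, np.zeros((xy.shape[0], 1))])
hit, d = tool_organ_collision(np.array([0.0, 0.0, 5.0]), organ,
                              tool_radius=1.5, safety_margin=2.0)
print(f"collision risk: {hit}, closest surface point: {d:.1f} mm")
```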

    Vision based 3D Gesture Tracking using Augmented Reality and Virtual Reality for Improved Learning Applications

    3D gesture recognition and tracking based on augmented reality and virtual reality have attracted significant research interest because of advances in smartphone technology. By interacting with 3D objects in augmented reality and virtual reality, users gain a better understanding of the subject matter, although customized hardware support is often required and overall experimental performance must be satisfactory. This research investigates current vision-based 3D gesture architectures for augmented reality and virtual reality. The core goal is to present an analysis of methods and frameworks, followed by experimental performance in recognizing and tracking hand gestures and interacting with virtual objects on smartphones. Experimental evaluation of existing methods is grouped into three categories: hardware requirements, documentation before the actual experiment, and datasets. These categories are expected to ensure robust validation for practical use of 3D gesture tracking based on augmented reality and virtual reality. The hardware setup covers types of gloves, fingerprint sensing, and types of sensors. Documentation includes classroom setup manuals, questionnaires, recordings for improvement, and stress-test applications. The last part of the experimental section covers the datasets used by existing research. This comprehensive overview of methods, frameworks, and experimental aspects can significantly contribute to 3D gesture recognition and tracking based on augmented reality and virtual reality. Peer reviewed
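
    None of the surveyed architectures is reproduced here, but the basic vision-based hand-tracking loop they build on can be sketched with OpenCV and MediaPipe Hands. This is a minimal illustration under the assumption that both libraries are installed, not the pipeline of any specific paper:

```python
import cv2
import mediapipe as mp

# Minimal vision-based hand tracking: detect one hand per frame and
# read out the position of the index fingertip (MediaPipe landmark 8).
hands = mp.solutions.hands.Hands(max_num_hands=1,
                                 min_detection_confidence=0.5)
cap = cv2.VideoCapture(0)  # default webcam

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV captures BGR.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        tip = results.multi_hand_landmarks[0].landmark[8]
        # x and y are normalized image coordinates; z is relative depth.
        print(f"index fingertip: ({tip.x:.2f}, {tip.y:.2f}, {tip.z:.2f})")
    cv2.imshow("hand tracking", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break

cap.release()
hands.close()
cv2.destroyAllWindows()
```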

    Quadrotor UAV Interface and Localization Design

    Our project's task was to assist Lincoln Laboratory in preparation for the future automation of a quadrotor UAV system. We created an interface between the quadrotor and ROS to allow for computerized control of the UAV. Tests of our system indicated that our solution could be feasible with further research. In the next phase of the project, we created a localization system to automate take-off and landing in future mission environments by altering the augmented reality library ARToolKit to work with ROS. We performed accuracy, range, update rate, lighting, and tag occlusion tests on our modified code to determine its viability in real-world conditions. We concluded that our current system would not be feasible due to inconsistencies in tag detection, but that it merits further research.

    Quadrotor UAV Interface and Localization Design

    MIT Lincoln Laboratory has expressed growing interest in projects involving quadrotor Unmanned Aerial Vehicles (UAVs). Our tasks were to develop a system providing computerized remote control of the provided UAV, as well as to develop a high-accuracy localization system. We integrated the UAV's control system with standard Robot Operating System (ROS) software tools. We measured the reliability of the control system and determined its performance characteristics. We found our control scheme to be usable pending minor improvements. To enable localization, we explored machine vision, ultimately altering the augmented reality library ARToolKit to interface with ROS. After several tests, we determined that ARToolKit is not currently a feasible alternative to standard localization techniques.
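
    Both abstracts describe the same integration pattern: detect a fiducial tag in the camera image, estimate its 6-DOF pose, and publish that pose on a ROS topic. The sketch below illustrates the pattern using OpenCV's ArUco module as a stand-in for ARToolKit (ROS 1 with rospy is assumed, the camera intrinsics are placeholders, and the ArUco calls follow the pre-4.7 opencv-contrib API):

```python
import cv2
import numpy as np
import rospy
from geometry_msgs.msg import PoseStamped

rospy.init_node("tag_localizer")
pub = rospy.Publisher("/tag_pose", PoseStamped, queue_size=10)

# Placeholder intrinsics; real values come from camera calibration.
K = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
dist = np.zeros(5)
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

cap = cv2.VideoCapture(0)
rate = rospy.Rate(30)
while not rospy.is_shutdown():
    ok, frame = cap.read()
    if not ok:
        continue
    corners, ids, _ = cv2.aruco.detectMarkers(frame, dictionary)
    if ids is not None:
        # 0.1 m tag edge length; one rvec/tvec pair per detected marker.
        rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
            corners, 0.1, K, dist)
        msg = PoseStamped()
        msg.header.stamp = rospy.Time.now()
        msg.header.frame_id = "camera"
        msg.pose.position.x, msg.pose.position.y, msg.pose.position.z = tvecs[0][0]
        msg.pose.orientation.w = 1.0  # orientation omitted for brevity
        pub.publish(msg)
    rate.sleep()
```

    The tag-detection inconsistencies both abstracts report would surface here as frames where ids is None, which is why the accuracy, range, lighting, and occlusion tests they describe matter.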

    Goal Based Human Swarm Interaction for Collaborative Transport

    Human-swarm interaction is an important milestone for the introduction of swarm-intelligence-based solutions into real application scenarios. One of the main hurdles towards this goal is the creation of suitable interfaces for humans to convey the correct intent to multiple robots. As the size of the swarm increases, the complexity of dealing with explicit commands for individual robots becomes intractable. This presents a great challenge for the developer or the operator in driving robots to finish even the most basic tasks. In our work, we consider a different approach in which humans specify only the desired goal rather than issuing the individual commands necessary to accomplish the task. We explore this approach in a collaborative transport scenario, where the user chooses the target position of an object, and a group of robots moves it by adapting to the environment. The main outcome of this thesis is the design and integration of a collaborative transport behavior for swarm robots with an augmented reality human interface. We implemented an augmented reality (AR) application in which a virtual object is displayed overlaid on a detected target object. Users can manipulate the virtual object to generate the goal configuration for the object. The designed centralized controller translates the goal position to the robots and synchronizes the state transitions. The whole system was tested on Khepera IV robots through the integration of a Vicon motion-capture system and the ARGoS simulator.
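
    At its core, the goal-based scheme described above reduces to broadcasting a single target pose and letting each robot derive its own motion from it. The following is a minimal sketch of that idea under illustrative assumptions; it is not the thesis's actual ARGoS/Khepera IV controller, and all names and gains are hypothetical:

```python
import numpy as np

def transport_step(robot_pos, object_pos, goal_pos, gain=0.5):
    """One velocity command for a robot carrying part of a shared object.

    Each robot receives only the object's goal position from the
    centralized controller and moves its own attachment point toward
    where that point should end up once the object reaches the goal.
    """
    grip_offset = robot_pos - object_pos   # where this robot holds on
    desired = goal_pos + grip_offset       # grip position at the goal
    return gain * (desired - robot_pos)    # proportional velocity command

# Illustrative run: three robots around an object, goal 1 m to the right.
object_pos = np.array([0.0, 0.0])
goal = np.array([1.0, 0.0])
robots = [np.array([0.2, 0.0]), np.array([-0.1, 0.15]), np.array([-0.1, -0.15])]
dt = 0.1
for _ in range(20):
    vels = [transport_step(r, object_pos, goal) for r in robots]
    robots = [r + dt * v for r, v in zip(robots, vels)]
    object_pos = object_pos + dt * np.mean(vels, axis=0)  # object follows mean push
print("object position after 20 steps:", np.round(object_pos, 2))
```

    Because every robot computes the same goal-minus-object displacement, the grip offsets are preserved and the group converges on the target without any per-robot commands, which is the point of the goal-based interface.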