
    Augmented Reality-based Feedback for Technician-in-the-loop C-arm Repositioning

    Interventional C-arm imaging is crucial to percutaneous orthopedic procedures, as it enables the surgeon to monitor the progress of surgery at the anatomical level. Minimally invasive interventions require repeated acquisition of X-ray images from different anatomical views to verify tool placement. Achieving and reproducing these views often comes at the cost of increased surgical time and radiation dose to both patient and staff. This work proposes a marker-free "technician-in-the-loop" Augmented Reality (AR) solution for C-arm repositioning. The X-ray technician operating the C-arm during the intervention is equipped with a head-mounted display capable of recording desired C-arm poses in 3D via an integrated infrared sensor. For C-arm repositioning to a particular target view, the recorded C-arm pose is restored as a virtual object and visualized in an AR environment, serving as a perceptual reference for the technician. We conduct experiments in a setting simulating orthopedic trauma surgery. Our proof-of-principle findings indicate that the proposed system can reduce the average of 2.76 X-ray images required per desired view to zero, suggesting substantial reductions in radiation dose during C-arm repositioning. The proposed AR solution is a first step towards facilitating communication between the surgeon and the surgical staff, improving the quality of surgical image acquisition, and enabling context-aware guidance for the surgery rooms of the future. The concept of technician-in-the-loop design will become relevant to various interventions, considering the expected advances in sensing and wearable computing in the near future.
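
    A minimal sketch of the geometry behind the record-and-restore step may help: a desired C-arm pose is stored as a 4x4 homogeneous transform via the head-mounted display's infrared tracking, and the AR overlay can show how far the current pose still is from the recorded target. The function below is an illustrative assumption, not the paper's implementation.

    import numpy as np

    def repositioning_offset(current_pose, target_pose):
        """Remaining translation and rotation (in degrees) between the current
        C-arm pose and the recorded target view, both 4x4 transforms."""
        delta = np.linalg.inv(current_pose) @ target_pose
        translation = np.linalg.norm(delta[:3, 3])  # offset in pose units, e.g. mm
        cos_angle = np.clip((np.trace(delta[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
        return translation, np.degrees(np.arccos(cos_angle))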

    FlightGoggles: A Modular Framework for Photorealistic Camera, Exteroceptive Sensor, and Dynamics Simulation

    FlightGoggles is a photorealistic sensor simulator for perception-driven robotic vehicles. The key contributions of FlightGoggles are twofold. First, FlightGoggles provides photorealistic exteroceptive sensor simulation using graphics assets generated with photogrammetry. Second, it provides the ability to combine (i) synthetic exteroceptive measurements generated in silico in real time and (ii) vehicle dynamics and proprioceptive measurements generated in motio by vehicle(s) in a motion-capture facility. FlightGoggles is capable of simulating a virtual-reality environment around autonomous vehicle(s). While a vehicle is in flight in the FlightGoggles virtual-reality environment, exteroceptive sensors are rendered synthetically in real time, while all complex extrinsic dynamics are generated organically through the natural interactions of the vehicle. The FlightGoggles framework allows researchers to accelerate development by circumventing the need to estimate complex and hard-to-model interactions such as aerodynamics, motor mechanics, battery electrochemistry, and the behavior of other agents. The ability to perform vehicle-in-the-loop experiments with photorealistic exteroceptive sensor simulation facilitates novel research directions involving, e.g., fast and agile autonomous flight in obstacle-rich environments, safe human interaction, and flexible sensor selection. FlightGoggles has been utilized as the main testbed for selecting the nine teams that will advance in the AlphaPilot autonomous drone racing challenge. We survey approaches and results from the top AlphaPilot teams, which may be of independent interest. Comment: Initial version appeared at IROS 2019. Supplementary material can be found at https://flightgoggles.mit.edu. The revision includes descriptions of new FlightGoggles features, such as a photogrammetric model of the MIT Stata Center, new rendering settings, and a Python API.
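
    To make the vehicle-in-the-loop idea concrete, the sketch below shows one frame of the loop under stated assumptions: get_mocap_pose and render are hypothetical stand-ins for the motion-capture and rendering interfaces, not the actual FlightGoggles API.

    import numpy as np

    def vehicle_in_the_loop_step(get_mocap_pose, render):
        """One frame: the real vehicle supplies its dynamics through motion
        capture (in motio); the camera image is rendered synthetically at that
        pose (in silico), so aerodynamics etc. never need to be modeled."""
        pose = get_mocap_pose()  # 4x4 pose of the physical vehicle
        image = render(pose)     # photorealistic exteroceptive measurement
        return pose, image

    # Stand-in stubs illustrating the expected shapes:
    pose, image = vehicle_in_the_loop_step(
        lambda: np.eye(4),
        lambda p: np.zeros((480, 640, 3), dtype=np.uint8))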

    ROS2 versus AUTOSAR: Automated Parking System case study

    Vehicles are complex systems, combining several engineering disciplines such as mechanical, electrical, electronic, software, and telecommunication engineering. In recent decades, most innovations in the automotive domain have been achieved through a combination of electronics and software. Consequently, software development and deployment has become a highly sophisticated engineering process to manage and integrate. With the introduction of artificial intelligence, automated driving has become a reality; however, it has also increased the requirements on system design. One widely accepted approach to managing complexity is to divide the system into subsystems through a well-defined architecture. The architecture of an autonomous system must guarantee that the self-driving functionality remains safe across a broad range of operational domains. The challenge is how to design a system architecture that is reliable and resilient to a changing context. The automotive industry has well-established standards and development practices, but it is open to exploring and integrating solutions from other domains such as the Internet of Things and robotics. In the area of autonomous systems, the capabilities of the robotics middleware ROS2 have been used for prototyping purposes. It is an open question whether ROS2 is suitable for safety-relevant automotive applications. This master thesis addresses this challenge by evaluating the possible application of ROS2 in the automotive domain. The development consists of implementing an architecture for an autonomous driving case study, an Automated Parking System, which adapts to its context by switching between different operational modes. The Automated Parking System has been implemented and validated in a simulation environment. The experimental results show which benefits ROS2 brings compared with the standardised automotive AUTOSAR architecture.
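
    Since the thesis centers on a mode-switching function built on ROS2, a minimal rclpy sketch of such a node is given below; the topic name, message type, and mode set are illustrative assumptions rather than the thesis implementation.

    import rclpy
    from rclpy.node import Node
    from std_msgs.msg import String

    class ParkingModeManager(Node):
        """Switches between operational modes of a parking function."""
        MODES = {"SEARCHING", "MANEUVERING", "PARKED", "DEGRADED"}

        def __init__(self):
            super().__init__("parking_mode_manager")
            self.mode = "SEARCHING"
            # mode requests derived from the driving context arrive here
            self.create_subscription(String, "context_event", self.on_event, 10)

        def on_event(self, msg):
            if msg.data in self.MODES and msg.data != self.mode:
                self.get_logger().info(f"switching mode: {self.mode} -> {msg.data}")
                self.mode = msg.data

    def main():
        rclpy.init()
        rclpy.spin(ParkingModeManager())

    if __name__ == "__main__":
        main()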

    SPOTS: Stable Placement of Objects with Reasoning in Semi-Autonomous Teleoperation Systems

    Pick-and-place is one of the fundamental tasks in robotics research. However, attention has mostly focused on the "pick" task, leaving the "place" task relatively unexplored. In this paper, we address the problem of placing objects in the context of a teleoperation framework. In particular, we focus on two aspects of the place task: the stability robustness and the contextual reasonableness of object placements. Our proposed method combines simulation-driven physical stability verification via real-to-sim with the semantic reasoning capability of large language models. In other words, given place context information (e.g., user preferences, the object to place, and current scene information), our method outputs a probability distribution over possible placement candidates, accounting for both the robustness and the reasonableness of the place task. Our method is extensively evaluated in two simulation environments and one real-world environment, and we show that it can greatly increase the physical plausibility as well as the contextual soundness of placements while respecting user preferences. Comment: 7 pages
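
    One plausible reading of the scoring step is sketched below: per-candidate stability scores from physics simulation are fused with semantic-reasonableness scores from a language model into a single distribution. The normalized product is an illustrative fusion rule, not necessarily the paper's exact formulation.

    import numpy as np

    def placement_distribution(stability_scores, semantic_scores):
        """Probability distribution over placement candidates, favoring spots
        that are both physically stable and contextually reasonable."""
        scores = np.asarray(stability_scores) * np.asarray(semantic_scores)
        return scores / scores.sum()

    # e.g. three candidate poses for placing a cup on a cluttered desk:
    probs = placement_distribution([0.9, 0.6, 0.2], [0.5, 0.8, 0.9])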

    Augmented Reality Simulation Modules for EVD Placement Training and Planning Aids

    When a novice neurosurgeon performs a psychomotor surgical task (e.g., navigating a tool into brain structures), there is an unavoidable risk of damaging healthy tissue and eloquent brain structures. When novices make multiple attempts, a set of undesirable trajectories is created, resulting in the potential for surgical complications. It is therefore important that novices not only aim for a high level of surgical mastery but also receive deliberate training in common neurosurgical procedures and their underlying tasks. Surgical simulators have emerged as an effective method for teaching novices in safe, error-free training environments. The design of neurosurgical simulators requires a comprehensive approach to development. With that in mind, we demonstrate a detailed case study in which two Augmented Reality (AR) training simulation modules were designed and implemented through the adoption of Model-driven Engineering. User performance evaluation is a key aspect of surgical simulation validity. Many AR surgical simulators become obsolete, either because they do not support enough surgical scenarios or because they were validated through subjective assessments that did not meet every need. Accordingly, we demonstrate the feasibility of the AR simulation modules through two user studies, objectively measuring novices' performance with quantitative metrics. Neurosurgical simulators are also prone to perceptual distance underestimation, and few investigations have addressed improving user depth perception in head-mounted-display-based AR systems with perceptual motion cues. Consequently, we report the results of our investigation into whether head motion and perceptual motion cues influence users' performance.
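
    As an example of the kind of quantitative metric such a user study might use (an assumption, not the modules' actual evaluation code), tool-tip deviation from the planned trajectory can be computed as a point-to-line distance:

    import numpy as np

    def deviation_from_planned_path(samples, entry, direction):
        """Mean and maximum perpendicular distance of recorded tool-tip samples
        from the planned trajectory line (entry point plus direction vector)."""
        u = np.asarray(direction, float)
        u = u / np.linalg.norm(u)
        v = np.asarray(samples, float) - np.asarray(entry, float)
        # remove the component along the planned line; keep the orthogonal part
        d = np.linalg.norm(v - np.outer(v @ u, u), axis=1)
        return d.mean(), d.max()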

    Autonomous robot systems and competitions: proceedings of the 12th International Conference

    These are the proceedings of the 2012 edition of the scientific meeting of the Portuguese Robotics Open (ROBOTICA'2012). The meeting aims to disseminate scientific contributions and to promote discussion of theories, methods, and experiences in areas of relevance to autonomous robotics and robotic competitions. All accepted contributions are included in this proceedings book. The conference program also included an invited talk by Dr.ir. Raymond H. Cuijpers, from the Department of Human Technology Interaction of Eindhoven University of Technology, Netherlands. The conference is kindly sponsored by the IEEE Portugal Section / IEEE RAS Chapter and SPR-Sociedade Portuguesa de Robótica.