
    Augmented Reality and Artificial Intelligence in Image-Guided and Robot-Assisted Interventions

    In minimally invasive orthopedic procedures, the surgeon places wires, screws, and surgical implants through the muscles and bony structures under image guidance. These interventions require alignment of the pre- and intra-operative patient data, the intra-operative scanner, surgical instruments, and the patient. Suboptimal interaction with patient data and the challenge of mastering 3D anatomy from ill-posed 2D interventional images are essential concerns in image-guided therapies. State-of-the-art approaches often support the surgeon with external navigation systems or ill-conditioned image-based registration methods, both of which have certain drawbacks. Augmented reality (AR) has been introduced into operating rooms over the last decade; however, in image-guided interventions it has often been considered merely a visualization device improving traditional workflows. Consequently, the technology has not yet gained the maturity it requires to redefine procedures, user interfaces, and interactions. This dissertation investigates the applications of AR, artificial intelligence, and robotics in interventional medicine. Our solutions were applied to a broad spectrum of problems and tasks, namely improving imaging and acquisition, image computing and analytics for registration and image understanding, and enhancing interventional visualization. The benefits of these approaches were also demonstrated in robot-assisted interventions. We revealed how exemplary workflows are redefined via AR by taking full advantage of head-mounted displays when they are entirely co-registered with the imaging systems and the environment at all times. The proposed AR landscape is enabled by co-localizing the users and the imaging devices via the operating room environment and by exploiting all involved frustums to move spatial information between different bodies. The system's awareness of the geometric and physical characteristics of X-ray imaging allows the exploration of different human-machine interfaces. We also leveraged the principles governing image formation and combined them with deep learning and RGBD sensing to fuse images and reconstruct interventional data. We hope that our holistic approaches to improving the interface of surgery and enhancing the usability of interventional imaging not only augment the surgeon's capabilities but also improve the surgical team's experience in carrying out effective interventions with reduced complications.
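    The co-registration described above amounts to chaining rigid transforms between tracked coordinate frames (head-mounted display, imaging device, operating-room environment). The following is a minimal sketch of that bookkeeping, assuming hypothetical frame names and poses; it illustrates the generic frame-chaining idea, not the dissertation's actual calibration pipeline.

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def chain(*transforms):
    """Compose transforms left to right: chain(A_T_B, B_T_C) maps frame C into frame A."""
    T = np.eye(4)
    for X in transforms:
        T = T @ X
    return T

# Hypothetical frames: the operating room (world), the HMD, and the C-arm,
# each assumed to be tracked against the shared room environment.
world_T_hmd  = make_transform(np.eye(3), np.array([0.5, 1.6, 0.2]))  # HMD pose in the room
world_T_carm = make_transform(np.eye(3), np.array([1.2, 1.1, 0.0]))  # C-arm pose in the room

# Map a point expressed in the C-arm frame into the HMD frame for rendering.
hmd_T_carm = chain(np.linalg.inv(world_T_hmd), world_T_carm)
p_carm = np.array([0.0, 0.0, 0.4, 1.0])  # homogeneous point near the X-ray source axis
p_hmd = hmd_T_carm @ p_carm
print(p_hmd[:3])
```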

    Pivot calibration concept for sensor attached mobile c-arms

    Medical augmented reality has been actively studied for decades and many methods have been proposed to revolutionize clinical procedures. One example is the camera augmented mobile C-arm (CAMC), which provides a real-time video augmentation onto medical images by rigidly mounting and calibrating a camera to the imaging device. Since then, several CAMC variations have been suggested by calibrating 2D/3D cameras, trackers, and more recently a Microsoft HoloLens to the C-arm. Different calibration methods have been applied to establish the correspondence between the rigidly attached sensor and the imaging device. A crucial step for these methods is the acquisition of X-ray images or 3D reconstruction volumes, therefore requiring the emission of ionizing radiation. In this work, we analyze the mechanical motion of the device and propose an alternative method to calibrate sensors to the C-arm without emitting any radiation. Given a sensor rigidly attached to the device, we introduce an extended pivot calibration concept to compute the fixed translation from the sensor to the C-arm rotation center. The fixed relationship between the sensor and the rotation center can be formulated as a pivot calibration problem with the pivot point moving on a locus. Our method exploits the rigid C-arm motion describing a torus surface to solve this calibration problem. We explain the geometry of the C-arm motion and its relation to the attached sensor, propose a calibration algorithm, and show its robustness against noise, as well as trajectory and observed pose density, by computer simulations. We discuss this geometric-based formulation and its potential extensions to different C-arm applications. Comment: Accepted for Image-Guided Procedures, Robotic Interventions, and Modeling 2020, Houston, TX, US.
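    The extended formulation builds on standard pivot calibration, in which a sensor rigidly attached to a tool pivots about a fixed point and the constant offset is recovered by linear least squares. The sketch below shows only that classical fixed-point case; the paper's contribution, letting the pivot point move on a torus-shaped locus, is not reproduced here, and the function and variable names are illustrative.

```python
import numpy as np

def pivot_calibration(rotations, translations):
    """
    Standard pivot calibration by linear least squares.
    For every tracked pose (R_i, p_i) of the sensor, the fixed offset t_off and the
    stationary pivot point p_pivot satisfy  R_i @ t_off + p_i = p_pivot.
    Stacking all poses gives the linear system  [R_i  -I] [t_off; p_pivot] = -p_i.
    """
    A, b = [], []
    for R, p in zip(rotations, translations):
        A.append(np.hstack([np.asarray(R), -np.eye(3)]))  # one 3x6 block per pose
        b.append(-np.asarray(p))
    A = np.vstack(A)
    b = np.concatenate(b)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    t_off, p_pivot = x[:3], x[3:]
    return t_off, p_pivot
```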

    The CAMP Lab Computer Aided Medical Procedures and Augmented Reality

    The CAMP lab is integrated within the Department of Informatics at the Technical University of Munich and is considered one of the leading groups concerned with medical augmented reality, computer assisted interventions, as well as non-medical computer vision. In this short paper, we give an outline of the history of the lab and present a summary of some of our past and current activities relevant to augmented and virtual reality in computer assisted interventions and surgeries. References to published work in major journals and conferences allow the reader to access more detailed information on each subject. It was not possible to cover all aspects of our research, but we hope this short paper provides an overview of some of them. Readers are also invited to visit our website at http://campar.in.tum.de for more information on our work. Applications for PhD and PostDoc positions can be made through the form available there.

    Development, Implementation and Pre-clinical Evaluation of Medical Image Computing Tools in Support of Computer-aided Diagnosis: Respiratory, Orthopedic and Cardiac Applications

    Over the last decade, image processing tools have become crucial components of all clinical and research efforts involving medical imaging and associated applications. The ever-growing amount of imaging data available to radiologists continues to increase their workload, raising the need for efficient identification and visualization of the image data required for clinical assessment. Computer-aided diagnosis (CAD) in medical imaging has evolved in response to the need for techniques that can assist radiologists in increasing throughput while reducing human error and bias, without compromising the outcome of screening, diagnosis or disease assessment. More intelligent, yet simple, consistent and less time-consuming methods will become more widespread, reducing user variability while also revealing information in a clearer, more visual way. Several routine image processing approaches, including localization, segmentation, registration, and fusion, are critical for enhancing and enabling the development of CAD techniques. However, changes in clinical workflow require significant adjustments and re-training and, despite the efforts of the academic research community to develop state-of-the-art algorithms and high-performance techniques, their footprint often hampers their clinical use. Currently, the main challenge seems to be not the lack of tools and techniques for medical image processing, analysis, and computing, but rather the lack of clinically feasible solutions that leverage the already developed and existing tools and techniques, as well as a demonstration of the potential clinical impact of such tools. Recently, more and more effort has been dedicated to devising new algorithms for localization, segmentation or registration, while their intended clinical use and actual utility are dwarfed by scientific, algorithmic and developmental novelty that results only in incremental improvements over already existing algorithms. In this thesis, we propose and demonstrate the implementation and evaluation of several methodological guidelines that ensure the development of image processing tools --- localization, segmentation and registration --- and illustrate their use across several medical imaging modalities --- X-ray, computed tomography, ultrasound and magnetic resonance imaging --- and several clinical applications: (1) lung CT image registration in support of the assessment of pulmonary nodule growth rate and disease progression from thoracic CT images; (2) automated reconstruction of standing X-ray panoramas from multi-sector X-ray images for the assessment of long limb mechanical axis and knee misalignment; (3) left and right ventricle localization, segmentation, reconstruction and ejection fraction measurement from cine cardiac MRI or multi-plane trans-esophageal ultrasound images for cardiac function assessment. When devising and evaluating the developed tools, we use clinical patient data to illustrate the inherent clinical challenges associated with highly variable imaging data that need to be addressed before potential pre-clinical validation and implementation. In an effort to provide plausible solutions to the selected applications, the proposed methodological guidelines ensure the development of image processing tools that achieve sufficiently reliable solutions, which not only have the potential to address the clinical needs but are sufficiently streamlined to be translated into eventual clinical tools provided proper implementation.
    G1: Reduce the number of degrees of freedom (DOF) of the designed tool; a plausible example is avoiding the use of inefficient non-rigid image registration methods. This guideline addresses the risk of artificial deformation during registration and clearly aims at reducing complexity and the number of degrees of freedom.
    G2: Use shape-based features to represent the image content most efficiently, either by using edges instead of or in addition to intensities and motion, where useful. Edges capture the most useful information in the image and can be used to identify the most important image features. As a result, this guideline ensures more robust performance when key image information is missing.
    G3: Implement the method efficiently. This guideline focuses on efficiency in terms of the minimum number of steps required and on avoiding the recalculation of terms that only need to be computed once in an iterative process. An efficient implementation leads to reduced computational effort and improved performance.
    G4: Commence the workflow by establishing an optimized initialization and gradually converge toward the final acceptable result. This guideline aims to ensure reasonable outcomes in a consistent way and avoids convergence to local minima, while gradually ensuring convergence to the global minimum solution.
    These guidelines lead to the development of interactive, semi-automated or fully-automated approaches that still enable clinicians to perform final refinements, while reducing the overall inter- and intra-observer variability, reducing ambiguity, increasing accuracy and precision, and potentially yielding mechanisms that will aid in providing an overall more consistent diagnosis in a timely fashion.
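    As a concrete, simplified illustration of G1 (few degrees of freedom), G3 (no redundant recomputation) and G4 (good initialization), the sketch below registers two 2D point sets rigidly: centroid alignment provides the initialization and a single closed-form decomposition replaces an iterative search. This is a generic textbook procedure chosen to illustrate the guidelines, not a specific method from the thesis.

```python
import numpy as np

def rigid_register_2d(src, dst):
    """
    Rigid (3-DOF: rotation + translation) registration of matched 2D point sets,
    illustrating G1 (few DOF) and G4 (good initialization): centroids are aligned
    first, then the optimal rotation is obtained in closed form (Kabsch/Procrustes),
    so no iterative search is needed (G3: everything is computed exactly once).
    src, dst: (N, 2) arrays of corresponding points.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)   # centroid initialization
    S, D = src - src_c, dst - dst_c
    U, _, Vt = np.linalg.svd(S.T @ D)                   # single decomposition
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:                            # guard against reflections
        Vt[-1] *= -1
        R = (U @ Vt).T
    t = dst_c - R @ src_c
    return R, t
```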

    Appearance Modelling and Reconstruction for Navigation in Minimally Invasive Surgery

    Minimally invasive surgery is playing an increasingly important role in patient care. Whilst its direct patient benefit in terms of reduced trauma, improved recovery and shortened hospitalisation has been well established, there is a sustained need for improved training in the existing procedures and for the development of new smart instruments to tackle the issues of visualisation, ergonomic control, and haptic and tactile feedback. For endoscopic intervention, the small field of view in the presence of a complex anatomy can easily introduce disorientation to the operator, as the tortuous access pathway is not always easy to predict and control with standard endoscopes. Effective training through simulation devices, based on either virtual reality or mixed-reality simulators, can help to improve the spatial awareness, consistency and safety of these procedures. This thesis examines the use of endoscopic videos for both simulation and navigation purposes. More specifically, it addresses the challenging problem of how to build high-fidelity subject-specific simulation environments for improved training and skills assessment. Issues related to mesh parameterisation and texture blending are investigated. With the maturity of computer vision in terms of both 3D shape reconstruction and localisation and mapping, vision-based techniques have enjoyed significant interest in recent years for surgical navigation. The thesis also tackles the problem of how to use vision-based techniques to provide a detailed 3D map and a dynamically expanded field of view, in order to improve spatial awareness and avoid operator disorientation. The key advantage of this approach is that it does not require additional hardware, and thus introduces minimal interference to the existing surgical workflow. The derived 3D map can be effectively integrated with pre-operative data, allowing both global and local 3D navigation by taking into account tissue structural and appearance changes. Both simulation and laboratory-based experiments are conducted throughout this research to assess the practical value of the methods proposed.
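    One simplified way to obtain the dynamically expanded field of view mentioned above is to compose pairwise homographies so that every endoscopic frame can be pasted into a common mosaic. The sketch below shows only that planar-mosaic bookkeeping, an assumption made purely for illustration; the thesis itself relies on vision-based 3D reconstruction and localisation rather than a planar model.

```python
import numpy as np

def accumulate_homographies(pairwise_H):
    """
    Given pairwise homographies H_i that map frame i into frame i-1, return the
    homographies mapping every frame into the first (reference) frame. Composing
    them lets each new endoscopic frame be warped into a growing mosaic,
    i.e. a dynamically expanded field of view.
    """
    to_ref = [np.eye(3)]
    for H in pairwise_H:
        to_ref.append(to_ref[-1] @ H)
    return [H / H[2, 2] for H in to_ref]   # normalize the projective scale

def map_point(H, xy):
    """Map a pixel (x, y) through a homography into the mosaic frame."""
    v = H @ np.array([xy[0], xy[1], 1.0])
    return v[:2] / v[2]
```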

    Geometrical Calibration of X-Ray Imaging With RGB Cameras for 3D Reconstruction

    We present a methodology to recover the geometrical calibration of conventional X-ray settings with the help of an ordinary video camera and visible fiducials that are present in the scene. After calibration, equivalent points of interest can be easily identified with the help of the epipolar geometry. The same procedure also allows the measurement of real anatomic lengths and angles and obtains accurate 3D locations from image points. Our approach completely eliminates the need for X-ray-opaque reference marks (and the necessary supporting frames), which can sometimes be invasive for the patient, occlude the radiographic picture, and end up projected outside the imaging sensor area in oblique protocols. Two possible frameworks are envisioned: a spatially shifting X-ray anode around the patient/object, and a patient that moves/rotates while the imaging system remains fixed. As a proof of concept, experiments with a device under test (DUT), an anthropomorphic phantom, and a real brachytherapy session have been carried out. The results show that it is possible to identify common points with a proper level of accuracy and to retrieve three-dimensional locations, lengths, and shapes with a millimetric level of precision. The presented approach is simple and compatible with both current and legacy widespread diagnostic X-ray imaging deployments, and it can represent a good and inexpensive alternative to other radiological modalities like CT. This work was carried out with the support of Information Storage S.L., University of Valencia (grant #CPI-15-170), CSD2007-00042 Consolider Ingenio CPAN (grant #CPAN13-TR01), as well as with the support of the Spanish Ministry of Industry, Energy and Tourism (grant TSI-100101-2013-019). Albiol Colomer, F.; Corbi, A.; Albiol Colomer, A. (2016). Geometrical Calibration of X-Ray Imaging With RGB Cameras for 3D Reconstruction. IEEE Transactions on Medical Imaging, 35(8):1952-1961. https://doi.org/10.1109/TMI.2016.2540929
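    Once the RGB camera and the X-ray projection are both calibrated, a matched pair of image points can be lifted to a 3D location by standard two-view triangulation, which is consistent with the epipolar-geometry framework described above. The sketch below is the generic linear (DLT) triangulation step, not the authors' exact pipeline; the projection matrices and point coordinates are placeholders.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """
    Linear (DLT) triangulation of a 3D point from two calibrated views.
    P1, P2 are 3x4 projection matrices (e.g. the RGB camera and the X-ray source
    modelled as a pinhole); x1, x2 are the matched 2D points in each image.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]   # dehomogenize to a 3D point in scene units
```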

    Diagnostic Radiology for Extended Imaging and Stitching Solutions with DR Panels

    In diagnostic radiology, large areas of the human body are sometimes examined with X-rays, for example in examinations of the spine or of the lower limbs. With digital cassettes (CR), thanks to their large surface area, such investigations can be performed in a single exposure using one cassette of size 35 x 84 cm (Kodak), 35 x 91 cm (Carestream) or 43 x 129 cm (Carestream). With digital flat panels (DR), because of their smaller size and higher cost, such investigations cannot, at the current state of the art, be performed with a single exposure: several exposures must be taken according to the extent of the examined area, and two or more radiographic images must then be merged. This operation is called "stitching" because the individual images are stitched together. Three techniques that enable a stitching examination with DR panels have been introduced and developed: rotational, linear and wide. The goal of this work is to highlight their differences and their issues, considering image quality and ease of use, in order to identify the best technique. The methods were evaluated against three criteria: image quality, convenience of use (including the time needed to perform an examination), and the simplicity of the mechanical and electronic development of the technique. Each method has strengths but also critical aspects, so choosing the best technique is not straightforward, because each has its own advantages and disadvantages. Today, rotational stitching is the most widely used because image quality is very good and there are no parallax errors. However, it is not a simple system to develop, because two different mechanical movements have to be managed. For this reason the linear stitching system was introduced, which is simpler from the mechanical and electronic point of view but degrades image quality. Wide stitching is the technique closest to the CR cassette approach and offers very good image quality, but the difficulty of developing a collimator capable of performing it and the high performance required from the X-ray tube are a major obstacle. The conclusion is that, although complex and expensive, rotational stitching is the best of the investigated techniques. An important role, lost with the advent of digital systems, was played by compensating filters. Their task was to shield the tissues that absorb more radiation (e.g. bone) relative to the more transparent body parts (soft tissue), making it possible to capture both tissue types in a single exposure without over- or under-exposed regions in the image. With CR systems, wedge filters were commonly used in lower-limb investigations that also included the pelvic region, to examine pathologies such as hip inclination. With DR systems, wedge filters are not used and are detrimental to the linear and rotational stitching methods. To simulate the effect of the wedge filter, DR systems must apply a filtering step, for example a logarithmic LUT, before merging the images.
    Compared with CR systems, the wider dynamic range of DR systems reduces the over- or under-exposed regions of the image, making it possible to recover almost all tissues. Together with the software filtering, this has made it possible to do without the compensating filter in DR stitching examinations. At the current state of the art, CR cassettes used for stitching examinations retain one major advantage over DR systems: the examination is performed in a single exposure, eliminating all motion artefacts. However, panel performance in terms of contrast, resolution and dose is better than that of CR and compensates for the disadvantage of multiple exposures. The study and improvement of digital panels aims to overcome the limits of this technique. To this end, cassette-size digital systems with wireless image download have recently come onto the market and are competitive with CR cassettes.
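    A rough sketch of the two software steps mentioned above, applying a logarithmic LUT to emulate the compensating wedge filter and blending two overlapping panel exposures, is given below. It assumes single-channel 16-bit images and a simple linear blend across the overlap band; actual vendor stitching algorithms are more sophisticated, and all names here are illustrative.

```python
import numpy as np

def log_lut(image, max_value=65535):
    """
    Apply a logarithmic look-up table to a raw single-channel DR panel image so
    that dense (high-attenuation) and soft-tissue regions both stay within the
    display range, approximating the role of the old compensating wedge filter.
    """
    img = image.astype(np.float64)
    out = np.log1p(img) / np.log1p(max_value)   # map intensities to [0, 1]
    return (out * max_value).astype(np.uint16)

def stitch_vertical(top, bottom, overlap):
    """
    Merge two vertically overlapping grayscale panel exposures with a linear
    blend across the overlap band (a simplified stand-in for the stitching step).
    """
    w = np.linspace(1.0, 0.0, overlap)[:, None]   # blending weights, top -> bottom
    blend = w * top[-overlap:].astype(np.float64) + (1 - w) * bottom[:overlap].astype(np.float64)
    return np.vstack([top[:-overlap], blend.astype(top.dtype), bottom[overlap:]])
```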