
    A Framework for Semi-automatic Fiducial Localization in Volumetric Images

    Get PDF
    Fiducial localization in volumetric images is a common task performed by image-guided navigation and augmented reality systems. These systems often rely on fiducials for image-space to physical-space registration, or as easily identifiable structures for registration validation purposes. Automated methods for fiducial localization in volumetric images are available. Unfortunately, these methods are not generalizable as they explicitly utilize strong a priori knowledge such as fiducial intensity values in CT, or known spatial configurations as part of the algorithm. Thus, manual localization has remained the most general approach, readily applicable across fiducial types and imaging modalities. The main drawbacks of manual localization are the variability and accuracy errors associated with visual localization. We describe a semi-automatic fiducial localization approach that combines the strengths of the human operator and an underlying computational system. The operator identifies the rough location of the fiducial, and the computational system accurately localizes it via intensity-based registration using the mutual information similarity measure. This approach is generic, implicitly accommodating all fiducial types and imaging modalities. The framework was evaluated using five fiducial types and three imaging modalities. We obtained a maximal localization accuracy error of 0.35 mm, with a maximal precision variability of 0.5 mm.
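    A minimal sketch of the idea described in this abstract, assuming a synthetic fiducial template, an operator-provided seed voxel and a small integer search radius (all illustrative choices, not the authors' implementation): the seed is refined by maximizing histogram-based mutual information between the template and patches extracted from the volume.

        import numpy as np

        def mutual_information(a, b, bins=32):
            """Histogram-based mutual information between two equally sized patches."""
            joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
            pxy = joint / joint.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0                                  # avoid log(0)
            return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

        def refine_fiducial(volume, template, seed, search_radius=5):
            """Exhaustive search for the voxel offset around `seed` that maximizes
            MI between `template` and the corresponding volume patch."""
            half = np.array(template.shape) // 2
            best_score, best_pos = -np.inf, np.array(seed)
            for dz in range(-search_radius, search_radius + 1):
                for dy in range(-search_radius, search_radius + 1):
                    for dx in range(-search_radius, search_radius + 1):
                        c = np.array(seed) + (dz, dy, dx)
                        lo, hi = c - half, c + half + 1
                        if (lo < 0).any() or (hi > volume.shape).any():
                            continue
                        patch = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
                        score = mutual_information(patch, template)
                        if score > best_score:
                            best_score, best_pos = score, c
            return best_pos, best_score

    This integer-grid search only illustrates the operator-seeded, MI-driven refinement loop; sub-voxel accuracy of the kind reported above would additionally require interpolation or a continuous optimizer.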

    Artificial Intelligence-based Motion Tracking in Cancer Radiotherapy: A Review

    Full text link
    Radiotherapy aims to deliver a prescribed dose to the tumor while sparing neighboring organs at risk (OARs). Increasingly complex treatment techniques such as volumetric modulated arc therapy (VMAT), stereotactic radiosurgery (SRS), stereotactic body radiotherapy (SBRT), and proton therapy have been developed to deliver doses more precisely to the target. While such technologies have improved dose delivery, the implementation of intra-fraction motion management to verify tumor position at the time of treatment has become increasingly relevant. Recently, artificial intelligence (AI) has demonstrated great potential for real-time tracking of tumors during treatment. However, AI-based motion management faces several challenges, including bias in training data, poor transparency, difficult data collection, complex workflows and quality assurance, and limited sample sizes. This review presents the AI algorithms used for motion management and tracking of chest, abdominal, and pelvic tumors in radiotherapy and provides a summary of the literature on the topic. We also discuss the limitations of these algorithms and propose potential improvements.

    SEEG assistant: a 3DSlicer extension to support epilepsy surgery

    Get PDF

    Real-time intrafraction motion monitoring in external beam radiotherapy

    Get PDF
    Radiotherapy (RT) aims to deliver a spatially conformal dose of radiation to tumours while maximizing the dose sparing to healthy tissues. However, the internal patient anatomy is constantly moving due to respiratory, cardiac, gastrointestinal and urinary activity. The long-term goal of the RT community to 'see what we treat, as we treat' and to act on this information instantaneously has resulted in rapid technological innovation. Specialized treatment machines, such as robotic or gimbal-steered linear accelerators (linacs) with in-room imaging suites, have been developed specifically for real-time treatment adaptation. Additional equipment, such as stereoscopic kilovoltage (kV) imaging, ultrasound transducers and electromagnetic transponders, has been developed for intrafraction motion monitoring on conventional linacs. Magnetic resonance imaging (MRI) has been integrated with cobalt treatment units and more recently with linacs. In addition to hardware innovation, software development has played a substantial role in the development of motion monitoring methods based on respiratory motion surrogates and planar kV or megavoltage (MV) imaging that is available on standard-equipped linacs. In this paper, we review and compare the different intrafraction motion monitoring methods proposed in the literature and demonstrated in real time on clinical data, as well as their possible future developments. We then discuss general considerations on validation and quality assurance for clinical implementation. Besides photon RT, particle therapy is increasingly used to treat moving targets. However, transferring motion monitoring technologies from linacs to particle beam lines presents substantial challenges. Lessons learned from the implementation of real-time intrafraction monitoring for photon RT will be used as a basis to discuss the implementation of these methods for particle RT.
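    One of the software-only monitoring approaches mentioned above, locating the target in successive planar kV or MV frames, can be sketched with normalized cross-correlation template matching. This is an illustrative toy, not a method taken from the review; the brute-force search and the pixel-space output are simplifying assumptions.

        import numpy as np

        def ncc_track(frame, template):
            """Return the (row, col) of the best normalized cross-correlation match
            of `template` inside a single planar kV/MV `frame` (brute force)."""
            th, tw = template.shape
            t = (template - template.mean()) / (template.std() + 1e-8)
            best, best_rc = -np.inf, (0, 0)
            for r in range(frame.shape[0] - th + 1):
                for c in range(frame.shape[1] - tw + 1):
                    w = frame[r:r + th, c:c + tw]
                    w = (w - w.mean()) / (w.std() + 1e-8)
                    score = float((t * w).mean())
                    if score > best:
                        best, best_rc = score, (r, c)
            return best_rc, best

    A clinical implementation would replace the brute-force loop with FFT-based correlation or a learned tracker, convert the pixel displacement to millimetres using the imaging geometry, and feed the result to the gating or adaptation logic.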

    IMPROVING DAILY CLINICAL PRACTICE WITH ABDOMINAL PATIENT SPECIFIC 3D MODELS

    Get PDF
    This thesis proposes methods and procedures for proficiently introducing patient-specific 3D models into daily clinical practice for the diagnosis and treatment of abdominal diseases. The objective of the work is to provide and visualize quantitative geometrical and topological information on the anatomy of interest, and to develop systems that improve radiology and surgery. 3D visualization drastically simplifies the interpretation of medical images and provides benefits both in the diagnostic and in the surgical planning phases. Further advantages can be introduced by registering virtual pre-operative information (3D models) with real intra-operative information (patient and surgical instruments). The surgeon can use mixed-reality systems that allow him/her to see covered structures before reaching them, surgical navigators to view the scene (anatomy and instruments) from different points of view, and smart mechatronic devices which, knowing the anatomy, actively assist him/her. All these aspects are beneficial in terms of safety, efficiency and financial resources for the physicians, for the patient and for the healthcare system. The entire process, from the acquisition of volumetric radiological images up to the use of 3D anatomical models inside the surgical room, has been studied and specific applications have been developed. A segmentation procedure has been designed taking into account acquisition protocols commonly used in radiology departments, and a software tool that allows efficient 3D models to be obtained has been implemented and tested. The alignment problem has been investigated by examining the various sources of error during image acquisition in the radiology department and during the execution of the intervention. A rigid-body registration procedure compatible with the surgical environment has been defined and implemented. The procedure has been integrated in a surgical navigation system and serves as an initial registration for more accurate alignment methods based on deformable approaches. Monoscopic and stereoscopic 3D localization machine-vision routines, using laparoscopic and/or generic camera images, have been implemented to obtain intra-operative information that can be used to model abdominal deformations. Furthermore, the use of this information for fusion and registration purposes enhances the potential of computer-assisted surgery. In particular, a precise alignment between virtual and real anatomies for mixed-reality purposes, and the development of tracker-free navigation systems, has been obtained by elaborating video images and providing an analytical adaptation of the virtual camera to the real camera. Clinical tests demonstrating the usability of the proposed solutions are reported. Test results and the appreciation of radiologists and surgeons for the proposed prototypes encourage their integration in daily clinical practice and future developments.
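    The rigid-body, image-space to physical-space alignment step discussed above can be illustrated with a standard point-based least-squares registration (Kabsch/Umeyama style). This sketch is illustrative and not the thesis's actual procedure; the corresponding point pairs (for example, fiducials localized in the image and touched with a tracked pointer) are assumed to be given.

        import numpy as np

        def rigid_register(image_pts, physical_pts):
            """Least-squares rotation R and translation t such that
            R @ image_pts[i] + t approximates physical_pts[i]; inputs are (N, 3)."""
            src = np.asarray(image_pts, dtype=float)
            dst = np.asarray(physical_pts, dtype=float)
            src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
            H = (src - src_c).T @ (dst - dst_c)                     # cross-covariance
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
            R = Vt.T @ D @ U.T
            t = dst_c - R @ src_c
            return R, t

        def fre(R, t, image_pts, physical_pts):
            """Root-mean-square fiducial registration error of the estimated transform."""
            mapped = (R @ np.asarray(image_pts, float).T).T + t
            diff = mapped - np.asarray(physical_pts, float)
            return float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))

    Here `fre` reports the root-mean-square fiducial registration error, a common sanity check before the transform is used for navigation or as the starting point of a deformable refinement.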

    Improving Radiotherapy Targeting for Cancer Treatment Through Space and Time

    Get PDF
    Radiotherapy is a common medical treatment in which lethal doses of ionizing radiation are preferentially delivered to cancerous tumors. In external beam radiotherapy, radiation is delivered by a remote source which sits several feet from the patient's surface. Although great effort is taken in properly aligning the target to the path of the radiation beam, positional uncertainties and other errors can compromise targeting accuracy. Such errors can lead to a failure in treating the target, and inflict significant toxicity on healthy tissues which are inadvertently exposed to high radiation doses. Tracking the movement of targeted anatomy between and during treatment fractions provides valuable localization information that allows for the reduction of these positional uncertainties. Inter- and intra-fraction anatomical localization data not only allow for more accurate treatment setup, but also potentially allow for 1) retrospective treatment evaluation, 2) margin reduction and modification of the dose distribution to accommodate daily anatomical changes (called 'adaptive radiotherapy'), and 3) targeting interventions during treatment (for example, suspending radiation delivery while the target is outside the path of the beam). The research presented here investigates the use of inter- and intra-fraction localization technologies to improve radiotherapy targeting through enhanced spatial and temporal accuracy. These technologies provide significant advancements in cancer treatment compared to standard clinical technologies. Furthermore, work is presented on the use of localization data acquired from these technologies in adaptive treatment planning, an investigational technique in which the distribution of planned dose is modified during the course of treatment based on biological and/or geometrical changes in the patient's anatomy. The focus of this research is directed at abdominal sites, which have historically been central to the problem of motion management in radiation therapy.
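    The gating intervention mentioned above, suspending delivery while the target is outside the beam path, reduces to a simple geometric test once a real-time target position is available. The tolerance value and the centroid-distance criterion below are illustrative assumptions, not a clinical protocol.

        import numpy as np

        def beam_enabled(tracked_pos_mm, planned_pos_mm, tolerance_mm=3.0):
            """True while the tracked target centroid stays within the gating tolerance."""
            offset = np.asarray(tracked_pos_mm, float) - np.asarray(planned_pos_mm, float)
            return bool(np.linalg.norm(offset) <= tolerance_mm)

        def duty_cycle(trajectory_mm, planned_pos_mm, tolerance_mm=3.0):
            """Fraction of tracked samples during which the beam would remain on."""
            flags = [beam_enabled(p, planned_pos_mm, tolerance_mm) for p in trajectory_mm]
            return sum(flags) / len(flags)

    The same positional stream could instead drive margin reduction or adaptive replanning rather than gating, which is the direction explored in the work described above.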

    Development, Implementation and Pre-clinical Evaluation of Medical Image Computing Tools in Support of Computer-aided Diagnosis: Respiratory, Orthopedic and Cardiac Applications

    Get PDF
    Over the last decade, image processing tools have become crucial components of all clinical and research efforts involving medical imaging and associated applications. The imaging data available to radiologists continue to increase their workload, raising the need for efficient identification and visualization of the image data required for clinical assessment. Computer-aided diagnosis (CAD) in medical imaging has evolved in response to the need for techniques that can assist radiologists to increase throughput while reducing human error and bias without compromising the outcome of the screening, diagnosis or disease assessment. More intelligent, but simple, consistent and less time-consuming methods will become more widespread, reducing user variability while also revealing information in a clearer, more visual way. Several routine image processing approaches, including localization, segmentation, registration, and fusion, are critical for enhancing and enabling the development of CAD techniques. However, changes in clinical workflow require significant adjustments and re-training and, despite the efforts of the academic research community to develop state-of-the-art algorithms and high-performance techniques, their footprint often hampers their clinical use. Currently, the main challenge seems to be not the lack of tools and techniques for medical image processing, analysis, and computing, but rather the lack of clinically feasible solutions that leverage the already developed and existing tools and techniques, as well as a demonstration of the potential clinical impact of such tools. Recently, more and more efforts have been dedicated to devising new algorithms for localization, segmentation or registration, while their intended clinical use and actual utility are dwarfed by scientific, algorithmic and developmental novelty that results only in incremental improvements over existing algorithms. In this thesis, we propose and demonstrate the implementation and evaluation of several methodological guidelines that ensure the development of image processing tools for localization, segmentation and registration, and illustrate their use across several medical imaging modalities (X-ray, computed tomography, ultrasound and magnetic resonance imaging) and several clinical applications: lung CT image registration in support of the assessment of pulmonary nodule growth rate and disease progression from thoracic CT images; automated reconstruction of standing X-ray panoramas from multi-sector X-ray images for assessment of long limb mechanical axis and knee misalignment; and left and right ventricle localization, segmentation, reconstruction and ejection fraction measurement from cine cardiac MRI or multi-plane trans-esophageal ultrasound images for cardiac function assessment. When devising and evaluating the developed tools, we use clinical patient data to illustrate the inherent clinical challenges associated with highly variable imaging data that need to be addressed before potential pre-clinical validation and implementation. In an effort to provide plausible solutions to the selected applications, the proposed methodological guidelines ensure the development of image processing tools that achieve sufficiently reliable solutions which not only have the potential to address the clinical needs, but are sufficiently streamlined to be translated into eventual clinical tools given proper implementation.
    G1: Reducing the number of degrees of freedom (DOF) of the designed tool, with a plausible example being avoiding the use of inefficient non-rigid image registration methods. This guideline addresses the risk of artificial deformation during registration and it clearly aims at reducing complexity and the number of degrees of freedom.
    G2: The use of shape-based features to most efficiently represent the image content, either by using edges instead of or in addition to intensities and motion, where useful. Edges capture the most useful information in the image and can be used to identify the most important image features. As a result, this guideline ensures a more robust performance when key image information is missing.
    G3: Efficient method of implementation. This guideline focuses on efficiency in terms of the minimum number of steps required and avoiding the recalculation of terms that only need to be calculated once in an iterative process. An efficient implementation leads to reduced computational effort and improved performance.
    G4: Commence the workflow by establishing an optimized initialization and gradually converge toward the final acceptable result. This guideline aims to ensure reasonable outcomes in consistent ways and it avoids convergence to local minima, while gradually ensuring convergence to the global minimum solution.
    These guidelines lead to the development of interactive, semi-automated or fully-automated approaches that still enable the clinicians to perform final refinements, while they reduce the overall inter- and intra-observer variability, reduce ambiguity, increase accuracy and precision, and have the potential to yield mechanisms that will aid with providing an overall more consistent diagnosis in a timely fashion.
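    Guidelines G3 and G4 can be made concrete with a toy translation-only 2D registration: the fixed image is normalized once outside the search loop (G3), and a coarse search on downsampled images supplies the initialization for a narrow full-resolution refinement (G4). The correlation score, downsampling factor and search radii are illustrative assumptions, not the thesis's algorithms.

        import numpy as np

        def _normalize(img):
            return (img - img.mean()) / (img.std() + 1e-8)

        def _best_shift(fixed_n, moving, shifts):
            """Return the (dy, dx) in `shifts` that maximizes correlation; the moving
            image is circularly shifted with np.roll to keep the sketch short."""
            best, best_s = -np.inf, (0, 0)
            for dy, dx in shifts:
                shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
                score = float((fixed_n * _normalize(shifted)).mean())
                if score > best:
                    best, best_s = score, (dy, dx)
            return best_s

        def register_translation(fixed, moving, coarse_factor=4, radius=8, refine=4):
            fixed_n = _normalize(fixed)                 # G3: computed once, reused below
            # G4: a coarse search on downsampled images provides the initialization ...
            f_small = _normalize(fixed[::coarse_factor, ::coarse_factor])
            m_small = moving[::coarse_factor, ::coarse_factor]
            coarse = [(dy, dx) for dy in range(-radius, radius + 1)
                               for dx in range(-radius, radius + 1)]
            cy, cx = _best_shift(f_small, m_small, coarse)
            cy, cx = cy * coarse_factor, cx * coarse_factor
            # ... after which only a small neighbourhood is searched at full resolution.
            fine = [(cy + dy, cx + dx) for dy in range(-refine, refine + 1)
                                       for dx in range(-refine, refine + 1)]
            return _best_shift(fixed_n, moving, fine)

    The coarse-to-fine structure keeps the full-resolution search small and makes the outcome far less sensitive to the starting position, which is exactly the behaviour G4 asks for.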

    Predicting Slice-to-Volume Transformation in Presence of Arbitrary Subject Motion

    Full text link
    This paper aims to solve a fundamental problem in intensity-based 2D/3D registration, which concerns the limited capture range and the need for very good initialization of state-of-the-art image registration methods. We propose a regression approach that learns to predict rotations and translations of arbitrary 2D image slices from 3D volumes, with respect to a learned canonical atlas co-ordinate system. To this end, we utilize Convolutional Neural Networks (CNNs) to learn the highly complex regression function that maps 2D image slices into their correct position and orientation in 3D space. Our approach is attractive in challenging imaging scenarios, where significant subject motion complicates reconstruction of 3D volumes from 2D slice data. We extensively evaluate the effectiveness of our approach quantitatively on simulated MRI brain data with extreme random motion. We further demonstrate qualitative results on fetal MRI, where our method is integrated into a full reconstruction and motion compensation pipeline. With our CNN regression approach we obtain an average prediction error of 7 mm on simulated data, and convincing reconstruction quality for images of very young fetuses where previous methods fail. We further discuss applications to computed tomography and X-ray projections. Our approach is a general solution to the 2D/3D initialization problem. It is computationally efficient, with prediction times per slice of a few milliseconds, making it suitable for real-time scenarios.
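    A minimal PyTorch sketch of the regression idea described above: a small CNN maps a single 2D slice to six rigid-pose parameters (three rotations, three translations) in a canonical atlas frame, supervised with known simulated motion. The architecture, slice size and loss are assumptions for illustration, not the paper's exact network.

        import torch
        import torch.nn as nn

        class SlicePoseRegressor(nn.Module):
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1),
                )
                self.head = nn.Linear(64, 6)        # (rx, ry, rz, tx, ty, tz)

            def forward(self, x):                   # x: (batch, 1, H, W) slice intensities
                return self.head(self.features(x).flatten(1))

        # Training step sketch: supervise with the known simulated motion parameters.
        model = SlicePoseRegressor()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        slices = torch.randn(8, 1, 128, 128)        # stand-in for motion-corrupted slices
        target_pose = torch.randn(8, 6)             # stand-in for ground-truth poses
        loss = nn.functional.mse_loss(model(slices), target_pose)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    At inference, the predicted pose would serve only as the initialization that intensity-based 2D/3D registration then refines, which is how the capture-range problem described above is addressed.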

    Improvements in the registration of multimodal medical imaging : application to intensity inhomogeneity and partial volume corrections

    Get PDF
    Alignment or registration of medical images plays a relevant role in clinical diagnostic and treatment decisions as well as in research settings. With the advent of new technologies for multimodal imaging, robust registration of functional and anatomical information is still a challenge, particularly in small-animal imaging, given that certain anatomical parts, such as the brain, have less structural content than in humans. Besides, patient-dependent and acquisition artefacts affecting the information content of the images further complicate registration, as is the case of intensity inhomogeneities (IIH) appearing in MRI and the partial volume effect (PVE) attached to PET imaging. Reference methods exist for accurate image registration, but their performance is severely deteriorated in situations involving little image overlap. While several approaches to IIH and PVE correction exist, these methods still do not guarantee or rely on robust registration. This thesis focuses on overcoming current limitations of registration to enable novel IIH and PVE correction methods.

    Recent trends, technical concepts and components of computer-assisted orthopedic surgery systems: A comprehensive review

    Get PDF
    Computer-assisted orthopedic surgery (CAOS) systems have become one of the most important and challenging types of system in clinical orthopedics, as they enable precise treatment of musculoskeletal diseases using modern clinical navigation systems and surgical tools. This paper presents a comprehensive review of recent trends and possibilities of CAOS systems. There are three types of surgical planning systems: systems based on volumetric images (computed tomography (CT), magnetic resonance imaging (MRI) or ultrasound images), systems that utilize either 2D or 3D fluoroscopic images, and systems that utilize kinetic information about the joints and morphological information about the target bones. This review is focused on three fundamental aspects of CAOS systems: their essential components, the types of CAOS systems, and the mechanical tools used in CAOS systems. We also outline the possibilities of using ultrasound computer-assisted orthopedic surgery (UCAOS) systems as an alternative to conventionally used CAOS systems.