
    Medical Image Registration Framework Using Multiscale Edge Information

    Efficient multiscale deformable registration frameworks are proposed by combining an edge-preserving scale space (EPSS) with free-form deformation (FFD) for the registration of medical images, where multiscale edge information is used to optimize the registration process. The EPSS, derived from the total variation model with the L1 norm (TV-L1), provides useful spatial edge information for mutual information (MI) based registration. At each scale of the registration process, the selected edges and contours are strong enough to drive the deformation via the FFD grid, and the deformation fields are obtained in a coarse-to-fine manner. Two ways of implementing this idea within the deformable registration framework are proposed. Experiments on clinical images, including PET-CT and CT-CBCT, show improved accuracy and robustness compared with traditional methods for medical imaging systems.
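    As a rough sketch (not the authors' implementation), the mutual information score that drives such a registration can be estimated from a joint intensity histogram of the fixed and moving images:

```python
import numpy as np

def mutual_information(fixed, moving, bins=32):
    """Estimate MI(F, M) = sum_ij p(i,j) * log(p(i,j) / (p(i) * p(j)))
    from a joint intensity histogram of the two images."""
    hist, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
    pxy = hist / hist.sum()      # joint probability p(i, j)
    px = pxy.sum(axis=1)         # marginal p(i) of the fixed image
    py = pxy.sum(axis=0)         # marginal p(j) of the moving image
    nz = pxy > 0                 # skip empty cells to avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))
```

    In the framework above this score would be re-evaluated at each scale while the FFD grid deforms; here it is shown only as a standalone measure.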

    Regmentation: A New View of Image Segmentation and Registration

    Image segmentation and registration have been, and remain, the two major areas of research in the medical imaging community for decades. In the context of radiation oncology, segmentation and registration methods are widely used for target structure definition, such as the prostate or head and neck lymph node areas. In the past two years, 45% of all articles published in the most important medical imaging journals and conferences have presented either segmentation or registration methods. In the literature, the two categories are treated rather separately even though they have much in common: registration techniques are used to solve segmentation tasks (e.g. atlas-based methods) and vice versa (e.g. segmentation of structures used in landmark-based registration). This article reviews the literature on image segmentation methods, introducing a novel taxonomy based on the amount of shape knowledge incorporated in the segmentation process. On that basis, we argue that all global-shape-prior segmentation methods are identical to image registration methods and thus cannot be characterized as either image segmentation or registration methods. We therefore propose a new class of methods able to solve both segmentation and registration tasks, which we call regmentation. Quantified on a survey of the current state-of-the-art medical imaging literature, it turns out that 25% of the methods are pure registration methods, 46% are pure segmentation methods, and 29% are regmentation methods. This new view of image segmentation and registration provides a consistent taxonomy and emphasizes the importance of regmentation in current medical image processing research and radiation-oncology image-guided applications.

    Artificial Intelligence in Radiation Therapy

    Artificial intelligence (AI) has great potential to transform the clinical workflow of radiotherapy. Since the introduction of deep neural networks, many AI-based methods have been proposed to address challenges in different aspects of radiotherapy. Commercial vendors have started to release AI-based tools that can be readily integrated into the established clinical workflow. To show the recent progress in AI-aided radiotherapy, we review AI-based studies in five major aspects of radiotherapy: image reconstruction, image registration, image segmentation, image synthesis, and automatic treatment planning. In each section, we summarize and categorize the recently published methods, followed by a discussion of the challenges, concerns, and future development. Given the rapid pace of development in AI-aided radiotherapy, the efficiency and effectiveness of radiotherapy could be substantially improved in the future through intelligent automation of its various aspects.

    Automatic 3D segmentation of the prostate on magnetic resonance images for radiotherapy planning

    Abstract. Accurate segmentation of the prostate, the seminal vesicles, the bladder, and the rectum is a crucial step in planning radiotherapy (RT) procedures. Modern radiotherapy protocols include the delineation of the pelvic organs in magnetic resonance images (MRI) as the guide for therapeutic beam irradiation of the target organ. However, this task is subject to high inter- and intra-expert variability and may take about 20 minutes per patient, even for trained experts, constituting an important burden in most radiological services. Automatic or semi-automatic segmentation strategies could therefore improve efficiency by reducing these times while preserving the required accuracy. This thesis presents a fully automatic prostate segmentation framework that selects the prostates most similar to a test prostate image and combines them to estimate the segmentation of the test prostate. A robust multi-scale analysis establishes the set of most similar prostates from a database, independently of the acquisition protocol. Those prostates are then non-rigidly registered to the test image and fused by a linear combination. The proposed approach was evaluated on a public MRI dataset of patients with benign hyperplasia or cancer, acquired under different protocols, namely 26 endorectal and 24 external. Evaluated under a leave-one-out scheme, the results show reliable segmentations, with an average Dice coefficient of 79% when compared with expert manual segmentations.
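    The final two steps of such a multi-atlas pipeline, linearly combining the registered atlas delineations and scoring the result with the Dice coefficient, can be sketched as follows (a minimal illustration, not the thesis code):

```python
import numpy as np

def fuse_labels(atlas_masks, weights=None):
    """Linearly combine registered atlas masks into a probability map and
    threshold at 0.5 (a majority vote when the weights are uniform)."""
    masks = np.stack([np.asarray(m, dtype=float) for m in atlas_masks])
    if weights is None:
        weights = np.full(len(atlas_masks), 1.0 / len(atlas_masks))
    prob = np.tensordot(np.asarray(weights, dtype=float), masks, axes=1)
    return prob >= 0.5

def dice(a, b):
    """Dice similarity coefficient between two binary masks (1.0 = perfect)."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

    In the actual framework the weights would reflect each atlas's similarity to the test image; uniform weights reduce the fusion to majority voting.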

    Three-Dimensional Medical Image Fusion with Deformable Cross-Attention

    Multimodal medical image fusion plays an instrumental role in several areas of medical image processing, particularly in disease recognition and tumor detection. Traditional fusion methods tend to process each modality independently before combining the features and reconstructing the fusion image. However, this approach often neglects the fundamental commonalities and disparities between multimodal information. Furthermore, the prevailing methodologies are largely confined to fusing two-dimensional (2D) medical image slices, leading to a lack of contextual supervision in the fusion images and, subsequently, a decreased information yield for physicians relative to three-dimensional (3D) images. In this study, we introduce an unsupervised feature mutual learning fusion network designed to rectify these limitations. Our approach incorporates a Deformable Cross Feature Blend (DCFB) module that helps the two modalities discern their respective similarities and differences. We applied our model to the fusion of 3D MRI and PET images from 660 patients in the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. Through the application of the DCFB module, our network generates high-quality MRI-PET fusion images. Experimental results demonstrate that our method surpasses traditional 2D image fusion methods on performance metrics such as peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM). Importantly, the capacity of our method to fuse 3D images enhances the information available to physicians and researchers, marking a significant step forward in the field. The code will soon be available online.
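    Of the two reported metrics, PSNR is simple enough to sketch here (the generic definition, not tied to this paper's evaluation code):

```python
import numpy as np

def psnr(reference, test, data_range=1.0):
    """Peak signal-to-noise ratio in decibels; higher means the test
    image is closer to the reference."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(test, float)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return float(10.0 * np.log10(data_range ** 2 / mse))
```

    Note that PSNR compares intensities pointwise, while SSIM additionally accounts for local structure, which is why fusion papers typically report both.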

    A Survey on Deep Learning in Medical Image Registration: New Technologies, Uncertainty, Evaluation Metrics, and Beyond

    Over the past decade, deep learning technologies have greatly advanced the field of medical image registration. The initial developments, such as ResNet-based and U-Net-based networks, laid the groundwork for deep learning-driven image registration. Subsequent progress has been made in various aspects of deep learning-based registration, including similarity measures, deformation regularizations, and uncertainty estimation. These advancements have not only enriched the field of deformable image registration but have also facilitated its application in a wide range of tasks, including atlas construction, multi-atlas segmentation, motion estimation, and 2D-3D registration. In this paper, we present a comprehensive overview of the most recent advancements in deep learning-based image registration. We begin with a concise introduction to the core concepts of deep learning-based image registration. Then, we delve into innovative network architectures, loss functions specific to registration, and methods for estimating registration uncertainty. Additionally, this paper explores appropriate evaluation metrics for assessing the performance of deep learning models in registration tasks. Finally, we highlight the practical applications of these novel techniques in medical imaging and discuss the future prospects of deep learning-based image registration.
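    Two of the ingredients such surveys cover, an intensity similarity measure and a deformation regularizer, can be illustrated with a minimal NumPy sketch (illustrative only; deep learning frameworks implement differentiable versions of both as loss terms):

```python
import numpy as np

def ncc(fixed, moving):
    """Global normalized cross-correlation: +1 for identical intensity
    patterns, -1 for inverted ones."""
    f = fixed - fixed.mean()
    m = moving - moving.mean()
    return float((f * m).sum() / (np.sqrt((f ** 2).sum() * (m ** 2).sum()) + 1e-8))

def smoothness_penalty(disp):
    """Diffusion regularizer: mean squared forward difference of a 2D
    displacement field with shape (2, H, W); zero for a constant field."""
    dy = np.diff(disp, axis=1)  # gradient along the image rows
    dx = np.diff(disp, axis=2)  # gradient along the image columns
    return float((dy ** 2).mean() + (dx ** 2).mean())
```

    A registration network is typically trained to maximize the similarity term while keeping the smoothness penalty small, trading alignment accuracy against deformation plausibility.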

    Computational methods to predict and enhance decision-making with biomedical data.

    The proposed research applies machine learning techniques to healthcare applications. The core idea is to use intelligent techniques to develop automatic methods for analyzing clinical data. Different classification and feature extraction techniques are applied to various clinical datasets, including brain MR images, breathing curves from vessels around tumor cells over time, breathing curves extracted from patients with successful or rejected lung transplants, and records of lung cancer patients diagnosed in the US from 2004 to 2009, extracted from the SEER database. The novel idea in brain MR image segmentation is a multi-scale technique to separate blood-vessel tissue from similar tissues in the brain. By analyzing the vascularization of cancer tissue over time and the behavior of the vessels (arteries and veins), a new feature extraction technique was developed, and classification techniques were used to rank the vascularization of each tumor type. Lung transplantation is a critical surgery for which predicting acceptance or rejection of the transplant would be very important. A review of classification techniques on the SEER database was carried out to analyze the survival rates of lung cancer patients, and the feature vector best suited to predicting the most similar patients is analyzed.