
    Scalable High-Performance Image Registration Framework by Unsupervised Deep Feature Representations Learning

    Feature selection is a critical step in deformable image registration. In particular, selecting the most discriminative features that accurately and concisely describe complex morphological patterns in image patches improves correspondence detection, which in turn improves image registration accuracy. Furthermore, since more and more imaging modalities are being invented to better identify morphological changes in medical imaging data, a deformable image registration method that scales well to new image modalities or new applications with little to no human intervention would have a significant impact on the medical image analysis community. To address these concerns, a learning-based image registration framework is proposed that uses deep learning to discover compact and highly discriminative features from the observed imaging data. Specifically, the proposed feature selection method uses a convolutional stacked auto-encoder to identify intrinsic deep feature representations in image patches. Since the features are learned in an unsupervised fashion, no ground-truth labels are required. This makes the proposed feature selection method readily extensible to new imaging modalities, since feature representations can be learned directly from the observed imaging data in a very short amount of time. Using the LONI and ADNI imaging datasets, image registration performance was compared against two existing state-of-the-art deformable image registration methods that use handcrafted features. To demonstrate the scalability of the proposed framework, additional registration experiments were conducted on 7.0-tesla brain MR images. In all experiments, the proposed framework consistently produced more accurate registration results than the state-of-the-art methods.
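    Below is a minimal PyTorch sketch of the kind of convolutional stacked auto-encoder the abstract describes, learning compact patch descriptors from reconstruction alone; the architecture, patch size, and training loop are illustrative assumptions, not the authors' exact network.

```python
# Unsupervised convolutional autoencoder for image-patch features (a sketch,
# assuming 32x32 single-channel patches; not the paper's exact architecture).
import torch
import torch.nn as nn

class PatchAutoencoder(nn.Module):
    def __init__(self, n_features=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1),   # 32x32 -> 16x16
            nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),  # 16x16 -> 8x8
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, n_features),          # compact patch descriptor
        )
        self.decoder = nn.Sequential(
            nn.Linear(n_features, 32 * 8 * 8),
            nn.Unflatten(1, (32, 8, 8)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1),  # 8x8 -> 16x16
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),   # 16x16 -> 32x32
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

# Unsupervised training: reconstruction loss only, no ground-truth labels.
model = PatchAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
patches = torch.rand(256, 1, 32, 32)  # placeholder for patches sampled from images
for _ in range(10):
    recon, _ = model(patches)
    loss = nn.functional.mse_loss(recon, patches)
    opt.zero_grad(); loss.backward(); opt.step()
# After training, model.encoder(patch) yields the learned feature vector
# used for correspondence detection during registration.
```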

    A Survey on Deep Learning in Medical Image Registration: New Technologies, Uncertainty, Evaluation Metrics, and Beyond

    Over the past decade, deep learning technologies have greatly advanced the field of medical image registration. The initial developments, such as ResNet-based and U-Net-based networks, laid the groundwork for deep learning-driven image registration. Subsequent progress has been made in various aspects of deep learning-based registration, including similarity measures, deformation regularization, and uncertainty estimation. These advancements have not only enriched the field of deformable image registration but have also facilitated its application in a wide range of tasks, including atlas construction, multi-atlas segmentation, motion estimation, and 2D-3D registration. In this paper, we present a comprehensive overview of the most recent advancements in deep learning-based image registration. We begin with a concise introduction to the core concepts, then delve into innovative network architectures, loss functions specific to registration, and methods for estimating registration uncertainty. We also discuss appropriate evaluation metrics for assessing the performance of deep learning models in registration tasks. Finally, we highlight the practical applications of these novel techniques in medical imaging and discuss the future prospects of deep learning-based image registration.
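    As a concrete illustration of two of the surveyed ingredients, a similarity measure and a deformation regularizer, the following PyTorch sketch shows the canonical unsupervised registration loss (warped-image similarity plus displacement smoothness); the MSE similarity and diffusion regularizer are common choices assumed here for illustration, not prescriptions from the survey.

```python
# Canonical unsupervised registration loss: similarity + smoothness (a sketch).
import torch
import torch.nn.functional as F

def warp(moving, disp):
    """Warp a 2D image (N,1,H,W) with a displacement field (N,2,H,W)."""
    n, _, h, w = moving.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
    grid = base + disp.permute(0, 2, 3, 1)  # displacements in normalized units
    return F.grid_sample(moving, grid, align_corners=True)

def registration_loss(fixed, moving, disp, lam=0.01):
    warped = warp(moving, disp)
    sim = F.mse_loss(warped, fixed)                  # image-similarity term
    dx = disp[:, :, :, 1:] - disp[:, :, :, :-1]      # finite differences of the
    dy = disp[:, :, 1:, :] - disp[:, :, :-1, :]      # displacement field
    smooth = (dx ** 2).mean() + (dy ** 2).mean()     # diffusion regularizer
    return sim + lam * smooth
```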

    ADVANCED MOTION MODELS FOR RIGID AND DEFORMABLE REGISTRATION IN IMAGE-GUIDED INTERVENTIONS

    Image-guided surgery (IGS) has been a major area of interest in recent decades and continues to transform surgical interventions, enabling safer, less invasive procedures. In the preoperative context, diagnostic imaging, including computed tomography (CT) and magnetic resonance (MR) imaging, offers a basis for surgical planning (e.g., definition of the target, adjacent anatomy, and the surgical path or trajectory to the target). At the intraoperative stage, these preoperative images and the associated planning information are registered to intraoperative coordinates via a navigation system, enabling visualization of (tracked) instrumentation relative to the preoperative images. A major limitation of this approach is that motion during surgery, whether rigid motion of bones manipulated during orthopaedic surgery or soft-tissue deformation of the brain in neurosurgery, is not captured, diminishing the accuracy of navigation. This dissertation uses intraoperative images (e.g., x-ray fluoroscopy and cone-beam CT) to provide up-to-date anatomical context that properly reflects the state of the patient during the intervention, improving the performance of IGS. Advanced motion models for inter-modality image registration are developed to improve the accuracy of both preoperative planning and intraoperative guidance for applications in orthopaedic pelvic trauma surgery and minimally invasive intracranial neurosurgery. Image registration algorithms are developed with increasing complexity of the motion that can be accommodated (single-body rigid, multi-body rigid, and deformable) and increasing complexity of the registration models (statistical models, physics-based models, and deep learning-based models). For orthopaedic pelvic trauma surgery, the dissertation encompasses: (i) a series of statistical models of the shape and pose variations of one or more pelvic bones, together with an atlas of trajectory annotations; (ii) frameworks for automatic segmentation via registration of the statistical models to preoperative CT and for planning of fixation trajectories and dislocation/fracture reduction; and (iii) 3D-2D guidance using intraoperative fluoroscopy. For intracranial neurosurgery, the dissertation includes three inter-modality deformable registration methods using physics-based Demons and deep learning models for CT-guided and CBCT-guided procedures.
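    As a sketch of the physics-based Demons family named above (not the dissertation's specific algorithm), the following Python function implements one classic Thirion-style Demons iteration on 2D images, with Gaussian smoothing of the displacement field acting as the regularizer.

```python
# One Thirion-style Demons iteration on 2D images (illustrative sketch).
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def demons_step(fixed, moving, disp, sigma=2.0):
    """fixed, moving: (H, W) float images; disp: (2, H, W) displacement field."""
    h, w = fixed.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    warped = map_coordinates(moving, [yy + disp[0], xx + disp[1]], order=1)
    gy, gx = np.gradient(fixed)
    diff = warped - fixed
    denom = gx**2 + gy**2 + diff**2 + 1e-9
    # Demons force: intensity difference driven by the fixed-image gradient
    disp[0] -= diff * gy / denom
    disp[1] -= diff * gx / denom
    # Gaussian smoothing of the field acts as an elastic/fluid-like regularizer
    disp[0] = gaussian_filter(disp[0], sigma)
    disp[1] = gaussian_filter(disp[1], sigma)
    return disp
```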

    A Colour Wheel to Rule them All: Analysing Colour & Geometry in Medical Microscopy

    Personalized medicine is a rapidly growing field in healthcare that aims to customize medical treatments and preventive measures based on each patient's unique characteristics, such as their genes, environment, and lifestyle. This approach acknowledges that people with the same medical condition may respond differently to therapies and seeks to optimize patient outcomes while minimizing the risk of adverse effects. To achieve these goals, personalized medicine relies on advanced technologies such as genomics, proteomics, metabolomics, and medical imaging. Digital histopathology, a crucial aspect of medical imaging, provides clinicians with valuable insights into tissue structure and function at the cellular and molecular levels. By analyzing small tissue samples obtained through minimally invasive techniques, such as biopsy or aspirate, doctors can gather extensive data to evaluate potential diagnoses and clinical decisions. However, digital analysis of histology images presents unique challenges, including the loss of 3D information and stain variability, further complicated by sample variability. Limited access to data exacerbates these challenges, making it difficult to develop accurate computational models for research and clinical use in digital histology. Deep learning (DL) algorithms have shown significant potential for improving the accuracy of Computer-Aided Diagnosis (CAD) and personalized treatment models, particularly in medical microscopy; however, factors such as limited generalizability, lack of interpretability, and bias sometimes hinder their clinical impact, and the inherent variability of histology images further complicates the development of robust DL methods. This thesis therefore focuses on developing new tools to address these issues. Our essential objective is to create transparent, accessible, and efficient methods, based on classical principles from various disciplines including histology, medical imaging, mathematics, and art, that successfully tackle microscopy image registration and colour analysis. These methods can contribute significantly to the advancement of personalized medicine, particularly in studying the tumour microenvironment for diagnosis and therapy research. First, we introduce a novel automatic method for colour analysis and non-rigid histology registration, enabling the study of morphological heterogeneity in tumour biopsies. This method achieves accurate registration of tissue cuts, drastically reducing landmark distance while achieving excellent border overlap. Second, we introduce ABANICCO, a novel colour analysis method that combines geometric analysis, colour theory, fuzzy colour spaces, and multi-label systems to automatically classify pixels into a set of conventional colour categories. ABANICCO outperforms benchmark methods in accuracy and simplicity. It is computationally straightforward, making it useful in scenarios involving changing objects, limited data, or unclear boundaries, or when users lack prior knowledge of the image or of colour theory; moreover, its results can be adjusted to each particular task. Third, we apply the acquired knowledge to create a novel pipeline of rigid histology registration and ABANICCO colour analysis for the in-depth study of triple-negative breast cancer biopsies. The resulting heterogeneity map and tumour score provide valuable insights into the composition and behaviour of the tumour, informing clinical decision-making and guiding treatment strategies.
Finally, we consolidate the developed ideas into an efficient pipeline for tissue reconstruction and multi-modality data integration, applied to tuberculosis infection data. This enables accurate analysis of element distributions, leading to a better understanding of the interactions between bacteria, host cells, and the immune system during the course of infection. The methods proposed in this thesis represent a transparent approach to computational pathology, addressing the needs of medical microscopy registration and colour analysis while bridging the gap between clinical practice and computational research. Moreover, our contributions can help develop and train better, more robust DL methods.
In an era in which personalized medicine is revolutionizing healthcare, it is increasingly important to tailor treatments and preventive measures to each patient's genetic makeup, environment, and lifestyle. By employing advanced technologies, such as genomics, proteomics, metabolomics, and medical imaging, personalized medicine strives to streamline treatment to improve outcomes and reduce side effects. Medical microscopy, a crucial aspect of personalized medicine, allows clinicians to collect and analyze large amounts of data from small tissue samples. This is particularly relevant in oncology, where cancer therapies can be optimized according to the specific tissue appearance of each tumour. Computational pathology, a subfield of computer vision, seeks to create algorithms for the digital analysis of biopsies. However, before a computer can analyze medical microscopy images, several steps are required to produce images of the samples. The first stage involves collecting and preparing a tissue sample from the patient. So that it can be easily observed under the microscope, the sample is cut into ultra-thin sections. This delicate procedure is not without difficulties: the fragile tissue can become distorted, torn, or perforated, compromising the overall integrity of the sample. Once the tissue is properly prepared, it is usually treated with characteristic colour stains. These stains accentuate different cell and tissue types with specific colours, making it easier for medical professionals to identify particular features. This gain in visualization, however, comes at a cost: the stains can hinder computational analysis of the images by mixing improperly, bleeding into the background, or altering the contrast between different elements. The final step of the process is digitizing the sample. High-resolution images of the tissue are acquired at different magnifications, enabling analysis by computer. This stage also has its obstacles: factors such as incorrect camera calibration or inadequate lighting conditions can distort or blur the images, and the resulting whole-slide images are considerably large, further complicating analysis. Overall, while the preparation, staining, and digitization of medical microscopy samples are fundamental to digital analysis, each of these steps can introduce additional challenges that must be addressed to guarantee accurate analysis.
Moreover, converting a whole tissue volume into a few stained sections drastically reduces the available 3D information and introduces considerable uncertainty. Deep learning (DL) solutions hold great promise for personalized medicine, but their clinical impact is sometimes hindered by factors such as limited generalizability, overfitting, opacity, and lack of interpretability, in addition to ethical concerns and, in some cases, private incentives. The variability of histology images further complicates the development of robust DL methods. To overcome these challenges, this thesis presents a series of highly robust and interpretable methods, grounded in classical principles from histology, medical imaging, mathematics, and art, for aligning microscopy sections and analyzing their colours. Our first contribution is ABANICCO, an innovative colour analysis method that provides objective, unsupervised colour segmentation and allows subsequent refinement through user-friendly tools. ABANICCO has been shown to surpass existing colour classification and segmentation methods in accuracy and efficiency, and it even excels at detecting and segmenting whole objects. It can be applied to microscopy images to detect stained areas for biopsy quantification, a crucial aspect of cancer research. The second contribution is an automatic, unsupervised tissue segmentation method that identifies and removes background and artifacts from microscopy images, thereby improving the performance of more sophisticated image analysis techniques. This method is robust across diverse images, stains, and acquisition protocols, and requires no training. The third contribution is the development of novel methods to register histopathology images efficiently, striking the right balance between accurate registration and preservation of local morphology depending on the intended application. As a fourth contribution, the three methods above are combined into efficient pipelines for full volumetric data integration, creating highly interpretable visualizations of all the information present in consecutive tissue biopsy sections. This data integration can have a major impact on the diagnosis and treatment of various diseases, particularly breast cancer, by enabling early detection, accurate clinical testing, effective treatment selection, and improved communication and engagement with patients. Finally, we apply our findings to multimodal data integration and tissue reconstruction for the accurate analysis of the distribution of chemical elements in tuberculosis, shedding light on the complex interactions between bacteria, host cells, and the immune system during tuberculous infection. This method also addresses problems such as acquisition damage, typical of many imaging modalities. In summary, this thesis demonstrates the application of classical computer vision methods to medical microscopy registration and colour analysis, addressing the unique challenges of this field with an emphasis on effective, accessible visualization of complex data.
We aspire to continue refining our work through extensive technical validation and improved data analysis. The methods presented in this thesis are characterized by their clarity, accessibility, effective data visualization, objectivity, and transparency. These qualities make them well suited to building robust bridges between artificial intelligence researchers and clinicians, and thus to advancing computational pathology in medical practice and research. Programa de Doctorado en Ciencia y Tecnología Biomédica, Universidad Carlos III de Madrid. Committee: Chair: María Jesús Ledesma Carbayo; Secretary: Gonzalo Ricardo Ríos Muñoz; Member: Estíbaliz Gómez de Marisca
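A toy Python illustration of the colour-wheel geometry that ABANICCO builds on (this is not ABANICCO itself; the sector boundaries and thresholds are invented for the example): pixels are mapped to conventional colour names by their angular position on the hue wheel, with achromatic pixels handled separately.

```python
# Geometric colour classification on the hue wheel (toy example, not ABANICCO).
import numpy as np
from matplotlib.colors import rgb_to_hsv

# Hypothetical hue-wheel sectors, in degrees.
SECTORS = [(0, 30, "red"), (30, 90, "yellow"), (90, 150, "green"),
           (150, 210, "cyan"), (210, 270, "blue"), (270, 330, "magenta"),
           (330, 360, "red")]

def classify_colours(rgb_image):
    """rgb_image: float array (H, W, 3) in [0, 1] -> (H, W) array of colour names."""
    hsv = rgb_to_hsv(rgb_image)
    hue = hsv[..., 0] * 360.0
    labels = np.full(hue.shape, "red", dtype=object)  # default; overwritten below
    for lo, hi, name in SECTORS:
        labels[(hue >= lo) & (hue < hi)] = name
    # Low-saturation / low-value pixels are achromatic, i.e., not on the wheel.
    labels[hsv[..., 1] < 0.15] = "grey"
    labels[hsv[..., 2] < 0.15] = "black"
    return labels
```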

    Deformable Image Registration Using Convolutional Neural Networks for Connectomics

    Department of Computer Science and Engineering. In this thesis, a novel method called ssEMnet for aligning two images with a recent deep learning scheme is presented. The reconstruction of serial-section electron microscopy (ssEM) images gives neuroscientists critical insight into real brains. However, aligning each ssEM plane is not straightforward because of its densely twisted circuit structures. In addition, dynamic deformations are applied to the images in the process of acquiring an ssEM dataset from specimens. Worse still, unmatched artifacts such as dust and folds occur in the EM images. Recent deep learning research, especially on convolutional neural networks (CNNs), has shown that they can handle a variety of problems in computer vision. However, there has been no clear success in applying CNNs to the ssEM image registration problem. ssEMnet consists of two parts. The first is a spatial transformer module, which supports differentiable transformation of images within a deep neural network. It is followed by a convolutional autoencoder (CAE) that encodes dense features. The CAE is trained in an unsupervised fashion, and its features provide wide-receptive-field information for aligning the source and target images. The method is compared with two other major ssEM image registration methods and improves accuracy and robustness while requiring fewer user parameters.
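    The following PyTorch sketch captures the two-part ssEMnet idea as described, a differentiable spatial transformer plus a convolutional autoencoder whose features define the alignment loss; the layer sizes, the affine-only transform, and the training details are simplifying assumptions.

```python
# Spatial transformer + CAE feature loss, in the spirit of ssEMnet (a sketch).
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(  # CAE encoder; pretraining by reconstruction omitted,
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),  # weights here are random
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
)

def feature_loss(fixed, moving, theta):
    """theta: (N, 2, 3) affine parameters of the spatial transformer."""
    grid = F.affine_grid(theta, fixed.size(), align_corners=False)
    warped = F.grid_sample(moving, grid, align_corners=False)
    # Wide-receptive-field CAE features are more robust to EM artifacts
    # (dust, folds) than raw intensities.
    return F.mse_loss(encoder(warped), encoder(fixed))

# Registration = optimizing theta (initialized to identity) by gradient descent.
theta = torch.tensor([[[1., 0., 0.], [0., 1., 0.]]], requires_grad=True)
fixed, moving = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
opt = torch.optim.Adam([theta], lr=1e-2)
for _ in range(100):
    loss = feature_loss(fixed, moving, theta)
    opt.zero_grad(); loss.backward(); opt.step()
```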

    Joint registration and synthesis using a probabilistic model for alignment of MRI and histological sections

    Nonlinear registration of 2D histological sections with corresponding slices of MRI data is a critical step in 3D histology reconstruction. This task is difficult due to the large differences in image contrast and resolution, as well as the complex nonrigid distortions produced when sectioning the sample and mounting it on the glass slide. It has been shown in brain MRI registration that better spatial alignment across modalities can be obtained by synthesizing one modality from the other and then using intra-modality registration metrics, rather than by using mutual information (MI) as the metric. However, such an approach typically requires a database of aligned images from the two modalities, which is very difficult to obtain for histology/MRI. Here, we overcome this limitation with a probabilistic method that simultaneously solves for registration and synthesis directly on the target images, without any training data. In our model, the MRI slice is assumed to be a contrast-warped, spatially deformed version of the histological section. We use approximate Bayesian inference to iteratively refine the probabilistic estimates of the synthesis and the registration, while accounting for each other's uncertainty. Moreover, manually placed landmarks can be seamlessly integrated into the framework for increased performance. Experiments on a synthetic dataset show that, compared with MI, the proposed method makes it possible to use a much more flexible deformation model in the registration to improve its accuracy, without compromising robustness. Our framework also exploits the information in manually placed landmarks more efficiently than MI, since landmarks inform both synthesis and registration, as opposed to registration alone. Finally, we show qualitative results on the public Allen atlas, in which the proposed method provides a clear improvement over MI-based registration.
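    A highly simplified sketch of the alternating structure described above, reduced here to coordinate ascent with a polynomial contrast map and translation-only registration (the paper's actual model is probabilistic, with Bayesian inference over both unknowns and a far richer deformation model):

```python
# Alternating synthesis and registration (toy coordinate-ascent sketch).
import numpy as np
from scipy.ndimage import shift as translate  # toy deformation: translation only

def register_translation(synth, mri, search=5):
    """Intra-modality SSD registration over a small grid of integer shifts."""
    best, best_t = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ssd = np.sum((translate(synth, (dy, dx), order=1) - mri) ** 2)
            if ssd < best:
                best, best_t = ssd, (dy, dx)
    return best_t

def joint_register(histo, mri, n_iter=5):
    """Each step re-estimates one unknown using the other's latest estimate."""
    t = (0, 0)
    for _ in range(n_iter):
        warped = translate(histo, t, order=1)
        coeffs = np.polyfit(warped.ravel(), mri.ravel(), 3)  # synthesis step:
        synth = np.polyval(coeffs, histo)                    # histology -> MRI-like
        t = register_translation(synth, mri)                 # registration step
    return t
```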

    Reconstructing Geometry from Its Latent Structures

    Our world is full of objects with complex shapes and structures. Through extensive experience, humans quickly develop an intuition about how objects are shaped and what their material properties are, simply by analyzing their appearance. We engage this intuitive understanding of geometry in nearly everything we do. It is not surprising, then, that a careful treatment of geometry stands to give machines a powerful advantage in the many tasks of visual perception. To that end, this thesis focuses on geometry recovery in a wide range of real-world problems. First, we describe a new approach to image registration. We observe that the structure of the imaged subject becomes embedded in the image intensities; by minimizing the change in shape of these intensity structures, we ensure a physically realizable deformation. Second, we describe a method for reassembling fragmented, thin-shelled objects from range images of their fragments, using only the geometric and photometric structure embedded in the boundary of each fragment. Third, we describe a method for recovering and representing the shape of a geometric texture (such as bark or sandpaper) by studying the characteristic properties of texture: self-similarity and scale variability. Finally, we describe two methods for recovering the 3D geometry and reflectance properties of an object from images taken under natural illumination, noting that the structure of the surrounding environment, modulated by the reflectance, becomes embedded in the appearance of the object, giving strong clues about the object's shape. Though these domains are quite diverse, an essential premise, that observations of objects contain within them salient clues about the object's structure, enables new and powerful approaches. For each problem we begin by investigating what these clues are. We then derive models and methods to canonically represent these clues and enable their full exploitation. The wide-ranging success of each method shows the importance of our carefully formulated observations about geometry, and the fundamental role geometry plays in visual perception.
    Ph.D., Computer Science, Drexel University, 201
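    One standard ingredient of the "physically realizable deformation" sought in the registration work is that the deformation must not fold space; the NumPy snippet below (an illustration of that ingredient, not the thesis's method) computes the Jacobian determinant of a 2D displacement field, which must stay positive everywhere for a fold-free mapping.

```python
# Fold detection via the Jacobian determinant of a displacement field (sketch).
import numpy as np

def jacobian_determinant(disp):
    """disp: (2, H, W) displacement field (u_y, u_x); returns (H, W) det(J) map."""
    duy_dy, duy_dx = np.gradient(disp[0])
    dux_dy, dux_dx = np.gradient(disp[1])
    # J = I + grad(u); det(J) > 0 everywhere means the deformation has no folds.
    return (1 + dux_dx) * (1 + duy_dy) - dux_dy * duy_dx

disp = np.zeros((2, 64, 64))  # identity transform: det(J) == 1 everywhere
assert np.allclose(jacobian_determinant(disp), 1.0)
```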

    Development of Quantitative Bone SPECT Analysis Methods for Metastatic Bone Disease

    Prostate cancer is one of the most prevalent types of cancer in males in the United States, and bone is a common site of metastasis in metastatic prostate cancer. However, bone metastases are often considered "unmeasurable" under standard anatomic imaging and the RECIST 1.1 criteria. As a result, response to therapy is often suboptimally evaluated by visual interpretation of planar bone scintigraphy, with response criteria based on the presence or absence of new lesions. With the commercial availability of quantitative single-photon emission computed tomography (SPECT) methods, it is now feasible to establish quantitative metrics of the therapy response of skeletal metastases. Quantitative bone SPECT (QBSPECT) may estimate bone lesion uptake, volume, and the number of lesions more accurately than planar imaging. However, the accuracy of activity quantification in QBSPECT relies heavily on the precision with which bone metastases and bone structures are delineated. In this research, we aim to develop automated image segmentation methods for fast and accurate delineation of bone and bone metastases in QBSPECT. We first developed registration methods to generate a dataset of realistic, anatomically varying computerized phantoms for use in QBSPECT simulations. Using these simulations, we then developed supervised, computer-automated segmentation methods that minimize intra- and inter-observer variation in delineating bone metastases. This project provides accurate segmentation techniques for QBSPECT and paves the way for QBSPECT-based assessment of the therapy response of bone metastases.
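    As a small illustration of the supervised-segmentation component, the sketch below implements a soft Dice loss, a standard objective when training networks to delineate lesions and bone in volumetric data; its use here is an assumption about typical practice, not a detail confirmed by the abstract.

```python
# Soft Dice loss for volumetric segmentation training (illustrative sketch).
import torch

def soft_dice_loss(pred, target, eps=1e-6):
    """pred: sigmoid probabilities (N,1,D,H,W); target: binary masks, same shape."""
    inter = (pred * target).sum(dim=(1, 2, 3, 4))
    denom = pred.sum(dim=(1, 2, 3, 4)) + target.sum(dim=(1, 2, 3, 4))
    # Dice overlap in [0, 1]; the loss is its complement, averaged over the batch.
    return 1 - ((2 * inter + eps) / (denom + eps)).mean()

# usage: loss = soft_dice_loss(torch.sigmoid(logits), masks)
```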