Computerized Analysis of Magnetic Resonance Images to Study Cerebral Anatomy in Developing Neonates
The study of cerebral anatomy in developing neonates is of great importance for
the understanding of brain development during the early period of life. This
dissertation therefore focuses on three challenges in the modelling of cerebral
anatomy in neonates during brain development. The methods that have been
developed all use Magnetic Resonance Images (MRI) as source data.
To facilitate study of vascular development in the neonatal period, a set of image
analysis algorithms is developed to automatically extract and model cerebral
vessel trees. The whole process consists of cerebral vessel tracking from
automatically placed seed points, vessel tree generation, and vasculature
registration and matching. These algorithms have been tested on clinical Time-of-
Flight (TOF) MR angiographic datasets.
To facilitate study of the neonatal cortex, a complete cerebral cortex segmentation
and reconstruction pipeline has been developed. Segmentation of the neonatal
cortex is not effectively done by existing algorithms designed for the adult brain
because the contrast between grey and white matter is reversed. This causes pixels
containing tissue mixtures to be incorrectly labelled by conventional methods. The
neonatal cortical segmentation method that has been developed is based on a novel
expectation-maximization (EM) method with explicit correction for mislabelled
partial volume voxels. Based on the resulting cortical segmentation, an implicit
surface evolution technique is adopted for the reconstruction of the cortex in
neonates. The performance of the method is assessed through a detailed
landmark study.
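As a toy illustration of the intensity-based classification underlying such a segmentation, a minimal two-class Gaussian-mixture EM on synthetic intensities might look as follows. This is a sketch only: the thesis adds explicit correction for mislabelled partial-volume voxels, which is omitted here, and all numbers are synthetic.

```python
import numpy as np

def em_two_class(intensities, n_iter=50):
    """Minimal two-class Gaussian-mixture EM for tissue intensities."""
    x = np.asarray(intensities, dtype=float)
    # Initialise the two class means from data percentiles.
    mu = np.percentile(x, [25, 75]).astype(float)
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each class for each voxel.
        lik = np.stack([
            pi[k] * np.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2) / sigma[k]
            for k in range(2)
        ])
        resp = lik / lik.sum(axis=0, keepdims=True)
        # M-step: update the mixture parameters.
        nk = resp.sum(axis=1)
        mu = (resp * x).sum(axis=1) / nk
        sigma = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk) + 1e-6
        pi = nk / x.size
    return resp.argmax(axis=0), mu

# Synthetic intensities: two well-separated tissue classes.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(60, 5, 500), rng.normal(120, 5, 500)])
labels, means = em_two_class(x)
```

Each voxel ends up assigned to the Gaussian component that best explains its intensity; the partial-volume correction in the thesis then revisits voxels lying between the two class means.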
To facilitate study of cortical development, a registration algorithm for aligning
cortical surfaces is developed. The method first inflates extracted
cortical surfaces and then performs a non-rigid surface registration using free-form
deformations (FFDs) to remove residual misalignment. Validation experiments using
data labelled by an expert observer demonstrate that the method can capture local
changes and follow the growth of specific sulci.
Deep Learning Techniques for Automated Analysis and Processing of High Resolution Medical Imaging
Official Doctoral Programme in Computing (Programa Oficial de Doutoramento en Computación, 5009V01)
[Abstract]
Medical imaging plays a prominent role in modern clinical practice for numerous
medical specialties. For instance, in ophthalmology, different imaging techniques are
commonly used to visualize and study the eye fundus. In this context, automated
image analysis methods are key towards facilitating the early diagnosis and adequate
treatment of several diseases. Nowadays, deep learning algorithms have already
demonstrated a remarkable performance for different image analysis tasks. However,
these approaches typically require large amounts of annotated data for the training
of deep neural networks. This complicates the adoption of deep learning approaches,
especially in areas where large scale annotated datasets are harder to obtain, such
as in medical imaging.
This thesis aims to explore novel approaches for the automated analysis of medical
images, particularly in ophthalmology. In this regard, the main focus is on
the development of novel deep learning-based approaches that do not require large
amounts of annotated training data and can be applied to high resolution images.
For that purpose, we have presented a novel paradigm that takes advantage
of unlabeled complementary image modalities for the training of deep neural
networks. Additionally, we have also developed novel approaches for the detailed
analysis of eye fundus images. In that regard, this thesis explores the analysis of
relevant retinal structures as well as the diagnosis of different retinal diseases. In
general, the developed algorithms provide satisfactory results for the analysis of the
eye fundus, even when limited annotated training data is available.
Medical image segmentation and analysis using statistical shape modelling and inter-landmark relationships
The study of anatomical morphology is of great importance to medical imaging, with applications varying from clinical diagnosis to computer-aided surgery. To this end, automated tools are required for accurate extraction of the anatomical boundaries from the image data and detailed interpretation of morphological information. This thesis introduces a novel approach to shape-based analysis of medical images based on Inter-Landmark Descriptors (ILDs). Unlike point coordinates, which describe absolute position, these shape variables represent the relative configuration of landmarks in the shape. The proposed work is motivated by the inherent difficulties of methods based on landmark coordinates in challenging applications. Through explicit invariance to pose parameters and decomposition of the global shape constraints, this work permits anatomical shape analysis that is resistant to image inhomogeneities and geometrical inconsistencies. Several algorithms are presented to tackle specific image segmentation and analysis problems, including automatic initialisation, optimal feature point search, outlier handling and dynamic abnormality localisation. Detailed validation results are provided based on various cardiovascular magnetic resonance datasets, showing increased robustness and accuracy.
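The core idea of inter-landmark descriptors, relative configuration rather than absolute coordinates, can be sketched with plain pairwise distances. This is a simplification of the ILD variables used in the thesis, but it shows the pose invariance: translating and rotating a shape leaves the descriptors unchanged.

```python
import math

def inter_landmark_descriptors(landmarks):
    """Pairwise Euclidean distances between 2-D landmarks.

    Invariant to translation and rotation of the whole shape, unlike
    the raw point coordinates themselves.
    """
    d = {}
    for i in range(len(landmarks)):
        for j in range(i + 1, len(landmarks)):
            (xi, yi), (xj, yj) = landmarks[i], landmarks[j]
            d[(i, j)] = math.hypot(xi - xj, yi - yj)
    return d

# A 3-4-5 triangle, and the same triangle rotated 90 degrees and translated.
shape = [(0.0, 0.0), (3.0, 0.0), (0.0, 4.0)]
moved = [(10.0, 5.0), (10.0, 8.0), (6.0, 5.0)]
d_orig = inter_landmark_descriptors(shape)
d_moved = inter_landmark_descriptors(moved)
```

Both shapes yield identical descriptors (distances 3, 4 and 5), which is exactly the property that makes relative shape variables robust to pose.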
An End-to-end Deep Learning Approach for Landmark Detection and Matching in Medical Images
Anatomical landmark correspondences in medical images can provide additional
guidance information for the alignment of two images, which, in turn, is
crucial for many medical applications. However, manual landmark annotation is
labor-intensive. Therefore, we propose an end-to-end deep learning approach to
automatically detect landmark correspondences in pairs of two-dimensional (2D)
images. Our approach consists of a Siamese neural network, which is trained to
identify salient locations in images as landmarks and predict matching
probabilities for landmark pairs from two different images. We trained our
approach on 2D transverse slices from 168 lower abdominal Computed Tomography
(CT) scans. We tested the approach on 22,206 pairs of 2D slices with varying
levels of intensity, affine, and elastic transformations. The proposed approach
finds an average of 639, 466, and 370 landmark matches per image pair for
intensity, affine, and elastic transformations, respectively, with spatial
matching errors of at most 1 mm. Further, more than 99% of the landmark pairs
are within a spatial matching error of 2 mm, 4 mm, and 8 mm for image pairs
with intensity, affine, and elastic transformations, respectively. To
investigate the utility of our developed approach in a clinical setting, we
also tested our approach on pairs of transverse slices selected from follow-up
CT scans of three patients. Visual inspection of the results revealed landmark
matches both in bony anatomical regions and in soft tissues lacking prominent intensity gradients.
Comment: SPIE Medical Imaging Conference - 202
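The matching stage can be illustrated with mutual-nearest-neighbour matching of descriptor vectors. The plain vectors and cosine threshold below are hypothetical stand-ins for the learned Siamese features and predicted match probabilities in the paper.

```python
import numpy as np

def match_landmarks(desc_a, desc_b, threshold=0.8):
    """Mutual-nearest-neighbour matching of landmark descriptors."""
    # Cosine similarity between every descriptor pair.
    a = desc_a / np.linalg.norm(desc_a, axis=1, keepdims=True)
    b = desc_b / np.linalg.norm(desc_b, axis=1, keepdims=True)
    sim = a @ b.T
    matches = []
    for i in range(sim.shape[0]):
        j = int(sim[i].argmax())
        # Keep only mutual best matches above the similarity threshold.
        if int(sim[:, j].argmax()) == i and sim[i, j] >= threshold:
            matches.append((i, j))
    return matches

# Hypothetical descriptors for three landmarks in each image.
desc_a = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
desc_b = np.array([[0.0, 1.0], [1.0, 1.0], [1.0, 0.0]])
pairs = match_landmarks(desc_a, desc_b)
```

The mutual-best-match condition is what filters out ambiguous correspondences before any geometric verification.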
Self-Supervised Multimodal Reconstruction of Retinal Images Over Paired Datasets
[Abstract]
Data scarcity represents an important constraint for the training of deep neural networks in medical imaging. Medical image labeling, especially if pixel-level annotations are required, is an expensive task that needs expert intervention and usually results in a reduced number of annotated samples. In contrast, extensive amounts of unlabeled data are produced in the daily clinical practice, including paired multimodal images from patients that were subjected to multiple imaging tests. This work proposes a novel self-supervised multimodal reconstruction task that takes advantage of this unlabeled multimodal data for learning about the domain without human supervision. Paired multimodal data is a rich source of clinical information that can be naturally exploited by trying to estimate one image modality from others. This multimodal reconstruction requires the recognition of domain-specific patterns that can be used to complement the training of image analysis tasks in the same domain for which annotated data is scarce.
In this work, a set of experiments is performed using a multimodal setting of retinography and fluorescein angiography pairs that offer complementary information about the eye fundus. The evaluations performed on different public datasets, which include pathological and healthy data samples, demonstrate that a network trained for self-supervised multimodal reconstruction of angiography from retinography achieves unsupervised recognition of important retinal structures. These results indicate that the proposed self-supervised task provides relevant cues for image analysis tasks in the same domain.
This work is supported by Instituto de Salud Carlos III, Government of Spain, and the European Regional Development Fund (ERDF) of the European Union (EU) through the DTS18/00136 research project, and by Ministerio de Economía, Industria y Competitividad, Government of Spain, through the DPI2015-69948-R research project. The authors also receive financial support from the ERDF and Xunta de Galicia through Grupo de Referencia Competitiva, Ref. ED431C 2016-047, and from the European Social Fund (ESF) of the EU and Xunta de Galicia through the predoctoral grant contract Ref. ED481A-2017/328. CITIC, Centro de Investigación de Galicia, Ref. ED431G 2019/01, receives financial support from Consellería de Educación, Universidade e Formación Profesional, Xunta de Galicia, through the ERDF (80%) and Secretaría Xeral de Universidades (20%).
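The multimodal-reconstruction idea, estimating one modality from its paired counterpart with no manual labels, reduces in miniature to fitting a predictor on paired data alone. Here a linear least-squares model stands in for the deep network, and all data are synthetic; this is purely illustrative of the self-supervision signal, not of the paper's architecture.

```python
import numpy as np

# Paired, unlabeled data: "retinography" features and the corresponding
# "angiography" values they should reconstruct (all synthetic).
rng = np.random.default_rng(1)
retino = rng.uniform(0, 1, (200, 8))
true_map = rng.uniform(-1, 1, (8, 1))
angio = retino @ true_map

# Fit the reconstruction model on the image pairs alone -- no human
# annotation enters the training signal at any point.
w, *_ = np.linalg.lstsq(retino, angio, rcond=None)
pred = retino @ w
```

The supervision comes entirely from the paired acquisition: the target modality itself plays the role of the label, which is why such pretext tasks scale with routine clinical data.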
Development of an Atlas-Based Segmentation of Cranial Nerves Using Shape-Aware Discrete Deformable Models for Neurosurgical Planning and Simulation
Twelve pairs of cranial nerves arise from the brain or brainstem and control sensory functions such as vision, hearing, smell and taste, as well as several motor functions of the head and neck, including facial expressions and eye movement. These cranial nerves are often difficult to detect in MRI data due to their thin anatomical structure, low imaging resolution and image artifacts, which poses problems for neurosurgical planning and simulation. As a result, they may be at risk in neurosurgical procedures around the skull base, which can have dire consequences such as the loss of eyesight or hearing and facial paralysis. Consequently, it is of great importance to clearly delineate cranial nerves in medical images, for avoidance in the planning of neurosurgical procedures and for targeting in the treatment of cranial nerve disorders. In this research, we propose to develop a digital atlas methodology for segmenting the cranial nerves from patient image data. The atlas will be created from high-resolution MRI data based on a discrete deformable contour model called the 1-Simplex mesh. Each cranial nerve will be modeled using its centerline and radius information, where the centerline is estimated in a semi-automatic approach by finding a shortest path between two user-defined end points. The cranial nerve atlas is then made more robust by integrating a Statistical Shape Model so that the atlas can identify and segment nerves from images characterized by artifacts or low resolution. To the best of our knowledge, no such digital atlas methodology exists for segmenting cranial nerves from MRI data. Therefore, our proposed system offers important benefits to the neurosurgical community.
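The semi-automatic centerline step, a shortest path between two user-defined end points, can be sketched with Dijkstra's algorithm on a toy 2-D cost grid. The grid values below are hypothetical stand-ins for image-derived costs (the abstract does not specify the cost function), and the real problem is of course 3-D.

```python
import heapq

def shortest_path(cost, start, end):
    """Dijkstra shortest path on a 2-D cost grid (4-connected)."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    heap = [(dist[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == end:
            break
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    # Walk the predecessor chain back from the end point.
    path, node = [], end
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

# Low-cost "nerve" corridor along the top row and last column of a toy grid.
grid = [[1, 1, 1, 1],
        [9, 9, 9, 1],
        [9, 9, 9, 1]]
path = shortest_path(grid, (0, 0), (2, 3))
```

The recovered path hugs the low-cost corridor, which is the behaviour one wants when the cost encodes how nerve-like each voxel looks between the two user-chosen end points.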
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Comment: Revised survey includes an expanded discussion section and a reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 201
Organ-focused mutual information for nonrigid multimodal registration of liver CT and Gd–EOB–DTPA-enhanced MRI
Accurate detection of liver lesions is of great importance in hepatic surgery planning. Recent studies have shown that the detection rate of liver lesions is significantly higher in gadoxetic acid-enhanced magnetic resonance imaging (Gd–EOB–DTPA-enhanced MRI) than in contrast-enhanced portal-phase computed tomography (CT); however, the latter remains essential because of its high specificity, good performance in estimating liver volumes and better vessel visibility. To characterize liver lesions using both of the above image modalities, we propose a multimodal nonrigid registration framework using organ-focused mutual information (OF-MI). This proposal aims to improve mutual information (MI) based registration by adding spatial information, benefiting from the availability of expert liver segmentations in clinical protocols. The incorporation of an additional information channel containing liver segmentation information was studied. A dataset of real clinical images and simulated images was used in the validation process. A Gd–EOB–DTPA-enhanced MRI simulation framework is presented. To evaluate results, warping index errors were calculated for the simulated data, and landmark-based and surface-based errors were calculated for the real data. An improvement in registration accuracy for OF-MI as compared with MI was found for both simulated and real datasets. The statistical significance of the difference was tested and confirmed on the simulated dataset (p < 0.01).
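Histogram-based mutual information restricted to an organ mask conveys the intuition behind OF-MI. This is a sketch only: the paper incorporates the liver segmentation as an additional information channel in the joint histogram rather than as a simple mask, and all images below are synthetic 1-D stand-ins.

```python
import numpy as np

def mutual_information(a, b, bins=32, mask=None):
    """Histogram-based mutual information between two images (in nats)."""
    # Restrict both images to the organ of interest, if a mask is given.
    if mask is not None:
        a, b = a[mask], b[mask]
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)  # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)  # marginal of image b
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(2)
ct = rng.uniform(0, 1, 10000)            # hypothetical CT intensities
mr = ct + rng.normal(0, 0.05, 10000)     # paired, related "MRI"
unrelated = rng.uniform(0, 1, 10000)     # statistically independent image
liver = ct > 0.3                         # hypothetical organ mask
mi_related = mutual_information(ct, mr, mask=liver)
mi_random = mutual_information(ct, unrelated, mask=liver)
```

Registration drives the transformation toward the alignment that maximizes this quantity; focusing the measure on the organ keeps background mismatch from dominating the score.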
Color Fundus Image Registration Using a Learning-Based Domain-Specific Landmark Detection Methodology
Funded for open access publication: Universidade da Coruña/CISUG
[Abstract] Medical imaging, and particularly retinal imaging, makes it possible to accurately diagnose many eye pathologies as well as some systemic diseases such as hypertension or diabetes. Registering these images is crucial to correctly compare key structures, not only within patients, but also to contrast data with a model or among a population. Currently, this field is dominated by complex classical methods, because the novel deep learning methods cannot yet compete with them in terms of results, and commonly used methods are difficult to adapt to the retinal domain. In this work, we propose a novel method to register color fundus images, based on previous works which employed classical approaches to detect domain-specific landmarks. Instead, we propose to use deep learning methods for the detection of these highly specific domain-related landmarks. Our method uses a neural network to detect the bifurcations and crossovers of the retinal blood vessels, whose arrangement and location are unique to each eye and person. This proposal is the first deep learning feature-based registration method in fundus imaging. These keypoints are matched using a method based on RANSAC (Random Sample Consensus) without the requirement to calculate complex descriptors. Our method was tested using the public FIRE dataset, although the landmark detection network was trained using the DRIVE dataset. Our method provides accurate results, with a registration score of 0.657 for the whole FIRE dataset (0.908 for category S, 0.293 for category P and 0.660 for category A).
Therefore, our proposal can compete with complex classical methods and beats the deep learning methods in the state of the art.
This research was funded by Instituto de Salud Carlos III, Government of Spain, DTS18/00136 research project; Ministerio de Ciencia e Innovación y Universidades, Government of Spain, RTI2018-095894-B-I00 research project; Consellería de Cultura, Educación e Universidade, Xunta de Galicia, through the predoctoral grant contract ref. ED481A 2021/147 and Grupos de Referencia Competitiva, grant ref. ED431C 2020/24; CITIC, Centro de Investigación de Galicia, ref. ED431G 2019/01, receives financial support from Consellería de Educación, Universidade e Formación Profesional, Xunta de Galicia, through the ERDF (80%) and Secretaría Xeral de Universidades (20%). The funding institutions had no involvement in the study design; in the collection, analysis and interpretation of data; in the writing of the manuscript; or in the decision to submit the manuscript for publication. Funding for open access charge: Universidade da Coruña/CISUG.
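The RANSAC consensus step can be sketched for the simplest possible transformation, a pure 2-D translation between matched bifurcation points. The actual method estimates a richer transformation between fundus images; the data, tolerance and function below are hypothetical.

```python
import random

def ransac_translation(matches, n_iter=200, tol=2.0):
    """Estimate a 2-D translation from keypoint matches with RANSAC."""
    best_inliers = []
    for _ in range(n_iter):
        # Hypothesize a translation from a single randomly chosen match.
        (ax, ay), (bx, by) = random.choice(matches)
        dx, dy = bx - ax, by - ay
        # Collect all matches consistent with this hypothesis.
        inliers = [((px, py), (qx, qy)) for (px, py), (qx, qy) in matches
                   if abs(qx - px - dx) <= tol and abs(qy - py - dy) <= tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # Refine the translation as the mean over the consensus set.
    dx = sum(qx - px for (px, _), (qx, _) in best_inliers) / len(best_inliers)
    dy = sum(qy - py for (_, py), (_, qy) in best_inliers) / len(best_inliers)
    return (dx, dy), best_inliers

random.seed(0)
# Synthetic bifurcation matches: 30 inliers shifted by (5, 3), 10 outliers.
inlier = [((float(i), 2.0 * i), (i + 5.0, 2.0 * i + 3.0)) for i in range(30)]
outlier = [((float(i), float(i)),
            (random.uniform(0, 50), random.uniform(0, 50)))
           for i in range(10)]
(dx, dy), consensus = ransac_translation(inlier + outlier)
```

Because any single correct match already determines the candidate translation, mismatched keypoints are voted out by the consensus set, which is why no complex descriptors are needed for robust matching.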