384 research outputs found

    A Joint Transformation and Residual Image Descriptor for Morphometric Image Analysis using an Equivalence Class Formulation

    Get PDF
    Existing computational anatomy methodologies for morphometric analysis of medical images are often based solely on the shape transformation, typically a diffeomorphism, that warps these images to a common template or vice versa. However, anatomical differences, as well as changes induced by pathology, prevent the warping transformation from producing an exact correspondence. The residual image captures the information that is not reflected by the diffeomorphism, and therefore allows us to maintain the entire morphological profile for analysis. In this paper we present a morphological descriptor which combines the warping transformation with the residual image in an equivalence class formulation, to characterize the morphology of anatomical structures. Equivalence classes are formed by pairs of transformation and residual, obtained for different levels of smoothness of the warping transformation. These pairs belong to the same equivalence class, since they jointly reconstruct the exact same morphology. Moreover, pattern classification methods are trained on the entire equivalence class, instead of a single pair, in order to become more robust to a variety of factors that affect the warping transformation, including the anatomy being measured. This joint descriptor is evaluated by statistical testing and by estimating class separation through classification, initially on 2-D synthetic images with simulated atrophy and subsequently on a volumetric dataset consisting of schizophrenia patients and healthy controls. The results indicate that the joint descriptor generally produces better and more robust class separation than either component used separately.
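
    To make the equivalence-class construction concrete, here is a minimal NumPy/SciPy sketch (not the authors' implementation): it assumes a dense 2-D displacement field has already been estimated by some registration method and varies the smoothness level by Gaussian-smoothing that field; by construction, every (transformation, residual) pair reconstructs the subject exactly, so all pairs encode the same morphology.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def warp(template, displacement):
    """Warp a 2-D template by a dense displacement field of shape (2, H, W)."""
    h, w = template.shape
    grid = np.mgrid[0:h, 0:w].astype(float)
    return map_coordinates(template, grid + displacement, order=1)

def equivalence_class(subject, template, displacement, smoothness_levels):
    """Build (transformation, residual) pairs that each reconstruct the subject.

    Smoothing the displacement field shifts morphological information from the
    transformation into the residual, but warp(template, d) + residual == subject
    holds for every pair, so all pairs describe the same morphology.
    """
    pairs = []
    for sigma in smoothness_levels:
        d = np.stack([gaussian_filter(displacement[i], sigma) for i in range(2)])
        residual = subject - warp(template, d)
        pairs.append((d, residual))
    return pairs

# Feature vector for a classifier: concatenate both components across all levels.
# features = np.concatenate([np.r_[d.ravel(), r.ravel()] for d, r in pairs])
```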

    Fundus image analysis for automatic screening of ophthalmic pathologies

    Full text link
    In recent years, the number of blindness cases has been significantly reduced. Despite this promising news, the World Health Organisation estimates that 80% of visual impairment (285 million cases in 2010) could be avoided if diagnosed and treated early. To accomplish this, eye care services need to be established in primary health care and screening campaigns should become routine in centres that serve people at risk. However, these solutions entail a high workload for trained experts in the analysis of the anomalous patterns of each eye disease. Therefore, the development of algorithms for automatic screening systems plays a vital role in this field. This thesis focuses on the automatic identification of the retinal damage caused by two of the most common pathologies in today's society: diabetic retinopathy (DR) and age-related macular degeneration (AMD). Specifically, the final goal of this work is to develop novel methods, based on fundus image description and classification, to characterise healthy and abnormal tissue in the retina background. In addition, pre-processing algorithms are proposed with the aim of normalising the high variability of fundus images and removing the contribution of some retinal structures that could hinder retinal damage detection. In contrast to most state-of-the-art work on damage detection in fundus images, the methods proposed throughout this manuscript avoid the need for lesion segmentation or candidate map generation before the classification stage. Local binary patterns, granulometric profiles and fractal dimension are locally computed to extract texture, morphological and roughness information from retinal images. Different combinations of this information feed advanced classification algorithms formulated to optimally discriminate exudates, microaneurysms, haemorrhages and healthy tissue. Through several experiments, the ability of the proposed system to identify DR and AMD signs is validated using different public databases with a large degree of variability and without image exclusion. Moreover, this thesis covers the basics of the deep learning paradigm. In particular, a novel approach based on convolutional neural networks (CNNs) is explored. The transfer learning technique is applied to fine-tune the most important state-of-the-art CNN architectures. Exudate detection and localisation tasks using neural networks are carried out in the last two experiments of this thesis. An objective comparison between the hand-crafted feature extraction and classification process and the prediction models based on CNNs is established. The promising results of this PhD thesis and the affordable cost and portability of retinal cameras could facilitate the incorporation of the developed algorithms in a computer-aided diagnosis (CAD) system to help specialists in the accurate detection of anomalous patterns characteristic of the two diseases under study: DR and AMD.
    Colomer Granero, A. (2018). Fundus image analysis for automatic screening of ophthalmic pathologies [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/99745
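
    As an illustration of the hand-crafted branch described above, here is a minimal sketch of patch-wise local binary pattern features feeding a standard classifier, using scikit-image and scikit-learn; the radius, patch size and SVM choice are illustrative assumptions, not the configuration used in the thesis.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(patch, radius=3, n_points=24):
    """Uniform LBP histogram of a grayscale (uint8) fundus patch."""
    codes = local_binary_pattern(patch, n_points, radius, method="uniform")
    hist, _ = np.histogram(codes, bins=n_points + 2,
                           range=(0, n_points + 2), density=True)
    return hist

def patch_features(image, patch_size=64):
    """Slide a non-overlapping window over a grayscale image and describe each patch."""
    feats, coords = [], []
    for y in range(0, image.shape[0] - patch_size + 1, patch_size):
        for x in range(0, image.shape[1] - patch_size + 1, patch_size):
            feats.append(lbp_histogram(image[y:y + patch_size, x:x + patch_size]))
            coords.append((y, x))
    return np.array(feats), coords

# Training on labelled patches (e.g. healthy vs. exudate):
# clf = SVC(kernel="rbf").fit(train_features, train_labels)
# predictions = clf.predict(test_features)
```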

    Machine Intelligence for Advanced Medical Data Analysis: Manifold Learning Approach

    Get PDF
    In the current work, linear and non-linear manifold learning techniques, specifically Principal Component Analysis (PCA) and Laplacian Eigenmaps, are studied in detail. Their applications in medical image and shape analysis are investigated. In the first contribution, a manifold learning-based multi-modal image registration technique is developed, which results in a unified intensity system through intensity transformation between the reference and sensed images. The transformation eliminates intensity variations in multi-modal medical scans and hence facilitates employing well-studied mono-modal registration techniques. The method can be used for registering multi-modal images with full and partial data. Next, a manifold learning-based scale-invariant global shape descriptor is introduced. The proposed descriptor benefits from the capability of Laplacian Eigenmaps in dealing with high-dimensional data by introducing an exponential weighting scheme. It eliminates the limitations tied to the well-known cotangent weighting scheme, namely dependency on a triangular mesh representation and high intra-class quality of 3D models. Finally, a novel descriptive model for diagnostic classification of pulmonary nodules is presented. The descriptive model exploits structural differences between benign and malignant nodules for automatic and accurate classification of a candidate nodule. It extracts concise and discriminative features automatically from the 3D surface structure of a nodule, using the spectral features studied in the previous work combined with a point cloud-based deep learning network. Extensive experiments have been conducted and have shown that the proposed algorithms based on manifold learning outperform several state-of-the-art methods. Advanced computational techniques combining manifold learning and deep networks can play a vital role in effective healthcare delivery by providing a framework for several fundamental tasks in image and shape processing, namely registration, classification, and detection of features of interest.
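
    As a point of reference for the two manifold-learning tools discussed, the sketch below contrasts PCA with Laplacian Eigenmaps (scikit-learn's SpectralEmbedding) on toy data; the exponential weighting scheme and the registration and nodule pipelines of the dissertation are not reproduced here.

```python
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import PCA
from sklearn.manifold import SpectralEmbedding

# Toy manifold data standing in for image patches or mesh-vertex features.
X, _ = make_swiss_roll(n_samples=500, random_state=0)

# Linear embedding: PCA keeps the directions of maximal variance.
X_pca = PCA(n_components=2).fit_transform(X)

# Non-linear embedding: Laplacian Eigenmaps, i.e. a spectral embedding of a
# k-nearest-neighbour graph built over the samples.
X_lap = SpectralEmbedding(n_components=2, n_neighbors=10,
                          affinity="nearest_neighbors").fit_transform(X)

print(X_pca.shape, X_lap.shape)  # (500, 2) (500, 2)
```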

    Attribute vector guided groupwise registration

    Get PDF
    Groupwise registration has recently been introduced to register a group of images simultaneously, avoiding the selection of a particular template. To achieve this, several methods have been proposed that take advantage of information-theoretic entropy measures based on image intensity. However, simplistic use of voxelwise image intensity is not sufficient to establish reliable correspondences, since it lacks important contextual information. Therefore, we explore the notion of an attribute vector as the voxel signature, instead of image intensity, to guide correspondence detection in groupwise registration. In particular, for each voxel, the attribute vector is computed from its multi-scale neighborhoods, in order to capture the geometric information at different scales. The probability density function (PDF) of each element in the attribute vector is then estimated from the local neighborhood, providing a statistical summary of the underlying anatomical structure in that local pattern. Eventually, with the help of the Jensen-Shannon (JS) divergence, a group of subjects can be aligned simultaneously by minimizing the sum of JS divergences across the image domain and all attributes. We have applied our groupwise registration algorithm to both real data (the NIREP NA0 dataset) and simulated data (12 pairs of normal-control and simulated-atrophy images). The experimental results demonstrate that our method yields better registration accuracy compared with a popular groupwise registration method.
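
    The quantity minimized across the image domain is a multi-distribution Jensen-Shannon divergence; below is a small NumPy sketch of that divergence, with uniform weights assumed and toy histograms standing in for the locally estimated attribute PDFs.

```python
import numpy as np

def js_divergence(pdfs, eps=1e-12):
    """Generalized Jensen-Shannon divergence of a group of discrete PDFs.

    pdfs: array-like of shape (n_subjects, n_bins). In groupwise registration
    this would be evaluated per voxel and per attribute, and the sum over the
    image domain and all attributes is the cost to minimize.
    """
    p = np.asarray(pdfs, dtype=float) + eps
    p /= p.sum(axis=1, keepdims=True)

    def entropy(q):
        return -(q * np.log(q)).sum(axis=-1)

    mixture = p.mean(axis=0)
    return entropy(mixture) - entropy(p).mean()

# Toy example: histograms of one attribute in a local neighborhood of 4 subjects.
rng = np.random.default_rng(0)
group = [np.histogram(rng.normal(loc=s, size=200), bins=16, range=(-4, 4))[0]
         for s in (0.0, 0.1, 0.0, -0.1)]
print(js_divergence(group))  # small when the subjects are well aligned
```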

    Two and three dimensional segmentation of multimodal imagery

    Get PDF
    The role of segmentation in the realms of image understanding/analysis, computer vision, pattern recognition, remote sensing and medical imaging has been significantly augmented in recent years due to accelerated scientific advances in the acquisition of image data. This low-level analysis protocol is critical to numerous applications, with the primary goal of expediting and improving the effectiveness of subsequent high-level operations by providing a condensed and pertinent representation of image information. In this research, we propose a novel unsupervised segmentation framework for facilitating meaningful segregation of 2-D/3-D image data across multiple modalities (color, remote-sensing and biomedical imaging) into non-overlapping partitions using several spatial-spectral attributes. Initially, our framework exploits the information obtained from detecting edges inherent in the data. To this effect, using a vector gradient detection technique, pixels without edges are grouped and individually labeled to partition some initial portion of the input image content. Pixels that contain higher gradient densities are included through the dynamic generation of segments as the algorithm progresses, yielding an initial region map. Subsequently, texture modeling is performed, and the obtained gradient, texture and intensity information, along with the aforementioned initial partition map, are used in a multivariate refinement procedure that fuses groups with similar characteristics, yielding the final output segmentation. Experimental results obtained in comparison to published state-of-the-art segmentation techniques for color as well as multi/hyperspectral imagery demonstrate the advantages of the proposed method. Furthermore, to achieve improved computational efficiency, we propose an extension of the aforementioned methodology in a multi-resolution framework, demonstrated on color images. Finally, this research also encompasses a 3-D extension of the aforementioned algorithm, demonstrated on medical (Magnetic Resonance Imaging / Computed Tomography) volumes.
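
    As a rough illustration of the first stage only, the sketch below groups low-gradient pixels into labeled seed regions with scikit-image and SciPy; the luminance-based gradient, the percentile threshold and the omission of the texture-driven refinement are simplifying assumptions, not the method as published.

```python
import numpy as np
from scipy import ndimage
from skimage import color, filters

def initial_region_map(rgb_image, gradient_percentile=60):
    """Label connected groups of low-gradient pixels as initial segments.

    High-gradient (edge) pixels are left unlabeled (0); in the full method
    they would be absorbed during the later texture-driven refinement stage.
    """
    # The vector gradient of the color image is approximated here by the
    # gradient magnitude of the luminance channel.
    gray = color.rgb2gray(rgb_image)
    grad = filters.sobel(gray)
    edge_free = grad < np.percentile(grad, gradient_percentile)
    labels, n_segments = ndimage.label(edge_free)
    return labels, n_segments
```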

    Registration of medical images for applications in minimally invasive procedures

    Get PDF
    The registration of medical images is necessary to establish spatial correspondences across two or more images. Registration is rarely the end-goal; instead, the results of image registration are used in other tasks. The starting point of this thesis is to analyze which state-of-the-art image registration methods are suitable for assisting a physician during a minimally invasive procedure, such as a percutaneous procedure performed manually or a teleoperated intervention performed by means of a robot. The first conclusion is that, even though much previous work has been devoted to developing registration algorithms for the medical context, most of them are not designed to be used in the operating room (OR) scenario because, compared to other applications, the OR also requires strong validation, real-time performance and the presence of other instruments. Almost all of these algorithms are based on a three-phase iteration: optimize-transform-evaluate similarity. In this thesis, we study the feasibility of this three-step approach in the OR, showing the limits that such an approach encounters in the applications we are considering. We investigate how a simple method could be realized and what the assumptions are for such a method to work. We then develop a theory that is suitable for registering large sets of unstructured data extracted from medical images while taking into account the constraints of the OR. Using the whole radiologic information is not feasible in the OR context; therefore, the method we introduce registers processed datasets extracted from the original medical images. The framework we propose is designed to find the spatial correspondence in closed form, taking into account the type of data, the real-time constraint and the presence of noise and/or small deformations. The theory and algorithms we have developed are set in the framework of the shape theory proposed by Kendall (Kendall's shapes) and use a global descriptor of the shape to compute the correspondences and the distance between shapes. Since registration is only one component of a medical application, the last part of the thesis is dedicated to some practical applications in the OR that can benefit from the registration procedure.
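
    The closed-form flavour of the alignment step can be illustrated with the classic Kabsch/Procrustes solution for two point sets with known correspondences; this NumPy sketch is a standard construction and does not reproduce the Kendall-shape descriptor or the correspondence search developed in the thesis.

```python
import numpy as np

def kabsch_align(source, target):
    """Closed-form least-squares rigid alignment of corresponding 3-D point sets.

    Returns rotation R and translation t such that R @ source[i] + t best
    matches target[i] in the least-squares sense (Kabsch/Procrustes solution).
    """
    src_c = source - source.mean(axis=0)
    tgt_c = target - target.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ tgt_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = target.mean(axis=0) - R @ source.mean(axis=0)
    return R, t

# Usage: points extracted from a pre-operative scan aligned to intra-operative ones.
# R, t = kabsch_align(preop_points, intraop_points)
# aligned = preop_points @ R.T + t
```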

    Visualizing and Predicting the Effects of Rheumatoid Arthritis on Hands

    Get PDF
    This dissertation was inspired by the difficult decisions patients with chronic diseases have to make about treatment options in light of uncertainty. We look at rheumatoid arthritis (RA), a chronic, autoimmune disease that primarily affects the synovial joints of the hands and causes pain and deformities. In this work, we focus on several parts of a computer-based decision tool that patients can interact with using gestures, ask questions about the disease, and visualize possible futures. We propose a hand gesture based interaction method that is easily set up in a doctor's office and can be trained using a custom set of gestures that are least painful. Our system is versatile and can be used for operations ranging from simple selections to navigating a 3D world. We propose a point distribution model (PDM) that is capable of modeling hand deformities that occur due to RA, and a generalized fitting method for use on radiographs of hands. Using our shape model, we show a novel visualization of disease progression. Using expertly staged radiographs, we propose a novel distance metric learning and embedding technique that can be used to automatically stage an unlabeled radiograph. Given a large set of expertly labeled radiographs, our data-driven approach can be used to extract different modes of deformation specific to a disease.
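
    The point distribution model follows the standard construction: align the landmark sets and apply PCA to the flattened landmark vectors. The sketch below assumes the hand landmarks are already aligned (the Procrustes step and the generalized fitting to radiographs are omitted), so it illustrates the general PDM idea rather than the dissertation's exact model.

```python
import numpy as np

def build_pdm(landmark_sets, n_modes=3):
    """Point distribution model from pre-aligned 2-D hand landmarks.

    landmark_sets: array of shape (n_shapes, n_landmarks, 2).
    Returns the mean shape, the first n_modes modes of deformation and their variances.
    """
    X = landmark_sets.reshape(len(landmark_sets), -1)   # flatten each shape
    mean_shape = X.mean(axis=0)
    _, S, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
    modes = Vt[:n_modes]                                # principal deformation modes
    variances = (S[:n_modes] ** 2) / (len(X) - 1)
    return mean_shape, modes, variances

def synthesize(mean_shape, modes, weights):
    """New plausible hand shape: mean plus a weighted sum of deformation modes."""
    shape = mean_shape + np.asarray(weights) @ modes
    return shape.reshape(-1, 2)

# Visualizing disease progression could then sweep one mode's weight over
# +/- a few standard deviations (square roots of the corresponding variances).
```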

    Connected Attribute Filtering Based on Contour Smoothness

    Get PDF