8 research outputs found

    Automatic quantification of mammary glands on non-contrast X-ray CT by using a novel segmentation approach

    ABSTRACT This paper describes a new automatic segmentation method for quantifying the volume and density of mammary gland regions on non-contrast CT images. The proposed method uses two processing steps, (1) breast region localization and (2) breast region decomposition, to accomplish robust mammary gland segmentation on CT images. The first step detects the minimum bounding boxes of the left and right breast regions using a machine-learning approach that adapts to the large variance in breast appearance across age groups. The second step divides the breast region on each side into mammary gland, fat tissue, and other regions using a spectral clustering technique that focuses on the intra-region similarities of each patient and aims to overcome the image variance caused by different scan parameters. The whole approach is designed as a simple structure with a minimal number of parameters to gain robustness and computational efficiency in a real clinical setting. We applied this approach to a dataset of 300 CT scans sampled in equal numbers from women aged 30 to 50 years. Compared with human annotations, the proposed approach successfully measures the volume and quantifies the distribution of CT numbers of mammary gland regions, and the experimental results demonstrate that it achieves results consistent with manual annotations. Through the proposed framework, an efficient and effective low-cost clinical screening scheme could easily be implemented to predict breast cancer risk, especially on already acquired scans.
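    As a rough illustration of the second step (decomposing a detected breast bounding box into tissue groups by spectral clustering of CT numbers), the sketch below is one way this could look. The function name, the subsampling strategy, and all parameter values are assumptions made for this sketch, not details taken from the paper.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def decompose_breast_region(ct_volume, bbox, n_clusters=3, max_voxels=2000, seed=0):
    """Cluster CT numbers inside a detected breast bounding box into tissue
    groups (e.g. mammary gland, fat, other) using spectral clustering.

    ct_volume : 3-D array of CT numbers (HU)
    bbox      : (zmin, zmax, ymin, ymax, xmin, xmax) from the localization step
    """
    zmin, zmax, ymin, ymax, xmin, xmax = bbox
    roi = ct_volume[zmin:zmax, ymin:ymax, xmin:xmax]
    intensities = roi.reshape(-1, 1).astype(float)

    # Spectral clustering scales quadratically with the number of samples, so
    # cluster a random subsample and assign the remaining voxels to the
    # nearest cluster mean afterwards.
    rng = np.random.default_rng(seed)
    idx = rng.choice(intensities.shape[0],
                     size=min(max_voxels, intensities.shape[0]), replace=False)
    sc = SpectralClustering(n_clusters=n_clusters, affinity="nearest_neighbors",
                            n_neighbors=20, random_state=seed)
    sub_labels = sc.fit_predict(intensities[idx])

    means = np.array([intensities[idx][sub_labels == k].mean()
                      for k in range(n_clusters)])
    labels = np.argmin(np.abs(intensities - means.reshape(1, -1)), axis=1)
    return labels.reshape(roi.shape), means
```

    The cluster with the higher mean CT number would then be taken as the mammary gland candidate, since glandular tissue is denser than fat.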

    Accurate Segmentation of CT Pelvic Organs via Incremental Cascade Learning and Regression-based Deformable Models

    Accurate segmentation of male pelvic organs from computed tomography (CT) images is important in image-guided radiotherapy (IGRT) of prostate cancer. The efficacy of radiation treatment highly depends on the segmentation accuracy of planning and treatment CT images. Clinically, manual delineation is still generally performed in most hospitals; however, it is time consuming and suffers from large inter-operator variability due to the low tissue contrast of CT images. To reduce the manual effort and improve the consistency of segmentation, it is desirable to develop an automatic method for rapid and accurate segmentation of pelvic organs from planning and treatment CT images. This dissertation marries machine learning and medical image analysis to address two fundamental yet challenging segmentation problems in image-guided radiotherapy of prostate cancer. Planning-CT Segmentation. Deformable models are popular methods for planning-CT segmentation; however, they are well known to be sensitive to initialization and ineffective in segmenting organs with complex shapes. To address these limitations, this dissertation investigates a novel deformable model named the regression-based deformable model (RDM). Instead of locally deforming the shape model, in the RDM the deformation at each model point is explicitly estimated from local image appearance and used to guide deformable segmentation. As the estimated deformation can be long-distance and is spatially adaptive to each model point, the RDM is insensitive to initialization and more flexible than conventional deformable models. These properties make it very suitable for CT pelvic organ segmentation, where initialization is difficult to obtain and organs may have complex shapes. Treatment-CT Segmentation. Most existing methods have two limitations when applied to treatment-CT segmentation. First, they have limited accuracy because they overlook the availability of patient-specific data in the IGRT workflow. Second, they are time consuming and may take minutes or even longer per segmentation. To improve both accuracy and efficiency, this dissertation combines incremental learning with anatomical landmark detection for fast localization of the prostate in treatment CT images. Specifically, cascade classifiers are learned from a population to automatically detect several anatomical landmarks in the image. Based on these landmarks, the prostate is quickly localized by aligning and then fusing previously segmented prostate shapes of the same patient. To improve the performance of landmark detection, a novel learning scheme named "incremental learning with selective memory" is proposed to personalize the population-based cascade classifiers to the patient under treatment. Extensive experiments on a large dataset show that the proposed method achieves accuracy comparable to state-of-the-art methods while substantially reducing runtime from minutes to just 4 seconds.
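    The following sketch shows the general idea behind a regression-guided deformable model as described above: each model point's displacement is predicted directly from its local image appearance and then applied, instead of deforming the shape only locally. The regressor, the feature extractor, and the regularization step are placeholders assumed for illustration; they are not the dissertation's actual implementation.

```python
import numpy as np

def regression_guided_deform(image, init_points, regressor, extract_features,
                             n_iters=10, step=1.0):
    """Illustrative regression-guided deformable-model loop.

    init_points      : (N, 3) initial surface model points
    regressor        : object with .predict(features) -> (N, 3) displacements,
                       trained offline to map local appearance to the
                       deformation towards the target boundary (placeholder)
    extract_features : callable(image, point) -> 1-D feature vector (placeholder)
    """
    points = init_points.astype(float).copy()
    for _ in range(n_iters):
        feats = np.stack([extract_features(image, p) for p in points])
        displacements = regressor.predict(feats)   # can be long-range, point-specific
        points += step * displacements             # move each point towards the boundary
        # A full method would also regularize the shape here, e.g. by projecting
        # the points onto a statistical shape model.
    return points
```

    Because the predicted displacements can be large, the loop tolerates a poor initialization, which is the property emphasized in the abstract.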

    The Discriminative Generalized Hough Transform for Localization of Highly Variable Objects and its Application for Surveillance Recordings

    This work is about the localization of arbitrary objects in 2D images in general and the localization of persons in video surveillance recordings in particular. More precisely, it is about localizing specific landmarks. It evaluates the possibilities and limitations of localization approaches based on the Generalized Hough Transform (GHT), especially the Discriminative Generalized Hough Transform (DGHT). GHT-based approaches count matching model and feature points, and the most likely target position is the one with the highest number of matches. In addition, the DGHT comprises a statistical learning approach for generating optimal DGHT models, which has achieved good results on medical images. This work shows that the DGHT is not restricted to medical tasks, but that it has issues with the large target-object variability that is frequent in video surveillance tasks. Like all GHT-based approaches, the DGHT only considers the number of matching model-feature point combinations, which means that all model points are treated independently. This work shows that model points are not independent of each other and that treating them independently results in high error rates. This drawback is analyzed and a universal solution is presented that is applicable not only to the DGHT but to all GHT-based approaches. The solution is based on an additional classifier that takes the whole set of matching model-feature point combinations into account to estimate a confidence score. On all tested databases, this approach reduced the error rates drastically, by up to 94.9%. Furthermore, this work presents a general approach for combining multiple GHT models into a deeper model. This can be used to combine the localization results of different object landmarks such as the mouth, nose, and eyes. Similar to Convolutional Neural Networks (CNNs), this splits the target-object variability into multiple smaller variabilities. A comparison of GHT-based approaches with CNNs and a description of the advantages, disadvantages, and potential applications of both approaches concludes this work.
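    A minimal sketch of the two ideas described above, assuming an sklearn-style binary classifier with a predict_proba method: classic GHT voting, followed by re-scoring the strongest candidate positions with a classifier that sees the full pattern of which model points matched, rather than only the vote count. All names and data structures are illustrative.

```python
import numpy as np

def ght_vote(feature_points, model_offsets, image_shape):
    """Classic GHT voting: every (feature point, model offset) pair casts a vote
    for a candidate reference position; the standard GHT takes the arg-max cell."""
    accumulator = np.zeros(image_shape, dtype=np.int32)
    votes = {}  # candidate position -> set of model-point indices that voted for it
    for f in feature_points:
        for m_idx, offset in enumerate(model_offsets):
            cand = np.clip(np.asarray(f) - np.asarray(offset), 0,
                           np.asarray(image_shape) - 1).astype(int)
            cand = tuple(cand)
            accumulator[cand] += 1
            votes.setdefault(cand, set()).add(m_idx)
    return accumulator, votes

def rescore_candidates(votes, n_model_points, classifier, top_candidates):
    """Re-rank candidate positions with an extra classifier that looks at the whole
    matching pattern (which model points voted), not just how many votes there were."""
    patterns = np.zeros((len(top_candidates), n_model_points))
    for i, cand in enumerate(top_candidates):
        patterns[i, sorted(votes.get(cand, ()))] = 1.0   # binary matching pattern
    confidences = classifier.predict_proba(patterns)[:, 1]
    best = top_candidates[int(np.argmax(confidences))]
    return best, confidences
```

    Here top_candidates would be, for example, the positions of the highest accumulator cells, and the classifier would be trained on matching patterns from correctly and incorrectly localized training examples.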

    Computational methods for the analysis of functional 4D-CT chest images.

    Medical imaging is an important technology that has been used intensively in the last few decades for disease diagnosis and monitoring as well as for the assessment of treatment effectiveness. Medical images provide a very large amount of valuable information, too vast to be fully exploited by radiologists and physicians. Therefore, the design of computer-aided diagnostic (CAD) systems, which can be used as assistive tools by the medical community, is of great importance. This dissertation deals with the development of a complete CAD system for lung cancer, which remains the leading cause of cancer-related death in the USA; in 2014, there were approximately 224,210 new cases of lung cancer and 159,260 related deaths. The process begins with the detection of lung cancer through the diagnosis of lung nodules (a manifestation of lung cancer). These nodules are approximately spherical regions of primarily high-density tissue that are visible in computed tomography (CT) images of the lung. The treatment of lung cancer is complex; nearly 70% of lung cancer patients require radiation therapy as part of their treatment. Radiation-induced lung injury is a limiting toxicity that may decrease cure rates and increase treatment morbidity and mortality. Finding ways to accurately detect lung injury at an early stage, and hence prevent it, would have significant positive consequences for lung cancer patients. The ultimate goal of this dissertation is to develop a clinically usable CAD system that can improve the sensitivity and specificity of early detection of radiation-induced lung injury, based on the hypothesis that irradiated lung tissue may be affected and lose functionality as a side effect of radiation therapy. This hypothesis has been validated by demonstrating that automatic segmentation of the lung regions and registration of consecutive respiratory phases can provide elasticity, ventilation, and texture features that serve as discriminatory descriptors for early detection of radiation-induced lung injury. The proposed methodologies lead to novel indexes for distinguishing normal/healthy and injured lung tissue in clinical decision-making. To achieve this goal, a CAD system for accurate detection of radiation-induced lung injury has been developed, built from three basic components: lung field segmentation, lung registration, and feature extraction with tissue classification. The dissertation starts with an exploration of the available medical imaging modalities to present the importance of medical imaging in today's clinical applications. Secondly, the methodologies, challenges, and limitations of recent CAD systems for lung cancer detection are covered. This is followed by an accurate segmentation methodology for the lung parenchyma, with a focus on pathological lungs, to extract the volume of interest (VOI) to be analyzed for potential lung injuries stemming from radiation therapy. After segmentation of the VOI, a lung registration framework is introduced to perform a crucial step that ensures the co-alignment of the intra-patient scans. This step eliminates the effects of orientation differences, motion, breathing, heartbeats, and differences in scanning parameters, so that the functionality features of the lung fields can be extracted accurately. The developed registration framework also helps in the evaluation and gated control of the radiotherapy through motion estimation analysis before and after the therapy dose. Finally, the radiation-induced lung injury detection framework is introduced, which combines the previous two image processing and analysis steps with a feature estimation and classification step. This framework estimates and combines both texture and functional features. The texture features are modeled using a novel 7th-order Markov-Gibbs random field (MGRF) model that accurately models the texture of healthy and injured lung tissue by simultaneously accounting for both vertical and horizontal dependencies between voxel-wise signals. The functionality features are computed from the deformation fields obtained from the 4D-CT lung registration, which map lung voxels between successive CT scans in the respiratory cycle. These features describe the ventilation (air flow rate) of the lung tissue using the Jacobian of the deformation field, and the tissue elasticity using the strain components calculated from the gradient of the deformation field. Finally, these features are combined in the classification model to detect the injured parts of the lung at an early stage and enable earlier intervention.
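    As a sketch of how the functionality features described above can be derived from a deformation field (assumed to be produced by a prior 4D-CT registration step), the code below computes the Jacobian determinant as a ventilation surrogate and a principal strain value from the gradient of the displacement field. The array layout, unit voxel spacing, and the use of the Green-Lagrange strain are assumptions of this sketch, not details confirmed by the dissertation.

```python
import numpy as np

def ventilation_and_strain(displacement):
    """Voxel-wise functional features from a 4D-CT registration result.

    displacement : (3, Z, Y, X) displacement field (dz, dy, dx) mapping one
                   respiratory phase to the next (unit voxel spacing assumed).
    Returns the Jacobian determinant (local volume change, a ventilation
    surrogate) and the largest principal Green-Lagrange strain (an elasticity
    feature).
    """
    ndim = displacement.shape[0]
    # grad[i, j, ...] = d u_i / d x_j : gradient of each component along each axis
    grad = np.stack([np.stack(np.gradient(displacement[i]), axis=0)
                     for i in range(ndim)], axis=0)
    # Deformation gradient F = I + grad(u); its determinant measures volume change
    identity = np.eye(ndim).reshape(ndim, ndim, 1, 1, 1)
    F = identity + grad                           # shape (3, 3, Z, Y, X)
    F_vox = np.moveaxis(F, (0, 1), (-2, -1))      # shape (Z, Y, X, 3, 3)
    jacobian = np.linalg.det(F_vox)               # > 1: expansion, < 1: compression
    # Green-Lagrange strain E = 0.5 (F^T F - I); its largest eigenvalue summarizes stretch
    E = 0.5 * (np.swapaxes(F_vox, -2, -1) @ F_vox - np.eye(ndim))
    principal_strain = np.linalg.eigvalsh(E)[..., -1]   # eigenvalues in ascending order
    return jacobian, principal_strain
```

    Voxel-wise maps like these, together with the texture descriptors, would then feed the classification stage described in the abstract.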

    Computational Analysis of Fundus Images: Rule-Based and Scale-Space Models

    Fundus images are one of the most important imaging examinations in modern ophthalmology because they are simple, inexpensive and, above all, noninvasive. Nowadays, the acquisition and storage of high-resolution fundus images is relatively easy and fast. Therefore, fundus imaging has become a fundamental investigation in retinal lesion detection, ocular health monitoring and screening programmes. Given the large volume and clinical complexity associated with these images, their analysis and interpretation by trained clinicians is a time-consuming task that is prone to human error. Therefore, there is growing interest in developing automated approaches that are affordable and have high sensitivity and specificity. These automated approaches need to be robust if they are to be used in the general population to diagnose and track retinal diseases. To be effective, the automated systems must be able to recognize normal structures and distinguish them from pathological clinical manifestations. The main objective of the research leading to this thesis was to develop automated systems capable of recognizing and segmenting retinal anatomical structures and the retinal pathological clinical manifestations associated with the most common retinal diseases. In particular, these automated algorithms were developed on the premise of robustness and efficiency to deal with the difficulties and complexity inherent in these images. Four objectives were considered in the analysis of fundus images: segmentation of exudates, localization of the optic disc, detection of the midline of blood vessels together with segmentation of the vascular network, and detection of microaneurysms. In addition, the detection of diabetic retinopathy on fundus images was evaluated using the microaneurysm detection method. An overview of the state of the art is presented to compare the performance of the developed approaches with the main methods described in the literature for each of the objectives above. To facilitate the comparison of methods, the state of the art has been divided into rule-based methods and machine learning-based methods. In the research reported in this thesis, rule-based methods built on image processing were preferred over machine learning-based methods. In particular, scale-space methods proved to be effective in achieving the set goals. Two different approaches to exudate segmentation were developed. The first approach is based on scale-space curvature in combination with the local maximum of a scale-space blob detector and dynamic thresholds. The second approach is based on the analysis of the distribution function of the maximum values of the noise map in combination with morphological operators and adaptive thresholds. Both approaches produce a correct segmentation of the exudates and cope well with the uneven illumination and contrast variations in fundus images. Optic disc localization was achieved using a new technique called cumulative sum fields, combined with a vascular enhancement method. The algorithm proved to be reliable and efficient, especially for pathological images, and its robustness was tested on eight datasets. Detection of the midline of the blood vessels was achieved using a modified corner detector in combination with binary filters and dynamic thresholding. Segmentation of the vascular network was achieved using a new scale-space blood vessel enhancement method. The developed methods proved effective in detecting the midline of blood vessels and segmenting the vascular network. The microaneurysm detection method relies on a scale-space microaneurysm detection and labelling system; a new approach based on the neighbourhood of the microaneurysm candidates was used for labelling. Microaneurysm detection also enabled the assessment of diabetic retinopathy detection. The microaneurysm detection method proved to be competitive with other methods, especially on high-resolution images, and diabetic retinopathy detection with the developed method showed performance similar to other methods and to human experts. The results of this work show that it is possible to develop reliable and robust scale-space methods that can detect various anatomical structures and pathological features of the retina. Furthermore, the results show that although recent research has focused on machine learning methods, scale-space methods can achieve very competitive results and typically depend less on the image acquisition equipment. The methods developed in this work may also be relevant for the future definition of new descriptors and features that could significantly improve the results of automated methods.
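    As a loose illustration of the scale-space candidate detection described above, the sketch below runs a Laplacian-of-Gaussian blob detector (skimage.feature.blob_log) on the green channel of a fundus image. The preprocessing, sigma range, and threshold are illustrative assumptions and do not reproduce the thesis's dynamic thresholding, morphology, or labelling steps.

```python
import numpy as np
from skimage import io, exposure, feature

def detect_blob_candidates(fundus_path, invert=False, min_sigma=2, max_sigma=12,
                           num_sigma=10, threshold=0.05):
    """Scale-space (Laplacian-of-Gaussian) blob candidates on a fundus image.

    invert=False : bright lesions such as exudates
    invert=True  : dark lesions such as microaneurysms
    """
    rgb = io.imread(fundus_path)                     # assumes an 8-bit RGB image
    green = rgb[..., 1].astype(float) / 255.0        # green channel has the best lesion contrast
    green = exposure.equalize_adapthist(green)       # compensate uneven illumination (CLAHE)
    if invert:
        green = 1.0 - green                          # blob_log responds to bright blobs
    blobs = feature.blob_log(green, min_sigma=min_sigma, max_sigma=max_sigma,
                             num_sigma=num_sigma, threshold=threshold)
    blobs[:, 2] *= np.sqrt(2)                        # convert sigma to an approximate radius
    return blobs                                     # rows: (row, col, radius)
```

    In a full pipeline, these candidates would be filtered and labelled, for example using the neighbourhood-based labelling mentioned in the abstract, before any retinopathy-level decision is made.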

    Biometrics

    Biometrics comprises methods for the unique recognition of humans based on one or more intrinsic physical or behavioral traits. In computer science in particular, biometrics is used as a form of identity and access management and access control. It is also used to identify individuals in groups that are under surveillance. The book consists of 13 chapters, each focusing on a certain aspect of the problem. The chapters are divided into three sections: physical biometrics, behavioral biometrics, and medical biometrics. The key objective of the book is to provide a comprehensive reference and text on human authentication and identity verification from physiological, behavioral, and other points of view. It aims to publish new insights into current innovations in computer systems and technology for biometrics development and its applications. The book was reviewed by the editor, Dr. Jucheng Yang, and by many of the guest editors, including Dr. Girija Chetty, Dr. Norman Poh, Dr. Loris Nanni, Dr. Jianjiang Feng, Dr. Dongsun Park, and Dr. Sook Yoon, who also made significant contributions to the book.