    Automatic Breast Density Classification on Tomosynthesis Images

    Breast cancer (BC) is the type of cancer that most affects women globally, hence its early detection is essential to guarantee effective treatment. Although digital mammography (DM) is the main method of BC detection, it has low sensitivity, with about 30% of positive cases going undetected due to the superimposition of breast tissue crossed by the X-ray beam. Digital breast tomosynthesis (DBT) does not share this limitation, as its image acquisition system allows the visualization of individual breast slices. Consequently, DBT was the object of this study as a means of determining one of the main risk factors for BC: breast density (BD). This thesis aimed at developing an algorithm that, taking advantage of the 3D nature of DBT images, automatically classifies them in terms of BD. A quantitative, objective and reproducible classification was thus obtained, which will contribute to ascertaining the risk of BC. The algorithm was developed in MATLAB and later transferred to a user interface that was compiled into an executable application. Using 350 images from the VICTRE database for the first classification phase – group 1 (ACR1+ACR2) versus group 2 (ACR3+ACR4) – the highest AUC value of 0.9797 was obtained. In the classification within groups 1 and 2, the AUC obtained was 0.7461 and 0.6736, respectively. The algorithm attained an accuracy of 82% for these images. Sixteen exams provided by Hospital da Luz were also evaluated, with an overall accuracy of 62.5%. A user-friendly and intuitive application was therefore created that prioritizes the use of DBT as a diagnostic method and allows an objective classification of BD. This study is a first step towards preparing medical institutions for the mandatory assessment of BD, at a time when BC is still a very present pathology that shortens the lives of thousands of people.
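    The AUC values reported above can be computed directly from classifier scores with the rank-based (Mann-Whitney) formulation: the AUC is the probability that a randomly chosen group 2 score exceeds a randomly chosen group 1 score. The sketch below uses hypothetical density scores, not data from the thesis.

```python
def auc(scores_neg, scores_pos):
    """Rank-based (Mann-Whitney) AUC: the fraction of (positive, negative)
    pairs in which the positive score is higher (ties count as 0.5)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical density scores: group 1 (ACR1+ACR2) vs group 2 (ACR3+ACR4)
group1 = [0.10, 0.25, 0.30, 0.40]
group2 = [0.35, 0.55, 0.70, 0.90]
print(auc(group1, group2))  # 0.9375
```

    A perfectly separating score assignment would give an AUC of 1.0, which is why values such as 0.9797 indicate near-perfect discrimination between the two density groups.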

    IMAGE PROCESSING, SEGMENTATION AND MACHINE LEARNING MODELS TO CLASSIFY AND DELINEATE TUMOR VOLUMES TO SUPPORT MEDICAL DECISION

    Techniques for processing and analysing images and medical data have become central to translational applications and research in clinical and pre-clinical environments. These techniques improve diagnostic accuracy and allow the assessment of treatment response by means of quantitative biomarkers in an efficient way. In the era of personalized medicine, early and effective prediction of therapy response in patients is still a critical issue. In radiation therapy planning, Magnetic Resonance Imaging (MRI) provides high-quality detailed images and excellent soft-tissue contrast, while Computerized Tomography (CT) images provide attenuation maps and very good hard-tissue contrast. In this context, Positron Emission Tomography (PET) is a non-invasive imaging technique which has the advantage, over morphological imaging techniques, of providing functional information about the patient's disease. In the last few years, several criteria to assess therapy response in oncological patients have been proposed, ranging from anatomical to functional assessments. Changes in tumour size are not necessarily correlated with changes in tumour viability and outcome. In addition, morphological changes resulting from therapy occur more slowly than functional changes. Inclusion of PET images in radiotherapy protocols is desirable because PET is predictive of treatment response and provides crucial information to accurately target the oncological lesion and to escalate the radiation dose without increasing normal tissue injury. For this reason, PET may be used for improving the Planning Treatment Volume (PTV). Nevertheless, due to the nature of PET images (low spatial resolution, high noise and weak boundaries), metabolic image processing is a critical task.
The aim of this Ph.D. thesis is to develop smart methodologies applied to the medical imaging field to analyse different kinds of problems related to medical images and data analysis, working closely with radiologist physicians. Various issues in the clinical environment have been addressed and improvements have been produced in various fields, such as organ and tissue segmentation and classification to delineate tumor volumes using machine learning techniques to support medical decisions. In particular, the following topics have been the object of this study:
    • Technique for Crohn’s Disease Classification using Kernel Support Vector Machines;
    • Automatic Multi-Seed Detection for MR Breast Image Segmentation;
    • Tissue Classification in PET Oncological Studies;
    • KSVM-Based System for the Definition, Validation and Identification of the Incisional Hernia Recurrence Risk Factors;
    • A smart and operator-independent system to delineate tumours in Positron Emission Tomography scans;
    • Active Contour Algorithm with Discriminant Analysis for Delineating Tumors in Positron Emission Tomography;
    • K-Nearest Neighbor driving Active Contours to Delineate Biological Tumor Volumes;
    • Tissue Classification to Support Local Active Delineation of Brain Tumors;
    • A fully automatic system for Positron Emission Tomography study segmentation.
This work has been developed in collaboration with the medical staff and colleagues at:
    • Dipartimento di Biopatologia e Biotecnologie Mediche e Forensi (DIBIMED), University of Palermo;
    • Cannizzaro Hospital of Catania;
    • Istituto di Bioimmagini e Fisiologia Molecolare (IBFM), Centro Nazionale delle Ricerche (CNR) of Cefalù;
    • School of Electrical and Computer Engineering at Georgia Institute of Technology.
The proposed contributions have produced scientific publications in indexed computer science and medical journals and conferences.
They are very useful in terms of PET and MRI image segmentation and may be used daily as Medical Decision Support Systems to enhance the current methodology performed by healthcare operators in radiotherapy treatments. Future developments of this research concern the integration of data acquired by image analysis with the management and processing of big data coming from a wide range of heterogeneous sources.
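    As background to the PET delineation systems listed above, a common baseline in the field (not one of the thesis's proposed methods) is thresholding at a fixed fraction of the maximum standardized uptake value (SUVmax). A minimal sketch with a hypothetical SUV slice:

```python
def delineate_pet(slice_suv, fraction=0.4):
    """Baseline biological-tumor-volume mask: keep voxels whose SUV is at
    least `fraction` of the slice maximum (the classic 40%-of-SUVmax rule)."""
    suv_max = max(max(row) for row in slice_suv)
    thr = fraction * suv_max
    return [[1 if v >= thr else 0 for v in row] for row in slice_suv]

# Hypothetical 4x4 SUV slice with a hot lesion in the centre
suv = [
    [0.5, 0.6, 0.7, 0.5],
    [0.6, 4.0, 5.0, 0.7],
    [0.7, 4.5, 6.0, 0.6],
    [0.5, 0.6, 0.7, 0.5],
]
mask = delineate_pet(suv)
print(sum(sum(r) for r in mask))  # 4 voxels above 40% of SUVmax = 2.4
```

    Such fixed-threshold rules are sensitive to noise and weak boundaries, which is precisely the limitation that the active-contour and classifier-driven systems above aim to overcome.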

    Medical Image Segmentation: Thresholding and Minimum Spanning Trees

    In image segmentation, an image is divided into separate objects or regions. It is an essential step in image processing to define areas of interest for further processing or analysis. The segmentation process reduces the complexity of an image to simplify the analysis of the attributes obtained after segmentation. It changes the representation of the information in the original image and presents the pixels in a way that is more meaningful and easier to understand. Image segmentation has various applications. For medical images, the segmentation process aims to extract the image data set to identify areas of the anatomy relevant to a particular study or diagnosis of the patient. For example, one can locate affected or abnormal parts of the body. Segmentation of follow-up data and baseline lesion segmentation are also very important for assessing the treatment response. There are different methods used for image segmentation. They can be classified based on how they are formulated and how the segmentation process is performed. The methods include those based on threshold values, edge-based, cluster-based, model-based and hybrid methods, and methods based on machine learning and deep learning. Other methods are based on growing, splitting and merging regions, finding discontinuities in the edge, watershed segmentation, active contours and graph-based methods. In this thesis, we have developed methods for segmenting different types of medical images. We tested the methods on datasets of white blood cells (WBCs) and magnetic resonance images (MRI). The developed methods and the analysis performed on the image datasets are presented in three articles. In Paper A we proposed a method for segmenting nuclei and cytoplasm from white blood cells. The method estimates the threshold for segmentation of nuclei automatically based on local minima.
The method segments the WBCs before segmenting the cytoplasm depending on the complexity of the objects in the image. For images where the WBCs are well separated from red blood cells (RBCs), the WBCs are segmented by taking the average of nn images that were already filtered with a threshold value. For images where RBCs overlap the WBCs, the entire WBCs are segmented using simple linear iterative clustering (SLIC) and watershed methods. The cytoplasm is obtained by subtracting the segmented nucleus from the segmented WBC. The method is tested on two different publicly available datasets, and the results are compared with state of the art methods. In Paper B, we proposed a method for segmenting brain tumors based on minimum spanning tree (MST) concepts. The method performs interactive segmentation based on the MST. In this paper, the image is loaded in an interactive window for segmenting the tumor. The region of interest and the background are selected by clicking to split the MST into two trees. One of these trees represents the region of interest and the other represents the background. The proposed method was tested by segmenting two different 2D brain T1-weighted magnetic resonance image data sets. The method is simple to implement and the results indicate that it is accurate and efficient. In Paper C, we propose a method that processes a 3D MRI volume and partitions it into brain, non-brain tissues, and background segments. It is a graph-based method that uses MST to separate the 3D MRI into the brain, non-brain, and background regions. The graph is made from a preprocessed 3D MRI volume followed by constructing the MST. The segmentation process produces three labeled connected components which are reshaped back to the shape of the 3D MRI. The labels are used to segment the brain, non-brain tissues, and the background. 
The method was tested on three different publicly available data sets and the results were compared to different state-of-the-art methods.
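    The interactive MST segmentation of Paper B can be sketched as follows: build a 4-connected grid graph weighted by intensity differences, take its minimum spanning tree, and cut the heaviest edge on the tree path between a foreground and a background seed, leaving two trees that label object and background. This is an illustrative reconstruction from the summary above, not the paper's actual implementation; the image and seeds are hypothetical.

```python
from itertools import product

def mst_segment(img, seed_fg, seed_bg):
    """Interactive MST segmentation sketch: Kruskal MST over a 4-connected
    grid graph, then cut the heaviest edge on the tree path between the
    two seed pixels; the two remaining trees are object and background."""
    h, w = len(img), len(img[0])
    nodes = list(product(range(h), range(w)))
    edges = []
    for r, c in nodes:
        if c + 1 < w:
            edges.append((abs(img[r][c] - img[r][c+1]), (r, c), (r, c+1)))
        if r + 1 < h:
            edges.append((abs(img[r][c] - img[r+1][c]), (r, c), (r+1, c)))
    parent = {n: n for n in nodes}       # union-find for Kruskal
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree = {n: [] for n in nodes}        # MST adjacency (node -> [(nbr, w)])
    for wgt, a, b in sorted(edges):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            tree[a].append((b, wgt))
            tree[b].append((a, wgt))
    def path(src, dst):                  # DFS path between seeds in the MST
        stack, seen = [(src, [src])], {src}
        while stack:
            node, p = stack.pop()
            if node == dst:
                return p
            for nxt, _ in tree[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append((nxt, p + [nxt]))
    p = path(seed_fg, seed_bg)
    cut = max(zip(p, p[1:]), key=lambda e: dict(tree[e[0]])[e[1]])
    tree[cut[0]] = [(n, g) for n, g in tree[cut[0]] if n != cut[1]]
    tree[cut[1]] = [(n, g) for n, g in tree[cut[1]] if n != cut[0]]
    mask, stack = set(), [seed_fg]       # flood-fill the foreground tree
    while stack:
        n = stack.pop()
        if n not in mask:
            mask.add(n)
            stack.extend(nxt for nxt, _ in tree[n])
    return mask

# Toy 2x4 "image": dark region on the left, bright region on the right
img = [
    [1, 1, 9, 9],
    [1, 1, 9, 9],
]
mask = mst_segment(img, seed_fg=(0, 3), seed_bg=(0, 0))
print(sorted(mask))  # [(0, 2), (0, 3), (1, 2), (1, 3)]
```

    Cutting the heaviest edge on the seed-to-seed path is the minimax-path principle: the two seeds end up in the components separated by the weakest intensity link between them.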

    Liver segmentation using 3D CT scans.

    Master of Science in Computer Science, University of KwaZulu-Natal, Durban, 2018. Abstract available in PDF file.

    Computer-aided detection and diagnosis of breast cancer in 2D and 3D medical imaging through multifractal analysis

    This Thesis describes the research work performed in the scope of a doctoral research program and presents its conclusions and contributions. The research activities were carried out in industry with Siemens S.A. Healthcare Sector, in integration with a research team. Siemens S.A. Healthcare Sector is one of the world's biggest suppliers of products, services and complete solutions in the medical sector. The company offers a wide selection of diagnostic and therapeutic equipment and information systems. Siemens products for medical imaging and in vivo diagnostics include: ultrasound, computed tomography, mammography, digital breast tomosynthesis, magnetic resonance, equipment for angiography and coronary angiography, nuclear imaging, and many others. Siemens has vast experience in Healthcare and, at the beginning of this project, it was strategically interested in solutions to improve the detection of breast cancer, to increase its competitiveness in the sector. The company owns several patents related to self-similarity analysis, which formed the background of this Thesis. Furthermore, Siemens intended to explore commercially the computer-aided automatic detection and diagnosis field for portfolio integration. The deep knowledge acquired by the University of Beira Interior in this area, together with this Thesis, will therefore allow Siemens to apply the most recent scientific progress to the detection of breast cancer, and it is foreseeable that together we can develop a new technology with high potential. The project resulted in the submission of two invention disclosures for evaluation in Siemens A.G., two articles published in peer-reviewed journals indexed in the ISI Science Citation Index, two other articles submitted to peer-reviewed journals, and several international conference papers.
This work on computer-aided diagnosis in the breast led to innovative software and novel processes of research and development, for which the project received the Siemens Innovation Award in 2012. It was very rewarding to carry out such a technological and innovative project in a socially sensitive area such as breast cancer. In breast cancer, early detection and correct diagnosis are of utmost importance for an effective and efficient therapeutic prescription that increases the survival rate of the disease. Multifractal theory was initially introduced in the context of signal analysis, and its usefulness has been demonstrated in describing the physiological behaviour of bio-signals and even in the detection and prediction of pathologies. In this Thesis, three multifractal methods were extended to two-dimensional (2D) images and compared in the detection of microcalcifications in mammograms. One of these methods was also adapted for the classification of breast masses, in 2D cross-sections obtained by breast magnetic resonance imaging (MRI), into groups of probably benign masses and masses suspicious of malignancy. A new multifractal analysis method using three-dimensional (3D) lacunarity was proposed for the classification of breast masses in volumetric 3D breast MRI images. Multifractal analysis revealed differences in the complexity underlying the locations of microcalcifications relative to normal tissues, allowing good accuracy in their detection in mammograms. Additionally, tissue features extracted by multifractal analysis made it possible to identify the cases typically recommended for biopsy in 2D breast MRI images. 3D multifractal analysis was effective in classifying benign and malignant breast lesions in 3D breast MRI images. This method was more accurate for this classification than the 2D method or the standard method of analysing tumour contrast kinetics. In conclusion, multifractal analysis provides useful information for computer-aided detection in mammography and computer-aided diagnosis in 2D and 3D breast MRI images, and has the potential to complement radiologists' interpretation.
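    Lacunarity, the multifractal descriptor that this Thesis extends to 3D, can be illustrated in 2D with the gliding-box method: slide an r x r box over a binary texture, collect the box masses, and take the second moment over the squared first moment. The textures below are hypothetical tissue masks, not thesis data.

```python
def lacunarity(img, r):
    """Gliding-box lacunarity at box size r: Lambda = <M^2> / <M>^2 over the
    distribution of box masses M. Textures with large gaps score high;
    perfectly homogeneous textures score exactly 1."""
    h, w = len(img), len(img[0])
    masses = []
    for i in range(h - r + 1):
        for j in range(w - r + 1):
            masses.append(sum(img[i+di][j+dj]
                              for di in range(r) for dj in range(r)))
    mean = sum(masses) / len(masses)
    second = sum(m * m for m in masses) / len(masses)
    return second / (mean * mean)

# Homogeneous vs gappy 4x4 binary textures with the same total mass ratio
uniform = [[1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1]]
gappy   = [[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
print(lacunarity(uniform, 2), lacunarity(gappy, 2))  # 1.0 vs ~2.78
```

    The 3D version proposed in the Thesis is the same statistic computed over gliding cubes in a volume, which is what makes it applicable to volumetric breast MRI.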

    Novel Statistical Methodologies in Analysis of Positron Emission Tomography Data: Applications in Segmentation, Normalization, and Trajectory Modeling

    Positron emission tomography (PET) is a powerful functional imaging modality with wide uses in fields such as oncology, cardiology, and neurology. Motivated by imaging datasets from a psoriasis clinical trial and a cohort of Alzheimer's disease (AD) patients, several interesting methodological challenges were identified in various steps of the quantitative analysis of PET data. In Chapter 1, we consider a classification scenario of bivariate thresholding of a predictor using upper and lower cutpoints, motivated by an image segmentation problem of the skin. We introduce a generalization of ROC analysis and the concept of the parameter path in ROC space of a classifier. Using this framework, we define the optimal ROC (OROC) to identify and assess the performance of optimal classifiers, and describe a novel nonparametric estimation of OROC which simultaneously estimates the parameter path of the optimal classifier. In simulations, we compare its performance to alternative methods of OROC estimation. In Chapter 2, we develop a novel method to normalize PET images as an essential preprocessing step for quantitative analysis. We propose a method based on the application of functional data analysis to image intensity distribution functions, assuming that individual image density functions are variations from a template density. By modeling the warping functions using a modified function-on-scalar regression, the variations in density functions due to nuisance parameters are estimated and subsequently removed for normalization. Application to our motivating data indicates persistence of residual variations in standardized image densities. In Chapter 3, we propose a nonlinear mixed effects framework to model amyloid-beta (Aβ), an important biomarker in AD. We incorporate the hypothesized functional form of the Aβ trajectory by assuming a common trajectory model for all subjects with variations in the location parameter, and a mixture distribution for the random effects of the location parameter addresses our empirical findings that some subjects may not accumulate Aβ. Using a Bayesian hierarchical model, group differences are specified in the trajectory parameters. We show in simulation studies that the model closely estimates the true parameters under various scenarios, and accurately estimates group differences in the age of onset.
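    The bivariate thresholding of Chapter 1 can be sketched by sweeping pairs of lower and upper cutpoints ("classify as positive when the predictor falls inside the band") and tracing the resulting parameter path in (FPR, TPR) space. The data and the Youden-style selection below are illustrative assumptions, not the OROC estimator itself.

```python
from itertools import combinations

def interval_rates(pos, neg, lo, hi):
    """Operating point of the bivariate-threshold rule
    'positive when lo <= x <= hi': returns (FPR, TPR)."""
    tpr = sum(lo <= x <= hi for x in pos) / len(pos)
    fpr = sum(lo <= x <= hi for x in neg) / len(neg)
    return fpr, tpr

# Hypothetical predictor values: positives concentrated in a middle band
pos = [4, 5, 5, 6, 7]
neg = [1, 2, 3, 8, 9, 10]
cuts = sorted(set(pos + neg))
path = [interval_rates(pos, neg, lo, hi) for lo, hi in combinations(cuts, 2)]
best = max(path, key=lambda p: p[1] - p[0])  # Youden-style optimum
print(best)  # (0.0, 1.0): the band [4, 7] captures all positives, no negatives
```

    A single threshold could never separate these classes, since the negatives flank the positives on both sides; the two-cutpoint parameter path is what makes the ROC generalization necessary.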

    Facilitating Breast Conserving Surgery Using Preoperative MRI

    Breast cancer is currently considered the most widespread malignancy in women, costing the lives of approximately 400,000 people annually worldwide. While extremely useful for early detection and diagnosis of breast disease, the application of MRI to pre-operative planning of breast-conserving surgeries is complicated by the differences in the patient's posture at the time of imaging and surgery, respectively. Specifically, while MRI is standardly performed with patients positioned face down and their breast unrestricted and pendulous, breast surgeries normally require the patients to lie on their back, in which case the breast undergoes substantial deformations due to the effect of gravity. As a result of these deformations, pre-surgical MRI images frequently do not correspond with the actual anatomy of the breast at the time of surgery, which limits their applicability to pre-surgical planning. Accordingly, to overcome this problem and make the MRI images align with the actual intra-surgical anatomy of the breast, the images need to be properly warped, a procedure known as prone-to-supine image registration. In many cases, this registration is carried out in two steps, prediction and correction. While the former involves bio-mechanical modeling to describe the principal effect of tissue deformation, the latter refines the preceding results based on the image content. More importantly, the accuracy of the correction step (and, hence, of the registration process as a whole) is strongly dependent on the accuracy of the bio-mechanical modeling, which therefore needs to be maximized as much as possible. Consequently, the fundamental objective of this research project has been the development of algorithmic solutions for reliable and accurate prediction. In particular, we propose an automatic detection of the location and geometry of the breast, and a breast image segmentation method to differentiate between adipose and dense tissue that is tractable, stable, and independent of initialization.
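    The adipose/dense split described above is only summarized here, so as a hedged stand-in (not the thesis's actual method), a deterministic two-class intensity clustering illustrates what an initialization-independent separation of two tissue populations can look like:

```python
def two_means(values, iters=20):
    """Lloyd's algorithm with k=2 on scalar intensities, seeded
    deterministically at the extreme values so that no random
    initialization is involved. Returns the two class centres,
    here read as a darker (adipose-like) and a brighter (dense-like)
    class -- an illustrative stand-in, not the thesis's segmentation."""
    c0, c1 = min(values), max(values)
    for _ in range(iters):
        g0 = [v for v in values if abs(v - c0) <= abs(v - c1)]
        g1 = [v for v in values if abs(v - c0) > abs(v - c1)]
        c0, c1 = sum(g0) / len(g0), sum(g1) / len(g1)
    return c0, c1

# Hypothetical voxel intensities drawn from two tissue populations
voxels = [10, 12, 11, 13, 55, 60, 58, 57, 12, 59]
c_adipose, c_dense = two_means(voxels)
print(round(c_adipose, 1), round(c_dense, 1))  # 11.6 57.8
```

    The deterministic extreme-point seeding is what gives independence from initialization, one of the three properties (tractable, stable, initialization-independent) the abstract emphasizes.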

    Model-Based Approach for Diffuse Glioma Classification, Grading, and Patient Survival Prediction

    The work in this dissertation proposes model-based approaches for the classification of molecular mutations in gliomas, grading based on radiomics features and genomics, and prediction of the clinical outcome of diffuse gliomas in terms of overall patient survival. Diffuse gliomas are types of Central Nervous System (CNS) brain tumors that account for 25.5% of primary brain and CNS tumors and originate from the supportive glial cells. The 2016 World Health Organization (WHO) criteria for CNS brain tumors present a major reclassification of the diffuse gliomas based on molecular mutations and growth behavior. Currently, the status of molecular mutations is determined by obtaining viable regions of tumor tissue samples. However, an increasing need to non-invasively analyze the clinical outcome of tumors requires careful modeling and co-analysis of radiomics (i.e., imaging features) and genomics (molecular and proteomics features). The variability of diffuse lower-grade gliomas (LGG), demonstrated by their heterogeneity, is reflected in radiographic imaging features (i.e., radiomics). Therefore, radiomics may be suggested as a crucial non-invasive marker in tumor diagnosis and prognosis. Consequently, we examine radiomics extracted from multi-resolution fractal representations of the tumor in classifying the molecular mutations of diffuse LGG non-invasively. The proposed radiomics in the decision-tree-based ensemble machine learning molecular prediction model confirm the efficacy of these fractal features in glioma prediction. Furthermore, this dissertation proposes a novel non-invasive statistical model to classify and predict LGG molecular mutations based on radiomics and count-based genomics data. The performance results of the proposed statistical model indicate that fusing radiomics with count-based genomics improves the performance of mutation prediction.
Furthermore, a radiomics-based glioblastoma survival prediction framework is proposed in this work. The survival prediction framework includes two survival prediction pipelines that combine different feature selection and regression approaches. The framework is evaluated using two recent, widely used benchmark datasets from the Brain Tumor Segmentation (BraTS) challenges of 2017 and 2018. The first survival prediction pipeline offered the best overall performance in the 2017 Challenge, and the second survival prediction pipeline offered the best performance on the validation dataset. In summary, in this work we develop non-invasive computational and statistical models based on radiomics and genomics to investigate overall survival, tumor progression, and molecular classification in diffuse gliomas. The methods discussed in our study are important steps towards a non-invasive approach to diffuse brain tumor classification, grading, and patient survival prediction that may be recommended prior to invasive tissue sampling in a clinical setting.
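    One representative member of the multi-resolution fractal radiomics family referenced above is the box-counting fractal dimension; the sketch below estimates it on a synthetic binary mask (illustrative only, not the dissertation's feature pipeline).

```python
from math import log

def box_count_dimension(img, sizes=(1, 2, 4)):
    """Box-counting fractal-dimension estimate: count occupied s x s boxes
    N(s) at several scales and fit the slope of log N(s) versus log(1/s)
    by least squares."""
    n = len(img)
    counts = []
    for s in sizes:
        occupied = 0
        for i in range(0, n, s):
            for j in range(0, n, s):
                if any(img[i+di][j+dj] for di in range(s) for dj in range(s)):
                    occupied += 1
        counts.append(occupied)
    xs = [log(1.0 / s) for s in sizes]
    ys = [log(c) for c in counts]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# A completely filled 4x4 region has dimension 2
filled = [[1] * 4 for _ in range(4)]
print(round(box_count_dimension(filled), 2))  # 2.0
```

    Irregular tumor boundaries yield non-integer dimensions between 1 and 2 per slice, which is what lets such features quantify the heterogeneity the dissertation exploits.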

    Information Fusion of Magnetic Resonance Images and Mammographic Scans for Improved Diagnostic Management of Breast Cancer

    Medical imaging is critical to the non-invasive diagnosis and treatment of a wide spectrum of medical conditions. However, different modalities of medical imaging employ different contrast mechanisms and, consequently, provide different depictions of bodily anatomy. As a result, there is a frequent problem where the same pathology can be detected by one type of medical imaging while being missed by others. This problem brings forward the importance of developing image processing tools for integrating the information provided by different imaging modalities via the process of information fusion. One particularly important example of a clinical application of such tools is in the diagnostic management of breast cancer, which is a prevailing cause of cancer-related mortality in women. Currently, the diagnosis of breast cancer relies mainly on X-ray mammography and Magnetic Resonance Imaging (MRI), which are both important throughout different stages of detection, localization, and treatment of the disease. The sensitivity of mammography, however, is known to be limited in the case of relatively dense breasts, while contrast-enhanced MRI tends to yield frequent 'false alarms' due to its high sensitivity. Given this situation, it is critical to find reliable ways of fusing the mammography and MRI scans in order to improve the sensitivity of the former while boosting the specificity of the latter. Unfortunately, fusing the above types of medical images is known to be a difficult computational problem. Indeed, while MRI scans are usually volumetric (i.e., 3-D), digital mammograms are always planar (2-D). Moreover, mammograms are invariably acquired under the force of compression paddles, thus making the breast anatomy undergo sizeable deformations. In the case of MRI, on the other hand, the breast is rarely constrained and is imaged in a pendulous state. Finally, X-ray mammography and MRI exploit two completely different physical mechanisms, which produce distinct diagnostic contrasts related in a non-trivial way. Under such conditions, the success of information fusion depends on one's ability to establish spatial correspondences between mammograms and their related MRI volumes in a cross-modal cross-dimensional (CMCD) setting in the presence of spatial deformations (+SD). Solving the problem of information fusion in the CMCD+SD setting is a very challenging analytical/computational problem, still in need of efficient solutions. In the literature, there is a lack of a generic and consistent solution to the problem of fusing mammograms and breast MRIs and using their complementary information. Most of the existing MRI-to-mammogram registration techniques are based on a biomechanical approach which builds a specific model for each patient to simulate the effect of mammographic compression. The biomechanical model is not optimal, as it ignores the common characteristics of breast deformation across different cases. Breast deformation is essentially the planarization of a 3-D volume between two paddles, which is common to all patients. Regardless of the size, shape, or internal configuration of the breast tissue, one can predict the major part of the deformation only by considering the geometry of the breast tissue. In contrast with complex standard methods relying on patient-specific biomechanical modeling, we developed a new and relatively simple approach to estimate the deformation and find the correspondences. We consider the total deformation to consist of two components: a large-magnitude global deformation due to mammographic compression and a residual deformation of relatively smaller amplitude. We propose a much simpler way of predicting the global deformation, which compares favorably to FEM in terms of accuracy. The residual deformation, on the other hand, is recovered in a variational framework using an elastic transformation model. The proposed algorithm provides a computational pipeline that takes breast MRIs and mammograms as inputs and returns the spatial transformation establishing the correspondences between them. This spatial transformation can be applied in different applications, e.g., producing 'MRI-enhanced' mammograms (capable of improving the quality of surgical care) and correlating between different types of mammograms. We investigate the performance of the proposed pipeline on the application of enhancing mammograms by means of MRIs, and we show improvements over the state of the art.
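    The geometric intuition behind predicting the global compression component can be illustrated with a toy volume-preserving flattening: compress the breast along the paddle axis and dilate it in-plane so that the incompressible soft tissue keeps its volume. This is a deliberate simplification for illustration, not the thesis's actual predictor.

```python
from math import sqrt

def compress(points, thickness_ratio):
    """Toy geometric stand-in for the global deformation: flatten along the
    paddle axis (z) by `thickness_ratio` and dilate in-plane by
    1/sqrt(thickness_ratio), so the tissue volume is conserved."""
    s = 1.0 / sqrt(thickness_ratio)
    return [(x * s, y * s, z * thickness_ratio) for x, y, z in points]

# Hypothetical unit "tissue" cube squeezed to half its thickness
cube = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
flat = compress(cube, 0.5)
dx = max(p[0] for p in flat) - min(p[0] for p in flat)
dz = max(p[2] for p in flat) - min(p[2] for p in flat)
print(round(dx * dx * dz, 6))  # 1.0 -- bounding-box volume is preserved
```

    A purely geometric prediction of this kind captures the dominant planarization shared by all patients; the residual, patient-specific part is then left to the elastic correction step described above.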