457 research outputs found

    Deep Learning for Detection and Segmentation in High-Content Microscopy Images

    Get PDF
    High-content microscopy has led to many advances in biology and medicine. This fast-emerging technology is transforming cell biology into a big-data-driven science. Computer vision methods are used to automate the analysis of microscopy image data. In recent years, deep learning has become popular and has had major success in computer vision. However, most of the available methods were developed to process natural images. Compared to natural images, microscopy images pose domain-specific challenges such as small training datasets, clustered objects, and class imbalance. In this thesis, new deep learning methods for object detection and cell segmentation in microscopy images are introduced. For particle detection in fluorescence microscopy images, a deep learning method based on a domain-adapted Deconvolution Network is presented. In addition, a method for mitotic cell detection in heterogeneous histopathology images is proposed, which combines a deep residual network with Hough voting. The method is used for grading of whole-slide histology images of breast carcinoma. Moreover, a method for both particle detection and cell detection based on object centroids is introduced, which is trainable end-to-end. It comprises a novel Centroid Proposal Network, a layer for ensembling detection hypotheses over image scales and anchors, an anchor regularization scheme which favours prior anchors over regressed locations, and an improved algorithm for Non-Maximum Suppression. Furthermore, a novel loss function based on Normalized Mutual Information is proposed, which can cope with strong class imbalance and is derived within a Bayesian framework. For cell segmentation, a deep neural network with an increased receptive field to capture rich semantic information is introduced. Moreover, a deep neural network is proposed which combines both paradigms of multi-scale feature aggregation of Convolutional Neural Networks and iterative refinement of Recurrent Neural Networks. To increase the robustness of the training and improve segmentation, a novel focal loss function is presented. In addition, a framework for black-box hyperparameter optimization of biomedical image analysis pipelines is proposed. The framework has a modular architecture that separates hyperparameter sampling from hyperparameter optimization. A visualization of the loss function based on infimum projections is suggested to obtain further insights into the optimization problem. Also, a transfer learning approach is presented which uses only one color channel for pre-training and performs fine-tuning on more color channels. Furthermore, an approach for unsupervised domain adaptation for histopathological slides is presented. Finally, Galaxy Image Analysis, a platform for web-based microscopy image analysis, is presented. Galaxy Image Analysis workflows have been developed for cell segmentation in cell cultures, particle detection in mouse brain tissue, and MALDI/H&E image registration. The proposed methods were applied to challenging synthetic as well as real microscopy image data from various microscopy modalities and yield state-of-the-art or improved results. The methods were benchmarked in international image analysis challenges and used in various cooperation projects with biomedical researchers.
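    The abstract above lists an improved algorithm for Non-Maximum Suppression (NMS) among the detection contributions. For orientation only, the sketch below shows the standard greedy NMS baseline that such detection pipelines build on; it is not the improved algorithm from the thesis, and the box format and IoU threshold are illustrative assumptions.

        import numpy as np

        def greedy_nms(boxes, scores, iou_threshold=0.5):
            # Standard greedy NMS over boxes given as [x1, y1, x2, y2]; not the thesis's variant.
            order = np.argsort(scores)[::-1]          # visit detections from highest to lowest score
            keep = []
            while order.size > 0:
                i = order[0]
                keep.append(int(i))
                # Overlap of the selected box with all remaining candidates
                xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
                yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
                xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
                yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
                inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
                area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
                areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * (boxes[order[1:], 3] - boxes[order[1:], 1])
                iou = inter / (area_i + areas - inter)
                # Drop candidates that overlap the kept box more than the threshold
                order = order[1:][iou < iou_threshold]
            return keep

        boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], dtype=float)
        scores = np.array([0.9, 0.8, 0.7])
        print(greedy_nms(boxes, scores))   # -> [0, 2]; the near-duplicate second box is suppressed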

    Computational Analysis of Tumour Microenvironment in mIHC Stained Diffuse Glioma Samples

    Get PDF
    Healthcare is a sector that has been notoriously stagnant in digital innovation; nevertheless, its transformation is imminent. Digital pathology is a field gaining prominence in light of recent technological development. With the capacity to conduct high-resolution tissue imaging and manage the output digitally, advanced image analysis and machine learning can subsequently be applied. These methods provide the means, for instance, to automate segmentation of regions of interest, diagnosis, and knowledge discovery. Brain malignancies are particularly dire, with a high fatality rate and relatively high occurrence in children. Diffuse gliomas are a subtype of brain tumours whose biological behaviour ranges from very indolent to extremely aggressive, which is reflected in grades I-IV. The brain tumour microenvironment (TME), the local area surrounding cancerous cells with a plethora of immune cells and other structures in interaction, has emerged as a critical regulator of brain tumour progression. Researchers are interested in immunotherapeutic treatment of brain cancer, since modern approaches are insufficient for treating especially the most aggressive tumours. Additionally, the TME remains difficult to understand. Multiplex Immunohistochemistry (mIHC) is a novel approach for effectively mapping the spatial distribution of cell types in tissue samples using multiple antibodies. In this thesis, we investigate the TME in diffuse glioma mIHC samples for three patient cases with 2-3 differing tumour grades per patient. From the 18 available channels we selected 6 antigens (markers) of interest for further analysis. In particular, we are interested in how the relative proportion of antigen-positive cells and the mean distance to the nearest blood vessel vary for our selected markers with tumour progression. To acquire the desired properties, we register the corresponding images, detect nuclei, segment cells, and extract structured data from region channel intensities along with their locations and distances to the nearest blood vessel. Our primary finding is that M2 macrophage and T cell occurrence proportions, as well as their mean distances to the nearest blood vessel, grow with increasing tumour grade. The results could suggest that the aforementioned cell types are present in low quantities in the near vicinity of blood vessels in low-grade tumours, and conversely in higher quantities with a more homogeneous distribution in aggressive tumours. Despite the several potential error sources and non-standardized processes in the pipeline between tissue extraction and image analysis, our results support pre-existing knowledge that the M2 macrophage proportion correlates positively with tumour grade.
    The digital development of healthcare has been slow-moving compared to other sectors. Despite this, the digital transformation of healthcare is imminent and the related research is ongoing. Digital pathology is a field that has gained prominence through recent technological development. High-resolution tissue imaging and digital management of samples have enabled the application of advanced image analysis and machine learning. These methods provide the means, for example, for segmenting biologically significant regions, for diagnosis, and for producing new scientific knowledge. Brain tumours are devastating, as their case fatality rate and incidence among the young are relatively high. Diffuse gliomas are a subtype of brain tumours in which the tumours are classified by their aggressiveness into grades I-IV. The tumour microenvironment (TME), that is, the local environment of the cancer cells containing, among other things, an abundance of interacting immune cells, has proven to be a significant factor in tumour progression. Brain cancer research focuses on immunotherapeutic solutions, since current treatments are not effective enough, especially for the most aggressive tumours. In addition, the microenvironment can be difficult to understand. Multiplex immunohistochemical staining (mIHC) is a novel approach for efficiently mapping the spatial distribution of cell types in tissue samples using multiple antibodies. In this thesis, diffuse glioma mIHC samples are studied for three patient cases. For each patient there are 2-3 samples of different tumour grades and a total of 18 mIHC channels per sample, of which 6 were selected for examination. Specifically, the proportions of cell-type activations based on positive antigens and the mean distance to the nearest blood vessel for each group are calculated across the different tumour grades. To achieve this, the images corresponding to the samples are registered, nuclei are detected, cell regions are segmented, and structured data are collected from the intensity channels of the regions, including their locations and the corresponding distances to the nearest blood vessel. The main finding is that the relative proportions of M2 macrophages and T cells, as well as their mean distance to the nearest blood vessel, increase as the tumour becomes more aggressive. The results may suggest that the aforementioned cell types are scarce and located near blood vessels when the tumour is low grade, and conversely present in larger proportions and more homogeneously distributed when the tumour is more aggressive. Despite the several sources of error and the non-standardized processes involved in tissue analysis, our results support prior knowledge that the proportion of M2 macrophages correlates positively with tumour grade.
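    The pipeline above registers the images, detects nuclei, segments cells, and measures each cell's distance to the nearest blood vessel. A minimal sketch of one common way to obtain such distances, via a Euclidean distance transform of a vessel mask, is given below; the function name, the pixel size, and the mask/centroid formats are illustrative assumptions rather than the authors' implementation.

        import numpy as np
        from scipy.ndimage import distance_transform_edt

        def distance_to_nearest_vessel(vessel_mask, cell_centroids, pixel_size_um=1.0):
            # vessel_mask: boolean image, True where a pixel belongs to a blood vessel
            # cell_centroids: (N, 2) array of (row, col) cell positions from the segmentation step
            # The EDT of the inverted mask gives, per pixel, the distance to the closest vessel pixel.
            dist_map = distance_transform_edt(~vessel_mask) * pixel_size_um
            rows = cell_centroids[:, 0].astype(int)
            cols = cell_centroids[:, 1].astype(int)
            return dist_map[rows, cols]

        # Toy example: one vessel pixel and two cells
        mask = np.zeros((100, 100), dtype=bool)
        mask[50, 50] = True
        cells = np.array([[50, 60], [10, 10]])
        print(distance_to_nearest_vessel(mask, cells))   # -> approx. [10.0, 56.6]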

    Deep Learning for Classification of Brain Tumor Histopathological Images

    Get PDF
    Histopathological image classification has been at the forefront of medical research. We evaluated several deep and non-deep learning models for brain tumor histopathological image classification. The main challenges were an insufficient amount of training data and nearly identical glioma features. We employed transfer learning to tackle these challenges. We also employed several state-of-the-art non-deep learning classifiers on histogram of oriented gradients (HOG) features extracted from our images, as well as on features extracted using CNN activations. Data augmentation was utilized in our study. We obtained 82% accuracy with DenseNet-201, the best of the deep learning models, and 83.8% accuracy with an ANN, the best of the non-deep learning classifiers. The accuracy of each model was calculated as the average of the diagonal of its confusion matrix. The performance criteria in this study are each model's precision in classifying each class and its average classification accuracy. Our results emphasize the significance of deep learning as an invaluable tool for histopathological image studies.
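    As a minimal illustration of the transfer-learning recipe described above (an ImageNet pre-trained DenseNet-201 adapted to brain tumour histopathology), the PyTorch/torchvision sketch below loads the pre-trained backbone and replaces its classification head; the number of classes and the layer-freezing strategy are assumptions, not the authors' exact setup.

        import torch.nn as nn
        from torchvision import models

        NUM_CLASSES = 3  # placeholder; the actual number of tumour classes is not stated here

        # DenseNet-201 with ImageNet weights as the starting point for transfer learning
        model = models.densenet201(weights="IMAGENET1K_V1")

        # Freeze the convolutional feature extractor so that only the new head is trained initially
        for param in model.features.parameters():
            param.requires_grad = False

        # Replace the 1000-way ImageNet classifier with a head for the histopathology classes
        model.classifier = nn.Linear(model.classifier.in_features, NUM_CLASSES)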

    Micro-Net: A unified model for segmentation of various objects in microscopy images

    Get PDF
    Object segmentation and structure localization are important steps in automated image analysis pipelines for microscopy images. We present a convolutional neural network (CNN) based deep learning architecture for segmentation of objects in microscopy images. The proposed network can be used to segment cells, nuclei, and glands in fluorescence microscopy and histology images after slight tuning of input parameters. The network trains at multiple resolutions of the input image, connects the intermediate layers for better localization and context, and generates the output using multi-resolution deconvolution filters. The extra convolutional layers, which bypass the max-pooling operation, allow the network to train for variable input intensities and object sizes and make it robust to noisy data. We compare our results on publicly available data sets and show that the proposed network outperforms recent deep learning algorithms.
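    Not a reimplementation of Micro-Net, the short sketch below only illustrates the multi-resolution input idea described above, i.e. feeding downsampled copies of the same image into the network; the scales and tensor sizes are illustrative assumptions.

        import torch
        import torch.nn.functional as F

        def multi_resolution_inputs(image, scales=(1.0, 0.5, 0.25)):
            # image: (batch, channels, height, width) tensor; returns one tensor per scale
            return [image if s == 1.0
                    else F.interpolate(image, scale_factor=s, mode="bilinear", align_corners=False)
                    for s in scales]

        x = torch.rand(1, 3, 256, 256)                 # toy fluorescence/histology tile
        for level in multi_resolution_inputs(x):
            print(level.shape)                         # 256-, 128- and 64-pixel versions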

    Quantitative analysis with machine learning models for multi-parametric brain imaging data

    Get PDF
    Gliomas are considered the most common primary malignant brain tumor in adults. With the dramatic increases in computational power and improvements in image analysis algorithms, computer-aided medical image analysis has been introduced into clinical applications. Precise tumor grading and genotyping play an indispensable role in clinical diagnosis, treatment, and prognosis. Glioma diagnostic procedures include histopathological imaging tests, molecular imaging scans, and tumor grading. Pathologic review of tumor morphology in histologic sections is the traditional method for cancer classification and grading, yet human review has limitations that can result in low reproducibility and low inter-observer agreement. Compared with histopathological images, magnetic resonance (MR) imaging presents different structural and functional features, which might serve as noninvasive surrogates for tumor genotypes. Therefore, computer-aided image analysis has been adopted in clinical applications; it might partially overcome these shortcomings owing to its capacity to quantitatively and reproducibly measure multilevel features from multi-parametric medical information. Imaging features obtained from a single modality do not fully represent the disease, so quantitative imaging features, including morphological, structural, cellular, and molecular-level features derived from multi-modality medical images, should be integrated into computer-aided medical image analysis. The difference in image quality between multi-modality images is a challenge in the field of computer-aided medical image analysis. In this thesis, we aim to integrate the quantitative imaging data obtained from multiple modalities into mathematical models of tumor response prediction to achieve additional insights into their practical predictive value. Our major contributions in this thesis are: 1. First, to address the image quality differences and observer dependence in histological image diagnosis, we propose an automated machine learning brain tumor grading platform that investigates the contributions of multiple parameters from multimodal data, including imaging parameters or features from Whole Slide Images (WSI) and the proliferation marker Ki-67. For each WSI, we extract both visual parameters, such as morphology parameters, and sub-visual parameters, including first-order and second-order features. A quantitative interpretable machine learning approach (Local Interpretable Model-Agnostic Explanations) is followed to measure the contribution of features for each single case. Most grading systems based on machine learning models are considered "black boxes", whereas with this system the clinically trusted reasoning can be revealed. The quantitative analysis and explanation may assist clinicians to better understand the disease and accordingly choose optimal treatments to improve clinical outcomes. 2. Building on the automated brain tumor grading platform, we introduce multimodal Magnetic Resonance Images (MRIs) into our research. A new imaging-tissue-correlation-based approach called RA-PA-Thomics is proposed to predict the IDH genotype. Inspired by the concept of image fusion, we integrate multimodal MRIs and scans of histopathological images for indirect, fast, and cost-saving IDH genotyping. The proposed model has been verified by multiple evaluation criteria on the integrated data set and compared to results in the prior art.
The experimental data set includes public data sets and image information from two hospitals. Experimental results indicate that the proposed model improves the accuracy of glioma grading and genotyping.
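    The grading platform described above uses Local Interpretable Model-Agnostic Explanations (LIME) to measure per-case feature contributions. The sketch below shows a typical way LIME is applied to tabular WSI-derived features; the classifier, feature names, and labels are placeholders, not the platform's actual configuration.

        import numpy as np
        from lime.lime_tabular import LimeTabularExplainer
        from sklearn.ensemble import RandomForestClassifier

        # Placeholder features standing in for WSI-derived morphology/texture parameters and Ki-67
        X_train = np.random.rand(200, 5)
        y_train = np.random.randint(0, 2, size=200)          # placeholder grade labels
        feature_names = ["area", "eccentricity", "mean_intensity", "glcm_contrast", "ki67_index"]

        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

        explainer = LimeTabularExplainer(X_train, feature_names=feature_names,
                                         class_names=["low grade", "high grade"],
                                         mode="classification")
        # Per-case explanation: which features pushed this single case towards its predicted grade
        explanation = explainer.explain_instance(X_train[0], clf.predict_proba, num_features=5)
        print(explanation.as_list())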

    U-Net and its variants for medical image segmentation: theory and applications

    Full text link
    U-net is an image segmentation technique developed primarily for medical image analysis that can precisely segment images using a scarce amount of training data. These traits provide U-net with a very high utility within the medical imaging community and have resulted in the extensive adoption of U-net as the primary tool for segmentation tasks in medical imaging. The success of U-net is evident in its widespread use across all major image modalities, from CT scans and MRI to X-rays and microscopy. Furthermore, while U-net is largely a segmentation tool, there have been instances of its use in other applications. As the potential of U-net is still increasing, in this review we look at the various developments that have been made in the U-net architecture and provide observations on recent trends. We examine the various innovations that have been made in deep learning and discuss how these tools facilitate U-net. Furthermore, we look at the image modalities and application areas where U-net has been applied. Comment: 42 pages, in IEEE Access.
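    As a compact illustration of the pattern the review surveys, the sketch below implements a single-level encoder-decoder with one skip connection, i.e. the core U-net idea; the actual U-net uses four resolution levels and far more channels, so this is a toy model under stated assumptions, not a faithful reimplementation.

        import torch
        import torch.nn as nn

        def conv_block(in_ch, out_ch):
            # Two 3x3 convolutions with ReLU: the basic U-net building block
            return nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            )

        class TinyUNet(nn.Module):
            # One encoder level, a bottleneck, and one decoder level with a skip connection
            def __init__(self, in_ch=1, num_classes=2):
                super().__init__()
                self.enc = conv_block(in_ch, 32)
                self.pool = nn.MaxPool2d(2)
                self.bottleneck = conv_block(32, 64)
                self.up = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
                self.dec = conv_block(64, 32)          # 64 channels = 32 upsampled + 32 skip
                self.head = nn.Conv2d(32, num_classes, kernel_size=1)

            def forward(self, x):
                e = self.enc(x)                                   # encoder features (skip source)
                b = self.bottleneck(self.pool(e))                 # coarser, more semantic features
                d = self.dec(torch.cat([self.up(b), e], dim=1))   # fuse upsampled path with skip
                return self.head(d)                               # per-pixel class scores

        print(TinyUNet()(torch.rand(1, 1, 64, 64)).shape)   # -> torch.Size([1, 2, 64, 64])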

    Deep Learning for Semantic Segmentation versus Classification in Computational Pathology: Application to mitosis analysis in Breast Cancer grading

    Get PDF
    Existing computational pathology approaches have not yet allowed the emergence of effective and efficient computer-aided tools to be used as a second opinion for pathologists in daily practice. Focusing on the case of computer-based qualification for breast cancer diagnosis, the present article proposes two deep learning architectures to efficiently and effectively detect and classify mitoses in a histopathological tissue sample. The first method consists of two parts: preprocessing of the digital histological image and a Convolutional Neural Network (CNN) free of handcrafted features, used for binary classification. Results show that the proposed methodology can achieve 95% accuracy in testing with an F1-score of 94.35%, which is higher than results from the literature using classical image processing techniques and also higher than approaches using handcrafted features combined with CNNs. The second approach is an end-to-end methodology using semantic segmentation. Results showed that this algorithm can achieve an accuracy higher than 95% in testing and an average Dice index of 0.6, which is higher than results from the literature using CNNs (0.9 F1-score). Additionally, due to the semantic properties of the deep learning approach, an end-to-end deep learning framework is viable for performing both tasks: detection and classification of mitoses. The results show the potential of deep learning in the analysis of Whole Slide Images (WSI) and its integration into computer-aided systems. The extension of this work to whole slide images is also addressed in the last two chapters, as well as some key computational points that are useful when constructing a computer-aided system inspired by the described technology.
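    The segmentation results above are reported with the Dice index. For reference, a minimal computation of the Dice index between two binary masks is sketched below on toy data (not the article's data).

        import numpy as np

        def dice_index(pred_mask, gt_mask, eps=1e-7):
            # Dice index between a predicted and a ground-truth binary mask
            pred = pred_mask.astype(bool)
            gt = gt_mask.astype(bool)
            intersection = np.logical_and(pred, gt).sum()
            return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)

        # Toy example: two masks whose foreground strips overlap on 2 of 3 pixels each
        p = np.zeros((4, 4)); p[0, 0:3] = 1
        g = np.zeros((4, 4)); g[0, 1:4] = 1
        print(round(float(dice_index(p, g)), 3))   # -> 0.667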