413 research outputs found

    Histopathological image analysis: a review

    Over the past decade, dramatic increases in computational power and improvements in image analysis algorithms have allowed the development of powerful computer-assisted analytical approaches to radiological data. With the recent advent of whole-slide digital scanners, tissue histopathology slides can now be digitized and stored in digital image form. Consequently, digitized tissue histopathology has become amenable to the application of computerized image analysis and machine learning techniques. Analogous to the role of computer-assisted diagnosis (CAD) algorithms in medical imaging, which complement the opinion of a radiologist, CAD algorithms have begun to be developed for disease detection, diagnosis, and prognosis prediction to complement the opinion of the pathologist. In this paper, we review the recent state-of-the-art CAD technology for digitized histopathology. The paper also briefly describes the development and application of novel image analysis technology for a few specific histopathology-related problems being pursued in the United States and Europe.

    Subcellular protein expression models for microsatellite instability in colorectal adenocarcinoma tissue images

    Background: New bioimaging techniques capable of visualising the co-location of numerous proteins within individual cells have been proposed to study tumour heterogeneity of neighbouring cells within the same tissue specimen. These techniques have highlighted the need to better understand the interplay between proteins in terms of their colocalisation. Results: We recently proposed a cellular-level model of the healthy and cancerous colonic crypt microenvironments. Here, we extend the model to include detailed models of protein expression to generate synthetic multiplex fluorescence data. As a first step, we present models for various cell organelles learned from real immunofluorescence data from the Human Protein Atlas. Comparison between the distributions of various features obtained from the real and synthetic organelles shows very good agreement, for both features that were used as part of the model input and ones that were not explicitly considered. We then develop models for six proteins which are important colorectal cancer biomarkers associated with microsatellite instability, namely MLH1, PMS2, MSH2, MSH6, P53 and PTEN. The protein models capture their complex expression patterns and which cell phenotypes express them. The models have been validated by comparing distributions of real and synthesised parameters and by applying frameworks for analysing multiplex immunofluorescence image data. Conclusions: The six proteins have been chosen as a case study to illustrate how the model can be used to generate synthetic multiplex immunofluorescence data. Further proteins could be included within the model in a similar manner to enable the study of a larger set of proteins of interest and their interactions. To the best of our knowledge, this is the first model for the expression of multiple proteins in anatomically intact tissue, rather than in cells in culture.
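
    As a toy illustration of how synthetic per-cell protein expression can be rendered into a fluorescence channel, the sketch below samples an expression level for each cell centre from a log-normal distribution and blurs the result to mimic microscope optics. The function name, distribution, and parameters are illustrative assumptions and do not reproduce the published model.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(0)

    def render_protein_channel(cell_centres, shape=(256, 256),
                               mean_log_expr=2.0, sigma_log_expr=0.5, psf_sigma=2.0):
        # Toy synthetic channel: each cell centre receives an expression level
        # drawn from a log-normal distribution, then the image is blurred to
        # approximate the microscope's point spread function. Illustrative only.
        channel = np.zeros(shape, dtype=np.float32)
        for (r, c) in cell_centres:
            channel[r, c] = rng.lognormal(mean=mean_log_expr, sigma=sigma_log_expr)
        return gaussian_filter(channel, sigma=psf_sigma)

    # Hypothetical usage: 50 random cell centres for an "MLH1-like" channel.
    # centres = rng.integers(0, 256, size=(50, 2))
    # synthetic = render_protein_channel([tuple(c) for c in centres])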

    Automatic Segmentation of Cells of Different Types in Fluorescence Microscopy Images

    Recognition of different cell compartments, types of cells, and their interactions is a critical aspect of quantitative cell biology. It provides valuable insight into cellular and subcellular interactions and the mechanisms of biological processes such as cancer cell dissemination, organ development and wound healing. Quantitative analysis of cell images is also the mainstay of numerous clinical diagnostic and grading procedures, for example in cancer, immunological, infectious, heart and lung disease. Automating the quantification of cellular biological samples requires segmenting different cellular and sub-cellular structures in microscopy images. However, this has proven to be non-trivial: it requires solving multi-class image segmentation tasks that are challenging owing to the high similarity of objects from different classes and to irregularly shaped structures. This thesis focuses on the development and application of probabilistic graphical models to multi-class cell segmentation. Graphical models can improve segmentation accuracy through their ability to exploit prior knowledge and model inter-class dependencies. Directed acyclic graphs, such as trees, have been widely used to model top-down statistical dependencies as a prior for improved image segmentation. However, trees can capture only a limited set of inter-class constraints. To overcome this limitation, this thesis proposes polytree graphical models, which capture label proximity relations more naturally than tree-based approaches. Polytrees can effectively impose prior knowledge on the inclusion of different classes by capturing both same-level and across-level dependencies. A novel recursive mechanism based on two-pass message passing is developed to efficiently calculate closed-form posteriors of graph nodes on polytrees. Furthermore, since an accurate and sufficiently large ground truth is not always available for training segmentation algorithms, a weakly supervised framework is developed that employs polytrees for multi-class segmentation and reduces the need for training by modeling the prior knowledge during segmentation. A hierarchical graph is generated for the superpixels in the image, node labels are inferred through a novel efficient message-passing algorithm, and the model parameters are optimized with Expectation Maximization (EM). Results of evaluation on the segmentation of simulated data and multiple publicly available fluorescence microscopy datasets indicate that the proposed method outperforms the state of the art. The proposed method has also been assessed in predicting possible segmentation errors and has been shown to outperform trees. This can pave the way to calculating uncertainty measures on the resulting segmentation and guiding subsequent segmentation refinement, which can be useful in the development of an interactive segmentation framework.
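
    A minimal sketch of the two-pass (sum-product) message-passing idea that such models rely on, shown here for a tiny tree rather than a full polytree; the label names, potentials, and NumPy implementation are illustrative assumptions, not the algorithm developed in the thesis.

    import numpy as np

    # Toy label hierarchy: a root node ("cell") with two children
    # ("nucleus", "cytoplasm"). Each node takes one of 2 states.
    unary = {
        "cell":      np.array([0.6, 0.4]),
        "nucleus":   np.array([0.3, 0.7]),
        "cytoplasm": np.array([0.5, 0.5]),
    }
    # Pairwise potential: rows index the parent state, columns the child state.
    pairwise = np.array([[0.8, 0.2],
                         [0.2, 0.8]])
    children = {"cell": ["nucleus", "cytoplasm"]}

    # Upward pass: each leaf sends a message to its parent,
    # msg[parent_state] = sum_child pairwise[parent, child] * unary[child].
    up_msg = {c: pairwise @ unary[c] for c in children["cell"]}

    # Root belief combines its unary term with all incoming messages.
    root_belief = unary["cell"] * up_msg["nucleus"] * up_msg["cytoplasm"]
    root_belief /= root_belief.sum()

    # Downward pass: the root sends messages back so each child can
    # form its posterior marginal.
    posteriors = {"cell": root_belief}
    for child in children["cell"]:
        other = [c for c in children["cell"] if c != child]
        down = unary["cell"] * np.prod([up_msg[o] for o in other], axis=0)
        child_belief = unary[child] * (pairwise.T @ down)
        posteriors[child] = child_belief / child_belief.sum()

    print(posteriors)  # posterior marginal over the 2 states for each node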

    Opportunities and challenges for deep learning in cell dynamics research

    With the growth of artificial intelligence (AI), there has been an increase in the adoption of computer vision and deep learning (DL) techniques for the evaluation of microscopy images and movies. This adoption has not only addressed hurdles in the quantitative analysis of dynamic cell biological processes, but has also started supporting advances in drug development, precision medicine and genome-phenome mapping. Here we survey existing AI-based techniques and tools, and open-source datasets, with a specific focus on the computational tasks of segmentation, classification, and tracking of cellular and subcellular structures and dynamics. We summarise long-standing challenges in microscopy video analysis from a computational perspective and review emerging research frontiers and innovative applications of deep-learning-guided automation for cell dynamics research.

    Tubule Segmentation of Fluorescence Microscopy Images Based on Convolutional Neural Networks With Inhomogeneity Correction

    Fluorescence microscopy has become a widely used tool for studying various biological structures of in vivo tissue or cells. However, quantitative analysis of these biological structures remains a challenge due to their complexity, which is exacerbated by distortions caused by lens aberrations and light scattering. Moreover, manual quantification of such image volumes is an intractable and error-prone process, making automated image analysis methods crucial. This paper describes a segmentation method for tubular structures in fluorescence microscopy images using convolutional neural networks with data augmentation and inhomogeneity correction. The segmentation results of the proposed method are compared visually and numerically with other microscopy segmentation methods. Experimental results indicate that the proposed method performs better than other methods at correctly segmenting and identifying multiple tubular structures.
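
    The sketch below shows one common form of intensity inhomogeneity correction, flattening slowly varying illumination by dividing out a smoothed background estimate, as a generic pre-processing step before a segmentation CNN. The function name, sigma value, and the file path in the usage comment are assumptions for illustration; the paper's exact correction may differ.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def correct_inhomogeneity(image, sigma=50.0, eps=1e-6):
        # Estimate the slowly varying background with a large Gaussian blur
        # and divide it out, then rescale to [0, 1] so the CNN sees a
        # consistent intensity range. Illustrative pre-processing only.
        img = image.astype(np.float32)
        background = gaussian_filter(img, sigma=sigma)
        corrected = img / (background + eps)
        corrected -= corrected.min()
        corrected /= corrected.max() + eps
        return corrected

    # Hypothetical usage on a (z, y, x) fluorescence volume:
    # volume = np.load("tubule_stack.npy")
    # corrected = np.stack([correct_inhomogeneity(s) for s in volume])
    # The corrected slices would then be fed to the segmentation network.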

    Virtual labeling of mitochondria in living cells using correlative imaging and physics-guided deep learning

    Mitochondria play a crucial role in cellular metabolism. This paper presents a novel method to visualize mitochondria in living cells without the use of fluorescent markers. We propose a physics-guided deep learning approach for obtaining virtually labeled micrographs of mitochondria from bright-field images. We integrate a microscope’s point spread function into the learning of an adversarial neural network to improve virtual labeling. We show results (average Pearson correlation 0.86) significantly better than those achieved by the state of the art (0.71) for virtual labeling of mitochondria. We also provide new insights into the virtual labeling problem and suggest additional metrics for quality assessment. The results show that our virtual labeling approach is a powerful way of segmenting and tracking individual mitochondria in bright-field images, results previously achievable only for fluorescently labeled mitochondria.
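
    A short sketch of the average Pearson correlation metric quoted in the abstract, plus an illustrative Gaussian blur standing in for the point spread function that a physics-guided loss could apply to the network output. The function names, Gaussian PSF approximation, and the loss expression in the comment are assumptions, not the authors' implementation.

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from scipy.stats import pearsonr

    def average_pearson(predicted, target):
        # Mean per-image Pearson correlation between virtually labeled and
        # real fluorescence images (the quality metric quoted above).
        scores = [pearsonr(p.ravel(), t.ravel())[0] for p, t in zip(predicted, target)]
        return float(np.mean(scores))

    def psf_blur(image, sigma=2.0):
        # Illustrative Gaussian approximation of a microscope PSF; a real
        # physics-guided model would use the measured PSF of the instrument.
        return gaussian_filter(image, sigma=sigma)

    # Hypothetical usage: compare the PSF-blurred network output with the
    # real fluorescence channel when scoring a prediction.
    # score = average_pearson([psf_blur(network_output)], [real_fluorescence])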

    A Segmentation Method for fluorescence images without a machine learning approach

    Background: Image analysis applications in digital pathology include various methods for segmenting regions of interest. Their identification is one of the most complex steps, and therefore of great interest for the study of robust methods that do not necessarily rely on a machine learning (ML) approach. Method: A fully automatic and optimized segmentation process for different datasets is a prerequisite for classifying and diagnosing Indirect ImmunoFluorescence (IIF) raw data. This study describes a deterministic computational neuroscience approach for identifying cells and nuclei. It departs from the conventional neural network approach, yet matches its quantitative and qualitative performance and is also robust to adversarial noise. The method is robust, based on formally correct functions, and does not require tuning on specific datasets. Results: This work demonstrates the robustness of the method against variability of parameters such as image size, mode, and signal-to-noise ratio. We validated the method on two datasets (Neuroblastoma and NucleusSegData) using images annotated by independent medical doctors. Conclusions: The definition of deterministic and formally correct methods, from a functional to a structural point of view, guarantees the achievement of optimized and functionally correct results. The excellent performance of our deterministic method (NeuronalAlg) in segmenting cells and nuclei from fluorescence images was measured with quantitative indicators and compared with that achieved by three published ML approaches.
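
    To illustrate the general flavour of a deterministic, non-ML segmentation pipeline, the sketch below combines Otsu thresholding with a distance-transform watershed using scikit-image. This is a generic classical baseline under assumed parameters, not the paper's NeuronalAlg method.

    import numpy as np
    from scipy import ndimage as ndi
    from skimage.filters import gaussian, threshold_otsu
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    def segment_nuclei(image):
        # Deterministic nucleus segmentation: smooth, threshold with Otsu,
        # then split touching nuclei with a distance-transform watershed.
        smoothed = gaussian(image, sigma=2.0)
        mask = smoothed > threshold_otsu(smoothed)
        distance = ndi.distance_transform_edt(mask)
        peaks = peak_local_max(distance, min_distance=10, labels=mask)
        markers = np.zeros(mask.shape, dtype=int)
        markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
        return watershed(-distance, markers, mask=mask)

    # labels = segment_nuclei(fluorescence_image)  # one integer label per nucleus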

    Locality sensitive modelling approach for object detection, tracking and segmentation in biomedical images

    Biomedical imaging techniques play an important role in visualising, for example, biological structures, tissues, diseases and medical conditions at the cellular level. These techniques produce enormous image datasets for studying biological processes, clinical diagnosis and medical analysis. Thanks to recent advances in computer technology and hardware, automatic analysis of biomedical images has become more feasible and popular. Although computer scientists have made great efforts in developing advanced image processing algorithms, many problems regarding object analysis remain unsolved due to the diversity of biomedical imaging. In this thesis, we focus on developing object analysis solutions for two entirely different biomedical image types: fluorescence microscopy sequences and endometrial histology images. In fluorescence microscopy, our task is to track massive numbers of fluorescent spots with similar appearances and complicated motion patterns in noisy environments over hundreds of frames. In endometrial histology, we are challenged to detect different types of cells with similar appearance in terms of colour and morphology. The proposed solutions utilise several novel locality-sensitive models which can extract spatial and/or temporal relational features of the objects, i.e., local neighbouring objects exhibiting certain structures or patterns, to overcome the difficulties of object analysis in fluorescence microscopy and endometrial histology.
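
    A toy sketch of one kind of local relational feature, the distances from each detected spot to its k nearest neighbours, which a locality-sensitive model could use to disambiguate spots that look alike. The function name and the k-nearest-neighbour feature choice are illustrative assumptions, not the thesis' actual models.

    import numpy as np
    from scipy.spatial import cKDTree

    def neighbourhood_features(spots, k=3):
        # spots: (N, 2) array of (y, x) coordinates of detected fluorescent spots.
        # Returns, for each spot, the distances to its k nearest neighbours
        # as a simple local-context descriptor.
        tree = cKDTree(spots)
        # Query k+1 neighbours because each spot's nearest neighbour is itself.
        dists, _ = tree.query(spots, k=k + 1)
        return dists[:, 1:]

    # Spots with similar appearance but different local arrangements get
    # different feature vectors, which can help match them across frames.
    # features_t  = neighbourhood_features(spots_frame_t)
    # features_t1 = neighbourhood_features(spots_frame_t_plus_1)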

    Quantitative analysis of high-resolution microendoscopic images for diagnosis of neoplasia in patients with Barrett’s esophagus

    Background and Aims: Previous studies show that microendoscopic images can be interpreted visually to identify the presence of neoplasia in patients with Barrett’s esophagus (BE), but this approach is subjective and requires clinical expertise. This study describes an approach for quantitative image analysis of microendoscopic images to identify neoplastic lesions in patients with BE. Methods: Images were acquired from 230 sites in 58 patients using a fiberoptic high-resolution microendoscope during standard endoscopic procedures. Images were analyzed by a fully automated image processing algorithm, which selected a region of interest and calculated quantitative image features. The image features were used to develop an algorithm to identify the presence of neoplasia; results were compared with the histopathology diagnosis. Results: A sequential classification algorithm that used image features related to glandular and cellular morphology resulted in a sensitivity of 84% and a specificity of 85%. Applying the algorithm to an independent validation set resulted in a sensitivity of 88% and a specificity of 85%. Conclusions: This pilot study demonstrates that automated analysis of microendoscopic images can provide an objective, quantitative framework to assist clinicians in evaluating esophageal lesions from patients with BE. (Clinical trial registration numbers: NCT01384227 and NCT02018367.)
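
    The sensitivity and specificity figures reported above come from comparing per-site algorithm calls against the histopathology reference; a minimal sketch of that computation is shown below, with purely made-up example labels.

    def sensitivity_specificity(predictions, histopathology):
        # predictions / histopathology: lists of booleans, True = neoplastic.
        # Returns (sensitivity, specificity) against the histopathology reference.
        tp = sum(p and h for p, h in zip(predictions, histopathology))
        tn = sum((not p) and (not h) for p, h in zip(predictions, histopathology))
        fp = sum(p and (not h) for p, h in zip(predictions, histopathology))
        fn = sum((not p) and h for p, h in zip(predictions, histopathology))
        return tp / (tp + fn), tn / (tn + fp)

    # Example with hypothetical calls from 10 imaged sites:
    # sens, spec = sensitivity_specificity(
    #     [True, True, False, True, False, False, True, False, False, True],
    #     [True, True, False, False, False, False, True, False, True, True])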