153 research outputs found

    Automated segmentation of tissue images for computerized IHC analysis

    This paper presents two automated methods for the segmentation of immunohistochemical tissue images that overcome the limitations of the manual approach as well as of existing computerized techniques. The first independent method, based on unsupervised color clustering, automatically recognizes the target cancerous areas in the specimen and disregards the stroma; the second method, based on color separation and morphological processing, performs automated segmentation of the nuclear membranes of the cancerous cells. Extensive experimental results on real tissue images demonstrate the accuracy of our techniques compared to manual segmentations; additional experiments show that our techniques are more effective on immunohistochemical images than popular approaches based on supervised learning or active contours. The proposed procedure can be exploited for any application that requires tissue and cell exploration, and to perform reliable and standardized measures of the activity of specific proteins involved in multi-factorial genetic pathologies.
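    The unsupervised color-clustering step described above can be illustrated with a plain k-means sketch over RGB pixels. This is a generic illustration of color clustering, not the authors' exact algorithm; the toy pixel values (dark "stroma-like" vs. brown "stain-like") are invented for the example.

```python
import numpy as np

def kmeans_color_clusters(pixels, k=3, iters=20):
    """Cluster (N, 3) float RGB pixels with plain k-means.

    Returns per-pixel cluster labels and the final cluster centers.
    Centers are initialized deterministically by picking pixels
    spread evenly through the array (a simplification for the sketch).
    """
    centers = pixels[np.linspace(0, len(pixels) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # Assign each pixel to the nearest center (Euclidean in RGB).
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each center; keep the old center if a cluster empties.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers

# Toy "image": two dark pixels vs. two brown (DAB-stain-like) pixels.
img = np.array([[30, 30, 30], [35, 32, 28], [150, 90, 40], [155, 95, 45]], float)
labels, centers = kmeans_color_clusters(img, k=2)
```

In a real pipeline, one cluster would then be identified as target tissue (e.g., by its mean color) and the rest discarded as stroma or background.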

    A Survey on Deep Learning in Medical Image Analysis

    Deep learning algorithms, in particular convolutional networks, have rapidly become the methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks, and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed. Comment: The revised survey includes an expanded discussion section and a reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 201

    Multi-modal and multi-dimensional biomedical image data analysis using deep learning

    There is a growing need for computational methods and tools for automated, objective, and quantitative analysis of biomedical signal and image data to facilitate disease and treatment monitoring, early diagnosis, and scientific discovery. Recent advances in artificial intelligence and machine learning, particularly in deep learning, have revolutionized computer vision and image analysis for many application areas. While processing of non-biomedical signal, image, and video data with deep learning methods has been very successful, high-stakes biomedical applications present unique challenges that must be addressed, such as diverse image modalities, limited training data, and the need for explainability and interpretability. In this dissertation, we developed novel, explainable, attention-based deep learning frameworks for objective, automated, and quantitative analysis of biomedical signal, image, and video data. The proposed solutions involve multi-scale signal analysis for oral diadochokinesis studies; an ensemble of deep learning cascades using global soft attention mechanisms for segmentation of meningeal vascular networks in confocal microscopy; spatial attention and spatio-temporal data fusion for detection of rare and short-term video events in laryngeal endoscopy videos; and a novel discrete Fourier transform driven class activation map for explainable AI and weakly supervised object localization and segmentation for detailed vocal fold motion analysis using laryngeal endoscopy videos. Experiments on the proposed methods showed robust and promising results toward automated, objective, and quantitative analysis of biomedical data, which is of great value for potential early diagnosis and effective monitoring of disease progression or treatment. Includes bibliographical references.

    Advanced Computational Methods for Oncological Image Analysis

    Cancer is the second most common cause of death worldwide and encompasses highly variable clinical and biological scenarios. Some of the current clinical challenges are (i) early diagnosis of the disease and (ii) precision medicine, which allows for treatments targeted to specific clinical cases. The ultimate goal is to optimize the clinical workflow by combining accurate diagnosis with the most suitable therapies. Toward this, large-scale machine learning research can define associations among clinical, imaging, and multi-omics studies, making it possible to provide reliable diagnostic and prognostic biomarkers for precision oncology. Such reliable computer-assisted methods (i.e., artificial intelligence), together with clinicians' unique knowledge, can be used to properly handle typical issues in evaluation and quantification procedures (i.e., operator dependence and time-consuming tasks). These technical advances can significantly improve the repeatability of results in disease diagnosis and guide toward appropriate cancer care. Indeed, the need to apply machine learning and computational intelligence techniques has steadily increased to effectively perform image processing operations (such as segmentation, co-registration, classification, and dimensionality reduction) and multi-omics data integration.

    Representation learning for histopathology image analysis

    Abstract. Nowadays, automatic methods for image representation and analysis have been successfully applied in several medical imaging problems leading to the emergence of novel research areas like digital pathology and bioimage informatics. The main challenge of these methods is to deal with the high visual variability of biological structures present in the images, which increases the semantic gap between their visual appearance and their high level meaning. Particularly, the visual variability in histopathology images is also related to the noise added by acquisition stages such as magnification, sectioning and staining, among others. Many efforts have focused on the careful selection of the image representations to capture such variability. This approach requires expert knowledge as well as hand-engineered design to build good feature detectors that represent the relevant visual information. Current approaches in classical computer vision tasks have replaced such design by the inclusion of the image representation as a new learning stage called representation learning. This paradigm has outperformed the state-of-the-art results in many pattern recognition tasks like speech recognition, object detection, and image scene classification. The aim of this research was to explore and define a learning-based histopathology image representation strategy with interpretative capabilities. The main contribution was a novel approach to learn the image representation for cancer detection. The proposed approach learns the representation directly from a Basal-cell carcinoma image collection in an unsupervised way and was extended to extract more complex features from low-level representations. Additionally, this research proposed the digital staining module, a complementary interpretability stage to support diagnosis through a visual identification of discriminant and semantic features. 
Experimental results showed an F-score of 92%, improving on the state-of-the-art representation by 7%. This research concluded that representation learning improves the generalization of the feature detectors as well as the performance on the basal cell carcinoma detection task. As additional contributions, a bag-of-features image representation was extended and evaluated for Alzheimer detection, obtaining a 95% equal error classification rate. Also, a novel perspective for learning morphometric measures in cervical cells based on bag of features was presented and evaluated, obtaining promising results for predicting nuclei and cytoplasm areas.
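    The bag-of-features image representation mentioned above can be sketched as follows: local descriptors are quantized against a visual codebook and the image is summarized as a normalized histogram of codeword counts. The codebook and descriptor values below are invented for illustration, and the codebook is assumed to have been learned beforehand (e.g., by k-means over descriptors from a training collection); this is a generic sketch, not the thesis' exact pipeline.

```python
import numpy as np

def bag_of_features(descriptors, codebook):
    """Return the bag-of-features histogram for one image.

    `descriptors`: (N, D) array of local patch features.
    `codebook`: (K, D) array of learned cluster centers ("visual words").
    """
    # Assign each descriptor to its nearest codeword (Euclidean distance).
    d = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = d.argmin(axis=1)
    # Count codeword occurrences; L1-normalize so images of different
    # sizes yield comparable representations.
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

codebook = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
desc = np.array([[0.1, 0.0], [0.9, 1.1], [0.95, 0.9], [0.05, 0.95]])
rep = bag_of_features(desc, codebook)
```

The resulting fixed-length vector can then be fed to any standard classifier, which is what makes the representation convenient for detection tasks.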

    Volumetric Segmentation of Cell Cycle Markers in Confocal Images Using Machine Learning and Deep Learning

    © 2020 Khan, Voß, Pound and French. Understanding plant growth processes is important for many aspects of biology and food security. Automating the observation of plant development, a process referred to as plant phenotyping, is increasingly important in the plant sciences and is often a bottleneck. Automated tools are required to analyze the data in microscopy images depicting plant growth, either locating or counting regions of cellular features in images. In this paper, we present to the plant community an introduction to and exploration of two machine learning approaches to the problem of marker localization in confocal microscopy. First, a comparative study is conducted on the classification accuracy of common conventional machine learning algorithms, as a means to highlight challenges with these methods. Second, a 3D (volumetric) deep learning approach is developed and presented, including consideration of appropriate loss functions and training data. A qualitative and quantitative analysis of all the results produced is performed. Evaluation of all approaches is performed on an unseen time-series sequence comprising several individual 3D volumes capturing plant growth. The comparative analysis shows that the deep learning approach produces more accurate and robust results than traditional machine learning. To accompany the paper, we are releasing the 4D point annotation tool used to generate the annotations, in the form of a plugin for the popular ImageJ (FIJI) software. Network models and example datasets will also be available online.
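    Volumetric segmentation of sparse cellular markers is often trained with an overlap-based objective; a soft Dice loss over the predicted probability volume is one common formulation. The sketch below is illustrative of that family of loss functions, not necessarily the exact loss used in the paper, and the toy volume is invented.

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss over a 3D probability volume.

    `pred`: predicted foreground probabilities, shape (D, H, W).
    `target`: binary ground-truth volume of the same shape.
    `eps` stabilizes the ratio when both volumes are (nearly) empty.
    """
    intersection = (pred * target).sum()
    denom = pred.sum() + target.sum()
    dice = (2.0 * intersection + eps) / (denom + eps)
    return 1.0 - dice

# Toy ground truth: a small foreground cube inside a 4x4x4 volume.
t = np.zeros((4, 4, 4))
t[1:3, 1:3, 1:3] = 1.0
perfect = soft_dice_loss(t, t)               # near 0 for an exact match
miss = soft_dice_loss(np.zeros_like(t), t)   # near 1 when nothing is predicted
```

Overlap losses of this kind are popular for volumetric markers because, unlike plain voxel-wise cross-entropy, they are insensitive to the large foreground/background imbalance typical of confocal volumes.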