
    Three-Dimensional GPU-Accelerated Active Contours for Automated Localization of Cells in Large Images

    Cell segmentation in microscopy is a challenging problem, since cells are often asymmetric and densely packed. This becomes particularly difficult for extremely large images, where manual intervention and processing time can make segmentation intractable. In this paper, we present an efficient and highly parallel formulation for symmetric three-dimensional (3D) contour evolution that extends previous work on fast two-dimensional active contours. We provide a formulation for optimization on 3D images, as well as a strategy for accelerating computation on consumer graphics hardware. The proposed software takes advantage of Monte-Carlo sampling schemes to speed up convergence and reduce thread divergence. Experimental results show that this method provides superior performance for large 2D and 3D cell segmentation tasks when compared to existing methods on large 3D brain images.
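    As a rough illustration of the Monte-Carlo sampling idea mentioned in this abstract, the sketch below evolves a 3D contour by updating only a random subset of control points per iteration. It is a minimal CPU/NumPy sketch, not the paper's GPU implementation, and the names (evolve_snake, ext_force, sample_frac) are hypothetical.

```python
# Illustrative sketch of Monte-Carlo-sampled contour evolution (NumPy, CPU).
# NOT the paper's GPU code; function and parameter names are hypothetical.
import numpy as np

def evolve_snake(points, ext_force, alpha=0.1, step=0.5,
                 sample_frac=0.25, iters=200, rng=None):
    """Evolve a closed 3D contour under an external force field.

    points      : (N, 3) array of control-point coordinates.
    ext_force   : callable mapping (M, 3) coordinates to (M, 3) force vectors
                  (e.g. the negative gradient of an image-energy term).
    alpha       : weight of the internal smoothing (curvature) force.
    sample_frac : fraction of points updated per iteration (Monte-Carlo step).
    """
    rng = np.random.default_rng(rng)
    pts = points.astype(float).copy()
    n = len(pts)
    for _ in range(iters):
        # Monte-Carlo sampling: update only a random subset each iteration,
        # the ingredient the paper uses to speed convergence and reduce
        # thread divergence on the GPU.
        idx = rng.choice(n, size=max(1, int(sample_frac * n)), replace=False)
        # Internal force: discrete Laplacian pulls points toward neighbours.
        internal = 0.5 * (pts[(idx - 1) % n] + pts[(idx + 1) % n]) - pts[idx]
        pts[idx] += step * (alpha * internal + ext_force(pts[idx]))
    return pts
```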

    Data-driven Representation Learning from Histopathology Image Databases to Support Digital Pathology Analysis

    Cancer research is a major public health priority worldwide due to its high incidence, diversity and mortality. Despite great advances in this area during recent decades, the high incidence and lack of specialists mean that one of the major challenges is achieving early diagnosis. Improved early diagnosis, especially in developing countries, plays a crucial role in timely treatment and patient survival. Recent advances in scanner technology for the digitization of pathology slides and the growth of global initiatives to build databases for cancer research have enabled the emergence of digital pathology as a new approach to support pathology workflows. This has led to the development of many computational methods for automatic histopathology image analysis, which in turn has raised new computational challenges due to the high visual variability of histopathology slides, the difficulty of assessing the effectiveness of methods (given the lack of data annotated by different pathologists and institutions), and the need for interpretable, efficient and practical methods. Machine learning techniques, on the other hand, have focused on exploiting large databases to automatically extract and induce information and knowledge, in the form of patterns and rules, that connects low-level content with its high-level meaning. Several approaches, nowadays known as representation learning, have emerged in opposition to traditional schemes based on handcrafted features for data representation. The objective of this thesis is the exploration, development and validation of precise, interpretable and efficient machine learning methods for automatic representation learning from histopathology image databases to support diagnosis of different types of cancer. The validation carried out during the development of the thesis corroborated the capability of the proposed methods in several histopathology image analysis tasks for different types of cancer. These methods achieve good results in terms of accuracy, robustness, reproducibility, interpretability and feasibility, suggesting their potential for practical application in translational and personalized medicine.

    SenseCare: A Research Platform for Medical Image Informatics and Interactive 3D Visualization

    Clinical research on smart healthcare has an increasing demand for intelligent, clinic-oriented medical image computing algorithms and platforms that support various applications. To this end, we have developed the SenseCare research platform for smart healthcare, which is designed to boost translational research on intelligent diagnosis and treatment planning in various clinical scenarios. To facilitate clinical research with Artificial Intelligence (AI), SenseCare provides a range of AI toolkits for different tasks, including image segmentation, registration, and lesion and landmark detection, across image modalities ranging from radiology to pathology. In addition, SenseCare is clinic-oriented and supports a wide range of clinical applications such as diagnosis and surgical planning for lung cancer, pelvic tumors and coronary artery disease. SenseCare also offers advanced 3D visualization, concurrent and efficient web-based access, fast data synchronization with high data security, multi-center deployment, and support for collaborative research. In this paper, we present an overview of SenseCare as an efficient platform providing comprehensive toolkits and high extensibility for intelligent image analysis and clinical research in different application scenarios. Comment: 11 pages, 10 figures

    Automatic Segmentation of Cells of Different Types in Fluorescence Microscopy Images

    Recognition of different cell compartments, types of cells, and their interactions is a critical aspect of quantitative cell biology. It provides valuable insight into cellular and subcellular interactions and the mechanisms of biological processes such as cancer cell dissemination, organ development and wound healing. Quantitative analysis of cell images is also the mainstay of numerous clinical diagnostic and grading procedures, for example in cancer, immunological, infectious, heart and lung disease. Automating the quantification of cellular biological samples requires segmenting different cellular and sub-cellular structures in microscopy images. However, automating this problem has proven non-trivial: it requires solving multi-class image segmentation tasks that are challenging owing to the high similarity of objects from different classes and to irregularly shaped structures. This thesis focuses on the development and application of probabilistic graphical models to multi-class cell segmentation. Graphical models can improve segmentation accuracy through their ability to exploit prior knowledge and model inter-class dependencies. Directed acyclic graphs, such as trees, have been widely used to model top-down statistical dependencies as a prior for improved image segmentation. With trees, however, only a few inter-class constraints can be captured. To overcome this limitation, this thesis proposes polytree graphical models, which capture label proximity relations more naturally than tree-based approaches. Polytrees can effectively impose prior knowledge on the inclusion of different classes by capturing both same-level and across-level dependencies. A novel recursive mechanism based on two-pass message passing is developed to efficiently calculate closed-form posteriors of graph nodes on polytrees. Furthermore, since an accurate and sufficiently large ground truth is not always available for training segmentation algorithms, a weakly supervised framework is developed that employs polytrees for multi-class segmentation and reduces the need for training by modeling prior knowledge during segmentation. A hierarchical graph is generated over the superpixels in the image, node labels are inferred through a novel efficient message-passing algorithm, and the model parameters are optimized with Expectation Maximization (EM). Evaluation on simulated data and on multiple publicly available fluorescence microscopy datasets indicates that the proposed method outperforms the state of the art. The method has also been assessed in predicting possible segmentation errors and shown to outperform trees. This paves the way to calculating uncertainty measures on the resulting segmentation and guiding subsequent segmentation refinement, which can be useful in the development of an interactive segmentation framework.
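    The two-pass message passing described in this abstract can be illustrated on the simpler tree case: the sketch below computes exact node posteriors with one upward and one downward pass over a rooted tree of discrete variables. The thesis's polytree formulation (nodes with several parents), the superpixel hierarchy, and the EM re-estimation of potentials are not reproduced here; all names are hypothetical.

```python
# Minimal two-pass sum-product message passing on a rooted tree (NumPy).
# The thesis extends this to polytrees; this sketch covers only trees.
import numpy as np

def tree_bp(children, unary, pairwise, root=0):
    """Exact posterior marginals on a tree via an upward and a downward pass.

    children : dict node -> list of child nodes (tree rooted at `root`).
    unary    : dict node -> (K,) non-negative evidence vector.
    pairwise : dict (parent, child) -> (K, K) compatibility matrix.
    Returns  : dict node -> (K,) normalized posterior marginal.
    """
    up = {}    # evidence collected from each node's subtree

    def upward(v):
        msg = np.asarray(unary[v], dtype=float).copy()
        for c in children.get(v, []):
            upward(c)
            msg *= pairwise[(v, c)] @ up[c]   # sum over child states
        up[v] = msg

    marg = {}

    def downward(v, prior):
        contrib = {c: pairwise[(v, c)] @ up[c] for c in children.get(v, [])}
        belief = prior * np.asarray(unary[v], dtype=float)
        for m in contrib.values():
            belief *= m
        marg[v] = belief / belief.sum()
        for c in children.get(v, []):
            # product of all factors at v except child c's own contribution
            other = prior * np.asarray(unary[v], dtype=float)
            for c2, m in contrib.items():
                if c2 != c:
                    other *= m
            downward(c, pairwise[(v, c)].T @ other)

    upward(root)
    downward(root, np.ones(len(unary[root])))
    return marg
```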

    Machine learning-based automated segmentation with a feedback loop for 3D synchrotron micro-CT

    The development of third-generation synchrotron light sources laid the foundation for investigating the 3D structure of opaque samples at micrometre resolution and beyond. This led to X-ray synchrotron micro-computed tomography, which in turn fostered the creation of imaging facilities for examining samples of many kinds, e.g. model organisms, to better understand the physiology of complex living systems. Modern control systems and robotics made it possible to fully automate X-ray imaging experiments and to calibrate the parameters of the experimental setup during operation, and advances in digital detector systems brought improvements in resolution, dynamic range, sensitivity and other key properties. These improvements considerably increased the throughput of the imaging process, but the experiments then began to generate substantially larger amounts of data, up to tens of terabytes, which were subsequently processed manually. These technical advances thus paved the way for more efficient high-throughput experiments that examine large numbers of samples and produce higher-quality datasets. The scientific community therefore has a strong need for an efficient, automated X-ray data analysis workflow that can handle such a data load and deliver valuable insights to domain experts. Existing solutions are not directly applicable to high-throughput experiments, since they were developed for ad-hoc scenarios in medical imaging; they are neither optimized for high-throughput data streams nor able to exploit the hierarchical nature of the samples. The main contribution of this thesis is a new automated analysis workflow suited to the efficient processing of heterogeneous X-ray datasets of a hierarchical nature. The workflow is based on improved methods for data preprocessing, registration, localization and segmentation, and every stage that involves a training phase can be automatically fine-tuned to find the best hyperparameters for the specific dataset. For the analysis of fibre structures in samples, a new, highly parallelizable 3D orientation analysis method was developed, based on a novel concept of emitting rays, which enables more precise morphological analysis. All developed methods were thoroughly validated on synthetic datasets to quantitatively assess their applicability under different imaging conditions, and the workflow was shown to be capable of processing a series of datasets of a similar kind. Efficient CPU/GPU implementations of the workflow and its methods are presented and made available to the community as Python modules. The automated analysis workflow was successfully applied to micro-CT datasets acquired in high-throughput X-ray experiments in developmental biology and materials science. In particular, it was applied to the analysis of medaka fish datasets, enabling automated segmentation and subsequent morphological analysis of the brain, liver, head nephrons and heart. Furthermore, the developed 3D orientation analysis method was used in the morphological analysis of polymer scaffold datasets to steer a fabrication process towards desirable properties.
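    The thesis's ray-based 3D orientation analysis is not reproduced here. As a point of reference only, the sketch below shows the classical structure-tensor approach to estimating local fibre orientation in a 3D volume, a standard baseline for this kind of morphological analysis; parameter names are illustrative.

```python
# Local 3D fibre-orientation estimation via the classical structure tensor.
# Standard baseline only, NOT the thesis's ray-based method.
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor_orientation(volume, sigma_grad=1.0, sigma_win=3.0):
    """Return the dominant local fibre direction for each voxel.

    volume : 3D ndarray of grey values.
    Output : (..., 3) array of unit vectors (eigenvector belonging to the
             smallest eigenvalue of the local structure tensor).
    """
    grads = np.gradient(gaussian_filter(volume.astype(float), sigma_grad))
    # Build the 3x3 structure tensor per voxel, averaged over a local window.
    T = np.empty(volume.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            T[..., i, j] = gaussian_filter(grads[i] * grads[j], sigma_win)
    # The fibre axis is the direction of least intensity variation,
    # i.e. the eigenvector of the smallest eigenvalue.
    eigvals, eigvecs = np.linalg.eigh(T)   # eigenvalues in ascending order
    return eigvecs[..., :, 0]
```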

    Deep Learning for Detection and Segmentation in High-Content Microscopy Images

    High-content microscopy has led to many advances in biology and medicine. This fast-emerging technology is transforming cell biology into a big-data-driven science, and computer vision methods are used to automate the analysis of microscopy image data. In recent years, deep learning has become popular and has had major success in computer vision, but most available methods are developed to process natural images. Compared to natural images, microscopy images pose domain-specific challenges such as small training datasets, clustered objects, and class imbalance. In this thesis, new deep learning methods for object detection and cell segmentation in microscopy images are introduced. For particle detection in fluorescence microscopy images, a deep learning method based on a domain-adapted Deconvolution Network is presented. In addition, a method for mitotic cell detection in heterogeneous histopathology images is proposed, which combines a deep residual network with Hough voting; the method is used for grading whole-slide histology images of breast carcinoma. Moreover, a method for both particle detection and cell detection based on object centroids is introduced, which is trainable end-to-end. It comprises a novel Centroid Proposal Network, a layer for ensembling detection hypotheses over image scales and anchors, an anchor regularization scheme which favours prior anchors over regressed locations, and an improved algorithm for Non-Maximum Suppression. Furthermore, a novel loss function based on Normalized Mutual Information is proposed, which can cope with strong class imbalance and is derived within a Bayesian framework. For cell segmentation, a deep neural network with an increased receptive field to capture rich semantic information is introduced, as well as a deep neural network which combines the multi-scale feature aggregation of Convolutional Neural Networks with the iterative refinement of Recurrent Neural Networks. To increase the robustness of training and improve segmentation, a novel focal loss function is presented. In addition, a framework for black-box hyperparameter optimization for biomedical image analysis pipelines is proposed; it has a modular architecture that separates hyperparameter sampling from hyperparameter optimization, and a visualization of the loss function based on infimum projections is suggested to gain further insight into the optimization problem. A transfer learning approach is also presented which uses only one color channel for pre-training and performs fine-tuning on more color channels, along with an approach for unsupervised domain adaptation of histopathological slides. Finally, Galaxy Image Analysis is presented, a platform for web-based microscopy image analysis; workflows for cell segmentation in cell cultures, particle detection in mouse brain tissue, and MALDI/H&E image registration have been developed. The proposed methods were applied to challenging synthetic as well as real microscopy image data from various microscopy modalities and yield state-of-the-art or improved results. They were benchmarked in international image analysis challenges and used in various cooperation projects with biomedical researchers.
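    As background for the detection components mentioned in this abstract, the sketch below shows standard greedy non-maximum suppression adapted to point (centroid) detections. It is the conventional baseline, not the improved NMS algorithm proposed in the thesis; the function name and the radius parameter are illustrative.

```python
# Greedy non-maximum suppression over predicted cell centroids (NumPy).
# Standard baseline only, NOT the thesis's improved NMS.
import numpy as np

def nms_centroids(points, scores, radius):
    """Keep the highest-scoring detection within each `radius` neighbourhood.

    points : (N, D) centroid coordinates (D = 2 or 3).
    scores : (N,) confidence scores.
    radius : detections closer than this to an already-kept one are dropped.
    Returns indices of kept detections, ordered by descending score.
    """
    order = np.argsort(scores)[::-1]          # process best detections first
    keep = []
    suppressed = np.zeros(len(points), dtype=bool)
    for i in order:
        if suppressed[i]:
            continue
        keep.append(i)
        dist = np.linalg.norm(points - points[i], axis=1)
        suppressed |= dist < radius           # drop close, lower-scoring hits
    return np.asarray(keep)
```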

    Computational methods in Connectomics
