
    A Comprehensive Overview of Computational Nuclei Segmentation Methods in Digital Pathology

    In the cancer diagnosis pipeline, digital pathology plays an instrumental role in the identification, staging, and grading of malignant areas on biopsy tissue specimens. High-resolution histology images are subject to high variance in appearance, arising either from the acquisition devices or from the H&E staining process. Nuclei segmentation is an important task, as it detects nuclei against the background tissue and yields the topology, size, and count of nuclei, which are determining factors for cancer detection. Yet it is a fairly time-consuming task for pathologists, with reportedly high subjectivity. Computer Aided Diagnosis (CAD) tools empowered by modern Artificial Intelligence (AI) models enable the automation of nuclei segmentation, which can reduce both the subjectivity of the analysis and the reading time. This paper provides an extensive review, beginning with earlier works that use traditional image processing techniques and extending to modern approaches that follow the Deep Learning (DL) paradigm. Our review also focuses on the weak supervision aspect of the problem, motivated by the fact that annotated data is scarce. At the end, the advantages of different models and types of supervision are thoroughly discussed. Furthermore, we try to extrapolate and envision how future research lines may evolve so as to minimize the need for labeled data while maintaining high performance. Future methods should emphasize efficient and explainable models with a transparent underlying process so that physicians can trust their output.
    Comment: 47 pages, 27 figures, 9 tables
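
    As an illustration of the classical image-processing baseline that the review covers, the following sketch segments nuclei by thresholding the hematoxylin-dominant channel of a stain-unmixed image and splitting touching nuclei with a watershed. It is a minimal sketch assuming scikit-image and SciPy; the sample image, threshold, and size parameters are illustrative, not taken from the paper.

        # Minimal classical nuclei-segmentation sketch (illustrative parameters).
        from scipy import ndimage as ndi
        from skimage import color, data, filters, measure, morphology, segmentation

        rgb = data.immunohistochemistry()            # stand-in for a stained tissue tile
        hema = color.rgb2hed(rgb)[:, :, 0]           # unmix stains; channel 0 ~ hematoxylin

        mask = hema > filters.threshold_otsu(hema)           # global Otsu threshold
        mask = morphology.remove_small_objects(mask, 64)     # drop speckle

        distance = ndi.distance_transform_edt(mask)          # distance to background
        markers = measure.label(morphology.h_maxima(distance, h=2))    # one seed per nucleus
        labels = segmentation.watershed(-distance, markers, mask=mask) # split touching nuclei

        print(f"{labels.max()} nuclei detected")     # size, count, topology follow from `labels`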

    Non-Classical Nucleation Phenomena Study And Following Process Monitoring and Optimization in Solution Crystallization Process

    Nucleation is a crucial step in the solution crystallization process. Although well developed, classical nucleation theory and two-step nucleation theory cannot explain all nucleation phenomena, especially non-classical phenomena such as oiling out, gelation, and non-monotonic nucleation. Accordingly, for non-classical nucleation systems, crystallization processes are seldom designed on the basis of nucleation monitoring and supervision. In this thesis, the mechanisms of non-classical nucleation phenomena were studied, in-line process monitoring technology was developed, and the crystallization process was optimized accordingly. Two kinds of non-classical nucleation phenomena, with non-monotonic nucleation rate and gel formation, were investigated, and two nucleation pathways, self-induced nucleation and jellylike-phase-mediated nucleation, were proposed based on in-line spectral monitoring and off-line sample characterization. Results indicated that the agitation level affects the existence of pre-nucleation clusters in the non-monotonic nucleation system, and that the properties of the solvent determine the formation of the jellylike phase and its transformation to crystals. A motion-based object-tracking model and the state-of-the-art neural network Mask R-CNN were introduced to monitor the onset of nucleation and the subsequent crystallization process. Combined with a cost-effective camera probe, the developed real-time tracking system can detect the nucleation onset accurately, even under ultrasonic irradiation, and can extract much more information over the whole crystallization process. Subsequently, ultrasonic irradiation and seeding were used to optimize a non-classical nucleation system accompanied by an oiling-out phenomenon. Different frequencies and intensities of ultrasonic irradiation and different seed addition times were screened to optimize the nucleation step, proving their effectiveness in promoting nucleation and narrowing the metastable zone widths of oiling out and nucleation. A fine-tuning of the nucleation step was carried out in a mixed suspension mixed product removal (MSMPR)-tubular crystallizer series: the nucleation step was optimized in the MSMPR stage with the aid of principal component analysis, which enabled the growth of crystals with the preferred polymorphism, shape, and size in the tubular crystallizer. The study in this thesis provides insights into non-classical nucleation mechanisms and into nucleation-based crystallization process design and optimization.
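
    The thesis's real-time system combines a motion-based tracker with Mask R-CNN; as a much simpler illustration of detecting nucleation onset from camera-probe frames, the sketch below flags the first frame whose change relative to the previous frame exceeds a fixed fraction of pixels. It is a minimal sketch assuming OpenCV; the thresholds are arbitrary assumptions, not values from the thesis.

        # Illustrative nucleation-onset detector via frame differencing (not the
        # thesis model); `diff_thresh` and `active_frac` are arbitrary assumptions.
        import cv2

        def detect_onset(video_path, diff_thresh=25, active_frac=0.01):
            """Index of the first frame that differs from its predecessor on more
            than `active_frac` of its pixels, or None if no such frame exists."""
            cap = cv2.VideoCapture(video_path)
            ok, frame = cap.read()
            if not ok:
                raise IOError(f"cannot read {video_path}")
            prev = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (5, 5), 0)
            idx = 0
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                idx += 1
                gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (5, 5), 0)
                changed = (cv2.absdiff(gray, prev) > diff_thresh).mean()  # changed fraction
                if changed > active_frac:
                    cap.release()
                    return idx                        # candidate nucleation onset
                prev = gray
            cap.release()
            return None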

    Computational modelling of imaging markers to support the diagnosis and monitoring of multiple sclerosis

    Multiple sclerosis is a leading cause of neurological disability in young adults and affects more than 2.5 million people worldwide. An important substrate of disability accrual is the loss of neurons and of the connections between them (neurodegeneration), which can be captured by serial brain imaging, especially in the cerebral grey matter. In this thesis, across four separate subprojects, I aimed to assess the strength of imaging-derived grey matter volume as a biomarker for diagnosing multiple sclerosis, for predicting its evolution, and for developing a staging system to stratify patients. In total, I retrospectively studied 1701 subjects, of whom 1548 had longitudinal brain imaging data, and used advanced computational models to investigate cross-sectional and longitudinal datasets. In the cross-sectional study, I demonstrated that grey matter volumes could distinguish multiple sclerosis from another demyelinating disorder (neuromyelitis optica) with an accuracy of 74%. In the longitudinal studies, I showed that over time the deep grey matter nuclei had the fastest rate of volume loss across brain regions in multiple sclerosis (up to 1.66% annual loss), and that the volume of the deep grey matter was the strongest predictor of disability progression. I found that multiple sclerosis affects different brain areas in a specific temporal order (or sequence) that starts with the deep grey matter nuclei, posterior cingulate cortex, precuneus, and cerebellum. Finally, with multivariate mechanistic and causal modelling, I showed that brain volume loss causes disability and cognitive worsening, which can be delayed with a potential neuroprotective treatment (simvastatin). This work provides conclusive evidence that grey matter volume loss affects some brain regions more severely than others, can predict future disability progression, can be used as an outcome measure in phase II clinical trials, and causes clinical and cognitive worsening. The thesis also provides a staging system by which patients can be scored over the course of multiple sclerosis.
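
    As a minimal sketch of the kind of cross-sectional classifier described above (distinguishing multiple sclerosis from neuromyelitis optica using grey matter volumes), the example below fits a logistic regression on synthetic volume features. The feature names, synthetic data, and model choice are assumptions for illustration, not the thesis's actual pipeline or data.

        # Illustrative MS-vs-NMO classifier on (synthetic) grey matter volumes.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        n = 200
        # Hypothetical features: deep grey matter, cortical, cerebellar volumes (ml).
        X = rng.normal(loc=[55.0, 600.0, 105.0], scale=[5.0, 40.0, 8.0], size=(n, 3))
        y = rng.integers(0, 2, size=n)        # 0 = NMO, 1 = MS (placeholder labels)
        X[y == 1, 0] -= 4.0                   # simulate deep grey matter atrophy in MS

        clf = make_pipeline(StandardScaler(), LogisticRegression())
        print("cross-validated accuracy:",
              cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean())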

    A graph-based cell tracking algorithm with few manually tunable parameters and automated segmentation error correction

    Automatic cell segmentation and tracking make it possible to gain quantitative insights into the processes driving cell migration. To investigate new data with minimal manual effort, cell tracking algorithms should be easy to apply and should reduce manual curation time by automatically correcting segmentation errors. Current cell tracking algorithms, however, are either easy to apply to new data sets but lack automatic segmentation error correction, or have a vast set of parameters that requires either manual tuning or annotated data. In this work, we propose a tracking algorithm with only a few manually tunable parameters and automatic segmentation error correction; moreover, no training data is needed. We compare the performance of our approach to three well-performing tracking algorithms from the Cell Tracking Challenge on data sets with simulated, degraded segmentations, including false negatives and over- and under-segmentation errors. Our tracking algorithm can correct false negatives and over- and under-segmentation errors, as well as mixtures of these segmentation errors, and performs best on data sets with under-segmentation errors or mixed errors. Moreover, without requiring additional manual tuning, our approach ranks several times in the top 3 of the 6th edition of the Cell Tracking Challenge.
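
    A core ingredient of such graph-based trackers is linking detections between consecutive frames. The sketch below shows a minimal frame-to-frame assignment on centroid distances with the Hungarian algorithm; it is a generic illustration with made-up inputs, not the paper's algorithm, which additionally builds a global graph and corrects segmentation errors.

        # Minimal frame-to-frame cell linking via the Hungarian algorithm.
        import numpy as np
        from scipy.optimize import linear_sum_assignment
        from scipy.spatial.distance import cdist

        def link_frames(centroids_t, centroids_t1, max_dist=20.0):
            """Match centroids of frame t to frame t+1; pairs farther apart than
            `max_dist` are left unmatched (appearing/disappearing cells)."""
            cost = cdist(centroids_t, centroids_t1)      # pairwise Euclidean distances
            rows, cols = linear_sum_assignment(cost)     # optimal one-to-one assignment
            return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]

        # Toy example: three cells persist, a fourth appears in the next frame.
        t0 = np.array([[10.0, 10.0], [40.0, 40.0], [70.0, 15.0]])
        t1 = np.array([[12.0, 11.0], [43.0, 39.0], [69.0, 18.0], [90.0, 90.0]])
        print(link_frames(t0, t1))                       # -> [(0, 0), (1, 1), (2, 2)]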

    Computer Vision Approaches for Mapping Gene Expression onto Lineage Trees

    This project concerns the early development of living organisms, a period accompanied by dynamic morphogenetic events: the number of cells increases, cell shapes change, and cell fates are specified. Typically, these dynamic morphological changes are captured with a form of microscopy imaging such as Selective Plane Illumination Microscopy (SPIM), which offers single-cell resolution across time and hence allows observing the positions, velocities, and trajectories of most cells in a developing embryo. Unfortunately, the dynamic genetic activity which underlies these morphological changes, and which influences cellular fate decisions, is captured only as static snapshots and often requires processing (sequencing or imaging) multiple distinct individuals. In order to set the stage for characterizing the factors which influence cellular fate, the data arising from these static snapshots of multiple individuals and the SPIM data characterizing the morphological changes of other distinct individuals must be brought into the same frame of reference. In this project, a computational pipeline is established which maps data from these various imaging modalities and specimens to a canonical frame of reference. The pipeline relies on three core building blocks: instance segmentation, tracking, and registration. In this dissertation, I introduce EmbedSeg, my solution for instance segmentation of 2D and 3D (volume) image data; LineageTracer, my solution for tracking time-lapse (2D+t, 3D+t) recordings; and PlatyMatch, my solution for registering volumes. Errors from these building blocks accumulate, producing noisy estimates of gene expression for the digitized cells in the canonical frame of reference; these noisy estimates are processed to infer the underlying hidden state using a Hidden Markov Model (HMM) formulation. Lastly, wider dissemination of these methods requires an effective visualization strategy, a few details of which are also discussed. The pipeline was designed with volumetric image data in mind, but can easily be extended to incorporate other data modalities if available, such as single-cell RNA sequencing (scRNA-Seq); more details are provided in the Discussion chapter. The methods elucidated in this dissertation provide a fertile playground for several future experiments and analyses; some of these potential experiments, along with current weaknesses of the computational pipeline, are also discussed in the Discussion chapter.
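
    As an illustration of the final HMM inference step named above, the sketch below Viterbi-decodes a two-state hidden variable (gene off/on) from noisy per-timepoint expression calls. All probabilities are made-up assumptions for illustration, not the thesis's fitted HMM.

        # Illustrative Viterbi decoding of a hidden gene-expression state.
        import numpy as np

        def viterbi(obs, log_pi, log_A, log_B):
            """Most likely hidden-state path for the observation sequence `obs`."""
            T, n_states = len(obs), log_pi.shape[0]
            delta = np.empty((T, n_states))               # best log-prob per state
            psi = np.zeros((T, n_states), dtype=int)      # backpointers
            delta[0] = log_pi + log_B[:, obs[0]]
            for t in range(1, T):
                scores = delta[t - 1][:, None] + log_A    # (from_state, to_state)
                psi[t] = scores.argmax(axis=0)
                delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
            path = [int(delta[-1].argmax())]
            for t in range(T - 1, 0, -1):                 # backtrack
                path.append(int(psi[t][path[-1]]))
            return path[::-1]

        pi = np.array([0.9, 0.1])                         # start mostly "off"
        A = np.array([[0.95, 0.05], [0.02, 0.98]])        # sticky transitions
        B = np.array([[0.8, 0.2], [0.3, 0.7]])            # P(observed call | true state)
        obs = [0, 0, 1, 0, 1, 1, 1, 0, 1, 1]              # noisy expression calls
        print(viterbi(obs, np.log(pi), np.log(A), np.log(B)))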

    Semi-automated learning strategies for large-scale segmentation of histology and other big bioimaging stacks and volumes

    Labelled high-resolution datasets are becoming increasingly common and necessary in different areas of biomedical imaging. Examples include serial histology and ex-vivo MRI for atlas building, OCT for studying the human brain, and micro X-ray for tissue engineering. Labelling such datasets typically requires manual delineation of a very detailed set of regions of interest on a large number of sections or slices. This process is tedious, time-consuming, not reproducible, and rather inefficient due to the high similarity of adjacent sections. In this thesis, I explore the potential of a semi-automated slice-level segmentation framework and a suggestive region-level framework, which aim to speed up the segmentation of big bioimaging datasets. The thesis includes two well-validated, published, and widely used novel methods, and one algorithm that did not yield an improvement over the current state of the art. The slice-wise method, SmartInterpol, consists of a probabilistic model for semi-automated segmentation of stacks of 2D images, in which the user manually labels a sparse set of sections (e.g., one every n sections) and lets the algorithm complete the segmentation of the other sections automatically. The proposed model integrates, in a principled manner, two families of segmentation techniques that have been very successful in brain imaging: multi-atlas segmentation and convolutional neural networks. Labelling every structure on a sparse set of slices is not necessarily optimal, so I also introduce a region-level active learning framework which asks the labeller to annotate one region of interest on one slice at a time. This framework exploits partial annotations, weak supervision, and realistic estimates of class- and section-specific annotation effort to greatly reduce the time it takes to produce accurate segmentations for large histological datasets. Although both frameworks were created targeting histological datasets, they have been successfully applied to other big bioimaging datasets, reducing labelling effort by up to 60–70% without compromising accuracy.
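
    SmartInterpol itself integrates multi-atlas segmentation with convolutional neural networks; as a far simpler stand-in for the same sparse-annotation workflow, the sketch below completes a stack in which the user labelled every n-th slice by copying each unlabelled slice's segmentation from its nearest labelled slice. The function and data are illustrative assumptions, not the published method.

        # Naive baseline for completing a sparsely labelled stack (illustrative;
        # SmartInterpol combines multi-atlas segmentation with CNNs instead).
        import numpy as np

        def propagate_nearest(labels, annotated):
            """`labels`: (Z, H, W) int array, valid only on `annotated` slices.
            Returns a fully labelled stack by nearest-slice copying."""
            annotated = np.asarray(sorted(annotated))
            out = labels.copy()
            for z in range(labels.shape[0]):
                out[z] = labels[annotated[np.abs(annotated - z).argmin()]]
            return out

        stack = np.zeros((9, 4, 4), dtype=int)
        stack[0], stack[4], stack[8] = 1, 2, 3    # user labels every 4th slice
        full = propagate_nearest(stack, annotated=[0, 4, 8])
        print(full[:, 0, 0])                      # -> [1 1 1 2 2 2 2 3 3]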

    Machine Learning Approaches for Semantic Segmentation on Partly-Annotated Medical Images

    Semantic segmentation of medical images plays a crucial role in helping medical practitioners provide accurate and swift diagnoses; nevertheless, deep neural networks require extensive labelled data to learn and generalise appropriately. This is a major issue in medical imaging because most datasets are not fully annotated. Models trained on partly-annotated datasets generate many predictions that fall in correct but unannotated areas and are therefore categorised as false positives; as a result, standard segmentation metrics and objective functions do not behave correctly, affecting the overall performance of the models. In this thesis, the semantic segmentation of partly-annotated medical datasets is extensively and thoroughly studied. The general objective is to improve the segmentation of medical images via innovative supervised and semi-supervised approaches. The main contributions of this work are the following. Firstly, a new metric, specifically designed for this kind of dataset, provides a reliable score for partly-annotated datasets with positive expert feedback on their generated predictions, by exploiting all the confusion-matrix values except the false positives. Secondly, an innovative approach generates better pseudo-labels when applying co-training with the disagreement selection strategy; this method expands the pixels in disagreement, using the combined predictions as a guide. Thirdly, original attention mechanisms based on disagreement are designed for two cases, intra-model and inter-model; these attention modules leverage the disagreement between layers (of the same or of different model instances) to enhance the overall learning process and the generalisation of the models. Lastly, innovative deep supervision methods improve the segmentation results by training neural networks one subnetwork at a time, following the order of the supervision branches. The methods are thoroughly evaluated on several histopathological datasets, showing significant improvements.
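
    The abstract does not spell out the metric's formula; as a hedged illustration of scoring with every confusion-matrix value except the false positives, the sketch below combines TP, TN, and FN from binary masks into one score. This particular combination is an assumption made for the example, not the thesis's definition.

        # Illustrative FP-free score for partly-annotated masks; the combination
        # (TP + TN) / (TP + TN + FN) is an assumption, not the thesis's metric.
        import numpy as np

        def fp_free_score(pred, target):
            """`pred`, `target`: boolean arrays of the same shape."""
            tp = np.logical_and(pred, target).sum()
            tn = np.logical_and(~pred, ~target).sum()
            fn = np.logical_and(~pred, target).sum()
            # False positives are deliberately ignored: on a partly-annotated
            # dataset they may be correct predictions in unannotated regions.
            return (tp + tn) / (tp + tn + fn)

        pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)    # extra hit at (0, 1)
        target = np.array([[1, 0, 0], [0, 1, 0]], dtype=bool)  # annotation lacks (0, 1)
        print(fp_free_score(pred, target))                     # 1.0, unaffected by the FP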