
    DefectNET: multi-class fault detection on highly-imbalanced datasets

    As a data-driven method, the performance of deep convolutional neural networks (CNNs) relies heavily on training data. The predictions of traditional networks are biased toward larger classes, which in semantic segmentation tend to be the background. This becomes a major problem for fault detection, where the targets appear very small in the images and vary in both type and size. In this paper we propose a new network architecture, DefectNet, that offers multi-class (including but not limited to) defect detection on highly imbalanced datasets. DefectNet consists of two parallel paths, a fully convolutional network and a dilated convolutional network, which detect large and small objects respectively. We propose a hybrid loss that maximises the usefulness of a dice loss and a cross-entropy loss, and we also employ the leaky rectified linear unit (ReLU) to deal with the rare occurrence of some targets in training batches. The results show that DefectNet outperforms state-of-the-art networks for detecting multi-class defects, with an average accuracy improvement of approximately 10% on a wind turbine dataset.
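    A hybrid of a dice loss and a cross-entropy loss, as described in the abstract, can be sketched as follows. This is a minimal NumPy illustration for binary masks; the weighting `alpha` and the exact combination are illustrative assumptions, not the paper's published formulation:

    ```python
    import numpy as np

    def dice_loss(probs, targets, eps=1e-6):
        # Soft dice loss: 1 - dice coefficient; robust to class imbalance
        # because it is normalized by the size of the foreground.
        inter = np.sum(probs * targets)
        return 1.0 - (2.0 * inter + eps) / (np.sum(probs) + np.sum(targets) + eps)

    def cross_entropy_loss(probs, targets, eps=1e-12):
        # Binary cross entropy, averaged over all pixels.
        return -np.mean(targets * np.log(probs + eps)
                        + (1.0 - targets) * np.log(1.0 - probs + eps))

    def hybrid_loss(probs, targets, alpha=0.5):
        # Weighted combination of the two terms; alpha is an assumed
        # hyperparameter, not a value taken from the paper.
        return alpha * dice_loss(probs, targets) + (1.0 - alpha) * cross_entropy_loss(probs, targets)
    ```

    In practice such a loss would be written in an autodiff framework so gradients flow through both terms; the NumPy version only shows the arithmetic.
    
    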

    Nucleus segmentation: towards automated solutions

    Single-nucleus segmentation is a frequent challenge in microscopy image processing, since it is the first step of many quantitative data-analysis pipelines. The quality of tracking single cells, extracting features, or classifying cellular phenotypes strongly depends on segmentation accuracy. Worldwide competitions have been held with the aim of improving segmentation, and recent years have brought significant advances: large annotated datasets are now freely available, several 2D segmentation strategies have been extended to 3D, and deep learning approaches have increased accuracy. However, even today, no generally accepted solution or benchmarking platform exists. We review the most recent single-cell segmentation tools and provide an interactive method browser to select the most appropriate solution.

    Learning Invariant Representations of Images for Computational Pathology


    Probing the Unseen Depths of the Hepatic Microarchitecture via Multimodal Microscopy

    Multimodal microscopy combines the advantages and strengths of different imaging modalities in order to holistically characterise the organisation of biological organisms and their constituent structures under healthy and diseased conditions, down to the spatial resolution required to understand the morphology and function of those structures. Given the profound advantages conferred by such an approach, this work broadly aimed to develop and exploit various multimodal and multi-dimensional imaging modalities in a complementary, combined and/or correlative manner – namely, three-dimensional scanning electron microscopy, transmission electron tomography, bright-field light microscopy, confocal laser scanning microscopy and X-ray micro-computed tomography – in order to characterise and collect new information on the normal and pathological microarchitecture of rodent and human liver tissue in 3-D under various experimental conditions. The data reported in this work include a comparative analysis of a variety of sample preparation protocols applied to rat liver tissue, to determine the suitability of such protocols for serial block-face scanning electron microscopy (SBF-SEM). Next, 3-D modelling and morphometric analysis (utilising the best-performing SBF-SEM protocol) were performed in order to visualise and quantify key features of the hepatic microarchitecture. We further outline a large-volume correlative light and electron microscopy approach utilising selective molecular probes for confocal laser scanning microscopy (actin, lipids and nuclei), combined with the 3-D ultrastructure of the same structures of interest, as revealed by SBF-SEM (Chapter 2).
    Development of a straightforward combinatorial sample preparation approach, followed by a swift multimodal imaging approach – combining X-ray micro-computed tomography, bright-field light microscopy and serial section scanning electron microscopy – facilitated the cross-correlation of structure-function information on the same sample across diverse length scales (Chapter 3). Next, we outline a novel "silver filler pre-embedding approach" to reduce artefactual charging, minimise dataset acquisition time and improve resolution and contrast in rat liver tissue prepared for SBF-SEM (Chapter 4). We then employ a complementary imaging approach involving serial section scanning electron microscopy and transmission electron tomography in order to comparatively analyse the structure and morphometric parameters of thousands of normal and giant mitochondria in human patients diagnosed with non-alcoholic fatty liver disease. In so doing, we reveal functional alterations associated with mitochondrial gigantism and propose a mechanism for their formation (Chapter 5). Finally, the significance of the results obtained, and the major scientific advances reported in this work, are discussed in depth against the relevant literature. This is followed by future outlooks and research that remains to be done, and then by the main conclusions of this Ph.D. thesis (Chapter 6). In summary, our findings firmly establish the immense importance and value of contemporary multimodal microscopy in modern life science research, for holistically revealing cellular structures along the vast length scales across which they exist, under healthy and clinically relevant pathological conditions.

    Cell Segmentation and Tracking using CNN-Based Distance Predictions and a Graph-Based Matching Strategy

    The accurate segmentation and tracking of cells in microscopy image sequences is an important task in biomedical research, e.g., for studying the development of tissues, organs or entire organisms. However, the segmentation of touching cells in images with a low signal-to-noise ratio is still a challenging problem. In this paper, we present a method for the segmentation of touching cells in microscopy images. By using a novel representation of cell borders, inspired by distance maps, our method is capable of utilizing not only touching cells but also close cells in the training process. Furthermore, this representation is notably robust to annotation errors and shows promising results for segmenting microscopy images containing cell types that are underrepresented in, or absent from, the training data. For the prediction of the proposed neighbor distances, an adapted U-Net convolutional neural network (CNN) with two decoder paths is used. In addition, we adapt a graph-based cell tracking algorithm to evaluate our proposed method on the task of cell tracking. The adapted tracking algorithm includes a movement estimation in the cost function to re-link tracks with missing segmentation masks over a short sequence of frames. Our combined tracking-by-detection method has proven its potential in the IEEE ISBI 2020 Cell Tracking Challenge (http://celltrackingchallenge.net/), where, as team KIT-Sch-GE, we achieved multiple top-three rankings, including two top performances using a single segmentation model for the diverse data sets. Comment: 25 pages, 14 figures; methods of team KIT-Sch-GE for the IEEE ISBI 2020 Cell Tracking Challenge.
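    The distance-map-style border representation can be illustrated with a minimal sketch: for each cell, the distance from each of its pixels to the nearest pixel outside the cell, normalized per cell to [0, 1]. This brute-force NumPy version covers only the cell distance map; the paper's neighbor distances (encoding closeness to adjacent cells) and the two-decoder U-Net that predicts them are not reproduced here:

    ```python
    import numpy as np

    def nearest_outside_distance(labels, cell_id):
        # Brute-force Euclidean distance from each pixel of `cell_id` to the
        # nearest pixel NOT belonging to that cell (background or another cell).
        ys, xs = np.nonzero(labels == cell_id)
        oy, ox = np.nonzero(labels != cell_id)
        out = np.zeros(labels.shape)
        for y, x in zip(ys, xs):
            out[y, x] = np.sqrt(((oy - y) ** 2 + (ox - x) ** 2).min())
        return out

    def distance_targets(labels):
        # Cell distance map as a training target: distance to the cell border,
        # normalized per cell so every cell peaks at 1.0 regardless of size.
        cell_dist = np.zeros(labels.shape)
        for cid in np.unique(labels):
            if cid == 0:  # 0 = background
                continue
            d = nearest_outside_distance(labels, cid)
            m = d.max()
            if m > 0:
                cell_dist += d / m
        return cell_dist
    ```

    A production pipeline would use an efficient distance transform (e.g. from an image-processing library) rather than the quadratic loop above; the sketch only shows what the target encodes.
    
    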

    OpSeF: Open Source Python Framework for Collaborative Instance Segmentation of Bioimages

    Various pre-trained deep learning models for the segmentation of bioimages have been made available as developer-to-end-user solutions. They are optimized for ease of use and usually require neither knowledge of machine learning nor coding skills. However, individually testing these tools is tedious and success is uncertain. Here, we present the Open Segmentation Framework (OpSeF), a Python framework for deep learning-based instance segmentation. OpSeF aims at facilitating the collaboration of biomedical users with experienced image analysts. It builds on the analysts' knowledge of Python, machine learning, and workflow design to solve complex analysis tasks at any scale in a reproducible, well-documented way. OpSeF defines standard inputs and outputs, thereby facilitating modular workflow design and interoperability with other software. Users play an important role in problem definition, quality control, and manual refinement of results. OpSeF semi-automates preprocessing, convolutional neural network (CNN)-based segmentation in 2D or 3D, and postprocessing. It facilitates benchmarking of multiple models in parallel. OpSeF streamlines the optimization of pre- and postprocessing parameters such that an available model may frequently be used without retraining. Even if sufficiently good results are not achievable with this approach, intermediate results can inform the analyst's selection of the most promising CNN architecture, in which the biomedical user might then invest the effort of manually labeling training data. We provide Jupyter notebooks that document sample workflows based on various image collections. Analysts may find these notebooks useful for illustrating common segmentation challenges, as they prepare the advanced user to gradually take over some of their tasks and complete their projects independently.
    The notebooks may also be used to explore the analysis options available within OpSeF in an interactive way and to document and share final workflows. Currently, three mechanistically distinct CNN-based segmentation methods have been integrated within OpSeF: the U-Net implementation used in CellProfiler 3.0, StarDist, and Cellpose. Adding new networks requires little effort; adding new models requires no coding skills. Thus, OpSeF might soon become an interactive model repository, in which pre-trained models can be shared, evaluated, and reused with ease.
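    The modular design the abstract describes (standardized inputs/outputs so preprocessing, segmentation, and postprocessing stages are interchangeable) can be sketched as a chain of callables. All function names below are illustrative placeholders, not OpSeF's actual API:

    ```python
    import numpy as np

    def preprocess(img):
        # Placeholder preprocessing: rescale intensities to [0, 1].
        img = img.astype(float)
        return (img - img.min()) / (img.max() - img.min() + 1e-9)

    def threshold_segment(img, thr=0.5):
        # Stand-in for a CNN-based segmenter. Any callable with the same
        # signature (image in, label mask out) could be swapped in, which is
        # what makes benchmarking multiple models in parallel straightforward.
        return (img > thr).astype(int)

    def postprocess(mask, min_size=2):
        # Trivial placeholder cleanup: discard masks below a minimum size.
        return mask if mask.sum() >= min_size else np.zeros_like(mask)

    def run_pipeline(img, steps):
        # Standardized interface: each step consumes and returns an array,
        # so stages compose freely and workflows stay reproducible.
        for step in steps:
            img = step(img)
        return img
    ```

    Swapping `threshold_segment` for a wrapper around a pre-trained model (StarDist, Cellpose, a U-Net) would leave the rest of the pipeline untouched, which is the interoperability point the abstract makes.
    
    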