Visual parameter optimisation for biomedical image processing
Background: Biomedical image processing methods require users to optimise input parameters to ensure high quality
output. This presents two challenges. First, it is difficult to optimise multiple input parameters for multiple
input images. Second, it is difficult to achieve an understanding of underlying algorithms, in particular, relationships
between input and output.
Results: We present a visualisation method that transforms users’ ability to understand algorithm behaviour by
integrating input and output, and by supporting exploration of their relationships. We discuss its application to a
colour deconvolution technique for stained histology images and show how it enabled a domain expert to
identify suitable parameter values for the deconvolution of two types of images, and metrics to quantify
deconvolution performance. It also enabled a breakthrough in understanding by invalidating an underlying
assumption about the algorithm.
Conclusions: The visualisation method presented here provides analysis capability for multiple inputs and outputs
in biomedical image processing that is not supported by previous analysis software. The analysis supported by our
method is not feasible with conventional trial-and-error approaches.
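The kind of parameter optimisation the abstract describes can be illustrated with a minimal sketch. The unmixing below follows the standard Ruifrok-Johnston style of colour deconvolution; the H&E stain vectors are commonly quoted defaults, not values from the paper, and `sweep_haematoxylin` is a hypothetical helper showing the sort of input-parameter sweep whose input-output relationships the visualisation exposes.

```python
import numpy as np

# Commonly quoted H&E stain vectors (assumed defaults, not the paper's values):
# rows are haematoxylin, eosin, and a residual channel.
HE_STAINS = np.array([
    [0.650, 0.704, 0.286],
    [0.072, 0.990, 0.105],
    [0.268, 0.570, 0.776],
])
HE_STAINS /= np.linalg.norm(HE_STAINS, axis=1, keepdims=True)

def deconvolve(rgb, stains=HE_STAINS):
    """Unmix an (H, W, 3) RGB image in [0, 255] into per-stain densities."""
    od = -np.log10((rgb.astype(float) + 1.0) / 256.0)  # optical density
    return od.reshape(-1, 3) @ np.linalg.inv(stains)   # stain concentrations

def sweep_haematoxylin(img, values):
    """Hypothetical sweep of one stain-vector component: for each candidate
    value, record an output metric (here, mean haematoxylin density)."""
    results = {}
    for v in values:
        stains = HE_STAINS.copy()
        stains[0, 0] = v
        stains /= np.linalg.norm(stains, axis=1, keepdims=True)
        results[v] = deconvolve(img, stains)[:, 0].mean()
    return results
```

Plotting such a sweep per input image is a rough, non-interactive stand-in for what the integrated input-output visualisation makes explorable.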
Cascaded Graph Convolution Approach for Nuclei Detection in Histopathology Images
Nuclei detection in histopathology images of cancerous tissue stained with conventional hematoxylin and eosin is a challenging task due to the complexity and diversity of cell data. Deep learning techniques have produced encouraging results in nuclei detection, where the main emphasis has been on classification- and regression-based methods. Recent research has demonstrated that regression-based techniques outperform classification-based ones. In this paper, we propose a classification model based on graph convolutions to classify nuclei, and similar models to detect nuclei using a cascaded architecture. With nearly 29,000 annotated nuclei in a large dataset of cancer histology images, we evaluated Convolutional Neural Network (CNN) and Graph Convolutional Network (GCN) based approaches. Our findings demonstrate that graph convolutions perform better with a cascaded GCN architecture and are more stable than the centre-of-pixel approach. We compared our quantitative results from two-fold evaluation with CNN-based models such as the Spatially Constrained Convolutional Neural Network (SC-CNN) and the Centre-of-Pixel Convolutional Neural Network (CP-CNN). We used two different loss functions, binary cross-entropy and focal loss, and investigated the behaviour of the CP-CNN and GCN models to observe the effectiveness of the CNN and GCN operators. The F1 score of the cascaded GCN shows an improvement of 6% over state-of-the-art methods.
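The graph-convolution operator a cascaded GCN stacks can be sketched in a few lines. This is a minimal layer in the widely used normalised-adjacency style, with illustrative dimensions; the paper's actual architecture, features, and hyperparameters are not reproduced here.

```python
import numpy as np

def gcn_layer(adj, feats, weights):
    """One graph convolution: symmetrically normalised neighbourhood
    aggregation followed by a linear projection and ReLU.

    adj:     (N, N) binary adjacency (e.g. nuclei candidates linked by proximity)
    feats:   (N, F_in) node features
    weights: (F_in, F_out) learnable projection (random/identity here)
    """
    a_hat = adj + np.eye(adj.shape[0])              # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))   # D^{-1/2}
    norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(norm @ feats @ weights, 0.0)  # ReLU activation

# Tiny example: three candidate nuclei in a chain, two features each.
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
out = gcn_layer(adj, np.ones((3, 2)), np.eye(2))
```

A cascade, in this sketch's terms, would feed the output of one such layer (or model) into the next, refining the nucleus/background decision at each stage.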
Fuzzy-based Propagation of Prior Knowledge to Improve Large-Scale Image Analysis Pipelines
Many automatically analyzable scientific questions are well-posed and offer a
variety of information about the expected outcome a priori. Although often
being neglected, this prior knowledge can be systematically exploited to make
automated analysis operations sensitive to a desired phenomenon or to evaluate
extracted content with respect to this prior knowledge. For instance, the
performance of processing operators can be greatly enhanced by a more focused
detection strategy and the direct information about the ambiguity inherent in
the extracted data. We present a new concept for the estimation and propagation
of uncertainty involved in image analysis operators. This allows using simple
processing operators that are suitable for analyzing large-scale 3D+t
microscopy images without compromising the result quality. On the foundation of
fuzzy set theory, we transform available prior knowledge into a mathematical
representation and use it extensively to enhance the result quality of various
processing operators. All presented concepts are illustrated on a typical
bioimage analysis pipeline comprising seed point detection, segmentation,
multiview fusion and tracking. Furthermore, the functionality of the proposed
approach is validated on a comprehensive simulated 3D+t benchmark data set that
mimics embryonic development and on large-scale light-sheet microscopy data of
a zebrafish embryo. The general concept introduced in this contribution
represents a new approach to efficiently exploit prior knowledge to improve the
result quality of image analysis pipelines. In particular, the automated analysis
of terabyte-scale microscopy data will benefit from sophisticated and efficient
algorithms that enable a quantitative and fast readout. The generality of the
concept, however, also makes it applicable to practically any other field with
processing strategies that are arranged as linear pipelines.
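The core fuzzy-set idea can be sketched concretely: encode a prior (e.g. a plausible nucleus radius) as a membership function, score each extracted object with it, and propagate the resulting uncertainty to later pipeline stages by combining priors with a t-norm. The trapezoid breakpoints and the minimum t-norm below are illustrative assumptions, not the paper's specific choices.

```python
import numpy as np

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership in [0, 1]: 0 below a and above d,
    1 on [b, c], with linear ramps in between."""
    x = np.asarray(x, dtype=float)
    rise = np.clip((x - a) / (b - a), 0.0, 1.0)
    fall = np.clip((d - x) / (d - c), 0.0, 1.0)
    return np.minimum(rise, fall)

def combine(*memberships):
    """Conjunctive fusion of independent priors (minimum t-norm)."""
    return np.minimum.reduce([np.asarray(m, dtype=float) for m in memberships])

# Illustrative priors for three detected objects (breakpoints are assumptions):
size_prior = trapezoid([2.0, 6.0, 20.0], a=3, b=5, c=10, d=15)       # radius
intensity_prior = trapezoid([0.9, 0.8, 0.1], a=0.2, b=0.4, c=1.0, d=1.1)
confidence = combine(size_prior, intensity_prior)  # carried downstream
```

Each downstream operator (segmentation, fusion, tracking) can then weight or gate its inputs by `confidence` instead of hard-thresholding them, which is what makes simple operators usable at scale without sacrificing result quality.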