Leveraging Domain Knowledge to Improve Microscopy Image Segmentation with Lifted Multicuts
The throughput of electron microscopes has increased significantly in recent
years, enabling detailed analysis of cell morphology and ultrastructure.
Analysis of neural circuits at single-synapse resolution remains the flagship
target of this technique, but applications to cell and developmental biology
are also starting to emerge at scale. The amount of data acquired in such
studies makes manual instance segmentation, a fundamental step in many analysis
pipelines, impossible. While automatic segmentation approaches have improved
significantly thanks to the adoption of convolutional neural networks, their
accuracy still lags behind human annotations and requires additional manual
proof-reading. A major hindrance to further improvements is the limited field
of view of the segmentation networks preventing them from exploiting the
expected cell morphology or other prior biological knowledge which humans use
to inform their segmentation decisions. In this contribution, we show how such
domain-specific information can be leveraged by expressing it as long-range
interactions in a graph partitioning problem known as the lifted multicut
problem. Using this formulation, we demonstrate significant improvement in
segmentation accuracy for three challenging EM segmentation problems from
neuroscience and cell biology.
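The lifted multicut idea above can be sketched on a toy graph: local edges carry boundary evidence from the segmentation network, while a lifted (long-range) edge encodes prior biological knowledge that two distant fragments belong to different cells. The edge weights below are invented for illustration, and the brute-force solver merely stands in for the specialized solvers used in practice; in the lifted multicut problem, each cluster must be connected in the local graph, while lifted edges only contribute to the objective.

```python
from itertools import product
from collections import defaultdict

# Toy lifted multicut instance (weights are illustrative assumptions).
# Local edges: boundary evidence along a chain of fragments 0-1-2-3-4.
# Lifted edge (0, 4): a long-range repulsive prior saying fragments 0 and 4
# belong to different cells (negative cost rewards separating them).
nodes = range(5)
local_edges = {(0, 1): 2.0, (1, 2): 2.0, (2, 3): 2.0, (3, 4): 2.0}
lifted_edges = {(0, 4): -5.0}

adjacency = defaultdict(set)
for u, v in local_edges:
    adjacency[u].add(v)
    adjacency[v].add(u)

def connected(cluster, adjacency):
    """Check that `cluster` is connected via local edges only."""
    cluster = set(cluster)
    stack, seen = [next(iter(cluster))], set()
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        stack.extend(v for v in adjacency[u] if v in cluster)
    return seen == cluster

def solve_brute_force():
    """Enumerate all node labelings; keep feasible, connected partitions."""
    best_cost, best_labels = float("inf"), None
    for labels in product(nodes, repeat=len(nodes)):
        clusters = defaultdict(list)
        for node, lab in zip(nodes, labels):
            clusters[lab].append(node)
        # Feasibility: every cluster must be connected in the local graph.
        if not all(connected(c, adjacency) for c in clusters.values()):
            continue
        # Objective: total weight of all (local and lifted) cut edges.
        cost = sum(w for (u, v), w in {**local_edges, **lifted_edges}.items()
                   if labels[u] != labels[v])
        if cost < best_cost:
            best_cost, best_labels = cost, labels
    return best_cost, best_labels
```

Without the lifted edge the trivial all-in-one partition (cost 0) wins; with it, the optimum cuts one local edge (+2) to realize the repulsive prior (-5), separating fragments 0 and 4.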
NuClick: A Deep Learning Framework for Interactive Segmentation of Microscopy Images
Object segmentation is an important step in the workflow of computational
pathology. Deep learning based models generally require a large amount of
labeled data for precise and reliable prediction. However, collecting labeled
data is expensive because it often requires expert knowledge, particularly in
the medical imaging domain, where labels are the result of a time-consuming analysis made by
one or more human experts. As nuclei, cells and glands are fundamental objects
for downstream analysis in computational pathology/cytology, in this paper we
propose a simple CNN-based approach to speed up collecting annotations for
these objects, requiring minimal interaction from the annotator. We show
that for nuclei and cells in histology and cytology images, one click inside
each object is enough for NuClick to yield a precise annotation. For
multicellular structures such as glands, we propose a novel approach to provide
NuClick with a squiggle as a guiding signal, enabling it to segment the
glandular boundaries. These supervisory signals are fed to the network as
auxiliary inputs along with RGB channels. With detailed experiments, we show
that NuClick is adaptable to the object scale, robust against variations in the
user input, adaptable to new domains, and delivers reliable annotations. An
instance segmentation model trained on masks generated by NuClick achieved the
first rank in the LYON19 challenge. As exemplar outputs of our framework, we are
releasing two datasets: 1) a dataset of lymphocyte annotations within IHC
images, and 2) a dataset of segmented WBCs in blood smear images.
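The auxiliary-input mechanism described above can be sketched as follows. The channel layout and shapes here are assumptions for illustration, not the authors' exact pipeline: the user's click (or a rasterized squiggle) becomes a binary guidance map that is stacked with the RGB channels, so the network receives a multi-channel input.

```python
import numpy as np

# Sketch of click/squiggle guidance as auxiliary input channels.
# Shapes, channel order, and the inclusion/exclusion split are assumptions.
h, w = 256, 256
rgb = np.random.rand(h, w, 3).astype(np.float32)  # stand-in for a real image

inclusion = np.zeros((h, w), np.float32)  # marks the object to segment
exclusion = np.zeros((h, w), np.float32)  # marks touching neighbours
inclusion[120, 130] = 1.0                 # one click inside the target nucleus
exclusion[40, 40] = 1.0                   # one click inside a neighbour

# Stack guidance maps with RGB: the network sees a 5-channel input.
net_input = np.concatenate(
    [rgb, inclusion[..., None], exclusion[..., None]], axis=-1)
assert net_input.shape == (256, 256, 5)
```

For a squiggle, the same maps would simply contain the rasterized stroke pixels instead of a single click.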
Gland Segmentation in Histopathology Images Using Random Forest Guided Boundary Construction
Grading of cancer is important for determining the extent of its spread, and
segmentation of glandular structures is an essential precursor to grading.
Manual segmentation is a time-consuming process and is subject to observer
bias; hence, an automated process is required to segment the gland structures.
These glands show a large variation in shape, size and texture, which makes the
task challenging: the glands cannot be segmented using mere morphological
operations and conventional segmentation mechanisms. In this project we propose
a method which first detects the boundary epithelial cells of glands and then
uses a novel approach to construct the complete gland boundary. The region
enclosed within the boundary can then be extracted to obtain the segmented gland regions.
Comment: 7 pages, 8 figures
GRED: Graph-Regularized 3D Shape Reconstruction from Highly Anisotropic and Noisy Images
Analysis of microscopy images can provide insight into many biological
processes. One particularly challenging problem is cell nuclear segmentation in
highly anisotropic and noisy 3D image data. Manually localizing and segmenting
each and every cell nucleus is very time consuming, which remains a bottleneck
in large-scale biological experiments. In this work we present a tool for
automated segmentation of cell nuclei from 3D fluorescent microscopic data. Our
tool is based on state-of-the-art image processing and machine learning
techniques and provides a friendly graphical user interface (GUI). We show that
our tool is as accurate as manual annotation but greatly reduces the time
required.
Automated Mouse Organ Segmentation: A Deep Learning Based Solution
The analysis of animal cross-section images, such as cross sections of
laboratory mice, is critical in assessing the effect of experimental drugs, for
example the biodistribution of candidate compounds in the preclinical drug
development stage. Tissue distribution of radiolabeled candidate therapeutic
compounds can be quantified using techniques like Quantitative Whole-Body
Autoradiography (QWBA). QWBA relies, among other aspects, on the accurate
segmentation or identification of key organs of interest in the animal
cross-section image, such as the brain, spine, heart, liver and others. We
present a deep learning based organ segmentation solution to this problem, with
which we achieve automated organ segmentation with high precision (Dice
coefficient in the 0.83-0.95 range depending on organ) for the key organs of interest.
Comment: 8 pages
Microscopic Nuclei Classification, Segmentation and Detection with improved Deep Convolutional Neural Network (DCNN) Approaches
Due to cellular heterogeneity, cell nuclei classification, segmentation, and
detection from pathological images are challenging tasks. In the last few
years, Deep Convolutional Neural Network (DCNN) approaches have shown
state-of-the-art (SOTA) performance on histopathological imaging in different
studies. In this work, we propose different advanced DCNN models and evaluate
them for nuclei classification, segmentation, and detection. First, the
Densely Connected Recurrent Convolutional Network (DCRN) model is used for
nuclei classification. Second, the Recurrent Residual U-Net (R2U-Net) is
applied for nuclei segmentation. Third, an R2U-Net regression model, named
UD-Net, is used for nuclei detection from pathological images. The experiments
are conducted on different datasets, including the Routine Colon Cancer (RCC)
classification and detection dataset and the Nuclei Segmentation Challenge 2018
dataset. The experimental results show that the proposed DCNN models provide
superior performance compared to existing approaches for nuclei
classification, segmentation, and detection tasks. The results are evaluated
with different performance metrics, including precision, recall, Dice
Coefficient (DC), Mean Squared Error (MSE), F1-score, and overall accuracy.
We achieve around 3.4% and 4.5% better F1-scores for the nuclei
classification and detection tasks compared to recently published DCNN-based
methods. In addition, R2U-Net shows around 92.15% testing accuracy in terms of
DC. These improved methods will support pathological practice through better
quantitative analysis of nuclei in Whole Slide Images (WSI), which will
ultimately help in better understanding different types of cancer in the clinical
workflow.
Comment: 18 pages, 16 figures, 3 tables
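The Dice Coefficient used as an evaluation metric above measures the overlap between a predicted and a ground-truth mask, defined as 2|A∩B| / (|A| + |B|). A minimal implementation:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice Coefficient between two binary masks: 2|A∩B| / (|A| + |B|).

    `eps` guards against division by zero when both masks are empty.
    """
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

A perfect prediction yields 1.0, fully disjoint masks yield (approximately) 0.0, and partial overlap falls in between.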
Radial feature descriptors for cell classification and recommendation
This paper introduces computational tools for cell classification into normal and abnormal classes, as well as content-based image retrieval (CBIR) for cell recommendation. It also proposes the radial feature descriptors (RFD), which define evenly interspaced segments around the nucleus, proportional to the convexity of the nuclear boundary. Experiments consider the Herlev and CRIC image databases as input to classification via Random Forest and bootstrap; we compare 14 different feature sets by means of the False Negative Rate (FNR) and Kappa (κ), obtaining FNR = 0.02 and κ = 0.89 for Herlev, and FNR = 0.14 and κ = 0.78 for CRIC. Next, we sort and rank cell images using convolutional neural networks and evaluate performance with the Mean Average Precision (MAP), achieving MAP = 0.84 and MAP = 0.82 for Herlev and CRIC, respectively. Cell classification shows encouraging results regarding RFD, including its sensitivity to intensity variation around the nuclear membrane, as it bypasses cytoplasm segmentation.
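A descriptor built from evenly interspaced angular segments around the nucleus can be sketched as below. This is a simplification of the paper's RFD, not its exact definition: each angular bin records the mean boundary radius measured from the nuclear centroid.

```python
import numpy as np

def radial_descriptor(boundary_xy, n_segments=16):
    """Mean boundary radius per evenly spaced angular segment (a sketch
    inspired by RFD; the exact descriptor in the paper differs)."""
    centroid = boundary_xy.mean(axis=0)
    rel = boundary_xy - centroid
    angles = np.arctan2(rel[:, 1], rel[:, 0]) % (2 * np.pi)
    radii = np.hypot(rel[:, 0], rel[:, 1])
    # Assign each boundary point to one of n_segments angular bins.
    bins = (angles / (2 * np.pi) * n_segments).astype(int) % n_segments
    desc = np.zeros(n_segments)
    for b in range(n_segments):
        in_bin = radii[bins == b]
        desc[b] = in_bin.mean() if in_bin.size else 0.0
    return desc

# A circular boundary of radius 10 yields a flat descriptor.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.stack([10 * np.cos(theta), 10 * np.sin(theta)], axis=1)
```

Because the descriptor reads only the boundary geometry, it sidesteps cytoplasm segmentation entirely, in the same spirit as the paper's remark.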
Weakly Supervised Deep Nuclei Segmentation Using Partial Points Annotation in Histopathology Images
Nuclei segmentation is a fundamental task in histopathology image analysis.
Typically, such segmentation tasks require significant effort to manually
generate accurate pixel-wise annotations for fully supervised training. To
alleviate such tedious and manual effort, in this paper we propose a novel
weakly supervised segmentation framework based on partial points annotation,
i.e., only a small portion of nuclei locations in each image are labeled. The
framework consists of two learning stages. In the first stage, we design a
semi-supervised strategy to learn a detection model from partially labeled
nuclei locations. Specifically, an extended Gaussian mask is designed to train
an initial model with partially labeled data. Then, selftraining with
background propagation is proposed to make use of the unlabeled regions to
boost nuclei detection and suppress false positives. In the second stage, a
segmentation model is trained from the detected nuclei locations in a
weakly-supervised fashion. Two types of coarse labels with complementary
information are derived from the detected points and are then utilized to train
a deep neural network. The fully-connected conditional random field loss is
utilized in training to further refine the model without introducing extra
computational complexity during inference. The proposed method is extensively
evaluated on two nuclei segmentation datasets. The experimental results
demonstrate that our method can achieve competitive performance compared to the
fully supervised counterpart and the state-of-the-art methods while requiring
significantly less annotation effort.
Comment: 12 pages
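The extended Gaussian mask idea from the first stage can be sketched as follows. The sigma value and the max-combination of overlapping peaks are assumptions for illustration: each labeled nucleus point becomes a soft Gaussian peak, giving the detection network a smooth target instead of a single labeled pixel.

```python
import numpy as np

def gaussian_mask(shape, points, sigma=4.0):
    """Soft detection target from point annotations (sigma is an assumed
    value; the paper's extended Gaussian mask may be parameterized
    differently). Each point becomes a Gaussian peak of height 1."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    mask = np.zeros(shape, np.float32)
    for py, px in points:
        g = np.exp(-((yy - py) ** 2 + (xx - px) ** 2) / (2 * sigma ** 2))
        mask = np.maximum(mask, g)  # take the max where peaks overlap
    return mask

mask = gaussian_mask((64, 64), [(20, 20), (40, 45)])
```

A detection model can then regress this map, with peaks recovered by local-maximum search at inference time.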
DeephESC 2.0: Deep Generative Multi Adversarial Networks for improving the classification of hESC.
Human embryonic stem cells (hESC), derived from blastocysts, provide unique cellular models for numerous potential applications. They hold great promise for the treatment of diseases such as Parkinson's, Huntington's, diabetes mellitus, etc. hESC are a reliable developmental model for early embryonic growth because of their ability to divide indefinitely (pluripotency) and differentiate, or functionally change, into any adult cell type. Their adaptation to toxicological studies is particularly attractive, as pluripotent stem cells can be used to model various stages of prenatal development. Automated detection and classification of human embryonic stem cells in videos is of great interest among biologists for quantified analysis of various states of hESC in experimental work. Currently, video annotation is done by hand, a process which is very time-consuming and exhausting. To solve this problem, this paper introduces DeephESC 2.0, an automated machine learning approach consisting of two parts: (a) Generative Multi Adversarial Networks (GMAN) for generating synthetic images of hESC, and (b) a hierarchical classification system consisting of Convolutional Neural Networks (CNN) and Triplet CNNs to classify phase contrast hESC images into six different classes, namely: Cell clusters, Debris, Unattached cells, Attached cells, Dynamically Blebbing cells and Apoptotically Blebbing cells. The approach is totally non-invasive and does not require any chemical treatment or staining of hESC. DeephESC 2.0 is able to classify hESC images with an accuracy of 93.23%, outperforming state-of-the-art approaches by at least 20%. Furthermore, DeephESC 2.0 is able to generate a large number of synthetic images which can be used for augmenting the dataset. Experimental results show that training DeephESC 2.0 exclusively on a large amount of synthetic images helps to improve the performance of the classifier on original images from 93.23% to 94.46%.
This paper also evaluates the quality of the generated synthetic images using the Structural SIMilarity (SSIM) index, Peak Signal-to-Noise Ratio (PSNR) and statistical p-value metrics, and compares them with state-of-the-art approaches for generating synthetic images. DeephESC 2.0 saves hundreds of hours of manual labor which would otherwise be spent on manually or semi-manually annotating more and more videos.
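The PSNR metric used above for judging synthetic image quality is derived directly from the mean squared error between two images. A minimal implementation, assuming intensities normalized to [0, 1]:

```python
import numpy as np

def psnr(img, ref, max_val=1.0):
    """Peak Signal-to-Noise Ratio in dB: 10 * log10(max_val^2 / MSE).

    Assumes both images share the [0, max_val] intensity range.
    Identical images have zero MSE, so PSNR is infinite by convention.
    """
    mse = np.mean((img - ref) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)
```

Higher PSNR means the synthetic image is closer to its reference; SSIM complements it by comparing local structure rather than raw pixel error.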
Image Segmentation and Classification for Sickle Cell Disease using Deformable U-Net
Reliable cell segmentation and classification from biomedical images is a
crucial step for both scientific research and clinical practice. A major
challenge for more robust segmentation and classification methods is the large
variation in the size, shape and viewpoint of the cells, combined with the
low image quality caused by noise and artifacts. To address this issue, in this
work we propose a learning-based, simultaneous cell segmentation and
classification method based on the deep U-Net structure with deformable
convolution layers. The U-Net architecture for deep learning has been shown to
offer precise localization for image semantic segmentation. Moreover, the
deformable convolution layer enables free-form deformation of the feature
learning process, thus making the whole network more robust to various cell
morphologies and image settings. The proposed method is tested on microscopic
red blood cell images from patients with sickle cell disease. The results show
that U-Net with deformable convolution achieves the highest accuracy for
segmentation and classification, compared with the original U-Net structure.
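The core mechanism of a deformable convolution can be sketched in NumPy. This is a simplified single-channel version with externally supplied offsets, not the paper's learned-offset U-Net layer: each kernel tap samples the input at its regular grid position plus a per-position (dy, dx) shift, using bilinear interpolation so fractional shifts are differentiable in a real framework.

```python
import numpy as np

def bilinear(img, y, x):
    """Bilinearly interpolate img at fractional (y, x); zero outside."""
    h, w = img.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = y0 + 1, x0 + 1
    def px(yy, xx):
        return img[yy, xx] if 0 <= yy < h and 0 <= xx < w else 0.0
    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * px(y0, x0) + (1 - wy) * wx * px(y0, x1)
            + wy * (1 - wx) * px(y1, x0) + wy * wx * px(y1, x1))

def deform_conv2d(img, kernel, offsets):
    """Single-channel deformable convolution (illustrative sketch).

    offsets has shape (H, W, k*k, 2): a learned (dy, dx) shift for every
    output position and kernel tap. Zero offsets recover a regular,
    zero-padded convolution.
    """
    k = kernel.shape[0]
    pad = k // 2
    h, w = img.shape
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            acc = 0.0
            taps = ((a, b) for a in range(k) for b in range(k))
            for t, (di, dj) in enumerate(taps):
                dy, dx = offsets[i, j, t]
                acc += kernel[di, dj] * bilinear(
                    img, i + di - pad + dy, j + dj - pad + dx)
            out[i, j] = acc
    return out
```

In the paper's setting the offsets are themselves produced by a convolutional layer and trained end-to-end, which is what lets the receptive field adapt to varied cell morphologies.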