Cell Detection with Star-convex Polygons
Automatic detection and segmentation of cells and nuclei in microscopy images is important for many biological applications. Recent successful learning-based approaches include per-pixel cell segmentation with subsequent pixel grouping, or localization of bounding boxes with subsequent shape refinement. For crowded cells, both can be prone to segmentation errors, such as falsely merging bordering cells or suppressing valid cell instances due to the poor shape approximation of bounding boxes. To overcome these issues, we propose to localize cell nuclei via star-convex polygons, which are a much better shape representation than bounding boxes and thus do not require shape refinement. To that end, we train a convolutional neural network that predicts, for every pixel, a polygon for the cell instance at that position. We demonstrate the merits of our approach on two synthetic datasets and one
challenging dataset of diverse fluorescence microscopy images.
Comment: Conference paper at MICCAI 2018
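A star-convex polygon is parameterized by the distances from a pixel inside the object to its boundary along a fixed set of radial directions. A minimal sketch of this parameterization, assuming a simple ray-marching scheme on a binary mask (the function and marching step are illustrative, not the paper's implementation):

```python
import numpy as np

def star_convex_distances(mask, center, n_rays=8, max_steps=200):
    """March along n_rays equally spaced directions from `center`
    until leaving the object; return the boundary distances."""
    cy, cx = center
    angles = 2 * np.pi * np.arange(n_rays) / n_rays
    dists = np.zeros(n_rays)
    for k, a in enumerate(angles):
        dy, dx = np.sin(a), np.cos(a)
        for step in range(1, max_steps):
            y = int(round(cy + step * dy))
            x = int(round(cx + step * dx))
            outside = (y < 0 or y >= mask.shape[0] or
                       x < 0 or x >= mask.shape[1] or not mask[y, x])
            if outside:
                dists[k] = step
                break
    return dists

# Toy example: a filled disk of radius 10; every ray length comes out
# close to the radius, as expected for this convex (hence star-convex) shape.
yy, xx = np.mgrid[:64, :64]
disk = (yy - 32) ** 2 + (xx - 32) ** 2 <= 10 ** 2
r = star_convex_distances(disk, (32, 32), n_rays=8)
```

In the paper, a CNN predicts such a polygon densely for every pixel, rather than computing it from an already-known mask as in this toy.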
A benchmark for epithelial cell tracking
Segmentation and tracking of epithelial cells in light microscopy (LM) movies of developing tissue is a common task in cell and developmental biology. Epithelial cells are densely packed cells that form a honeycomb-like grid. This dense packing distinguishes membrane-stained epithelial cells from the types of objects recent cell tracking benchmarks have focused on, like cell nuclei and freely moving individual cells. While semi-automated tools for segmentation and tracking of epithelial cells are available to biologists, common tools rely on classical watershed-based segmentation and engineered tracking heuristics, and entail a tedious phase of manual curation. However, a different kind of densely packed cell imagery has become a focus of recent computer vision research, namely electron microscopy (EM) images of neurons. In this work, we explore the benefits of two recent neuron EM segmentation methods for epithelial cell tracking in light microscopy. In particular, we adapt two different deep learning approaches for neuron segmentation, namely Flood Filling Networks and MALA, to epithelial cell tracking. We benchmark these on a dataset of eight movies with up to 200 frames. We compare to Moral Lineage Tracing, a combinatorial optimization approach that recently claimed state-of-the-art results for epithelial cell tracking. Furthermore, we compare to Tissue Analyzer, an off-the-shelf tool used by biologists that serves as our baseline.
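To make the tracking task concrete, frame-to-frame linking of segmented cells can be sketched with a naive overlap heuristic; this is a toy baseline for illustration only, not the Flood Filling Networks, MALA, or Moral Lineage Tracing methods discussed above:

```python
import numpy as np

def link_by_overlap(labels_t, labels_t1, min_iou=0.1):
    """Match each labeled cell in frame t to the label in frame t+1
    with the highest IoU, if it exceeds min_iou (greedy toy heuristic)."""
    links = {}
    for lab in np.unique(labels_t):
        if lab == 0:  # 0 = background
            continue
        a = labels_t == lab
        best, best_iou = None, min_iou
        for lab1 in np.unique(labels_t1[a]):  # only labels overlapping cell `lab`
            if lab1 == 0:
                continue
            b = labels_t1 == lab1
            iou = np.logical_and(a, b).sum() / np.logical_or(a, b).sum()
            if iou > best_iou:
                best, best_iou = int(lab1), iou
        links[int(lab)] = best
    return links

# Two 1-D "frames" with two cells each, shifted by one pixel.
f0 = np.array([1, 1, 1, 0, 2, 2, 2, 0])
f1 = np.array([0, 5, 5, 5, 0, 7, 7, 7])
links = link_by_overlap(f0, f1)  # {1: 5, 2: 7}
```

Real epithelial tracking additionally has to handle divisions, deaths, and segmentation errors, which is why the benchmarked methods go well beyond per-frame overlap.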
Learning Shape Representation on Sparse Point Clouds for Volumetric Image Segmentation
Volumetric image segmentation with convolutional neural networks (CNNs) encounters several challenges that are specific to medical images, among them large volumes of interest, high class imbalance, and difficulty in learning shape representations. To tackle these challenges, we propose to improve over traditional CNN-based volumetric image segmentation through point-wise classification of point clouds. The sparsity of point clouds allows processing of entire image volumes, balancing highly imbalanced segmentation problems, and explicitly learning an anatomical shape. We build upon PointCNN, a neural network proposed to process point clouds, and propose to jointly encode shape and volumetric information within the point cloud in a compact and computationally efficient manner. We demonstrate how this approach can then be used to refine CNN-based segmentation, which yields significantly improved results in our experiments on the difficult task of peripheral nerve segmentation from magnetic resonance neurography images. By synthetic experiments, we further show the capability of our approach in
learning an explicit anatomical shape representation.
Comment: Accepted at MICCAI 2019
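The idea of trading a dense volume for a sparse point cloud can be illustrated by sampling only the voxels where the CNN is uncertain; the sampling rule, thresholds, and feature layout here are assumptions for illustration, not the paper's exact scheme:

```python
import numpy as np

def uncertain_point_cloud(prob, low=0.2, high=0.8, n_points=1024, seed=0):
    """Turn a CNN probability volume into a sparse point cloud by
    sampling voxels with uncertain predictions; each point carries its
    normalized coordinates plus the probability as a feature."""
    rng = np.random.default_rng(seed)
    idx = np.argwhere((prob > low) & (prob < high))  # uncertain voxels
    if len(idx) > n_points:
        idx = idx[rng.choice(len(idx), n_points, replace=False)]
    coords = idx / np.array(prob.shape)              # normalize to [0, 1)
    feats = prob[tuple(idx.T)][:, None]              # per-point probability
    return np.hstack([coords, feats])                # (N, 4) array

# Toy volume: a blurry tubular structure running along the z-axis.
z, y, x = np.mgrid[:16, :16, :16]
prob = np.exp(-((y - 8) ** 2 + (x - 8) ** 2) / 20.0)
cloud = uncertain_point_cloud(prob, n_points=256)
```

A point classifier such as PointCNN can then relabel these sparse points, which is far cheaper than densely reprocessing the whole volume.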
TGIF: topological gap in-fill for vascular networks
This paper describes a new approach for the reconstruction of complete 3-D arterial trees from incomplete image data. We utilize a physiologically motivated simulation framework to iteratively generate artificial, yet physiologically meaningful, vasculatures for the correction of vascular connectivity. The generative approach is guided by a simplified angiogenesis model, while topological and morphological evidence extracted from the image data is simultaneously considered to form functionally adequate tree models. We evaluate the effectiveness of our method on four synthetic datasets using different metrics to assess topological and functional differences. Our experiments show that the proposed generative approach is superior to state-of-the-art approaches that only consider topology for vessel reconstruction, and that it performs consistently well across different problem sizes and topologies.
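For contrast, the purely topological style of gap in-fill that the generative approach is compared against can be sketched as greedily bridging the closest endpoints of disconnected vessel fragments (function name and data layout are illustrative assumptions):

```python
import numpy as np

def greedy_gap_infill(components):
    """Topology-only baseline: repeatedly connect the two closest
    endpoints of fragments in different connected groups until a
    single connected tree remains. `components` is a list of
    (n_i, 3) arrays of endpoint coordinates, one per fragment."""
    groups = list(range(len(components)))
    edges = []
    while len(set(groups)) > 1:
        best = None
        for i in range(len(components)):
            for j in range(len(components)):
                if groups[i] == groups[j]:
                    continue
                # pairwise endpoint distances between fragments i and j
                d = np.linalg.norm(components[i][:, None] - components[j][None],
                                   axis=-1)
                a, b = np.unravel_index(d.argmin(), d.shape)
                if best is None or d[a, b] < best[0]:
                    best = (d[a, b], i, j, a, b)
        _, i, j, a, b = best
        edges.append((tuple(components[i][a]), tuple(components[j][b])))
        gj = groups[j]  # merge the two groups
        groups = [groups[i] if g == gj else g for g in groups]
    return edges

# Three collinear vessel fragments in 3-D with small gaps between them.
frags = [np.array([[0.0, 0, 0], [1, 0, 0]]),
         np.array([[1.5, 0, 0], [3, 0, 0]]),
         np.array([[3.2, 0, 0], [5, 0, 0]])]
edges = greedy_gap_infill(frags)
```

The paper's method instead grows physiologically plausible connections via an angiogenesis model, rather than straight nearest-endpoint bridges as in this baseline.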