An Instance Segmentation Dataset of Yeast Cells in Microstructures
Extracting single-cell information from microscopy data requires accurate
instance-wise segmentations. Obtaining pixel-wise segmentations from microscopy
imagery remains a challenging task, especially with the added complexity of
microstructured environments. This paper presents a novel dataset for
segmenting yeast cells in microstructures. We offer pixel-wise instance
segmentation labels for both cells and trap microstructures. In total, we
release 493 densely annotated microscopy images. To facilitate a unified
comparison between novel segmentation algorithms, we propose a standardized
evaluation strategy for our dataset. The dataset and evaluation strategy are
intended to aid the development of new cell segmentation approaches.
The dataset is publicly available at
https://christophreich1996.github.io/yeast_in_microstructures_dataset/
Comment: IEEE EMBC 2023 (in press); Christoph Reich and Tim Prangemeier, both authors contributed equally.
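For orientation, instance-level evaluation of this kind is usually grounded in IoU matching between predicted and ground-truth masks. The sketch below illustrates one such protocol; the greedy matching and the 0.5 IoU threshold are illustrative assumptions, not necessarily the dataset's standardized evaluation strategy.

```python
import numpy as np

def instance_iou_matrix(pred_masks, gt_masks):
    """Pairwise IoU between boolean instance masks."""
    ious = np.zeros((len(pred_masks), len(gt_masks)))
    for i, p in enumerate(pred_masks):
        for j, g in enumerate(gt_masks):
            union = np.logical_or(p, g).sum()
            ious[i, j] = np.logical_and(p, g).sum() / union if union else 0.0
    return ious

def match_instances(pred_masks, gt_masks, iou_threshold=0.5):
    """Greedy one-to-one matching; returns (true positives, false positives,
    false negatives). The 0.5 threshold is an illustrative choice."""
    if not pred_masks or not gt_masks:
        return 0, len(pred_masks), len(gt_masks)
    ious = instance_iou_matrix(pred_masks, gt_masks)
    matched_gt, tp = set(), 0
    for i in np.argsort(-ious.max(axis=1)):   # best-matching predictions first
        j = int(np.argmax(ious[i]))
        if ious[i, j] >= iou_threshold and j not in matched_gt:
            matched_gt.add(j)
            tp += 1
    return tp, len(pred_masks) - tp, len(gt_masks) - tp
```

From these counts, precision, recall, or an average-precision score can be aggregated over the dataset.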
Multi-StyleGAN: Towards Image-Based Simulation of Time-Lapse Live-Cell Microscopy
Time-lapse fluorescent microscopy (TLFM) combined with predictive
mathematical modelling is a powerful tool to study the inherently dynamic
processes of life on the single-cell level. Such experiments are costly,
complex and labour intensive. A complementary approach, and a step towards in
silico experimentation, is to synthesise the imagery itself. Here, we propose
Multi-StyleGAN as a descriptive approach to simulate time-lapse fluorescence
microscopy imagery of living cells, based on a past experiment. This novel
generative adversarial network synthesises a multi-domain sequence of
consecutive timesteps. We showcase Multi-StyleGAN on imagery of multiple live
yeast cells in microstructured environments and train on a dataset recorded in
our laboratory. The simulation captures underlying biophysical factors and time
dependencies, such as cell morphology, growth, physical interactions, as well
as the intensity of a fluorescent reporter protein. An immediate application is
to generate additional training and validation data for feature extraction
algorithms or to aid and expedite development of advanced experimental
techniques such as online monitoring or control of cells.
Code and dataset are available at
https://git.rwth-aachen.de/bcs/projects/tp/multi-stylegan
Comment: revised, accepted to MICCAI 2021; Tim Prangemeier and Christoph Reich, both authors contributed equally.
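As a sketch of how such a simulator might be used once trained, the snippet below samples one latent code and decodes a two-domain (brightfield and fluorescence) sequence. The checkpoint name, latent dimension, and generator call signature are hypothetical assumptions, not the released API.

```python
import torch

# Hypothetical interface: a trained Multi-StyleGAN-style generator mapping one
# latent code to aligned brightfield and fluorescence sequences. File name,
# latent size, and output shapes are assumptions for illustration only.
generator = torch.load("multi_stylegan_generator.pt")  # assumed checkpoint
generator.eval()

with torch.no_grad():
    z = torch.randn(1, 512)                   # latent code (assumed 512-dim)
    brightfield, fluorescence = generator(z)  # two aligned image domains
    # expected shapes: (1, T, H, W), i.e. T consecutive timesteps per domain

print(brightfield.shape, fluorescence.shape)
```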
Multiclass Yeast Segmentation in Microstructured Environments with Deep Learning
Cell segmentation is a major bottleneck in extracting quantitative
single-cell information from microscopy data. The challenge is exacerbated in
the setting of microstructured environments. While deep learning approaches
have proven useful for general cell segmentation tasks, existing segmentation
tools for the yeast-microstructure setting rely on traditional machine learning
approaches. Here we present convolutional neural networks trained for
multiclass segmentation of individual yeast cells and discerning them from
cell-similar microstructures. We give an overview of the datasets recorded for
training, validating and testing the networks, as well as a typical use-case.
We showcase the method's contribution to segmenting yeast in microstructured
environments with a typical synthetic biology application in mind. The models
achieve robust segmentation results, outperforming the previous
state-of-the-art in both accuracy and speed. The combination of fast and
accurate segmentation is not only beneficial for a posteriori data processing,
it also makes online monitoring of thousands of trapped cells or closed-loop
optimal experimental design feasible from an image processing perspective.
Comment: IEEE CIBCB 2020 (accepted).
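To make the setting concrete, a minimal multiclass forward pass might look as follows; the toy backbone and the three-class layout (background, cell, trap) stand in for the paper's networks, whose exact architecture is not given in this abstract.

```python
import torch
import torch.nn as nn

# Toy stand-in for a multiclass segmentation CNN with three classes:
# 0 = background, 1 = yeast cell, 2 = trap microstructure.
# The backbone is illustrative, not the paper's architecture.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 1),              # per-pixel logits over the 3 classes
)

image = torch.randn(1, 1, 256, 256)   # one grayscale microscopy frame
with torch.no_grad():
    logits = model(image)             # (1, 3, 256, 256)
    labels = logits.argmax(dim=1)     # per-pixel class map, distinguishing
                                      # cells from cell-similar traps
print(labels.shape)
```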
Deep Video Codec Control
Lossy video compression is commonly used when transmitting and storing video
data. Unified video codecs (e.g., H.264 or H.265) remain the de facto standard,
despite the availability of advanced (neural) compression approaches.
Transmitting videos in the face of dynamic network bandwidth conditions
requires video codecs to adapt to vastly different compression strengths. Rate
control modules augment the codec's compression such that bandwidth constraints
are satisfied and video distortion is minimized. While both standard video
codecs and their rate control modules are developed to minimize video distortion
w.r.t. human quality assessment, preserving the downstream performance of deep
vision models is not considered. In this paper, we present the first end-to-end
learnable deep video codec control considering both bandwidth constraints and
downstream vision performance, while not breaking existing standardization. We
demonstrate for two common vision tasks (semantic segmentation and optical flow
estimation) and on two different datasets that our deep codec control better
preserves downstream performance than using 2-pass average bit rate control
while meeting dynamic bandwidth constraints and adhering to standardizations.
Comment: 22 pages, 26 figures, 6 tables.
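For reference, the 2-pass average bit rate baseline mentioned above can be reproduced with a standard encoder such as ffmpeg/libx264; the snippet below is a minimal sketch (file paths and target bitrate are placeholders), after which the downstream vision model would be evaluated on the decoded frames.

```python
import subprocess

def two_pass_abr(src: str, dst: str, bitrate: str = "1M") -> None:
    """2-pass average-bit-rate H.264 encode with ffmpeg (the baseline the
    abstract compares against); paths and bitrate are placeholders."""
    common = ["ffmpeg", "-y", "-i", src, "-c:v", "libx264", "-b:v", bitrate]
    # pass 1: analysis only, discard the output
    subprocess.run(common + ["-pass", "1", "-an", "-f", "null", "/dev/null"],
                   check=True)
    # pass 2: produce the rate-controlled bitstream
    subprocess.run(common + ["-pass", "2", dst], check=True)

two_pass_abr("input.mp4", "encoded.mp4", bitrate="2M")
# Downstream evaluation (semantic segmentation or optical flow on the decoded
# frames of "encoded.mp4") would follow here.
```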
Histopathological Image Classification based on Self-Supervised Vision Transformer and Weak Labels
Whole Slide Image (WSI) analysis is a powerful method to facilitate the
diagnosis of cancer in tissue samples. Automating this diagnosis poses various
issues, most notably caused by the immense image resolution and limited
annotations. WSIs commonly exhibit resolutions of 100Kx100K pixels. Annotating
cancerous areas in WSIs on the pixel level is prohibitively labor-intensive and
requires a high level of expert knowledge. Multiple instance learning (MIL)
alleviates the need for expensive pixel-level annotations. In MIL, learning is
performed on slide-level labels, in which a pathologist provides information
about whether a slide includes cancerous tissue. Here, we propose Self-ViT-MIL,
a novel approach for classifying and localizing cancerous areas based on
slide-level annotations, eliminating the need for pixel-wise annotated training
data. Self-ViT-MIL is pre-trained in a self-supervised setting to learn rich
feature representation without relying on any labels. The recent Vision
Transformer (ViT) architecture builds the feature extractor of Self-ViT-MIL.
For localizing cancerous regions, a MIL aggregator with global attention is
utilized. To the best of our knowledge, Self-ViT-MIL is the first approach to
introduce self-supervised ViTs in MIL-based WSI analysis tasks. We showcase the
effectiveness of our approach on the common Camelyon16 dataset. Self-ViT-MIL
surpasses existing state-of-the-art MIL-based approaches in terms of accuracy
and area under the curve (AUC).
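The global-attention MIL aggregation referred to above can be sketched as attention-weighted pooling over patch embeddings; the feature and hidden dimensions below are illustrative assumptions, as is the two-class head.

```python
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    """Global-attention MIL aggregator over patch features; dimensions and the
    two-class head are illustrative assumptions, not the paper's exact model."""
    def __init__(self, feat_dim=384, hidden_dim=128, n_classes=2):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, patch_feats):
        # patch_feats: (num_patches, feat_dim), e.g. from a self-supervised ViT
        scores = self.attention(patch_feats)             # (num_patches, 1)
        weights = torch.softmax(scores, dim=0)           # attention over patches
        slide_feat = (weights * patch_feats).sum(dim=0)  # (feat_dim,)
        # the attention weights double as a patch-level localization signal
        return self.classifier(slide_feat), weights.squeeze(-1)

feats = torch.randn(1000, 384)       # embeddings of 1000 WSI patches (assumed)
logits, attn = AttentionMIL()(feats)
print(logits.shape, attn.shape)      # torch.Size([2]) torch.Size([1000])
```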