Better Research Software Tools to Elevate the Rate of Scientific Discovery -- or why we need to invest in research software engineering
In the past decade, enormous progress has been made in advancing the
state-of-the-art in bioimage analysis - a young computational field that works
in close collaboration with the life sciences on the quantitative analysis of
scientific image data. In many cases, tremendous effort has been spent to
package these new advances into usable software tools and, as a result, users
can nowadays routinely apply cutting-edge methods to their analysis problems
using software tools such as ilastik [1], CellProfiler [2], Fiji/ImageJ2 [3,4]
and its many modern plugins that build on the BigDataViewer ecosystem [5], and
many others. Such software tools have now become part of a critical
infrastructure for science [6]. Unfortunately, overshadowed by the few
exceptions that have had long-lasting impact, many other potentially useful
tools fail to find their way into the hands of users. While there are many
reasons for this, we believe that at least some of the underlying problems,
which we discuss in more detail below, can be mitigated. In this opinion piece,
we specifically argue that embedding teams of research software engineers
(RSEs) within imaging and image analysis core facilities would be a major step
towards sustainable bioimage analysis software.
Comment: 8 pages, 0 figures
DeepContrast: Deep Tissue Contrast Enhancement using Synthetic Data Degradations and OOD Model Predictions
Microscopy images are crucial for life science research, allowing detailed
inspection and characterization of cellular and tissue-level structures and
functions. However, microscopy data are unavoidably affected by image
degradations, such as noise, blur, or others. Many such degradations also
contribute to a loss of image contrast, which becomes especially pronounced in
deeper regions of thick samples. Today, best performing methods to increase the
quality of images are based on Deep Learning approaches, which typically
require ground truth (GT) data during training. Our inability to counteract
blurring and contrast loss when imaging deep into samples prevents the
acquisition of such clean GT data. Because the forward process of blurring
and contrast loss deep into tissue can be modeled, we were able to propose a
new method that circumvents the problem of unobtainable GT data.
To this end, we first synthetically degraded the quality of microscopy images
even further by using an approximate forward model for deep tissue image
degradations. Then we trained a neural network that learned the inverse of this
degradation function from our generated pairs of raw and degraded images. We
demonstrated that networks trained in this way can be used out-of-distribution
(OOD) to improve the quality of less severely degraded images, e.g. the raw
data imaged in a microscope. Since the absolute level of degradation in such
microscopy images can be stronger than the additional degradation introduced by
our forward model, we also explored the effect of iterative predictions. Here,
we observed that in each iteration the measured image contrast kept improving
while detailed structures in the images got increasingly removed. Therefore,
depending on the desired downstream analysis, a balance between contrast
improvement and retention of image details has to be found.
Comment: 8 pages, 7 figures, 1 table
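The pair-generation scheme described above — degrade the already-degraded raw slices even further with an approximate forward model, then train on (more-degraded, raw) pairs — can be sketched roughly as follows. Everything here is illustrative: the exponential contrast-decay model, the parameter names, and `make_training_pairs` are assumptions for the sketch, not the authors' actual forward model.

```python
import numpy as np

def degrade_further(raw, depth, decay=0.05, noise_std=0.01, rng=None):
    """Hypothetical forward model: exponential contrast loss growing with
    imaging depth, plus mild Gaussian noise. Applied to already-degraded
    raw slices to create even-more-degraded network inputs."""
    rng = np.random.default_rng(0) if rng is None else rng
    mean = raw.mean()
    contrast = np.exp(-decay * depth)           # contrast shrinks with depth
    degraded = mean + contrast * (raw - mean)   # pull intensities toward mean
    return degraded + rng.normal(0.0, noise_std, raw.shape)

def make_training_pairs(stack, decay=0.05, seed=0):
    """Pair each z-slice (the training target) with its synthetically
    further-degraded version (the network input)."""
    rng = np.random.default_rng(seed)
    return [(degrade_further(sl, z, decay, rng=rng), sl)
            for z, sl in enumerate(stack)]
```

A network trained to invert `degrade_further` on such pairs can then be applied out-of-distribution to the raw slices themselves, and iterated if stronger contrast enhancement is desired.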
Efficient Algorithms for Moral Lineage Tracing
Lineage tracing, the joint segmentation and tracking of living cells as they
move and divide in a sequence of light microscopy images, is a challenging
task. Jug et al. have proposed a mathematical abstraction of this task, the
moral lineage tracing problem (MLTP), whose feasible solutions define both a
segmentation of every image and a lineage forest of cells. Their branch-and-cut
algorithm, however, is prone to many cuts and slow convergence for large
instances. To address this problem, we make three contributions: (i) we devise
the first efficient primal feasible local search algorithms for the MLTP, (ii)
we improve the branch-and-cut algorithm by separating tighter cutting planes
and by incorporating our primal algorithms, (iii) we show in experiments that
our algorithms find accurate solutions on the problem instances of Jug et al.
and scale to larger instances, leveraging moral lineage tracing to practical
significance.
Comment: Accepted at ICCV 201
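To make the feasibility structure concrete: a solution to the MLTP induces parent-child links between cells in consecutive frames, and these links must form a lineage forest — in particular, no cell may have two parents, since cells divide but never merge. A toy structural check, with a hypothetical edge representation (this is not the authors' algorithm, only an illustration of the constraints their algorithms optimize over):

```python
def check_lineage_forest(edges):
    """Toy feasibility check for tracking solutions: `edges` is a set of
    ((frame, cell), (frame, cell)) parent-child links. A valid lineage
    forest links consecutive frames only and gives every cell at most one
    parent (branching happens only by division, never by merging)."""
    parent = {}
    for (pf, pc), (cf, cc) in edges:
        if cf != pf + 1:
            return False          # links must span exactly one frame
        if (cf, cc) in parent:
            return False          # two parents for one cell: not a forest
        parent[(cf, cc)] = (pf, pc)
    return True
```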
Fully Unsupervised Probabilistic Noise2Void
Image denoising is the first step in many biomedical image analysis pipelines
and Deep Learning (DL) based methods are currently best performing. A new
category of DL methods such as Noise2Void or Noise2Self can be used fully
unsupervised, requiring nothing but the noisy data. However, this comes at the
price of reduced reconstruction quality. The recently proposed Probabilistic
Noise2Void (PN2V) improves results, but requires an additional noise model for
which calibration data needs to be acquired. Here, we present improvements to
PN2V that (i) replace histogram based noise models by parametric noise models,
and (ii) show how suitable noise models can be created even in the absence of
calibration data. This is a major step since it actually renders PN2V fully
unsupervised. We demonstrate that all proposed improvements are not only
academic but indeed relevant.
Comment: Accepted at ISBI 202
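One way to read improvements (i) and (ii) together: instead of a histogram over calibration measurements, fit a low-parameter noise model directly from the data — for example a Gaussian whose variance grows linearly with signal, bootstrapped from a rough denoised estimate of the image. A sketch under that assumption (the linear-variance model and all names here are illustrative, not PN2V's actual parametrization):

```python
import numpy as np

def fit_linear_noise_model(signal_est, noisy, n_bins=10):
    """Fit a signal-dependent Gaussian noise model  var(x | s) ~ a*s + b
    by binning pixels on their estimated clean signal and regressing the
    per-bin residual variance on the per-bin mean signal. `signal_est`
    can be a rough denoised version of `noisy`, which is what removes
    the need for separately acquired calibration data."""
    residual = noisy - signal_est
    edges = np.quantile(signal_est, np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.clip(np.digitize(signal_est, edges[1:-1]), 0, n_bins - 1)
    means, variances = [], []
    for b in range(n_bins):
        mask = idx == b
        if mask.sum() > 1:
            means.append(signal_est[mask].mean())
            variances.append(residual[mask].var())
    a, b = np.polyfit(means, variances, 1)   # slope and intercept
    return a, b
```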
µSplit: efficient image decomposition for microscopy data
We present µSplit, a dedicated approach for trained image decomposition
in the context of fluorescence microscopy images. We find that best results
using regular deep architectures are achieved when large image patches are used
during training, making memory consumption the limiting factor to further
improving performance. We therefore introduce lateral contextualization (LC), a
memory efficient way to train powerful networks and show that LC leads to
consistent and significant improvements on the task at hand. We integrate LC
with U-Nets, Hierarchical AEs, and Hierarchical VAEs, for which we formulate a
modified ELBO loss. Additionally, LC enables training deeper hierarchical
models than otherwise possible and, interestingly, helps to reduce tiling
artefacts that are inherently impossible to avoid when using tiled VAE
predictions. We apply µSplit to five decomposition tasks, one on a
synthetic dataset, four others derived from real microscopy data. LC achieves
SOTA results (average improvements to the best baseline of 2.36 dB PSNR), while
simultaneously requiring considerably less GPU memory.
Comment: Published at ICCV 2023. 10 pages, 7 figures, 9 pages supplement, 8 supplementary figures
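The intuition behind lateral contextualization can be illustrated by the multi-scale inputs it constructs: the first hierarchy level sees a small patch at full resolution, while each deeper level covers a larger field of view downsampled to the same pixel count, so spatial context grows without growing memory. A toy sketch (the crop sizes, striding-based downsampling, and function names are assumptions for illustration, not the published architecture):

```python
import numpy as np

def lc_pyramid(img, center, patch=32, levels=3):
    """Build multi-scale inputs for successive hierarchy levels (toy
    version of lateral contextualization): level 0 is a full-resolution
    crop around `center`; each deeper level covers a 2x larger field of
    view but is downsampled back to `patch` pixels, so context grows
    while memory per level stays constant."""
    cy, cx = center
    out = []
    for lvl in range(levels):
        half = (patch * 2 ** lvl) // 2
        crop = img[cy - half:cy + half, cx - half:cx + half]
        out.append(crop[::2 ** lvl, ::2 ** lvl])  # naive strided downsampling
    return out
```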
DenoiSeg: Joint Denoising and Segmentation
Microscopy image analysis often requires the segmentation of objects, but
training data for this task is typically scarce and hard to obtain. Here we
propose DenoiSeg, a new method that can be trained end-to-end on only a few
annotated ground truth segmentations. We achieve this by extending Noise2Void,
a self-supervised denoising scheme that can be trained on noisy images alone,
to also predict dense 3-class segmentations. The reason for the success of our
method is that segmentation can profit from denoising, especially when
performed jointly within the same network. The network becomes a denoising
expert by seeing all available raw data, while co-learning to segment, even if
only a few segmentation labels are available. This hypothesis is additionally
fueled by our observation that the best segmentation results on high quality
(very low noise) raw data are obtained when moderate amounts of synthetic noise
are added. This renders the denoising-task non-trivial and unleashes the
desired co-learning effect. We believe that DenoiSeg offers a viable way to
circumvent the tremendous hunger for high quality training data and effectively
enables few-shot learning of dense segmentations.
Comment: 10 pages, 4 figures, 2 pages supplement (4 figures)
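The co-learning setup can be sketched as a single objective combining an unsupervised denoising term over all pixels with a 3-class cross-entropy that only fires at the few annotated pixels. In this minimal numpy sketch, a plain MSE stands in for the actual Noise2Void masked loss, and `alpha`, the label convention (`-1` = unlabeled), and all names are illustrative assumptions:

```python
import numpy as np

def denoiseg_loss(pred_denoise, target_noisy, seg_logits, labels, alpha=0.5):
    """Joint objective (toy version): a denoising MSE over every pixel
    (standing in for the Noise2Void masked loss) plus a 3-class
    cross-entropy evaluated only at annotated pixels (labels == -1 marks
    unlabeled pixels). `seg_logits` has shape (3, H, W)."""
    denoise = np.mean((pred_denoise - target_noisy) ** 2)
    z = seg_logits - seg_logits.max(axis=0, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=0, keepdims=True))  # log-softmax
    ys, xs = np.nonzero(labels >= 0)
    seg = (-logp[labels[ys, xs], ys, xs].mean()) if ys.size else 0.0
    return denoise + alpha * seg
```

Because the denoising term is defined on every raw pixel, the network trains on all available data even when only a handful of pixels carry segmentation labels — which is the co-learning effect the abstract describes.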