Better Research Software Tools to Elevate the Rate of Scientific Discovery -- or why we need to invest in research software engineering
In the past decade, enormous progress has been made in advancing the
state-of-the-art in bioimage analysis - a young computational field that works
in close collaboration with the life sciences on the quantitative analysis of
scientific image data. In many cases, tremendous effort has been spent to
package these new advances into usable software tools and, as a result, users
can nowadays routinely apply cutting-edge methods to their analysis problems
using software tools such as ilastik [1], CellProfiler [2], Fiji/ImageJ2 [3,4]
and its many modern plugins that build on the BigDataViewer ecosystem [5], and
many others. Such software tools have now become part of a critical
infrastructure for science [6]. Unfortunately, overshadowed by the few
exceptions that have had long-lasting impact, many other potentially useful
tools fail to find their way into the hands of users. While there are many
reasons for this, we believe that at least some of the underlying problems,
which we discuss in more detail below, can be mitigated. In this opinion piece,
we specifically argue that embedding teams of research software engineers
(RSEs) within imaging and image analysis core facilities would be a major step
towards sustainable bioimage analysis software.
Comment: 8 pages, 0 figures
DeepContrast: Deep Tissue Contrast Enhancement using Synthetic Data Degradations and OOD Model Predictions
Microscopy images are crucial for life science research, allowing detailed
inspection and characterization of cellular and tissue-level structures and
functions. However, microscopy data are unavoidably affected by image
degradations such as noise and blur. Many such degradations also
contribute to a loss of image contrast, which becomes especially pronounced in
deeper regions of thick samples. Today, the best-performing methods for
increasing image quality are based on Deep Learning approaches, which typically
require ground truth (GT) data during training. Our inability to counteract
blurring and contrast loss when imaging deep into samples prevents the
acquisition of such clean GT data. The fact that the forward process of
blurring and contrast loss deep into tissue can be modeled allowed us to
propose a new method that can circumvent the problem of unobtainable GT data.
To this end, we first synthetically degraded the quality of microscopy images
even further by using an approximate forward model for deep tissue image
degradations. Then we trained a neural network that learned the inverse of this
degradation function from our generated pairs of raw and degraded images. We
demonstrated that networks trained in this way can be used out-of-distribution
(OOD) to improve the quality of less severely degraded images, e.g. the raw
data imaged in a microscope. Since the absolute level of degradation in such
microscopy images can be stronger than the additional degradation introduced by
our forward model, we also explored the effect of iterative predictions. Here,
we observed that in each iteration the measured image contrast kept improving
while detailed structures in the images were increasingly lost. Therefore,
dependent on the desired downstream analysis, a balance between contrast
improvement and retention of image details has to be found.
Comment: 8 pages, 7 figures, 1 table
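The core trick of the method above is pair generation via an approximate forward model. A minimal pure-numpy sketch of that idea follows; the forward model here (contrast attenuation towards the mean plus a separable box blur) is an assumed toy stand-in for the paper's deep-tissue degradation model, and the network training step is omitted:

```python
import numpy as np

def degrade(img, attenuation=0.6, blur_width=2):
    """Toy forward model for deep-tissue degradation (an assumption, not the
    authors' exact model): scale contrast towards the mean, then box-blur."""
    low_contrast = img.mean() + attenuation * (img - img.mean())
    k = 2 * blur_width + 1
    kernel = np.ones(k) / k
    pad = np.pad(low_contrast, blur_width, mode="edge")
    # separable box blur: convolve rows, then columns (pure numpy)
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="valid"), 1, pad)
    blurred = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="valid"), 0, blurred)
    return blurred

rng = np.random.default_rng(0)
raw = rng.random((64, 64))   # stands in for an acquired microscopy image
degraded = degrade(raw)      # synthetically degraded version of the raw data
# training pairs (degraded -> raw) teach a network the inverse of the
# degradation; the trained network is then applied OOD to the raw data itself
assert degraded.shape == raw.shape
assert degraded.std() < raw.std()  # contrast (std) is reduced by the model
```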
Efficient Algorithms for Moral Lineage Tracing
Lineage tracing, the joint segmentation and tracking of living cells as they
move and divide in a sequence of light microscopy images, is a challenging
task. Jug et al. have proposed a mathematical abstraction of this task, the
moral lineage tracing problem (MLTP), whose feasible solutions define both a
segmentation of every image and a lineage forest of cells. Their branch-and-cut
algorithm, however, is prone to many cuts and slow convergence for large
instances. To address this problem, we make three contributions: (i) we devise
the first efficient primal feasible local search algorithms for the MLTP, (ii)
we improve the branch-and-cut algorithm by separating tighter cutting planes
and by incorporating our primal algorithms, (iii) we show in experiments that
our algorithms find accurate solutions on the problem instances of Jug et al.
and scale to larger instances, bringing moral lineage tracing to practical
significance.
Comment: Accepted at ICCV 2017
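To give a flavor of what a "primal feasible local search" for a partitioning objective looks like, here is a generic greedy-merge heuristic over a multicut-style cost function. This is an illustrative sketch only; it is not the MLTP algorithms from the paper, whose moves additionally respect the lineage-forest constraints:

```python
def greedy_additive_clustering(n, costs):
    """Greedy primal heuristic for a multicut-style objective (illustrative;
    not the paper's MLTP algorithms). costs[(u, v)] < 0 rewards merging u and
    v, > 0 penalizes it. Every intermediate state is a feasible partition;
    we repeatedly apply the best improving merge and stop when none remains."""
    clusters = [{i} for i in range(n)]

    def inter_cost(a, b):
        return sum(costs.get((min(u, v), max(u, v)), 0.0)
                   for u in a for v in b)

    while True:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                c = inter_cost(clusters[i], clusters[j])
                if c < 0 and (best is None or c < best[0]):
                    best = (c, i, j)
        if best is None:
            return clusters
        _, i, j = best
        clusters[i] |= clusters[j]
        del clusters[j]

# toy instance: nodes 0 and 1 attract each other, node 2 repels both
costs = {(0, 1): -2.0, (0, 2): 1.0, (1, 2): 0.5}
print(greedy_additive_clustering(3, costs))  # → [{0, 1}, {2}]
```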
Fully Unsupervised Probabilistic Noise2Void
Image denoising is the first step in many biomedical image analysis pipelines
and Deep Learning (DL) based methods currently perform best. A new
category of DL methods such as Noise2Void or Noise2Self can be used fully
unsupervised, requiring nothing but the noisy data. However, this comes at the
price of reduced reconstruction quality. The recently proposed Probabilistic
Noise2Void (PN2V) improves results, but requires an additional noise model for
which calibration data needs to be acquired. Here, we present improvements to
PN2V that (i) replace histogram-based noise models by parametric noise models,
and (ii) show how suitable noise models can be created even in the absence of
calibration data. This is a major step since it actually renders PN2V fully
unsupervised. We demonstrate that all proposed improvements are not only
academic but indeed relevant.
Comment: Accepted at ISBI 2020
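A parametric noise model of the kind described above can be fit directly from data. The sketch below uses a simple Gaussian model with signal-dependent variance, var(s) ≈ a·s + b (a Poisson-Gaussian approximation); this illustrates the general idea only, and PN2V's actual GMM-based noise models and bootstrapping procedure differ in detail:

```python
import numpy as np

def fit_linear_noise_model(pseudo_clean, noisy, n_bins=50):
    """Fit var(s) = a*s + b from paired signal estimates and noisy pixels.
    `pseudo_clean` could come from a first unsupervised denoising pass,
    removing the need for separately acquired calibration data (the
    bootstrapping idea; details here are an assumption, not PN2V's exact
    procedure)."""
    s = pseudo_clean.ravel()
    r = noisy.ravel() - s                      # residual noise per pixel
    bins = np.quantile(s, np.linspace(0, 1, n_bins + 1))
    means, variances = [], []
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (s >= lo) & (s < hi)
        if mask.sum() > 10:                    # skip unreliable sparse bins
            means.append(s[mask].mean())
            variances.append(r[mask].var())
    a, b = np.polyfit(means, variances, 1)     # linear fit: var ≈ a*s + b
    return a, b

# toy check: simulate Poisson-Gaussian noise with true var(s) = 1.0*s + 0.25
rng = np.random.default_rng(1)
signal = rng.uniform(10, 100, size=100_000)
noisy = signal + rng.normal(0.0, np.sqrt(signal + 0.25))
a, b = fit_linear_noise_model(signal, noisy)   # slope a should be close to 1
```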
Prediction of designer-recombinases for DNA editing with generative deep learning
Site-specific tyrosine-type recombinases are effective tools for genome engineering, with the first engineered variants having demonstrated therapeutic potential. So far, adaptation to new DNA target site selectivity of designer-recombinases has been achieved mostly through iterative cycles of directed molecular evolution. While effective, directed molecular evolution methods are laborious and time-consuming. Here we present RecGen (Recombinase Generator), an algorithm for the intelligent generation of designer-recombinases. We gather the sequence information of over one million Cre-like recombinase sequences evolved for 89 different target sites, with which we train Conditional Variational Autoencoders for recombinase generation. Experimental validation demonstrates that the algorithm can predict recombinase sequences with activity on novel target sites, indicating that RecGen is useful to accelerate the development of future designer-recombinases.
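The conditional-generation interface of such a model can be sketched structurally: the decoder maps a latent sample concatenated with a target-site condition code to per-position amino-acid logits. The sketch below uses random, untrained weights and toy dimensions (all assumptions; RecGen's trained model, architecture, and real dimensions differ), so it illustrates only the wiring, not learned behavior:

```python
import numpy as np

rng = np.random.default_rng(0)
AA = "ACDEFGHIKLMNPQRSTVWY"                  # the 20 standard amino acids
SEQ_LEN, LATENT, COND, HID = 12, 8, 4, 32    # toy sizes, not RecGen's

# random, untrained decoder weights: a purely structural CVAE sketch
W1 = rng.normal(size=(LATENT + COND, HID))
W2 = rng.normal(size=(HID, SEQ_LEN * len(AA)))

def decode(z, cond):
    """Map [latent sample ; target-site condition] to a sequence by taking
    the argmax amino acid at each position of the decoder's logits."""
    h = np.tanh(np.concatenate([z, cond]) @ W1)
    logits = (h @ W2).reshape(SEQ_LEN, len(AA))
    return "".join(AA[i] for i in logits.argmax(axis=1))

# conditioning on different (hypothetical, one-hot) target-site codes steers
# generation; the same latent sample can yield site-specific sequences
site_a = np.eye(COND)[0]
site_b = np.eye(COND)[1]
z = rng.normal(size=LATENT)                  # sample from the latent prior
print(decode(z, site_a))
print(decode(z, site_b))
```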