Automatic Segmentation of Fluorescence Lifetime Microscopy Images of Cells Using Multi-Resolution Community Detection
We have developed an automatic method for segmenting fluorescence lifetime
(FLT) imaging microscopy (FLIM) images of cells inspired by a multi-resolution
community detection (MCD) based network segmentation method. The image
processing problem is framed as identifying segments with respective average
FLTs against a background in FLIM images. The proposed method segments a FLIM
image at a given resolution of the network, composed using image pixels as the
nodes and similarities between pixels as the edges. In the resulting
segmentation, low network resolution leads to larger segments and high network
resolution leads to smaller segments. Further, the mean-square error (MSE) in
estimating the FLT segments in a FLIM image using the proposed method was found
to decrease consistently with increasing resolution of the corresponding
network. The proposed MCD method outperformed a popular spectral clustering
based method in performing FLIM image segmentation. The spectral segmentation
method introduced noisy segments in its output at high resolution and was
unable to offer a consistent decrease in MSE with increasing resolution.
Comment: 21 pages, 6 figures
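The pixel-network construction described above can be sketched as follows. This is a simplified stand-in, not the authors' implementation: it replaces multi-resolution community detection with a thresholded-similarity graph whose connected components form the segments, and the 4-neighbour affinity, the `segment_flim` name, and the Gaussian similarity kernel are all illustrative assumptions. The resolution parameter still behaves as in the text: a lower resolution keeps weaker edges and yields larger segments.

```python
import numpy as np

def segment_flim(flt, resolution, sigma=1.0):
    """Segment a 2-D fluorescence-lifetime (FLT) image.

    Pixels are nodes; 4-neighbour pairs are joined by an edge of weight
    exp(-(dFLT)^2 / sigma^2).  Edges weaker than `resolution` are
    dropped, and connected components of the remaining graph become the
    segments: low resolution -> larger segments, high resolution ->
    smaller segments.  (Illustrative stand-in for multi-resolution
    community detection.)
    """
    h, w = flt.shape
    parent = list(range(h * w))          # union-find over pixel indices

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    for i in range(h):
        for j in range(w):
            for di, dj in ((0, 1), (1, 0)):      # right and down neighbours
                ni, nj = i + di, j + dj
                if ni < h and nj < w:
                    wgt = np.exp(-((flt[i, j] - flt[ni, nj]) ** 2) / sigma ** 2)
                    if wgt >= resolution:        # keep only strong edges
                        union(i * w + j, ni * w + nj)

    labels = np.array([find(k) for k in range(h * w)])
    _, labels = np.unique(labels, return_inverse=True)   # relabel 0..K-1
    return labels.reshape(h, w)
```

On a synthetic image with two FLT regions, a near-zero resolution merges everything into one segment, while a moderate resolution separates the two regions.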
Superpixels: An Evaluation of the State-of-the-Art
Superpixels group perceptually similar pixels to create visually meaningful
entities while heavily reducing the number of primitives for subsequent
processing steps. Because of these properties, superpixel algorithms have received
much attention since their naming in 2003. By today, publicly available
superpixel algorithms have turned into standard tools in low-level vision. As
such, and due to their quick adoption in a wide range of applications,
appropriate benchmarks are crucial for algorithm selection and comparison.
Until now, the rapidly growing number of algorithms as well as varying
experimental setups hindered the development of a unifying benchmark. We
present a comprehensive evaluation of 28 state-of-the-art superpixel algorithms
utilizing a benchmark focussing on fair comparison and designed to provide new
insights relevant for applications. To this end, we explicitly discuss
parameter optimization and the importance of strictly enforcing connectivity.
Furthermore, by extending well-known metrics, we are able to summarize
algorithm performance independent of the number of generated superpixels,
thereby overcoming a major limitation of available benchmarks. We also
discuss runtime, robustness against noise, blur, and affine transformations,
implementation details, and aspects of visual quality. Finally, we
present an overall ranking of superpixel algorithms which redefines the
state-of-the-art and enables researchers to easily select appropriate
algorithms and the corresponding implementations which themselves are made
publicly available as part of our benchmark at
davidstutz.de/projects/superpixel-benchmark/
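A representative metric in such superpixel benchmarks is boundary recall: the fraction of ground-truth boundary pixels that have a superpixel boundary within a small tolerance. A minimal numpy sketch follows; the function names and the simple inner-boundary definition are our own illustrative assumptions, not the benchmark's exact implementation.

```python
import numpy as np

def boundaries(labels):
    """Mask of pixels whose right or lower neighbour has a different
    label (a simple inner-boundary definition)."""
    b = np.zeros(labels.shape, dtype=bool)
    b[:, :-1] |= labels[:, :-1] != labels[:, 1:]
    b[:-1, :] |= labels[:-1, :] != labels[1:, :]
    return b

def boundary_recall(gt_labels, sp_labels, tol=1):
    """Fraction of ground-truth boundary pixels with a superpixel
    boundary pixel inside a (2*tol+1)^2 window around them."""
    gt = boundaries(gt_labels)
    sp = boundaries(sp_labels)
    # Dilate the superpixel boundaries by `tol` using shifted copies.
    h, w = sp.shape
    dil = np.zeros_like(sp)
    for dy in range(-tol, tol + 1):
        for dx in range(-tol, tol + 1):
            ys = slice(max(dy, 0), h + min(dy, 0))
            xs = slice(max(dx, 0), w + min(dx, 0))
            yd = slice(max(-dy, 0), h + min(-dy, 0))
            xd = slice(max(-dx, 0), w + min(-dx, 0))
            dil[yd, xd] |= sp[ys, xs]
    hits = np.logical_and(gt, dil).sum()
    return hits / max(gt.sum(), 1)
```

A superpixel segmentation that reproduces every ground-truth boundary scores 1.0; one with no boundaries near them scores 0.0.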
Location Dependent Dirichlet Processes
Dirichlet processes (DP) are widely applied in Bayesian nonparametric
modeling. However, in their basic form they do not directly integrate
dependency information among data arising from space and time. In this paper,
we propose location dependent Dirichlet processes (LDDP) which incorporate
nonparametric Gaussian processes in the DP modeling framework to model such
dependencies. We develop the LDDP in the context of mixture modeling, and
develop a mean field variational inference algorithm for this mixture model.
The effectiveness of the proposed modeling framework is shown on an image
segmentation task.
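The DP mixture backbone that the LDDP extends can be sketched via the truncated stick-breaking construction, where weights are pi_k = v_k * prod_{j<k}(1 - v_j) with v_k ~ Beta(1, alpha). The sketch below samples from such a mixture of 1-D Gaussians; the location-dependent Gaussian-process weighting that is the paper's contribution is omitted, and the function name, base measure, and noise scale are illustrative assumptions.

```python
import numpy as np

def sample_dp_mixture(n, alpha=1.0, trunc=20, rng=None):
    """Draw n points from a truncated stick-breaking DP mixture of
    1-D Gaussians.  Weights: pi_k = v_k * prod_{j<k}(1 - v_j) with
    v_k ~ Beta(1, alpha); component means drawn from a N(0, 3^2)
    base measure.  Returns the samples and their cluster assignments.
    """
    rng = np.random.default_rng(rng)
    v = rng.beta(1.0, alpha, size=trunc)
    v[-1] = 1.0                                   # close the stick exactly
    pi = v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    mu = rng.normal(0.0, 3.0, size=trunc)         # base-measure draws
    z = rng.choice(trunc, size=n, p=pi)           # cluster assignments
    x = mu[z] + rng.normal(0.0, 0.5, size=n)      # within-cluster noise
    return x, z
```

With alpha = 1 the sampler typically occupies only a handful of the truncated components, the behaviour a variational algorithm like the paper's would exploit.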
Deep clustering: Discriminative embeddings for segmentation and separation
We address the problem of acoustic source separation in a deep learning
framework we call "deep clustering." Rather than directly estimating signals or
masking functions, we train a deep network to produce spectrogram embeddings
that are discriminative for partition labels given in training data. Previous
deep network approaches provide great advantages in terms of learning power and
speed, but it has been unclear how to use them to separate signals
in a class-independent way. In contrast, spectral clustering approaches are
flexible with respect to the classes and number of items to be segmented, but
it has been unclear how to leverage the learning power and speed of deep
networks. To obtain the best of both worlds, we use an objective function
to train embeddings that yield a low-rank approximation to an ideal pairwise
affinity matrix, in a class-independent way. This avoids the high cost of
spectral factorization and instead produces compact clusters that are amenable
to simple clustering methods. The segmentations are therefore implicitly
encoded in the embeddings, and can be "decoded" by clustering. Preliminary
experiments show that the proposed method can separate speech: when trained on
spectrogram features containing mixtures of two speakers, and tested on
mixtures of a held-out set of speakers, it can infer masking functions that
improve signal quality by around 6 dB. We show that the model can generalize to
three-speaker mixtures despite training only on two-speaker mixtures. The
framework can be used without class labels, and therefore has the potential to
be trained on a diverse set of sound types, and to generalize to novel sources.
We hope that future work will lead to segmentation of arbitrary sounds, with
extensions to microphone array methods as well as image segmentation and other
domains.
Comment: Originally submitted on June 5, 201
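The low-rank affinity objective described above, matching the embedding affinity VV^T to an ideal partition affinity YY^T, can be expanded so the large N x N matrices are never formed: |VV^T - YY^T|_F^2 = |V^T V|_F^2 - 2 |V^T Y|_F^2 + |Y^T Y|_F^2. A numpy sketch of this expansion (variable names are ours; V holds the N x D embeddings and Y the N x C one-hot partition indicators):

```python
import numpy as np

def deep_clustering_loss(V, Y):
    """Affinity-matching loss |VV^T - YY^T|_F^2 computed without
    forming the N x N affinity matrices, via the expansion
    |V^T V|_F^2 - 2 |V^T Y|_F^2 + |Y^T Y|_F^2.

    V: (N, D) embedding matrix; Y: (N, C) one-hot indicator matrix.
    """
    return (np.sum((V.T @ V) ** 2)
            - 2.0 * np.sum((V.T @ Y) ** 2)
            + np.sum((Y.T @ Y) ** 2))
```

The expanded form only ever builds D x D, D x C, and C x C matrices, which is what makes training tractable when N is the number of time-frequency bins in a spectrogram.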