Traction force microscopy with optimized regularization and automated Bayesian parameter selection for comparing cells
Adherent cells exert traction forces onto their environment, which allows
them to migrate, to maintain tissue integrity, and to form complex
multicellular structures. This traction can be measured in a perturbation-free
manner with traction force microscopy (TFM). In TFM, traction is usually
calculated via the solution of a linear system, which is complicated by
undersampled input data, acquisition noise, and large condition numbers for
some methods. Therefore, standard TFM algorithms either employ data filtering
or regularization. However, these approaches require a manual selection of
filter- or regularization parameters and consequently exhibit a substantial
degree of subjectiveness. This shortcoming is particularly serious when cells
in different conditions are to be compared because optimal noise suppression
needs to be adapted for every situation, which invariably results in systematic
errors. Here, we systematically test the performance of new methods from
computer vision and Bayesian inference for solving the inverse problem in TFM.
We compare two classical schemes, L1- and L2-regularization, with three
previously untested schemes, namely Elastic Net regularization, Proximal
Gradient Lasso, and Proximal Gradient Elastic Net. Overall, we find that
Elastic Net regularization, which combines L1 and L2 regularization,
outperforms all other methods with regard to accuracy of traction
reconstruction. Next, we develop two methods, Bayesian L2 regularization and
Advanced Bayesian L2 regularization, for automatic, optimal L2 regularization.
Using artificial data and experimental data, we show that these methods enable
robust reconstruction of traction without requiring a difficult selection of
regularization parameters specifically for each data set. Thus, Bayesian
methods can mitigate the considerable uncertainty inherent in comparing
cellular traction forces.
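The L2 (Tikhonov) regularization underlying several of the schemes compared above reduces to a regularized linear solve. The sketch below is a minimal generic illustration, not the paper's TFM pipeline: the transfer matrix `G`, displacement vector `u`, and parameter `lam` are illustrative stand-ins.

```python
import numpy as np

def tikhonov_solve(G, u, lam):
    """Solve min_x ||G x - u||^2 + lam ||x||^2 via the normal equations."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ u)

# toy example: overdetermined, slightly noisy linear system
rng = np.random.default_rng(0)
G = rng.normal(size=(50, 10))
x_true = rng.normal(size=10)
u = G @ x_true + 0.01 * rng.normal(size=50)
x_hat = tikhonov_solve(G, u, lam=0.1)
```

The choice of `lam` is exactly the manual regularization-parameter selection the abstract argues should be automated; here it is simply fixed by hand.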
First order algorithms in variational image processing
Variational methods in imaging are nowadays developing towards a quite
universal and flexible tool, allowing for highly successful approaches on tasks
like denoising, deblurring, inpainting, segmentation, super-resolution,
disparity, and optical flow estimation. The overall structure of such
approaches is of the form $D(Ku) + \alpha R(u) \to \min_u$, where the
functional $D$ is a data fidelity term also depending on some input data $f$
and measuring the deviation of $Ku$ from such, and $R$ is a regularization
functional. Moreover, $K$ is a (often linear) forward operator modeling the
dependence of data on an underlying image $u$, and $\alpha$ is a positive
regularization parameter. While $D$ is often smooth and (strictly) convex, the
current practice almost exclusively uses nonsmooth regularization functionals.
The majority of successful techniques uses nonsmooth and convex functionals
like the total variation and generalizations thereof, or $\ell^1$-norms of
coefficients arising from scalar products with some frame system. The
efficient solution of such variational problems in imaging demands
appropriate algorithms. Taking into account the
specific structure as a sum of two very different terms to be minimized,
splitting algorithms are a quite canonical choice. Consequently this field has
revived the interest in techniques like operator splittings or augmented
Lagrangians. Here we shall provide an overview of methods currently developed
and recent results as well as some computational studies providing a comparison
of different methods and also illustrating their success in applications.
Comment: 60 pages, 33 figures
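One canonical splitting scheme of the kind surveyed is the proximal gradient (forward-backward) iteration for $\min_u \frac{1}{2}\|Ku - f\|^2 + \alpha\|u\|_1$: a gradient step on the smooth fidelity term, then the proximal map of the nonsmooth regularizer. The sketch below is a generic toy instance under assumed data, not any specific algorithm from the survey.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1: componentwise soft shrinkage."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad_l1(K, f, alpha, n_iter=500):
    """Forward-backward splitting for min_u 0.5||K u - f||^2 + alpha ||u||_1."""
    step = 1.0 / np.linalg.norm(K, 2) ** 2  # step <= 1/L, L = Lipschitz const.
    u = np.zeros(K.shape[1])
    for _ in range(n_iter):
        grad = K.T @ (K @ u - f)            # gradient of the quadratic term
        u = soft_threshold(u - step * grad, step * alpha)
    return u

# toy sparse recovery problem with an assumed random forward operator
rng = np.random.default_rng(1)
K = rng.normal(size=(40, 20))
u_true = np.zeros(20)
u_true[[2, 7, 15]] = [1.5, -2.0, 1.0]
f = K @ u_true + 0.01 * rng.normal(size=40)
u_hat = prox_grad_l1(K, f, alpha=0.05)
```

The split handles the two very different terms separately: the smooth term via its gradient, the nonsmooth $\ell^1$ term via its closed-form prox.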
Online convex optimization meets sparsity
Tracking time-varying sparse signals is a recent problem
with widespread applications. Techniques derived from compressed
sensing, Lasso, and Kalman filtering have been proposed in the literature,
which mainly present two drawbacks: the prior knowledge of specific
evolution models and the lack of theoretical guarantees. In this work, we
propose a new perspective on the problem, based on the theory on online
convex optimization, which has been developed in the machine learning
community. We exploit a strongly convex model, and we develop online
algorithms, for which we are able to provide a dynamic regret analysis. A
few simulations that support the theoretical results are finally presented.
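An online scheme of the flavor described, one gradient step per round on the current loss followed by an $\ell_1$ shrinkage, can be sketched as below. The measurement stream, step size, and shrinkage weight are illustrative assumptions, not the algorithms analyzed in the paper.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def online_prox_grad(stream, dim, eta=0.05, lam=0.05):
    """Per round t: gradient step on the loss f_t(x) = 0.5*(a_t . x - y_t)^2,
    followed by the l1 proximal operator (soft thresholding)."""
    x = np.zeros(dim)
    for a, y in stream:
        grad = (a @ x - y) * a          # gradient of the round-t loss
        x = soft_threshold(x - eta * grad, eta * lam)
    return x

# toy stream: noisy linear measurements of a fixed sparse signal
rng = np.random.default_rng(2)
x_true = np.zeros(10)
x_true[[1, 6]] = [2.0, -1.0]
stream = []
for _ in range(2000):
    a = rng.normal(size=10)
    stream.append((a, a @ x_true + 0.01 * rng.normal()))
x_hat = online_prox_grad(stream, dim=10)
```

For a truly time-varying signal the same loop applies round by round, which is the setting the dynamic regret analysis addresses.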
Social-sparsity brain decoders: faster spatial sparsity
Spatially-sparse predictors are good models for brain decoding: they give
accurate predictions and their weight maps are interpretable as they focus on a
small number of regions. However, the state of the art, based on total
variation or graph-net, is computationally costly. Here we introduce sparsity
in the local neighborhood of each voxel with social-sparsity, a structured
shrinkage operator. We find that, on brain imaging classification problems,
social-sparsity performs almost as well as total-variation models and better
than graph-net, for a fraction of the computational cost. It also very clearly
outlines predictive regions. We give details of the model and the algorithm.
Comment: in Pattern Recognition in NeuroImaging, Jun 2016, Trento, Italy
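One common form of such a structured shrinkage operator thresholds each coefficient according to the $\ell_2$ energy of its local neighborhood, so isolated small coefficients vanish while coefficients near strong ones survive. The 1D sketch below is a simplified illustration of the idea, not the exact operator or neighborhood weights used in the paper.

```python
import numpy as np

def social_shrinkage(x, lam, radius=1):
    """Structured shrinkage (1D sketch): keep or kill each coefficient
    based on the l2 energy of its local neighborhood."""
    out = np.zeros_like(x)
    for i in range(x.size):
        lo, hi = max(0, i - radius), min(x.size, i + radius + 1)
        energy = np.sqrt(np.sum(x[lo:hi] ** 2))  # neighborhood l2 energy
        if energy > lam:
            out[i] = x[i] * (1.0 - lam / energy)
    return out

coeffs = np.array([0.0, 0.0, 5.0, 0.0, 0.1, 0.0, 0.0])
shrunk = social_shrinkage(coeffs, lam=1.0)
```

Here the strong coefficient at index 2 survives (mildly shrunk), while the isolated small one at index 4 is removed; in brain decoding the neighborhoods would be the 3D voxel neighborhoods mentioned above.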