The removal of the polarization errors in low frequency dielectric spectroscopy
Electrode polarization is the dominant source of error when measuring the
low-frequency dielectric properties of electrolytes, or of suspensions of
particles (including cells) in electrolytes. We present a simple and robust
method to
remove the polarization error, which we demonstrate to work on weak and strong
ionic electrolytes as well as on cell suspensions. The method assumes no
particular behavior of the electrode polarization impedance; it makes use of
the fact that the effect dies out with frequency. The method allows for direct
measurement of the polarization impedance, whose dependence on the applied
voltage, the electrode distance, and the ionic concentration is investigated.
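The idea that the polarization contribution dies out with frequency can be sketched numerically: at high frequency the measurement estimates the sample impedance alone, and subtracting that estimate exposes the polarization impedance at low frequency. The constant-phase-element model and all parameter values below are hypothetical, for illustration only; this is not the authors' method.

```python
import numpy as np

# Synthetic measurement: a sample resistance in series with a
# polarization impedance that dies out with frequency (hypothetical
# constant-phase-element model, illustrative values only).
freq = np.logspace(1, 6, 200)                 # 10 Hz .. 1 MHz
omega = 2 * np.pi * freq
z_sample = np.full_like(freq, 500.0)          # flat 500-ohm sample
z_pol = 1e6 / (1j * omega) ** 0.8             # vanishes as f grows
z_meas = z_sample + z_pol

# Because z_pol -> 0 at high frequency, the top part of the band
# estimates the sample impedance alone...
z_sample_est = np.mean(np.abs(z_meas[freq > 1e5]))

# ...and subtracting it exposes the polarization impedance directly,
# which can then be studied at low frequency.
z_pol_est = z_meas - z_sample_est
```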
A Douglas-Rachford type primal-dual method for solving inclusions with mixtures of composite and parallel-sum type monotone operators
In this paper we propose two different primal-dual splitting algorithms for
solving inclusions involving mixtures of composite and parallel-sum type
monotone operators, both relying on an inexact Douglas-Rachford splitting
method, albeit applied in different underlying Hilbert spaces. Most
importantly, the algorithms allow one to process the bounded linear operators
and the set-valued operators occurring in the formulation of the monotone
inclusion problem separately at each iteration, the latter being individually
accessed via their resolvents. The performance of the primal-dual algorithms
is illustrated via numerical experiments on location and image deblurring
problems.
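The Douglas-Rachford method underlying both algorithms can be illustrated in its exact two-operator form on the simplest case, where the resolvents are projections onto intervals; the intervals and starting point below are illustrative, and the paper's primal-dual and parallel-sum machinery is not reproduced.

```python
import numpy as np

# Classical Douglas-Rachford splitting for 0 in A(x) + B(x), with
# A, B the normal cones of two intervals, whose resolvents are the
# projections onto those intervals (illustrative choices only).
proj_A = lambda v: np.clip(v, 0.0, 2.0)    # resolvent of A
proj_B = lambda v: np.clip(v, 1.0, 3.0)    # resolvent of B

y = 5.0                                    # governing sequence
for _ in range(50):
    x = proj_A(y)                          # backward (resolvent) step
    y = y + proj_B(2.0 * x - y) - x        # reflected correction
# x now lies in the intersection [1, 2], i.e. at a zero of A + B.
```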
Backward Penalty Schemes for Monotone Inclusion Problems
In this paper we are concerned with solving monotone inclusion problems
expressed by the sum of a set-valued maximally monotone operator with a
single-valued maximally monotone one and the normal cone to the nonempty set of
zeros of another set-valued maximally monotone operator. Depending on the
nature of the single-valued operator, we will propose two iterative penalty
schemes, both addressing the set-valued operators via backward steps. The
single-valued operator will be evaluated via a single forward step if it is
cocoercive, and via two forward steps if it is monotone and Lipschitz
continuous. The latter situation represents the starting point for dealing with
structurally complex monotone inclusion problems from an algorithmic point of
view.
An extension of the variational inequality approach for nonlinear ill-posed problems
Convergence rates results for Tikhonov regularization of nonlinear ill-posed
operator equations in abstract function spaces require the handling of both
smoothness conditions imposed on the solution and structural conditions
expressing the character of nonlinearity. Recently, the distinguished role of
variational inequalities holding on some level sets was outlined for obtaining
convergence rates results. When lower rates are expected, such inequalities
combine the smoothness properties of the solution and of the forward operator
in a sophisticated manner. In this paper, working in a Banach space setting,
we extend the variational inequality approach from Hölder rates to more
general rates, including the case of logarithmic convergence rates. (Submitted
to the Journal of Integral Equations and Applications.)
Conditional stability versus ill-posedness for operator equations with monotone operators in Hilbert space
In the literature on singular perturbation (Lavrentiev regularization) for
the stable approximate solution of operator equations with monotone operators
in Hilbert space, the phenomena of conditional stability and of local
well-posedness and ill-posedness are rarely investigated. Our goal is to
present some studies which help to bridge this gap. We discuss the impact of
conditional stability on error estimates and convergence rates for the
Lavrentiev regularization and distinguish for linear problems well-posedness
and ill-posedness in a specific manner motivated by a saturation result. The
role of the regularization error in the noise-free case, called bias, is a
crucial point in the paper for nonlinear and linear problems. In particular,
for linear operator equations general convergence rates, including logarithmic
rates, are derived by means of the method of approximate source conditions.
This allows us to extend well-known convergence rates results for the
Lavrentiev regularization that were based on general source conditions to the
case of non-selfadjoint linear monotone forward operators for which general
source conditions fail. Examples presenting the self-adjoint multiplication
operator as well as the non-selfadjoint fractional integral operator and the
Cesàro operator illustrate the theoretical results. Extensions to the
nonlinear case under specific conditions on the nonlinearity structure
complete the paper.
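Lavrentiev regularization itself replaces the operator equation Ax = y by the singularly perturbed equation (A + αI)x = y^δ. A minimal numerical sketch with a non-selfadjoint monotone matrix follows; the operator, data, noise, and choice of α are illustrative and not taken from the paper.

```python
import numpy as np

# Non-selfadjoint but monotone forward operator: its symmetric part
# (here 2*I) is positive semidefinite.
A = np.array([[2.0, 1.0],
              [-1.0, 2.0]])
x_true = np.array([1.0, 1.0])
y_delta = A @ x_true + np.array([0.01, -0.01])   # noisy data

# Lavrentiev regularization: solve the singularly perturbed equation
# (A + alpha*I) x = y_delta instead of A x = y.
alpha = 0.1
x_alpha = np.linalg.solve(A + alpha * np.eye(2), y_delta)
# x_alpha approximates x_true up to the bias plus a noise term.
```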
Convergence analysis for a primal-dual monotone + skew splitting algorithm with applications to total variation minimization
In this paper we investigate the convergence behavior of a primal-dual
splitting method for solving monotone inclusions involving mixtures of
composite, Lipschitzian and parallel sum type operators proposed by Combettes
and Pesquet in [7]. Firstly, in the particular case of convex minimization
problems, we derive convergence rates for the sequence of objective function
values by making use of conjugate duality techniques. Secondly, we propose for
the general monotone inclusion problem two new schemes which accelerate the
sequences of primal and/or dual iterates, provided strong monotonicity
assumptions for some of the involved operators are fulfilled. Finally, we
apply the theoretical results in the context of different types of image
restoration problems solved via total variation regularization.
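The problem class can be illustrated with one-dimensional total variation denoising solved by a standard primal-dual iteration of Chambolle-Pock type; this is not the monotone + skew scheme of Combettes and Pesquet analyzed in the paper, and the signal, noise, and parameters are illustrative.

```python
import numpy as np

# 1-D total variation denoising, min_x 0.5*||x - b||^2 + lam*||D x||_1,
# solved with a standard primal-dual (Chambolle-Pock type) iteration.
clean = np.repeat([0.0, 1.0], 8)                 # piecewise-constant signal
b = clean + 0.1 * np.array([1, -1] * 8)          # deterministic "noise"
n = len(b)
D = np.diff(np.eye(n), axis=0)                   # forward-difference operator

lam, tau, sigma = 0.1, 0.25, 0.25                # tau*sigma*||D||^2 < 1
x, x_bar, y = b.copy(), b.copy(), np.zeros(n - 1)
for _ in range(500):
    y = np.clip(y + sigma * D @ x_bar, -lam, lam)      # dual prox: projection
    x_new = (x - tau * D.T @ y + tau * b) / (1 + tau)  # primal prox of 0.5||.-b||^2
    x_bar = 2 * x_new - x                              # extrapolation
    x = x_new
# x is a denoised, nearly piecewise-constant reconstruction.
```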
A forward-backward-forward differential equation and its asymptotic properties
In this paper, we approach the problem of finding the zeros of the sum of a
maximally monotone operator and a monotone and Lipschitz continuous one in a
real Hilbert space via an implicit forward-backward-forward dynamical system
with nonconstant relaxation parameters and stepsizes of the resolvents. Besides
proving existence and uniqueness of strong global solutions for the
differential equation under consideration, we show weak convergence of the
generated trajectories and, under strong monotonicity assumptions, strong
convergence with exponential rate. In the particular setting of minimizing the
sum of a proper, convex and lower semicontinuous function with a smooth convex
one, we provide a rate for the convergence of the objective function along the
ergodic trajectory to its minimum value.
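An explicit discretization of such a forward-backward-forward dynamical system recovers a Tseng-type iteration: a forward step on the single-valued operator, a backward (resolvent) step on the maximally monotone one, and a correcting forward step. A sketch with the illustrative choices A = ∂|·| and B the identity, whose sum has the unique zero 0:

```python
# Tseng-type forward-backward-forward step for 0 in A(x) + B(x),
# with A the subdifferential of |.| (resolvent = soft-thresholding)
# and B the identity (monotone and Lipschitz with constant 1).
def soft(v, t):                            # resolvent of t*A
    return max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0)

B = lambda v: v
gamma = 0.5                                # stepsize < 1 / Lipschitz(B)

x = 2.0
for _ in range(100):
    p = soft(x - gamma * B(x), gamma)      # forward then backward step
    x = p + gamma * (B(x) - B(p))          # correcting forward step
# x converges to 0, the unique zero of A + B.
```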
On the acceleration of the double smoothing technique for unconstrained convex optimization problems
In this article we investigate the possibilities of accelerating the double
smoothing technique when solving unconstrained nondifferentiable convex
optimization problems. The approach relies on regularizing, in two steps, the
Fenchel dual problem associated with the problem to be solved, turning it into
an optimization problem with a differentiable, strongly convex objective
function whose gradient is Lipschitz continuous. The doubly regularized dual
problem is then solved via a fast gradient method. The aim of this paper is to
show how the properties of the functions in the objective of the primal
problem influence the implementation of the double smoothing approach and its
rate of convergence. The theoretical results are applied to linear inverse
problems by making use of different regularization functionals.
A variable smoothing algorithm for solving convex optimization problems
In this article we propose a method for solving unconstrained optimization
problems with convex and Lipschitz continuous objective functions. By making
use of the Moreau envelopes of the functions occurring in the objective, we
smooth the latter to a convex and differentiable function with Lipschitz
continuous gradient by using both variable and constant smoothing parameters.
The resulting problem is solved via an accelerated first-order method, which
allows us to approximately recover optimal solutions of the initial
optimization problem with a rate of convergence of order O((ln k)/k) for
variable smoothing and of order O(1/k) for constant smoothing. Some numerical
experiments employing the variable smoothing method in image processing and in
supervised learning classification are also presented.
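The variable smoothing idea can be sketched on a one-dimensional toy problem: the nonsmooth term is replaced by its Moreau envelope with a parameter that shrinks along the iterations, and a gradient step is taken on the smoothed objective. The objective and the schedule mu_k = 1/(k+1) below are illustrative choices, not taken from the paper.

```python
# Variable smoothing sketch: minimize |x - 1| + 0.5*x**2 by replacing
# |.| with its Moreau envelope (a Huber function) whose parameter
# mu shrinks over the iterations.
def huber_grad(v, mu):               # gradient of the Moreau envelope of |.|
    return max(-1.0, min(1.0, v / mu))

x = 3.0
for k in range(5000):
    mu = 1.0 / (k + 1)               # variable smoothing parameter
    L = 1.0 / mu + 1.0               # Lipschitz constant of the gradient
    grad = huber_grad(x - 1.0, mu) + x
    x -= grad / L                    # gradient step on the smoothed problem
# x approaches 1, the minimizer of the nonsmooth problem.
```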
Variable smoothing for convex optimization problems using stochastic gradients
We aim to solve a structured convex optimization problem, where a nonsmooth
function is composed with a linear operator. When opting for full splitting
schemes, usually, primal-dual type methods are employed as they are effective
and also well studied. However, under the additional assumption of Lipschitz
continuity of the nonsmooth function composed with the linear operator, we can
derive novel algorithms through regularization via the Moreau envelope.
Furthermore, we tackle large-scale problems by means of stochastic oracle
calls, very similar to stochastic gradient techniques. Applications to total
variation denoising and deblurring are provided.
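The combination of Moreau-envelope smoothing with stochastic oracle calls can be sketched on a toy l1 fitting problem, whose minimizer is a median; the data, fixed smoothing parameter, and step-size schedule are illustrative assumptions, not the paper's algorithm.

```python
import random

# Smoothing + stochastic gradients on a toy problem:
# minimize (1/m) * sum_i |x - b_i|, whose minimizer is the median of b.
# Each |.| is replaced by its Moreau envelope (Huber) with parameter mu;
# each iteration uses the gradient of one randomly sampled term.
b = [0.0, 1.0, 2.0, 1.0, 1.0]         # median (and smoothed minimizer) is 1
mu = 0.1

random.seed(0)
x, avg = 0.0, 0.0
n_steps = 20000
for k in range(n_steps):
    bi = random.choice(b)             # stochastic oracle call
    g = max(-1.0, min(1.0, (x - bi) / mu))
    x -= 0.5 / (k + 1) ** 0.5 * g     # decreasing step size
    avg += x / n_steps                # averaged iterate
# avg approaches the median, 1.
```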