Weak Minimizers, Minimizers and Variational Inequalities for Set-Valued Functions. A Blooming Wreath?
In the literature, necessary and sufficient conditions in terms of
variational inequalities are introduced to characterize minimizers of convex
set-valued functions with values in a conlinear space. Similar results are
proved for a weaker concept of minimizers and weaker variational inequalities.
The implications are proved using scalarization techniques that eventually
provide original problems, not fully equivalent to the set-valued counterparts.
Therefore, we try, in the course of this note, to close the network among the
various notions proposed. More specifically, we prove that a minimizer is
always a weak minimizer, and that a solution to the stronger variational inequality
is always also a solution to the weak variational inequality of the same type. As
a special case we obtain a complete characterization of efficiency and weak
efficiency in vector optimization by set-valued variational inequalities and
their scalarizations. Indeed, this might eventually demonstrate the usefulness of the
set-optimization approach for renewing the study of vector optimization.
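The vector-valued special case mentioned at the end has a classical scalarization behind it. As a reminder of standard convex vector optimization (a well-known fact about the field, not a statement taken from this paper): for a convex map $F \colon X \to \mathbb{R}^m$ ordered by the nonnegative orthant, weak efficiency of a point $\bar{x}$ is equivalent to linear scalarization,

```latex
% Weak efficiency via linear scalarization (convex case):
% \bar{x} is weakly efficient for F if and only if
\exists\, \lambda \in \mathbb{R}^m_+ \setminus \{0\} :\quad
  \langle \lambda, F(\bar{x}) \rangle \;\le\; \langle \lambda, F(x) \rangle
  \quad \text{for all } x \in X .
```

The set-valued results summarized above generalize this equivalence beyond the vector-valued convex setting.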
Necessary conditions for variational regularization schemes
We study variational regularization methods in a general framework, more
precisely those methods that use a discrepancy and a regularization functional.
While several sets of sufficient conditions are known to obtain a
regularization method, we start with an investigation of the converse question:
What could necessary conditions for a variational method to provide a
regularization method look like? To this end, we formalize the notion of a
variational scheme and start with a comparison of three different instances of
variational methods. Then we focus on the data space model and investigate the
role and interplay of the topological structure, the convergence notion and the
discrepancy functional. In particular, we deduce necessary conditions for the
discrepancy functional to fulfill the usual continuity assumptions. The results are
applied to discrepancy functionals given by Bregman distances, and in particular to
the Kullback-Leibler divergence.
Comment: To appear in Inverse Problems
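The closing sentence links Bregman distances to the Kullback-Leibler divergence; the connection is concrete: the Bregman distance generated by the negative entropy $\varphi(x)=\sum_i x_i\log x_i$ is exactly the generalized KL divergence. A minimal NumPy sketch (function names are ours, for illustration only):

```python
import numpy as np

def bregman(phi, grad_phi, x, y):
    """Bregman distance D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>."""
    return phi(x) - phi(y) - np.dot(grad_phi(y), x - y)

def neg_entropy(x):
    # phi(x) = sum_i x_i log x_i, convex on the positive orthant
    return np.sum(x * np.log(x))

def grad_neg_entropy(y):
    # componentwise gradient: log y_i + 1
    return np.log(y) + 1.0

def kl(x, y):
    # generalized Kullback-Leibler divergence for positive vectors:
    # KL(x, y) = sum_i ( x_i log(x_i / y_i) - x_i + y_i )
    return np.sum(x * np.log(x / y) - x + y)

x = np.array([0.2, 0.5, 0.3])
y = np.array([0.4, 0.4, 0.2])
# The negative-entropy Bregman distance coincides with generalized KL.
assert np.isclose(bregman(neg_entropy, grad_neg_entropy, x, y), kl(x, y))
```

Expanding the Bregman definition term by term makes the identity transparent: the $y\log y$ contributions cancel and the linear term produces $-x_i\log y_i - x_i + y_i$.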
Functional Liftings of Vectorial Variational Problems with Laplacian Regularization
We propose a functional lifting-based convex relaxation of variational
problems with Laplacian-based second-order regularization. The approach rests
on ideas from the calibration method as well as from sublabel-accurate
continuous multilabeling approaches, and makes these approaches amenable for
variational problems with vectorial data and higher-order regularization, as is
common in image processing applications. We motivate the approach in the
function space setting and prove that, in the special case of absolute
Laplacian regularization, it encompasses the discretization-first
sublabel-accurate continuous multilabeling approach as a special case. We
present a mathematical connection between the lifted and original functional
and discuss possible interpretations of minimizers in the lifted function
space. Finally, we exemplarily apply the proposed approach to 2D image
registration problems.
Comment: 12 pages, 3 figures; accepted at the conference "Scale Space and Variational Methods" in Hofgeismar, Germany 201
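The second-order regularizer at the heart of this abstract can be illustrated with the standard five-point discrete Laplacian. The sketch below is plain NumPy, not the paper's lifting machinery; it shows why an absolute-Laplacian penalty leaves affine images unpenalized, unlike first-order regularizers such as total variation:

```python
import numpy as np

def discrete_laplacian(u):
    """Five-point-stencil Laplacian of a 2D array, evaluated at interior points."""
    return (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
            - 4.0 * u[1:-1, 1:-1])

def abs_laplacian_energy(u):
    # Absolute (L1) Laplacian regularizer: sum over pixels of |Delta u|.
    return np.abs(discrete_laplacian(u)).sum()

# An affine image has zero Laplacian, so the second-order penalty vanishes.
xx, yy = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
ramp = (2.0 * xx + 3.0 * yy).astype(float)
assert abs_laplacian_energy(ramp) == 0.0

# A quadratic image is penalized: Delta(x^2) = 2 at every interior point.
quad = xx.astype(float) ** 2
assert abs_laplacian_energy(quad) > 0.0
```

The non-smooth absolute value is what makes the convex relaxation in the paper nontrivial; a squared Laplacian penalty would be a plain quadratic.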
Second-order subdifferential calculus with applications to tilt stability in optimization
The paper concerns the second-order generalized differentiation theory of
variational analysis and new applications of this theory to some problems of
constrained optimization in finitedimensional spaces. The main attention is
paid to the so-called (full and partial) second-order subdifferentials of
extended-real-valued functions, which are dual-type constructions generated by
coderivatives of first-order subdifferential mappings. We develop an extended
second-order subdifferential calculus and analyze the basic second-order
qualification condition ensuring the fulfillment of the principal second-order
chain rule for strongly and fully amenable compositions. The calculus results
obtained in this way and computing the second-order subdifferentials for
piecewise linear-quadratic functions and their major specifications are applied
then to the study of tilt stability of local minimizers for important classes
of problems in constrained optimization that include, in particular, problems
of nonlinear programming and certain classes of extended nonlinear programs
described in composite terms.
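The central construction can be stated compactly. The following is the standard definition from the second-order variational analysis literature (a fact about that literature, not a quotation from this paper): for an extended-real-valued function $f \colon \mathbb{R}^n \to \overline{\mathbb{R}}$ and a subgradient $\bar{y} \in \partial f(\bar{x})$, the second-order subdifferential is the coderivative of the first-order subdifferential mapping,

```latex
% Second-order subdifferential as a coderivative of \partial f:
\partial^2 f(\bar{x}, \bar{y})(u)
  \;=\; \bigl(D^{*}\,\partial f\bigr)(\bar{x}, \bar{y})(u),
  \qquad u \in \mathbb{R}^n .
```

This dual-type construction is what the abstract's chain rules and tilt-stability criteria are formulated in terms of.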
On the minimization of Dirichlet eigenvalues
Results are obtained for two minimization problems involving $\lambda_k(\Omega)$, the
$k$'th eigenvalue of the Dirichlet Laplacian acting in $\Omega$, where $|\Omega|$
denotes the Lebesgue measure of $\Omega$, $\mathcal{P}(\Omega)$ denotes the perimeter
of $\Omega$, and where $F$ is in a suitable class of set functions. The latter
include, for example, the perimeter of $\Omega$ and the moment of inertia of
$\Omega$ with respect to its center of mass.
Comment: 15 pages
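The quantities being minimized are explicit in one dimension, where the Dirichlet eigenvalues are known in closed form: on the interval $(0,1)$ they are $\lambda_k = (k\pi)^2$. A short NumPy check against a generic finite-difference discretization (a textbook sketch, not taken from the paper):

```python
import numpy as np

# Discretize -u'' on (0, 1) with Dirichlet boundary conditions using the
# standard second-difference matrix on n interior grid points.
n = 500
h = 1.0 / (n + 1)
main = 2.0 * np.ones(n) / h**2
off = -1.0 * np.ones(n - 1) / h**2
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

evals = np.sort(np.linalg.eigvalsh(A))

# The discrete eigenvalues approximate lambda_k = (k*pi)^2 to O(h^2).
for k in range(1, 4):
    assert np.isclose(evals[k - 1], (k * np.pi) ** 2, rtol=1e-3)
```

In higher dimensions no closed form is available for general $\Omega$, which is what makes shape-optimization results of the kind summarized above nontrivial.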