We present a novel approach to nonlinear constrained Tikhonov regularization
from the viewpoint of optimization theory. A second-order sufficient optimality
condition is suggested as a nonlinearity condition to handle the nonlinearity
of the forward operator. The approach is exploited to derive convergence rate
results for a priori as well as a posteriori choice rules, e.g., the discrepancy
principle and the balancing principle, for selecting the regularization parameter.
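As a minimal sketch of the setting (the notation below is assumed for illustration and not taken from the abstract), the constrained Tikhonov formulation and the discrepancy principle can be written as:

```latex
% Hypothetical notation: F is the nonlinear forward operator, C the constraint set,
% y^\delta the noisy data with noise level \delta, x_0 an a priori guess,
% and \alpha > 0 the regularization parameter.
\min_{x \in C} \; J_\alpha(x) := \bigl\| F(x) - y^\delta \bigr\|^2
  + \alpha \, \| x - x_0 \|^2 .

% Discrepancy principle: choose \alpha = \alpha(\delta) such that the
% minimizer x_\alpha^\delta matches the data up to the noise level,
% \| F(x_\alpha^\delta) - y^\delta \| = c\,\delta, \qquad c \ge 1 .
```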
The idea is further illustrated on a general class of parameter identification
problems, for which (new) source and nonlinearity conditions are derived and
the structural property of the nonlinearity term is revealed. A number of
examples including identifying distributed parameters in elliptic differential
equations are presented.

Comment: 21 pages, to appear in Inverse Problems