Computation of the Binding Energies in the Inverse Problem Framework
We formalized the nuclear mass problem in the inverse problem framework. This
approach allows us to infer the underlying model parameters from experimental
observation, rather than to predict the observations from the model parameters.
The inverse problem was formulated for a numerically generalized
semi-empirical mass formula of Bethe and von Weizsäcker, and solved step by
step using the AME2012 nuclear database.
The solution of the overdetermined system of nonlinear equations was obtained
with Aleksandrov's auto-regularization method of Gauss-Newton type for
ill-posed problems. In the resulting generalized model, the corrections to the
binding energy depend on nine proton (2, 8, 14, 20, 28, 50, 82, 108, 124) and
ten neutron (2, 8, 14, 20, 28, 50, 82, 124, 152, 202) magic numbers, as well
as on the asymptotic boundaries of their influence. These results help to
evaluate the borders of the nuclear landscape and indicate its limits.
The efficiency of the applied approach was checked by comparing relevant
results with results obtained independently.
Comment: 9 pages, 1 figure, Proceedings of the International Symposium on
Exotic Nuclei EXON-2016, Kazan, Russia, 4-10 September 2016. Based on
arXiv:1602.0677
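As a rough illustration of the regularized Gauss-Newton family of methods the abstract refers to, the sketch below applies a generic Gauss-Newton iteration with a decaying regularization term to a small overdetermined nonlinear least-squares fit. This is not Aleksandrov's auto-regularization method itself; the model function, data, and decay schedule are illustrative assumptions.

```python
import numpy as np

def regularized_gauss_newton(residual, jacobian, p0, alpha0=1.0, q=0.5, n_iter=20):
    """Gauss-Newton iteration with a geometrically decaying regularization
    term for an overdetermined nonlinear least-squares problem r(p) ~ 0.
    Generic sketch, not Aleksandrov's auto-regularization method."""
    p = np.asarray(p0, dtype=float)
    alpha = alpha0
    for _ in range(n_iter):
        r = residual(p)            # residual vector, length m >= len(p)
        J = jacobian(p)            # m x n Jacobian of the residual
        # Regularized normal equations: (J^T J + alpha I) dp = -J^T r
        A = J.T @ J + alpha * np.eye(len(p))
        p = p + np.linalg.solve(A, -J.T @ r)
        alpha *= q                 # relax regularization as iterates improve
    return p

# Toy example: recover (a, b) = (2, 0.5) in y = a * exp(b * x) from exact data
x = np.linspace(0.0, 1.0, 10)
y = 2.0 * np.exp(0.5 * x)
res = lambda p: p[0] * np.exp(p[1] * x) - y
jac = lambda p: np.column_stack([np.exp(p[1] * x),
                                 p[0] * x * np.exp(p[1] * x)])
p_fit = regularized_gauss_newton(res, jac, [1.0, 0.0])
```

Because the toy data are exact (a zero-residual problem), the fixed point of the damped iteration coincides with the exact parameters, regardless of the remaining regularization.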
Optimization Methods for Inverse Problems
Optimization plays an important role in solving many inverse problems.
Indeed, the task of inversion often either involves or is fully cast as a
solution of an optimization problem. In this light, the non-linear,
non-convex, and large-scale nature of many of these inversions gives rise to
some very challenging optimization problems. The inverse problem community has
long been developing various techniques for solving such optimization tasks.
However, other, seemingly disjoint communities, such as that of machine
learning, have developed, almost in parallel, interesting alternative methods
which might have stayed under the radar of the inverse problem community. In
this survey, we aim to change that. In doing so, we first discuss current
state-of-the-art optimization methods widely used in inverse problems. We then
survey recent related advances in addressing similar challenges in problems
faced by the machine learning community, and discuss their potential advantages
for solving inverse problems. By highlighting the similarities among the
optimization challenges faced by the inverse problem and the machine learning
communities, we hope that this survey can serve as a bridge in bringing
together these two communities and encourage cross-fertilization of ideas.
Comment: 13 pages
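The abstract's central observation, that inversion is often cast directly as an optimization problem, can be made concrete with a minimal example: a linear inverse problem written as Tikhonov-regularized least squares and solved both in closed form and by the kind of plain first-order method favored in machine learning. All sizes and parameter values below are illustrative assumptions.

```python
import numpy as np

# A linear inverse problem y = A x, cast as Tikhonov-regularized least
# squares: min_x ||A x - y||^2 + lam * ||x||^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = rng.standard_normal(20)
y = A @ x_true
lam = 1e-3

# Closed-form solution of the normal equations, for reference
x_direct = np.linalg.solve(A.T @ A + lam * np.eye(20), A.T @ y)

# The same problem solved by plain gradient descent, the kind of
# first-order method the machine-learning community favors at scale
x = np.zeros(20)
L = np.linalg.norm(A, 2) ** 2 + lam   # Lipschitz constant of the gradient
for _ in range(5000):
    grad = A.T @ (A @ x - y) + lam * x
    x -= grad / L
```

The direct solve costs O(n^3) and is only feasible at small scale; the gradient loop touches `A` only through matrix-vector products, which is what makes first-order methods attractive for large inversions.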
A novel two-point gradient method for regularization of inverse problems in Banach spaces
In this paper, we introduce a novel two-point gradient method for solving
ill-posed problems in Banach spaces and study its convergence. The method is
based on the well-known iteratively regularized Landweber iteration combined
with an extrapolation strategy. The general formulation of the iteratively
regularized Landweber iteration in Banach spaces excludes certain non-smooth
penalty functionals, such as total-variation-like terms. The novel scheme
presented in this paper allows the use of such non-smooth penalty terms,
which can be helpful in practical applications
involving the reconstruction of several important features of solutions such as
piecewise constancy and sparsity. We carefully discuss the choices for
important parameters, such as combination parameters and step sizes involved in
the design of the method. Additionally, we discuss an example to validate our
assumptions.
Comment: Submitted to Applicable Analysis
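In a Hilbert-space setting, the iteration underlying a two-point gradient method can be sketched as a Landweber step preceded by a Nesterov-style extrapolation between the last two iterates. The Banach-space formulation with non-smooth penalties in the abstract is considerably more involved; the code below, with an assumed constant combination parameter `mu`, only conveys the shape of the iteration.

```python
import numpy as np

def two_point_landweber(A, y, n_iter=500, mu=0.9):
    """Hilbert-space sketch of a Landweber iteration accelerated by a
    two-point (extrapolation) step for a linear problem A x = y.
    `mu` plays the role of the combination parameter; here it is simply
    held constant, which is an illustrative choice."""
    omega = 1.0 / np.linalg.norm(A, 2) ** 2     # step size chosen for stability
    x_prev = x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x + mu * (x - x_prev)               # two-point extrapolation
        x_prev, x = x, z - omega * A.T @ (A @ z - y)   # Landweber step at z
    return x

# Toy well-posed example: recover x_true from exact data
A = np.array([[2.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_true = np.array([1.0, 2.0])
x_rec = two_point_landweber(A, A @ x_true)
```

For ill-posed problems the iteration would additionally be stopped early via a discrepancy criterion; the toy example above is well-posed, so the iterates simply converge to the exact solution.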
Projected Newton Method for noise constrained Tikhonov regularization
Tikhonov regularization is a popular approach to obtain a meaningful solution
for ill-conditioned linear least squares problems. A relatively simple way of
choosing a good regularization parameter is given by Morozov's discrepancy
principle. However, most approaches require the solution of the Tikhonov
problem for many different values of the regularization parameter, which is
computationally demanding for large scale problems. We propose a new and
efficient algorithm which simultaneously solves the Tikhonov problem and finds
the corresponding regularization parameter such that the discrepancy principle
is satisfied. We achieve this by formulating the problem as a nonlinear system
of equations and solving this system using a line search method. We obtain a
good search direction by projecting the problem onto a low dimensional Krylov
subspace and computing the Newton direction for the projected problem. This
projected Newton direction, which is significantly less computationally
expensive to calculate than the true Newton direction, is then combined with a
backtracking line search to obtain a globally convergent algorithm, which we
refer to as the Projected Newton method. We prove convergence of the algorithm
and illustrate the improved performance over current state-of-the-art solvers
with some numerical experiments.
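The scalar condition that the discrepancy principle imposes on the regularization parameter can be illustrated on a small dense problem, where it is cheap to solve by bisection; the paper's point is that at scale one instead computes a Newton direction on a Krylov-projected problem. Problem sizes and the noise level below are illustrative assumptions.

```python
import numpy as np

# Morozov's discrepancy principle for Tikhonov regularization: choose lam
# so that the residual norm ||A x_lam - y|| matches the noise level delta.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 15))
y_clean = A @ rng.standard_normal(15)
noise = 0.1 * rng.standard_normal(40)
y = y_clean + noise
delta = np.linalg.norm(noise)                 # noise level assumed known

def discrepancy(lam):
    """Residual norm of the Tikhonov solution minus delta; the root of
    this monotone function of lam satisfies the discrepancy principle."""
    x = np.linalg.solve(A.T @ A + lam * np.eye(15), A.T @ y)
    return np.linalg.norm(A @ x - y) - delta

lo, hi = 1e-12, 1e6                           # bracket the root in lam
for _ in range(200):
    mid = np.sqrt(lo * hi)                    # bisect on a logarithmic scale
    if discrepancy(mid) < 0:
        lo = mid
    else:
        hi = mid
lam_star = np.sqrt(lo * hi)
```

Each evaluation of `discrepancy` here solves a full Tikhonov problem, which is exactly the cost the abstract's projected Newton method is designed to avoid by working with one growing Krylov subspace instead.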