58 research outputs found
A descent algorithm for the optimal control of ReLU neural network informed PDEs based on approximate directional derivatives
We propose and analyze a numerical algorithm for solving a class of optimal control problems for learning-informed semilinear partial differential equations. The latter is a class of PDEs with constituents that are in principle unknown and are approximated by nonsmooth ReLU neural networks. We first show that directly smoothing the ReLU network, with the aim of using classical numerical solvers, can have certain disadvantages, namely that it may introduce multiple solutions for the corresponding state equation. This motivates us to devise a numerical algorithm that treats the nonsmooth optimal control problem directly, employing a descent algorithm inspired by a bundle-free method. Several numerical examples are provided, and the efficiency of the algorithm is demonstrated.
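The kind of direct smoothing discussed above can be illustrated with the softplus function, a standard smooth surrogate for ReLU; the paper does not specify its smoothing, so softplus is an assumption here:

```python
import numpy as np

def relu(x):
    """Nonsmooth ReLU activation max(x, 0)."""
    return np.maximum(x, 0.0)

def softplus(x, mu):
    """Smooth surrogate mu * log(1 + exp(x / mu)); recovers ReLU as mu -> 0.
    (Illustrative choice; the smoothing used in the paper may differ.)
    logaddexp keeps the evaluation stable for small mu."""
    return mu * np.logaddexp(0.0, x / mu)

xs = np.linspace(-1.0, 1.0, 5)
for mu in (0.1, 0.01, 0.001):
    err = np.max(np.abs(softplus(xs, mu) - relu(xs)))
    print(f"mu={mu:g}: max |softplus - ReLU| = {err:.2e}")
```

The maximal gap equals mu * log 2, attained at x = 0, so the surrogate converges uniformly as mu decreases; the paper's point is that even such a convergent smoothing can change the solution set of the state equation.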
Implementation of a continuation method for nonlinear complementarity problems via normal maps
Ankara: Department of Industrial Engineering and Institute of Engineering and Sciences, Bilkent University, 1997. Thesis (Master's) -- Bilkent University, 1997. Includes bibliographical references. In this thesis, a continuation method for nonlinear complementarity problems via normal maps, developed by Chen, Harker and Pinar [8], is implemented. The continuation method uses a smooth function to approximate the normal map reformulation of nonlinear complementarity problems. The algorithm is implemented and tested with two different plus-smoothing functions, namely the interior point plus-smoothing function and the piecewise quadratic plus-smoothing function, and the two functions are compared. The algorithm is tested on several known problems. Erkan, Ali. M.S.
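The two families of plus-smoothing functions can be sketched as follows. The abstract does not give the exact formulas, so these are standard representatives of each family (an interior-point/CHKS-type smoothing and a piecewise quadratic smoothing of the plus function max(x, 0)):

```python
import numpy as np

def plus_interior_point(x, mu):
    """Interior-point (CHKS-type) smoothing of max(x, 0):
    (x + sqrt(x^2 + 4 mu^2)) / 2. Smooth for mu > 0."""
    return 0.5 * (x + np.sqrt(x * x + 4.0 * mu * mu))

def plus_piecewise_quadratic(x, mu):
    """Piecewise quadratic smoothing: identical to max(x, 0) outside
    |x| <= mu/2, quadratic blend inside the band."""
    return np.where(x >= 0.5 * mu, x,
           np.where(x <= -0.5 * mu, 0.0,
                    (x + 0.5 * mu) ** 2 / (2.0 * mu)))

xs = np.linspace(-2.0, 2.0, 9)
for mu in (1.0, 0.1, 0.01):
    e1 = np.max(np.abs(plus_interior_point(xs, mu) - np.maximum(xs, 0.0)))
    e2 = np.max(np.abs(plus_piecewise_quadratic(xs, mu) - np.maximum(xs, 0.0)))
    print(f"mu={mu:g}: interior-point err={e1:.4f}, piecewise err={e2:.4f}")
```

Both approximations converge to the plus function as mu -> 0: the interior-point error peaks at mu (at x = 0), while the piecewise quadratic error peaks at mu/8, which is one reason for comparing the two in practice.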
ACQUIRE: an inexact iteratively reweighted norm approach for TV-based Poisson image restoration
We propose a method, called ACQUIRE, for the solution of constrained
optimization problems modeling the restoration of images corrupted by Poisson
noise. The objective function is the sum of a generalized Kullback-Leibler
divergence term and a TV regularizer, subject to nonnegativity and possibly
other constraints, such as flux conservation. ACQUIRE is a line-search method
that considers a smoothed version of TV, based on a Huber-like function, and
computes the search directions by minimizing quadratic approximations of the
problem, built by exploiting some second-order information. A classical
second-order Taylor approximation is used for the Kullback-Leibler term and an
iteratively reweighted norm approach for the smoothed TV term. We prove that
the sequence generated by the method has a subsequence converging to a
minimizer of the smoothed problem, and that every limit point of the sequence
is such a minimizer.
Furthermore, if the problem is strictly convex, the whole sequence is
convergent. We note that convergence is achieved without requiring the exact
minimization of the quadratic subproblems; low accuracy in this minimization
can be used in practice, as shown by numerical results. Experiments on
reference test problems show that our method is competitive with
well-established methods for TV-based Poisson image restoration, in terms of
both computational efficiency and image quality. Comment: 37 pages, 13 figures
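The two ingredients named above, a Huber-like smoothing of the absolute value inside TV and an iteratively reweighted quadratic surrogate for it, can be sketched in isolation. The specific formulas below are standard constructions assumed for illustration, not necessarily the ones used in ACQUIRE:

```python
import numpy as np

def huber(t, delta):
    """Huber-like smoothing of |t|: quadratic near 0, linear in the tails."""
    a = np.abs(t)
    return np.where(a <= delta, t * t / (2.0 * delta), a - delta / 2.0)

def irn_quadratic(t, t_k, delta):
    """A reweighted quadratic majorizer of huber(., delta) at the current
    iterate t_k (standard IRN-style construction; the paper's weights may
    differ): q(t) = t^2 / (2 w_k) + w_k / 2 - delta / 2, w_k = max(|t_k|, delta).
    q majorizes huber everywhere and touches it at t = t_k."""
    w = np.maximum(np.abs(t_k), delta)
    return t * t / (2.0 * w) + w / 2.0 - delta / 2.0

t = np.linspace(-3.0, 3.0, 601)
t_k, delta = 1.5, 0.5
gap = irn_quadratic(t, t_k, delta) - huber(t, delta)
print(f"min gap over grid = {gap.min():.2e} (nonnegative => majorizer)")
```

Minimizing such a quadratic surrogate instead of the smoothed TV term is what makes each inner subproblem a (reweighted) least-squares problem, which the method then only needs to solve inexactly.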
A trust region-type normal map-based semismooth Newton method for nonsmooth nonconvex composite optimization
We propose a novel trust region method for solving a class of nonsmooth and
nonconvex composite-type optimization problems. The approach embeds inexact
semismooth Newton steps for finding zeros of a normal map-based stationarity
measure for the problem in a trust region framework. Based on a new merit
function and acceptance mechanism, global convergence and transition to fast
local q-superlinear convergence are established under standard conditions. In
addition, we verify that the proposed trust region globalization is compatible
with the Kurdyka-Łojasiewicz (KL) inequality, yielding finer convergence
results. We further derive new normal map-based representations of the
associated second-order optimality conditions that have direct connections to
the local assumptions required for fast convergence. Finally, we study the
behavior of our algorithm when the Hessian matrix of the smooth part of the
objective function is approximated by BFGS updates. We successfully link the KL
theory, properties of the BFGS approximations, and a Dennis-Moré-type
condition to show superlinear convergence of the quasi-Newton version of our
method. Numerical experiments on sparse logistic regression and image
compression illustrate the efficiency of the proposed algorithm. Comment: 56 pages
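A normal map-based stationarity measure of the kind used above can be written down concretely for a simple composite problem f(x) + g(x) with g the l1-norm, where the proximal operator is soft thresholding. The 1-D instance below is an illustrative sketch, not the paper's general setting:

```python
import numpy as np

def soft_threshold(z, tau):
    """Proximal operator of tau * ||.||_1 (soft thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def normal_map(z, grad_f, lam, tau):
    """Normal map F(z) = grad_f(prox(z)) + (z - prox(z)) / lam,
    with prox = prox of lam * tau * ||.||_1. A zero z* of F yields a
    stationary point x* = prox(z*) of f + tau * ||.||_1."""
    x = soft_threshold(z, lam * tau)
    return grad_f(x) + (z - x) / lam

# 1-D example: f(x) = 0.5 * (x - b)^2, g(x) = tau * |x|;
# the minimizer is known in closed form: x* = soft_threshold(b, tau).
b, tau, lam = 2.0, 0.5, 1.0
grad_f = lambda x: x - b
x_star = soft_threshold(np.array([b]), tau)   # closed-form solution
z_star = x_star - lam * grad_f(x_star)        # corresponding normal-map point
print("F(z*) =", normal_map(z_star, grad_f, lam, tau))  # ~ 0
```

Because the soft threshold is piecewise affine, F is semismooth rather than smooth, which is exactly why a semismooth Newton step (rather than a classical one) is applied to F inside the trust region framework.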
A New Noninterior Continuation Method for Solving a System of Equalities and Inequalities
By using slack variables and the minimum function, we first reformulate the system of equalities and inequalities as a system of nonsmooth equations and then, using a smoothing technique, construct a smooth operator. A new noninterior continuation method is proposed to solve the resulting system of smooth equations. We show that any accumulation point of the iteration sequence generated by our algorithm is a solution of the system of equalities and inequalities. Numerical experiments demonstrate the feasibility and efficiency of the algorithm.
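The smoothing step for the minimum function can be sketched with a CHKS-type operator; the paper's exact smoothing is not given in the abstract, so this formula is an assumed representative:

```python
import numpy as np

def smoothed_min(a, b, mu):
    """CHKS-type smoothing of min(a, b):
    (a + b - sqrt((a - b)^2 + 4 mu^2)) / 2, which tends to min(a, b)
    as mu -> 0. (Illustrative; the paper's smoothing may differ.)"""
    return 0.5 * (a + b - np.sqrt((a - b) ** 2 + 4.0 * mu * mu))

a = np.linspace(-2.0, 2.0, 41)
b = np.linspace(2.0, -2.0, 41)
for mu in (1.0, 0.1, 0.01):
    err = np.max(np.abs(smoothed_min(a, b, mu) - np.minimum(a, b)))
    print(f"mu={mu:g}: max |smoothed_min - min| = {err:.4f}")
```

Replacing each nonsmooth min-equation by its smoothed counterpart gives a smooth system parameterized by mu; a continuation method then drives mu toward zero while following the solution path.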