Extragradient methods for elliptic inverse problems and image denoising
Numerous mathematical models in applied mathematics can be expressed as a partial differential equation involving certain coefficients. These coefficients describe physical properties of the model. The direct problem in this context is to solve the partial differential equation when the coefficients are known. By contrast, an inverse problem asks for the identification of the variable coefficients when a measurement of a solution of the partial differential equation is available. One of the most commonly used approaches to this inverse problem is to pose a constrained minimization problem, which can be written as a variational inequality. The main contribution of this thesis is to employ various variants of extragradient methods to solve the inverse problem of parameter identification by posing it as a variational inequality. We present a thorough comparison of the projected gradient method, the scaled projected gradient method, and several extragradient methods, including the Marcotte variants, the He-Goldstein type method, the projection-contraction methods proposed by Solodov and Tseng, and the hyperplane method developed by Iusem. We also test the performance of the extragradient methods on the image deblurring problem.
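The extragradient family compared in this thesis is built around Korpelevich's two-step iteration: a predictor projection followed by a corrector projection that re-evaluates the operator at the predictor point. As a minimal sketch for a generic variational inequality VI(F, C) (illustrative only; the affine operator, box constraint, and step size below are assumptions, not the thesis's setup):

```python
import numpy as np

def extragradient(F, project, x0, step=0.1, tol=1e-8, max_iter=5000):
    """Korpelevich's extragradient method for VI(F, C):
    find x* in C with <F(x*), y - x*> >= 0 for all y in C.
    `project` is the Euclidean projection onto the constraint set C."""
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        y = project(x - step * F(x))       # predictor: a gradient-projection step
        x_new = project(x - step * F(y))   # corrector: reuse F at the predictor
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Hypothetical example: affine monotone operator on the box [0, 1]^2.
M = np.array([[1.0, 2.0], [-2.0, 1.0]])   # monotone: symmetric part is I
q = np.array([1.0, -1.0])
x_star = extragradient(lambda x: M @ x + q,
                       lambda x: np.clip(x, 0.0, 1.0),
                       np.array([0.5, 0.5]))
```

For monotone Lipschitz operators the method converges for step sizes below 1/L; the single re-evaluation of F at the predictor is what distinguishes it from the plain projected gradient method, which can cycle on rotational operators like the one above.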
A tool for the design of public transportation services
This paper describes a model for determining the number of lines of a public transportation service, the layout of those lines among a set of candidates, their service capacity, and the resulting assignment of passengers to these facilities, so as to minimize the total cost of the system. The model takes into account the delays for passengers that queue at the stations, reflecting congestion effects in the transport service system, as well as the abandonment of waiting queues at stations by passengers. Passengers are assumed to choose the lines they ride by selecting the most convenient service line, following a user-equilibrium formulation.
A Lemke-like algorithm for the Multiclass Network Equilibrium Problem
We consider a nonatomic congestion game on a connected graph with several classes of players. Each player wants to travel from its origin vertex to its destination vertex at minimum cost, and all players of a given class share the same characteristics: the cost functions on each arc and the origin-destination pair. Under some mild conditions a Nash equilibrium is known to exist, but computing an equilibrium in the multiclass case is an open problem for general cost functions. We consider the specific case where the cost functions are affine and propose an extension of Lemke's algorithm able to solve this problem. At the same time, it provides a constructive proof of the existence of an equilibrium in this case.
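Lemke's algorithm, which this paper extends, solves a linear complementarity problem (LCP) by complementary pivoting: an artificial variable z0 enters the basis, and thereafter each entering variable is the complement of the variable that just left, until z0 itself leaves. A textbook sketch for a generic LCP (this is the classical method, not the paper's multiclass extension, and it omits the lexicographic tie-breaking a robust implementation needs):

```python
import numpy as np

def lemke(M, q, max_iter=100):
    """Complementary pivoting for LCP(q, M):
    find z >= 0 with w = M z + q >= 0 and z . w = 0,
    using the covering vector e = (1, ..., 1)."""
    q = np.asarray(q, float)
    n = len(q)
    if np.all(q >= 0):
        return np.zeros(n)                  # trivial solution
    # Tableau columns: [w_0..w_{n-1} | z_0..z_{n-1} | z0 | rhs]
    T = np.hstack([np.eye(n), -np.asarray(M, float),
                   -np.ones((n, 1)), q.reshape(-1, 1)])
    basis = list(range(n))                  # the w_i start in the basis
    row = int(np.argmin(q))                 # most negative q_i leaves first
    entering = 2 * n                        # z0 enters
    for _ in range(max_iter):
        T[row] /= T[row, entering]          # pivot on (row, entering)
        for i in range(n):
            if i != row:
                T[i] -= T[i, entering] * T[row]
        leaving, basis[row] = basis[row], entering
        if leaving == 2 * n:                # z0 left the basis: solved
            break
        # the complement of the leaving variable enters next
        entering = leaving + n if leaving < n else leaving - n
        col = T[:, entering]
        ratios = np.where(col > 1e-12, T[:, -1] / col, np.inf)
        if np.all(np.isinf(ratios)):
            raise RuntimeError("ray termination: no solution found")
        row = int(np.argmin(ratios))        # minimum-ratio test
    z = np.zeros(n)
    for i, b in enumerate(basis):
        if n <= b < 2 * n:
            z[b - n] = T[i, -1]
    return z
```

For affine arc costs, a network equilibrium can be written as exactly this kind of LCP, which is why a Lemke-like pivoting scheme is a natural candidate for the multiclass problem.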
Smoothing Methods for Nonlinear Complementarity Problems
In this paper, we present a new smoothing approach for solving general nonlinear complementarity problems. Under the P0 condition on the original problems, we prove existence and convergence results. We also present an error estimate under a new and general monotonicity condition. Numerical tests confirm the efficiency of the proposed methods.
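To make the smoothing idea concrete, here is a generic sketch using the Chen-Harker-Kanzow-Smale (CHKS) smoothing of min(a, b), not necessarily the authors' scheme: the nonsmooth reformulation min(z_i, F_i(z)) = 0 of the NCP is replaced by a smooth equation solved with Newton's method while the smoothing parameter mu is driven to zero.

```python
import numpy as np

def smoothing_ncp(F, J, z0, mu0=1.0, tol=1e-10, n_outer=12, n_newton=50):
    """Smoothing-method sketch for the NCP:
    find z >= 0 with F(z) >= 0 and z . F(z) = 0.
    CHKS smoothing: phi_mu(a, b) = (a + b - sqrt((a-b)**2 + 4*mu**2)) / 2,
    which tends to min(a, b) as mu -> 0.  J(z) is the Jacobian of F."""
    z, mu = np.asarray(z0, float), mu0
    n = len(z)
    for _ in range(n_outer):
        for _ in range(n_newton):           # Newton on Phi_mu(z) = 0
            f = F(z)
            r = np.sqrt((z - f) ** 2 + 4 * mu ** 2)
            phi = (z + f - r) / 2
            if np.linalg.norm(phi) < tol:
                break
            d = (z - f) / r                 # strictly inside (-1, 1) for mu > 0
            Jf = J(z)
            Jphi = (np.eye(n) + Jf - np.diag(d) @ (np.eye(n) - Jf)) / 2
            z = z - np.linalg.solve(Jphi, phi)
        mu *= 0.1                           # tighten the smoothing
    return z
```

The point of the smoothing is that Jphi is well defined and (under a P0-type condition) nonsingular for every mu > 0, so plain Newton steps apply where the original min-reformulation is nondifferentiable.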
A contracting ellipsoid method for variational inequality problems
Includes bibliographical references (p. 50-53). This research has been supported in part by Grant #ECS-83-16224 from the Operations Research and Systems Theory Program of the National Science Foundation.
Lipschitz regularized gradient flows and latent generative particles
Lipschitz regularized f-divergences are constructed by imposing a bound on the Lipschitz constant of the discriminator in the variational representation. They interpolate between the Wasserstein metric and f-divergences and provide a flexible family of loss functions for non-absolutely continuous (e.g. empirical) distributions, possibly with heavy tails. We construct Lipschitz regularized gradient flows on the space of probability measures based on these divergences. Examples of such gradient flows are Lipschitz regularized Fokker-Planck and porous medium partial differential equations (PDEs) for the Kullback-Leibler and alpha-divergences, respectively. The regularization corresponds to imposing a Courant-Friedrichs-Lewy numerical stability condition on the PDEs. For empirical measures, the Lipschitz regularization on gradient flows induces a numerically stable transporter/discriminator particle algorithm, where the generative particles are transported along the gradient of the discriminator. The gradient structure leads to a regularized Fisher information (particle kinetic energy) used to track the convergence of the algorithm. The Lipschitz regularized discriminator can be implemented via neural network spectral normalization, and the particle algorithm generates approximate samples from possibly high-dimensional distributions known only from data. Notably, our particle algorithm can generate synthetic data even in small sample size regimes. A new data processing inequality for the regularized divergence allows us to combine our particle algorithm with representation learning, e.g. autoencoder architectures. The resulting algorithm yields markedly improved generative properties in terms of efficiency and quality of the synthetic samples. From a statistical mechanics perspective the encoding can be interpreted dynamically as learning a better mobility for the generative particles.
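The spectral normalization mentioned in the abstract is commonly implemented by dividing each weight matrix by an estimate of its largest singular value obtained via power iteration, which bounds the Lipschitz constant of the corresponding linear layer. A minimal NumPy sketch (illustrative, not the paper's implementation):

```python
import numpy as np

def spectral_normalize(W, n_iter=100):
    """Normalize a weight matrix by its largest singular value sigma,
    estimated with power iteration on W W^T.  W / sigma then has
    spectral norm 1, so the layer x -> (W / sigma) x is 1-Lipschitz."""
    rng = np.random.default_rng(0)
    u = rng.standard_normal(W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v        # Rayleigh-quotient estimate of the top singular value
    return W / sigma
```

In practice (e.g. in deep learning frameworks) a single power-iteration step is carried between training updates, since the weights change only slightly per step; the many-iteration version above is for a one-shot normalization.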
A Primal-Dual Algorithmic Framework for Constrained Convex Minimization
We present a primal-dual algorithmic framework for obtaining approximate solutions to a prototypical constrained convex optimization problem, and rigorously characterize how common structural assumptions affect numerical efficiency. Our main analysis technique provides a fresh perspective on Nesterov's excessive gap technique in a structured fashion and unifies it with smoothing and primal-dual methods. For instance, through the choices of a dual smoothing strategy and a center point, our framework subsumes decomposition algorithms, the augmented Lagrangian method, and the alternating direction method of multipliers as special cases, and provides optimal convergence rates on both the primal objective residual and the primal feasibility gap of the iterates.