Optimal Transport, Convection, Magnetic Relaxation and Generalized Boussinesq equations
We establish a connection between Optimal Transport Theory and classical
Convection Theory for geophysical flows. Our starting point is the model
designed a few years ago by Angenent, Haker and Tannenbaum to solve some Optimal
Transport problems. This model can be seen as a generalization of the
Darcy-Boussinesq equations, which is a degenerate version of the
Navier-Stokes-Boussinesq (NSB) equations. In a unified framework, we relate
different variants of the NSB equations (in particular what we call the
generalized Hydrostatic-Boussinesq equations) to various models involving
Optimal Transport (and the related Monge-Ampere equation). This includes the 2D
semi-geostrophic equations and some fully non-linear versions of the so-called
high-field limit of the Vlasov-Poisson system and of the Keller-Segel model for
chemotaxis. Finally, we show how a ``stringy'' generalization of the AHT model
can be related to the magnetic relaxation model studied by Arnold and Moffatt
to obtain stationary solutions of the Euler equations with prescribed topology.
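For background, the simplest computable instance of optimal transport is the 1-D case, where the optimal map for a convex cost is the monotone rearrangement pairing sorted samples. The sketch below (function name is illustrative, and this is background only, not a method from the paper) computes the squared 2-Wasserstein distance between two equally weighted empirical measures:

```python
import numpy as np

def w2_empirical(x, y):
    """1-D optimal transport between two empirical measures with equal
    weights: the optimal map is the monotone rearrangement that pairs
    sorted samples, so the squared 2-Wasserstein distance is the mean
    of the squared matched gaps. (Illustrative sketch only.)"""
    xs, ys = np.sort(x), np.sort(y)
    return float(np.mean((xs - ys) ** 2))
```

In higher dimensions no such closed form exists, which is why PDE formulations such as the Monge-Ampere equation and the dynamical models discussed in the abstract become necessary.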
A new graph perspective on max-min fairness in Gaussian parallel channels
In this work we are concerned with the problem of achieving max-min fairness
in Gaussian parallel channels with respect to a general performance function,
including channel capacity or decoding reliability as special cases. As our
central results, we characterize the laws which determine the value of the
achievable max-min fair performance as a function of channel sharing policy and
power allocation (to channels and users). In particular, we show that the
max-min fair performance behaves as a specialized version of the Lovasz
function, or Delsarte bound, of a certain graph induced by channel sharing
combinatorics. We also prove that, in addition to such a graph, only a certain
2-norm distance, which depends on the allowable power allocations and the
performance functions in use, is needed to characterize the max-min fair
performance up to some candidate interval. Our results also show a specific
role played by odd cycles in the graph induced by the channel sharing policy
and we present an interesting relation between max-min fairness in parallel
channels and optimal throughput in an associated interference channel.
Comment: 41 pages, 8 figures. Submitted to IEEE Transactions on Information
Theory on August the 6th, 200
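The max-min fair objective analyzed in the abstract can be illustrated in the simplest degenerate setting: one user per channel, no sharing, a total power budget, and rate log2(1 + g*p) per channel. Max-min fairness then equalizes all rates, and the common rate is found by bisection on the power balance. The function name and bracket bounds below are illustrative assumptions, not the paper's graph-theoretic characterization:

```python
def maxmin_rates(gains, P, tol=1e-9):
    """Toy max-min fair power allocation over Gaussian parallel channels,
    one user per channel: rate_i = log2(1 + g_i * p_i). Equalizing rates
    at a common value r requires p_i = (2**r - 1) / g_i; bisection finds
    the r whose total power demand matches the budget P.
    (Illustrative sketch; assumes the solution lies in [0, 60] bits.)"""
    lo, hi = 0.0, 60.0
    while hi - lo > tol:
        r = 0.5 * (lo + hi)
        need = sum((2.0 ** r - 1.0) / g for g in gains)
        if need <= P:
            lo = r  # budget suffices: the common rate can be higher
        else:
            hi = r  # over budget: lower the common rate
    powers = [(2.0 ** lo - 1.0) / g for g in gains]
    return lo, powers
```

The paper's setting is richer: channels may be shared between users, and it is exactly that sharing combinatorics which induces the graph whose Lovasz-function-like quantity governs the achievable max-min performance.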
Algorithms for nonnegative matrix factorization with the beta-divergence
This paper describes algorithms for nonnegative matrix factorization (NMF)
with the beta-divergence (beta-NMF). The beta-divergence is a family of cost
functions parametrized by a single shape parameter beta that takes the
Euclidean distance, the Kullback-Leibler divergence and the Itakura-Saito
divergence as special cases (beta = 2,1,0, respectively). The proposed
algorithms are based on a surrogate auxiliary function (a local majorization of
the criterion function). We first describe a majorization-minimization (MM)
algorithm that leads to multiplicative updates, which differ from standard
heuristic multiplicative updates by a beta-dependent power exponent. The
monotonicity of the heuristic algorithm can however be proven for beta in (0,1)
using the proposed auxiliary function. Then we introduce the concept of
majorization-equalization (ME) algorithm which produces updates that move along
constant level sets of the auxiliary function and lead to larger steps than MM.
Simulations on synthetic and real data illustrate the faster convergence of the
ME approach. The paper also describes how the proposed algorithms can be
adapted to two common variants of NMF: penalized NMF (i.e., when a penalty
function of the factors is added to the criterion function) and convex-NMF
(when the dictionary is assumed to belong to a known subspace).
Comment: to appear in Neural Computation
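The multiplicative updates with a beta-dependent power exponent described above can be sketched in NumPy. The function name, random initialization, and small epsilon guard are illustrative assumptions (a minimal sketch, not the paper's reference implementation); the exponent gamma(beta) follows the MM construction, with gamma = 1 on the interval beta in [1, 2] where the heuristic and MM updates coincide:

```python
import numpy as np

def beta_nmf_mm(V, K, beta=1.0, n_iter=100, seed=0):
    """MM-style multiplicative updates for beta-NMF: V ~= W @ H, W, H >= 0.
    beta = 2, 1, 0 recover the Euclidean, Kullback-Leibler and
    Itakura-Saito costs. The beta-dependent exponent gamma makes the
    update a descent step for the majorizing auxiliary function.
    (Illustrative sketch, not the authors' reference code.)"""
    rng = np.random.default_rng(seed)
    F, N = V.shape
    W = rng.random((F, K)) + 1e-9   # strictly positive initialization
    H = rng.random((K, N)) + 1e-9
    if beta < 1:
        gamma = 1.0 / (2.0 - beta)
    elif beta <= 2:
        gamma = 1.0                 # heuristic and MM updates coincide
    else:
        gamma = 1.0 / (beta - 1.0)
    for _ in range(n_iter):
        Vh = W @ H
        H *= ((W.T @ (V * Vh ** (beta - 2.0))) / (W.T @ Vh ** (beta - 1.0))) ** gamma
        Vh = W @ H
        W *= (((V * Vh ** (beta - 2.0)) @ H.T) / (Vh ** (beta - 1.0) @ H.T)) ** gamma
    return W, H
```

For beta = 2 this reduces to the classical Euclidean multiplicative updates, whose monotone decrease of the fit error is well known; for beta outside [1, 2] the exponent gamma is what restores the descent guarantee.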
An asymptotically superlinearly convergent semismooth Newton augmented Lagrangian method for Linear Programming
Powerful commercial solvers based on interior-point methods (IPMs), such as
Gurobi and Mosek, have been hugely successful in solving large-scale linear
programming (LP) problems. The high efficiency of these solvers depends
critically on the sparsity of the problem data and advanced matrix
factorization techniques. For a large scale LP problem with data matrix
that is dense (possibly structured) or whose corresponding normal matrix
has a dense Cholesky factor (even with re-ordering), these solvers may require
excessive computational cost and/or extremely heavy memory usage in each
interior-point iteration. Unfortunately, the natural remedy, i.e., the use of
IPM solvers based on iterative methods, although able to avoid the explicit
computation of the coefficient matrix and its factorization, is not practically
viable due to the inherent extreme ill-conditioning of the large-scale normal
equation arising in each interior-point iteration. To provide a better
alternative for solving large-scale LPs with dense data or whose normal
equations require expensive factorizations, we propose a semismooth Newton
based inexact proximal augmented Lagrangian ({\sc Snipal}) method. Different
from classical IPMs, in each iteration of {\sc Snipal}, iterative methods can
efficiently be used to solve simpler yet better conditioned semismooth Newton
linear systems. Moreover, {\sc Snipal} not only enjoys a fast asymptotic
superlinear convergence but is also proven to enjoy a finite termination
property. Numerical comparisons with Gurobi have demonstrated encouraging
potential of {\sc Snipal} for handling large-scale LP problems where the
constraint matrix has a dense representation or has a dense
factorization even with an appropriate re-ordering.
Comment: Due to the limitation "The abstract field cannot be longer than 1,920
characters", the abstract appearing here is slightly shorter than that in the
PDF file.
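Snipal's semismooth Newton inner solver is beyond a short sketch, but the outer augmented Lagrangian structure for an LP in standard form (min c'x s.t. Ax = b, x >= 0) can be illustrated with a projected-gradient stand-in for the subproblem solve. All names, step sizes, and iteration counts below are assumptions, and the inner solver is deliberately simplified relative to the paper's method:

```python
import numpy as np

def al_lp(c, A, b, sigma=1.0, outer=50, inner=200):
    """Minimal augmented Lagrangian loop for  min c'x  s.t.  Ax = b, x >= 0.
    Each subproblem minimizes the augmented Lagrangian over x >= 0; here a
    projected-gradient loop stands in for Snipal's semismooth Newton solver.
    (Illustrative sketch only, not the paper's algorithm.)"""
    m, n = A.shape
    x, y = np.zeros(n), np.zeros(m)
    # 1/L step size: the subproblem gradient is sigma*||A||_2^2-Lipschitz
    step = 1.0 / (sigma * np.linalg.norm(A, 2) ** 2 + 1e-12)
    for _ in range(outer):
        for _ in range(inner):
            grad = c - A.T @ y + sigma * (A.T @ (A @ x - b))
            x = np.maximum(x - step * grad, 0.0)   # project onto x >= 0
        y = y - sigma * (A @ x - b)                # multiplier update
    return x, y
```

The point of the paper is that, unlike the normal equations inside an IPM, the linear systems arising in each semismooth Newton subproblem solve are better conditioned, so Krylov-type iterative methods remain viable even when the data matrix is dense.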