Distributed Dictionary Learning
The paper studies distributed Dictionary Learning (DL) problems where the
learning task is distributed over a multi-agent network with time-varying
(nonsymmetric) connectivity. This formulation is relevant, for instance, in
big-data scenarios where massive amounts of data are collected/stored in
different spatial locations and it is infeasible to aggregate and/or process
all the data in a fusion center, due to resource limitations, communication
overhead or privacy considerations. We develop a general distributed
algorithmic framework for the (nonconvex) DL problem and establish its
asymptotic convergence. The new method hinges on Successive Convex
Approximation (SCA) techniques coupled with i) a gradient tracking mechanism
instrumental to locally estimate the missing global information; and ii) a
consensus step, as a mechanism to distribute the computations among the agents.
To the best of our knowledge, this is the first distributed algorithm with
provable convergence for the DL problem and, more generally, for bi-convex
optimization problems over (time-varying) directed graphs.
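The two mechanisms named above can be sketched in minimal form; the code below is an illustrative gradient-tracking/consensus iteration on scalar quadratic losses with a fixed doubly stochastic mixing matrix (an assumption for simplicity; the paper handles time-varying directed graphs and the full nonconvex DL objective):

```python
import numpy as np

# Illustrative sketch only, not the paper's algorithm: each of 4 agents holds a
# scalar estimate x[i] and a tracker y[i] of the network-average gradient.
b = np.array([1.0, 2.0, 3.0, 6.0])   # local data; the global minimizer is b.mean()
grad = lambda x: x - b               # per-agent gradients of 0.5*(x - b_i)**2

# Fixed doubly stochastic mixing matrix for a 4-agent ring (assumption)
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

x = np.zeros(4)
y = grad(x)                          # trackers start at the local gradients
alpha = 0.1
for _ in range(300):
    x_new = W @ x - alpha * y          # ii) consensus step + descent along tracker
    y = W @ y + grad(x_new) - grad(x)  # i) gradient tracking update
    x = x_new
print(np.round(x, 3))                # every agent agrees on b.mean() = 3.0
```

Initializing the trackers at the local gradients preserves the invariant that their average equals the average gradient, which is what lets each agent estimate the missing global information.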
Parallel Selective Algorithms for Big Data Optimization
We propose a decomposition framework for the parallel optimization of the sum
of a differentiable (possibly nonconvex) function and a (block) separable
nonsmooth, convex one. The latter term is usually employed to enforce structure
in the solution, typically sparsity. Our framework is very flexible and
includes both fully parallel Jacobi schemes and Gauss-Seidel (i.e.,
sequential) ones, as well as virtually all possibilities "in between" with only
a subset of variables updated at each iteration. Our theoretical convergence
results improve on existing ones, and numerical results on LASSO, logistic
regression, and some nonconvex quadratic problems show that the new method
consistently outperforms existing algorithms.
Comment: This work is an extended version of the conference paper that has
been presented at IEEE ICASSP'14. The first and the second author contributed
equally to the paper. This revised version contains new numerical results on
nonconvex quadratic problems.
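As a concrete toy instance of the framework, the sketch below runs a fully parallel (Jacobi-style) proximal update on a LASSO problem, refreshing only a random subset of coordinates per iteration; the problem data, subset rule, and stepsize are illustrative choices, not the paper's exact scheme:

```python
import numpy as np

# Illustrative selective-parallel update for LASSO: min 0.5*||Ax - b||^2 + lam*||x||_1
np.random.seed(1)
m, n, lam = 30, 10, 0.1
A = np.random.randn(m, n)
x_true = np.zeros(n); x_true[:3] = [1.5, -2.0, 0.7]   # sparse ground truth
b = A @ x_true

soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth part
x = np.zeros(n)
for _ in range(500):
    g = A.T @ (A @ x - b)            # full gradient (computable in parallel)
    idx = np.random.rand(n) < 0.5    # update only ~half the blocks this round
    x[idx] = soft(x[idx] - g[idx] / L, lam / L)

print(np.nonzero(np.abs(x) > 0.05)[0])   # support of the sparse solution
```

Soft-thresholding handles the nonsmooth l1 term while the least-squares part contributes the gradient, matching the smooth-plus-separable-convex split described above.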
Flexible Parallel Algorithms for Big Data Optimization
We propose a decomposition framework for the parallel optimization of the sum
of a differentiable function and a (block) separable nonsmooth, convex one. The
latter term is typically used to enforce structure in the solution as, for
example, in Lasso problems. Our framework is very flexible and includes both
fully parallel Jacobi schemes and Gauss-Seidel (Southwell-type) ones, as well
as virtually all possibilities in between (e.g., gradient- or Newton-type
methods) with only a subset of variables updated at each iteration. Our
theoretical convergence results improve on existing ones, and numerical results
show that the new method compares favorably to existing algorithms.
Comment: submitted to IEEE ICASSP 2014
Hybrid Random/Deterministic Parallel Algorithms for Nonconvex Big Data Optimization
We propose a decomposition framework for the parallel optimization of the sum
of a differentiable (possibly nonconvex) function and a nonsmooth (possibly
nonseparable), convex one. The latter term is usually employed to enforce
structure in the solution, typically sparsity. The main contribution of this
work is a novel parallel, hybrid random/deterministic decomposition
scheme wherein, at each iteration, a subset of (block) variables is updated at
the same time by minimizing local convex approximations of the original
nonconvex function. To tackle huge-scale problems, the (block) variables to
be updated are chosen according to a mixed random and deterministic
procedure, which captures the advantages of both pure deterministic and random
update-based schemes. Almost sure convergence of the proposed scheme is
established. Numerical results show that on huge-scale problems the proposed
hybrid random/deterministic algorithm outperforms both random and deterministic
schemes.
Comment: The order of the authors is alphabetical.
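A mixed random/deterministic block-selection rule can be sketched as follows; the subset sizes and the greedy score are illustrative stand-ins, not the paper's exact procedure. Each round picks the blocks with the largest optimality violation (deterministic/greedy) plus a uniformly random subset that eventually covers blocks the greedy rule ignores:

```python
import numpy as np

np.random.seed(2)
n, k_greedy, k_rand = 20, 3, 3

def select_blocks(violation):
    greedy = set(np.argsort(-violation)[:k_greedy].tolist())   # largest violations
    rest = [i for i in range(n) if i not in greedy]
    rand = set(np.random.choice(rest, k_rand, replace=False).tolist())
    return sorted(greedy | rand)

# Toy separable problem: exact block minimization drives x[i] to c[i]
c = np.random.randn(n)
x = np.zeros(n)
for _ in range(15):
    viol = np.abs(x - c)             # per-block optimality violation
    for i in select_blocks(viol):
        x[i] = c[i]                  # exact minimization of the selected block
print(bool(np.allclose(x, c)))       # True once every block has been selected
```

The greedy part gives fast progress on the most violated blocks, while the random part provides the coverage needed for almost sure convergence arguments.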
Ghost Penalties in Nonconvex Constrained Optimization: Diminishing Stepsizes and Iteration Complexity
We consider nonconvex constrained optimization problems and propose a new
approach to the convergence analysis based on penalty functions. We make use of
classical penalty functions in an unconventional way, in that penalty functions
only enter in the theoretical analysis of convergence while the algorithm
itself is penalty-free. Based on this idea, we are able to establish several
new results, including the first general analysis for diminishing stepsize
methods in nonconvex, constrained optimization, showing convergence to
generalized stationary points, and a complexity study for SQP-type algorithms.
Comment: To appear in Mathematics of Operations Research.
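The diminishing-stepsize rule analyzed above (gamma_k -> 0 with divergent sum) can be illustrated on a toy constrained problem; the problem below is made up, and, mirroring the paper's point, the iteration itself is penalty-free since penalties enter only the analysis:

```python
# Toy instance: min 0.5*x**2 subject to x >= 1, solved by projected gradient
# descent with the diminishing stepsize gamma_k = 1/(k+1). Ghost penalties
# would appear only in the convergence proof, never in this loop.
x = 5.0
for k in range(1, 2000):
    gamma = 1.0 / (k + 1)            # diminishing: gamma_k -> 0, sum diverges
    x = max(1.0, x - gamma * x)      # gradient step, then projection onto {x >= 1}
print(x)                             # reaches the constrained minimizer 1.0
```

Because the stepsizes sum to infinity, the iterates cannot stall before reaching stationarity, while their decay avoids the need for a line search.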
Distributed workload control for federated service discovery
The diffusion of the Internet paradigm into every aspect of human life continuously fosters the spread of new technologies and related services. In the Future Internet scenario, where 5G telecommunication facilities will interact with the Internet of Things world, analyzing large amounts of data in real time to feed a potentially infinite set of services belonging to different administrative domains, the role of a federated service discovery will become crucial. In this paper the authors propose a distributed workload control algorithm to handle service discovery requests efficiently, with the aim of minimizing the overall latencies experienced by the requesting user agents. The algorithm is based on the Wardrop equilibrium, a game-theoretical concept, applied to the federated service discovery domain. The proposed solution has been implemented, and its performance has been assessed using different network topologies and metrics. An open-source simulation environment has been created, allowing other researchers to test the proposed solution.
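A Wardrop-style controller of the kind described can be sketched as follows; the linear latency model, server capacities, and migration rate are invented for illustration and are not the authors' exact algorithm:

```python
# Each round, a small share of discovery requests migrates from the
# highest-latency server to the lowest-latency one.
caps = [4.0, 2.0, 1.0]                    # server capacities (arbitrary units)
share = [1 / 3, 1 / 3, 1 / 3]             # initial split of the request load
latency = lambda i: share[i] / caps[i]    # toy linear latency model

for _ in range(400):
    lats = [latency(i) for i in range(3)]
    hi = lats.index(max(lats))            # most-loaded server sheds traffic
    lo = lats.index(min(lats))            # least-loaded server absorbs it
    delta = 0.005 * share[hi]
    share[hi] -= delta
    share[lo] += delta

# shares approach the capacity-proportional split [4/7, 2/7, 1/7]
print([round(s, 2) for s in share])
```

At a Wardrop equilibrium no request can lower its latency by switching servers, which in this toy model means all servers see equal latency, i.e. load shares proportional to capacity.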
Distributed Power Allocation with Rate Constraints in Gaussian Parallel Interference Channels
This paper considers the minimization of transmit power in Gaussian parallel
interference channels, subject to a rate constraint for each user. To derive
decentralized solutions that do not require any cooperation among the users, we
formulate this power control problem as a (generalized) Nash equilibrium game.
We obtain sufficient conditions that guarantee the existence and nonemptiness
of the solution set to our problem. Then, to compute the solutions of the game,
we propose two distributed algorithms based on the single user waterfilling
solution: the sequential and the simultaneous iterative
waterfilling algorithms, wherein the users update their own strategies
sequentially and simultaneously, respectively. We derive a unified set of
sufficient conditions that guarantee the uniqueness of the solution and global
convergence of both algorithms. Our results are applicable to all practical
distributed multipoint-to-multipoint interference systems, either wired or
wireless, where a quality of service in terms of information rate must be
guaranteed for each link.
Comment: Paper submitted to IEEE Transactions on Information Theory, February
17, 2007. Revised January 11, 200
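Both algorithms iterate a single-user waterfilling solution; for the rate-constrained power minimization above, one user's waterfilling over parallel subchannels can be sketched as below (the noise levels and rate target are made-up numbers, and the water level is found by bisection):

```python
import math

noise = [0.2, 0.5, 1.0]           # effective noise on each subchannel (assumed)
R = 4.0                           # required rate in bits per channel use (assumed)

# Rate achieved with water level mu: powers are p_k = max(0, mu - n_k)
rate = lambda mu: sum(math.log2(1 + max(0.0, mu - n) / n) for n in noise)

lo, hi = 0.0, 100.0
for _ in range(80):               # bisection on the water level mu
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if rate(mid) < R else (lo, mid)
mu = (lo + hi) / 2
power = [max(0.0, mu - n) for n in noise]
print(round(rate(mu), 3))         # the rate constraint is met with equality
```

Since the achieved rate is monotonically increasing in the water level, bisection finds the minimum total power meeting the rate constraint.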
Stochastic Approximation for Expectation Objective and Expectation Inequality-Constrained Nonconvex Optimization
Stochastic Approximation has been a prominent set of tools for solving
problems with noise and uncertainty. It has become increasingly important to
solve optimization problems wherein there is noise both in a set of
constraints that a practitioner requires the system to adhere to and in the
objective,
which typically involves some empirical loss. We present the first stochastic
approximation approach for solving this class of problems, using the Ghost
framework, which incorporates penalty functions into the analysis of a
sequential convex programming approach, together with a Monte Carlo estimator
of nonlinear maps. We provide almost sure convergence guarantees and
demonstrate the performance of the procedure on some representative examples.
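A much-simplified stand-in for this setting (not the paper's Ghost-based method) is a penalized stochastic subgradient iteration in which both the objective gradient and the constraint value are replaced by Monte Carlo batch estimates; the toy problem and all constants below are invented:

```python
import numpy as np

# Toy problem: min E[0.5*(x - xi)^2] s.t. E[x - zeta] <= 0, with xi ~ N(2, 1)
# and zeta ~ N(1, 1), so the constrained solution is x* = 1.
np.random.seed(3)
x, rho = 0.0, 2.0                        # rho exceeds the multiplier at x* = 1
for k in range(2000):
    xi = np.random.randn(200) + 2.0      # Monte Carlo batch for the objective
    zeta = np.random.randn(200) + 1.0    # Monte Carlo batch for the constraint
    g = x - xi.mean()                    # estimated objective gradient
    c_hat = x - zeta.mean()              # estimated constraint value
    step = 1.0 / (k + 2)                 # diminishing stepsize
    x -= step * (g + rho * (1.0 if c_hat > 0 else 0.0))
print(x)                                 # near the constrained solution x* = 1
```

The point the sketch shares with the paper is that the constraint is never evaluated exactly: only noisy batch estimates of it drive the iteration.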
Real and Complex Monotone Communication Games
Noncooperative game-theoretic tools have been increasingly used to study many
important resource allocation problems in communications, networking, smart
grids, and portfolio optimization. In this paper, we consider a general class
of convex Nash Equilibrium Problems (NEPs), where each player aims to solve an
arbitrary smooth convex optimization problem. Unlike most current works, we
do not assume any specific structure for the players' problems, and
we allow the optimization variables of the players to be matrices in the
complex domain. Our main contribution is the design of a novel class of
distributed (asynchronous) best-response algorithms suitable for solving the
proposed NEPs, even in the presence of multiple solutions. The new methods,
whose convergence analysis is based on Variational Inequality (VI) techniques,
can select, among all the equilibria of a game, those that optimize a given
performance criterion, at the cost of limited signaling among the players. This
is a major departure from existing best-response algorithms, whose convergence
conditions imply the uniqueness of the NE. Some of our results hinge on the use
of VI problems directly in the complex domain; the study of this new kind of
VI also represents a noteworthy innovative contribution. We then apply the
developed methods to solve some new generalizations of SISO and MIMO games in
cognitive radios and femtocell systems, showing a considerable performance
improvement over classical pure noncooperative schemes.
Comment: To appear in IEEE Transactions on Information Theory.
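The best-response idea at the core of these methods can be sketched on a toy two-player convex quadratic NEP; the game coefficients are invented, and the paper's actual algorithms cover far more general (complex-domain, matrix-valued) games with possibly nonunique equilibria:

```python
import numpy as np

# Player i minimizes 0.5*x_i**2 + a*x_i*x_j - b_i*x_i given the rival's x_j,
# so its best response is x_i = b_i - a*x_j; |a| < 1 makes the joint
# best-response map a contraction, guaranteeing a unique NE.
a, b = 0.5, np.array([1.0, 2.0])
x = np.zeros(2)
for _ in range(60):
    x = b - a * x[::-1]              # both players respond simultaneously
print(np.round(x, 6))                # Nash equilibrium (0, 2)
```

The fixed point satisfies x_1 = b_1 - a*x_2 and x_2 = b_2 - a*x_1, i.e. (0, 2) for these coefficients; the paper's contribution is precisely to relax such contraction/uniqueness conditions via VI techniques.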