Distributed Partitioned Big-Data Optimization via Asynchronous Dual Decomposition
In this paper we consider a novel partitioned framework for distributed
optimization in peer-to-peer networks. In several important applications the
agents of a network have to solve an optimization problem with two key
features: (i) the dimension of the decision variable depends on the network
size, and (ii) cost function and constraints have a sparsity structure related
to the communication graph. For this class of problems a straightforward
application of existing consensus methods would show two inefficiencies: poor
scalability and redundancy of shared information. We propose an asynchronous
distributed algorithm, based on dual decomposition and coordinate methods, to
solve partitioned optimization problems. We show that, by exploiting the
problem structure, the solution can be partitioned among the nodes, so that
each node just stores a local copy of a portion of the decision variable
(rather than a copy of the entire decision vector) and solves a small-scale
local problem.
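To make the partitioning idea concrete, here is a minimal synchronous sketch of dual decomposition on a path graph: each node owns one block of the decision variable plus a single copy of its right neighbour's block, and a multiplier prices the consistency constraint between them, so no node ever stores the full decision vector. The quadratic costs, step size, and closed-form updates are illustrative assumptions, not the paper's asynchronous coordinate algorithm.

```python
import numpy as np

def partitioned_dual_decomposition(a, c=1.0, step=0.2, n_iters=2000):
    """Minimize sum_i (x_i - a_i)^2 + c * sum_i (x_i - x_{i+1})^2 on a path
    graph. Node i owns the block x_i plus a copy z_i of its right neighbour's
    block; the constraint z_i = x_{i+1} is priced by a multiplier lam_i."""
    n = len(a)
    lam = np.zeros(n - 1)
    for _ in range(n_iters):
        lam_left = np.concatenate(([0.0], lam))    # lam_{i-1}, absent at node 0
        lam_right = np.concatenate((lam, [0.0]))   # lam_i, absent at node n-1
        x = a + 0.5 * (lam_left - lam_right)       # closed-form local primal step
        z = x[:-1] - lam / (2.0 * c)               # local copies of right neighbours
        lam += step * (z - x[1:])                  # dual ascent on z_i = x_{i+1}
    return x

print(partitioned_dual_decomposition(np.array([0.0, 4.0, 2.0, 6.0])))
```

Each node's update touches only its own block, one neighbour copy, and the multipliers on its incident edges, which is exactly the storage pattern the partitioned setting exploits.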
Non-Local Probes Do Not Help with Graph Problems
This work bridges the gap between distributed and centralised models of
computing in the context of sublinear-time graph algorithms. A priori, typical
centralised models of computing (e.g., parallel decision trees or centralised
local algorithms) seem to be much more powerful than distributed
message-passing algorithms: centralised algorithms can directly probe any part
of the input, while in distributed algorithms nodes can only communicate with
their immediate neighbours. We show that for a large class of graph problems,
this extra freedom does not help centralised algorithms at all: for example,
efficient stateless deterministic centralised local algorithms can be simulated
with efficient distributed message-passing algorithms. In particular, this
enables us to transfer existing lower bound results from distributed algorithms
to centralised local algorithms.
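The simulation direction can be illustrated with a toy sketch: if a stateless deterministic local algorithm answers a query about node v by probing only nodes within distance r of v, then r rounds of synchronous flooding let v collect that ball and evaluate the rule itself, with no non-local probes. The adjacency-dict representation and the `local_rule` interface are illustrative assumptions, not the paper's formal model.

```python
def simulate_local_queries(adj, labels, local_rule, r):
    """adj: {node: [neighbours]}, labels: {node: input label}.
    After r synchronous rounds of flooding, every node knows the set of
    nodes within distance r and can run the centralised rule locally."""
    known = {v: {v} for v in adj}
    for _ in range(r):                       # one communication round
        nxt = {v: set(known[v]) for v in adj}
        for v in adj:
            for u in adj[v]:                 # receive neighbours' current views
                nxt[v] |= known[u]
        known = nxt
    # each node now holds its radius-r ball and answers its own query
    return {v: local_rule(v, known[v], adj, labels) for v in adj}

# example: a distance-1 rule ("is my label locally maximal?") needs r = 1 round
print(simulate_local_queries(
    adj={0: [1], 1: [0, 2], 2: [1]}, labels={0: 5, 1: 2, 2: 7},
    local_rule=lambda v, ball, adj, lab: all(lab[v] >= lab[u] for u in ball),
    r=1))
```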
Cooperative Control and Potential Games
We present a view of cooperative control using the language of learning in games. We review the game-theoretic concepts of potential and weakly acyclic games, and demonstrate how several cooperative control problems, such as consensus and dynamic sensor coverage, can be formulated in these settings. Motivated by this connection, we build upon game-theoretic concepts to better accommodate a broader class of cooperative control problems. In particular, we extend existing learning algorithms to accommodate restricted action sets caused by the limitations of agent capabilities and group-based decision making. Furthermore, we introduce a new class of games called sometimes weakly acyclic games for time-varying objective functions and action sets, and provide distributed algorithms for convergence to an equilibrium.
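As a concrete toy instance, the sketch below runs asynchronous better-reply dynamics with inertia in a consensus game, where each agent's utility is its negative disagreement with its neighbours. This is an exact potential game (the potential is the negated total disagreement over edges), so strict better replies must terminate at an equilibrium. The action set, inertia parameter, and update rule are illustrative simplifications, not the paper's exact algorithms.

```python
import random

def better_reply_with_inertia(adj, actions, values, n_steps=20000, inertia=0.3):
    """Asynchronous better-reply dynamics in a consensus potential game."""
    def utility(i, a):
        return -sum(abs(a - actions[j]) for j in adj[i])
    for _ in range(n_steps):
        i = random.randrange(len(adj))       # one agent revises at a time
        if random.random() < inertia:        # inertia: sometimes stay put
            continue
        trial = random.choice(values)        # candidate from the action set
        if utility(i, trial) > utility(i, actions[i]):
            actions[i] = trial               # a strict better reply also raises the potential
    return actions

print(better_reply_with_inertia({0: [1], 1: [0, 2], 2: [1]}, [0, 2, 1], [0, 1, 2]))
```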
Solving Multiclass Learning Problems via Error-Correcting Output Codes
Multiclass learning problems involve finding a definition for an unknown
function f(x) whose range is a discrete set containing k > 2 values (i.e., k
"classes"). The definition is acquired by studying collections of training
examples of the form [x_i, f(x_i)]. Existing approaches to multiclass learning
problems include direct application of multiclass algorithms such as the
decision-tree algorithms C4.5 and CART, application of binary concept learning
algorithms to learn individual binary functions for each of the k classes, and
application of binary concept learning algorithms with distributed output
representations. This paper compares these three approaches to a new technique
in which error-correcting codes are employed as a distributed output
representation. We show that these output representations improve the
generalization performance of both C4.5 and backpropagation on a wide range of
multiclass learning tasks. We also demonstrate that this approach is robust
with respect to changes in the size of the training sample, the assignment of
distributed representations to particular classes, and the application of
overfitting avoidance techniques such as decision-tree pruning. Finally, we
show that---like the other methods---the error-correcting code technique can
provide reliable class probability estimates. Taken together, these results
demonstrate that error-correcting output codes provide a general-purpose method
for improving the performance of inductive learning programs on multiclass
problems.
Comment: See http://www.jair.org/ for any accompanying files.
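A minimal sketch of the ECOC construction, assuming scikit-learn's LogisticRegression as a stand-in for the paper's C4.5 and backpropagation learners and a hand-picked codebook: each class is assigned a binary codeword, one binary classifier is trained per code bit, and prediction decodes the predicted bit string to the nearest codeword in Hamming distance.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# rows are class codewords of a 7-bit code with minimum Hamming distance 4,
# so any single wrong bit among the 7 binary learners is still decoded correctly
CODEBOOK = np.array([[0, 0, 0, 0, 0, 0, 0],
                     [0, 1, 0, 1, 1, 0, 1],
                     [1, 0, 1, 0, 1, 1, 0],
                     [1, 1, 1, 1, 0, 1, 1]])

def ecoc_fit(X, y, codebook=CODEBOOK):
    # one binary learner per code bit (column), trained on relabelled targets;
    # assumes every class appears in y so each column has both target values
    return [LogisticRegression().fit(X, codebook[y, b])
            for b in range(codebook.shape[1])]

def ecoc_predict(X, learners, codebook=CODEBOOK):
    bits = np.column_stack([clf.predict(X) for clf in learners])
    # decode each predicted bit string to the nearest codeword (Hamming)
    dists = (bits[:, None, :] != codebook[None, :, :]).sum(axis=2)
    return dists.argmin(axis=1)
```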
Optimal scaling of the ADMM algorithm for distributed quadratic programming
This paper presents optimal scaling of the alternating directions method of
multipliers (ADMM) algorithm for a class of distributed quadratic programming
problems. The scaling corresponds to the ADMM step-size and relaxation
parameter, as well as the edge-weights of the underlying communication graph.
We optimize these parameters to yield the smallest convergence factor of the
algorithm. Explicit expressions are derived for the step-size and relaxation
parameter, as well as for the corresponding convergence factor. Numerical
simulations justify our results and highlight the benefits of optimally scaling
the ADMM algorithm.
Comment: Submitted to the IEEE Transactions on Signal Processing. Prior work
was presented at the 52nd IEEE Conference on Decision and Control, 2013.
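For reference, here is a minimal sketch of relaxed ADMM on a plain consensus quadratic program, exposing the two scalar scalings the paper optimises: the step size rho and the relaxation parameter alpha. The unweighted consensus splitting and the problem data are illustrative stand-ins for the paper's edge-weighted communication-graph formulation.

```python
import numpy as np

def consensus_admm(Qs, qs, rho=1.0, alpha=1.5, n_iters=200):
    """Relaxed ADMM for  min sum_i 0.5*x'Q_i x + q_i'x  s.t. x_i = z."""
    m, n = len(Qs), Qs[0].shape[0]
    x = np.zeros((m, n)); u = np.zeros((m, n)); z = np.zeros(n)
    for _ in range(n_iters):
        for i in range(m):  # local x-updates, closed form for quadratics
            x[i] = np.linalg.solve(Qs[i] + rho * np.eye(n),
                                   rho * (z - u[i]) - qs[i])
        x_hat = alpha * x + (1.0 - alpha) * z   # over-relaxation, alpha in (0, 2)
        z = (x_hat + u).mean(axis=0)            # consensus (z) update
        u += x_hat - z                          # scaled dual update
    return z

z_star = consensus_admm([np.eye(2), 2.0 * np.eye(2)],
                        [np.array([1.0, 0.0]), np.array([0.0, 1.0])])
```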
Distributed Proximal-Correction Algorithm for the Sum of Maximal Monotone Operators in Multi-Agent Network
This paper focuses on a class of inclusion problems of maximal monotone
operators in a multi-agent network, where each agent is characterized by an
operator that is not available to any other agents, but the agents can
cooperate by exchanging information with their neighbors according to a given
communication topology. All agents aim at finding a common decision vector that
is a zero of the sum of the agents' operators. This class of problems is
motivated by distributed convex optimization with coupled constraints. In this
paper, we propose a distributed proximal point method with a cumulative
correction term (named Proximal-Correction Algorithm) for this class of
inclusion problems of operators. It is proved that the Proximal-Correction
Algorithm converges for any value of a constant penalty parameter. In order to
make the Proximal-Correction Algorithm computationally implementable for a wide
variety of distributed optimization problems, we adopt two inexact criteria for
calculating the proximal steps of the algorithm. Under each of these two
criteria, the convergence of the Proximal-Correction Algorithm is guaranteed,
and a linear convergence rate is established when the stronger one is
satisfied. In numerical simulations, both the exact and inexact versions of
the Proximal-Correction Algorithm are executed on a distributed convex
optimization problem with coupled constraints. Compared with several
alternative algorithms from the literature, both versions exhibit good
numerical performance.
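The sketch below shows the centralised building block behind such methods: a proximal point iteration for the sum of monotone operators (here, gradients of convex quadratics), with the resolvent evaluated inexactly by an inner residual loop, echoing the paper's inexact criteria. The distributed cumulative correction term itself is not reproduced; all names and parameters are illustrative.

```python
import numpy as np

def inexact_proximal_point(grads, x0, c=1.0, tol=1e-8, n_outer=100, n_inner=500):
    """Proximal point iteration x_{k+1} = (I + c*F)^{-1}(x_k) for
    F(y) = sum_i grad_i(y), with an inexactly solved resolvent step."""
    x = x0.copy()
    for _ in range(n_outer):
        # inexactly solve  y + c * sum_i grad_i(y) = x  (the resolvent step)
        y = x.copy()
        for _ in range(n_inner):
            r = y + c * sum(g(y) for g in grads) - x   # resolvent residual
            if np.linalg.norm(r) <= tol:               # inexact acceptance criterion
                break
            y -= 0.1 * r                               # damped fixed-point correction
        x = y
    return x

# toy operators: grad_i(y) = 2*(y - a_i); the zero of their sum is mean(anchors)
anchors = [np.array([0.0, 1.0]), np.array([2.0, -1.0])]
grads = [lambda y, a=a: 2.0 * (y - a) for a in anchors]
print(inexact_proximal_point(grads, x0=np.zeros(2)))
```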
OpTiX-II: A Software Environment for MCDM based on Distributed and Parallel Computing
This paper gives an introduction to the OpTiX-II Software Environment, which supports the parallel and distributed solution of decision problems that can be represented as nonlinear mathematical programming tasks. First, a brief summary of non-sequential solution concepts for this class of decision problems on multiprocessor systems is given, with the focus on coarse-grained parallelization and its implementation on multi-computer clusters. The conceptual design objectives of the OpTiX-II Software Environment are then presented, as well as its implementation on a workstation cluster, a transputer system, and a shared-memory multiprocessor workstation. The OpTiX-II system supports all steps from the formulation of decision problems to their solution on networks of (parallel) computers. To demonstrate the use of OpTiX-II, the solution of a decision problem from the field of structural design is discussed and numerical test results are supplied.
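A minimal sketch of the coarse-grained parallelisation style described here, assuming a multistart scheme with scipy.optimize.minimize as a stand-in for the environment's actual solvers and problem description: independent solver runs are farmed out to a process pool, one coarse-grained task per start point, and the best local solution is kept.

```python
from multiprocessing import Pool
import numpy as np
from scipy.optimize import minimize

def objective(x):
    # illustrative nonlinear program: the Rosenbrock function
    return (x[0] - 1.0) ** 2 + 100.0 * (x[1] - x[0] ** 2) ** 2

def solve_from(x0):
    # one coarse-grained task: a full local optimisation from one start point
    return minimize(objective, x0, method="BFGS")

if __name__ == "__main__":
    starts = [np.random.uniform(-2.0, 2.0, size=2) for _ in range(8)]
    with Pool() as pool:                     # distribute tasks over worker processes
        results = pool.map(solve_from, starts)
    best = min(results, key=lambda res: res.fun)
    print(best.x, best.fun)
```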