Distributed optimization with arbitrary local solvers
With the growth of data and necessity for distributed optimization methods,
solvers that work well on a single machine must be re-designed to leverage
distributed computation. Recent work in this area has been limited by a heavy
focus on developing methods highly specific to the distributed environment.
These special-purpose methods are often unable to match the competitive
performance of their well-tuned and customized single-machine counterparts,
and they cannot easily integrate improvements that continue to be made to
single-machine methods. To this end, we present a framework for distributed
optimization that allows arbitrary solvers to be used locally on each (single)
machine, yet maintains competitive performance against other state-of-the-art
special-purpose distributed methods.
special-purpose distributed methods. We give strong primal-dual convergence
rate guarantees for our framework that hold for arbitrary local solvers. We
demonstrate the impact of local solver selection both theoretically and in an
extensive experimental comparison. Finally, we provide thorough implementation
details for our framework, highlighting areas for practical performance gains.
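To make the plug-in pattern concrete, here is a minimal sketch of such an outer loop in Python. The names (local_solver, partitions, the averaging parameter gamma) are hypothetical placeholders rather than the paper's API; the sketch illustrates the idea of running an arbitrary solver per machine with one communication step per round, not the authors' implementation.

    import numpy as np

    def distributed_round(local_solver, partitions, w, gamma=1.0):
        # Each machine runs an arbitrary local solver on its own data
        # partition (sequentially here; in parallel in practice), then
        # the resulting updates are combined in one communication step.
        deltas = [local_solver(part, w) for part in partitions]
        return w + gamma * np.mean(deltas, axis=0)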
CoCoA: A General Framework for Communication-Efficient Distributed Optimization
The scale of modern datasets necessitates the development of efficient
distributed optimization methods for machine learning. We present a
general-purpose framework for distributed computing environments, CoCoA, that
has an efficient communication scheme and is applicable to a wide variety of
problems in machine learning and signal processing. We extend the framework to
cover general non-strongly-convex regularizers, including L1-regularized
problems like lasso, sparse logistic regression, and elastic net
regularization, and show how earlier work can be derived as a special case. We
provide convergence guarantees for the class of convex regularized loss
minimization objectives, leveraging a novel approach to handling
non-strongly-convex regularizers and non-smooth loss functions. The resulting
framework has markedly improved performance over state-of-the-art methods, as
we illustrate with an extensive set of experiments on real distributed
datasets.
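For concreteness, the covered class of convex regularized loss minimization objectives can be written as follows; the notation is ours, a standard formulation rather than one quoted from the paper:

    \min_{w \in \mathbb{R}^d} \; \frac{1}{n} \sum_{i=1}^{n} \ell_i(x_i^\top w)
        + \lambda \Big( \alpha \|w\|_1 + \frac{1-\alpha}{2} \|w\|_2^2 \Big)

Here \alpha = 1 recovers L1-regularized problems such as the lasso, 0 < \alpha < 1 gives elastic net regularization, and choosing the logistic loss for \ell_i yields sparse logistic regression.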
L1-Regularized Distributed Optimization: A Communication-Efficient Primal-Dual Framework
Despite the importance of sparsity in many large-scale applications, there
are few methods for distributed optimization of sparsity-inducing objectives.
In this paper, we present a communication-efficient framework for
L1-regularized optimization in the distributed environment. By viewing
classical objectives in a more general primal-dual setting, we develop a new
class of methods that can be efficiently distributed and applied to common
sparsity-inducing models, such as Lasso, sparse logistic regression, and
elastic net-regularized problems. We provide theoretical convergence guarantees
for our framework, and demonstrate its efficiency and flexibility with a
thorough experimental comparison on Amazon EC2. Our proposed framework yields
speedups of up to 50x compared to current state-of-the-art methods for
distributed L1-regularized optimization.
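Local subproblems for such L1-regularized objectives typically reduce to proximal steps built on soft-thresholding. The sketch below shows one proximal-gradient step for the lasso; it is an illustrative baseline, not the paper's communication-efficient method:

    import numpy as np

    def soft_threshold(z, t):
        # Proximal operator of t * ||.||_1: shrink each coordinate toward zero.
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def prox_grad_step(A, b, x, lam, step):
        # One step for min 0.5*||Ax - b||^2 + lam*||x||_1.
        grad = A.T @ (A @ x - b)  # gradient of the smooth part
        return soft_threshold(x - step * grad, step * lam)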
GHOST: Building blocks for high performance sparse linear algebra on heterogeneous systems
While many of the architectural details of future exascale-class high
performance computer systems are still a matter of intense research, there
appears to be a general consensus that they will be strongly heterogeneous,
featuring "standard" as well as "accelerated" resources. Today, such resources
are available as multicore processors, graphics processing units (GPUs), and
other accelerators such as the Intel Xeon Phi. Any software infrastructure that
claims usefulness for such environments must be able to meet their inherent
challenges: massive multi-level parallelism, topology, asynchronicity, and
abstraction. The "General, Hybrid, and Optimized Sparse Toolkit" (GHOST) is a
collection of building blocks that targets algorithms dealing with sparse
matrix representations on current and future large-scale systems. It implements
the "MPI+X" paradigm, has a pure C interface, and provides hybrid-parallel
numerical kernels, intelligent resource management, and truly heterogeneous
parallelism for multicore CPUs, Nvidia GPUs, and the Intel Xeon Phi. We
describe the details of its design with respect to the challenges posed by
modern heterogeneous supercomputers and recent algorithmic developments.
Implementation details which are indispensable for achieving high efficiency
are pointed out and their necessity is justified by performance measurements or
predictions based on performance models. The library code and several
applications are available as open source. We also provide instructions on how
to make use of GHOST in existing software packages, together with a case study
which demonstrates the applicability and performance of GHOST as a component
within a larger software stack.
Comment: 32 pages, 11 figures
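As background for the kind of kernel GHOST optimizes, here is a sparse matrix-vector multiply over the common CSR storage format. GHOST's own hybrid-parallel kernels are written in C and use their own storage schemes; this Python sketch only illustrates the data structure such kernels traverse:

    import numpy as np

    def csr_spmv(indptr, indices, data, x):
        # y = A @ x for A in CSR form; indptr[i]:indptr[i+1]
        # delimits the nonzeros of row i.
        y = np.zeros(len(indptr) - 1)
        for i in range(len(y)):
            for k in range(indptr[i], indptr[i + 1]):
                y[i] += data[k] * x[indices[k]]
        return y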
Paraiso : An Automated Tuning Framework for Explicit Solvers of Partial Differential Equations
We propose Paraiso, a domain-specific language embedded in the functional
programming language Haskell, for the automated tuning of explicit solvers of
partial differential equations (PDEs) on GPUs as well as multicore CPUs. In
Paraiso, one can describe PDE-solving algorithms succinctly in tensor-equation
notation. Hydrodynamic properties, interpolation methods, and other building
blocks are described in abstract, modular, reusable, and combinable forms,
which lets us generate versatile solvers from a small amount of Paraiso source
code.
We demonstrate Paraiso by implementing a compressible hydrodynamics solver. A
single source file of less than 500 lines can be used to generate solvers of
arbitrary dimensions, for both multicore CPUs and GPUs. We demonstrate both
manual annotation-based tuning and automated tuning based on evolutionary
computing.
Comment: 52 pages, 14 figures, accepted for publication in Computational
Science and Discovery
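For reference, the kind of explicit update such generated solvers perform: one forward-Euler step of the 1-D heat equation with a three-point stencil. This is plain NumPy for illustration, not Paraiso-generated code:

    import numpy as np

    def heat_step(u, nu):
        # One explicit step of u_t = u_xx on a periodic grid,
        # where nu = dt/dx^2 must satisfy nu <= 0.5 for stability.
        return u + nu * (np.roll(u, -1) - 2.0 * u + np.roll(u, 1))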
Fast, Accurate Second Order Methods for Network Optimization
Dual descent methods are commonly used to solve network flow optimization
problems, since their implementation can be distributed over the network. These
algorithms, however, often exhibit slow convergence rates. Approximate Newton
methods which compute descent directions locally have been proposed as
alternatives to accelerate the convergence rates of conventional dual descent.
The effectiveness of these methods is limited by the accuracy of such
approximations. In this paper, we propose an efficient and accurate distributed
second order method for network flow problems. The proposed approach utilizes
the sparsity pattern of the dual Hessian to approximate the Newton
direction using a novel distributed solver for symmetric diagonally dominant
linear equations. Our solver is based on a distributed implementation of a
recent parallel solver of Spielman and Peng (2014). We analyze the properties
of the proposed algorithm and show that, similar to conventional Newton
methods, superlinear convergence within a neighborhood of the optimal value
is attained. We finally demonstrate the effectiveness of the approach in a set
of experiments on randomly generated networks.
Comment: arXiv admin note: text overlap with arXiv:1502.0315
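Schematically, each outer iteration solves a linear system in the dual Hessian to obtain the Newton direction. In the sketch below, an off-the-shelf conjugate-gradient call stands in for the paper's distributed SDD solver, purely for illustration:

    from scipy.sparse.linalg import cg

    def newton_direction(H, g):
        # Solve H d = -g for the Newton direction, where H is the
        # (symmetric diagonally dominant) dual Hessian.
        d, info = cg(H, -g)  # info == 0 on convergence
        return d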
Recent Advances in Graph Partitioning
We survey recent trends in practical algorithms for balanced graph
partitioning, together with applications and future research directions.
Rectified Gaussian Scale Mixtures and the Sparse Non-Negative Least Squares Problem
In this paper, we develop a Bayesian evidence maximization framework to solve
the sparse non-negative least squares (S-NNLS) problem. We introduce a family
of probability densities referred to as the Rectified Gaussian Scale Mixture
(R-GSM) to model the sparsity-enforcing prior distribution for the solution.
The R-GSM prior encompasses a variety of heavy-tailed densities such as the
rectified Laplacian and rectified Student-t distributions with a proper choice
of the mixing density. We utilize the hierarchical representation induced by
the R-GSM prior and develop an evidence maximization framework based on the
Expectation-Maximization (EM) algorithm. Using the EM based method, we estimate
the hyper-parameters and obtain a point estimate for the solution. We refer to
the proposed method as rectified sparse Bayesian learning (R-SBL). We provide
four R-SBL variants that offer a range of options for computational complexity
and the quality of the E-step computation. These methods include the Markov
chain Monte Carlo EM, linear minimum mean-square-error estimation, approximate
message passing and a diagonal approximation. Using numerical experiments, we
show that the proposed R-SBL method outperforms existing S-NNLS solvers in
terms of both signal and support recovery performance, and is also very robust
against the structure of the design matrix.
Comment: Under review by IEEE Transactions on Signal Processing
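For context, the underlying S-NNLS model is y ≈ Ax with x sparse and non-negative. The snippet below sets up such a problem and runs plain non-negative least squares as a baseline; it enforces only x >= 0 and carries no sparsity prior, whereas R-SBL adds the rectified scale-mixture prior on top of this model:

    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 200))
    x_true = np.zeros(200)
    x_true[rng.choice(200, size=10, replace=False)] = rng.random(10)
    y = A @ x_true + 0.01 * rng.standard_normal(100)

    x_hat, resid = nnls(A, y)  # baseline: non-negativity only, no sparsity prior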