    The Power of Randomization: Distributed Submodular Maximization on Massive Datasets

    A wide variety of problems in machine learning, including exemplar clustering, document summarization, and sensor placement, can be cast as constrained submodular maximization problems. Unfortunately, the resulting submodular optimization problems are often too large to be solved on a single machine. We develop a simple distributed algorithm that is embarrassingly parallel and achieves provable, constant-factor, worst-case approximation guarantees. In our experiments, we demonstrate its efficiency on large problems with different kinds of constraints, with objective values always close to what is achievable in the centralized setting.
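    As an illustration (not the paper's algorithm), here is a minimal Python sketch of the generic two-round "randomly partition, run greedy on each machine, then run greedy on the merged candidates" pattern for cardinality-constrained submodular maximization. It assumes f is a monotone submodular set function available as a black-box callable on lists of elements; all names are illustrative.

    import random

    def greedy(f, ground_set, k):
        """Plain greedy: repeatedly add the element with the largest marginal gain."""
        selected = []
        candidates = set(ground_set)
        for _ in range(k):
            best, best_gain = None, float("-inf")
            for e in candidates:
                gain = f(selected + [e]) - f(selected)
                if gain > best_gain:
                    best, best_gain = e, gain
            if best is None:
                break
            selected.append(best)
            candidates.remove(best)
        return selected

    def distributed_greedy(f, ground_set, k, num_machines, seed=0):
        """Round 1: random partition + local greedy. Round 2: greedy on the union."""
        rng = random.Random(seed)
        parts = [[] for _ in range(num_machines)]
        for e in ground_set:
            parts[rng.randrange(num_machines)].append(e)
        # In an actual deployment each local run below would execute on its own machine.
        local_solutions = [greedy(f, part, k) for part in parts]
        merged = [e for sol in local_solutions for e in sol]
        final = greedy(f, merged, k)
        # Return the best of the second-round solution and the local ones.
        return max(local_solutions + [final], key=f)

    The only communication is the merged candidate set of at most num_machines * k elements, which is what makes the pattern embarrassingly parallel; the constant-factor guarantees discussed in the abstract depend on analysis not reproduced here.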

    On parallel versus sequential approximation

    In this paper we deal with the class NCX of NP optimization problems that are approximable within a constant ratio in NC. This class is the parallel counterpart of the class APX. Our main motivation here is to reduce the study of sequential and parallel approximability to the same framework. To this aim, we first introduce a new kind of NC-reduction that preserves the relative error of the approximate solutions, and show that the class NCX has complete problems under this reducibility. An important subset of NCX is the class MAXSNP; we show that MAXSNP-complete problems have a threshold on the parallel approximation ratio: there are positive constants $\epsilon_1$, $\epsilon_2$ such that, although the problem can be approximated in P within $\epsilon_1$, it cannot be approximated in NC within $\epsilon_2$ unless P = NC. This result is attained by showing that the problem of approximating the value obtained through a non-oblivious local search algorithm is P-complete, for some values of the approximation ratio. Finally, we show that approximating through non-oblivious local search is in average NC.
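    To make "non-oblivious local search" concrete, here is a small Python sketch for MAX-2-SAT: the search flips variables to improve a weighted potential that scores clauses by how many of their literals are true, rather than the raw count of satisfied clauses. The weights 3/2 and 2 are one commonly cited choice and are an assumption here, not necessarily the values analyzed in the paper.

    def num_true_literals(clause, assignment):
        # clause: tuple of signed ints, e.g. (1, -2) means (x1 OR NOT x2)
        return sum(1 for lit in clause
                   if assignment[abs(lit)] == (lit > 0))

    def potential(clauses, assignment, w1=1.5, w2=2.0):
        # Non-oblivious objective: weight clauses by the number of true literals.
        total = 0.0
        for c in clauses:
            t = num_true_literals(c, assignment)
            total += w1 if t == 1 else (w2 if t == 2 else 0.0)
        return total

    def non_oblivious_local_search(clauses, n_vars):
        assignment = {i: False for i in range(1, n_vars + 1)}
        improved = True
        while improved:
            improved = False
            for v in range(1, n_vars + 1):
                before = potential(clauses, assignment)
                assignment[v] = not assignment[v]
                if potential(clauses, assignment) > before + 1e-12:
                    improved = True                     # keep the flip
                else:
                    assignment[v] = not assignment[v]   # undo the flip
        satisfied = sum(1 for c in clauses if num_true_literals(c, assignment) > 0)
        return assignment, satisfied

    The abstract's P-completeness result concerns approximating the value such a local optimum attains; the sketch only shows the search itself.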

    On limited-memory quasi-Newton methods for minimizing a quadratic function

    The main focus in this paper is exact linesearch methods for minimizing a quadratic function whose Hessian is positive definite. We give two classes of limited-memory quasi-Newton Hessian approximations that generate search directions parallel to those of the method of preconditioned conjugate gradients, and hence give finite termination on quadratic optimization problems. The Hessian approximations are described by a novel compact representation which provides a dynamical framework. We also discuss possible extensions of these classes and show their behavior on randomly generated quadratic optimization problems. The methods behave numerically similarly to L-BFGS. Including information from the first iteration in the limited-memory Hessian approximation and in L-BFGS significantly reduces the effects of round-off errors on the considered problems. In addition, we give our compact representation of the Hessian approximations in the full Broyden class for the general unconstrained optimization problem. This representation consists of explicit matrices, with gradients as its only vector components.
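    For orientation, the sketch below shows a standard limited-memory quasi-Newton loop (L-BFGS two-loop recursion) with an exact linesearch on a strictly convex quadratic f(x) = 0.5 x^T A x - b^T x, where the exact step has the closed form alpha = -g^T p / (p^T A p). This is a generic baseline, not the paper's compact representation or its two new classes of approximations.

    import numpy as np

    def two_loop(g, mem):
        """Apply the limited-memory inverse-Hessian approximation to the gradient g."""
        q = g.copy()
        alphas = []
        for s, y in reversed(mem):
            rho = 1.0 / (y @ s)
            a = rho * (s @ q)
            alphas.append(a)
            q = q - a * y
        gamma = (mem[-1][0] @ mem[-1][1]) / (mem[-1][1] @ mem[-1][1]) if mem else 1.0
        r = gamma * q
        for (s, y), a in zip(mem, reversed(alphas)):
            rho = 1.0 / (y @ s)
            b = rho * (y @ r)
            r = r + (a - b) * s
        return r

    def lbfgs_quadratic(A, b, x0, m=5, tol=1e-10, max_iter=1000):
        x, mem = x0.copy(), []
        g = A @ x - b
        for _ in range(max_iter):
            if np.linalg.norm(g) < tol:
                break
            p = -two_loop(g, mem)
            alpha = -(g @ p) / (p @ A @ p)   # exact linesearch on the quadratic
            s = alpha * p
            x = x + s
            g_new = A @ x - b
            mem.append((s, g_new - g))
            if len(mem) > m:
                mem.pop(0)                   # keep only the m most recent pairs
            g = g_new
        return x

    In exact arithmetic such methods terminate on an n-dimensional positive definite quadratic after finitely many iterations, which is the property the abstract's constructions are designed to guarantee; in floating point, round-off can delay termination, which is the effect the paper discusses.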

    Parallel Rollout for Deterministic Optimal Control

    We extend the parallel rollout algorithm for solving deterministic infinite-horizon optimal control problems with nonnegative stage costs. Given the exact or approximate cost functions of several base policies, the proposed scheme can harness the presence of multiple computing units. We show that the proposed scheme permits a parallel implementation and can be viewed as a decomposition method for solving challenging optimization problems that arise in model predictive control (MPC) or related approximation schemes. When applied to problems involving continuous state and control spaces, our method requires solving multiple copies of similar MPC problems with common dynamics and stage costs.
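    To illustrate the basic idea (a generic rollout step over a finite control set, not the paper's MPC formulation), here is a small Python sketch for a deterministic system x_next = f(x, u) with stage cost g(x, u). J_list holds the exact or approximate cost functions of the base policies; the per-policy evaluations of each candidate successor state are independent and can be distributed across computing units. All names are illustrative.

    def parallel_rollout_step(x, controls, f, g, J_list):
        """Choose the control minimizing stage cost plus the best base-policy cost-to-go."""
        best_u, best_q = None, float("inf")
        for u in controls:
            x_next = f(x, u)
            # One evaluation per base policy; these are the parallelizable pieces.
            q = g(x, u) + min(J(x_next) for J in J_list)
            if q < best_q:
                best_u, best_q = u, q
        return best_u, best_q

    Applying this step at every encountered state defines the rollout policy; in the continuous-state, continuous-control setting discussed in the abstract, the enumeration over controls is replaced by MPC subproblems that share the same dynamics and stage costs.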