
    Rounding Sum-of-Squares Relaxations

    We present a general approach to rounding semidefinite programming relaxations obtained by the Sum-of-Squares method (Lasserre hierarchy). Our approach is based on using the connection between these relaxations and the Sum-of-Squares proof system to transform a *combining algorithm* -- an algorithm that maps a distribution over solutions into a (possibly weaker) solution -- into a *rounding algorithm* that maps a solution of the relaxation to a solution of the original problem. Using this approach, we obtain algorithms that yield improved results for natural variants of three well-known problems: 1) We give a quasipolynomial-time algorithm that approximates the maximum of a low-degree multivariate polynomial with non-negative coefficients over the Euclidean unit sphere. Beyond being of interest in its own right, this is related to an open question in quantum information theory, and our techniques have already led to improved results in this area (Brandão and Harrow, STOC '13). 2) We give a polynomial-time algorithm that, given a $d$-dimensional subspace of $\mathbb{R}^n$ that (almost) contains the characteristic function of a set of size $n/k$, finds a vector $v$ in the subspace satisfying $|v|_4^4 > c(k/d^{1/3}) |v|_2^2$, where $|v|_p = (\mathbb{E}_i v_i^p)^{1/p}$. Aside from being a natural relaxation, this is also motivated by a connection to the Small Set Expansion problem shown by Barak et al. (STOC 2012), and our results yield a certain improvement for that problem. 3) We use this notion of $L_4$ vs. $L_2$ sparsity to obtain a polynomial-time algorithm with substantially improved guarantees for recovering a planted $\mu$-sparse vector $v$ in a random $d$-dimensional subspace of $\mathbb{R}^n$. If $v$ has $\mu n$ nonzero coordinates, we can recover it with high probability whenever $\mu < O(\min(1, n/d^2))$, improving for $d < n^{2/3}$ on prior methods, which intrinsically required $\mu < O(1/\sqrt{d})$.
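
    A minimal numeric sketch (an illustration of the $L_4$ vs. $L_2$ sparsity notion defined above, not the authors' rounding algorithm): under the expectation-normalized norm $|v|_p = (\mathbb{E}_i v_i^p)^{1/p}$, the characteristic vector of a set of size $n/k$ has $|v|_4^4 / |v|_2^4 = k$, while a typical dense vector keeps that ratio near a constant. The parameters below are arbitrary choices for the demo.

```python
import numpy as np

def lp_norm(v, p):
    """Expectation-normalized p-norm: (E_i |v_i|^p)^(1/p)."""
    return (np.mean(np.abs(v) ** p)) ** (1.0 / p)

rng = np.random.default_rng(0)
n, k = 10_000, 100

# Characteristic vector of a set of size n/k.
sparse = np.zeros(n)
sparse[rng.choice(n, size=n // k, replace=False)] = 1.0

dense = rng.standard_normal(n)  # typical dense vector for comparison

for name, v in [("sparse", sparse), ("dense", dense)]:
    ratio = lp_norm(v, 4) ** 4 / lp_norm(v, 2) ** 4
    print(f"{name}: |v|_4^4 / |v|_2^4 = {ratio:.1f}")
# The sparse vector gives ~k = 100; the Gaussian vector stays near 3
# (its fourth moment), which is why a large L_4/L_2 ratio certifies sparsity.
```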

    Sum of squares lower bounds for refuting any CSP

    Let $P:\{0,1\}^k \to \{0,1\}$ be a nontrivial $k$-ary predicate. Consider a random instance of the constraint satisfaction problem $\mathrm{CSP}(P)$ on $n$ variables with $\Delta n$ constraints, each being $P$ applied to $k$ randomly chosen literals. Provided the constraint density satisfies $\Delta \gg 1$, such an instance is unsatisfiable with high probability. The \emph{refutation} problem is to efficiently find a proof of unsatisfiability. We show that whenever the predicate $P$ supports a $t$-\emph{wise uniform} probability distribution on its satisfying assignments, the sum of squares (SOS) algorithm of degree $d = \Theta(\frac{n}{\Delta^{2/(t-1)} \log \Delta})$ (which runs in time $n^{O(d)}$) \emph{cannot} refute a random instance of $\mathrm{CSP}(P)$. In particular, the polynomial-time SOS algorithm requires $\widetilde{\Omega}(n^{(t+1)/2})$ constraints to refute random instances of $\mathrm{CSP}(P)$ when $P$ supports a $t$-wise uniform distribution on its satisfying assignments. Together with recent work of Lee et al. [LRS15], our result also implies that \emph{any} polynomial-size semidefinite programming relaxation for refutation requires at least $\widetilde{\Omega}(n^{(t+1)/2})$ constraints. Our results (which also extend with no change to CSPs over larger alphabets) subsume all previously known lower bounds for semialgebraic refutation of random CSPs. For every constraint predicate $P$, they give a three-way hardness tradeoff between the density of constraints, the SOS degree (hence running time), and the strength of the refutation. By recent algorithmic results of Allen et al. [AOW15] and Raghavendra et al. [RRS16], this full three-way tradeoff is \emph{tight}, up to lower-order factors. Comment: 39 pages, 1 figure
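
    A minimal sketch of the random-instance model in the abstract: $n$ Boolean variables and $\Delta n$ constraints, each applying a fixed $k$-ary predicate $P$ to $k$ randomly chosen literals. The evaluator below uses $P$ = 3-XOR, whose satisfying assignments support a 2-wise uniform distribution, so $t = 2$ in the theorem; the generator itself works for any $k$. Parameter values are arbitrary.

```python
import random

def random_csp_instance(n, delta, k):
    """Return Delta*n constraints; each is a list of k (variable, negated) literals."""
    m = int(delta * n)
    instance = []
    for _ in range(m):
        variables = random.sample(range(n), k)                # k distinct variables
        literals = [(v, random.random() < 0.5) for v in variables]  # random negations
        instance.append(literals)
    return instance

def satisfies_xor(assignment, constraint):
    """Evaluate the 3-XOR predicate: the literals must XOR to True."""
    value = False
    for v, negated in constraint:
        value ^= assignment[v] ^ negated
    return value

n, delta, k = 100, 10.0, 3
instance = random_csp_instance(n, delta, k)
assignment = [random.random() < 0.5 for _ in range(n)]
frac = sum(satisfies_xor(assignment, c) for c in instance) / len(instance)
print(f"random assignment satisfies {frac:.2f} of constraints")  # ~0.5 for XOR
```

    At high density such an instance is far from satisfiable, yet by the theorem, degree-$d$ SOS cannot certify this until the number of constraints reaches roughly $n^{(t+1)/2} = n^{3/2}$ for this predicate.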

    The multi-program performance model: debunking current practice in multi-core simulation

    Composing a representative multi-program multi-core workload is non-trivial. A multi-core processor can execute multiple independent programs concurrently, and hence any program mix can form a potential multi-program workload. Given the very large number of possible multi-program workloads and the limited speed of current simulation methods, it is impossible to evaluate all of them. This paper presents the Multi-Program Performance Model (MPPM), a method for quickly estimating multi-program multi-core performance based on single-core simulation runs. MPPM employs an iterative method to model the tight performance entanglement between co-executing programs on a multi-core processor with shared caches. Because MPPM involves analytical modeling, it is very fast, and it estimates multi-core performance for a very large number of multi-program workloads in a reasonable amount of time. In addition, it provides confidence bounds on its performance estimates. Using SPEC CPU2006 and up to 16 cores, we report an average performance prediction error of 2.3% and 2.9% for system throughput (STP) and average normalized turnaround time (ANTT), respectively, while being up to five orders of magnitude faster than detailed simulation. Subsequently, we demonstrate that randomly picking a limited number of multi-program workloads, as done in current practice, can lead to incorrect design decisions in practical design and research studies; MPPM alleviates this problem. In addition, MPPM can be used to quickly identify multi-program workloads that stress multi-core performance through excessive conflict behavior in shared caches; these stress workloads can then be used to drive the design process further.
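
    A toy fixed-point sketch in the spirit of MPPM's iterative approach. The contention model here (each program's IPC deflated by its co-runners' cache pressure, iterated to convergence) is a hypothetical stand-in, not the paper's equations; only the STP and ANTT definitions follow their standard forms. All inputs and the `interference` coefficient are made-up illustration values.

```python
import numpy as np

def toy_mppm(ipc_alone, pressure, interference=0.05, iters=100, tol=1e-9):
    """ipc_alone[i]: single-core IPC; pressure[i]: relative shared-cache demand."""
    ipc = ipc_alone.copy()
    for _ in range(iters):
        # Pressure a program exerts scales with how fast it currently runs.
        co_pressure = pressure * (ipc / ipc_alone)
        others = co_pressure.sum() - co_pressure   # pressure from co-runners only
        new_ipc = ipc_alone / (1.0 + interference * others)
        if np.max(np.abs(new_ipc - ipc)) < tol:    # converged fixed point
            ipc = new_ipc
            break
        ipc = new_ipc
    stp = np.sum(ipc / ipc_alone)                  # system throughput
    antt = np.mean(ipc_alone / ipc)                # avg. normalized turnaround time
    return stp, antt

ipc_alone = np.array([2.0, 1.5, 1.0, 0.8])
pressure = np.array([0.3, 0.8, 0.5, 1.0])
print(toy_mppm(ipc_alone, pressure))
```

    The point of the iteration is the one the abstract names: each program's slowdown depends on its co-runners' effective speed, which in turn depends on its own, so the entangled performance must be solved for jointly rather than read off single-core runs directly.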

    Target Assignment in Robotic Networks: Distance Optimality Guarantees and Hierarchical Strategies

    We study the problem of multi-robot target assignment to minimize the total distance traveled by the robots until they all reach an equal number of static targets. In the first half of the paper, we present a necessary and sufficient condition under which true distance optimality can be achieved for robots with limited communication and target-sensing ranges. Moreover, we provide an explicit, non-asymptotic formula for computing the number of robots needed to achieve distance optimality in terms of the robots' communication and target-sensing ranges with arbitrary guaranteed probabilities. The same bounds are also shown to be asymptotically tight. In the second half of the paper, we present suboptimal strategies for use when the number of robots cannot be chosen freely. Assuming first that all targets are known to all robots, we employ a hierarchical communication model in which robots communicate only with other robots in the same partitioned region. This hierarchical communication model leads to constant approximations of true distance-optimal solutions under mild assumptions. We then revisit the limited communication and sensing models. By combining simple rendezvous-based strategies with a hierarchical communication model, we obtain decentralized hierarchical strategies that achieve constant approximation ratios with respect to true distance optimality. Simulation results show that the approximation ratio is as low as 1.4.
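
    For reference, a centralized baseline for the same objective (not the paper's decentralized strategy): with global knowledge, the min-total-distance assignment of robots to distinct targets is a linear sum assignment problem, solvable with the Hungarian algorithm. Positions below are random illustration data.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(1)
robots = rng.uniform(0.0, 1.0, size=(8, 2))   # 8 robot positions in the unit square
targets = rng.uniform(0.0, 1.0, size=(8, 2))  # 8 static target positions

cost = cdist(robots, targets)                 # pairwise Euclidean distances
rows, cols = linear_sum_assignment(cost)      # min-total-distance matching
print("optimal total distance:", cost[rows, cols].sum())
```

    The paper's constant approximation ratios are measured against exactly this optimum, but achieved under limited communication and sensing rather than global information.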

    A Task-Based Approach to Organization: Knowledge, Communication and Structure

    We bridge a gap between organizational economics and strategy research by developing a task-based approach to analyzing organizational knowledge, process, and structure, and by deriving testable implications for the relation between production and organizational structure. We argue that organization emerges to integrate dispersed knowledge and to coordinate talent in production, and is designed to complement the limitations of human ability. The complexity of the tasks undertaken determines the optimal level of knowledge acquisition and talent. The relations between tasks, namely complementarities or substitutabilities and synergies, determine the allocation of knowledge among members of the organization. Communication shapes the relations between individual talents and governs the organizational process and structure that integrate dispersed knowledge to perform tasks more efficiently. Organizational structure can also be deliberately designed ex ante to correct biases in individual judgement, the extent of which depends on the attributes of the tasks. Organizational process and the routinized organizational structure are the core of organizational capital, which generates rent and sustains organizational growth. This task-based approach enriches the existing body of organization studies, in particular the knowledge-based theory of the firm and the dynamic capabilities theory.
    Keywords: task-based approach, complementarities, tacit knowledge, codifiable knowledge, code, vertical communication, horizontal communication, organizational architecture, decision bias