45 research outputs found

    A polynomial time approximation scheme for computing the supremum of Gaussian processes

    Full text link
    We give a polynomial time approximation scheme (PTAS) for computing the supremum of a Gaussian process. That is, given a finite set of vectors $V \subseteq \mathbb{R}^d$, we compute a $(1+\varepsilon)$-factor approximation to $\mathbb{E}_{X \leftarrow \mathcal{N}^d}[\sup_{v \in V} |\langle v, X\rangle|]$ deterministically in time $\operatorname{poly}(d) \cdot |V|^{O_{\varepsilon}(1)}$. Previously, only a constant factor deterministic polynomial time approximation algorithm was known due to the work of Ding, Lee and Peres [Ann. of Math. (2) 175 (2012) 1409-1471]. This answers an open question of Lee (2010) and Ding [Ann. Probab. 42 (2014) 464-496]. The study of suprema of Gaussian processes is of considerable importance in probability, with applications in functional analysis, convex geometry, and, in light of the recent breakthrough work of Ding, Lee and Peres [Ann. of Math. (2) 175 (2012) 1409-1471], random walks on finite graphs. As such our result could be of use elsewhere. In particular, combining with the work of Ding [Ann. Probab. 42 (2014) 464-496], our result yields a PTAS for computing the cover time of bounded-degree graphs. Previously, such algorithms were known only for trees. Along the way, we also give an explicit oblivious estimator for semi-norms in Gaussian space with optimal query complexity. Our algorithm and its analysis are elementary in nature, using two classical comparison inequalities, Slepian's lemma and Kanter's lemma. Comment: Published at http://dx.doi.org/10.1214/13-AAP997 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
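    For orientation, the quantity being approximated has an obvious randomized baseline: sample Gaussian vectors and average the observed suprema. The sketch below (illustrative Python, not from the paper) implements that naive Monte Carlo estimator; the paper's contribution is a deterministic $(1+\varepsilon)$-approximation of the same quantity.

```python
import numpy as np

def mc_sup_gaussian(V, num_samples=100_000, seed=0):
    """Naive Monte Carlo estimate of E_{X ~ N(0, I_d)}[ sup_{v in V} |<v, X>| ].

    V: array of shape (n, d), one vector of the finite set per row.
    This randomized baseline contrasts with the paper's *deterministic* PTAS.
    """
    rng = np.random.default_rng(seed)
    V = np.asarray(V, dtype=float)
    X = rng.standard_normal((num_samples, V.shape[1]))  # rows are draws of X ~ N(0, I_d)
    per_sample_sup = np.abs(X @ V.T).max(axis=1)        # sup_{v in V} |<v, X>| for each draw
    return per_sample_sup.mean()

# Example: V = standard basis of R^5, so the supremum is max_i |X_i|.
print(mc_sup_gaussian(np.eye(5)))
```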

    On asymptotic constants in the theory of extremes for Gaussian processes

    Full text link
    This paper gives a new representation of Pickands' constants, which arise in the study of extremes for a variety of Gaussian processes. Using this representation, we resolve the long-standing problem of devising a reliable algorithm for estimating these constants. A detailed error analysis illustrates the strength of our approach. Comment: Published at http://dx.doi.org/10.3150/13-BEJ534 in Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm).
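    The abstract does not restate the classical definition; as commonly written, Pickands' constant is $H_\alpha = \lim_{T\to\infty} T^{-1}\,\mathbb{E}\exp(\sup_{t\in[0,T]}(\sqrt{2}\,B_{\alpha/2}(t) - t^{\alpha}))$, where $B_H$ is fractional Brownian motion with variance $t^{2H}$. The sketch below (illustrative Python, using this standard definition rather than the paper's new representation) naively simulates the $\alpha = 1$ case, where $B_{1/2}$ is standard Brownian motion and $H_1 = 1$ is known; the truncation and discretization bias of such direct simulation is exactly the kind of unreliability the paper addresses.

```python
import numpy as np

def naive_pickands_alpha1(T=10.0, dt=0.001, num_paths=2000, seed=0):
    """Naive estimate of H_1 via the classical representation
    H_1(T)/T with H_1(T) = E[exp(sup_{0 <= t <= T} (sqrt(2) W(t) - t))],
    W a standard Brownian motion. Truncating T and discretizing time both
    bias the estimate, and the heavy-tailed integrand makes it noisy --
    the reliability problem the paper's new representation avoids.
    """
    rng = np.random.default_rng(seed)
    steps = int(T / dt)
    t = np.arange(1, steps + 1) * dt
    vals = np.empty(num_paths)
    for i in range(num_paths):
        W = np.cumsum(rng.standard_normal(steps)) * np.sqrt(dt)   # Brownian path on the grid
        vals[i] = np.exp(max((np.sqrt(2.0) * W - t).max(), 0.0))  # include t = 0, where the field is 0
    return vals.mean() / T

print(naive_pickands_alpha1())  # a rough, biased estimate of H_1 = 1
```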

    Correlation Decay in Random Decision Networks

    Full text link
    We consider a decision network on an undirected graph in which each node corresponds to a decision variable, and each node and edge of the graph is associated with a reward function whose value depends only on the variables of the corresponding nodes. The goal is to construct a decision vector which maximizes the total reward. This decision problem encompasses a variety of models, including maximum-likelihood inference in graphical models (Markov Random Fields), combinatorial optimization on graphs, economic team theory and statistical physics. The network is endowed with a probabilistic structure in which rewards are sampled from a distribution. Our aim is to identify sufficient conditions to guarantee average-case polynomiality of the underlying optimization problem. We construct a new decentralized algorithm called Cavity Expansion and establish its theoretical performance for a variety of models. Specifically, for certain classes of models we prove that our algorithm is able to find near-optimal solutions with high probability in a decentralized way. The success of the algorithm is based on the network exhibiting a correlation decay (long-range independence) property. Our results have the following surprising implications in the area of average-case complexity of algorithms. Finding the largest independent (stable) set of a graph is a well-known NP-hard optimization problem for which no polynomial time approximation scheme is possible even for graphs with largest connectivity equal to three, unless P=NP. We show that the closely related maximum weighted independent set problem for the same class of graphs admits a PTAS when the weights are i.i.d. with the exponential distribution. Namely, randomization of the reward function turns an NP-hard problem into a tractable one.
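    To make the setup concrete, the following sketch (illustrative Python, not taken from the paper) evaluates the objective described above: a decision vector assigns a value to every node, and the total reward is the sum of node rewards plus edge rewards that depend only on the endpoint decisions. A brute-force maximizer is included purely to show what the algorithmic problem is; the paper's Cavity Expansion algorithm is decentralized and avoids this exponential enumeration.

```python
import itertools

def total_reward(x, node_reward, edge_reward):
    """Total reward of a decision vector x = {node: decision}: one term per node
    plus one term per edge, each depending only on the decisions at its nodes."""
    return (sum(f(x[i]) for i, f in node_reward.items())
            + sum(g(x[i], x[j]) for (i, j), g in edge_reward.items()))

def brute_force_optimum(values, node_reward, edge_reward):
    """Exhaustive maximization over all decision vectors; exponential in the
    number of nodes, so only usable on tiny illustrative instances."""
    nodes = list(node_reward)
    best = max(itertools.product(values, repeat=len(nodes)),
               key=lambda a: total_reward(dict(zip(nodes, a)), node_reward, edge_reward))
    return dict(zip(nodes, best))

# Toy instance: a path 0 - 1 - 2 with binary decisions and a penalty for
# adjacent nodes both choosing 1 (an MWIS-like structure).
node_reward = {0: lambda a: 1.0 * a, 1: lambda a: 2.0 * a, 2: lambda a: 1.0 * a}
edge_reward = {(0, 1): lambda a, b: -5.0 * a * b, (1, 2): lambda a, b: -5.0 * a * b}
print(brute_force_optimum([0, 1], node_reward, edge_reward))
```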

    Correlation Decay in Random Decision Networks

    Get PDF
    We consider a decision network on an undirected graph in which each node corresponds to a decision variable, and each node and edge of the graph is associated with a reward function whose value depends only on the variables of the corresponding nodes. The goal is to construct a decision vector that maximizes the total reward. This decision problem encompasses a variety of models, including maximum-likelihood inference in graphical models (Markov Random Fields), combinatorial optimization on graphs, economic team theory, and statistical physics. The network is endowed with a probabilistic structure in which rewards are sampled from a distribution. Our aim is to identify sufficient conditions on the network structure and reward distributions to guarantee average-case polynomiality of the underlying optimization problem. Additionally, we wish to characterize the efficiency of a decentralized solution generated on the basis of local information. We construct a new decentralized algorithm called Cavity Expansion and establish its theoretical performance for a variety of graph models and reward function distributions. Specifically, for certain classes of models we prove that our algorithm is able to find a near-optimal solution with high probability in a decentralized way. The success of the algorithm is based on the network exhibiting a certain correlation decay (long-range independence) property, and we prove that this property is indeed exhibited by the models of interest. Our results have the following surprising implications in the area of average-case complexity of algorithms. Finding the largest independent (stable) set of a graph is a well-known NP-hard optimization problem for which no polynomial time approximation scheme is possible even for graphs with largest connectivity equal to three, unless P = NP. Yet we show that the closely related Maximum Weight Independent Set problem for the same class of graphs admits a PTAS when the weights are independently and identically distributed with the exponential distribution. Namely, randomization of the reward function turns an NP-hard problem into a tractable one. Keywords: optimization; NP-hardness; long-range independence. National Science Foundation (U.S.) (Grant CMMI-0726733).
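    The random-weights setting behind the tractability claim above is easy to instantiate. The sketch below (illustrative Python, not the Cavity Expansion algorithm) draws i.i.d. Exp(1) node weights on a small graph of degree at most three and computes the Maximum Weight Independent Set by brute force, just to make the randomized problem concrete.

```python
import itertools
import random

def exp_weights(num_nodes, seed=0):
    """I.i.d. exponential(1) node weights -- the distributional assumption
    under which the abstract's PTAS applies."""
    rng = random.Random(seed)
    return {v: rng.expovariate(1.0) for v in range(num_nodes)}

def exact_mwis(num_nodes, edges, weights):
    """Brute-force Maximum Weight Independent Set: try every subset and keep the
    heaviest one containing no edge. Exponential time; for tiny examples only."""
    best_set, best_val = set(), 0.0
    for r in range(1, num_nodes + 1):
        for subset in itertools.combinations(range(num_nodes), r):
            s = set(subset)
            if all(not (u in s and v in s) for u, v in edges):
                val = sum(weights[v] for v in s)
                if val > best_val:
                    best_set, best_val = s, val
    return best_set, best_val

# A 6-cycle: every node has degree 2, within the degree bound discussed above.
edges = [(i, (i + 1) % 6) for i in range(6)]
print(exact_mwis(6, edges, exp_weights(6)))
```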

    An Efficient PTAS for Stochastic Load Balancing with Poisson Jobs

    Get PDF
    We give the first polynomial-time approximation scheme (PTAS) for the stochastic load balancing problem when the job sizes follow Poisson distributions. This improves upon the 2-approximation algorithm due to Goel and Indyk (FOCS '99). Moreover, our approximation scheme is an efficient PTAS that has a running time double exponential in 1/ε but nearly linear in n, where n is the number of jobs and ε is the target error. Previously, a PTAS (not efficient) was only known for jobs that obey exponential distributions (Goel and Indyk, FOCS '99). Our algorithm relies on several probabilistic ingredients, including some (seemingly) new results on scaling and the so-called "focusing effect" of the maximum of Poisson random variables, which might be of independent interest.
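    A key structural fact in this Poisson setting (standard probability, not specific to the paper) is that a machine's load is itself Poisson: the sum of the independent Poisson job sizes assigned to it is Poisson with the summed rate. The sketch below (illustrative Python) uses this to estimate the expected makespan of a fixed assignment by Monte Carlo; the paper's PTAS, by contrast, searches over assignments with a deterministic guarantee.

```python
import numpy as np

def expected_makespan(per_machine_rates, num_samples=200_000, seed=0):
    """Monte Carlo estimate of E[max_i L_i] for a fixed assignment of Poisson jobs.
    Since a sum of independent Poisson variables is Poisson, machine i's load is
    L_i ~ Poisson(total rate of the jobs assigned to it), so only the per-machine
    total rates matter here."""
    rng = np.random.default_rng(seed)
    rates = np.asarray(per_machine_rates, dtype=float)
    loads = rng.poisson(rates, size=(num_samples, len(rates)))  # one row per sample
    return loads.max(axis=1).mean()

# Three machines, total rate 6: a balanced split vs. an unbalanced one.
print(expected_makespan([2.0, 2.0, 2.0]))
print(expected_makespan([4.0, 1.0, 1.0]))
```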

    Majorizing measures for the optimizer

    Get PDF
    The theory of majorizing measures, extensively developed by Fernique, Talagrand and many others, provides one of the most general frameworks for controlling the behavior of stochastic processes. In particular, it can be applied to derive quantitative bounds on the expected suprema and the degree of continuity of sample paths for many processes. One of the crowning achievements of the theory is Talagrand’s tight alternative characterization of the suprema of Gaussian processes in terms of majorizing measures. The proof of this theorem was difficult, and thus considerable effort was put into the task of developing proofs that are both shorter and easier to understand. A major reason for this difficulty was considered to be the theory of majorizing measures itself, which had the reputation of being opaque and mysterious. As a consequence, most recent treatments of the theory (including by Talagrand himself) have eschewed the use of majorizing measures in favor of a purely combinatorial approach (the generic chaining), where objects based on sequences of partitions provide roughly matching upper and lower bounds on the desired expected supremum. In this paper, we return to majorizing measures as a primary object of study, and give a viewpoint that we think is very natural and clarifying from an optimization perspective. As our main contribution, we give an algorithmic proof of the majorizing measures theorem based on two parts. First, we make the simple (but apparently new) observation that finding the best majorizing measure can be cast as a convex program; this also allows for efficiently computing the measure using off-the-shelf methods from convex optimization. Second, we obtain tree-based upper and lower bound certificates by rounding, in a series of steps, the primal and dual solutions to this convex program. While duality has conceptually been part of the theory since its beginnings, as far as we are aware no explicit link to convex optimization has previously been made.
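    For reference, the theorem being made algorithmic is usually stated as follows (standard formulation and notation, not quoted from the paper): for a centered Gaussian process $(X_t)_{t \in T}$ with canonical metric $d(s,t) = (\mathbb{E}(X_s - X_t)^2)^{1/2}$,

```latex
% Majorizing measures theorem (Fernique's upper bound, Talagrand's matching lower bound);
% the infimum runs over probability measures mu on T, and B_d(t, eps) is the d-ball of radius eps.
\[
  \mathbb{E}\,\sup_{t \in T} X_t
  \;\asymp\;
  \inf_{\mu}\ \sup_{t \in T}
  \int_0^{\infty} \sqrt{\log \frac{1}{\mu\!\left(B_d(t,\varepsilon)\right)}}\; d\varepsilon .
\]
```

    The paper's observation is that this inner functional, minimized over $\mu$, can be written as a convex program, which is what makes the best majorizing measure efficiently computable.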

    Correlation decay and decentralized optimization in graphical models

    Get PDF
    Thesis (Ph.D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2010. Cataloged from PDF version of thesis. Includes bibliographical references (p. 213-229) and index.

    Many models of optimization, statistics, social organizations and machine learning capture local dependencies by means of a network that describes the interconnections and interactions of different components. However, in most cases, optimization or inference on these models is hard due to the dimensionality of the networks. This is so even when using algorithms that take advantage of the underlying graphical structure. Approximate methods are therefore needed. The aim of this thesis is to study such large-scale systems, focusing on the question of how randomness affects the complexity of optimizing in a graph; of particular interest is the study of a phenomenon known as correlation decay, namely, the phenomenon where the influence of a node on another node of the network decreases quickly as the distance between them grows.

    In the first part of this thesis, we develop a new message-passing algorithm for optimization in graphical models. We formally prove a connection between the correlation decay property and (i) the near-optimality of this algorithm, as well as (ii) the decentralized nature of optimal solutions. In the context of discrete optimization with random costs, we develop a technique for establishing that a system exhibits correlation decay. We illustrate the applicability of the method by giving concrete results for the cases of uniform and Gaussian distributed cost coefficients in networks with bounded connectivity.

    In the second part, we pursue similar questions in a combinatorial optimization setting: we consider the problem of finding a maximum weight independent set in a bounded degree graph, when the node weights are i.i.d. random variables. Surprisingly, we discover that the problem becomes tractable for certain distributions. Specifically, we construct a PTAS for the case of exponentially distributed weights and arbitrary graphs with degree at most 3, and obtain generalizations for higher degrees and different distributions. At the same time we prove that no PTAS exists for the case of exponentially distributed weights for graphs with sufficiently large but bounded degree, unless P=NP.

    Next, we shift our focus to graphical games, which are a game-theoretic analog of graphical models. We establish a connection between the problem of finding an approximate Nash equilibrium in a graphical game and the problem of optimization in graphical models. We use this connection to re-derive NashProp, a message-passing algorithm which computes Nash equilibria for graphical games on trees; we also suggest several new search algorithms for graphical games in general networks. Finally, we propose a definition of correlation decay in graphical games, and establish that the property holds in a restricted family of graphical games.

    The last part of the thesis is devoted to a particular application of graphical models and message-passing algorithms to the problem of early prediction of Alzheimer's disease. To this end, we develop a new measure of synchronicity between different parts of the brain, and apply it to electroencephalogram data. We show that the resulting prediction method outperforms a vast number of other EEG-based measures in the task of predicting the onset of Alzheimer's disease.

    by Théophane Weber. Ph.D.
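    As a concrete instance of message passing for optimization in graphical models, the sketch below (illustrative Python; a standard max-sum dynamic program, not the thesis's Cavity Expansion algorithm) computes a maximum weight independent set exactly on a tree by passing messages from the leaves to the root. On graphs with cycles this exactness is lost, which is where correlation decay arguments of the kind studied in the thesis come in.

```python
def tree_mwis(adj, weights, root=0):
    """Exact max-weight independent set on a tree via leaf-to-root message passing.
    adj: {node: [neighbors]} describing a tree, weights: {node: weight}.
    m[v] = (best value of v's subtree with v excluded, with v included)."""
    m = {}
    order, parent = [], {root: None}
    stack = [root]
    while stack:                      # iterative DFS gives a parents-before-children order
        v = stack.pop()
        order.append(v)
        for u in adj[v]:
            if u != parent[v]:
                parent[u] = v
                stack.append(u)
    for v in reversed(order):         # pass messages from the leaves up to the root
        children = [u for u in adj[v] if u != parent[v]]
        out = sum(max(m[u]) for u in children)             # v excluded: children unconstrained
        inc = weights[v] + sum(m[u][0] for u in children)  # v included: children must be excluded
        m[v] = (out, inc)
    return max(m[root])

# Toy tree: a path 0 - 1 - 2 - 3 with unit weights; the optimum is 2 (e.g. {0, 2}).
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(tree_mwis(adj, {v: 1.0 for v in adj}))
```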