
    Smoothed Efficient Algorithms and Reductions for Network Coordination Games

    Worst-case hardness results for most equilibrium computation problems have raised the need for beyond-worst-case analysis. To this end, we study the smoothed complexity of finding pure Nash equilibria in Network Coordination Games, a PLS-complete problem in the worst case. This is a potential game where the sequential-better-response algorithm is known to converge to a pure NE, albeit in exponential time. First, we prove polynomial (resp. quasi-polynomial) smoothed complexity when the underlying game graph is a complete (resp. arbitrary) graph, and every player has constantly many strategies. We note that the complete-graph case is reminiscent of perturbing all parameters, a common assumption in most known smoothed analysis results. Second, we define a notion of smoothness-preserving reduction among search problems, and obtain reductions from 2-strategy network coordination games to local-max-cut, and from k-strategy games (with arbitrary k) to local-max-cut up to two flips. The former, together with the recent result of [BCC18], gives an alternate O(n^8)-time smoothed algorithm for the 2-strategy case. This notion of reduction allows for the extension of smoothed efficient algorithms from one problem to another. For the first set of results, we develop techniques to bound the probability that an (adversarial) better-response sequence makes slow improvements on the potential. Our approach combines and generalizes the local-max-cut approaches of [ER14, ABPW17] to handle the multi-strategy case: it requires a careful definition of the matrix which captures the increase in potential, a tighter union bound on adversarial sequences, and balancing these with good enough rank bounds. We believe that the approach and notions developed herein could be of interest in addressing the smoothed complexity of other potential and/or congestion games.
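
    To fix ideas, here is a minimal sketch of the sequential better-response dynamic the abstract refers to, for a generic 2-strategy network coordination game. The data layout (`A` as one shared payoff matrix per edge) and the choice of always moving to a best improving strategy are our assumptions, not the paper's:

```python
import random

def better_response_dynamics(n, edges, A, strategies=2, seed=0):
    """Sequential better-response in a network coordination game.

    n     -- number of players (vertices 0..n-1)
    edges -- list of pairs (u, v) with u < v
    A     -- A[(u, v)] is a strategies x strategies payoff matrix; on edge
             (u, v) both endpoints receive A[(u, v)][s_u][s_v] (coordination)
    Returns a pure Nash equilibrium strategy profile.
    """
    rng = random.Random(seed)
    s = [rng.randrange(strategies) for _ in range(n)]
    nbrs = {u: [] for u in range(n)}
    for u, v in edges:
        nbrs[u].append(v)
        nbrs[v].append(u)

    def payoff(u, su):
        # Player u's payoff is the sum of shared payoffs on incident edges.
        total = 0.0
        for v in nbrs[u]:
            total += A[(u, v)][su][s[v]] if u < v else A[(v, u)][s[v]][su]
        return total

    improved = True
    while improved:                  # every switch strictly raises the potential
        improved = False             # (sum of edge payoffs), so this terminates,
        for u in range(n):           # though possibly in exponential time
            best = max(range(strategies), key=lambda t: payoff(u, t))
            if payoff(u, best) > payoff(u, s[u]):
                s[u] = best
                improved = True
    return s
```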

    Improving the smoothed complexity of FLIP for max cut problems

    Finding locally optimal solutions for max-cut and max-k-cut are well-known PLS-complete problems. An instinctive approach to finding such a locally optimal solution is the FLIP method. Even though FLIP requires exponential time on worst-case instances, it tends to terminate quickly on practical instances. To explain this discrepancy, the run-time of FLIP has been studied in the smoothed complexity framework. Etscheid and Röglin showed that the smoothed complexity of FLIP for max-cut in arbitrary graphs is quasi-polynomial. Angel, Bubeck, Peres, and Wei showed that the smoothed complexity of FLIP for max-cut in complete graphs is O(ϕ^5 n^{15.1}), where ϕ is an upper bound on the random edge-weight density and n is the number of vertices in the input graph. While Angel et al.'s result showed the first polynomial smoothed complexity, they also conjectured that their run-time bound is far from optimal. In this work, we make substantial progress towards improving the run-time bound. We prove that the smoothed complexity of FLIP in complete graphs is O(ϕ n^{7.83}). Our results are based on a carefully chosen matrix whose rank captures the run-time of the method, along with improved rank bounds for this matrix and an improved union bound based on this matrix. In addition, our techniques provide a general framework for analyzing FLIP in the smoothed framework. We illustrate this general framework by showing that the smoothed complexity of FLIP for max-3-cut in complete graphs is polynomial and for max-k-cut in arbitrary graphs is quasi-polynomial. We believe that our techniques should also be of interest towards addressing the smoothed complexity of FLIP for max-k-cut in complete graphs for larger constants k. Comment: 36 pages
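
    For readers unfamiliar with it, FLIP is the local search that repeatedly moves a single vertex across the cut whenever doing so increases the cut weight. The following generic sketch is ours, not the paper's analysis apparatus:

```python
import random

def flip_local_search(weights, seed=0):
    """FLIP local search for max-cut.

    weights -- dict mapping frozenset({u, v}) to the weight of edge (u, v)
    Returns a locally optimal cut: side[v] in {0, 1} such that no single
    vertex flip increases the total weight of edges crossing the cut.
    """
    vertices = sorted({v for e in weights for v in e})
    rng = random.Random(seed)
    side = {v: rng.randrange(2) for v in vertices}

    def gain(v):
        # Change in cut weight if v switches sides: edges to same-side
        # neighbors start crossing (+w), edges to opposite-side ones stop (-w).
        g = 0.0
        for e, w in weights.items():
            if v in e:
                (u,) = e - {v}
                g += w if side[u] == side[v] else -w
        return g

    improved = True
    while improved:              # the cut weight strictly increases with each
        improved = False         # flip, so FLIP terminates; worst case is
        for v in vertices:       # exponential, smoothed run-time far better
            if gain(v) > 0:
                side[v] = 1 - side[v]
                improved = True
    return side
```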

    First Observational Tests of Eternal Inflation: Analysis Methods and WMAP 7-Year Results

    In the picture of eternal inflation, our observable universe resides inside a single bubble nucleated from an inflating false vacuum. Many of the theories giving rise to eternal inflation predict that we have causal access to collisions with other bubble universes, providing an opportunity to confront these theories with observation. We present the results from the first observational search for the effects of bubble collisions, using cosmic microwave background data from the WMAP satellite. Our search targets a generic set of properties associated with a bubble collision spacetime, which we describe in detail. We use a modular algorithm that is designed to avoid a posteriori selection effects, automatically picking out the most promising signals, performing a search for causal boundaries, and conducting a full Bayesian parameter estimation and model selection analysis. We outline each component of this algorithm, describing its response to simulated CMB skies with and without bubble collisions. Comparing the results for simulated bubble collisions to the results from an analysis of the WMAP 7-year data, we rule out bubble collisions over a range of parameter space. Our model selection results based on WMAP 7-year data do not warrant augmenting LCDM with bubble collisions. Data from the Planck satellite can be used to more definitively test the bubble collision hypothesis. Comment: Companion to arXiv:1012.1995. 41 pages, 23 figures. v2: replaced with version accepted by PRD. Significant extensions to the Bayesian pipeline to do the full-sky non-Gaussian source detection problem (previously restricted to patches). Note that this has changed the normalization of evidence values reported previously, as full-sky priors are now employed, but the conclusions remain unchanged.
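
    The model-selection step compares Bayesian evidences for "collision" versus "no collision". As a toy illustration of that single idea only (not the paper's pipeline; the Gaussian-noise setup, the template, and every name below are our assumptions), one can marginalize a signal amplitude and form a Bayes factor:

```python
import numpy as np

def log_bayes_factor(data, template, sigma, amps):
    """ln(Z1/Z0) for 'data contains amp*template' vs. pure noise.

    Marginalizes the amplitude over a uniform grid `amps`; the constant
    Gaussian normalization terms cancel in the ratio. Illustrative only.
    """
    log_l0 = -0.5 * np.sum((data / sigma) ** 2)               # no-signal model
    log_l1 = np.array([-0.5 * np.sum(((data - a * template) / sigma) ** 2)
                       for a in amps])                        # signal model grid
    log_z1 = np.logaddexp.reduce(log_l1) - np.log(len(amps))  # prior average
    return log_z1 - log_l0

# Toy usage: a weak bump injected into unit-variance noise
rng = np.random.default_rng(0)
x = np.linspace(-5.0, 5.0, 200)
template = np.exp(-x ** 2)
data = 0.5 * template + rng.normal(0.0, 1.0, x.size)
print(log_bayes_factor(data, template, 1.0, np.linspace(0.0, 1.0, 51)))
```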

    Certified Algorithms: Worst-Case Analysis and Beyond

    In this paper, we introduce the notion of a certified algorithm. Certified algorithms provide worst-case and beyond-worst-case performance guarantees. First, a γ-certified algorithm is also a γ-approximation algorithm: it finds a γ-approximation no matter what the input is. Second, it exactly solves γ-perturbation-resilient instances (γ-perturbation-resilient instances model real-life instances). Additionally, certified algorithms have a number of other desirable properties: they solve both maximization and minimization versions of a problem (e.g. Max Cut and Min Uncut), solve weakly perturbation-resilient instances, and solve optimization problems with hard constraints. In the paper, we define certified algorithms, describe their properties, present a framework for designing certified algorithms, and provide examples of certified algorithms for Max Cut/Min Uncut, Minimum Multiway Cut, k-medians, and k-means. We also present some negative results.
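
    Spelled out as we read it (a sketch of the definition, not its precise formalization; the γ symbol here is our reconstruction of a character lost in extraction):

```latex
% Sketch: certified algorithms for instances given by nonnegative weights w.
% w' is a \gamma-perturbation of w if it agrees with w coordinate-wise up to
% a factor of \gamma:
\[
  w_e \le w'_e \le \gamma\, w_e \quad \text{for every } e .
\]
% A \gamma-certified algorithm returns a pair (w', s) such that w' is a
% \gamma-perturbation of the input w and s is an optimal solution of the
% perturbed instance w'. Optimality of s for w' immediately gives a
% \gamma-approximation for w, and exact optimality whenever the input is
% \gamma-perturbation-resilient (its optimum is invariant under all
% \gamma-perturbations):
\[
  \textsc{Alg}(w) = (w', s), \qquad s \in \arg\max_{t}\; w'(t).
\]
```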

    Reflection methods for user-friendly submodular optimization

    Recently, it has become evident that submodularity naturally captures widely occurring concepts in machine learning, signal processing, and computer vision. Consequently, there is a need for efficient optimization procedures for submodular functions, especially for minimization problems. While general submodular minimization is challenging, we propose a new method that exploits existing decomposability of submodular functions. In contrast to previous approaches, our method is neither approximate nor impractical, nor does it need any cumbersome parameter tuning. Moreover, it is easy to implement and parallelize. A key component of our method is a formulation of the discrete submodular minimization problem as a continuous best approximation problem that is solved through a sequence of reflections, and its solution can be easily thresholded to obtain an optimal discrete solution. This method solves both the continuous and discrete formulations of the problem, and therefore has applications in learning, inference, and reconstruction. In our experiments, we illustrate the benefits of our method on two image segmentation tasks. Comment: Neural Information Processing Systems (NIPS), United States (2013).
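
    The reflection scheme referred to is the classical Douglas-Rachford iteration. Below is a generic numpy sketch of that iteration for two abstract convex sets; the paper's actual sets are built from base polytopes of the submodular components and require specialized projections, which we do not reproduce, and all names here are ours:

```python
import numpy as np

def douglas_rachford(z0, proj_a, proj_b, iters=500):
    """Douglas-Rachford iteration built from reflections R_C = 2 P_C - I.

    For closed convex sets A, B with nonempty intersection, the iterate
    z <- (z + R_A(R_B(z))) / 2 converges so that P_B(z) approaches a
    point of the intersection of A and B.
    """
    z = np.asarray(z0, dtype=float)
    reflect = lambda proj, x: 2.0 * proj(x) - x
    for _ in range(iters):
        z = 0.5 * (z + reflect(proj_a, reflect(proj_b, z)))
    return proj_b(z)

# Toy usage: intersect the unit ball A with the halfspace B = {x : x[0] >= 0.5}
proj_ball = lambda x: x / max(1.0, float(np.linalg.norm(x)))

def proj_halfspace(x):
    y = np.array(x, dtype=float)
    y[0] = max(y[0], 0.5)
    return y

print(douglas_rachford([-2.0, 1.5], proj_ball, proj_halfspace))
```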

    Seven-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Are There Cosmic Microwave Background Anomalies?

    (Abridged) A simple six-parameter LCDM model provides a successful fit to WMAP data, both when the data are analyzed alone and in combination with other cosmological data. Even so, it is appropriate to search for any hints of deviations from the now-standard model of cosmology, which includes inflation, dark energy, dark matter, baryons, and neutrinos. The cosmological community has subjected the WMAP data to extensive and varied analyses. While there is widespread agreement as to the overall success of the six-parameter LCDM model, various "anomalies" have been reported relative to that model. In this paper we examine potential anomalies and present analyses and assessments of their significance. In most cases we find that claimed anomalies depend on posterior selection of some aspect or subset of the data. Compared with sky simulations based on the best-fit model, one can select for low-probability features of the WMAP data. Low-probability features are expected, but it is not usually straightforward to determine whether any particular low-probability feature is the result of a posteriori selection or of non-standard cosmology. We examine in detail the properties of the power spectrum with respect to the LCDM model. We examine several potential or previously claimed anomalies in the sky maps and power spectra, including cold spots, low quadrupole power, quadrupole-octupole alignment, hemispherical or dipole power asymmetry, and quadrupole power asymmetry. We conclude that there is no compelling evidence for deviations from the LCDM model, which is generally an acceptable statistical fit to WMAP and other cosmological data. Comment: 19 pages, 17 figures, also available with higher-resolution figures at http://lambda.gsfc.nasa.gov; accepted by ApJS; (v2) text as accepted.