
    Smoothed Efficient Algorithms and Reductions for Network Coordination Games

    Worst-case hardness results for most equilibrium computation problems have raised the need for beyond-worst-case analysis. To this end, we study the smoothed complexity of finding pure Nash equilibria in Network Coordination Games, a PLS-complete problem in the worst case. This is a potential game where the sequential-better-response algorithm is known to converge to a pure NE, albeit in exponential time. First, we prove polynomial (resp. quasi-polynomial) smoothed complexity when the underlying game graph is a complete (resp. arbitrary) graph, and every player has constantly many strategies. We note that the complete graph case is reminiscent of perturbing all parameters, a common assumption in most known smoothed analysis results. Second, we define a notion of smoothness-preserving reduction among search problems, and obtain reductions from 2-strategy network coordination games to local-max-cut, and from k-strategy games (with arbitrary k) to local-max-cut up to two flips. The former together with the recent result of [BCC18] gives an alternate O(n^8)-time smoothed algorithm for the 2-strategy case. This notion of reduction allows for the extension of smoothed efficient algorithms from one problem to another. For the first set of results, we develop techniques to bound the probability that an (adversarial) better-response sequence makes slow improvements on the potential. Our approach combines and generalizes the local-max-cut approaches of [ER14,ABPW17] to handle the multi-strategy case: it requires a careful definition of the matrix which captures the increase in potential, a tighter union bound on adversarial sequences, and balancing it with good enough rank bounds. We believe that the approach and notions developed herein could be of interest in addressing the smoothed complexity of other potential and/or congestion games.
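
    As a concrete illustration of the dynamics analyzed above, the following sketch runs sequential best-response (one particular better-response rule) in a network coordination game. The per-edge payoff-matrix representation and all names are illustrative choices, not taken from the paper.

```python
import numpy as np

def best_response_dynamics(n, k, edge_payoffs, max_passes=100_000, rng=None):
    """Sequential best-response for a network coordination game (a sketch).

    edge_payoffs: dict mapping an edge (u, v) to a k x k matrix A, where both
    endpoints receive A[s_u, s_v].  The sum of edge payoffs is an exact
    potential, so every accepted move strictly increases it and the dynamics
    stop at a pure Nash equilibrium (possibly after exponentially many moves
    on worst-case instances).
    """
    rng = rng or np.random.default_rng(0)
    s = rng.integers(k, size=n)                      # initial strategy profile
    neighbors = {u: [] for u in range(n)}
    for (u, v) in edge_payoffs:
        neighbors[u].append(v)
        neighbors[v].append(u)

    def payoff(u, strat):
        total = 0.0
        for v in neighbors[u]:
            A = edge_payoffs[(u, v)] if (u, v) in edge_payoffs else edge_payoffs[(v, u)].T
            total += A[strat, s[v]]
        return total

    for _ in range(max_passes):
        improved = False
        for u in range(n):
            current = payoff(u, s[u])
            best = max(range(k), key=lambda a: payoff(u, a))
            if payoff(u, best) > current + 1e-12:    # strict improvement only
                s[u] = best
                improved = True
        if not improved:
            return s                                 # no player can improve: pure NE
    return s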

    Expected Fitness Gains of Randomized Search Heuristics for the Traveling Salesperson Problem.

    Randomized search heuristics are frequently applied to NP-hard combinatorial optimization problems. The runtime analysis of randomized search heuristics has contributed tremendously to their theoretical understanding. Recently, randomized search heuristics have been examined regarding their achievable progress within a fixed time budget. We follow this approach and present a fixed budget analysis for an NP-hard combinatorial optimization problem. We consider the well-known Traveling Salesperson Problem (TSP) and analyze the fitness increase that randomized search heuristics are able to achieve within a given fixed time budget. In particular, we analyze Manhattan and Euclidean TSP instances and Randomized Local Search (RLS), (1 + 1) EA and (1 + λ) EA algorithms for the TSP in a smoothed complexity setting, and derive lower bounds on the expected fitness gain for a specified number of generations.
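
    A minimal sketch of the kind of fixed-budget experiment described above: Randomized Local Search on a Euclidean TSP instance using 2-opt moves, run for a fixed number of iterations; the data layout and parameter names are illustrative. A (1 + 1) EA variant would typically apply a random (e.g. Poisson-distributed) number of such moves per iteration instead of exactly one.

```python
import numpy as np

def rls_2opt_fixed_budget(points, budget, rng=None):
    """Randomized Local Search for Euclidean TSP under a fixed budget (a sketch).

    Starts from a random tour and, for `budget` iterations, proposes a random
    2-opt move (segment reversal) and keeps it only if the tour gets shorter.
    The fitness gain is the difference between the initial and final lengths.
    """
    rng = rng or np.random.default_rng(0)
    n = len(points)
    tour = rng.permutation(n)

    def length(t):
        return sum(np.linalg.norm(points[t[i]] - points[t[(i + 1) % n]]) for i in range(n))

    best_len = length(tour)
    for _ in range(budget):
        i, j = sorted(rng.choice(n, size=2, replace=False))
        candidate = tour.copy()
        candidate[i:j + 1] = candidate[i:j + 1][::-1]   # 2-opt: reverse a segment
        cand_len = length(candidate)
        if cand_len < best_len:                         # accept improvements only (RLS)
            tour, best_len = candidate, cand_len
    return tour, best_len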

    Multiple Testing of Local Maxima for Detection of Peaks in Random Fields

    A topological multiple testing scheme is presented for detecting peaks in images under stationary ergodic Gaussian noise, where tests are performed at local maxima of the smoothed observed signals. The procedure generalizes the one-dimensional scheme of Schwartzman et al. (2011) to Euclidean domains of arbitrary dimension. Two methods are developed according to two different ways of computing p-values: (i) using the exact distribution of the height of local maxima (Cheng and Schwartzman, 2014), available explicitly when the noise field is isotropic; (ii) using an approximation to the overshoot distribution of local maxima above a pre-threshold (Cheng and Schwartzman, 2014), applicable when the exact distribution is unknown, such as when the stationary noise field is non-isotropic. The algorithms, combined with the Benjamini-Hochberg procedure for thresholding p-values, provide asymptotic strong control of the False Discovery Rate (FDR) and power consistency, with specific rates, as the search space and signal strength get large. The optimal smoothing bandwidth and optimal pre-threshold are obtained to achieve maximum power. Simulations show that FDR levels are maintained in non-asymptotic conditions. The methods are illustrated in a nanoscopy image analysis problem of detecting fluorescent molecules against the image background.
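
    The pipeline described above can be sketched as follows: smooth the image with a Gaussian kernel, take the local maxima above a pre-threshold as candidate peaks, convert their heights to p-values, and threshold with Benjamini-Hochberg. The map `pvalue_of_height` is left as an input because it depends on the exact or approximate height distributions derived in the paper; everything else (names, neighbourhood size) is an illustrative assumption.

```python
import numpy as np
from scipy import ndimage

def detect_peaks(image, bandwidth, pre_threshold, pvalue_of_height, alpha=0.05):
    """Topological multiple-testing pipeline for peak detection (a rough sketch).

    `pvalue_of_height` is assumed to map the height of a local maximum of the
    smoothed field to a p-value; in the paper this comes from the exact or
    approximate distribution of local-maximum heights, not reproduced here.
    """
    smoothed = ndimage.gaussian_filter(image, sigma=bandwidth)   # kernel smoothing
    # Local maxima: pixels equal to the max of their 3x3x... neighbourhood and above the pre-threshold.
    local_max = (smoothed == ndimage.maximum_filter(smoothed, size=3)) & (smoothed > pre_threshold)
    coords = np.argwhere(local_max)
    heights = smoothed[local_max]
    pvals = np.array([pvalue_of_height(h) for h in heights])

    # Benjamini-Hochberg step-up procedure on the p-values at the local maxima.
    order = np.argsort(pvals)
    m = len(pvals)
    cutoffs = alpha * np.arange(1, m + 1) / max(m, 1)
    passed = np.nonzero(pvals[order] <= cutoffs)[0]
    n_detected = passed.max() + 1 if passed.size else 0
    return coords[order[:n_detected]], pvals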

    Towards explaining the speed of k-means

    The k-means method is a popular algorithm for clustering, known for its speed in practice. This stands in contrast to its exponential worst-case running time. To explain the speed of the k-means method, a smoothed analysis has been conducted. We sketch this smoothed analysis and a generalization to Bregman divergences.
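
    For reference, a minimal sketch of the k-means (Lloyd's) method whose practical speed the smoothed analysis tries to explain; the initialization and tie-breaking choices here are illustrative.

```python
import numpy as np

def kmeans(X, k, iters=100, rng=None):
    """Lloyd's k-means method (a minimal sketch).

    Alternates between assigning each point to its nearest centre and moving
    each centre to the mean of its assigned points.  Each step never increases
    the clustering cost; the smoothed analysis referred to above bounds how
    many such steps are needed on slightly perturbed inputs.
    """
    rng = rng or np.random.default_rng(0)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assignment step: nearest centre for every point.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: move each centre to the mean of its cluster (keep it if the cluster is empty).
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels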

    Newton based Stochastic Optimization using q-Gaussian Smoothed Functional Algorithms

    We present the first q-Gaussian smoothed functional (SF) estimator of the Hessian and the first Newton-based stochastic optimization algorithm that estimates both the Hessian and the gradient of the objective function using q-Gaussian perturbations. Our algorithm requires only two system simulations (regardless of the parameter dimension) and estimates both the gradient and the Hessian at each update epoch using these. We also present a proof of convergence of the proposed algorithm. In a related recent work (Ghoshdastidar et al., 2013), we presented gradient SF algorithms based on q-Gaussian perturbations. Our work extends prior work on smoothed functional algorithms by generalizing the class of perturbation distributions: most distributions reported in the literature for which SF algorithms are known to work turn out to be special cases of the q-Gaussian distribution. Besides studying the convergence properties of our algorithm analytically, we also show the results of several numerical simulations on a model of a queuing network, which illustrate the significance of the proposed method. In particular, we observe that our algorithm performs better in most cases, over a wide range of q-values, in comparison to Newton SF algorithms with Gaussian (Bhatnagar, 2007) and Cauchy perturbations, as well as the gradient q-Gaussian SF algorithms (Ghoshdastidar et al., 2013).
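
    A rough sketch of a single Newton-style update built from smoothed-functional estimates. To keep it self-contained it uses standard Gaussian perturbations (the q -> 1 special case of the q-Gaussian family) rather than the paper's q-Gaussian sampler, and single-sample estimates that would in practice be averaged across iterations; the formulas are the standard Gaussian-smoothing identities, not the paper's exact estimators.

```python
import numpy as np

def sf_newton_step(f, x, beta=0.1, step=0.05, damping=1e-3, rng=None):
    """One Newton-style update from smoothed-functional estimates (a sketch).

    Uses two function evaluations per update, mirroring the two-simulation
    structure described above, and the Gaussian-smoothing identities
    grad f_beta(x) = E[z f(x + beta z)] / beta and
    hess f_beta(x) = E[(z z^T - I) f(x + beta z)] / beta^2 for standard normal z.
    """
    rng = rng or np.random.default_rng(0)
    d = len(x)
    z = rng.standard_normal(d)
    f_plus, f_minus = f(x + beta * z), f(x - beta * z)       # the two simulations
    grad = z * (f_plus - f_minus) / (2.0 * beta)
    hess = (np.outer(z, z) - np.eye(d)) * (f_plus + f_minus) / (2.0 * beta ** 2)
    hess = hess + damping * np.eye(d)                        # regularize the noisy single-sample estimate
    try:
        direction = np.linalg.solve(hess, grad)
    except np.linalg.LinAlgError:
        direction = grad                                     # fall back to a plain gradient step
    return x - step * direction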

    Templates for Convex Cone Problems with Applications to Sparse Signal Recovery

    This paper develops a general framework for solving a variety of convex cone problems that frequently arise in signal processing, machine learning, statistics, and other fields. The approach works as follows: first, determine a conic formulation of the problem; second, determine its dual; third, apply smoothing; and fourth, solve using an optimal first-order method. A merit of this approach is its flexibility: for example, all compressed sensing problems can be solved via this approach. These include models with objective functionals such as the total-variation norm, ||Wx||_1 where W is arbitrary, or a combination thereof. In addition, the paper also introduces a number of technical contributions such as a novel continuation scheme, a novel approach for controlling the step size, and some new results showing that the smoothed and unsmoothed problems are sometimes formally equivalent. Combined with our framework, these lead to novel, stable and computationally efficient algorithms. For instance, our general implementation is competitive with state-of-the-art methods for solving intensively studied problems such as the LASSO. Further, numerical experiments show that one can solve the Dantzig selector problem, for which no efficient large-scale solvers exist, in a few hundred iterations. Finally, the paper is accompanied by a software release, available at http://tfocs.stanford.edu. This software is not a single, monolithic solver; rather, it is a suite of programs and routines designed to serve as building blocks for constructing complete algorithms.
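
    To make the four-step template concrete, here is a sketch of it applied to basis pursuit (minimize ||x||_1 subject to Ax = b): add a strongly convex term (mu/2)||x - x0||^2, pass to the dual, whose gradient is available in closed form via soft-thresholding, and maximize it with a Nesterov-style accelerated method. This illustrates the general approach rather than the TFOCS implementation; parameter names and defaults are assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def smoothed_basis_pursuit(A, b, mu=1e-2, x0=None, iters=500):
    """First-order solver for smoothed basis pursuit (a sketch of the template above).

    Primal: minimize ||x||_1 + (mu/2)||x - x0||^2  subject to  Ax = b.
    The inner minimization of the Lagrangian has a closed form (soft-thresholding),
    so the dual is smooth with a ||A||^2 / mu Lipschitz gradient and can be
    maximized with an accelerated gradient method.
    """
    m, n = A.shape
    x0 = np.zeros(n) if x0 is None else x0
    L = np.linalg.norm(A, 2) ** 2 / mu              # Lipschitz constant of the dual gradient
    lam = y = np.zeros(m)
    t = 1.0
    for _ in range(iters):
        x = soft_threshold(x0 + A.T @ y / mu, 1.0 / mu)   # primal minimizer at the current dual point
        grad = b - A @ x                                   # gradient of the dual function
        lam_next = y + grad / L                            # ascent step on the dual
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = lam_next + ((t - 1.0) / t_next) * (lam_next - lam)   # Nesterov momentum
        lam, t = lam_next, t_next
    return soft_threshold(x0 + A.T @ lam / mu, 1.0 / mu)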

    Optic nerve head segmentation

    Reliable and efficient optic disk localization and segmentation are important tasks in automated retinal screening. General-purpose edge detection algorithms often fail to segment the optic disk due to fuzzy boundaries, inconsistent image contrast or missing edge features. This paper presents an algorithm for the localization and segmentation of the optic nerve head boundary in low-resolution images (about 20 μ/pixel). Optic disk localization is achieved using specialized template matching, and segmentation by a deformable contour model. The latter uses a global elliptical model and a local deformable model with variable edge-strength dependent stiffness. The algorithm is evaluated against a randomly selected database of 100 images from a diabetic screening programme. Ten images were classified as unusable; the others were of variable quality. The localization algorithm succeeded on all bar one usable image; the contour estimation algorithm was qualitatively assessed by an ophthalmologist as having Excellent-Fair performance in 83% of cases, and performs well even on blurred images.
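
    The localization stage can be illustrated with generic normalized cross-correlation template matching; the paper uses a specialized template and follows with a deformable contour model, neither of which is reproduced here, and the function and variable names below are illustrative.

```python
import cv2
import numpy as np

def localize_optic_disk(image_gray, template_gray):
    """Generic template-matching localization (an illustrative sketch, not the paper's exact method).

    Slides a bright-disk template over a grayscale (e.g. uint8) fundus image and
    returns the centre of the best normalized cross-correlation match.
    """
    result = cv2.matchTemplate(image_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    th, tw = template_gray.shape[:2]
    center = (max_loc[0] + tw // 2, max_loc[1] + th // 2)    # (x, y) of the estimated disk centre
    return center, max_val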