
    Expected Fitness Gains of Randomized Search Heuristics for the Traveling Salesperson Problem.

    Randomized search heuristics are frequently applied to NP-hard combinatorial optimization problems. The runtime analysis of randomized search heuristics has contributed tremendously to their theoretical understanding. Recently, randomized search heuristics have been examined regarding their achievable progress within a fixed time budget. We follow this approach and present a fixed-budget analysis for an NP-hard combinatorial optimization problem. We consider the well-known Traveling Salesperson Problem (TSP) and analyze the fitness increase that randomized search heuristics are able to achieve within a given fixed time budget. In particular, we analyze Manhattan and Euclidean TSP instances and the Randomized Local Search (RLS), (1+1) EA, and (1+λ) EA algorithms for the TSP in a smoothed complexity setting, and derive lower bounds on the expected fitness gain for a specified number of generations.
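
    As a concrete illustration of the kind of heuristic analyzed here, the sketch below runs RLS with 2-opt mutations on a random Euclidean instance for a fixed iteration budget. The instance generator, budget, and acceptance rule are illustrative assumptions, not taken from the paper.

```python
import math
import random

def tour_length(tour, pts):
    """Total Euclidean length of a closed tour."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt_move(tour):
    """Reverse a random segment of the tour (classic 2-opt mutation)."""
    i, j = sorted(random.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def rls_tsp(pts, budget):
    """Randomized Local Search: accept a 2-opt move iff the tour does not worsen."""
    tour = list(range(len(pts)))
    random.shuffle(tour)
    best = tour_length(tour, pts)
    for _ in range(budget):
        cand = two_opt_move(tour)
        cand_len = tour_length(cand, pts)
        if cand_len <= best:
            tour, best = cand, cand_len
    return tour, best

# Random Euclidean instance in the unit square (illustrative only).
pts = [(random.random(), random.random()) for _ in range(50)]
tour, length = rls_tsp(pts, budget=10_000)
print(f"tour length after fixed budget: {length:.3f}")
```

    A common (1+1) EA variant applies a Poisson-distributed number of such moves per iteration instead of exactly one; the fixed-budget question is how much `best` is expected to drop as a function of `budget`.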

    Online Optimization of Smoothed Piecewise Constant Functions

    We study online optimization of smoothed piecewise constant functions over the domain [0, 1). This is motivated by the problem of adaptively picking parameters of learning algorithms, as in the framework recently introduced by Gupta and Roughgarden (2016). The majority of the machine learning literature has focused on Lipschitz-continuous functions or functions with bounded gradients. This is with good reason: any learning algorithm suffers linear regret even against adversarially chosen piecewise constant functions, arguably the simplest of non-Lipschitz-continuous functions. The smoothed setting we consider is inspired by the seminal work of Spielman and Teng (2004) and the recent work of Gupta and Roughgarden: the sequence of functions may be chosen by an adversary, but with some uncertainty in the location of the discontinuities. We give algorithms that achieve sublinear regret in both the full-information and bandit settings.
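
    A minimal full-information sketch under assumed parameters: discretize [0, 1) into a uniform grid and run the standard exponential-weights (Hedge) forecaster over the grid points, with the smoothing mimicked by randomly perturbing the jump location of each loss. The grid size, learning rate, and loss sequence are illustrative, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
K, T = 1000, 5000                        # grid resolution and horizon (assumed)
eta = np.sqrt(8 * np.log(K) / T)         # textbook Hedge learning rate
grid = (np.arange(K) + 0.5) / K          # midpoints of a uniform grid on [0, 1)
weights = np.ones(K)

cum_loss = np.zeros(K)                   # per-point losses, for hindsight regret
alg_loss = 0.0
for t in range(T):
    # Piecewise constant loss whose discontinuity is randomly perturbed,
    # mimicking the smoothed adversary.
    b = np.clip(0.5 + 0.1 * rng.standard_normal(), 0.0, 1.0)
    loss = (grid < b).astype(float)      # loss 1 left of the jump, 0 right of it

    p = weights / weights.sum()
    alg_loss += loss @ p                 # expected loss of the randomized play
    cum_loss += loss
    weights *= np.exp(-eta * loss)       # exponential-weights update

print(f"regret vs. best fixed grid point: {alg_loss - cum_loss.min():.1f}")
```

    Against a worst-case (unsmoothed) adversary no fixed grid suffices; the smoothing is what lets a discretization of this kind track the best point in hindsight.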

    Smoothed Analysis of the Condition Numbers and Growth Factors of Matrices

    Let $\bar{A}$ be any matrix and let $A$ be a slight random perturbation of $\bar{A}$. We prove that it is unlikely that $A$ has a large condition number. Using this result, we prove that it is unlikely that $A$ has a large growth factor under Gaussian elimination without pivoting. By combining these results, we bound the smoothed precision needed by Gaussian elimination without pivoting. Our results improve on the average-case analysis of Gaussian elimination without pivoting performed by Yeung and Chan (SIAM J. Matrix Anal. Appl., 1997). Comment: corrected some minor mistakes.
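
    The phenomenon is easy to observe numerically. The sketch below perturbs a rank-one (hence singular) matrix with small Gaussian noise, then measures the condition number and the growth factor under Gaussian elimination without pivoting; the matrix size and noise level are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
A0 = np.ones((n, n))                 # rank-one, singular "original" matrix
sigma = 1e-2                         # perturbation scale (assumed)
A = A0 + sigma * rng.standard_normal((n, n))

def growth_factor_no_pivoting(A):
    """Growth factor of Gaussian elimination without pivoting: the largest
    entry appearing in any intermediate Schur complement, normalized by
    the largest entry of A."""
    U = A.astype(float).copy()
    peak = np.abs(U).max()
    for k in range(U.shape[0] - 1):
        # Eliminate below U[k, k] with no row exchanges; the pivot is
        # nonzero with probability 1 under the Gaussian perturbation.
        U[k + 1:, k + 1:] -= np.outer(U[k + 1:, k] / U[k, k], U[k, k + 1:])
        U[k + 1:, k] = 0.0
        peak = max(peak, np.abs(U).max())
    return peak / np.abs(A).max()

print("condition number:        ", np.linalg.cond(A))
print("growth factor, no pivots:", growth_factor_no_pivoting(A))
```

    On the unperturbed $\bar{A}$ the first pivot step already divides by a pivot that elimination drives to zero; the perturbed $A$ typically shows a moderate condition number and growth factor, which is the smoothed-analysis claim in miniature.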

    An algebraic multigrid method for Q2-Q1 mixed discretizations of the Navier-Stokes equations

    Algebraic multigrid (AMG) preconditioners are considered for discretized systems of partial differential equations (PDEs) where unknowns associated with different physical quantities are not necessarily co-located at mesh points. Specifically, we investigate a Q2-Q1 mixed finite element discretization of the incompressible Navier-Stokes equations where the number of velocity nodes is much greater than the number of pressure nodes. Consequently, some velocity degrees-of-freedom (dofs) are defined at spatial locations where there are no corresponding pressure dofs. Thus, AMG approaches that leverage co-located structure are not applicable. This paper instead proposes an automatic AMG coarsening that mimics certain pressure/velocity dof relationships of the Q2-Q1 discretization. The main idea is to first automatically define coarse pressures in a somewhat standard AMG fashion and then to carefully (but automatically) choose coarse velocity unknowns so that the spatial location relationship between pressure and velocity dofs resembles that on the finest grid. To define coefficients within the inter-grid transfers, an energy-minimization AMG (EMIN-AMG) is utilized. EMIN-AMG is not tied to specific coarsening schemes and grid-transfer sparsity patterns, and so it is applicable to the proposed coarsening. Numerical results highlighting solver performance are given on Stokes and incompressible Navier-Stokes problems. Comment: Submitted to a journal.
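
    A toy geometric stand-in for the coarsening idea, assuming only arrays of dof coordinates: once coarse pressures are chosen, pick for each one a nearby velocity dof so that the coarse-level pressure/velocity placement echoes the fine-grid Q2-Q1 layout. The paper's selection is algebraic; this nearest-neighbor rule is purely illustrative.

```python
import numpy as np

def coarse_velocity_dofs(vel_xy, coarse_p_xy):
    """For each coarse pressure location, select the nearest velocity dof,
    so every coarse pressure keeps a co-located velocity partner as on the
    fine Q2-Q1 grid. Geometric stand-in for the paper's algebraic rule."""
    chosen = set()
    for p in coarse_p_xy:
        dists = np.linalg.norm(vel_xy - p, axis=1)
        chosen.add(int(np.argmin(dists)))
    return sorted(chosen)

# Illustrative fine grid: velocity nodes on a 5x5 lattice, with a subset of
# lattice points standing in for the coarse pressures an AMG pass selected.
vx, vy = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
vel_xy = np.column_stack([vx.ravel(), vy.ravel()])
coarse_p_xy = vel_xy[::2][:4]
print(coarse_velocity_dofs(vel_xy, coarse_p_xy))
```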

    Linear Programming for Large-Scale Markov Decision Problems

    We consider the problem of controlling a Markov decision process (MDP) with a large state space, so as to minimize average cost. Since it is intractable to compete with the optimal policy for large-scale problems, we pursue the more modest goal of competing with a low-dimensional family of policies. We use the dual linear programming formulation of the MDP average-cost problem, in which the variable is a stationary distribution over state-action pairs, and we consider a neighborhood of a low-dimensional subset of the set of stationary distributions (defined in terms of state-action features) as the comparison class. We propose two techniques, one based on stochastic convex optimization and one based on constraint sampling. In both cases, we give bounds showing that the performance of our algorithms approaches the best achievable by any policy in the comparison class. Most importantly, these results depend on the size of the comparison class but not on the size of the state space. Preliminary experiments show the effectiveness of the proposed algorithms in a queueing application. Comment: 27 pages, 3 figures.
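
    The dual LP itself is easy to write down for a small MDP. The sketch below solves it exactly with scipy on a random instance; the state/action counts and random model are illustrative, and the paper's point is precisely to avoid this exact solve when the state space is large.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
S, A = 5, 3                                   # tiny MDP (illustrative sizes)
P = rng.dirichlet(np.ones(S), size=(S, A))    # P[s, a, s'] transition kernel
c = rng.random((S, A))                        # per-step cost c(s, a)

# Variable: mu[s, a] >= 0, flattened to length S*A.
# Stationarity, one row per s': sum_a mu[s', a] = sum_{s, a} P[s, a, s'] mu[s, a].
A_eq = np.zeros((S + 1, S * A))
for sp in range(S):
    for s in range(S):
        for a in range(A):
            A_eq[sp, s * A + a] = float(s == sp) - P[s, a, sp]
A_eq[S, :] = 1.0                              # mu must be a distribution
b_eq = np.zeros(S + 1)
b_eq[S] = 1.0

res = linprog(c.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
mu = res.x.reshape(S, A)
# Conditioning mu on the state recovers a policy (states with mu ~ 0 need care).
policy = mu / mu.sum(axis=1, keepdims=True)
print("optimal average cost:", res.fun)
```

    With S*A variables and S + 1 equality constraints, this exact LP scales with the state space; restricting mu to a feature-defined low-dimensional subset is what replaces that dependence with the size of the comparison class.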