
    An asymptotical study of combinatorial optimization problems by means of statistical mechanics

    The analogy between combinatorial optimization and statistical mechanics has proven to be a fruitful object of study. Simulated annealing, a metaheuristic for combinatorial optimization problems, is based on this analogy. In this paper we show how a statistical mechanics formalism can be used to analyze the asymptotic behavior of combinatorial optimization problems with a sum objective function, and we provide an alternative proof of the following result: under a certain combinatorial condition and some natural probabilistic assumptions on the coefficients of the problem, the ratio between the value of the optimal solution and that of an arbitrary feasible solution tends to one almost surely as the size of the problem tends to infinity, so that the optimization problem becomes trivial in some sense. Although this result can also be proven by purely probabilistic techniques, the statistical mechanics approach makes clear why the assumed combinatorial condition is essential for this type of asymptotic behavior.
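
    As a concrete illustration of the analogy, here is a minimal simulated annealing sketch; it is not the paper's analysis, just the standard Metropolis acceptance rule with geometric cooling, applied to a toy assignment problem with i.i.d. random costs as an example of a sum objective function. All names and parameter values are illustrative.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, alpha=0.999, steps=20000):
    """Minimize `cost` by the Metropolis rule with geometric cooling."""
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x)
        fy = cost(y)
        # Accept downhill moves always; uphill moves with Boltzmann probability.
        if fy <= fx or random.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= alpha  # geometric cooling schedule
    return best, fbest

# Toy sum-objective instance: an assignment problem with i.i.d. uniform costs.
n = 30
C = [[random.random() for _ in range(n)] for _ in range(n)]
cost = lambda p: sum(C[i][p[i]] for i in range(n))

def swap_neighbor(p):
    q = list(p)
    i, j = random.sample(range(n), 2)
    q[i], q[j] = q[j], q[i]  # swap two assignments
    return q

perm, value = simulated_annealing(cost, swap_neighbor, list(range(n)))
print(value)
```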

    One Class of Stochastic Local Search Algorithms

    Accelerated probabilistic modeling algorithms implementing the stochastic local search (SLS) technique are considered. A general algorithm scheme and a specific combinatorial optimization method based on the "golden section" rule (the GS-method) are given. Convergence rates are derived using Markov chains, and an overview of current combinatorial optimization techniques is presented.
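
    The GS-method itself is not spelled out in the abstract, so the sketch below is only a guess at the flavor of a stochastic local search using the golden-section ratio: random bit flips that keep improvements, with the flip radius shrunk by the golden ratio after failed moves. Everything here is an assumption for illustration, not the authors' algorithm.

```python
import random

PHI = (5 ** 0.5 - 1) / 2  # golden-section ratio, about 0.618 (illustrative use)

def sls_minimize(cost, n_bits, iters=5000):
    """Flip a random set of bits; keep improvements; shrink the flip
    radius by the golden ratio whenever a move fails."""
    x = [random.randint(0, 1) for _ in range(n_bits)]
    fx = cost(x)
    radius = float(n_bits)
    for _ in range(iters):
        y = x[:]
        for i in random.sample(range(n_bits), max(1, int(radius))):
            y[i] ^= 1  # flip bit i
        fy = cost(y)
        if fy <= fx:
            x, fx = y, fy
        else:
            radius = max(1.0, radius * PHI)  # golden-section shrink
    return x, fx

# Toy objective: weighted ones count, minimized by the all-zeros string.
weights = [random.random() for _ in range(40)]
sol, val = sls_minimize(lambda b: sum(w * bit for w, bit in zip(weights, b)), 40)
print(val)
```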

    The Probabilistic Minimum Spanning Tree, Part II: Probabilistic Analysis and Asymptotic Results

    In this paper, which is a sequel to [3], we perform probabilistic analysis under the random Euclidean and the random length models of the probabilistic minimum spanning tree (PMST) problem and of two re-optimization strategies, in which we find the MST or the Steiner tree, respectively, among the points that are present in a particular instance. Under the random Euclidean model we prove that, with probability 1, as the number of points goes to infinity, the expected length of the PMST is the same as the expectation of the MST re-optimization strategy and within a constant of the Steiner re-optimization strategy. In the random length model, using a result of Frieze [6], we prove that with probability 1 the expected length of the PMST is asymptotically smaller than the expectation of the MST re-optimization strategy. These results add evidence that a priori strategies may offer a useful and practical method for resolving combinatorial optimization problems on modified instances. Key words: probabilistic analysis, combinatorial optimization, minimum spanning tree, Steiner tree.
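
    The MST re-optimization strategy is straightforward to simulate. The sketch below (not the paper's PMST construction) estimates its expected length under the random Euclidean model by Monte Carlo: each point is present independently with probability p, and the MST of the realized subset is recomputed with Prim's algorithm. The presence probability and sample count are illustrative.

```python
import math
import random

def mst_length(points):
    """Total MST length of the complete Euclidean graph (Prim, O(n^2))."""
    n = len(points)
    if n < 2:
        return 0.0
    in_tree = {0}
    d = [math.dist(points[0], p) for p in points]
    total = 0.0
    for _ in range(n - 1):
        j = min((i for i in range(n) if i not in in_tree), key=lambda i: d[i])
        total += d[j]
        in_tree.add(j)
        for i in range(n):
            if i not in in_tree:
                d[i] = min(d[i], math.dist(points[j], points[i]))
    return total

def expected_reopt_mst(points, p=0.7, samples=200):
    """Monte Carlo estimate of the MST re-optimization strategy's expected
    length: each point is present independently with probability p."""
    est = 0.0
    for _ in range(samples):
        present = [pt for pt in points if random.random() < p]
        est += mst_length(present)
    return est / samples

pts = [(random.random(), random.random()) for _ in range(60)]
print(expected_reopt_mst(pts))
```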

    Efficient inference in Bayes networks as a combinatorial optimization problem

    A number of exact algorithms have been developed in recent years to perform probabilistic inference in Bayesian belief networks. The techniques used in these algorithms are closely tied to network structure, and some of them are not easy to understand and implement. We consider the problem from the combinatorial optimization point of view and argue that efficient probabilistic inference in a belief network is a problem of finding an optimal factoring given a set of probability distributions. From this viewpoint, previously developed algorithms can be seen as alternative factoring strategies. In this paper we define a combinatorial optimization problem, the optimal factoring problem, and discuss its application to belief networks. We show that optimal factoring provides insight into the key elements of efficient probabilistic inference, and we demonstrate simple, easily implemented algorithms with excellent performance.
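
    The factoring view corresponds closely to variable elimination, where the order in which variables are summed out of the product of factors is exactly the factoring being chosen. Below is a minimal sketch on a toy binary chain network A -> B -> C; the factor representation and the elimination order are illustrative, not the paper's algorithms.

```python
from itertools import product

def eliminate(factors, var):
    """Multiply all factors that mention `var`, then sum `var` out."""
    touching = [f for f in factors if var in f["vars"]]
    rest = [f for f in factors if var not in f["vars"]]
    new_vars = sorted({v for f in touching for v in f["vars"]} - {var})
    table = {}
    for assign in product([0, 1], repeat=len(new_vars) + 1):
        env = dict(zip(new_vars + [var], assign))
        p = 1.0
        for f in touching:
            p *= f["table"][tuple(env[v] for v in f["vars"])]
        table[assign[:-1]] = table.get(assign[:-1], 0.0) + p
    return rest + [{"vars": new_vars, "table": table}]

# Chain A -> B -> C with binary variables: factors P(A), P(B|A), P(C|B).
fa = {"vars": ["A"], "table": {(0,): 0.6, (1,): 0.4}}
fb = {"vars": ["A", "B"], "table": {(0, 0): 0.9, (0, 1): 0.1,
                                    (1, 0): 0.2, (1, 1): 0.8}}
fc = {"vars": ["B", "C"], "table": {(0, 0): 0.7, (0, 1): 0.3,
                                    (1, 0): 0.5, (1, 1): 0.5}}

# The elimination order is the "factoring"; its choice drives the cost.
factors = [fa, fb, fc]
for v in ["A", "B"]:
    factors = eliminate(factors, v)
print(factors[0]["table"])  # marginal P(C): {(0,): 0.624, (1,): 0.376}
```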

    Combinatorial Bayesian Optimization with Random Mapping Functions to Convex Polytope

    Bayesian optimization is a popular method for the global optimization of an expensive-to-evaluate black-box function. It relies on a probabilistic surrogate model of the objective function, upon which an acquisition function is built to determine where to evaluate the objective function next. In general, Bayesian optimization with Gaussian process regression operates on a continuous space. When input variables are categorical or discrete, extra care is needed. A common approach is to use a one-hot encoded or Boolean representation for categorical variables, which can lead to a combinatorial explosion. In this paper we present a method for Bayesian optimization in a combinatorial space that can operate well even when that space is large. The main idea is to use a random mapping which embeds the combinatorial space into a convex polytope in a continuous space, on which all essential processing is performed to determine a solution to the black-box optimization in the combinatorial space. We describe our combinatorial Bayesian optimization algorithm and present its regret analysis. Numerical experiments demonstrate that our method outperforms existing methods.
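
    In the spirit of the random-mapping idea (though not the authors' algorithm), the sketch below links bit strings to a continuous box, itself a convex polytope, via a random linear map with signed rounding, and runs a generic Gaussian-process Bayesian optimization loop with a UCB acquisition in the embedded space. The mapping form, acquisition rule, toy objective, and all parameters are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
n_bits, d = 20, 6                    # combinatorial space {0,1}^20, embedding dim 6
A = rng.normal(size=(n_bits, d))     # random mapping (assumed form)

def decode(z):
    """Map a continuous point z in [-1, 1]^d to a bit string by signed rounding."""
    return (A @ z > 0).astype(int)

def objective(bits):
    """Toy black box: reward matching the alternating pattern 0,1,0,1,..."""
    return -float(np.sum((bits - (np.arange(n_bits) % 2)) ** 2))

Z = rng.uniform(-1, 1, size=(5, d))              # initial design in the polytope
y = np.array([objective(decode(z)) for z in Z])
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(25):
    gp.fit(Z, y)
    cand = rng.uniform(-1, 1, size=(512, d))     # random continuous candidates
    mu, sd = gp.predict(cand, return_std=True)
    z_next = cand[np.argmax(mu + 2.0 * sd)]      # UCB acquisition
    Z = np.vstack([Z, z_next])
    y = np.append(y, objective(decode(z_next)))
print(y.max())                                   # best combinatorial value found
```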