
    The Network Improvement Problem for Equilibrium Routing

    In routing games, agents pick their routes through a network to minimize their own delay. A primary concern for the network designer in routing games is the average agent delay at equilibrium. A number of methods to control this average delay have received substantial attention, including network tolls, Stackelberg routing, and edge removal. A related approach with arguably greater practical relevance is that of making investments in improvements to the edges of the network, so that, for a given investment budget, the average delay at equilibrium in the improved network is minimized. This problem has received considerable attention in the transportation research literature, and a number of different algorithms have been studied. To our knowledge, none of this work gives guarantees on the output quality of any polynomial-time algorithm. We study a model for this problem introduced in the transportation research literature, and present both hardness results and algorithms that obtain nearly optimal performance guarantees.
    - We first show that a simple algorithm obtains good approximation guarantees for the problem. Despite its simplicity, we show that for affine delays the approximation ratio of 4/3 obtained by the algorithm cannot be improved.
    - To obtain better results, we then consider restricted topologies. For graphs consisting of parallel paths with affine delay functions we give an optimal algorithm. However, for graphs that consist of a series of parallel links, we show the problem is weakly NP-hard.
    - Finally, we consider the problem in series-parallel graphs, and give an FPTAS for this case.
    Our work thus formalizes the intuition held by transportation researchers that the network improvement problem is hard, and presents topology-dependent algorithms that have provably tight approximation guarantees.
    Comment: 27 pages (including abstract), 3 figures
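    The 4/3 figure for affine delays is the classical price-of-anarchy bound, which can be seen in Pigou's two-link example. A minimal sketch (not from the paper; the network and delay functions here are the textbook example, not the authors' model):

    ```python
    # Pigou's example: unit demand over two parallel links,
    # link 1 with affine delay l1(x) = x, link 2 with constant delay l2(x) = 1.

    def avg_delay(x):
        """Average delay when a fraction x uses link 1 and 1 - x uses link 2."""
        return x * x + (1 - x) * 1.0

    # Wardrop equilibrium: each agent picks the cheaper link, so all flow
    # ends up on link 1 (l1(1) = 1 = l2), giving average delay 1.
    eq_delay = avg_delay(1.0)

    # Social optimum: minimize x^2 + (1 - x); the derivative 2x - 1 = 0 gives x = 1/2.
    opt_delay = avg_delay(0.5)

    print(eq_delay / opt_delay)  # 4/3, the tight bound for affine delays
    ```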

    LineWalker: Line Search for Black Box Derivative-Free Optimization and Surrogate Model Construction

    This paper describes a simple but effective sampling method for optimizing and learning a discrete approximation (or surrogate) of a multi-dimensional function along a one-dimensional line segment of interest. The method does not rely on derivative information, and the function to be learned can be a computationally expensive "black box" function that must be queried via simulation or other means. It is assumed that the underlying function is noise-free and smooth, although the algorithm can still be effective when the underlying function is piecewise smooth. The method constructs a smooth surrogate on a set of equally-spaced grid points by evaluating the true function at a sparse set of judiciously chosen grid points. At each iteration, the surrogate's non-tabu local minima and maxima are identified as candidates for sampling. Tabu search constructs are also used to promote diversification. If no non-tabu extrema are identified, a simple exploration step is taken by sampling the midpoint of the largest unexplored interval. The algorithm continues until a user-defined function evaluation limit is reached. Numerous examples are shown to illustrate the algorithm's efficacy and superiority relative to state-of-the-art methods, including Bayesian optimization and NOMAD, on primarily nonconvex test functions.
    Comment: 58 pages, 7 main figures, 29 total figures
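    The exploration step described above has a compact form. A minimal sketch, assuming only that evaluated points and the segment endpoints are known (function and variable names here are illustrative, not the authors' code):

    ```python
    # Fallback exploration step: when no non-tabu surrogate extrema remain,
    # sample the midpoint of the largest gap between already-evaluated points.

    def exploration_point(sampled_xs, lo=0.0, hi=1.0):
        """Return the midpoint of the largest unexplored interval in [lo, hi]."""
        pts = sorted(set(sampled_xs) | {lo, hi})
        # Pair each gap width with its midpoint; max() picks the widest gap.
        gaps = [(b - a, (a + b) / 2.0) for a, b in zip(pts, pts[1:])]
        return max(gaps)[1]

    print(exploration_point([0.0, 0.25, 1.0]))  # largest gap is (0.25, 1.0) -> 0.625
    ```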

    An Entropy Search Portfolio for Bayesian Optimization

    Bayesian optimization is a sample-efficient method for black-box global optimization. However, the performance of a Bayesian optimization method very much depends on its exploration strategy, i.e., the choice of acquisition function, and it is not clear a priori which choice will result in superior performance. While portfolio methods provide an effective, principled way of combining a collection of acquisition functions, they are often based on measures of past performance which can be misleading. To address this issue, we introduce the Entropy Search Portfolio (ESP): a novel approach to portfolio construction which is motivated by information-theoretic considerations. We show that ESP outperforms existing portfolio methods on several real and synthetic problems, including geostatistical datasets and simulated control tasks. We not only show that ESP is able to offer performance as good as the best, but unknown, acquisition function; surprisingly, it often gives better performance. Finally, over a wide range of conditions we find that ESP is robust to the inclusion of poor acquisition functions.
    Comment: 10 pages, 5 figures

    Bayesian Optimization for Probabilistic Programs

    We present the first general-purpose framework for marginal maximum a posteriori estimation of probabilistic program variables. By using a series of code transformations, the evidence of any probabilistic program, and therefore of any graphical model, can be optimized with respect to an arbitrary subset of its sampled variables. To carry out this optimization, we develop the first Bayesian optimization package to directly exploit the source code of its target, leading to innovations in problem-independent hyperpriors, unbounded optimization, and implicit constraint satisfaction, delivering significant performance improvements over prominent existing packages. We present applications of our method to a number of tasks including engineering design and parameter optimization.

    Continuous multi-task Bayesian optimisation with correlation

    This paper considers the problem of simultaneously identifying the optima for a (continuous or discrete) set of correlated tasks, where the performance of a particular input parameter on a particular task can only be estimated from (potentially noisy) samples. This has many applications, for example, identifying a stochastic algorithm's optimal parameter settings for various tasks described by continuous feature values. We adapt the framework of Bayesian Optimisation to this problem. We propose a general multi-task optimisation framework and two myopic sampling procedures that determine task and parameter values for sampling, in order to efficiently find the best parameter setting for all tasks simultaneously. We show experimentally that our methods are much more efficient than collecting information randomly, and also more efficient than two other Bayesian multi-task optimisation algorithms from the literature.

    The Price-Level Computation Method

    It has been submitted that the very large number of different traditional formulae for determining price indices associated with a pair of periods, together with the longstanding question of which one to choose, should all be abandoned. In the method proposed instead, price levels associated with the periods are first all computed together, subject to a consistency condition on the data, and price indices that are jointly true are then determined from their ratios. An approximation method can be applied in the case of inconsistency. Here is an account of the mathematics of the method.
    Keywords: inflation, index-number problem, non-parametric, price index, price level, revealed preference
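    The final step, taking indices as ratios of jointly computed price levels, guarantees transitivity by construction. A toy illustration (the numbers are made up; computing the levels themselves, which the paper addresses, is not shown):

    ```python
    # Once price levels P_t have been computed jointly for all periods,
    # the index between any two periods is the ratio of their levels.
    levels = {"2019": 100.0, "2020": 104.0, "2021": 110.2}

    def index(a, b):
        """Price index of period b relative to period a."""
        return levels[b] / levels[a]

    # Transitivity holds automatically: I(a, c) == I(a, b) * I(b, c),
    # which no single traditional two-period formula guarantees.
    chained = index("2019", "2020") * index("2020", "2021")
    direct = index("2019", "2021")
    print(round(direct, 3), abs(direct - chained) < 1e-12)
    ```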

    The Computation of Optimum Linear Taxation

    The equitable sharing of the benefits arising from planned development is a subject of lively contemporary debate. One of the tasks being carried out by the System and Decision Sciences Area of the International Institute for Applied Systems Analysis (IIASA) concerns the treatment of planning and redistribution problems in ways that can provide some guidance to decision makers in the formulation of economic policy. This report examines the first part of a study undertaken to assess the redistributive leverage provided by different instruments of planning. It is devoted specifically to the analysis and computation of optimal redistributive policies in small, general equilibrium models of economic planning.

    Dealing with asynchronicity in parallel Gaussian Process based global optimization

    During the last decade, Kriging-based sequential algorithms like EGO and its variants have become reference optimization methods in computer experiments. Such algorithms rely on the iterative maximization of a sampling criterion, the expected improvement (EI), which takes advantage of Kriging conditional distributions to make an explicit trade-off between promising and uncertain search space points. We have recently worked on a multi-points EI criterion meant to simultaneously choose several points, which is useful for instance in synchronous parallel computation. Here we propose extensions of these works to asynchronous parallel optimization and focus on a variant of EI, EEI, for the case where some new evaluation(s) have to be made while the responses of previous simulations are not yet all known. In particular, different issues regarding EEI's maximization are addressed, and a proxy strategy is proposed.
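    For context, the standard single-point EI under a Gaussian (Kriging) posterior has a closed form; a minimal sketch follows. The multi-points and asynchronous EEI variants that the abstract describes extend this criterion and are not reproduced here:

    ```python
    # Closed-form expected improvement for minimization under a Gaussian
    # posterior with mean mu and standard deviation sigma at a candidate point,
    # given the current best observed value f_min.
    from math import erf, exp, pi, sqrt

    def expected_improvement(mu, sigma, f_min):
        """EI = (f_min - mu) * Phi(z) + sigma * phi(z), with z = (f_min - mu) / sigma."""
        if sigma <= 0.0:
            return max(f_min - mu, 0.0)  # no uncertainty: improvement is deterministic
        z = (f_min - mu) / sigma
        Phi = 0.5 * (1.0 + erf(z / sqrt(2.0)))    # standard normal CDF
        phi = exp(-0.5 * z * z) / sqrt(2.0 * pi)  # standard normal PDF
        return (f_min - mu) * Phi + sigma * phi

    # A point predicted at the current best but with high uncertainty still has
    # positive EI, which is the exploration/exploitation trade-off described above.
    print(expected_improvement(mu=0.0, sigma=1.0, f_min=0.0))  # 1/sqrt(2*pi) ~ 0.3989
    ```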

    Remote estimation of surface moisture over a watershed

    The author has identified the following significant results. Contoured analyses of moisture availability, moisture flux, sensible heat flux, thermal inertia, and day and nighttime temperatures over a Missouri watershed for a date in June and in September show that forests and creeks exhibit the highest values of moisture availability, whereas farmlands and villages are relatively dry. The distribution of moisture availability over agricultural districts differs significantly between the two cases. This difference is attributed to a change in the surface's vegetative canopy between June and September, with higher moisture availabilities found in the latter case. Horizontal variations of moisture, however, do indicate some relationship between moisture availability and both local rainfall accumulations and the nature of the terrain.