1,280 research outputs found

    On extremum-searching approximate probabilistic algorithms


    On generalized processor sharing and objective functions: analytical framework

    Today, telecommunication networks host a wide range of heterogeneous services. Some demand strict delay guarantees, while others only need best-effort service. To achieve service differentiation, network traffic is partitioned into several classes, which are then transmitted according to a flexible and fair scheduling mechanism. A telecommunication network can, for instance, use an implementation of Generalized Processor Sharing (GPS) in its internal nodes to supply an adequate Quality of Service to each class. GPS is flexible and fair, but also notoriously hard to study analytically. As a result, one has to resort to simulation or approximation techniques to optimize GPS for a given objective function. In this paper, we set up an analytical framework for two-class discrete-time probabilistic GPS that allows the scheduling to be optimized for a generic objective function in terms of the mean unfinished work of both classes, without the need for exact results or estimations/approximations of these performance characteristics. This framework is based on results for strict priority scheduling, which can be regarded as a special case of GPS, and on some specific unfinished-work properties of two-class GPS. We also apply our framework to a popular type of objective function, namely convex combinations of functions of the mean unfinished work. Lastly, we incorporate the framework into an algorithm that finds the optimum of an objective function faster and with less computation.
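    A minimal simulation sketch of the two-class discrete-time probabilistic GPS discipline described above: in each slot, if both queues are backlogged, class 1 is served with probability beta, and beta = 1 recovers strict priority. The Bernoulli arrival process, unit-work slots, and all parameter names are illustrative assumptions, not the paper's exact model.

    ```python
    import random

    def simulate_pgps(beta, arrival_p=(0.3, 0.3), slots=100_000, seed=0):
        """Discrete-time simulation of two-class probabilistic GPS.

        Each slot, one unit of work may arrive per class (Bernoulli
        arrivals) and the server completes one unit of work.  If both
        queues are backlogged, class 1 is served with probability beta,
        otherwise class 2; if only one queue is backlogged, it is served
        (work-conserving).  Returns the mean unfinished work per class.
        """
        rng = random.Random(seed)
        u = [0, 0]                      # unfinished work per class
        totals = [0, 0]
        for _ in range(slots):
            for k in (0, 1):            # arrivals
                if rng.random() < arrival_p[k]:
                    u[k] += 1
            if u[0] > 0 and u[1] > 0:   # both backlogged: randomize
                served = 0 if rng.random() < beta else 1
            elif u[0] > 0 or u[1] > 0:  # one backlogged: serve it
                served = 0 if u[0] > 0 else 1
            else:
                served = None           # idle slot
            if served is not None:
                u[served] -= 1
            totals[0] += u[0]
            totals[1] += u[1]
        return totals[0] / slots, totals[1] / slots

    print(simulate_pgps(beta=0.7))
    ```

    Sweeping beta and evaluating a convex-combination objective on the returned mean unfinished work is exactly the simulation-based brute force that the paper's analytical framework is designed to replace.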

    The Advantage of Intelligent Algorithms for TSP


    Stochastic Optimization in Econometric Models – A Comparison of GA, SA and RSG

    This paper shows that, in the case of an econometric model highly sensitive to data, stochastic optimization algorithms outperform classical gradient techniques. In addition, we show that the Repetitive Stochastic Guesstimation (RSG) algorithm, invented by Charemza, is closer to Simulated Annealing (SA) than to Genetic Algorithms (GAs), so we produce hybrids between RSG and SA to study their joint behavior. All algorithms involved are evaluated on a short form of the Romanian macro model, derived from Dobrescu (1996). The subject of optimization is the model's solution, as a function of the initial values (in the first stage) and of the objective functions (in the second stage). We show that a priori information helps "elitist" algorithms (like RSG and SA) obtain the best results; on the other hand, when one has equal belief concerning the choice among different objective functions, GAs give a straight answer. Analyzing the average relative bias of the model's solution demonstrates the efficiency of the stochastic optimization methods presented.
    Keywords: underground economy, Laffer curve, informal activity, fiscal policy, transition, macroeconomic model, stochastic optimization, evolutionary algorithms, Repetitive Stochastic Guesstimation
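    As a reference point for the comparison above, here is a minimal simulated annealing loop. The Gaussian proposal, geometric cooling schedule, and the multimodal test objective are illustrative assumptions, not the paper's macro-model setup or its RSG hybrid.

    ```python
    import math
    import random

    def simulated_annealing(objective, x0, step=0.1, t0=1.0, cooling=0.995,
                            iters=10_000, seed=0):
        """Minimal SA: accept worse points with prob. exp(-delta/T).

        The temperature-controlled acceptance of uphill moves lets the
        search escape local optima of a data-sensitive objective, which
        is where classical gradient techniques get stuck.
        """
        rng = random.Random(seed)
        x, fx = list(x0), objective(x0)
        best, fbest = list(x), fx
        t = t0
        for _ in range(iters):
            cand = [xi + rng.gauss(0.0, step) for xi in x]
            fc = objective(cand)
            if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
                x, fx = cand, fc
                if fx < fbest:
                    best, fbest = list(x), fx
            t *= cooling                 # geometric cooling
        return best, fbest

    # Illustrative multimodal objective (Rastrigin), not the macro model.
    f = lambda v: sum(x * x - 10 * math.cos(2 * math.pi * x) + 10 for x in v)
    print(simulated_annealing(f, [3.0, -2.0]))
    ```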

    Sublinear Computation Paradigm

    This open access book gives an overview of cutting-edge work on a new paradigm called the "sublinear computation paradigm," which was proposed in the large multiyear academic research project "Foundations of Innovative Algorithms for Big Data," which ran in Japan from October 2014 to March 2020. To handle the unprecedented explosion of big data sets in research, industry, and other areas of society, there is an urgent need to develop novel methods and approaches for big data analysis. To meet this need, innovative changes in algorithm theory for big data are being pursued. For example, polynomial-time algorithms have thus far been regarded as "fast," but if a quadratic-time algorithm is applied to a petabyte-scale or larger big data set, problems arise in terms of computational resources or running time. To overcome this critical computational and algorithmic bottleneck, linear-, sublinear-, and constant-time algorithms are required. The sublinear computation paradigm is proposed here in order to support innovation in the big data era. A foundation of innovative algorithms has been created by developing computational procedures, data structures, and modelling techniques for big data. The project is organized into three teams that focus on sublinear algorithms, sublinear data structures, and sublinear modelling. The work has produced high-level academic research results of strong computational and algorithmic interest, which are presented in this book. The book consists of five parts: Part I, a single chapter on the concept of the sublinear computation paradigm; Parts II, III, and IV, which review results on sublinear algorithms, sublinear data structures, and sublinear modelling, respectively; and Part V, which presents application results. The information presented here will inspire researchers who work in the field of modern algorithms.
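    As a self-contained illustration of the paradigm (not an example taken from the book), the following sketch estimates a global statistic of a huge data set from a fixed number of random samples, so the number of data accesses is constant in the input size. The epsilon/delta parameters and the Hoeffding-based sample size are standard; the function name is ours.

    ```python
    import math
    import random

    def estimate_fraction(data, predicate, eps=0.05, delta=0.01, seed=0):
        """Estimate the fraction of items satisfying `predicate` by sampling.

        By Hoeffding's inequality, m >= ln(2/delta) / (2 * eps**2) uniform
        samples give an estimate within eps of the true fraction with
        probability at least 1 - delta.  m does not depend on len(data),
        so the query cost is constant in the size of the input.
        """
        rng = random.Random(seed)
        m = math.ceil(math.log(2 / delta) / (2 * eps * eps))
        hits = sum(predicate(rng.choice(data)) for _ in range(m))
        return hits / m

    # About 1,060 samples, whether the list holds 10**7 or 10**12 items.
    data = list(range(10_000_000))
    print(estimate_fraction(data, lambda x: x % 2 == 0))
    ```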

    Probabilistic Line Searches for Stochastic Optimization

    In deterministic optimization, line searches are a standard tool ensuring stability and efficiency. Where only stochastic gradients are available, no direct equivalent has so far been formulated, because uncertain gradients do not allow for a strict sequence of decisions collapsing the search space. We construct a probabilistic line search by combining the structure of existing deterministic methods with notions from Bayesian optimization. Our method retains a Gaussian process surrogate of the univariate optimization objective, and uses a probabilistic belief over the Wolfe conditions to monitor the descent. The algorithm has very low computational cost and no user-controlled parameters. Experiments show that it effectively removes the need to define a learning rate for stochastic gradient descent.
    Comment: Extended version of the NIPS '15 conference paper; includes detailed pseudo-code; 59 pages, 35 figures.
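    For context, a minimal sketch of the deterministic building block the paper generalizes: backtracking with the sufficient-decrease (Armijo, first Wolfe) condition. This is not the paper's GP-based method; the paper replaces this hard accept/reject test with a probabilistic belief over the Wolfe conditions so that noisy function and gradient evaluations can be handled.

    ```python
    def backtracking_line_search(f, grad_f, x, direction, alpha0=1.0,
                                 c1=1e-4, shrink=0.5, max_iter=50):
        """Classical deterministic Armijo backtracking.

        Shrinks the step a until the sufficient-decrease condition
        f(x + a*d) <= f(x) + c1 * a * <grad f(x), d> holds, assuming
        d is a descent direction (so the slope term is negative).
        """
        fx = f(x)
        slope = sum(g * d for g, d in zip(grad_f(x), direction))
        a = alpha0
        for _ in range(max_iter):
            trial = [xi + a * di for xi, di in zip(x, direction)]
            if f(trial) <= fx + c1 * a * slope:
                return a, trial
            a *= shrink
        return a, trial

    # Example on f(x) = x1^2 + x2^2, stepping along the negative gradient.
    f = lambda x: x[0] ** 2 + x[1] ** 2
    g = lambda x: [2 * x[0], 2 * x[1]]
    x0 = [1.0, -2.0]
    a, x1 = backtracking_line_search(f, g, x0, [-gi for gi in g(x0)])
    print(a, x1)
    ```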