
    Probabilistic Analysis of Discrete Optimization Problems

    We investigate the performance of exact algorithms for hard optimization problems under random inputs. In particular, we prove various structural properties that lead to two general average-case analyses applicable to a large class of optimization problems. In the first part we study the size of the Pareto curve for binary optimization problems with two objective functions. Pareto optimal solutions can be seen as trade-offs between multiple objectives. While in the worst case the cardinality of the Pareto curve is exponential in the number of variables, we prove polynomial upper bounds on the expected number of Pareto points when at least one objective function is linear and exhibits sufficient randomness. Our analysis covers general probability distributions with finite mean and, in its most general form, can even handle different probability distributions for the coefficients of the objective function. We apply this result to the constrained shortest path problem and to the knapsack problem. For both problems there are algorithms that enumerate all Pareto optimal solutions very efficiently, so our polynomial upper bound on the size of the Pareto curve implies that the expected running time of these algorithms is polynomial as well. For example, we obtain a bound of O(n^4) for uniformly random knapsack instances, where n denotes the number of available items.

    In the second part we investigate the performance of knapsack core algorithms, the predominant algorithmic concept in practice. The idea is to fix most variables to the values prescribed by the optimal fractional solution. The reduced problem has only polylogarithmic size on average and is solved using the Nemhauser/Ullmann algorithm. Applying the analysis of the first part, we prove an upper bound of O(n · polylog(n)) on the expected running time. Furthermore, we extend our analysis to a harder class of random input distributions. Finally, we present an experimental study of knapsack instances for various random input distributions. We investigate structural properties, including the size of the Pareto curve and the integrality gap, and compare the running times of different implementations of core algorithms.

    The last part of the thesis introduces a semi-random input model for constrained binary optimization problems, which enables us to perform a smoothed analysis for a large class of optimization problems while taking the combinatorial structure of individual problems into account. Our analysis is centered around structural properties called the winner, loser, and feasibility gaps. These gaps describe the sensitivity of the optimal solution to slight perturbations of the input and can be used to bound the accuracy, and hence the complexity, required to solve an instance. We exploit the gaps in the form of an adaptive rounding scheme that increases the accuracy of calculation until the optimal solution is found. The strength of our techniques is illustrated by applications to various NP-hard optimization problems, for which we obtain the first algorithms with polynomial average-case/smoothed complexity.
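    The Nemhauser/Ullmann algorithm mentioned above admits a compact implementation. Below is a minimal Python sketch (illustrative, not the thesis's code): it processes items one at a time, merging the current Pareto set with its copy shifted by the new item and discarding dominated (weight, profit) pairs. Its running time is proportional to the total size of the intermediate Pareto sets, which is why a polynomial bound on the Pareto curve translates into a polynomial expected running time.

        def pareto_knapsack(items):
            """items: list of (weight, profit) pairs.
            Returns all Pareto-optimal (weight, profit) value pairs."""
            pareto = [(0, 0)]  # start with the empty solution
            for w, p in items:
                # Solutions that include the new item: shift every point by (w, p).
                shifted = [(w0 + w, p0 + p) for (w0, p0) in pareto]
                # Merge, then sweep by increasing weight (ties: highest profit first),
                # keeping a point only if it strictly improves the best profit so far.
                merged = sorted(pareto + shifted, key=lambda t: (t[0], -t[1]))
                pareto, best = [], float("-inf")
                for wi, pi in merged:
                    if pi > best:
                        pareto.append((wi, pi))
                        best = pi
            return pareto

        # Example: three items; prints the 8 Pareto-optimal (weight, profit) pairs.
        print(pareto_knapsack([(2, 3), (3, 4), (4, 5)]))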

    An Investigation Report on Auction Mechanism Design

    Auctions are markets with strict regulations governing the information available to traders and the actions they can take. Since well-designed auctions achieve desirable economic outcomes, they have been widely used in solving real-world optimization problems and in structuring stock and futures exchanges. Auctions also provide a valuable testing ground for economic theory, and they play an important role in computer-based control systems. Auction mechanism design aims to manipulate the rules of an auction in order to achieve specific goals. Economists traditionally use mathematical methods, mainly game theory, to analyze auctions and design new auction forms. However, due to the high complexity of auctions, the mathematical models are typically simplified to obtain results, and this makes it difficult to apply results derived from such models to real-world market environments. As a result, researchers are turning to empirical approaches. This report surveys the theoretical and empirical approaches to designing auction mechanisms and trading strategies, with an emphasis on the empirical ones, and builds a foundation for further research in the field.

    Revenue Management of a Professional Services Firm with Quality Revelation


    Multicriteria investment problem with Savage's risk criteria: Theoretical aspects of stability and case study

    A discrete variant of a multicriteria investment portfolio optimization problem with Savage's risk criteria is considered. One of the three problem parameter spaces is endowed with Hölder's norm, and the other two with Chebyshev's norm. Attainable lower and upper bounds on the stability radius of a Pareto optimal portfolio are obtained. We illustrate the application of our theoretical results by modeling a relevant case study.
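    For reference, the two norms named above are the standard ones; the stability-radius formalization below is one common definition, not necessarily the paper's exact one:

        % Hölder (l_p) norm for 1 <= p < infinity, and Chebyshev (l_infinity) norm:
        \[
          \|x\|_p = \Bigl(\sum_{i=1}^{n} |x_i|^p\Bigr)^{1/p},
          \qquad
          \|x\|_\infty = \max_{1 \le i \le n} |x_i|.
        \]
        % A common formalization of the stability radius of a Pareto optimal
        % portfolio x^*: the largest perturbation level under which it stays optimal.
        \[
          \rho(x^*) = \sup\bigl\{\, \varepsilon \ge 0 :
            x^* \text{ remains Pareto optimal for every perturbation }
            \|\Delta\| \le \varepsilon \,\bigr\}.
        \]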

    Automated Auction Mechanism Design with Competing Markets

    Resource allocation is a major issue in multiple areas of computer science. Despite the wide range of resource types across these areas, for example real commodities in e-commerce and computing resources in distributed computing, auctions are commonly used to solve the optimization problems involved, since well-designed auctions achieve desirable economic outcomes. Auctions are markets with strict regulations governing the information available to traders and the actions they can take. Auction mechanism design aims to manipulate the rules of an auction in order to achieve specific goals. Economists traditionally use mathematical methods, mainly game theory, to analyze auctions and design new auction forms. However, due to the high complexity of auctions, the mathematical models are typically simplified to obtain results, and this makes it difficult to apply results derived from such models to real-world market environments. As a result, researchers are turning to empirical approaches. Following this line of work, we present what we call a grey-box approach to automated auction mechanism design, using reinforcement learning and evolutionary computation methods. We first describe a new strategic game, called CAT, which was designed to run multiple markets that compete to attract traders and make profit. The CAT game enables us to address the imbalance between prior work in this field, which studied auctions in an isolated environment, and the actual competitive situation that markets face. We then define a novel, parameterized framework for auction mechanisms and present a classification of auction rules, each a building block fitting into the framework. Finally, we evaluate the viability of building blocks and acquire auction mechanisms by combining viable blocks through iterations of CAT games. We carried out experiments to examine the effectiveness of the grey-box approach. The best mechanisms we learnt were able to outperform the standard mechanisms against which learning took place, as well as carefully hand-coded mechanisms which won tournaments based on the CAT game. These best mechanisms were also able to outperform mechanisms from the literature even when the evaluation did not take place in the context of CAT games. These results suggest that the grey-box approach can generate robust double auction mechanisms and, as a consequence, is an effective approach to automated mechanism design. The contributions of this work are two-fold. First, the grey-box approach helps to design better auction mechanisms, which can play a central role in solutions to resource allocation problems in various application domains of computer science. Second, the parameterized view and the reinforcement learning-based search method can be used in other strategic, competitive situations where decision-making processes are complex and difficult to design and evaluate manually.
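    A hedged sketch of the kind of search loop the grey-box approach implies: mechanisms are assembled from building-block choices, scored in a (here simulated) CAT-style tournament, and improved by mutation and selection. All names below (BLOCKS, evaluate_in_tournament, the rule options) are illustrative placeholders, not the thesis's actual framework or API.

        import random

        BLOCKS = {                      # hypothetical rule dimensions and options
            "matching": ["continuous", "periodic_clear"],
            "pricing":  ["k_pricing", "side_quote"],
            "charging": ["flat_fee", "profit_share"],
        }

        def random_mechanism():
            """Assemble a mechanism by picking one block per rule dimension."""
            return {dim: random.choice(opts) for dim, opts in BLOCKS.items()}

        def mutate(mech):
            """Swap one building block for another option in the same dimension."""
            child = dict(mech)
            dim = random.choice(list(BLOCKS))
            child[dim] = random.choice(BLOCKS[dim])
            return child

        def evaluate_in_tournament(mech):
            # Placeholder fitness: in the real setting this would be the market
            # share / profit earned against competing markets in repeated CAT games.
            return random.random()

        def evolve(generations=50, pop_size=20):
            pop = [random_mechanism() for _ in range(pop_size)]
            for _ in range(generations):
                scored = sorted(pop, key=evaluate_in_tournament, reverse=True)
                elite = scored[: pop_size // 2]      # keep the better half
                pop = elite + [mutate(random.choice(elite)) for _ in elite]
            return max(pop, key=evaluate_in_tournament)

        print(evolve())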

    Strategic algorithms

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Cataloged from PDF version of thesis. Includes bibliographical references (p. 193-201).

    Classical algorithms from theoretical computer science arise time and again in practice. However, practical situations typically do not fit precisely into the traditional theoretical models; additional necessary components are, for example, uncertainty and economic incentives. Modern algorithm design therefore calls for more interdisciplinary approaches, as well as for deeper theoretical understanding, so that algorithms can apply to more realistic settings and complex systems. Consider, for instance, the classical shortest path algorithm, which, given a graph with specified edge weights, seeks the path minimizing the total weight from a source to a destination. In practice, the edge weights are often uncertain, and it is not even clear what we mean by shortest path anymore: is it the path that minimizes the expected weight? Or its variance, or some other metric? With a risk-averse objective function that takes into account both mean and standard deviation, we run into nonconvex optimization challenges that require new theory beyond classical shortest path algorithm design. Yet another shortest path application, routing of packets in the Internet, needs to further incorporate economic incentives to reflect the various business relationships among the Internet Service Providers that affect the choice of packet routes. Strategic algorithms are algorithms that integrate optimization, uncertainty, and economic modeling into algorithm design, with the goal of bringing about new theoretical developments and solving practical applications arising in complex computational-economic systems. In short, this thesis contributes new algorithms and their underlying theory at the interface of optimization, uncertainty, and economics. Although the interplay of these disciplines is present in various forms in our work, for the sake of presentation we have divided the material into three categories:

    1. In Part I we investigate algorithms at the intersection of optimization and uncertainty. The key conceptual contribution in this part is discovering a novel connection between stochastic and nonconvex optimization. Traditional algorithm design has not taken into account the risk inherent in stochastic optimization problems. We consider natural objectives that incorporate risk, which turn out to be equivalent to certain nonconvex problems from the realm of continuous optimization. As a result, our work advances the state of the art in both stochastic and nonconvex optimization, presenting new complexity results and proposing general-purpose efficient approximation algorithms, some of which have shown promising practical performance and have been implemented in a real traffic prediction and navigation system.

    2. Part II proposes new algorithm and mechanism design at the intersection of uncertainty and economics. In Part I we postulate that the random variables in our models come from given distributions. However, determining those distributions or their parameters is a challenging and fundamental problem in itself. A tool from economics that has recently gained momentum for measuring the probability distribution of a random variable is an information or prediction market. Such markets, most popularly known for predicting the outcomes of political elections or other events of interest, have shown remarkable accuracy in practice, though they have left open the theoretical and strategic analysis of current implementations, as well as the need for new and improved designs that handle more complex outcome spaces (probability distribution functions) as opposed to binary or n-ary valued distributions. The contributions of this part include a unified strategic analysis of different prediction market designs that have been implemented in practice. We also offer new market designs for handling exponentially large outcome spaces stemming from ranking or permutation-type outcomes, together with algorithmic and complexity analysis.

    3. In Part III we consider the interplay of optimization and economics in the context of network routing. This part is motivated by the network of autonomous systems in the Internet, where each portion of the network is controlled by an Internet service provider, namely by a self-interested economic agent. The business incentives do not exist merely in addition to the computer protocols governing the network: although they are not currently integrated in those protocols and are decided largely via private contracting and negotiations, these economic considerations are a principal factor that determines how packets are routed. And vice versa, the demand and flow of network traffic fundamentally affect provider contracts and prices. The contributions of this part are the design and analysis of economic mechanisms for network routing, based on first- and second-price auctions (the so-called Vickrey-Clarke-Groves, or VCG, mechanisms). We first analyze the equilibria and prices resulting from these mechanisms. We then investigate the compatibility of the better-understood VCG mechanisms with the current inter-domain routing protocols, and we demonstrate the critical importance of correct modeling and how it affects the complexity and algorithms necessary to implement the economic mechanisms.

    by Evdokia Velinova Nikolova. Ph.D.
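    The risk-averse shortest path objective of Part I can be made concrete with a small sketch. The following is an illustrative approximation strategy, not the thesis's algorithm: because mean(P) + c · std(P) is nonconvex in the path, solve a family of linear shortest-path problems with edge weight mu + lambda · var over a grid of lambda values and keep the candidate path that is best under the true mean-std objective.

        import heapq, math

        def dijkstra(adj, src, dst, weight):
            """adj: {u: [(v, mu, var), ...]}. Returns the src-dst path (as a list
            of edges) minimizing the sum of weight(mu, var) over its edges."""
            dist, prev, seen = {src: 0.0}, {}, set()
            heap = [(0.0, src)]
            while heap:
                d, u = heapq.heappop(heap)
                if u in seen:
                    continue
                seen.add(u)
                if u == dst:
                    break
                for v, mu, var in adj.get(u, []):
                    nd = d + weight(mu, var)
                    if nd < dist.get(v, math.inf):
                        dist[v], prev[v] = nd, u
                        heapq.heappush(heap, (nd, v))
            path, node = [], dst
            while node != src:
                path.append((prev[node], node))
                node = prev[node]
            return list(reversed(path))

        def mean_std_path(adj, src, dst, c, lambdas=(0.0, 0.1, 0.3, 1.0, 3.0, 10.0)):
            stats = {(u, v): (mu, var) for u in adj for v, mu, var in adj[u]}
            best, best_val = None, math.inf
            for lam in lambdas:
                # Linear surrogate: each lambda gives an easy shortest-path problem.
                path = dijkstra(adj, src, dst, lambda mu, var: mu + lam * var)
                mu_p = sum(stats[e][0] for e in path)
                var_p = sum(stats[e][1] for e in path)
                val = mu_p + c * math.sqrt(var_p)   # true risk-averse objective
                if val < best_val:
                    best, best_val = path, val
            return best, best_val

        # Tiny example: two routes from s to t, one faster on average but riskier.
        adj = {
            "s": [("a", 2.0, 9.0), ("b", 4.0, 0.1)],
            "a": [("t", 2.0, 9.0)],
            "b": [("t", 4.0, 0.1)],
        }
        print(mean_std_path(adj, "s", "t", c=2.0))  # picks the low-variance route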

    Understanding Optimisation Processes with Biologically-Inspired Visualisations

    Evolutionary algorithms (EAs) constitute a branch of artificial intelligence used to evolve solutions to optimisation problems that abound in industry and research. EAs often generate many solutions, and visualisation has been a primary strategy for displaying them, given that visualisation is a well-evaluated medium, across many domains, for comprehending extensive data. Visualising solutions brings inherent challenges arising from high-dimensional data and the large number of solutions to display. Recently, scholars have produced methods that mitigate some of these known issues. However, one key consideration is that displaying only the final subset of solutions (rather than the whole population) discards most of the informativeness of the search, creating inadequate insight into the black-box EA. There is an unequivocal knowledge gap and a requirement for methods which can visualise the whole population of solutions from an optimiser and overcome the high-dimensionality and scaling issues, making the EA search process interpretable. Furthermore, the evolutionary computing community has called for explainability in evolutionary computing, which could take the form of visualisations, to support EA comprehension much as explainable artificial intelligence has supported artificial intelligence.

    In this thesis, we report novel visualisation methods that can be used to visualise large and high-dimensional optimiser populations, with the aim of creating greater interpretability during a search. We consider the nascent intersection of visualisation and explainability in evolutionary computing. The potentially high informativeness of a visualisation method from an early chapter of this work forms an effective platform on which to develop an explainability visualisation method, namely the population dynamics plot, which attempts to inject explainability into the inner workings of the search process. We further support the visualisation of populations using machine learning, constructing models which can capture the characteristics of an EA search, and develop intelligent visualisations which use artificial intelligence to potentially enhance and support visualisation for a more informative search process.

    The methods developed in this thesis are evaluated both quantitatively and qualitatively. We use multi-feature benchmark problems to show the methods' ability to reveal specific problem characteristics such as disconnected fronts, local optima and bias, as well as potentially creating a better understanding of the problem landscape and the optimiser's search for evaluating and comparing algorithm performance (we show the visualisation method to be more insightful than conventional metrics like hypervolume alone). One of the most insightful methods developed in this thesis can produce a visualisation requiring less than 1% of the time and memory needed to visualise the same objective-space solutions with existing methods, allowing greater scalability and use in short-compile-time applications such as online visualisations. Building on an existing visualisation method from this thesis, we then develop and apply an explainability method to a real-world problem and evaluate it, showing the method to be highly effective at explaining the search via solutions in the objective spaces, solution lineage and solution variation operators, so as to compactly comprehend, evaluate and communicate the search of an optimiser; we note, however, that the explainability properties are only evaluated against the author's ability and could be evaluated further in future work with a usability study. The work is then supported by the development of intelligent visualisation models that may allow one to predict solutions in optima (importantly, local optima) in unseen problems by using a machine learning model. The results are effective, with some models able to predict and visualise solution optima with a balanced F1 accuracy of 96%.

    The results of this thesis provide a suite of visualisations which aims to provide greater informativeness of the search, and greater scalability, than the previously existing literature. The work develops one of the first explainability methods aiming to create greater insight into the search space, solution lineage and reproductive operators, and applies machine learning to potentially enhance EA understanding via visualisation. These models could also be used for a number of applications outside visualisation. Ultimately, the work provides novel methods for all EA stakeholders which aim to support understanding, evaluation and communication of EA processes with visualisation.
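    A generic sketch of the core idea behind visualising whole populations rather than only the final front (an illustration of the concept only, not the thesis's population dynamics plot): every solution from every generation is drawn, coloured by the generation in which it appeared, so the trajectory of the search toward the trade-off front stays visible.

        import matplotlib.pyplot as plt
        import numpy as np

        rng = np.random.default_rng(0)
        generations, pop_size = 30, 40
        fig, ax = plt.subplots()
        for g in range(generations):
            # Hypothetical bi-objective populations drifting toward the
            # f1 + f2 = 1 trade-off front as the search progresses.
            f1 = rng.random(pop_size)
            noise = rng.normal(0.0, 0.3 * (1.0 - g / generations), pop_size)
            f2 = (1.0 - f1) + np.abs(noise)
            ax.scatter(f1, f2, s=8, color=plt.cm.viridis(g / generations))
        fig.colorbar(plt.cm.ScalarMappable(norm=plt.Normalize(0, 1), cmap="viridis"),
                     ax=ax, label="generation (normalised)")
        ax.set_xlabel("objective f1")
        ax.set_ylabel("objective f2")
        plt.show()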

    Throughput and Yield Improvement for a Continuous Discrete-Product Manufacturing System

    A seam-welded steel pipe manufacturing process presents four distinct major design and operational problems, dealing with buffer inventory, cutting tools, pipe sizing, and the inspection-rework facility. The general objective of this research is to solve these four problems optimally, improving the throughput and yield of the system at minimum cost. The first problem finds the optimal buffer capacity of steel strip coils that minimizes maintenance and downtime-related costs; the total cost function for this coil feeding system is formulated as a constrained non-linear programming (NLP) problem and solved with a search algorithm. The second problem aims at finding the optimal tool magazine reload timing, magazine size and order quantity for the cutting tools; this tool magazine system is formulated as a mixed-integer NLP problem solved for minimum total cost. The third problem deals with different types of manufacturing defects; its profit function forms a binary-integer NLP problem involving multiple integrals with several exponential and discrete functions, and an exhaustive search method is employed to find the optimum strategy for dealing with the defects and pipe sizing. The fourth problem pertains to the number of servers and the floor space allocation for the off-line inspection-rework facility; its total cost function forms an integer NLP, minimized with a customized search algorithm. To judge the impact of the above problems, an overall equipment effectiveness (OEE) measure, coined the monetary loss based regression (MLBR) method, is also developed as a fifth problem to assess the performance of the entire manufacturing system. Finally, a numerical simulation of the entire process is conducted to illustrate the application of the optimum parameter settings and to evaluate the overall effectiveness of the simulated system. The successful improvement of the simulated system supports implementing this research in a real manufacturing setup. The pathways shown here for improving the throughput and yield of industrial systems contribute not only to better methodologies and techniques but also to the advancement of new technology and the national economy.
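    To make the flavour of the first problem concrete, here is a toy constrained-NLP sketch in the same spirit; the cost model and all constants are hypothetical stand-ins, not the paper's actual formulation: holding cost grows with buffer capacity while the expected downtime cost shrinks with it, and a bounded scalar search locates the minimum.

        # Toy stand-in for the buffer-capacity problem: total cost = holding cost
        # + expected downtime cost. All constants below are assumptions made up
        # for illustration, not values from the paper.
        from scipy.optimize import minimize_scalar

        HOLD_COST_PER_COIL = 4.0   # assumed storage cost per coil of buffer
        DOWNTIME_COST = 900.0      # assumed cost of one line-starvation event
        STALL_RATE = 0.05          # assumed rate of upstream feed stalls

        def total_cost(b):
            """Cost at buffer capacity b: larger buffers cost more to hold but
            absorb more upstream stalls, cutting the expected downtime cost."""
            expected_downtime = DOWNTIME_COST * STALL_RATE / (1.0 + b)
            return HOLD_COST_PER_COIL * b + expected_downtime

        res = minimize_scalar(total_cost, bounds=(0.0, 50.0), method="bounded")
        print(f"optimal buffer capacity ~ {res.x:.1f} coils, cost ~ {res.fun:.2f}")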

    LIPIcs, Volume 244, ESA 2022, Complete Volume
