
    Three Puzzles on Mathematics, Computation, and Games

    In this lecture I will talk about three mathematical puzzles involving mathematics and computation that have preoccupied me over the years. The first puzzle is to understand the amazing success of the simplex algorithm for linear programming. The second puzzle is about errors made when votes are counted during elections. The third puzzle is: are quantum computers possible? Comment: ICM 2018 plenary lecture, Rio de Janeiro, 36 pages, 7 figures.

    The tropical shadow-vertex algorithm solves mean payoff games in polynomial time on average

    We introduce an algorithm which solves mean payoff games in polynomial time on average, assuming the distribution of the games satisfies a flip invariance property on the set of actions associated with every state. The algorithm is a tropical analogue of the shadow-vertex simplex algorithm, which solves mean payoff games via linear feasibility problems over the tropical semiring $(\mathbb{R} \cup \{-\infty\}, \max, +)$. The key ingredient in our approach is that the shadow-vertex pivoting rule can be transferred to tropical polyhedra, and that its computation reduces to optimal assignment problems through Plücker relations. Comment: 17 pages, 7 figures, appears in 41st International Colloquium, ICALP 2014, Copenhagen, Denmark, July 8-11, 2014, Proceedings, Part
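
    As a quick illustration of the tropical semiring the abstract refers to, here is a minimal Python sketch of max-plus arithmetic and the induced matrix product. The function names are ours; this shows only the underlying algebra, not the paper's algorithm:

        NEG_INF = float("-inf")  # the tropical "zero": identity for max

        def trop_add(a, b):
            # Tropical addition is the ordinary maximum.
            return max(a, b)

        def trop_mul(a, b):
            # Tropical multiplication is ordinary addition; 0 is its
            # identity and -inf is absorbing, matching the semiring.
            return a + b

        def trop_matmul(A, B):
            # Matrix product over (R ∪ {-inf}, max, +). Powers of such a
            # matrix compute maximum total weights of paths in a weighted
            # digraph: the one-player special case of a mean payoff game.
            rows, inner, cols = len(A), len(B), len(B[0])
            return [[max(trop_mul(A[i][k], B[k][j]) for k in range(inner))
                     for j in range(cols)] for i in range(rows)]

        # Example: entries of trop_matmul(W, W) are best two-step path weights.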

    Analysis of pivot sampling in dual-pivot Quicksort: A holistic analysis of Yaroslavskiy's partitioning scheme

    The final publication is available at Springer via http://dx.doi.org/10.1007/s00453-015-0041-7. The new dual-pivot Quicksort by Vladimir Yaroslavskiy, used in Oracle's Java runtime library since version 7, features intriguing asymmetries. They make a basic variant of this algorithm use fewer comparisons than classic single-pivot Quicksort. In this paper, we extend the analysis to the case where the two pivots are chosen as fixed order statistics of a random sample. Surprisingly, dual-pivot Quicksort then needs more comparisons than a corresponding version of classic Quicksort, so it is clear that counting comparisons is not sufficient to explain the running-time advantages observed for Yaroslavskiy's algorithm in practice. Consequently, we take a more holistic approach and also give the precise leading term of the average number of swaps, the number of executed Java bytecode instructions, and the number of scanned elements, a new and simple cost measure that approximates I/O costs in the memory hierarchy. We determine optimal order statistics for each of the cost measures. It turns out that the asymmetries in Yaroslavskiy's algorithm render pivots with a systematic skew more efficient than the symmetric choice. Moreover, we finally have a convincing explanation for the success of Yaroslavskiy's algorithm in practice: compared with corresponding versions of classic single-pivot Quicksort, dual-pivot Quicksort needs significantly fewer I/Os, both with and without pivot sampling.
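
    For readers unfamiliar with the partitioning scheme being analyzed, here is a short Python sketch of Yaroslavskiy-style dual-pivot partitioning. For brevity it takes the pivots from the ends of the subarray rather than from a random sample, so it corresponds to the basic variant, not the sampled one studied in the paper, and it follows the commonly published scheme rather than the exact Java runtime code:

        def dual_pivot_quicksort(a, lo=0, hi=None):
            # Sorts a in place using two pivots p <= q and a three-way
            # partition into  < p,  p..q,  and  > q.
            if hi is None:
                hi = len(a) - 1
            if lo >= hi:
                return
            if a[lo] > a[hi]:
                a[lo], a[hi] = a[hi], a[lo]
            p, q = a[lo], a[hi]
            l, g, k = lo + 1, hi - 1, lo + 1
            while k <= g:
                if a[k] < p:                      # belongs in the left part
                    a[k], a[l] = a[l], a[k]
                    l += 1
                elif a[k] > q:                    # belongs in the right part
                    while a[g] > q and k < g:
                        g -= 1
                    a[k], a[g] = a[g], a[k]
                    g -= 1
                    if a[k] < p:                  # swapped-in element may be small
                        a[k], a[l] = a[l], a[k]
                        l += 1
                k += 1
            l -= 1
            g += 1
            a[lo], a[l] = a[l], a[lo]             # place the two pivots at
            a[hi], a[g] = a[g], a[hi]             # their final positions
            dual_pivot_quicksort(a, lo, l - 1)
            dual_pivot_quicksort(a, l + 1, g - 1)
            dual_pivot_quicksort(a, g + 1, hi)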

    Design of large scale applications of secure multiparty computation : secure linear programming

    Secure multiparty computation is a basic concept of growing interest in modern cryptography. It allows a set of mutually distrusting parties to perform a computation on their private information in such a way that as little as possible is revealed about each private input. The early results of multiparty computation have only theoretical significance, since they are not able to solve computationally complex functions in a reasonable amount of time. Nowadays, the efficiency of secure multiparty computation is an important topic of cryptographic research. As a case study, we apply multiparty computation to solve the problem of secure linear programming. The results enable, for example in the context of the EU-FP7 project SecureSCM, collaborative supply chain management. Collaborative supply chain management is about the optimization of the supply and demand configuration of a supply chain. In order to optimize the total benefit of the entire chain, parties should collaborate by pooling their sensitive data. With the focus on efficiency, we design protocols that securely solve any linear program using the simplex algorithm. The simplex algorithm is well studied, and many of its variants provide a simple and efficient solution to linear programs in practice. However, the cryptographic layer on top of any variant of the simplex algorithm imposes restrictions and new complexity measures. For example, hiding the number of iterations of the simplex algorithm has the consequence that the secure implementations must run for a worst-case number of iterations; since the simplex algorithm has exponentially many iterations in the worst case, the secure implementations have exponentially many iterations in all cases. To give a basis for understanding these restrictions, we review the basic theory behind the simplex algorithm and provide a set of cryptographic building blocks used to implement secure protocols evaluating basic variants of the simplex algorithm. We show how to balance privacy and efficiency; some protocols reveal data about the internal state of the simplex algorithm, such as the number of iterations, in order to improve the expected running times. For the sake of simplicity and efficiency, the protocols are based on Shamir's secret sharing scheme. We combine and use results from the literature on secure random number generation, secure circuit evaluation, secure comparison, and secret indexing to construct efficient building blocks for secure simplex. The solutions for secure linear programming in this thesis can be classified along two lines. On the one hand, some protocols evaluate classical variants of the simplex algorithm in which numbers are truncated, while the other protocols evaluate variants in which truncation is avoided. On the other hand, the protocols can be separated by the size of the tableaus. Theoretically, there is no clear winner that has both the best security properties and the best performance.
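
    Since the protocols rest on Shamir's secret sharing scheme, a self-contained Python sketch of sharing and reconstruction over a prime field may help. The modulus and function names are illustrative choices, not taken from the thesis:

        import random

        PRIME = 2**61 - 1  # illustrative field size; any prime > secret works

        def share(secret, t, n):
            # Split secret into n shares; any t of them reconstruct it.
            # Shares are points (i, f(i)) of a random degree-(t-1) polynomial
            # with constant term equal to the secret, over GF(PRIME).
            coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
            def f(x):
                acc = 0
                for c in reversed(coeffs):        # Horner evaluation mod PRIME
                    acc = (acc * x + c) % PRIME
                return acc
            return [(i, f(i)) for i in range(1, n + 1)]

        def reconstruct(shares):
            # Lagrange interpolation at x = 0 over GF(PRIME).
            # Needs Python 3.8+ for pow(den, -1, PRIME) modular inverses.
            secret = 0
            for xi, yi in shares:
                num = den = 1
                for xj, _ in shares:
                    if xj != xi:
                        num = num * (-xj) % PRIME
                        den = den * (xi - xj) % PRIME
                secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
            return secret

        # Example: any 3 of 5 shares recover the secret; fewer reveal nothing.
        assert reconstruct(share(42, t=3, n=5)[:3]) == 42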

    Geometric Combinatorics of Transportation Polytopes and the Behavior of the Simplex Method

    This dissertation investigates the geometric combinatorics of convex polytopes and connections to the behavior of the simplex method for linear programming. We focus our attention on transportation polytopes, which are sets of all tables of non-negative real numbers satisfying certain summation conditions. Transportation problems are, in many ways, the simplest kind of linear programs and thus have a rich combinatorial structure. First, we give new results on the diameters of certain classes of transportation polytopes and their relation to the Hirsch Conjecture, which asserts that the diameter of every $d$-dimensional convex polytope with $n$ facets is bounded above by $n-d$. In particular, we prove a new quadratic upper bound on the diameter of $3$-way axial transportation polytopes defined by $1$-marginals. We also show that the Hirsch Conjecture holds for $p \times 2$ classical transportation polytopes, but that there are infinitely many Hirsch-sharp classical transportation polytopes. Second, we present new results on subpolytopes of transportation polytopes. We investigate, for example, a non-regular triangulation of a subpolytope of the fourth Birkhoff polytope $B_4$. This implies the existence of non-regular triangulations of all Birkhoff polytopes $B_n$ for $n \geq 4$. We also study certain classes of network flow polytopes and prove new linear upper bounds for their diameters. Comment: PhD thesis submitted June 2010 to the University of California, Davis. 183 pages, 49 figures.
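
    To make the objects concrete: a classical transportation polytope consists of non-negative $p \times q$ tables with prescribed row and column sums. The following Python sketch (using scipy; the function name is ours) sets such a polytope up as the feasible region of an LP and lets the HiGHS solver return an optimal vertex for a linear cost:

        import numpy as np
        from scipy.optimize import linprog

        def transportation_vertex(row_sums, col_sums, cost):
            # Minimize <cost, x> over the classical transportation polytope
            # { x >= 0 : row sums and column sums are prescribed }.
            # Tables are flattened row-major, so x[i*q + j] is entry (i, j).
            p, q = len(row_sums), len(col_sums)
            A = np.zeros((p + q, p * q))
            for i in range(p):
                A[i, i * q:(i + 1) * q] = 1       # i-th row-sum constraint
            for j in range(q):
                A[p + j, j::q] = 1                # j-th column-sum constraint
            b = np.concatenate([row_sums, col_sums])
            res = linprog(np.ravel(cost), A_eq=A, b_eq=b, method="highs")
            # res.x is None if the marginals are inconsistent (infeasible LP).
            return None if res.x is None else res.x.reshape(p, q)

        # Example: a 2 x 2 table with row sums (3, 5) and column sums (4, 4).
        print(transportation_vertex([3, 5], [4, 4], [[1, 2], [3, 1]]))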

    Upper and Lower Bounds on the Smoothed Complexity of the Simplex Method

    The simplex method for linear programming is known to be highly efficient in practice, and understanding its performance from a theoretical perspective is an active research topic. The framework of smoothed analysis, first introduced by Spielman and Teng (JACM '04) for this purpose, defines the smoothed complexity of solving a linear program with $d$ variables and $n$ constraints as the expected running time when Gaussian noise of variance $\sigma^2$ is added to the LP data. We prove that the smoothed complexity of the simplex method is $O(\sigma^{-3/2} d^{13/4} \log^{7/4} n)$, improving the dependence on $1/\sigma$ compared to the previous bound of $O(\sigma^{-2} d^2 \sqrt{\log n})$. We accomplish this through a new analysis of the \emph{shadow bound}, key to earlier analyses as well. Illustrating the power of our new method, we use it to prove a nearly tight upper bound on the smoothed complexity of two-dimensional polygons. We also establish the first non-trivial lower bound on the smoothed complexity of the simplex method, proving that the \emph{shadow vertex simplex method} requires at least $\Omega\big(\min\big(\sigma^{-1/2} d^{-1/2} \log^{-1/4} d,\, 2^d\big)\big)$ pivot steps with high probability. A key part of our analysis is a new variation on the extended formulation for the regular $2^k$-gon. We end with a numerical experiment that suggests this analysis could be further improved. Comment: 41 pages, 5 figures.
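
    The smoothed-analysis setup itself is easy to replicate empirically. The Python sketch below (the function name and experimental choices are ours) perturbs the data of a given LP with Gaussian noise of standard deviation sigma and averages the iteration counts of scipy's dual-simplex backend; note this measures a practical simplex implementation, not the shadow vertex rule analyzed in the paper:

        import numpy as np
        from scipy.optimize import linprog

        def smoothed_iteration_count(A, b, c, sigma, trials=100, seed=0):
            # Perturb the LP data with independent N(0, sigma^2) noise and
            # average the simplex iteration counts over instances that
            # solve cleanly. Uses the HiGHS dual simplex via scipy.
            rng = np.random.default_rng(seed)
            counts = []
            for _ in range(trials):
                An = A + sigma * rng.standard_normal(A.shape)
                bn = b + sigma * rng.standard_normal(b.shape)
                cn = c + sigma * rng.standard_normal(c.shape)
                res = linprog(cn, A_ub=An, b_ub=bn,
                              bounds=(None, None), method="highs-ds")
                if res.status == 0:               # skip unbounded/infeasible
                    counts.append(res.nit)
            return float(np.mean(counts)) if counts else float("nan")

        # Example: a bounded base LP (a cube) with mild noise, sigma = 0.1.
        A0 = np.vstack([np.eye(3), -np.eye(3)])
        print(smoothed_iteration_count(A0, np.ones(6), np.ones(3), sigma=0.1))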

    A Stochastic Model for Programming the Supply of a Strategic Material

    Ordering policies in an environment of stochastic yields and substitutable demands

    Includes bibliographical references. Partially supported by the Leaders for Manufacturing Program. Authors: Gabriel R. Bitran, Sriram Dasu.

    Advances in design and implementation of optimization software

    Developing optimization software that is capable of solving large and complex real-life problems is a huge effort. It builds on deep knowledge of four areas: the theory of optimization algorithms, relevant results of computer science, principles of software engineering, and computer technology. The paper highlights the diverse requirements of optimization software and introduces the ingredients needed to fulfill them. After a review of the hardware/software environment, it surveys computationally successful techniques for continuous optimization. It also outlines the perspective offered by parallel computing and stresses the importance of optimization modeling systems. The many references are intended both to give due credit to results in the field of optimization software and to help readers obtain more detailed information on issues of interest.