    The s-monotone index selection rules for pivot algorithms of linear programming

    In this paper we introduce the concept of the s-monotone index selection rule for linear programming problems. We show that several known anti-cycling pivot rules, such as the minimal-index, Last-In-First-Out, and most-often-selected-variable rules, are s-monotone index selection rules. Furthermore, we show a possible way to define new s-monotone pivot rules. We prove that several known algorithms, including the primal (dual) simplex, the MBU-simplex, and the criss-cross algorithm, are finite when equipped with s-monotone pivot rules. We implemented the primal simplex and primal MBU-simplex algorithms in MATLAB using three s-monotone index selection rules: minimal-index, Last-In-First-Out (LIFO), and Most-Often-Selected-Variable (MOSV). Numerical results demonstrate the viability of these s-monotone index selection rules within the framework of pivot algorithms.
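
    As a point of reference, the minimal-index (Bland) rule mentioned above can be sketched in a few lines of Python. The dense-tableau layout, tolerances, and names below are assumptions made for this illustration, not the authors' MATLAB implementation.

    import numpy as np

    def blands_pivot(c_bar, A_bar, b_bar, basis):
        """One primal simplex pivot choice under the minimal-index (Bland) rule.

        c_bar : reduced costs of all variables (zero for the basic ones),
        A_bar : current m-by-n tableau, b_bar : current right-hand side,
        basis : list of the m basic variable indices.
        Returns (entering_variable, leaving_row), or None if already optimal.
        """
        # Entering variable: the smallest index with a negative reduced cost.
        improving = np.flatnonzero(c_bar < -1e-12)
        if improving.size == 0:
            return None                                  # current basis is optimal
        q = int(improving[0])

        # Ratio test; ties are broken by the smallest basic-variable index.
        rows = np.flatnonzero(A_bar[:, q] > 1e-12)
        if rows.size == 0:
            raise ValueError("LP is unbounded along entering column %d" % q)
        ratios = b_bar[rows] / A_bar[rows, q]
        tied = rows[np.isclose(ratios, ratios.min())]
        r = int(min(tied, key=lambda i: basis[i]))
        return q, r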

    Strongly polynomial primal monotonic build-up simplex algorithm for maximal flow problems

    The maximum flow problem (MFP) is a fundamental model in operations research. The network simplex algorithm is one of the most efficient solution methods for MFP in practice, yet the theoretical properties of established pivot algorithms for MFP are less well understood. Variants of the primal simplex and dual simplex methods for MFP have been proven strongly polynomial, but no similar result exists for other pivot algorithms such as the monotonic build-up or the criss-cross simplex algorithm. The monotonic build-up simplex algorithm (MBUSA) starts with a feasible solution and restores dual feasibility one variable at a time, temporarily losing primal feasibility. In the case of maximum flow problems, the pivots in one such iteration are all dual degenerate, bar the last one. Using a labelling technique to break these ties, we show a variant that solves the maximum flow problem in 2|V||A|^2 pivots.
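
    For context on the underlying model rather than on the pivot algorithm itself, the sketch below solves a small maximum flow instance with the classical Edmonds-Karp augmenting-path method; the dense capacity matrix and the function signature are illustrative choices, not part of the paper.

    from collections import deque

    def max_flow(n, edges, s, t):
        """Edmonds-Karp: repeatedly augment along a shortest s-t path found by BFS."""
        cap = [[0] * n for _ in range(n)]            # residual capacities
        for u, v, c in edges:
            cap[u][v] += c
        flow = 0
        while True:
            parent = [-1] * n
            parent[s] = s
            queue = deque([s])
            while queue and parent[t] == -1:
                u = queue.popleft()
                for v in range(n):
                    if parent[v] == -1 and cap[u][v] > 0:
                        parent[v] = u
                        queue.append(v)
            if parent[t] == -1:
                return flow                          # no augmenting path remains
            # Bottleneck capacity along the path found by the BFS.
            v, bottleneck = t, float("inf")
            while v != s:
                bottleneck = min(bottleneck, cap[parent[v]][v])
                v = parent[v]
            # Push the bottleneck amount and update the residual network.
            v = t
            while v != s:
                cap[parent[v]][v] -= bottleneck
                cap[v][parent[v]] += bottleneck
                v = parent[v]
            flow += bottleneck

    # Example: two disjoint unit-capacity paths from 0 to 3 give a flow of 2.
    # max_flow(4, [(0, 1, 1), (0, 2, 1), (1, 3, 1), (2, 3, 1)], 0, 3)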

    An update on the Hirsch conjecture

    The Hirsch conjecture was posed in 1957 in a letter from Warren M. Hirsch to George Dantzig. It states that the graph of a d-dimensional polytope with n facets cannot have diameter greater than n - d. Despite being one of the most fundamental, basic and old problems in polytope theory, what we know about it is quite scarce. Most notably, no polynomial upper bound is known for the diameters that are conjectured to be linear. In contrast, very few polytopes are known where the bound n - d is attained. This paper collects known results and remarks both on the positive and on the negative side of the conjecture. Some proofs are included, but only those that we hope are accessible to a general mathematical audience without introducing too many technicalities. Comment: 28 pages, 6 figures. Many proofs have been taken out from version 2 and put into the appendix arXiv:0912.423
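
    In symbols, the conjectured bound reads as follows (a plain restatement of the abstract, with G(P) denoting the vertex-edge graph of the polytope P):

    % Hirsch conjecture: for every d-dimensional polytope P with n facets,
    % the diameter of its graph G(P) is at most n - d.
    \[
      \operatorname{diam}\bigl(G(P)\bigr) \le n - d .
    \]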

    Euclidean distance geometry and applications

    Euclidean distance geometry is the study of Euclidean geometry based on the concept of distance. This is useful in several applications where the input data consists of an incomplete set of distances, and the output is a set of points in Euclidean space that realizes the given distances. We survey some of the theory of Euclidean distance geometry and some of the most important applications: molecular conformation, localization of sensor networks and statics. Comment: 64 pages, 21 figures
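
    A minimal illustration of the realization problem in the complete, noise-free case (the survey treats the much harder incomplete and noisy settings): classical multidimensional scaling recovers a point configuration from a full distance matrix. The function name and the example are assumptions for this sketch.

    import numpy as np

    def realize_from_distances(D, dim):
        """Recover points in R^dim from a complete, exact distance matrix D
        via classical multidimensional scaling (double centering + eigendecomposition)."""
        n = D.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
        G = -0.5 * J @ (D ** 2) @ J                  # Gram matrix of the centered points
        vals, vecs = np.linalg.eigh(G)               # eigenvalues in ascending order
        top = np.argsort(vals)[::-1][:dim]           # keep the `dim` largest
        return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))

    # Example: three collinear points at 0, 3 and 5 are recovered up to a rigid motion.
    D = np.array([[0., 3., 5.], [3., 0., 2.], [5., 2., 0.]])
    X = realize_from_distances(D, 1)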

    Optimization with Sparsity-Inducing Penalties

    Sparse estimation methods are aimed at using or obtaining parsimonious representations of data or models. They were first dedicated to linear variable selection but numerous extensions have now emerged such as structured sparsity or kernel selection. It turns out that many of the related estimation problems can be cast as convex optimization problems by regularizing the empirical risk with appropriate non-smooth norms. The goal of this paper is to present from a general perspective optimization tools and techniques dedicated to such sparsity-inducing penalties. We cover proximal methods, block-coordinate descent, reweighted ℓ2-penalized techniques, working-set and homotopy methods, as well as non-convex formulations and extensions, and provide an extensive set of experiments to compare various algorithms from a computational point of view.
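
    As one small concrete instance of the proximal machinery surveyed here, the proximal operator of the ℓ1 norm is coordinate-wise soft-thresholding, which gives the ISTA update for the lasso; the step-size choice and names below are illustrative assumptions.

    import numpy as np

    def soft_threshold(v, tau):
        """Proximal operator of tau * ||.||_1: shrink each coordinate toward zero by tau."""
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def ista_lasso(A, b, lam, n_iter=500):
        """Minimize 0.5 * ||Ax - b||^2 + lam * ||x||_1 by proximal gradient descent (ISTA)."""
        step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1 / Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - b)                 # gradient of the smooth, quadratic part
            x = soft_threshold(x - step * grad, step * lam)
        return x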

    A2THOS: Availability Analysis and Optimisation in SLAs

    IT service availability is at the core of customer satisfaction and business success for today’s organisations. Many medium-to-large organisations outsource part of their IT services to external providers, with Service Level Agreements (SLAs) describing the agreed availability of the outsourced service components. Availability management of partially outsourced IT services is a non-trivial task, since classic approaches for calculating availability are not applicable and IT managers can only rely on their expertise to carry it out. This often leads to the adoption of non-optimal solutions. In this paper we present A2THOS, a framework to calculate the availability of partially outsourced IT services in the presence of SLAs and to achieve a cost-optimal choice of availability levels for outsourced IT components while guaranteeing a target availability level for the service.
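
    The baseline availability algebra that such frameworks build on (and that becomes insufficient once components are outsourced under SLAs) combines components in series and in parallel; the sketch below only illustrates that textbook baseline, not the A2THOS framework itself.

    from math import prod

    def availability_series(components):
        """All components must be up, so the availabilities multiply."""
        return prod(components)

    def availability_parallel(components):
        """The service is up as long as at least one redundant component is up."""
        return 1.0 - prod(1.0 - a for a in components)

    # Example: two redundant 99% servers behind a single 99.9% network link.
    a = availability_series([availability_parallel([0.99, 0.99]), 0.999])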

    The Topology ToolKit

    This system paper presents the Topology ToolKit (TTK), a software platform designed for topological data analysis in scientific visualization. TTK provides a unified, generic, efficient, and robust implementation of key algorithms for the topological analysis of scalar data, including: critical points, integral lines, persistence diagrams, persistence curves, merge trees, contour trees, Morse-Smale complexes, fiber surfaces, continuous scatterplots, Jacobi sets, Reeb spaces, and more. TTK is easily accessible to end users thanks to a tight integration with ParaView. It is also easily accessible to developers through a variety of bindings (Python, VTK/C++) for fast prototyping, or through direct, dependence-free C++ to ease integration into pre-existing complex systems. While developing TTK, we faced several algorithmic and software engineering challenges, which we document in this paper. In particular, we present an algorithm for the construction of a discrete gradient that complies with the critical points extracted in the piecewise-linear setting. This algorithm guarantees combinatorial consistency across the topological abstractions supported by TTK and, importantly, a unified implementation of topological data simplification for multi-scale exploration and analysis. We also present a cached triangulation data structure that supports time-efficient and generic traversals, self-adjusts its memory usage on demand for input simplicial meshes, and implicitly emulates a triangulation for regular grids with no memory overhead. Finally, we describe an original software architecture that guarantees memory-efficient and direct access to TTK features, while still allowing researchers powerful and easy bindings and extensions. TTK is open source (BSD license) and its code, online documentation and video tutorials are available on TTK's website.
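
    To give a flavour of the persistence diagrams listed above, here is a generic textbook construction of 0-dimensional persistence for a 1-D scalar field, computed with a union-find sweep in increasing function value; it is an independent sketch, not TTK's API or implementation.

    def persistence_pairs_1d(values):
        """0-dimensional persistence pairs of a scalar field sampled on a path graph."""
        n = len(values)
        parent = list(range(n))

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]        # path halving
                i = parent[i]
            return i

        pairs = []
        active = [False] * n
        for i in sorted(range(n), key=lambda k: values[k]):   # sweep by increasing value
            active[i] = True
            for j in (i - 1, i + 1):                 # neighbours on the path graph
                if 0 <= j < n and active[j]:
                    ri, rj = find(i), find(j)
                    if ri != rj:
                        # Elder rule: the component born at the higher minimum dies here.
                        old, young = sorted((ri, rj), key=lambda r: values[r])
                        if values[young] < values[i]:          # skip zero-persistence pairs
                            pairs.append((values[young], values[i]))   # (birth, death)
                        parent[young] = old
        # The component of the global minimum never dies (infinite persistence).
        return pairs

    # Example: persistence_pairs_1d([0.0, 2.0, 1.0, 3.0]) == [(1.0, 2.0)]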