
    On generating the irredundant conjunctive and disjunctive normal forms of monotone Boolean functions

    Get PDF
    Let f: {0,1}^n → {0,1} be a monotone Boolean function whose value at any point x ∈ {0,1}^n can be determined in time t. Denote by c = ⋀_{I∈C} ⋁_{i∈I} x_i the irredundant CNF of f, where C is the set of the prime implicates of f. Similarly, let d = ⋁_{J∈D} ⋀_{j∈J} x_j be the irredundant DNF of the same function, where D is the set of the prime implicants of f. We show that given subsets C′ ⊆ C and D′ ⊆ D such that (C′, D′) ≠ (C, D), a new term in (C ∖ C′) ∪ (D ∖ D′) can be found in time O(n(t+n)) + m^{o(log m)}, where m = |C′| + |D′|. In particular, if f(x) can be evaluated for every x ∈ {0,1}^n in polynomial time, then the forms c and d can be jointly generated in incremental quasi-polynomial time. On the other hand, even for the class of ∧,∨-formulae f of depth 2, i.e., for CNFs or DNFs, it is unlikely that uniform sampling from within the set of the prime implicates and implicants of f can be carried out in time bounded by a quasi-polynomial 2^{polylog(·)} in the input size of f. We also show that for some classes of polynomial-time computable monotone Boolean functions it is NP-hard to test either of the conditions D′ = D or C′ = C. This provides evidence that for each of these classes neither conjunctive nor disjunctive irredundant normal forms can be generated in total (or incremental) quasi-polynomial time. Such classes of monotone Boolean functions naturally arise in game theory, networks and relay contact circuits, and convex programming, and include a subset of ∧,∨-formulae of depth 3.
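
    To make the objects concrete, here is a small brute-force sketch (my own illustration, not the paper's incremental algorithm, and exponential in n) that enumerates the prime implicants D and prime implicates C of a monotone function given as a black-box evaluator; it uses monotonicity to identify prime implicants with minimal sets of variables that already force f to 1.

```python
from itertools import combinations

def char_vec(subset, n):
    """0/1 point with ones exactly on the given index set."""
    return tuple(1 if i in subset else 0 for i in range(n))

def prime_implicants(f, n):
    """Minimal variable sets J such that setting x_j = 1 for j in J (and
    everything else to 0) already forces f = 1; valid because f is monotone."""
    found = []
    for size in range(n + 1):
        for J in combinations(range(n), size):
            J = set(J)
            if f(char_vec(J, n)) == 1 and not any(p <= J for p in found):
                found.append(J)
    return found

def prime_implicates(f, n):
    """Minimal variable sets I such that setting x_i = 0 for i in I (and
    everything else to 1) already forces f = 0."""
    found = []
    for size in range(n + 1):
        for I in combinations(range(n), size):
            I = set(I)
            point = tuple(0 if i in I else 1 for i in range(n))
            if f(point) == 0 and not any(p <= I for p in found):
                found.append(I)
    return found

# Example: f(x) = (x0 and x1) or x2, a monotone function on n = 3 variables.
f = lambda x: int((x[0] and x[1]) or x[2])
print(prime_implicants(f, 3))   # [{2}, {0, 1}]   -> irredundant DNF  x2 v x0 x1
print(prime_implicates(f, 3))   # [{0, 2}, {1, 2}] -> irredundant CNF (x0 v x2)(x1 v x2)
```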

    Morehshin Allahyari: Material Speculation

    Get PDF
    "MATERIAL SPECULATION presents radical propositions for 3D Printing that inspect petropolitical and poetic relationships between 3D Printing, Plastic, Oil, Terrorism, and Technocapitalism. Allahyari addresses complex contemporary cultural and political dynamics with the sophistication and nuance it deserves, weaving multiple dynamics together for a holistic image of contemporary relations with objecthood and ideology. " -- Publisher's website

    A new Lenstra-type Algorithm for Quasiconvex Polynomial Integer Minimization with Complexity 2^O(n log n)

    Full text link
    We study the integer minimization of a quasiconvex polynomial with quasiconvex polynomial constraints. We propose a new algorithm that is an improvement upon the best known algorithm due to Heinz (Journal of Complexity, 2005). This improvement is achieved by applying a new modern Lenstra-type algorithm, finding optimal ellipsoid roundings, and considering sparse encodings of polynomials. For the bounded case, our algorithm attains a time-complexity of s (r l M d)^{O(1)} 2^{2n log_2(n) + O(n)}, where M is a bound on the number of monomials in each polynomial and r is the binary encoding length of a bound on the feasible region. In the general case, the time-complexity is s l^{O(1)} d^{O(n)} 2^{2n log_2(n) + O(n)}. In each case we assume d ≥ 2 is a bound on the total degree of the polynomials and l bounds the maximum binary encoding size of the input. Comment: 28 pages, 10 figures.
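
    By way of contrast with the 2^{O(n log n)} bound above, a naive sketch of the bounded case (my illustration, with no Lenstra-type machinery, and exponential in the size of the box) simply enumerates integer points and keeps the best feasible one; the objective and constraint below are hypothetical quasiconvex (in fact convex) polynomials.

```python
from itertools import product

def integer_minimize(objective, constraints, bounds):
    """Brute-force integer minimization over a box.
    bounds: list of (lo, hi) per variable; constraints: functions g_i with g_i(x) <= 0.
    Exponential in the dimension -- avoiding exactly this blow-up is the point
    of Lenstra-type algorithms."""
    best_x, best_val = None, None
    for x in product(*(range(lo, hi + 1) for lo, hi in bounds)):
        if all(g(x) <= 0 for g in constraints):
            val = objective(x)
            if best_val is None or val < best_val:
                best_x, best_val = x, val
    return best_x, best_val

# Illustrative instance: minimize (x - 3)^4 + (y + 1)^2
# subject to x^2 + y^2 - 25 <= 0, over the box [-5, 5]^2.
obj = lambda x: (x[0] - 3) ** 4 + (x[1] + 1) ** 2
cons = [lambda x: x[0] ** 2 + x[1] ** 2 - 25]
print(integer_minimize(obj, cons, [(-5, 5), (-5, 5)]))  # ((3, -1), 0)
```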

    On the Solution of Linear Programming Problems in the Age of Big Data

    Full text link
    The Big Data phenomenon has spawned large-scale linear programming problems. In many cases, these problems are non-stationary. In this paper, we describe a new scalable algorithm called NSLP for solving high-dimensional, non-stationary linear programming problems on modern cluster computing systems. The algorithm consists of two phases: Quest and Targeting. The Quest phase calculates a solution of the system of inequalities defining the constraint system of the linear programming problem under the condition of dynamic changes in input data. To this end, the apparatus of Fejér mappings is used. The Targeting phase forms a special system of points having the shape of an n-dimensional axisymmetric cross. The cross moves in the n-dimensional space in such a way that the solution of the linear programming problem is located at all times in an ε-vicinity of the central point of the cross. Comment: Parallel Computational Technologies - 11th International Conference, PCT 2017, Kazan, Russia, April 3-7, 2017, Proceedings (to be published in Communications in Computer and Information Science, vol. 753).
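
    As a rough illustration of the machinery behind the Quest phase, the sketch below applies a textbook Fejér-type mapping: orthogonal projection onto the most violated half-space of Ax ≤ b. This is only the generic projection idea; NSLP's actual mapping, its handling of dynamically changing input data, and the cross-shaped Targeting phase are not reproduced here, and the toy system is my own.

```python
import numpy as np

def fejer_step(x, A, b):
    """Project x onto the most violated half-space a_i . x <= b_i.
    Such a projection is a Fejér mapping with respect to the feasible
    polyhedron: it never increases the distance to any feasible point."""
    residuals = A @ x - b
    i = int(np.argmax(residuals))
    if residuals[i] <= 0:
        return x                      # already feasible
    a = A[i]
    return x - (residuals[i] / (a @ a)) * a

def solve_inequalities(A, b, x0, tol=1e-9, max_iter=10_000):
    """Iterate Fejér steps until the inequality system Ax <= b is satisfied."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x = fejer_step(x, A, b)
        if np.max(A @ x - b) <= tol:
            return x
    return x

# Toy system: x >= 1, y >= 2, x + y <= 10, written as Ax <= b.
A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])
b = np.array([-1.0, -2.0, 10.0])
print(solve_inequalities(A, b, x0=[0.0, 0.0]))   # converges to [1. 2.]
```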

    Assessing non-Markovian dynamics

    Full text link
    We investigate what a snapshot of a quantum evolution - a quantum channel reflecting open system dynamics - reveals about the underlying continuous time evolution. Remarkably, from such a snapshot, and without imposing additional assumptions, it can be decided whether or not a channel is consistent with a time (in)dependent Markovian evolution, for which we provide computable necessary and sufficient criteria. Based on these, a computable measure of 'Markovianity' is introduced. We discuss how the consistency with Markovian dynamics can be checked in quantum process tomography. The results also clarify the geometry of the set of quantum channels with respect to being solutions of time (in)dependent master equations. Comment: 5 pages, RevTeX, 2 figures. (Except for typesetting) version to be published in Physical Review Letters.
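
    A minimal numerical sketch of the snapshot idea under strong simplifications: take the principal matrix logarithm of the channel's transfer matrix and check the GKSL (Lindblad) conditions on the candidate generator, namely hermiticity preservation, trace preservation, and conditional complete positivity. Only one branch of the logarithm is examined, so a negative answer here is weaker than the necessary and sufficient criteria of the paper; the depolarising-channel example and all conventions below are my own.

```python
import numpy as np
from scipy.linalg import logm

def choi_from_superop(S, d):
    """Choi matrix of a superoperator S acting on column-vectorised d x d
    matrices: J = sum_{ij} S(|i><j|) (x) |i><j|."""
    J = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        for j in range(d):
            E = np.zeros((d, d), dtype=complex)
            E[i, j] = 1.0
            out = (S @ E.reshape(-1, 1, order="F")).reshape(d, d, order="F")
            J += np.kron(out, E)
    return J

def looks_markovian(T, d, tol=1e-8):
    """Heuristic snapshot test: take the principal branch L = log T and check
    the GKSL conditions (hermiticity preservation, trace preservation,
    conditional complete positivity). Other logarithm branches are ignored,
    so False here is evidence, not proof, of non-Markovianity."""
    L = logm(T)
    J = choi_from_superop(L, d)
    herm_ok = np.allclose(J, J.conj().T, atol=tol)
    tp_ok = np.allclose(np.eye(d).reshape(1, -1, order="F") @ L, 0, atol=tol)
    # Conditional complete positivity: positivity after projecting out the
    # maximally entangled state |Omega> = vec(I) / sqrt(d).
    omega = np.eye(d).reshape(-1, 1, order="F") / np.sqrt(d)
    P = np.eye(d * d) - omega @ omega.conj().T
    ccp_ok = np.linalg.eigvalsh(P @ J @ P).min() >= -tol
    return herm_ok and tp_ok and ccp_ok

# Example: qubit depolarising channel rho -> (1 - p) rho + p I/2, built from
# its Kraus operators in the column-stacking convention S = sum conj(K) (x) K.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
p = 0.3
kraus = [np.sqrt(1 - 3 * p / 4) * I2] + [np.sqrt(p / 4) * K for K in (X, Y, Z)]
S = sum(np.kron(K.conj(), K) for K in kraus)
print(looks_markovian(S, 2))    # expected: True
```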

    A scaling-invariant algorithm for linear programming whose running time depends only on the constraint matrix

    Get PDF
    Following the breakthrough work of Tardos (Oper. Res. '86) in the bit-complexity model, Vavasis and Ye (Math. Prog. '96) gave the first exact algorithm for linear programming in the real model of computation with running time depending only on the constraint matrix. For solving a linear program (LP) max cx, Ax = b, x ≥ 0, A ∈ ℝ^{m×n}, Vavasis and Ye developed a primal-dual interior point method using a 'layered least squares' (LLS) step, and showed that O(n^{3.5} log(χ_A + n)) iterations suffice to solve (LP) exactly, where χ_A is a condition measure controlling the size of solutions to linear systems related to A. Monteiro and Tsuchiya (SIAM J. Optim. '03), noting that the central path is invariant under rescalings of the columns of A and c, asked whether there exists an LP algorithm depending instead on the measure χ*_A, defined as the minimum χ_{AD} value achievable by a column rescaling AD of A, and gave strong evidence that this should be the case. We resolve this open question affirmatively. Our first main contribution is an O(m^2 n^2 + n^3) time algorithm which works on the linear matroid of A to compute a nearly optimal diagonal rescaling D satisfying χ_{AD} ≤ n(χ*)^3. This algorithm also allows us to approximate the value of χ_A up to a factor n(χ*)^2. This result is in (surprising) contrast to that of Tunçel (Math. Prog. '99), who showed NP-hardness for approximating χ_A to within 2^{poly(rank(A))}. The key insight for our algorithm is to work with ratios g_i/g_j of circuits of A - i.e., minimal linear dependencies Ag = 0 - which allow us to approximate the value of χ*_A by a maximum geometric mean cycle computation in what we call the 'circuit ratio digraph' of A. While this resolves Monteiro and Tsuchiya's question by appropriate preprocessing, it falls short of providing either a truly scaling invariant algorithm or an improvement upon the base LLS analysis. In this vein, as our second main contribution we develop a scaling invariant LLS algorithm, which uses and dynamically maintains improving estimates of the circuit ratio digraph, together with a refined potential function based analysis for LLS algorithms in general. With this analysis, we derive an improved O(n^{2.5} log n log(χ*_A + n)) iteration bound for optimally solving (LP) using our algorithm. The same argument also yields a factor n/log n improvement on the iteration complexity bound of the original Vavasis-Ye algorithm.
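
    Of the ingredients above, the maximum geometric mean cycle computation is the most self-contained. The sketch below (my own, on a hypothetical digraph of positive edge ratios rather than the paper's circuit ratio digraph) reduces it to a minimum mean cycle problem on logarithms and solves that with Karp's dynamic program.

```python
import math

def min_mean_cycle(n, edges):
    """Karp's algorithm: minimum mean weight over all directed cycles.
    Vertices 0..n-1, edges = list of (u, v, w). Returns None if acyclic."""
    INF = float("inf")
    # d[k][v] = minimum weight of a walk with exactly k edges ending at v,
    # starting anywhere (equivalent to adding a zero-weight super source).
    d = [[INF] * n for _ in range(n + 1)]
    d[0] = [0.0] * n
    for k in range(1, n + 1):
        for u, v, w in edges:
            if d[k - 1][u] + w < d[k][v]:
                d[k][v] = d[k - 1][u] + w
    best = None
    for v in range(n):
        if d[n][v] == INF:
            continue
        worst = max((d[n][v] - d[k][v]) / (n - k)
                    for k in range(n) if d[k][v] < INF)
        if best is None or worst < best:
            best = worst
    return best

def max_geometric_mean_cycle(n, ratio_edges):
    """Maximum geometric-mean cycle of positive edge ratios, via logarithms:
    max geo-mean = exp(-(min mean cycle of the negated log-ratios))."""
    log_edges = [(u, v, -math.log(r)) for u, v, r in ratio_edges]
    mu = min_mean_cycle(n, log_edges)
    return None if mu is None else math.exp(-mu)

# Toy instance: cycle 0->1->0 with ratios 4 and 1 (geometric mean 2),
# cycle 1->2->1 with ratios 9 and 1 (geometric mean 3).
edges = [(0, 1, 4.0), (1, 0, 1.0), (1, 2, 9.0), (2, 1, 1.0)]
print(max_geometric_mean_cycle(3, edges))   # ~3.0
```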

    An Exponential Lower Bound for the Latest Deterministic Strategy Iteration Algorithms

    Full text link
    This paper presents a new exponential lower bound for the two most popular deterministic variants of the strategy improvement algorithms for solving parity, mean payoff, discounted payoff and simple stochastic games. The first variant improves every node in each step, maximizing the current valuation locally, whereas the second variant computes the globally optimal improvement in each step. We outline families of games on which both variants require exponentially many strategy iterations.
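
    For orientation, here is a generic strategy improvement loop in the spirit of the two rules, sketched for discounted payoff games (one of the classes mentioned above). The value-iteration evaluation step, the toy game, and the simplification of the 'globally optimal' rule to a single best switch are my own; the paper's second variant optimizes over sets of switches.

```python
def evaluate(strategy, succ, reward, min_nodes, discount=0.9, iters=2000):
    """Value of every node when the max player follows `strategy` and the
    min player replies optimally (discounted payoff, plain value iteration)."""
    v = {x: 0.0 for x in succ}
    for _ in range(iters):
        for x in succ:
            if x in min_nodes:
                v[x] = min(reward[(x, y)] + discount * v[y] for y in succ[x])
            else:
                y = strategy[x]
                v[x] = reward[(x, y)] + discount * v[y]
    return v

def strategy_improvement(succ, reward, min_nodes, switch_all=True, discount=0.9):
    """Generic strategy improvement for the max player.
    switch_all=True : first variant, switch every node to its locally best edge.
    switch_all=False: stand-in for the second variant, simplified here to
                      switching only the single most profitable node."""
    max_nodes = [x for x in succ if x not in min_nodes]
    strategy = {x: succ[x][0] for x in max_nodes}          # arbitrary start
    while True:
        v = evaluate(strategy, succ, reward, min_nodes, discount)
        gains = {}
        for x in max_nodes:
            best = max(succ[x], key=lambda y: reward[(x, y)] + discount * v[y])
            cur = reward[(x, strategy[x])] + discount * v[strategy[x]]
            alt = reward[(x, best)] + discount * v[best]
            if alt > cur + 1e-9:
                gains[x] = (best, alt - cur)
        if not gains:                                      # no switchable edge
            return strategy, v
        if switch_all:
            for x, (best, _) in gains.items():
                strategy[x] = best
        else:
            x_star = max(gains, key=lambda n: gains[n][1])
            strategy[x_star] = gains[x_star][0]

# Tiny game: max node 'a' picks a cycle through 'b' (reward 1 per step) or a
# longer cycle through 'c' and the min node 'm' (a single reward of 5 per loop).
succ      = {'a': ['b', 'c'], 'b': ['a'], 'c': ['m'], 'm': ['a']}
reward    = {('a', 'b'): 1, ('a', 'c'): 0, ('b', 'a'): 1, ('c', 'm'): 5, ('m', 'a'): 0}
min_nodes = {'m'}
print(strategy_improvement(succ, reward, min_nodes))       # 'a' switches to 'c'
```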

    On Relevant Equilibria in Reachability Games

    Full text link
    We study multiplayer reachability games played on a finite directed graph equipped with target sets, one for each player. In those reachability games, it is known that there always exists a Nash equilibrium (NE) and a subgame perfect equilibrium (SPE). But sometimes several equilibria may coexist such that in one equilibrium no player reaches his target set whereas in another one several players reach it. It is thus very natural to identify "relevant" equilibria. In this paper, we consider different notions of relevant equilibria including Pareto optimal equilibria and equilibria with high social welfare. We provide complexity results for various related decision problems.
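
    To make "relevant" concrete, the sketch below computes the reachability payoffs of two hypothetical equilibrium outcomes (finite play prefixes standing in for eventually looping plays), their social welfare, and a Pareto domination check; the encoding is purely illustrative and unrelated to the paper's decision procedures.

```python
def payoffs(play, targets):
    """Reachability payoff per player: 1 if the play visits the player's
    target set, else 0."""
    visited = set(play)
    return {p: int(bool(visited & t)) for p, t in targets.items()}

def social_welfare(pay):
    """Number of players whose reachability objective is satisfied."""
    return sum(pay.values())

def pareto_dominates(pay_a, pay_b):
    """True if outcome A is at least as good for every player and strictly
    better for at least one."""
    return (all(pay_a[p] >= pay_b[p] for p in pay_a)
            and any(pay_a[p] > pay_b[p] for p in pay_a))

# Two hypothetical equilibrium outcomes in a 2-player game on vertices 0..4.
targets = {1: {3}, 2: {4}}          # player -> target set of vertices
play_a  = [0, 1, 2, 2, 2]           # no player reaches a target
play_b  = [0, 1, 3, 4, 4]           # both players reach their targets
pa, pb = payoffs(play_a, targets), payoffs(play_b, targets)
print(pa, pb, social_welfare(pb), pareto_dominates(pb, pa))
# {1: 0, 2: 0} {1: 1, 2: 1} 2 True
```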

    A-Tint: A polymake extension for algorithmic tropical intersection theory

    Full text link
    In this paper we study algorithmic aspects of tropical intersection theory. We analyse how divisors and intersection products on tropical cycles can actually be computed using polyhedral geometry. The main focus of this paper is the study of moduli spaces, where the underlying combinatorics of the varieties involved allow a much more efficient way of computing certain tropical cycles. The algorithms discussed here have been implemented in an extension for polymake, a software system for polyhedral computations. Comment: 32 pages, 5 figures, 4 tables. Second version: Revised version, to be published in European Journal of Combinatorics.
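
    a-tint itself is a polymake extension; as a language-neutral toy for the kind of objects involved, the sketch below evaluates a one-variable min-plus tropical polynomial and computes its divisor, i.e. the breakpoints where the minimum is attained by terms of different slope, weighted by the change of slope. The example polynomial is hypothetical and nothing here reflects the a-tint implementation.

```python
def tropical_eval(terms, x):
    """Min-plus value of a tropical polynomial in one variable:
    terms are pairs (coefficient a_i, exponent c_i), value = min_i (a_i + c_i * x)."""
    return min(a + c * x for a, c in terms)

def tropical_divisor(terms, tol=1e-12):
    """Breakpoints of the tropical polynomial on the real line, i.e. points
    where the minimum is attained by terms of different slope, each with its
    multiplicity (the total change of slope there)."""
    divisor = {}
    for i, (a1, c1) in enumerate(terms):
        for a2, c2 in terms[i + 1:]:
            if c1 == c2:
                continue
            x = (a2 - a1) / (c1 - c2)                 # where the two terms tie
            if a1 + c1 * x > tropical_eval(terms, x) + tol:
                continue                              # the tie is not the minimum
            slopes = [c for a, c in terms
                      if abs(a + c * x - tropical_eval(terms, x)) <= tol]
            divisor[round(x, 9)] = max(slopes) - min(slopes)
    return divisor

# Example: the tropical polynomial "x^2 + 1*x + 3" = min(2x, 1 + x, 3).
terms = [(0, 2), (1, 1), (3, 0)]
print(tropical_divisor(terms))   # {1.0: 1, 2.0: 1} -- two zeros, as degree 2 predicts
```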

    Maximal admissible faces and asymptotic bounds for the normal surface solution space

    Get PDF
    The enumeration of normal surfaces is a key bottleneck in computational three-dimensional topology. The underlying procedure is the enumeration of admissible vertices of a high-dimensional polytope, where admissibility is a powerful but non-linear and non-convex constraint. The main results of this paper are significant improvements upon the best known asymptotic bounds on the number of admissible vertices, using polytopes in both the standard normal surface coordinate system and the streamlined quadrilateral coordinate system. To achieve these results we examine the layout of admissible points within these polytopes. We show that these points correspond to well-behaved substructures of the face lattice, and we study properties of the corresponding "admissible faces". Key lemmata include upper bounds on the number of maximal admissible faces of each dimension, and a bijection between the maximal admissible faces in the two coordinate systems mentioned above. Comment: 31 pages, 10 figures, 2 tables; v2: minor revisions (to appear in Journal of Combinatorial Theory, Series A).
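
    The admissibility constraint itself is easy to state even though it breaks convexity. Below is a minimal sketch of the membership test in quadrilateral coordinates (three entries per tetrahedron; standard coordinates add four unconstrained triangle entries per tetrahedron), not of the vertex enumeration that the paper bounds.

```python
def is_admissible(vector, num_tetrahedra):
    """Admissibility test in quadrilateral coordinates: every entry is
    non-negative and, within each tetrahedron, at most one of the three
    quadrilateral coordinates is non-zero. This is the non-linear,
    non-convex constraint discussed above."""
    if any(x < 0 for x in vector):
        return False
    for t in range(num_tetrahedra):
        quads = vector[3 * t : 3 * t + 3]
        if sum(1 for q in quads if q != 0) > 1:
            return False
    return True

# Two tetrahedra, three quadrilateral coordinates each.
print(is_admissible([2, 0, 0, 0, 0, 1], 2))   # True:  one quad type per tetrahedron
print(is_admissible([2, 1, 0, 0, 0, 1], 2))   # False: two quad types in tetrahedron 0
```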