
    Quantile and probability curves without crossing

    The most common approach to estimating conditional quantile curves is to fit a curve, typically linear, pointwise for each quantile. Linear functional forms, coupled with pointwise fitting, are used for a number of reasons, including the parsimony of the resulting approximations and good computational properties. The resulting fits, however, may not respect the logical monotonicity requirement that the quantile curve be increasing as a function of the probability level. This paper studies the natural monotonization of these empirical curves, induced by sampling from the estimated non-monotone model and then taking the resulting conditional quantile curves, which by construction are monotone in the probability.
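
    A minimal sketch of the monotonization idea (the grid and fitted values below are hypothetical; sorting the fitted quantiles on a grid of probabilities coincides with sampling from the estimated model and taking empirical quantiles):

```python
import numpy as np

# Rearrangement: evaluate the possibly non-monotone fitted quantile
# curve on a grid of probabilities, then sort the values. The sorted
# curve is monotone in the probability by construction.
def rearrange(q_hat):
    return np.sort(q_hat)

taus = np.linspace(0.05, 0.95, 19)
# hypothetical pointwise quantile fits at a fixed covariate value,
# perturbed so that the fitted curves cross (non-monotone in tau)
q_hat = 1.0 + 2.0 * taus + 0.3 * np.sin(12 * taus)
q_mono = rearrange(q_hat)
assert np.all(np.diff(q_mono) >= 0)  # no crossing after rearrangement
```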

    Optimal Decision Rules in Logical Recognition Models

    The task of constructing smooth and stable decision rules in logical recognition models is considered. Logical regularities of classes are defined as conjunctions of one-place predicates that determine the membership of feature values in intervals of the real axis. The conjunctions are true on special non-extendable subsets of the reference objects of some class and are optimal. The standard approach to constructing linear decision rules for given sets of logical regularities consists in realizing voting schemes. The weighting coefficients of the voting procedures are either chosen heuristically or obtained as solutions of a complex optimization task. Modifications of linear decision rules are proposed that are based on the search for maximal estimates of reference objects for their classes and that use approximations of logical regularities by smooth sigmoid functions.
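
    As an illustrative sketch (the names and the slope parameter k are not the paper's notation), a logical regularity given by interval predicates can be smoothed by replacing each indicator with a sigmoid:

```python
import numpy as np

# A logical regularity is a conjunction of one-place interval predicates
#   r(x) = AND_j [a_j <= x_j <= b_j].
# Approximating each indicator by a product of sigmoids yields a smooth
# surrogate usable in differentiable decision rules.
def sigmoid(z, k=10.0):
    # k controls how sharply the sigmoid approximates the step function
    return 1.0 / (1.0 + np.exp(-k * z))

def smooth_regularity(x, a, b, k=10.0):
    return np.prod(sigmoid(x - a, k) * sigmoid(b - x, k))

x = np.array([0.5, 1.5])
a = np.array([0.0, 1.0])   # interval lower endpoints
b = np.array([1.0, 2.0])   # interval upper endpoints
print(smooth_regularity(x, a, b))  # close to 1 inside the box, near 0 outside
```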

    Polynomial Logical Zonotopes: A Set Representation for Reachability Analysis of Logical Systems

    In this paper, we introduce a set representation called polynomial logical zonotopes for performing exact and computationally efficient reachability analysis on logical systems. Polynomial logical zonotopes are a generalization of logical zonotopes, which are able to represent up to 2^n binary vectors using only n generators. Due to their construction, logical zonotopes are only able to support exact computations of some logical operations (XOR, NOT, XNOR), while other operations (AND, NAND, OR, NOR) result in over-approximations. In order to perform all fundamental logical operations exactly, we formulate a generalization of logical zonotopes that is constructed using additional dependent generators and exponent matrices. We prove that through this polynomial-like construction, we are able to perform all of the fundamental logical operations (XOR, NOT, XNOR, AND, NAND, OR, NOR) exactly. While we are able to perform all of the logical operations exactly, this comes with a slight increase in computational complexity compared to logical zonotopes. We show that we can use polynomial logical zonotopes to perform exact reachability analysis while retaining a low computational complexity. To illustrate and showcase the computational benefits of polynomial logical zonotopes, we present the results of performing reachability analysis on two use cases: (1) safety verification of an intersection crossing protocol, and (2) reachability analysis on a high-dimensional Boolean function. Moreover, to highlight the extensibility of logical zonotopes, we include an additional use case where we perform a computationally tractable exhaustive search for the key of a linear-feedback shift register.

    Comment: arXiv admin note: substantial text overlap with arXiv:2210.0859
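
    A minimal sketch of why XOR is exact for (basic) logical zonotopes, under the representation described in the abstract (a binary center vector and a generator matrix over GF(2); the variable names are illustrative):

```python
import numpy as np
from itertools import product

# A logical zonotope is the set {c XOR (G @ beta mod 2) : beta in {0,1}^k}.
def enumerate_zonotope(c, G):
    k = G.shape[1]
    return {tuple((c + G @ np.array(b)) % 2) for b in product([0, 1], repeat=k)}

# XOR (Minkowski sum over GF(2)) is exact: XOR the centers, stack generators.
def xor_zonotopes(c1, G1, c2, G2):
    return c1 ^ c2, np.hstack([G1, G2])

c1, G1 = np.array([0, 1]), np.array([[1], [0]])
c2, G2 = np.array([1, 1]), np.array([[0], [1]])
c3, G3 = xor_zonotopes(c1, G1, c2, G2)
brute = {tuple(np.array(u) ^ np.array(v))
         for u in enumerate_zonotope(c1, G1)
         for v in enumerate_zonotope(c2, G2)}
assert brute == enumerate_zonotope(c3, G3)  # exact, no over-approximation
```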

    Sherali-Adams gaps, flow-cover inequalities and generalized configurations for capacity-constrained Facility Location

    Metric facility location is a well-studied problem for which linear programming methods have been used with great success in deriving approximation algorithms. The capacity-constrained generalizations, such as capacitated facility location (CFL) and lower-bounded facility location (LBFL), have proved notorious as far as LP-based approximation is concerned: while there are local-search-based constant-factor approximations, there is no known linear relaxation with constant integrality gap. According to Williamson and Shmoys, devising a relaxation-based approximation for CFL is among the top 10 open problems in approximation algorithms. This paper significantly advances the state of the art on the effectiveness of linear programming for capacity-constrained facility location through a host of impossibility results for both CFL and LBFL. We show that the relaxations obtained from the natural LP at Ω(n) levels of the Sherali-Adams hierarchy have an unbounded gap, partially answering an open question of [LiS13, AnBS13]. Here, n denotes the number of facilities in the instance. Building on the ideas for this result, we prove that the standard CFL relaxation enriched with the generalized flow-cover valid inequalities [AardalPW95] also has an unbounded gap. This disproves a long-standing conjecture of [LeviSS12]. We finally introduce the family of proper relaxations, which generalizes to its logical extreme the classic star relaxation and captures general configuration-style LPs. We characterize the behavior of proper relaxations for CFL and LBFL through a sharp threshold phenomenon.

    Comment: arXiv admin note: substantial text overlap with arXiv:1305.599
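
    For reference, a standard form of the natural LP relaxation of CFL to which hierarchies such as Sherali-Adams are applied (the notation below — facilities F, clients C, opening costs f_i, capacities u_i, demands d_j, connection costs c_ij — is a common convention, not necessarily the paper's):

```latex
\begin{align*}
\min\quad & \sum_{i \in F} f_i\, y_i + \sum_{i \in F}\sum_{j \in C} d_j\, c_{ij}\, x_{ij} \\
\text{s.t.}\quad & \sum_{i \in F} x_{ij} = 1 && \forall j \in C \\
& \sum_{j \in C} d_j\, x_{ij} \le u_i\, y_i && \forall i \in F \\
& x_{ij} \le y_i && \forall i \in F,\ j \in C \\
& 0 \le x_{ij},\ y_i \le 1
\end{align*}
```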

    Collection analysis for Horn clause programs

    We consider approximating data structures with collections of the items that they contain. For example, lists, binary trees, tuples, etc., can be approximated by sets or multisets of the items within them. Such approximations can be used to provide partial correctness properties of logic programs. For example, one might wish to specify that whenever the atom sort(t,s) is proved, the two lists t and s contain the same multiset of items (that is, s is a permutation of t). If sorting removes duplicates, then one would like to infer that the sets of items underlying t and s are the same. Such results could be useful to have if they can be determined statically and automatically. We present a scheme by which such collection analysis can be structured and automated. Central to this scheme is the use of linear logic as a computational logic underlying the logic of Horn clauses.
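
    A runtime check of the properties the static analysis would establish (a minimal sketch: the paper's analysis is static, while this merely tests the multiset and set abstractions on concrete data):

```python
from collections import Counter

# Approximate a list by the multiset (Counter) or set of its items.
t = [3, 1, 2, 1]
s = sorted(t)                      # sort(t, s): s is a permutation of t
assert Counter(t) == Counter(s)    # same multiset of items

s_dedup = sorted(set(t))           # a sort that removes duplicates
assert set(t) == set(s_dedup)      # only the set abstraction is preserved
assert Counter(t) != Counter(s_dedup)
```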

    Infinitary cut-elimination via finite approximations

    We investigate non-wellfounded proof systems based on parsimonious logic, a weaker variant of linear logic where the exponential modality ! is interpreted as a constructor for streams over finite data. Logical consistency is maintained at a global level by adapting a standard progressing criterion. We present an infinitary version of cut-elimination based on finite approximations, and we prove that, in the presence of the progressing criterion, it returns well-defined non-wellfounded proofs at its limit. Furthermore, we show that cut-elimination preserves the progressing criterion and various regularity conditions internalizing degrees of proof-theoretical uniformity. Finally, we provide a denotational semantics for our systems based on the relational model.
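
    A loose, data-level analogy for the limit construction (this illustrates finite approximations of an infinitary object, not the proof-theoretic machinery itself): an infinite stream is determined by the chain of its finite prefixes, and each observation inspects only a finite approximation.

```python
from itertools import count, islice

def stream():                    # a non-wellfounded (infinite) object
    return count(0)

def approximation(n):            # its n-th finite approximation
    return list(islice(stream(), n))

# the approximations form a chain: each extends the previous one,
# and the stream is recovered as the limit of this chain
a3, a5 = approximation(3), approximation(5)
assert a5[:3] == a3
```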

    Tractable Simulation of Error Correction with Honest Approximations to Realistic Fault Models

    In previous work, we proposed a method for leveraging efficient classical simulation algorithms to aid in the analysis of large-scale fault-tolerant circuits implemented on hypothetical quantum information processors. Here, we extend those results by numerically studying the efficacy of this proposal as a tool for understanding the performance of an error-correction gadget implemented with fault models derived from physical simulations. Our approach is to approximate the arbitrary error maps that arise from realistic physical models with errors that are amenable to a particular classical simulation algorithm in an "honest" way; that is, such that we do not underestimate the faults introduced by our physical models. In all cases, our approximations provide an "honest representation" of the performance of the circuit composed of the original errors. This numerical evidence supports the use of our method as a way to understand the feasibility of an implementation of quantum information processing given a characterization of the underlying physical processes in experimentally accessible examples.

    Comment: 34 pages, 9 tables, 4 figures
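
    A simplified stand-in for the "honest" approximation idea (not the paper's construction; honesty is judged here by average-gate infidelity under standard Pauli-transfer-matrix conventions, whereas the paper uses more careful criteria): replace a single-qubit channel with a depolarizing channel whose error is at least as large, so the physical model's faults are never underestimated.

```python
import numpy as np

def avg_gate_infidelity(R, d=2):
    # F_avg = (Tr R + d) / (d (d + 1)) for the PTM R of a d-dim channel
    return 1.0 - (np.trace(R).real + d) / (d * (d + 1))

def honest_depolarizing_ptm(R):
    r = avg_gate_infidelity(R)
    p = min(1.0, 2.0 * r)   # single-qubit depolarizing: infidelity = p / 2
    return np.diag([1.0, 1 - p, 1 - p, 1 - p])

# example: amplitude-damping channel (PTM, Pauli order I, X, Y, Z), gamma = 0.1
g = 0.1
R = np.array([[1, 0, 0, 0],
              [0, np.sqrt(1 - g), 0, 0],
              [0, 0, np.sqrt(1 - g), 0],
              [g, 0, 0, 1 - g]])
R_honest = honest_depolarizing_ptm(R)
# the approximation never underestimates the error of the original channel
assert avg_gate_infidelity(R_honest) >= avg_gate_infidelity(R) - 1e-12
```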