117,087 research outputs found

    Simulating the DPLL(T) procedure in a sequent calculus with focusing

    This paper gives an abstract description of decision procedures for Satisfiability Modulo Theory (SMT) as proof-search procedures in a sequent calculus with polarities and focusing. In particular, we show how to simulate the execution of standard techniques based on the Davis-Putnam-Logemann-Loveland (DPLL) procedure modulo theory as the gradual construction of a proof tree in sequent calculus. The construction mimicking a run of DPLL-modulo-theory can be obtained by a meta-logical control on the proof search in sequent calculus. This control is provided by polarities and focusing features, which therefore narrow the corresponding search space in a sense we discuss. This simulation can also account for backjumping and learning steps, which correspond to the use of general cuts in sequent calculus.
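
    The DPLL loop whose runs the paper encodes as focused proof construction can be stated compactly. Below is a minimal propositional DPLL sketch (unit propagation plus splitting, with no theory solver, backjumping, or learning); the mapping of each step to a polarized sequent rule is the paper's contribution and is not reproduced here.

```python
# Minimal propositional DPLL: the procedure whose runs the paper
# simulates as gradual construction of a focused sequent proof.
# A formula is a list of clauses; a clause is a set of literals;
# a literal is a nonzero int (negative = negated variable).

def unit_propagate(clauses, assignment):
    """Repeatedly assign literals forced by unit clauses."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(l in assignment for l in clause):
                continue  # clause already satisfied
            # remaining literals are either false or unassigned
            unassigned = [l for l in clause if -l not in assignment]
            if not unassigned:
                return None  # conflict: every literal is false
            if len(unassigned) == 1:
                assignment.add(unassigned[0])  # forced (unit) literal
                changed = True
    return assignment

def dpll(clauses, assignment=frozenset()):
    assignment = unit_propagate(clauses, set(assignment))
    if assignment is None:
        return None  # this branch of the proof tree closes; backtrack
    variables = {abs(l) for c in clauses for l in c}
    unassigned_vars = variables - {abs(l) for l in assignment}
    if not unassigned_vars:
        return assignment  # model found
    v = min(unassigned_vars)  # decision: the split that focusing controls
    for lit in (v, -v):
        result = dpll(clauses, assignment | {lit})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x2) and (not x2 or x3)
print(dpll([{1, 2}, {-1, 2}, {-2, 3}]))  # {1, 2, 3}
```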

    Denial-of-Service Resistance in Key Establishment

    Denial of Service (DoS) attacks are an increasing problem for network-connected systems. Key establishment protocols are particularly vulnerable to DoS attack because they are typically required to perform computationally expensive cryptographic operations in order to authenticate the protocol initiator and to generate the cryptographic keying material that will subsequently be used to secure the communications between initiator and responder. The goal of DoS resistance in key establishment protocols is to ensure that an attacker cannot prevent a legitimate initiator and responder from deriving cryptographic keys without itself expending resources beyond a responder-determined threshold. In this work we review the strategies and techniques used to improve resistance to DoS attacks. Three key establishment protocols implementing DoS resistance techniques are critically reviewed, and the impact of misapplying the techniques on DoS resistance is discussed. Recommendations on effectively applying resistance techniques to key establishment protocols are made.
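
    One standard family of resistance techniques in this literature is the client puzzle: the responder forces the initiator to perform moderately expensive but easily verified work before the responder commits any expensive cryptographic computation. A minimal hash-based puzzle sketch follows; the difficulty parameter and the SHA-256 construction are illustrative assumptions, not details of the protocols reviewed in the paper.

```python
import hashlib
import os

# Hash-based client puzzle: the responder spends almost nothing to
# create and verify the puzzle; the initiator must brute-force a
# solution before the responder performs any expensive key-
# establishment crypto. Difficulty is a responder-determined
# threshold (illustrative value here).

DIFFICULTY_BITS = 16

def make_puzzle() -> bytes:
    return os.urandom(16)  # fresh nonce binds the puzzle to this session

def leading_zero_bits(digest: bytes) -> int:
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()
        break
    return bits

def solve(nonce: bytes) -> int:
    """Initiator's work: find a counter giving enough leading zeros."""
    counter = 0
    while True:
        digest = hashlib.sha256(nonce + counter.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= DIFFICULTY_BITS:
            return counter
        counter += 1

def verify(nonce: bytes, counter: int) -> bool:
    """Responder's check: a single hash, O(1) cost."""
    digest = hashlib.sha256(nonce + counter.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= DIFFICULTY_BITS

nonce = make_puzzle()
solution = solve(nonce)         # expensive for the initiator
assert verify(nonce, solution)  # cheap for the responder
# Only now would the responder run Diffie-Hellman, signatures, etc.
```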

    When is it Better to Compare than to Score?

    When eliciting judgements from humans for an unknown quantity, one often has the choice of making direct-scoring (cardinal) or comparative (ordinal) measurements. In this paper we study the relative merits of either choice, providing empirical and theoretical guidelines for the selection of a measurement scheme. We provide empirical evidence, based on experiments on Amazon Mechanical Turk, that in a variety of tasks, (pairwise-comparative) ordinal measurements have lower per-sample noise and are typically faster to elicit than cardinal ones. Ordinal measurements, however, typically provide less information. We then consider the popular Thurstone and Bradley-Terry-Luce (BTL) models for ordinal measurements and characterize the minimax error rates for estimating the unknown quantity. We compare these minimax error rates to those under cardinal measurement models and quantify for what noise levels ordinal measurements are better. Finally, we revisit the data collected from our experiments and show that fitting these models confirms this prediction: for tasks where the noise in ordinal measurements is sufficiently low, the ordinal approach yields smaller estimation errors.
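
    The trade-off can be illustrated with a toy simulation (ours, not the authors' Mechanical Turk setup): estimating the quality gap between two items from n noisy cardinal scores versus n pairwise comparisons generated by the BTL model, inverting the empirical win rate through the logit link.

```python
import numpy as np

# Toy cardinal-vs-ordinal comparison in the spirit of the paper's
# question. All parameters below are illustrative assumptions.

rng = np.random.default_rng(0)
w1, w2 = 1.0, 0.3           # true item qualities (assumed)
gap = w1 - w2
n, trials = 200, 2000       # samples per scheme, Monte Carlo trials
sigma = 1.0                 # per-rating cardinal noise (assumed)

def logit(p):
    return np.log(p / (1 - p))

cardinal_err, ordinal_err = [], []
for _ in range(trials):
    # Cardinal: noisy direct scores; estimate the gap by sample means.
    s1 = w1 + sigma * rng.standard_normal(n)
    s2 = w2 + sigma * rng.standard_normal(n)
    cardinal_err.append((s1.mean() - s2.mean() - gap) ** 2)

    # Ordinal: n pairwise comparisons under the BTL model;
    # invert the empirical win rate through the logit link.
    p_win = 1 / (1 + np.exp(-gap))
    wins = rng.random(n) < p_win
    p_hat = np.clip(wins.mean(), 1 / n, 1 - 1 / n)  # avoid logit(0 or 1)
    ordinal_err.append((logit(p_hat) - gap) ** 2)

print("cardinal MSE:", np.mean(cardinal_err))
print("ordinal  MSE:", np.mean(ordinal_err))
# Which scheme wins depends on the cardinal noise level sigma,
# matching the paper's qualitative conclusion.
```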

    The emergence of choice: Decision-making and strategic thinking through analogies

    Consider the game of chess: when faced with a complex scenario, how does understanding arise in one's mind? How does one integrate disparate cues into a global, meaningful whole? How do humans avoid the combinatorial explosion? How are abstract ideas represented? The purpose of this paper is to propose a new computational model of human chess intuition and intelligence. We suggest that analogies and abstract roles are crucial to solving these landmark problems. We present a proof-of-concept model, in the form of a computational architecture, which may be able to account for many crucial aspects of human intuition, such as (i) concentration of attention on relevant aspects, (ii) how humans may avoid the combinatorial explosion, (iii) perception of similarity at a strategic level, and (iv) a state of meaningful anticipation over how a global scenario may evolve.
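
    A toy fragment (our illustration, not the authors' architecture) of what abstract roles can buy: if positions are represented by role relations rather than concrete piece placements, strategically analogous positions score as similar even when no square matches.

```python
# Toy illustration: represent a chess position by abstract role
# relations instead of concrete piece placements, so strategically
# analogous positions look alike even when no piece placement matches.

def strategic_similarity(pos_a, pos_b):
    """Jaccard similarity over (relation, role, role) triples."""
    a, b = set(pos_a), set(pos_b)
    return len(a & b) / len(a | b)

# Different boards, same strategic skeleton: a pinned knight and an
# overloaded queen defending a rook.
pos_a = [("pins", "bishop", "knight"), ("overloads", "queen", "rook")]
pos_b = [("pins", "rook", "knight"),   ("overloads", "queen", "rook")]

print(strategic_similarity(pos_a, pos_b))  # 0.333...: one of three triples shared
```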

    Gradual Liquid Type Inference

    Liquid typing provides a decidable refinement inference mechanism that is convenient but subject to two major issues: (1) inference is global and requires top-level annotations, making it unsuitable for inference of modular code components and prohibiting its applicability to library code, and (2) inference failure results in obscure error messages. These difficulties seriously hamper the migration of existing code to use refinements. This paper shows that gradual liquid type inference, a novel combination of liquid inference and gradual refinement types, addresses both issues. Gradual refinement types, which support imprecise predicates that are optimistically interpreted, can be used in argument positions to constrain liquid inference so that the global inference process effectively infers modular specifications usable for library components. Dually, when gradual refinements appear as the result of inference, they signal an inconsistency in the use of static refinements. Because liquid refinements are drawn from a finite set of predicates, in gradual liquid type inference we can enumerate the safe concretizations of each imprecise refinement, i.e. the static refinements that justify why a program is gradually well-typed. This enumeration is useful for static liquid type error explanation, since the safe concretizations exhibit all the potential inconsistencies that lead to static type errors. We develop the theory of gradual liquid type inference and explore its pragmatics in the setting of Liquid Haskell. (Comment: to appear at OOPSLA 2018.)
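
    The enumeration idea can be sketched concretely. In the toy below (ours, not the Liquid Haskell implementation), refinements are drawn from a small finite predicate set, and validity of implications is brute-forced over a bounded integer domain, standing in for the SMT queries a real liquid type checker would issue.

```python
# Toy sketch of enumerating safe concretizations of an imprecise
# refinement '?'. Liquid refinements come from a finite predicate set,
# so we can try each predicate and keep those under which the program
# still type-checks. Real systems discharge the implications with an
# SMT solver; we brute-force them over a small integer domain.

PREDICATES = {
    "v > 0":  lambda v: v > 0,
    "v >= 0": lambda v: v >= 0,
    "v /= 0": lambda v: v != 0,
    "True":   lambda v: True,
}

DOMAIN = range(-100, 101)  # stand-in for SMT validity checking

def implies(p, q):
    """Does predicate p imply predicate q over the toy domain?"""
    return all(q(v) for v in DOMAIN if p(v))

# Suppose inference must justify a call `safe_div(n, d)` where the
# divisor's declared type is {v | v /= 0} and the argument d carries
# the gradual refinement {v | ?}. The safe concretizations of '?' are
# exactly the predicates strong enough to imply the requirement.
requirement = PREDICATES["v /= 0"]
safe = [name for name, p in PREDICATES.items() if implies(p, requirement)]
print(safe)  # ['v > 0', 'v /= 0']: these explain why the call can type-check
```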

    Gradual sub-lattice reduction and a new complexity for factoring polynomials

    We present a lattice algorithm specifically designed for some classical applications of lattice reduction. The applications are for lattice bases with a generalized knapsack-type structure, where the target vectors are boundably short. For such applications, the complexity of the algorithm improves on traditional lattice reduction by replacing some dependence on the bit-length of the input vectors with some dependence on the bound for the output vectors. If the bit-length of the target vectors is unrelated to the bit-length of the input, then our algorithm is only linear in the bit-length of the input entries, an improvement over the quadratic complexity of floating-point LLL algorithms. To illustrate the usefulness of this algorithm we show that a direct application to factoring univariate polynomials over the integers leads to the first complexity bound improvement since 1984. A second application is algebraic number reconstruction, where a new complexity bound is obtained as well.
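
    For orientation, the classical baseline being improved is LLL-style reduction. The following is a compact exact-arithmetic LLL sketch, not the paper's gradual sub-lattice algorithm; it recomputes the Gram-Schmidt data wholesale for simplicity.

```python
from fractions import Fraction

# Classical LLL lattice reduction (delta = 3/4) in exact rational
# arithmetic: the traditional baseline for the applications discussed.
# This sketch is NOT the paper's gradual sub-lattice algorithm.

def dot(u, v):
    return sum(Fraction(a) * Fraction(b) for a, b in zip(u, v))

def gram_schmidt(B):
    """Return orthogonalized vectors B* and coefficients mu."""
    n = len(B)
    Bs, mu = [], [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        v = [Fraction(x) for x in B[i]]
        for j in range(i):
            mu[i][j] = dot(B[i], Bs[j]) / dot(Bs[j], Bs[j])
            v = [vi - mu[i][j] * wj for vi, wj in zip(v, Bs[j])]
        Bs.append(v)
    return Bs, mu

def lll(B, delta=Fraction(3, 4)):
    B = [[Fraction(x) for x in b] for b in B]
    n, k = len(B), 1
    while k < n:
        for j in range(k - 1, -1, -1):         # size reduction
            _, mu = gram_schmidt(B)            # wasteful but simple
            q = round(mu[k][j])
            if q:
                B[k] = [bk - q * bj for bk, bj in zip(B[k], B[j])]
        Bs, mu = gram_schmidt(B)
        if dot(Bs[k], Bs[k]) >= (delta - mu[k][k - 1] ** 2) * dot(Bs[k - 1], Bs[k - 1]):
            k += 1                             # Lovasz condition holds
        else:
            B[k - 1], B[k] = B[k], B[k - 1]    # swap and step back
            k = max(k - 1, 1)
    return [[int(x) for x in b] for b in B]

print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))  # a basis of short vectors
```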

    Estimation from Pairwise Comparisons: Sharp Minimax Bounds with Topology Dependence

    Data in the form of pairwise comparisons arises in many domains, including preference elicitation, sporting competitions, and peer grading, among others. We consider parametric ordinal models for such pairwise comparison data involving a latent vector $w^* \in \mathbb{R}^d$ that represents the "qualities" of the $d$ items being compared; this class of models includes the two most widely used parametric models, the Bradley-Terry-Luce (BTL) and the Thurstone models. Working within a standard minimax framework, we provide tight upper and lower bounds on the optimal error in estimating the quality score vector $w^*$ under this class of models. The bounds depend on the topology of the comparison graph induced by the subset of pairs being compared, via its Laplacian spectrum. Thus, in settings where the subset of pairs may be chosen, our results provide principled guidelines for making this choice. Finally, we compare these error rates to those under cardinal measurement models and show that the error rates in the ordinal and cardinal settings have identical scalings apart from constant pre-factors. (Comment: 39 pages, 5 figures. Significant extension of arXiv:1406.661.)
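
    The topology dependence is concrete enough to act on: given a budget of pairs, candidate comparison graphs can be ranked by their Laplacian spectrum. The sketch below (illustrative; the paper's bounds involve more than a single eigenvalue) compares the algebraic connectivity of a path, a star, and a cycle on six items.

```python
import numpy as np

# Score candidate comparison topologies by the second-smallest
# Laplacian eigenvalue (algebraic connectivity): better-connected
# graphs predict smaller estimation error per the paper's bounds.

def laplacian(n, edges):
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1
        L[j, j] += 1
        L[i, j] -= 1
        L[j, i] -= 1
    return L

def algebraic_connectivity(n, edges):
    eigvals = np.linalg.eigvalsh(laplacian(n, edges))
    return eigvals[1]  # lambda_2; larger means better connected

n = 6
path  = [(i, i + 1) for i in range(n - 1)]   # 5 edges, weak topology
star  = [(0, i) for i in range(1, n)]        # also 5 edges
cycle = path + [(n - 1, 0)]                  # one extra edge

for name, edges in [("path", path), ("star", star), ("cycle", cycle)]:
    print(name, round(algebraic_connectivity(n, edges), 3))
# With the same budget of comparisons, the star yields a much larger
# lambda_2 than the path, predicting smaller estimation error.
```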

    Frictional Unemployment on Labor Flow Networks

    We develop an alternative theory to the aggregate matching function, in which workers search for jobs through a network of firms: the labor flow network. The lack of an edge between two companies indicates the impossibility of labor flows between them due to high frictions. In equilibrium, firms' hiring behavior correlates through the network, generating highly disaggregated local unemployment. Hence, aggregation depends on the topology of the network in non-trivial ways. This theory provides new micro-foundations for the Beveridge curve, wage dispersion, and the employer-size premium. We apply our model to employer-employee matched records and find that network topologies with Pareto-distributed connections cause disproportionately large changes in aggregate unemployment under high labor supply elasticity.
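
    A stripped-down agent-based sketch of the mechanism (all parameters are our assumptions; the paper works with employer-employee matched records and an equilibrium model): unemployed workers may only move to firms adjacent to their last employer, so unemployment becomes a local, topology-dependent outcome.

```python
import random
from collections import Counter

# Toy simulation: job search constrained to the labor flow network.
# Firm count, capacity, separation rate, and horizon are illustrative.

random.seed(1)
N_FIRMS, CAPACITY, STEPS, SEP_RATE = 300, 8, 300, 0.05

def preferential_attachment(n, m=2):
    """Build a network whose degree distribution has a Pareto-like tail."""
    adj = {i: set() for i in range(n)}
    adj[0].add(1); adj[1].add(0)
    stubs = [0, 1]
    for new in range(2, n):
        for t in set(random.choices(stubs, k=m)):
            adj[new].add(t); adj[t].add(new)
            stubs.extend([new, t])
    return adj

adj = preferential_attachment(N_FIRMS)

# Workers are employed at a firm or unemployed, remembering their last firm.
n_workers = N_FIRMS * CAPACITY // 2
employer = [random.randrange(N_FIRMS) for _ in range(n_workers)]
unemployed_at = {}  # worker -> last firm

for _ in range(STEPS):
    headcount = Counter(e for e in employer if e is not None)
    for w in range(n_workers):
        if employer[w] is not None and random.random() < SEP_RATE:
            unemployed_at[w] = employer[w]        # separation
            employer[w] = None
        elif employer[w] is None:
            # Search restricted to firms linked to the last employer.
            target = random.choice(list(adj[unemployed_at[w]]))
            if headcount[target] < CAPACITY:       # vacancy: hire
                employer[w] = target
                headcount[target] += 1
                del unemployed_at[w]

u_rate = sum(e is None for e in employer) / n_workers
print(f"unemployment on the Pareto-tailed network: {u_rate:.1%}")
```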