
    A Divergence Critic for Inductive Proof

    Inductive theorem provers often diverge. This paper describes a simple critic, a computer program which monitors the construction of inductive proofs, attempting to identify diverging proof attempts. Divergence is recognized by means of a "difference matching" procedure. The critic then proposes lemmas and generalizations which "ripple" these differences away so that the proof can go through without divergence. The critic enables the theorem prover Spike to prove many theorems completely automatically from the definitions alone.
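
    As a rough illustration of the idea (not Spike's actual machinery), the sketch below flags divergence when each new subgoal strictly embeds its predecessor; homeomorphic embedding is used as a crude stand-in for the paper's difference-matching procedure, and the term encoding and all names are ours.

```python
# A minimal sketch, assuming terms encoded as nested tuples:
# ("s", ("s", "x")) stands for s(s(x)).

def head(t):
    return t[0] if isinstance(t, tuple) else t

def args(t):
    return t[1:] if isinstance(t, tuple) else ()

def embeds(small, big):
    """small occurs in big once some surrounding context in big
    (the 'differences' a critic would ripple away) is deleted."""
    if small == big:
        return True
    if any(embeds(small, a) for a in args(big)):   # delete big's outer context
        return True
    return (head(small) == head(big)               # or match argument-wise
            and len(args(small)) == len(args(big))
            and all(embeds(s, b) for s, b in zip(args(small), args(big))))

def diverging(goals, window=3):
    """Flag divergence when each of the last few subgoals strictly
    embeds its predecessor: the prover keeps producing ever-larger
    copies of the same goal instead of closing it."""
    recent = goals[-window:]
    return len(recent) == window and all(
        embeds(a, b) and a != b for a, b in zip(recent, recent[1:]))

# Successive subgoals of a diverging induction:
gs = [("plus", "x", "0"),
      ("plus", ("s", "x"), "0"),
      ("plus", ("s", ("s", "x")), "0")]
print(diverging(gs))  # True: here a critic would propose a lemma
```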

    Using a generalisation critic to find bisimulations for coinductive proofs

    Coinduction is a method of growing importance in reasoning about functional languages, due to the increasing prominence of lazy data structures. Through the use of bisimulations, and proofs that observational equivalence is a congruence in various domains, it can be used to prove the congruence of two processes. Several proof tools have been developed to aid coinductive proofs, but all require user interaction. Crucially, they require the user to supply an appropriate relation which the system can then prove to be a bisimulation. A method is proposed which uses the idea of proof plans to make a heuristic guess at a suitable relation. If the proof fails for that relation, the reasons for failure are analysed using a proof critic and a new relation is proposed to allow the proof to go through.
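
    To make the bisimulation condition itself concrete (the paper's actual contribution, the proof-plan heuristic for guessing the relation, is not reproduced here), the following sketch computes the largest bisimulation between two finite labelled transition systems by brute-force pruning; the LTS encoding and all names are ours, not the tool's.

```python
# Labelled transition systems as sets of (state, label, successor) triples.

def moves(lts, p):
    """All (label, successor) moves available from state p."""
    return {(a, q) for (s, a, q) in lts if s == p}

def matched(p, q, lts_p, lts_q, rel):
    """Every move of p can be mirrored by q into a related state."""
    return all(any((p2, q2) in rel for (b, q2) in moves(lts_q, q) if b == a)
               for (a, p2) in moves(lts_p, p))

def largest_bisimulation(lts1, lts2, states1, states2):
    """Start from the full relation and prune pairs until both
    simulation conditions hold; what survives is a bisimulation."""
    rel = {(p, q) for p in states1 for q in states2}
    changed = True
    while changed:
        changed = False
        for (p, q) in list(rel):
            rev = {(y, x) for (x, y) in rel}
            if not (matched(p, q, lts1, lts2, rel)
                    and matched(q, p, lts2, lts1, rev)):
                rel.discard((p, q))
                changed = True
    return rel

# Two one-state clocks that tick forever are bisimilar:
c1 = {("s0", "tick", "s0")}
c2 = {("t0", "tick", "t0")}
print(("s0", "t0") in largest_bisimulation(c1, c2, {"s0"}, {"t0"}))  # True
```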

    AlignFlow: Cycle Consistent Learning from Multiple Domains via Normalizing Flows

    Given datasets from multiple domains, a key challenge is to efficiently exploit these data sources for modeling a target domain. Variants of this problem have been studied in many contexts, such as cross-domain translation and domain adaptation. We propose AlignFlow, a generative modeling framework that models each domain via a normalizing flow. The use of normalizing flows allows for a) flexibility in specifying learning objectives via adversarial training, maximum likelihood estimation, or a hybrid of the two; and b) learning and exact inference of a shared representation in the latent space of the generative model. We derive a uniform set of conditions under which AlignFlow is marginally-consistent for the different learning objectives. Furthermore, we show that AlignFlow guarantees exact cycle consistency in mapping datapoints from a source domain to the target and back to the source. Empirically, AlignFlow outperforms relevant baselines on image-to-image translation and unsupervised domain adaptation, and can be used to simultaneously interpolate across the various domains using the learned representation.
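
    The exact cycle-consistency guarantee follows from invertibility: each domain is modelled by a flow into a shared latent space, so the A-to-B map is the composition of f_A with the inverse of f_B, and the round trip composes to the identity. A toy numerical sketch, with trivially invertible affine maps standing in for the learned flows:

```python
import numpy as np

class AffineFlow:
    """x -> a*x + b, elementwise and trivially invertible (a != 0);
    a toy stand-in for a learned normalizing flow."""
    def __init__(self, a, b):
        self.a, self.b = a, b
    def forward(self, x):   # data -> shared latent
        return self.a * x + self.b
    def inverse(self, z):   # shared latent -> data
        return (z - self.b) / self.a

f_A = AffineFlow(a=2.0, b=-1.0)   # flow for domain A
f_B = AffineFlow(a=0.5, b=3.0)    # flow for domain B

def a_to_b(x):
    return f_B.inverse(f_A.forward(x))

def b_to_a(y):
    return f_A.inverse(f_B.forward(y))

x = np.array([0.3, -1.2, 4.0])
print(np.allclose(b_to_a(a_to_b(x)), x))  # True: exact cycle consistency
```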

    Best-First Rippling

    Rippling is a form of rewriting that guides search by only performing steps that reduce the syntactic differences between formulae. Termination is normally ensured by a measure that decreases with each rewrite step. Because of this restriction, rippling will fail to prove theorems about, for example, mutual recursion, where steps that temporarily increase the differences are necessary. Best-first rippling is an extension of rippling in which these restrictions have been recast as heuristic scores for use in best-first search: if nothing better is available, previously illegal steps can be considered, making best-first rippling more flexible than ordinary rippling. We have implemented best-first rippling in the IsaPlanner system, together with a mechanism for caching proof states that helps remove symmetries in the search space, and machinery to ensure termination based on term embeddings. Our experiments show that the implementation of best-first rippling is faster on average than IsaPlanner's version of traditional depth-first rippling, and solves a range of problems where ordinary rippling fails.
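
    A minimal sketch of the search strategy, with toy states, rewrites, and difference measure standing in for IsaPlanner's: measure-increasing steps are enqueued with a worse score rather than rejected outright, and a visited-state cache (mirroring the paper's proof-state caching) prunes duplicate states.

```python
import heapq

def best_first_ripple(start, goal, rewrites, measure, max_steps=10_000):
    """Best-first search over rewrite states: previously 'illegal'
    measure-increasing steps are still enqueued, just scored worse."""
    frontier = [(measure(start, goal), start)]
    seen = {start}                      # cache of visited proof states
    while frontier and max_steps:
        max_steps -= 1
        score, state = heapq.heappop(frontier)
        if state == goal:
            return state
        for step in rewrites(state):
            if step not in seen:        # prune symmetric duplicates
                seen.add(step)
                heapq.heappush(frontier, (measure(step, goal), step))
    return None

# Toy instance: rewrite a string toward a goal one character at a time.
def rewrites(s):
    return [s[:i] + c + s[i + 1:] for i in range(len(s)) for c in "abc"]

def measure(s, goal):                   # syntactic difference to the goal
    return sum(x != y for x, y in zip(s, goal))

print(best_first_ripple("aaaa", "abca", rewrites, measure))  # abca
```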