373 research outputs found

    Abstract Canonical Inference

    An abstract framework of canonical inference is used to explore how different proof orderings induce different variants of saturation and completeness. Notions like completion, paramodulation, saturation, redundancy elimination, and rewrite-system reduction are connected to proof orderings. Fairness of deductive mechanisms is defined in terms of proof orderings, distinguishing between (ordinary) "fairness," which yields completeness, and "uniform fairness," which yields saturation. Comment: 28 pages, no figures, to appear in ACM Trans. on Computational Logic

    On the Expressivity and Applicability of Model Representation Formalisms

    A number of first-order calculi employ an explicit model representation formalism for automated reasoning and for detecting satisfiability. Many of these formalisms can represent infinite Herbrand models. The first-order fragment of monadic, shallow, linear, Horn (MSLH) clauses is such a formalism, used in the approximation refinement calculus. Our first result is a finite model property for MSLH clause sets. Therefore, MSLH clause sets cannot represent models of clause sets with inherently infinite models. Through a translation to tree automata, we further show that this limitation also applies to the linear fragment of implicit generalizations, the formalism used in the model-evolution calculus; to atoms with disequality constraints, the formalism used in the non-redundant clause learning calculus (NRCL); and to atoms with membership constraints, a formalism used, for example, in decision procedures for algebraic data types. Although these formalisms cannot represent models of clause sets with inherently infinite models on their own, they can through an additional approximation step. This is our second main result. For clause sets including the definition of an equivalence relation, with the help of an additional, novel approximation called reflexive relation splitting, the approximation refinement calculus can automatically show satisfiability through the MSLH clause set formalism. Comment: 15 pages

    New results on rewrite-based satisfiability procedures

    Program analysis and verification require decision procedures to reason on theories of data structures. Many problems can be reduced to the satisfiability of sets of ground literals in a theory T. If a sound and complete inference system for first-order logic is guaranteed to terminate on T-satisfiability problems, any theorem-proving strategy with that system and a fair search plan is a T-satisfiability procedure. We prove termination of a rewrite-based first-order engine on the theories of records, integer offsets, integer offsets modulo, and lists. We give a modularity theorem stating sufficient conditions for termination on a combination of theories, given termination on each. The above theories, as well as others, satisfy these conditions. We introduce several sets of benchmarks on these theories and their combinations, including both parametric synthetic benchmarks to test scalability and real-world problems to test performance on huge sets of literals. We compare the rewrite-based theorem prover E with the validity checkers CVC and CVC Lite. Contrary to the folklore that a general-purpose prover cannot compete with reasoners with built-in theories, the experiments are overall favorable to the theorem prover, showing that the rewriting approach is not only elegant and conceptually simple, but also has important practical implications. Comment: To appear in the ACM Transactions on Computational Logic, 49 pages
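    The core task in such procedures, deciding satisfiability of a set of ground literals, can be illustrated in miniature. The following is a sketch, assuming only equalities and disequalities between constants (no function symbols, hence no congruence reasoning); the full rewrite-based procedures in the paper handle much richer theories:

    ```python
    # Toy satisfiability check for ground literals: equalities a = b and
    # disequalities a != b over constants, decided with union-find (a
    # much-simplified relative of rewrite-based ground reasoning).

    def find(parent, x):
        """Return the representative of x's equivalence class."""
        while parent.setdefault(x, x) != x:
            parent[x] = parent.setdefault(parent[x], parent[x])  # path halving
            x = parent[x]
        return x

    def satisfiable(equalities, disequalities):
        parent = {}
        for a, b in equalities:                  # merge classes for each a = b
            parent[find(parent, a)] = find(parent, b)
        # unsatisfiable iff some disequality a != b ended up in one class
        return all(find(parent, a) != find(parent, b) for a, b in disequalities)

    print(satisfiable([("a", "b"), ("b", "c")], [("a", "d")]))  # True
    print(satisfiable([("a", "b"), ("b", "c")], [("a", "c")]))  # False
    ```

    The examples encode {a = b, b = c, a != d} (satisfiable) and {a = b, b = c, a != c} (unsatisfiable).
    
    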

    Breakdown rates and macroinvertebrate colonisation of alder (Alnus glutinosa) leaves in an acid lake (Lake Orta, N Italy), before, during and after a liming intervention

    To test the effectiveness of the liming intervention on Lake Orta, the decay rate of leaves and their colonisation by macrobenthic fauna were studied on alder (Alnus glutinosa) leaves placed on the bottom of the lake and recovered at set time intervals. Experiments were performed at two sites (North and South) and two depths (-3 and -18 m) during three successive winters: 1988-1989 (pre-liming), 1989-1990 (liming), and 1990-1991 (post-liming). Two main results emerged: 1) alder leaves, which are known to decay at medium to high rates in many aquatic environments, behave in Lake Orta as a slowly decaying species; decay in the three years differed significantly only at station N3, where the mean breakdown rate in 1988-1989 was more than twice that measured in the two subsequent winters. 2) The species richness of the colonising benthic fauna is low: the community consists almost exclusively of Chironomidae, which form 70 to 100% of the whole population; among them, the genus Phaenopsectra is always present, while Tanytarsus was collected only during the first year and at the shallower sampling sites. Mean population abundances were higher before liming.
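    Breakdown rates in litter-bag studies such as this are conventionally estimated from an exponential decay model, m_t = m_0 e^(-kt). The following sketch, with made-up numbers, shows the computation; it is a generic illustration, not the paper's exact protocol:

    ```python
    import math

    # Estimate a leaf-litter breakdown rate k (per day) from the standard
    # exponential decay model m_t = m_0 * exp(-k * t), as commonly fitted
    # in litter-bag studies (generic sketch, hypothetical masses).

    def breakdown_rate(m0, mt, days):
        """Return k such that mt = m0 * exp(-k * days)."""
        return -math.log(mt / m0) / days

    # Example: 10 g of leaves reduced to 7 g after 60 days on the bottom
    k = breakdown_rate(10.0, 7.0, 60)
    print(round(k, 5))  # 0.00594
    ```

    A doubled breakdown rate, as reported at station N3 before liming, corresponds to the same mass loss being reached in half the time.
    
    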

    Hierarchic Superposition Revisited

    Many applications of automated deduction require reasoning in first-order logic modulo background theories, in particular some form of integer arithmetic. A major unsolved research challenge is to design theorem provers that are "reasonably complete" even in the presence of free function symbols ranging into a background theory sort. The hierarchic superposition calculus of Bachmair, Ganzinger, and Waldmann already supports such symbols, but, as we demonstrate, not optimally. This paper aims to rectify the situation by introducing a novel form of clause abstraction, a core component in the hierarchic superposition calculus for transforming clauses into a form needed for internal operation. We argue for the benefits of the resulting calculus and provide two new completeness results: one for the fragment where all background-sorted terms are ground, and another one for a special case of linear (integer or rational) arithmetic as a background theory.
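    Clause abstraction (purification) of the kind discussed can be sketched in miniature: background-sorted subterms occurring below free symbols are replaced by fresh variables with defining equations. The term representation and the signature split below are invented for illustration and are greatly simplified relative to the calculus:

    ```python
    from itertools import count

    # Minimal sketch of "purifying" clause abstraction in the spirit of
    # hierarchic superposition: compound background subterms occurring
    # below a free (foreground) symbol are pulled out into fresh variables
    # with defining equations. Terms are strings (variables/constants) or
    # (function, arg, ...) tuples; the signature split is a made-up example.

    BACKGROUND = {"+", "-", "1", "2"}  # assumed background (arithmetic) symbols
    fresh = count(1)                   # supply of abstraction variables y1, y2, ...

    def abstract(term, defs):
        """Return the abstracted term; append (variable, subterm) pairs to defs."""
        if isinstance(term, str):
            return term
        head, *args = term
        new_args = []
        for a in args:
            a = abstract(a, defs)
            # a compound background term under a free symbol is abstracted out
            if head not in BACKGROUND and not isinstance(a, str) and a[0] in BACKGROUND:
                v = f"y{next(fresh)}"
                defs.append((v, a))
                a = v
            new_args.append(a)
        return (head, *new_args)

    defs = []
    t = abstract(("f", ("+", "1", "x")), defs)  # f(1 + x)
    print(t, defs)  # ('f', 'y1') [('y1', ('+', '1', 'x'))]
    ```

    Read the output as replacing f(1 + x) by f(y1) together with the definition y1 = 1 + x, so that the free symbol f no longer sees an arithmetic term directly.
    
    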

    An inexpensive nonlinear medium for intense ultrabroadband pulse characterization

    The ability of pellets made of compressed iron iodate nanocrystals to frequency-double the whole visible spectrum is demonstrated. We suggest their use for the complete characterization of intense ultrabroadband laser pulses.

    Antenatal automatic diagnosis of cleft lip via unsupervised clustering method relying on 3D facial soft tissue landmarks

    Objectives: Ultrasound (US) is the first-choice device for detecting different types of facial dysmorphism. However, at present no standard protocol has been defined for automatic or semi-automatic diagnosis. Even though the practitioner's contribution is central, steps towards automation should be undertaken. We propose a methodology for diagnosing cleft lip on 3D US scans. Methods: A bounded-depth minimum Steiner tree (D-MST) clustering algorithm is proposed for discriminating groups of 3D US faces based on the presence or absence of a cleft lip. Differential geometry of the 3D facial surfaces is used to extract landmarks. The extracted geometric information then feeds the unsupervised clustering algorithm, which produces the classification. The clustering returns the probability of being affected by the pathology, allowing physicians to focus their attention on at-risk individuals for further analysis. Results: Feasibility is tested on the available 3D US scan data and then investigated in depth on a large dataset of adult individuals. The 3D facial Bosphorus database is chosen for testing, to which seven cleft-lip-affected individuals are added by artificially creating the defect. The algorithm correctly separates left- and right-sided cleft lips, while healthy individuals form a single cluster; the method thus shows accurate diagnostic results. Conclusions: Even if further testing must be performed on tailored datasets made exclusively of fetal images, this technique gives strong hints for a future tailored algorithm. It also fosters the investigation of a scientific formalisation of the "normotype", i.e. the representative face of a class of individuals, collecting all the principal anthropometric facial measurements, in order to recognise a normal or syndromic fetus.

    A novel implantation technique for engineered osteo-chondral grafts

    We present a novel method to support precise insertion of engineered osteochondral grafts by pulling from the bone layer, thereby minimizing iatrogenic damage associated with direct manipulation of the cartilage layer. Grafts were generated by culturing human expanded chondrocytes on Hyaff®-11 meshes, sutured to Tutobone® spongiosa cylinders. Through the bone layer, shaped to imitate the surface contours of the talar dome, two sutures were applied: the first for anterograde implantation, to pull the graft into the defect, and the second for retrograde correction, in case of too deep an insertion. All grafts could be correctly positioned into osteochondral lesions created in cadaveric ankle joints, with good fit to the surrounding cartilage. Implants withstood short-term dynamic stability tests applied to the ankle joint without delamination or macroscopic damage. The developed technique, by allowing precise and stable positioning of osteochondral grafts without iatrogenic cartilage damage, is essential for the implantation of engineered tissues, where the cartilage layer is not fully mechanically developed, and could also be considered for conventional autologous osteochondral transplantation.

    Improving the Efficiency of Reasoning Through Structure-Based Reformulation

    We investigate the possibility of improving the efficiency of reasoning through structure-based partitioning of logical theories, combined with partition-based logical reasoning strategies. To this end, we provide algorithms for reasoning with partitions of axioms in first-order and propositional logic. We analyze the computational benefit of our algorithms and identify the parameters of a partitioning that influence the efficiency of computation: the number of symbols shared by a pair of partitions, the size of each partition, and the topology of the partitioning. Finally, we provide a greedy algorithm that automatically reformulates a given theory into partitions, exploiting these parameters.
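    The partitioning parameters highlighted in the abstract, pairwise shared symbols and partition sizes, are straightforward to compute for a propositional theory. A minimal sketch on hypothetical data (not the paper's greedy partitioning algorithm):

    ```python
    from itertools import combinations

    # Compute, for a partitioned propositional theory, the parameters the
    # abstract identifies as driving reasoning cost: the size of each
    # partition and the number of symbols shared by each pair of partitions.
    # Clauses are frozensets of (symbol, polarity) literals; the data below
    # is invented for illustration.

    def symbols(partition):
        """All propositional symbols occurring in a partition's clauses."""
        return {sym for clause in partition for (sym, _) in clause}

    def partition_stats(partitions):
        sizes = [len(p) for p in partitions]
        shared = {
            (i, j): len(symbols(partitions[i]) & symbols(partitions[j]))
            for i, j in combinations(range(len(partitions)), 2)
        }
        return sizes, shared

    # Partition 0: {a or not b};  Partition 1: {b or c, not c}
    p0 = [frozenset({("a", True), ("b", False)})]
    p1 = [frozenset({("b", True), ("c", True)}), frozenset({("c", False)})]
    print(partition_stats([p0, p1]))  # ([1, 2], {(0, 1): 1})
    ```

    Here the two partitions share the single symbol b; in partition-based reasoning, only clauses over such shared symbols need to cross the partition boundary.
    
    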