381 research outputs found

    Strict General Setting for Building Decision Procedures into Theorem Provers

    The efficient and flexible incorporation of decision procedures into theorem provers is very important for their successful use. There are several approaches for combining and augmenting decision procedures; some of them support handling of uninterpreted functions, congruence closure, lemma invocation, etc. In this paper we present a variant of one general setting for building decision procedures into theorem provers (the gs framework [18]). That setting is based on macro inference rules motivated by techniques used in different approaches, and it enables a simple description of different combination/augmentation schemes. Here we further develop and extend this setting with an ordering imposed on the macro inference rules. That ordering leads to a "strict setting", which makes implementing and using variants of well-known or new schemes within this framework an easy task even for a non-expert user. This setting also enables easy comparison of different combination/augmentation schemes and combination of their ideas.

    A General Setting for Flexibly Combining and Augmenting Decision Procedures


    Capturing Hiproofs in HOL Light

    Hierarchical proof trees (hiproofs for short) add structure to ordinary proof trees, by allowing portions of trees to be hierarchically nested. The additional structure can be used to abstract away from details, or to label particular portions to explain their purpose. In this paper we present two complementary methods for capturing hiproofs in HOL Light, along with a tool to produce web-based visualisations. The first method uses tactic recording, by modifying tactics to record their arguments and construct a hierarchical tree; this allows a tactic proof script to be modified. The second method uses proof recording, which extends the HOL Light kernel to record hierarchical proof trees alongside theorems. This method is less invasive, but requires care to manage the size of the recorded objects. We have implemented both methods, resulting in two systems: Tactician and HipCam.
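
The core idea of a hiproof, an ordinary proof tree whose subtrees can be grouped into labelled boxes, can be sketched as a small nested data structure. This is an illustrative sketch only, not the actual representation used by Tactician or HipCam; the tactic names are real HOL Light tactics, but their grouping here is invented.

```python
# Hypothetical sketch of a hiproof: Atomic nodes are ordinary proof
# steps, Box nodes add the hierarchical nesting/labelling the paper
# describes. Not the actual HOL Light representation.

class Atomic:
    def __init__(self, name):
        self.name = name

class Box:
    def __init__(self, label, children):
        self.label = label
        self.children = children

def flatten(node):
    """Abstract away the hierarchy, recovering the flat tactic sequence."""
    if isinstance(node, Atomic):
        return [node.name]
    return [n for child in node.children for n in flatten(child)]

# A labelled "arith" box groups two steps inside the main proof.
proof = Box("main", [Atomic("INTRO_TAC"),
                     Box("arith", [Atomic("REWRITE_TAC"),
                                   Atomic("ARITH_TAC")])])
print(flatten(proof))  # flattening hides the "arith" grouping
```

Flattening illustrates the abstraction direction: the hierarchy can be discarded to recover the underlying flat proof, while a visualiser would instead traverse the boxes to render the nesting.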

    The C++0x "Concepts" Effort

    C++0x is the working title for the revision of the ISO standard of the C++ programming language that was originally planned for release in 2009 but was delayed to 2011. The largest language extension in C++0x was "concepts", that is, a collection of features for constraining template parameters. In September of 2008, the C++ standards committee voted the concepts extension into C++0x, but then in July of 2009, the committee voted the concepts extension back out of C++0x. This article is my account of the technical challenges and debates within the "concepts" effort in the years 2003 to 2009. To provide some background, the article also describes the design space for constrained parametric polymorphism, or what is colloquially known as constrained generics. While this article is meant to be generally accessible, the writing is aimed toward readers with background in functional programming and programming language theory. This article grew out of a lecture at the Spring School on Generic and Indexed Programming at the University of Oxford, March 2010.

    Strategic Issues, Problems and Challenges in Inductive Theorem Proving

    (Automated) Inductive Theorem Proving (ITP) is a challenging field in automated reasoning and theorem proving. Typically, (Automated) Theorem Proving (TP) refers to methods, techniques and tools for automatically proving general (most often first-order) theorems. Nowadays, the field of TP has reached a certain degree of maturity, and powerful TP systems are widely available and used. The situation with ITP is strikingly different, in the sense that proving inductive theorems in an essentially automatic way is still a very challenging task, even for the most advanced existing ITP systems. Both in general TP and in ITP, strategies for guiding the proof search process are of fundamental importance, in automated as well as in interactive or mixed settings. In the paper we will analyze and discuss the most important strategic and proof search issues in ITP, compare ITP with TP, and argue why ITP is in a sense much more challenging. More generally, we will systematically isolate, investigate and classify the main problems and challenges in ITP w.r.t. automation, on different levels and from different points of view. Finally, based on this analysis we will present some theses about the state of the art in the field, possible criteria for what could be considered substantial progress, and promising lines of research for the future, towards (more) automated ITP.

    Author Index Volume 133 (1994)


    A sequential data assimilation approach for the joint reconstruction of mantle convection and surface tectonics

    With the progress of mantle convection modelling over the last decade, it now becomes possible to solve for the dynamics of the interior flow and the surface tectonics to first order. We show here that tectonic data (like surface kinematics and seafloor age distribution) and mantle convection models with plate-like behaviour can in principle be combined to reconstruct mantle convection. We present a sequential data assimilation method, based on suboptimal schemes derived from the Kalman filter, where surface velocities and seafloor age maps are not used as boundary conditions for the flow, but as data to assimilate. Two stages (a forecast followed by an analysis) are repeated sequentially to take into account data observed at different times. Whenever observations are available, an analysis infers the most probable state of the mantle at this time, considering a prior guess (supplied by the forecast) and the new observations at hand, using the classical best linear unbiased estimate. Between two observation times, the evolution of the mantle is governed by the forward model of mantle convection. This method is applied to synthetic 2-D spherical annulus mantle cases to evaluate its efficiency. We compare the reference evolutions to the estimations obtained by data assimilation. Two parameters control the behaviour of the scheme: the time between two analyses, and the amplitude of noise in the synthetic observations. Our technique proves to be efficient in retrieving temperature field evolutions provided the time between two analyses is 10 Myr. If the amplitude of the a priori error on the observations is large (30 per cent), our method provides a better estimate of surface tectonics than the observations, taking advantage of the information within the physics of convection.
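
The forecast/analysis cycle described above can be illustrated with the simplest possible instance: a scalar Kalman filter. This is a toy sketch under invented numbers, not the paper's scheme (which assimilates full 2-D convection fields with suboptimal Kalman-derived updates), but the two stages and the best linear unbiased estimate are the same in structure.

```python
# Toy sequential data assimilation with a scalar Kalman filter.
# All values are illustrative; the paper's state is a 2-D mantle
# temperature field, not a scalar.

def forecast(x_a, p_a, a=1.0, q=0.0):
    """Propagate the analysis state with the forward model x' = a*x,
    inflating the error variance by the model-noise variance q."""
    return a * x_a, a * a * p_a + q

def analysis(x_f, p_f, y, r):
    """Best linear unbiased estimate combining forecast and observation."""
    k = p_f / (p_f + r)                   # Kalman gain
    return x_f + k * (y - x_f), (1.0 - k) * p_f

# Truth is a constant state of 10; observations carry noise (variance r=1).
observations = [10.5, 9.7, 10.2, 9.9]
x, p = 0.0, 100.0                         # vague prior guess
for y in observations:
    x, p = forecast(x, p)                 # forecast stage
    x, p = analysis(x, p, y, r=1.0)       # analysis stage

print(round(x, 2), round(p, 2))           # estimate near 10, shrinking variance
```

Each assimilated observation pulls the state toward the truth and shrinks the error variance, which is exactly the behaviour the paper exploits between two observation times governed by the forward convection model.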

    The Magsat bibliography. Revision 1

    Publications related to the Magsat project, numbering 402 as of February 1991, are presented. Of these, 44 deal with analysis of the Earth's main magnetic field, 209 deal with analysis of the Earth's crustal field, 43 make use of Magsat-based main-field models, and 63 deal with analyses of the magnetic field originating external to the Earth. The remainder document the Magsat program, satellite, instruments, or data, or are review papers or books which use or refer to Magsat and its data. The bibliography is divided into two parts: the first lists all papers by first author, and the second is subdivided by topic.

    Gondwana's promises: German geologists in Antarctica between basic science and resource exploration in the late 1970s

    The 1970s were a crucial period of transition in polar science, when Antarctica, as the "continent defined by and for science" (Elzinga 1993), became intrinsically linked with economic interests and global environmental concerns. This shift towards a resource-oriented research agenda is examined in the case of the Federal Republic of Germany and its GANOVEX (German Antarctic North Victoria Land Expedition) expeditions. They began in 1979 and aimed to erase the last blank spots on the geological map of Antarctica and thus demonstrate that Germany could attain consultative status in the Antarctic Treaty System (ATS). In this context, scientists played a key role in negotiating the possibilities and limits of resource exploration in the late 1970s. I will discuss the so-called "Gondwana hypothesis" and its role in resource-driven research, and argue that global geopolitical interests in new resource potentials motivated the geological mapping of Antarctica.

    Kernel Spectral Clustering and applications

    In this chapter we review the main literature related to kernel spectral clustering (KSC), an approach to clustering cast within a kernel-based optimization setting. KSC represents a least-squares support vector machine based formulation of spectral clustering described by a weighted kernel PCA objective. Just as in the classifier case, the binary clustering model is expressed by a hyperplane in a high dimensional space induced by a kernel. In addition, the multi-way clustering can be obtained by combining a set of binary decision functions via an Error Correcting Output Codes (ECOC) encoding scheme. Because of its model-based nature, the KSC method encompasses three main steps: training, validation, testing. In the validation stage, model selection is performed to obtain tuning parameters, like the number of clusters present in the data. This is a major advantage compared to classical spectral clustering, where the determination of the clustering parameters is unclear and relies on heuristics. Once a KSC model is trained on a small subset of the entire data, it is able to generalize well to unseen test points. Beyond the basic formulation, sparse KSC algorithms based on the Incomplete Cholesky Decomposition (ICD) and L0, L1, L0 + L1, and Group Lasso regularization are reviewed. In that respect, we show how it is possible to handle large-scale data. Also, two possible ways to perform hierarchical clustering and a soft clustering method are presented. Finally, real-world applications such as image segmentation, power load time-series clustering, document clustering and big data learning are considered. (This chapter is a contribution to the book "Unsupervised Learning Algorithms".)
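
The classical spectral clustering that KSC reformulates can be sketched in a few lines: build an RBF kernel (affinity) matrix, form the symmetrically normalized Laplacian, and cut on the sign of the eigenvector of the second-smallest eigenvalue. This is the baseline method, not the LS-SVM/weighted-kernel-PCA formulation of KSC itself, and all parameter values here are illustrative.

```python
# Minimal classical spectral clustering sketch (baseline that KSC
# reformulates; not the LS-SVM/KSC model itself).
import numpy as np

def spectral_bipartition(points, sigma=1.0):
    X = np.asarray(points, dtype=float)
    # RBF (Gaussian) kernel matrix as pairwise affinities
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    # Symmetrically normalized Laplacian L = I - D^{-1/2} K D^{-1/2}
    d = K.sum(axis=1)
    Dhalf = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(X)) - Dhalf @ K @ Dhalf
    # Eigenvector of the second-smallest eigenvalue splits the data
    vals, vecs = np.linalg.eigh(L)
    return (vecs[:, 1] > 0).astype(int)

# Two well-separated point groups in the plane
pts = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1),
       (5.0, 5.0), (5.1, 5.0), (5.0, 5.1)]
labels = spectral_bipartition(pts)
print(labels)  # one label per group (sign of the eigenvector is arbitrary)
```

The chapter's point about model selection shows up even here: sigma and the number of clusters are left to heuristics in this baseline, whereas KSC tunes them in a dedicated validation stage and can assign out-of-sample points via its trained model.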