6 research outputs found

    Case Studies in Proof Checking

    The aim of computer proof checking is not to find proofs but to verify them; this distinguishes it from automated deduction, which uses computers to find proofs that humans have not devised first. Currently, checking a proof by computer means taking a known mathematical proof, entering it into the special language recognized by a proof verifier program, and then running the verifier in the hope of obtaining no errors. Of course, even if the proof checker approves the proof, the question remains whether the checker itself is correct, a question complicated by the number of systems that have sprung into being. The two main challenges in using a proof checker today are the time needed to learn the syntax and general usage of the system, and the time needed to formalize a proof in the system even when the user is already proficient with it. As mathematicians are not yet using proof checkers regularly, we wanted to evaluate the validity of this reluctance by analyzing these main obstacles. Judging by Dr. Wiedijk’s Formalizing 100 Theorems list, which gives an overview of the headway various proof systems have made in mathematics, Coq and Mizar are two of the most successful systems in use today (Wiedijk, 2007). I simultaneously formalized two fairly involved theorems in these two systems while at approximately the same level of familiarity with each, kept track of my experiences learning the systems, and analyzed their comparative strengths and weaknesses. The analysis and summary of experiences should also give a general idea of the current state of computer-aided proof checking.
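    To give a concrete sense of what "entering a proof into the special language of a verifier" looks like, here is a minimal machine-checked proof, written in Lean purely for illustration (the study itself used Coq and Mizar):

```lean
-- A tiny machine-checked proof: commutativity of addition on naturals.
-- The verifier does not discover the argument; it only checks that the
-- term we supply really is a proof of the stated theorem.
theorem add_comm' (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b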

    Towards the formalization of the normalization properties of the λex system

    Undergraduate thesis (Trabalho de Conclusão de Curso)—Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, 2016. The λ-calculus is a formal system capable of expressing the computational process. Because of its simplicity and expressiveness, it is used as a theoretical model for the functional programming paradigm. Consequently, a great variety of extensions have been proposed with the goal of obtaining a formal system intermediate between the λ-calculus and its implementations. The object of study of this work is one such variant, called λex, a calculus with explicit substitutions proposed by Delia Kesner. This calculus is one of the first to preserve strong normalization of terms while permitting full composition of explicit substitutions. We continue the formalization of this calculus in the Coq proof assistant, initiated in 2014, whose goal is to provide a mechanical and constructive proof of the strong normalization property for the λex calculus. More specifically, we began the proof of the IE property, which is key to the proof of the preservation of strong normalization for λex. This was done following the proof strategy in Kesner’s paper: we extended the formalization to mark certain terms that do not introduce normalization problems and defined reduction rules to deal with such terms. Finally, we proved the equivalence of these new rules with the original reduction rule of the system.
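    As an illustration of what a calculus with explicit substitutions looks like, here is a small Python sketch. It is an illustrative fragment only: these are not Kesner's λex rules verbatim, not the Coq development, and variable capture is ignored since the example uses distinct names.

```python
from dataclasses import dataclass

# Terms of a small lambda calculus with explicit substitutions,
# in the spirit of calculi like λex.

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class App:
    fun: object
    arg: object

@dataclass(frozen=True)
class Lam:
    var: str
    body: object

@dataclass(frozen=True)
class Sub:
    """Explicit substitution: Sub(t, x, u) is the *term* t[x := u]."""
    body: object
    var: str
    repl: object

def step(t):
    """One reduction at the root, or None if no rule applies there."""
    if isinstance(t, App) and isinstance(t.fun, Lam):
        # Beta creates an explicit substitution instead of substituting at once.
        return Sub(t.fun.body, t.fun.var, t.arg)
    if isinstance(t, Sub):
        if isinstance(t.body, Var):
            return t.repl if t.body.name == t.var else t.body  # x[x:=u] -> u ; y[x:=u] -> y
        if isinstance(t.body, App):
            return App(Sub(t.body.fun, t.var, t.repl),
                       Sub(t.body.arg, t.var, t.repl))
    return None

def step_anywhere(t):
    """Apply `step` at the root, else at the leftmost-outermost position inside."""
    s = step(t)
    if s is not None:
        return s
    if isinstance(t, App):
        s = step_anywhere(t.fun)
        if s is not None:
            return App(s, t.arg)
        s = step_anywhere(t.arg)
        if s is not None:
            return App(t.fun, s)
    if isinstance(t, Lam):
        s = step_anywhere(t.body)
        if s is not None:
            return Lam(t.var, s)
    if isinstance(t, Sub):
        s = step_anywhere(t.body)
        if s is not None:
            return Sub(s, t.var, t.repl)
        s = step_anywhere(t.repl)
        if s is not None:
            return Sub(t.body, t.var, s)
    return None

def reduce_fully(t, fuel=100):
    """Iterate steps; `fuel` keeps the sketch obviously terminating."""
    while fuel > 0:
        s = step_anywhere(t)
        if s is None:
            return t
        t, fuel = s, fuel - 1
    return t
```

    The point of the explicit-substitution style is visible in `step`: a beta step produces the term t[x:=u] rather than performing the substitution at once, and separate rules then propagate the substitution through the term, which is exactly where questions like preservation of strong normalization arise.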

    Don’t Mind the Formalization Gap: The Design and Usage of hs-to-coq

    Using proof assistants to perform formal, mechanical software verification is a powerful technique for producing correct software. However, the verification is time-consuming and limited to software written in the language of the proof assistant. As an approach to mitigating these drawbacks, this dissertation presents hs-to-coq, a tool for translating programs written in the Haskell programming language into the Coq proof assistant, along with its applications and a general methodology for using it to verify programs. By introducing edit files containing programmatic descriptions of code transformations, we provide the ability to flexibly adapt our verification goals to exist anywhere on the spectrum between “increased confidence” and “full functional correctness”.

    System for automatic proving of some classes of analytic inequalities

    This doctoral thesis develops SimTheP (Simple Theorem Prover), a system for the automatic proving of some classes of analytic inequalities. MTP (mixed trigonometric-polynomial) inequalities are considered as the basic class of studied inequalities. The thesis also presents some additional classes of analytic inequalities to which the system can be applied with certain additional steps. Several original algorithms were created for the system, such as an algorithm for finding the first positive root of a polynomial function based on Sturm’s theorem, an algorithm for finding the smallest appropriate degree of approximation by Taylor series, and an algorithm for sorting approximations. All algorithms are presented in pseudocode with detailed use-case scenarios. The workings of the system and its algorithms are illustrated on a large number of concrete analytic inequalities, some of which were open problems later solved by the system’s methods and published in renowned journals. The thesis also gives a detailed overview of the research area and of the issues surrounding theorem proving and automatic theorem provers: it considers the basic problems faced by users of most automatic provers and analyzes some problems related to the implementation of automatic theorem provers. One implementation of SimTheP was developed, and to assess its performance a side-by-side comparison with the MetiTarski prover was conducted.
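    The Sturm-based search for the first positive root mentioned above can be sketched in Python. This is a simplified illustration, not SimTheP's algorithm: the function names, the floating-point zero tolerance, and the bisection strategy are all assumptions.

```python
# Root counting via Sturm chains, plus bisection to isolate the first
# positive root. Coefficient lists are highest degree first.

def polyval(c, x):
    """Evaluate a polynomial (Horner's rule)."""
    r = 0.0
    for a in c:
        r = r * x + a
    return r

def polyder(c):
    """Differentiate: drop the constant term, scale by the old exponents."""
    n = len(c) - 1
    return [a * (n - i) for i, a in enumerate(c[:-1])]

def polyrem(num, den):
    """Remainder of polynomial long division (floating point, zero-tolerant)."""
    num = num[:]
    while len(num) >= len(den) and any(abs(a) > 1e-12 for a in num):
        if abs(num[0]) < 1e-12:
            num.pop(0)
            continue
        f = num[0] / den[0]
        for i in range(len(den)):
            num[i] -= f * den[i]
        num.pop(0)  # leading coefficient is now (numerically) zero
    while len(num) > 1 and abs(num[0]) < 1e-12:
        num.pop(0)
    return num

def sturm_chain(c):
    """p0 = p, p1 = p', then p_{k+1} = -rem(p_{k-1}, p_k) until a constant."""
    chain = [c, polyder(c)]
    while len(chain[-1]) > 1:
        chain.append([-a for a in polyrem(chain[-2], chain[-1])])
    return chain

def sign_changes(chain, x):
    signs = [v for v in (polyval(p, x) for p in chain) if abs(v) > 1e-12]
    return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

def first_positive_root(c, upper=1e6, tol=1e-9):
    """Sturm's theorem: V(a) - V(b) counts distinct real roots in (a, b].
    Bisect to squeeze the interval around the smallest positive root."""
    chain = sturm_chain(c)
    count = lambda a, b: sign_changes(chain, a) - sign_changes(chain, b)
    if count(0.0, upper) == 0:
        return None
    lo, hi = 0.0, upper
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if count(lo, mid) > 0:
            hi = mid
        else:
            lo = mid
    return hi

print(first_positive_root([1.0, -6.0, 11.0, -6.0]))  # (x-1)(x-2)(x-3): converges near 1.0
```

    Counting roots by sign changes rather than locating them numerically is what makes the Sturm approach robust: bisection only ever asks "how many roots lie in this half?", so it cannot skip over a root the way a naive sign-of-p search can when roots are clustered.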

    Gridfields: Model-Driven Data Transformation in the Physical Sciences

    Scientists' ability to generate and store simulation results is outpacing their ability to analyze them via ad hoc programs. We observe that these programs exhibit an algebraic structure that can be used to facilitate reasoning and improve performance. In this dissertation, we present a formal data model that exposes this algebraic structure, then implement the model, evaluate it, and use it to express, optimize, and reason about data transformations in a variety of scientific domains. Simulation results are defined over a logical grid structure that allows a continuous domain to be represented discretely in the computer. Existing approaches for manipulating these gridded datasets are incomplete. The performance of SQL queries that manipulate large numeric datasets is not competitive with that of specialized tools, and the up-front effort required to deploy a relational database makes them unpopular for dynamic scientific applications. Tools for processing multidimensional arrays can only capture regular, rectilinear grids. Visualization libraries accommodate arbitrary grids, but no algebra has been developed to simplify their use and afford optimization. Further, these libraries are data dependent—physical changes to data characteristics break user programs. We adopt the grid as a first-class citizen, separating topology from geometry and separating structure from data. Our model is agnostic with respect to dimension, uniformly capturing, for example, particle trajectories (1-D), sea-surface temperatures (2-D), and blood flow in the heart (3-D). Equipped with data, a grid becomes a gridfield. We provide operators for constructing, transforming, and aggregating gridfields that admit algebraic laws useful for optimization. We implement the model by analyzing several candidate data structures and incorporating their best features.
    We then show how to deploy gridfields in practice by injecting the model as middleware between heterogeneous, ad hoc file formats and a popular visualization library. In this dissertation, we define, develop, implement, evaluate, and deploy a model of gridded datasets that accommodates a variety of complex grid structures and a variety of complex data products. We evaluate the applicability and performance of the model using datasets from oceanography, seismology, and medicine and conclude that our model-driven approach offers significant advantages over the status quo.
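    The idea of a grid bound to data, with operators that cut topology and data together, can be sketched in Python. The class name, fields, and the `restrict` operator below are illustrative stand-ins, not the dissertation's actual model or API.

```python
from dataclasses import dataclass

# A minimal "gridfield" sketch: a grid (topology only, as nodes and edges)
# with data bound to its 0-cells.

@dataclass
class GridField:
    nodes: list   # 0-cells (node ids)
    edges: list   # 1-cells, as (node, node) pairs
    data: dict    # node id -> bound value (e.g. temperature)

    def restrict(self, pred):
        """Keep nodes whose data satisfies pred, and edges both of whose
        endpoints survive: topology and data are cut together, so the
        result is again a well-formed gridfield."""
        keep = {n for n in self.nodes if pred(self.data[n])}
        return GridField(
            nodes=[n for n in self.nodes if n in keep],
            edges=[(a, b) for a, b in self.edges if a in keep and b in keep],
            data={n: v for n, v in self.data.items() if n in keep},
        )

# Example: sea-surface temperatures bound to a 1-D grid of stations.
sst = GridField(nodes=[0, 1, 2],
                edges=[(0, 1), (1, 2)],
                data={0: 5.0, 1: 12.0, 2: 15.0})
warm = sst.restrict(lambda v: v > 10)   # nodes [1, 2], edges [(1, 2)]
```

    Composed restrictions obey an algebraic law: `restrict(p)` followed by `restrict(q)` yields the same gridfield as one restrict over the conjunction of `p` and `q`, the kind of identity an optimizer can exploit by merging passes over large datasets.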