
    Path-Based Program Repair

    We propose a path-based approach to program repair for imperative programs. Our repair framework takes as input a faulty program, a logic specification that is refuted, and a hint indicating where the fault may be located. An iterative abstraction refinement loop is then used to repair the program: in each iteration, the faulty program part is re-synthesized considering a symbolic counterexample, where the control flow is kept concrete but the data flow is symbolic. The appeal of the idea is two-fold: 1) the approach lazily considers candidate repairs and 2) the repairs are directly derived from the logic specification. In contrast to prior work, our approach is complete for programs with finitely many control-flow paths, i.e., the program is repaired if and only if it can be repaired at the specified fault location. Initial results for small programs indicate that the approach is useful for debugging programs in practice.
    Comment: In Proceedings FESCA 2015, arXiv:1503.0437
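    A minimal sketch of the iteration described above, assuming hypothetical helpers find_counterexample_path, resynthesize, and replace_region (illustrative stand-ins, not the paper's API): the loop keeps re-synthesizing the fault region against symbolic counterexample paths until the specification holds on every path or no repair exists at that location.

    # Illustrative sketch of the abstraction refinement loop; all helper
    # functions are hypothetical placeholders for the components the
    # abstract describes, not the authors' implementation.
    def path_based_repair(program, spec, fault_location):
        candidate = program
        seen_paths = []
        while True:
            # Look for a control-flow path on which the spec is refuted;
            # the path keeps control flow concrete and data flow symbolic.
            cex = find_counterexample_path(candidate, spec)        # hypothetical
            if cex is None:
                return candidate                                   # spec holds on all paths
            seen_paths.append(cex)
            # Re-synthesize the faulty region so every path seen so far
            # satisfies the specification.
            new_region = resynthesize(program, fault_location, seen_paths, spec)  # hypothetical
            if new_region is None:
                return None                                        # not repairable at this location
            candidate = replace_region(program, fault_location, new_region)       # hypothetical

    For programs with finitely many control-flow paths this loop terminates, which matches the completeness claim in the abstract.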

    Extending Nunchaku to Dependent Type Theory

    Nunchaku is a new higher-order counterexample generator based on a sequence of transformations from polymorphic higher-order logic to first-order logic. Unlike its predecessor Nitpick for Isabelle, it is designed as a stand-alone tool, with frontends for various proof assistants. In this short paper, we present some ideas to extend Nunchaku with partial support for dependent types and type classes, to make frontends for Coq and other systems based on dependent type theory more useful.
    Comment: In Proceedings HaTT 2016, arXiv:1606.0542

    Tool support for reasoning in display calculi

    We present a tool for reasoning in and about propositional sequent calculi. One aim is to support reasoning in calculi that contain a hundred rules or more, for which even relatively small pen-and-paper derivations become tedious and error-prone. As an example, we implement the display calculus D.EAK of dynamic epistemic logic. Second, we provide embeddings of the calculus in the theorem prover Isabelle for formalising proofs about D.EAK. As a case study we show that the solution of the muddy children puzzle is derivable for any number of muddy children. Third, there is a set of meta-tools that allows us to adapt the tool for a wide variety of user-defined calculi.

    Partition strategies for incremental Mini-Bucket

    Probabilistic graphical models such as Markov random fields and Bayesian networks provide powerful frameworks for knowledge representation and reasoning over models with large numbers of variables. Unfortunately, exact inference problems on graphical models are generally NP-hard, which has led to significant interest in approximate inference algorithms. Incremental mini-bucket is a framework for approximate inference that provides upper and lower bounds on the exact partition function by starting from a model with completely relaxed constraints, i.e. with the smallest possible regions, and incrementally adding larger regions to the approximation. Current approximate inference algorithms provide tight upper bounds on the exact partition function but loose or trivial lower bounds. This project focuses on researching partitioning strategies that improve the lower bounds obtained with mini-bucket elimination, working within the framework of incremental mini-bucket. We start from the idea that variables that are highly correlated should be reasoned about together, and we develop a strategy for region selection based on that idea. We implement the strategy and explore ways to improve it, and finally we measure the results obtained using the strategy and compare them to several baselines. We find that our strategy performs better than both of our baselines. We also rule out several possible explanations for the improvement.
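    A rough sketch of the region-selection idea described above, with correlation as a hypothetical pairwise correlation estimate (e.g. empirical mutual information) and candidate_regions as the set of regions that could be added next; both names are assumptions for illustration, not the project's actual interface.

    import itertools

    def pick_next_region(candidate_regions, correlation):
        # Score each candidate region by the total pairwise correlation of
        # its variables and return the highest-scoring one, so that highly
        # correlated variables end up being reasoned about together.
        def score(region):
            return sum(correlation(u, v) for u, v in itertools.combinations(region, 2))
        return max(candidate_regions, key=score)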

    Adapting Real Quantifier Elimination Methods for Conflict Set Computation

    The satisfiability problem in real closed fields is decidable. In the context of satisfiability modulo theories, the problem restricted to conjunctive sets of literals, that is, sets of polynomial constraints, is of particular importance. One of the central problems is the computation of good explanations of the unsatisfiability of such sets, i.e., obtaining a small subset of the input constraints whose conjunction is already unsatisfiable. We adapt two commonly used real quantifier elimination methods, cylindrical algebraic decomposition and virtual substitution, to provide such conflict sets and demonstrate the performance of our method in practice.
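    As background on what a conflict set is, a generic deletion-based minimisation over an abstract unsatisfiability check looks as follows; is_unsat is a hypothetical callback backed by a real-closed-field decision procedure. The paper's contribution is to obtain such sets directly from cylindrical algebraic decomposition and virtual substitution rather than by repeated solver calls.

    def minimize_conflict(constraints, is_unsat):
        # Deletion-based minimisation: try dropping each constraint in turn
        # and keep the reduced set whenever it stays unsatisfiable.  The
        # result is unsatisfiable and minimal with respect to removing any
        # single constraint.  `is_unsat` is a hypothetical decision callback.
        core = list(constraints)
        for c in list(core):
            trial = [d for d in core if d is not c]
            if is_unsat(trial):
                core = trial
        return core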

    Lightweight Formal Verification in Classroom Instruction of Reasoning about Functional Code

    In college courses dealing with material that requires mathematical rigor, the adoption of a machine-readable representation for formal arguments can be advantageous. Students can focus on a specific collection of constructs that are represented consistently. Examples and counterexamples can be evaluated. Assignments can be assembled and checked with the help of an automated formal reasoning system. However, usability and accessibility do not have a high priority and are not addressed sufficiently well in the design of many existing machine-readable representations and corresponding formal reasoning systems. In earlier work [Lap09], we attempt to address this broad problem by proposing several specific design criteria organized around the notion of a natural context: the sphere of awareness a working human user maintains of the relevant constructs, arguments, experiences, and background materials necessary to accomplish the task at hand. We report on our attempt to evaluate our proposed design criteria by deploying within the classroom a lightweight formal verification system designed according to these criteria. The lightweight formal verification system was used within the instruction of a common application of formal reasoning: proving by induction formal propositions about functional code. We present all of the formal reasoning examples and assignments considered during this deployment, most of which are drawn directly from an introductory text on functional programming. We demonstrate how the design of the system improves the effectiveness and understandability of the examples, and how it aids in the instruction of basic formal reasoning techniques. We make brief remarks about the practical and administrative implications of the system’s design from the perspectives of the student, the instructor, and the grader.
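    A typical instance of the kind of assignment described above (an illustrative example, not necessarily one from the deployment) is the proposition that the length of a concatenation equals the sum of the lengths, proved by structural induction; here the functional definitions are written in Python and the inductive argument is recorded in comments.

    def length(xs):
        # Length by structural recursion on the list.
        if not xs:
            return 0
        return 1 + length(xs[1:])

    def append(xs, ys):
        # Concatenation by structural recursion on the first list.
        if not xs:
            return ys
        return [xs[0]] + append(xs[1:], ys)

    # Proposition: length(append(xs, ys)) == length(xs) + length(ys).
    # Proof by induction on xs:
    #   Base case xs == []: length(append([], ys)) == length(ys)
    #                       == 0 + length(ys) == length([]) + length(ys).
    #   Step xs == [x] + rest, assuming the claim for rest:
    #       length(append(xs, ys)) == 1 + length(append(rest, ys))
    #                              == 1 + (length(rest) + length(ys))
    #                              == length(xs) + length(ys).

    assert length(append([1, 2], [3])) == length([1, 2]) + length([3])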

    Advances in Learning Bayesian Networks of Bounded Treewidth

    This work presents novel algorithms for learning Bayesian network structures with bounded treewidth. Both exact and approximate methods are developed. The exact method combines mixed-integer linear programming formulations for structure learning and treewidth computation. The approximate method consists in uniformly sampling k-trees (maximal graphs of treewidth k), and subsequently selecting, exactly or approximately, the best structure whose moral graph is a subgraph of that k-tree. Some properties of these methods are discussed and proven. The approaches are empirically compared to each other and to a state-of-the-art method for learning bounded treewidth structures on a collection of public data sets with up to 100 variables. The experiments show that our exact algorithm outperforms the state of the art, and that the approximate approach is fairly accurate.
    Comment: 23 pages, 2 figures, 3 tables
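    A high-level sketch of the sampling loop the abstract describes, with sample_uniform_ktree, best_structure_within, and score as hypothetical placeholders for the paper's components (uniform k-tree sampling, exact or approximate structure selection within a k-tree, and the learning score).

    def learn_bounded_treewidth(data, variables, k, num_samples, rng):
        # Sample k-trees uniformly and keep the best-scoring structure
        # whose moral graph is a subgraph of the sampled k-tree.  All
        # helpers are hypothetical stand-ins, not the authors' code.
        best_dag, best_score = None, float("-inf")
        for _ in range(num_samples):
            ktree = sample_uniform_ktree(variables, k, rng)   # hypothetical
            dag = best_structure_within(ktree, data)          # hypothetical
            s = score(dag, data)                              # hypothetical
            if s > best_score:
                best_dag, best_score = dag, s
        return best_dag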