
    Implementation methodology for using concurrent and collaborative approaches for theorem provers, with case studies of SAT and LCF style provers

    Theorem provers face challenges of size and complexity, fuelled by an increasing range of applications. The use of concurrent/distributed programming paradigms to engineer better theorem provers merits serious investigation, as it provides more processing power and opportunities for implementing novel approaches to theorem proving tasks hitherto infeasible in a sequential setting. Investigating these opportunities for two diverse theorem prover settings, with an emphasis on desirable implementation criteria, is the core focus of this thesis.

    Concurrent programming is notoriously error-prone and hard to debug and evaluate. Implementation approaches that promote easy prototyping, portability, incremental development, and effective isolation of design from implementation can therefore greatly aid experimentation with concurrent techniques for specific theorem proving tasks. This thesis explores one such approach: the prototypes are implemented in Alice ML, a functional programming language with support for concurrency and distribution, and programming abstractions are used to encapsulate the implementations of the concurrent techniques. The utility of this approach is illustrated via proof-of-concept prototypes for two diverse case studies, the propositional satisfiability problem (SAT) and LCF-style (first-order) theorem proving, addressing previously unexplored parallelisation opportunities for each.

    SAT: A novel hybrid approach, DPLL-Stalmarck, is developed and prototyped. It runs two complementary SAT algorithms, DPLL and Stalmarck's, asynchronously, using dynamic information exchange for co-operative solving; the interaction of the solvers is encapsulated as a programming abstraction (a minimal sketch of this pattern follows the abstract). Compared with the standalone DPLL solver, DPLL-Stalmarck shows significant performance gains on two of the three problem classes considered and comparable behaviour otherwise. As an exploratory research effort, a novel algorithm, Concurrent Stalmarck, is developed by applying concurrent techniques to Stalmarck's algorithm, and a proof-of-concept prototype is implemented; its parallel implementation of the saturation technique of Stalmarck's algorithm is likewise encapsulated as a programming abstraction.

    LCF: Programmable concurrent primitives enable concurrent techniques to be customised to specific theorem proving scenarios. This case study develops a multilayered approach to programmable, sound extensions of an LCF prover: programming abstractions implement the concurrent techniques; these are used to build novel tacticals (control structures for applying tactics) incorporating those techniques; and the tacticals in turn are used to build novel proof search procedures. The approach is implemented in a prototypical LCF-style first-order prover, using Alice ML. The new tacticals are: fastest-first; distributed composition; crossTalk, a novel tactic that uses dynamic, collaborative information exchange to handle unification across multiple sub-goals with shared meta-variables; and a tactic that performs simultaneous proof and refutation attempts on propositional (sub-)goals by invoking an external SAT solver (from the SAT case study) as a counter-example finder.

    Concrete theorem proving scenarios demonstrate the utility of these extensions, and the synthesis of a variety of automatic proof search procedures illustrates the scope of programmability and customisation enabled by the multilayered approach.
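
    The co-operative solving pattern this abstract describes can be pictured with a small racing sketch: two solvers attack the same formula asynchronously, publish derived facts for each other, and the first definite verdict wins (the same first-answer idea underlies the fastest-first tactical). The thesis implements this in Alice ML with futures and programming abstractions; the Python below is only a minimal illustration, and every identifier in it (run_solver, toy_dpll, toy_stalmarck) is a hypothetical stand-in.

    import threading
    import queue

    learned = queue.Queue()   # facts published by either solver for its partner
    verdict = queue.Queue()   # the first SAT/UNSAT answer to arrive wins

    def run_solver(name, solve, problem):
        # Wrap a solver so its learned facts and final answer are shared.
        def publish(fact):
            learned.put((name, fact))
        verdict.put((name, solve(problem, publish)))

    def toy_dpll(problem, publish):
        # Stand-in for a DPLL solver: pretend a unit literal was derived.
        publish("unit: x1")
        return "SAT"

    def toy_stalmarck(problem, publish):
        # Stand-in for Stalmarck's method: pretend an equivalence was derived.
        publish("equiv: x2 <-> ~x3")
        return "SAT"

    threads = [threading.Thread(target=run_solver, args=(n, s, "phi"))
               for n, s in (("dpll", toy_dpll), ("stalmarck", toy_stalmarck))]
    for t in threads:
        t.start()
    winner, answer = verdict.get()   # fastest-first: take whichever replies first
    print(winner, "answered", answer)
    for t in threads:
        t.join()
    while not learned.empty():       # facts the solvers shared mid-run
        print("shared:", *learned.get())

    The sketch only shows the plumbing; in DPLL-Stalmarck as described above, each solver additionally consumes the exchanged facts during solving rather than merely racing to an answer.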

    Algorithmic skeletons for exact combinatorial search at scale

    Exact combinatorial search is essential to a wide range of application areas, including constraint optimisation, graph matching, and computer algebra. Solutions to combinatorial problems are found by systematically exploring a search space, whether to enumerate solutions, to determine whether a specific solution exists, or to find an optimal solution. Combinatorial searches are computationally hard in both theory and practice, and efficiently exploring the huge number of combinations is a real challenge, often addressed using approximate search algorithms. Alternatively, exact search can be parallelised to reduce execution time. However, parallel search is challenging due to highly irregular search trees and sensitivity to search order, leading to anomalies that can cause unexpected speedups and slowdowns. As core counts continue to grow, parallel search becomes increasingly useful for improving the performance of existing searches and for allowing larger instances to be solved.

    A high-level approach to parallel search allows non-expert users to benefit from increasing core counts. Algorithmic skeletons provide reusable implementations of common parallelism patterns, parameterised with user code that determines the specific computation, e.g. a particular search. We define a set of skeletons for exact search that require the user to provide, in the minimal case, a single class specifying how the search tree is generated and a parameter specifying the type of search required (a sketch of this interface follows the abstract). The five skeletons are: Sequential search; three general-purpose parallel search methods, Depth-Bounded, Stack-Stealing, and Budget; and one specific parallel search method, Ordered, which guarantees replicable performance.

    The skeletons are implemented and evaluated in YewPar, a new C++ parallel search framework. YewPar provides both high-level skeletons and low-level search-specific schedulers and utilities to deal with the irregularity of search and with knowledge exchange between workers. It is based on the HPX library for distributed task parallelism, potentially allowing search to execute on multi-cores, clusters, clouds, and high-performance computing systems. Underpinning the skeleton design is a novel formal model, MT^3, a parallel operational semantics that describes multi-threaded tree traversals, allowing reasoning about parallel search, e.g. describing common parallel search phenomena such as performance anomalies.

    YewPar is evaluated using seven search applications (and over 25 specific instances): Maximum Clique, k-Clique, Subgraph Isomorphism, Travelling Salesperson, Binary Knapsack, Enumerating Numerical Semigroups, and the Unbalanced Tree Search Benchmark. The instances are evaluated at multiple scales, from 1 to 255 workers, on a 17-host, 272-core Beowulf cluster. The overheads of the skeletons are low, with a mean 6.1% slowdown compared to hand-coded sequential implementations. Crucially, for all search applications YewPar reduces search times by an order of magnitude, i.e. from hours/minutes to minutes/seconds, and average parallel efficiency above 60% is common for up to 255 workers. Comparing skeleton performance reveals that no one skeleton is best for all searches, highlighting a benefit of the skeleton approach: multiple parallelisations can be explored with minimal refactoring.

    The Ordered skeleton avoids slowdown anomalies, where a parallel search takes longer than a sequential one because search knowledge is order dependent. Analysis shows that, while Ordered is 41% slower on average (73% in the worst case) than Depth-Bounded, in nearly all cases it maintains the following replicable performance properties: (1) parallel executions are no slower than single-worker executions; (2) runtimes do not increase as workers are added; and (3) variance between repeated runs is low. In particular, where Ordered maintains a relative standard deviation (RSD) of less than 15%, Depth-Bounded suffers an RSD greater than 50%, showing the importance of carefully controlling search orders for repeatability.
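
    The skeleton interface described above — the user writes only the tree-generation code and picks a coordination method — can be sketched compactly. YewPar itself is C++ on HPX with work-stealing schedulers; the Python below merely illustrates the Depth-Bounded pattern (subtrees rooted above a cutoff depth become parallel tasks, deeper subtrees run sequentially), and all names in it are hypothetical.

    from concurrent.futures import ThreadPoolExecutor

    class NodeGenerator:
        # User-supplied part: how the children of a search-tree node are produced.
        def children(self, node):
            raise NotImplementedError

    def count_sequential(gen, node):
        # Plain sequential enumeration: count every node in the subtree.
        return 1 + sum(count_sequential(gen, c) for c in gen.children(node))

    def depth_bounded_count(gen, root, cutoff, workers=4):
        # Depth-Bounded enumeration skeleton: spawn a task per subtree down to
        # `cutoff`, then fall back to sequential search. (A real skeleton needs
        # a work-stealing scheduler so blocked tasks cannot starve the pool.)
        with ThreadPoolExecutor(max_workers=workers) as pool:
            def go(node, depth):
                if depth == cutoff:
                    return count_sequential(gen, node)
                futures = [pool.submit(go, c, depth + 1)
                           for c in gen.children(node)]
                return 1 + sum(f.result() for f in futures)
            return go(root, 0)

    class BinaryTree(NodeGenerator):
        # Toy instance: a node is its depth; a full binary tree of height 3.
        def children(self, node):
            return [node + 1, node + 1] if node < 3 else []

    print(depth_bounded_count(BinaryTree(), 0, cutoff=1))   # 2^4 - 1 = 15

    Swapping Depth-Bounded for a Stack-Stealing or Budget coordination would change only the skeleton choice, not the user's node-generator code — the "minimal refactoring" benefit the abstract refers to.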

    Proceedings of the 22nd Conference on Formal Methods in Computer-Aided Design – FMCAD 2022

    The Conference on Formal Methods in Computer-Aided Design (FMCAD) is an annual conference on the theory and applications of formal methods in hardware and system verification. FMCAD provides a leading forum for researchers in academia and industry to present and discuss groundbreaking methods, technologies, theoretical results, and tools for reasoning formally about computing systems. FMCAD covers formal aspects of computer-aided system design, including verification, specification, synthesis, and testing.

    A Hardware SAT Solver Using Non-chronological Backtracking and Clause Recording Without Overheads

    Methoden und Beschreibungssprachen zur Modellierung und Verifikation von Schaltungen und Systemen: MBMV 2015 - Tagungsband, Chemnitz, 03. - 04. März 2015

    The workshop on Methods and Description Languages for the Modelling and Verification of Circuits and Systems (MBMV 2015) takes place for the 18th time. This year's organisers are the Chair of Circuit and System Design at Technische Universität Chemnitz and the Steinbeis Research Centre for System Design and Test. The workshop aims to discuss the latest trends, results, and open problems in the field of modelling and verification methods, as well as description languages for digital, analogue, and mixed-signal circuits, and thus to serve as a forum for the exchange of ideas. It also offers a platform for exchange between research and industry, and for maintaining existing contacts and establishing new ones. It allows young scientists to present their ideas and approaches to a broad audience from academia and industry and to discuss them in depth during the event. Its long history has made it a fixture in many event calendars. Traditionally, the meetings of the ITG specialist groups are also attached to the workshop. This year, two projects funded by the Federal Ministry of Education and Research under the InnoProfile-Transfer initiative use the workshop to present their research results to a broad audience in two dedicated tracks. Representatives of the projects Generische Plattform für Systemzuverlässigkeit und Verifikation (GPZV) and GINKO - Generische Infrastruktur zur nahtlosen energetischen Kopplung von Elektrofahrzeugen present parts of their current work. This enriches the workshop with additional focus topics and provides a valuable complement to the authors' contributions. [... from the preface]

    Studies related to the process of program development

    The submitted work consists of a collection of publications arising from research carried out at Rhodes University (1970-1980) and at Heriot-Watt University (1980-1992). The theme of this research is the process of program development, i.e. the process of creating a computer program to solve some particular problem. The papers presented cover a number of topics relating to this process, viz. (a) programming methodology, including aspects of structured programming; (b) properties of programming languages; (c) formal specification of programming languages; (d) compiler techniques; (e) declarative programming languages; (f) program development aids; (g) automatic program generation; (h) databases; (i) algorithms and applications.

    Incorporating design explanation within formal object-oriented method (FOOM)

    Requirements engineering is an opening phase in the development of software applications or information systems. It is concerned with understanding and specifying the customer's requirements for the system to be delivered, and the literature agrees that it is one of the most crucial and, unfortunately, problematic phases in development. Despite the diversity of research directions, approaches, and methods, understanding and management of the process remain limited. Among contemporary approaches to improving current requirements engineering practice, the Formal Object-Oriented Method (FOOM) has been introduced as a promising solution. The FOOM approach to requirements engineering is based on a synthesis of socio-organisational theory, the object-oriented approach, and mathematical formal specification. The entire FOOM specification process is evolutionary and involves a large volume of changes in requirements: requirements evolve through informal, semi-formal, and formal forms while maintaining semantic links between these forms and, most importantly, conforming to the customer's requirements. A deep understanding of the complexity of the requirements model and its dynamics is critical to improving the management of the requirements engineering process.

    This thesis investigates the benefits of documenting both the evolution of the requirements model and the rationale for that evolution. Design explanation explains and justifies the deliberations of, and decisions made during, the design activity. Here, design explanation is used to describe the requirements engineering process in order to improve understandability of, and traceability within, the evolving requirements specification. The design explanation recorded during this research project also assisted the researcher in gaining insights into the creative and opportunistic characteristics of the requirements engineering process.

    The thesis offers an interpretive investigation into incorporating design explanation within FOOM in order to extend and enhance the method. The researcher's interpretation and analysis of the collected data highlight an insight-driven and opportunistic process rather than a strictly and systematically predefined one. In fact, the process was not smoothly evolutionary, but involved occasional 'crisis' points at which the model was reconceptualised, simplified, and restructured. The contributions of the thesis therefore lie not only in an effective incorporation of design explanation within FOOM, but also in a deep understanding of the dynamic process of requirements engineering. This new understanding of the complexity of the requirements model and its dynamics suggests new directions for future research and forms a basis for a new approach to process management.