    Better Together: Unifying Datalog and Equality Saturation

    We present egglog, a fixpoint reasoning system that unifies Datalog and equality saturation (EqSat). Like Datalog, it supports efficient incremental execution, cooperating analyses, and lattice-based reasoning. Like EqSat, it supports term rewriting, efficient congruence closure, and extraction of optimized terms. We identify two recent applications, a unification-based pointer analysis in Datalog and an EqSat-based floating-point term rewriter, each of which has been hampered by features missing from Datalog but found in EqSat, or vice versa. We evaluate egglog by reimplementing those projects in it. The resulting systems are faster and simpler, and they fix bugs found in the originals.
    Comment: PLDI 202
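
    To make the EqSat half concrete: the core data structure is an e-graph, a union-find over equivalence classes of terms that is kept congruent after merges. The following is a minimal Python sketch of that general idea, not egglog's actual implementation; the rewrite x*2 -> x<<1 and all identifiers are invented for illustration.

```python
# A minimal e-graph sketch: union-find over e-class ids plus hash-consing,
# with a rebuild step that restores congruence after merges. Illustrative
# only; egglog's internals are more sophisticated.

class EGraph:
    def __init__(self):
        self.parent = {}    # e-class id -> union-find parent
        self.hashcons = {}  # canonical (op, child ids) -> e-class id

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def add(self, op, *children):
        node = (op, tuple(self.find(c) for c in children))
        if node in self.hashcons:
            return self.find(self.hashcons[node])
        cid = len(self.parent)
        self.parent[cid] = cid
        self.hashcons[node] = cid
        return cid

    def union(self, a, b):
        a, b = self.find(a), self.find(b)
        if a != b:
            self.parent[a] = b

    def rebuild(self):
        # Re-canonicalise nodes; merge e-classes that have become
        # congruent (same op, equal children) since the last rebuild.
        changed = True
        while changed:
            changed = False
            fresh = {}
            for (op, kids), cid in self.hashcons.items():
                node = (op, tuple(self.find(k) for k in kids))
                cid = self.find(cid)
                if node in fresh and fresh[node] != cid:
                    self.union(fresh[node], cid)
                    changed = True
                else:
                    fresh[node] = cid
            self.hashcons = fresh

eg = EGraph()
x, one, two = eg.add("x"), eg.add("1"), eg.add("2")
mul = eg.add("*", x, two)
shl = eg.add("<<", x, one)
eg.union(mul, shl)   # apply rewrite x*2 -> x<<1 by merging the e-classes
eg.rebuild()
assert eg.find(mul) == eg.find(shl)
```

    The deferred rebuild, applying merges in batches and restoring congruence afterwards, is the kind of batched fixpoint evaluation that makes the Datalog/EqSat unification natural.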

    Extending and Relating Semantic Models of Compensating CSP

    Business transactions involve multiple partners coordinating and interacting with each other. These transactions have hierarchies of activities which need to be orchestrated. Usual database approaches (e.g., checkpoint, rollback) are not applicable for handling faults in a long-running transaction, due to the interaction with multiple partners. The compensation mechanism handles faults that can arise in a long-running transaction. Based on the framework of Hoare's CSP process algebra, Butler et al. introduced Compensating CSP (cCSP), a language to model long-running transactions. The language introduces a method to declare a transaction as a process, and it has constructs for the orchestration of compensation. Butler et al. also define a trace semantics for cCSP. In this thesis, the semantic models of Compensating CSP are extended by defining an operational semantics, describing how the state of a program changes during its execution. The semantics is encoded in Prolog to animate the specification. The semantic models are further extended to define the synchronisation of processes. The notion of partial behaviour is defined to model the deadlock behaviour that arises during process synchronisation. A correspondence relationship is then defined between the semantic models and proved by structural induction. Proving the correspondence means that either presentation can be accepted as a primary definition of the meaning of the language, and each definition can be used correctly at different times and for different purposes. The semantic models and their relationships are mechanised using the theorem prover PVS. The semantic models are embedded in PVS using shallow embedding, and the relationships between them are proved by mutual structural induction. The mechanisation overcomes the problems of hand proofs and improves the scalability of the approach.
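
    Independently of the process-algebraic formalisation, the compensation mechanism itself is easy to illustrate. Below is a minimal Python sketch, under the assumption that a transaction is a sequence of forward/compensation pairs (informally mirroring cCSP's compensation pair P ÷ Q): on a fault, the compensations installed so far run in reverse order. All names are invented for illustration.

```python
# A saga-style sketch of compensation in long-running transactions:
# each step pairs a forward action with a compensation; on a fault,
# installed compensations run in reverse order. Not cCSP itself.

class Fault(Exception):
    pass

def run_transaction(steps):
    """steps: list of (forward, compensation) pairs of zero-arg callables."""
    installed = []
    try:
        for forward, compensate in steps:
            forward()
            installed.append(compensate)  # compensation becomes available
    except Fault:
        for compensate in reversed(installed):
            compensate()                  # undo in reverse order
        return "compensated"
    return "committed"

def book_flight():   print("flight booked")
def cancel_flight(): print("flight cancelled")
def book_hotel():    raise Fault("no rooms")
def cancel_hotel():  print("hotel cancelled")

print(run_transaction([(book_flight, cancel_flight),
                       (book_hotel, cancel_hotel)]))
# -> flight booked / flight cancelled / compensated
```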

    Exploiting Conceptual Modeling for Searching Genomic Metadata: A Quantitative and Qualitative Empirical Study

    Providing a common data model for the metadata of several heterogeneous genomic data sources is hard, as they do not share any standard or agreed practice for metadata description. Two years ago we managed to discover a subset of common metadata present in most sources and to organize it as a smart genomic conceptual model (GCM); the model has been instrumental to our efforts in the development of a major software pipeline for data integration. More recently, we developed a user-friendly search interface based on a simplified version of GCM. In this paper, we report our evaluation of the effectiveness of this new user interface. Specifically, we present the results of a compendious empirical study addressing the research question: how well is such a simple interface understood by a typical user? The target of this study is a mixed population composed of biologists, bioinformaticians, and computer scientists. The results show that the users were successful in producing search queries starting from their natural-language descriptions, doing so with good accuracy and a low error rate. The study also shows that most users were generally satisfied; it provides indications on how to improve our search system and how to continue our effort in the integration of genomic sources. We are consequently adapting the user interface, which will soon be opened to public use.
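
    As a hint of what "producing a search query" means here, the sketch below renders a metadata search as a conjunction of attribute predicates over records. The attribute names and the catalog are hypothetical and deliberately much simpler than the real GCM schema.

```python
# A hypothetical, simplified rendering of conceptual-model-backed metadata
# search: records carry a few GCM-like attributes, and a query is a
# conjunction of attribute = value predicates. Data and names are invented.

from dataclasses import dataclass

@dataclass
class MetadataRecord:
    source: str
    technique: str
    tissue: str
    species: str

CATALOG = [
    MetadataRecord("ENCODE", "ChIP-seq", "liver", "Homo sapiens"),
    MetadataRecord("TCGA", "RNA-seq", "breast", "Homo sapiens"),
    MetadataRecord("ENCODE", "ChIP-seq", "brain", "Mus musculus"),
]

def search(**predicates):
    """Conjunctive query over metadata attributes, e.g. the structured
    form of 'ChIP-seq data from human liver'."""
    return [r for r in CATALOG
            if all(getattr(r, k) == v for k, v in predicates.items())]

print(search(technique="ChIP-seq", species="Homo sapiens", tissue="liver"))
```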

    Software Testing of Parallel Programming Frameworks

    Parallel programming frameworks evolve rapidly to meet the performance demands of High Performance Computing (HPC) applications and the concurrent evolution of supercomputing-class system architectures. To meet this demand, standards and specifications that outline the semantics and required capabilities of parallel programming models are developed by committees of government and industry experts and then implemented by third parties. OpenMP and MPI are particularly prominent examples of such programming models and specifications, and both are in common use in the HPC world. Comprehensive testing is required to ensure that any given implementation adheres to the published standard. The type and degree of testing depend on the goals of the developers; in particular, commercial implementations developed by companies for specialized applications (like HPC) have much more stringent requirements than those for general applications. This thesis describes the development of test suites targeting a subset of OpenMP and MPI features, namely processor affinity and thread safety, as implemented in Cray compilers and libraries. These tests, whose focus was robustness, reusability, and detailed error output, contributed to software quality across the wide range of applications for which Cray compilers are used, and they continue to help Cray ensure correctness in its OpenMP and MPI implementations as its compilers and libraries evolve.
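
    The thread-safety side of such suites typically follows a stress-test pattern: run an operation concurrently from many threads and check an invariant that only a thread-safe implementation preserves. The Python sketch below shows that pattern on a toy counter; the real suites target OpenMP and MPI primitives (e.g., support levels such as MPI_THREAD_MULTIPLE), and this harness is illustrative only.

```python
# A minimal thread-safety stress-test pattern (illustrative; the Cray
# suites test OpenMP/MPI primitives, not this toy counter).

import threading

N_THREADS, ITERATIONS = 8, 10_000

def stress(operation, n_threads=N_THREADS, iterations=ITERATIONS):
    """Run `operation` concurrently from many threads; the caller then
    checks an invariant that only holds if the operation is thread-safe."""
    barrier = threading.Barrier(n_threads)   # maximise contention
    def worker():
        barrier.wait()
        for _ in range(iterations):
            operation()
    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads: t.start()
    for t in threads: t.join()

class SafeCounter:
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()
    def increment(self):
        with self._lock:        # remove the lock and the test can fail
            self.value += 1

counter = SafeCounter()
stress(counter.increment)
# Invariant: every increment was observed exactly once.
assert counter.value == N_THREADS * ITERATIONS, "lost update detected"
```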

    Modularization Approaches in the Context of Monolithic Simulations

    Quality attributes of a software system, such as reliability or performance, can decide its success or failure. In classical software engineering, these quality attributes can only be determined once the design process is complete and parts of the software system have been implemented. Computer simulations, however, make it possible to estimate these values already during the software design phase. Simulations are created to analyse particular aspects of a system, and the representation of the system is specialised to that analysis. This specialisation often results in a monolithic structure of the simulation. Such a structure, however, can negatively affect the maintainability of the simulation and degrade the understandability and reusability of the system representation. The drawbacks of a monolithic structure can be reduced through the concept of modularisation, in which a problem is decomposed into smaller subproblems; this decomposition allows the subproblems to be better understood and handled. This thesis presents an approach for describing the coupling of newly developed or already existing simulations into a modular simulation. The approach consists of a domain-specific language (DSL) developed with model-driven technologies. The DSL is applied in a case study to describe the coupling of two simulations; the coupling of these simulations is then implemented manually with an existing coupling approach, following the generated description. The case study examines the completeness of the DSL's ability to describe the coupling of multiple simulations into a modular simulation. In addition, the accuracy of the modularisation approach is evaluated with respect to how well the modular simulation preserves the behaviour of the monolithic version; for this purpose, the results of the modular simulation are compared with those of the monolithic version. The scalability of the approach is also examined by considering the execution times when several simulations are coupled, and the effect of modularisation on execution time relative to the monolithic simulation is analysed. The results show that the coupling of the two case-study simulations can be described with the DSL. The accuracy evaluation reveals problems in the interaction of the simulations with the coupling approach; nevertheless, the behaviour of the monolithic simulation is, on the whole, preserved in the modular version. The evaluation shows that the modular simulation incurs an increase in execution time compared with the monolithic version. Moreover, the scalability analysis indicates that the execution time of the modular simulation does not grow exponentially with the number of coupled simulations.
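
    To illustrate what such a coupling description expresses, here is a hypothetical Python rendering: a declarative list of port-to-port couplings between two toy simulations, executed by a small coordinator. The thesis's DSL is a textual, model-driven language; this sketch only mirrors its intent, and all names are invented.

```python
# A hypothetical sketch of a simulation-coupling description: a declarative
# specification maps output ports to input ports, and a coordinator runs
# the coupled (modular) simulation step by step.

class Simulation:
    """Minimal participant interface: step once, expose named ports."""
    def __init__(self, name):
        self.name = name
        self.outputs = {}
        self.inputs = {}
    def step(self, t):
        raise NotImplementedError

class Heater(Simulation):
    def step(self, t):
        self.outputs["heat"] = 5.0                     # constant heat flow

class Room(Simulation):
    def __init__(self, name):
        super().__init__(name)
        self.temperature = 20.0
    def step(self, t):
        self.temperature += 0.1 * self.inputs.get("heat_in", 0.0)
        self.outputs["temperature"] = self.temperature

# Declarative coupling description: (source sim, out port, target sim, in port)
COUPLING = [("heater", "heat", "room", "heat_in")]

def run_coupled(sims, coupling, steps):
    for t in range(steps):
        for sim in sims.values():
            sim.step(t)
        for src, out, dst, inp in coupling:   # exchange data along couplings
            sims[dst].inputs[inp] = sims[src].outputs[out]

sims = {"heater": Heater("heater"), "room": Room("room")}
run_coupled(sims, COUPLING, steps=10)
print(sims["room"].temperature)   # ~24.5 after ten coupled steps
```

    Comparing the output of such a coupled run against a monolithic implementation of the same model is exactly the behaviour-preservation check the case study performs.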

    A Formal Verification Environment for Use in the Certification of Safety-Related C Programs

    In this thesis, the design of an environment for the formal verification of functional properties of safety-related software written in the programming language C is described. The focus is on the verification of (primarily) geometric computations. We give an overview of the applicable regulations for safety-related software systems. We define a combination of higher-order logic, as formalised in the theorem prover Isabelle, and a specification language syntactically based on C expressions; the language retains the mathematical character of higher-level specifications in code-level specifications. A memory model for C is formalised which is appropriate for modelling low-level memory operations while keeping the entailed verification overhead within tolerable bounds. Finally, a Hoare-style proof calculus is devised so that correctness proofs can be performed in one integrated framework. The applicability of the approach is demonstrated by describing its use in an industrial project.
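
    The proof-calculus ingredient can be illustrated by a textbook weakest-precondition computation. The Python sketch below handles assignment, sequencing, and conditionals over tiny expression ASTs; it is a generic illustration of Hoare-style reasoning, not the Isabelle-based calculus of the thesis.

```python
# A minimal weakest-precondition calculator for a toy imperative language.
# Predicates and expressions are small tuple ASTs; illustrative only.

def subst(expr, var, repl):
    """Substitute repl for the variable `var` in an expression AST."""
    if isinstance(expr, str):
        return repl if expr == var else expr
    if isinstance(expr, tuple):                      # ("op", arg, ...)
        op, *args = expr
        return (op, *(subst(a, var, repl) for a in args))
    return expr                                      # literal

def wp(stmt, post):
    """Weakest precondition wp(stmt, post)."""
    kind = stmt[0]
    if kind == "assign":                             # wp(x := e, Q) = Q[e/x]
        _, var, expr = stmt
        return subst(post, var, expr)
    if kind == "seq":                                # wp(S1; S2, Q) = wp(S1, wp(S2, Q))
        _, s1, s2 = stmt
        return wp(s1, wp(s2, post))
    if kind == "if":                                 # (b -> wp(S1,Q)) and (!b -> wp(S2,Q))
        _, cond, s1, s2 = stmt
        return ("and", ("implies", cond, wp(s1, post)),
                       ("implies", ("not", cond), wp(s2, post)))
    raise ValueError(kind)

# wp(x := x + 1, x > 0)  =  x + 1 > 0
prog = ("assign", "x", ("+", "x", 1))
print(wp(prog, (">", "x", 0)))   # ('>', ('+', 'x', 1), 0)
```
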
    • …