14 research outputs found

    Computer Science Logic 2018: CSL 2018, September 4-8, 2018, Birmingham, United Kingdom


    Programming Languages and Systems

    This open access book constitutes the proceedings of the 29th European Symposium on Programming, ESOP 2020, which was planned to take place in Dublin, Ireland, in April 2020, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2020. The actual ETAPS 2020 meeting was postponed due to the COVID-19 pandemic. The papers deal with fundamental issues in the specification, design, analysis, and implementation of programming languages and systems.

    Compilation and Code Optimization for Data Analytics

    The trade-offs between the use of modern high-level and low-level programming languages in constructing complex software artifacts are well known. High-level languages allow for greater programmer productivity: abstraction and genericity allow the same functionality to be implemented with significantly less code compared to low-level languages. Modularity, object-orientation, functional programming, and powerful type systems allow programmers not only to create clean abstractions and protect them from leaking, but also to define code units that are reusable and easily composable, and software architectures that are adaptable and extensible. The abstraction, succinctness, and modularity of high-level code help to avoid software bugs and facilitate debugging and maintenance. The use of high-level languages comes at a performance cost: increased indirection due to abstraction, virtualization, and interpretation, and superfluous work, particularly in the form of temporary memory allocation and deallocation to support objects and encapsulation. As a result, the cost of high-level languages for performance-critical systems may seem prohibitive. The vision of abstraction without regret argues that it is possible to use high-level languages for building performance-critical systems that allow for both productivity and high performance, instead of trading off the former for the latter. In this thesis, we realize this vision for building different types of data analytics systems. Our means of achieving this is compilation: the goal is to compile away expensive language features -- to compile high-level code down to efficient low-level code.
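    As a minimal illustrative sketch of the "compile away abstraction" idea (hypothetical code, not taken from the thesis), the snippet below specializes a high-level filter/aggregate description into a single fused loop at runtime, removing intermediate collections and per-element dispatch; a real system would emit low-level C or LLVM code rather than Python source.

```python
# Hypothetical sketch: "abstraction without regret" via code generation.
# A high-level, composable pipeline description is compiled into one fused
# low-level loop with the operators inlined.

def compile_pipeline(predicate_src, map_src):
    """Generate a fused loop from high-level operator descriptions."""
    source = f"""
def fused(rows):
    total = 0
    for row in rows:                # one pass, no intermediate lists
        if {predicate_src}:         # inlined filter predicate
            total += {map_src}      # inlined projection/aggregation
    return total
"""
    namespace = {}
    exec(source, namespace)         # stand-in for emitting C/LLVM in a real compiler
    return namespace["fused"]

# High-level query: sum of price over rows with quantity > 10,
# compiled down to a single specialized loop.
query = compile_pipeline("row['quantity'] > 10", "row['price']")
print(query([{"quantity": 12, "price": 3.5}, {"quantity": 5, "price": 9.0}]))  # 3.5
```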

    Foundations of Software Science and Computation Structures

    This open access book constitutes the proceedings of the 23rd International Conference on Foundations of Software Science and Computation Structures, FOSSACS 2020, which took place in Dublin, Ireland, in April 2020, and was held as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2020. The 31 regular papers presented in this volume were carefully reviewed and selected from 98 submissions. The papers cover topics such as categorical models and logics; language theory, automata, and games; modal, spatial, and temporal logics; type theory and proof theory; concurrency theory and process calculi; rewriting theory; semantics of programming languages; program analysis, correctness, transformation, and verification; logics of programming; software specification and refinement; models of concurrent, reactive, stochastic, distributed, hybrid, and mobile systems; emerging models of computation; logical aspects of computational complexity; models of software security; and logical foundations of databases.

    Assertion level proof planning with compiled strategies

    This book presents new techniques that allow the automatic verification and generation of abstract human-style proofs. At the core of this approach is an efficient calculus that works directly by applying definitions, theorems, and axioms, which reduces the size of the underlying proof object by a factor of ten. The calculus is extended by the deep inference paradigm, which allows the application of inference rules at arbitrary depth inside logical expressions and provides new proofs that are exponentially shorter and not available in the sequent calculus without cut. In addition, a strategy language for abstract, underspecified, declarative proof patterns is developed. Together, the complementary methods provide a framework to automate declarative proofs. The benefits of the techniques are illustrated by practical applications.

    This thesis aims to simplify the formalization of proofs by developing methods to formally verify and generate informal proofs. To this end, an abstract calculus is developed that works directly at the assertion level, which comes relatively close to human-written proofs. A case study shows that abstract reasoning at the assertion level is beneficial for automatic search procedures. In addition, a strategy language is developed that makes it possible to specify underspecified proof patterns within the proof document and to refine proof sketches automatically. Case studies show that complex proof patterns can be specified compactly in the developed strategy language. Together, the complementary methods form a framework for automating declarative proofs at the assertion level, which until now had to be developed largely by hand.
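    To illustrate the assertion-level idea (an example of ours, not drawn from the book): an entire theorem such as the transitivity of the subset relation is applied as a single inference step, where a sequent calculus proof would first unfold the definition of the relation and then apply several quantifier and propositional rules.

```latex
% A single assertion-level step: the transitivity theorem is applied directly.
\[
\frac{U \subseteq V \qquad V \subseteq W}
     {U \subseteq W}
\ \text{by the assertion } \forall A, B, C.\; (A \subseteq B \wedge B \subseteq C) \Rightarrow A \subseteq C
\]
```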

    Fourth NASA Langley Formal Methods Workshop

    This publication consists of papers presented at NASA Langley Research Center's fourth workshop on the application of formal methods to the design and verification of life-critical systems. Topics considered include: proving properties of accidents; modeling and validating SAFER in VDM-SL; requirements analysis of real-time control systems using PVS; a tabular language for system design; and automated deductive verification of parallel systems. Also included is a fundamental hardware design in PVS.

    Efficient Online Processing for Advanced Analytics

    With the advent of emerging technologies and the Internet of Things, the importance of online data analytics has become more pronounced. Businesses and companies are adopting approaches that provide responsive analytics to stay competitive in the global marketplace. Online analytics allow data analysts to react promptly to patterns or to gain preliminary insights from early results that aid in research, decision making, and effective strategy planning. The growth of data velocity in a variety of domains, including high-frequency trading, social networks, infrastructure monitoring, and advertising, requires adopting online engines that can efficiently process continuous streams of data. This thesis presents foundations, techniques, and system designs that extend the state of the art in online query processing to efficiently support relational joins with arbitrary join predicates (beyond traditional equi-joins) and to support other data models (beyond relational) that target machine learning and graph computations. The thesis is divided into two parts.

    In the first part, we give a brief overview of Squall, our open-source online query processing engine that supports SQL-like queries on top of streams, and then focus on extending Squall to support efficient theta-join processing. Scalable distributed join processing requires a partitioning policy that evenly distributes the processing load while minimizing the size of maintained state and the number of duplicated messages. Efficient load balancing demands a priori statistics, which are not available in the online setting. We propose a novel operator that continuously adjusts itself to the data dynamics through adaptive dataflow routing and state repartitioning. It is also resilient to data skew, maintains high throughput rates, avoids blocking during state repartitioning, and behaves as a black-box dataflow operator with provable performance guarantees. Our evaluation demonstrates that the proposed operator outperforms state-of-the-art static partitioning schemes in resource utilization, throughput, and execution time by up to 7x.

    In the second part, we present a novel framework that supports the Incremental View Maintenance (IVM) of workloads expressed as linear algebra programs. Linear algebra is a concrete substrate for advanced analytical tasks, including machine learning, scientific computation, and graph algorithms. Previous work on relational-calculus IVM is not applicable to matrix algebra workloads, because a single entry change to an input matrix results in changes all over the intermediate views, rendering IVM useless in comparison to re-evaluation. We present Lago, a unified modular compiler framework that supports the IVM of a broad class of linear algebra programs. Lago automatically derives and optimizes incremental trigger programs of analytical computations, while freeing the user from erroneous manual derivations, low-level implementation details, and performance tuning. We present a novel technique that captures delta (Δ) changes as low-rank matrices, which are representable in a compressed factored form that enables cheaper computations. Lago automatically propagates the factored representation across program statements to derive an efficient trigger program. Moreover, Lago extends its support to other domains that use different semiring configurations, e.g., graph applications. Our evaluation results demonstrate orders of magnitude (10x-1
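    A minimal sketch of the join-matrix idea behind partitioning a distributed theta-join (hypothetical and static, unlike the adaptive, skew-resilient operator described above): the cross product of the two inputs is viewed as a grid of cells, each cell is owned by one worker, tuples from one relation are replicated along a grid row and tuples from the other along a grid column, so every pair of tuples meets at exactly one worker and an arbitrary join predicate can be evaluated there.

```python
# Hypothetical static join-matrix partitioning for a distributed theta-join.
import hashlib
from itertools import product

ROWS, COLS = 2, 3                               # a 2x3 grid of workers

def region(key, n):
    """Stable hash of a tuple into one of n regions."""
    return int(hashlib.md5(repr(key).encode()).hexdigest(), 16) % n

def route_r(r_tuple):
    i = region(r_tuple, ROWS)
    return [(i, j) for j in range(COLS)]        # replicate along its grid row

def route_s(s_tuple):
    j = region(s_tuple, COLS)
    return [(i, j) for i in range(ROWS)]        # replicate along its grid column

def theta_join(R, S, predicate):
    workers = {cell: ([], []) for cell in product(range(ROWS), range(COLS))}
    for r in R:
        for cell in route_r(r):
            workers[cell][0].append(r)
    for s in S:
        for cell in route_s(s):
            workers[cell][1].append(s)
    # Each worker evaluates the arbitrary predicate on its local partitions;
    # every (r, s) pair meets at exactly one cell, so no duplicates arise.
    return [(r, s) for rs, ss in workers.values() for r in rs for s in ss
            if predicate(r, s)]

# A band join (not an equi-join): |r - s| <= 1
print(sorted(theta_join([1, 5, 9], [2, 7, 10], lambda r, s: abs(r - s) <= 1)))
```

    For the second part, a small worked example (our numpy illustration, not Lago's generated code) of why keeping delta changes in low-rank factored form makes incremental view maintenance cheap: for a view C = A·B, a rank-1 update ΔA = u·vᵀ yields ΔC = (u·vᵀ)·B = u·(vᵀ·B), which costs O(n²) instead of the O(n³) required to re-multiply the full matrices.

```python
# Low-rank delta propagation for the materialized view C = A @ B (illustrative sketch).
import numpy as np

n = 100
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
C = A @ B                                   # materialized view

# A rank-1 change to A, kept in factored form: delta_A = u @ v.T
u = rng.standard_normal((n, 1))
v = rng.standard_normal((n, 1))

# Incremental trigger: propagate the factored delta through the program.
# delta_C = (u @ v.T) @ B = u @ (v.T @ B)   -- O(n^2) work, and still rank-1.
delta_C = u @ (v.T @ B)
C += delta_C

# Full re-evaluation for comparison: O(n^3) work.
C_full = (A + u @ v.T) @ B
print(np.allclose(C, C_full))               # True
```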