
    Ernst Denert Award for Software Engineering 2019

    This open access book provides an overview of the dissertations of the five nominees for the Ernst Denert Award for Software Engineering in 2019. The prize, kindly sponsored by the Gerlind & Ernst Denert Stiftung, is awarded for excellent work within the discipline of Software Engineering, which includes methods, tools and procedures for better and more efficient development of high-quality software. An essential requirement for the nominated work is its applicability and usability in industrial practice. The book contains five papers describing the works by Sebastian Baltes (U Trier) on Software Developers’ Work Habits and Expertise, Timo Greifenberg’s thesis on Artefaktbasierte Analyse modellgetriebener Softwareentwicklungsprojekte, Marco Konersmann’s (U Duisburg-Essen) work on Explicitly Integrated Architecture, Marija Selakovic’s (TU Darmstadt) research about Actionable Program Analyses for Improving Software Performance, and Johannes Späth’s (Paderborn U) thesis on Synchronized Pushdown Systems for Pointer and Data-Flow Analysis – which actually won the award. The chapters describe key findings of the respective works, show their relevance and applicability to practice and industrial software engineering projects, and provide additional information and findings that were only discovered afterwards, e.g. when applying the results in industry. This way, the book is interesting not only to other researchers, but also to industrial software professionals who would like to learn about the application of state-of-the-art methods in their daily work.

    A Compositional Deadlock Detector for Android Java

    We develop a static deadlock analysis for commercial Android Java applications, of sizes in the tens of millions of LoC, under active development at Facebook. The analysis runs primarily at code-review time, on only the modified code and its dependents; we aim at reporting to developers in under 15 minutes. To detect deadlocks in this setting, we first model the real language as an abstract language with balanced re-entrant locks, nondeterministic iteration and branching, and non-recursive procedure calls. We show that the existence of a deadlock in this abstract language is equivalent to a certain condition over the sets of critical pairs of each program thread; these record, for all possible executions of the thread, which locks are currently held at the point when a fresh lock is acquired. Since the set of critical pairs of any program thread is finite and computable, the deadlock detection problem for our language is decidable, and in NP. We then leverage these results to develop an open-source implementation of our analysis adapted to deal with real Java code. The core of the implementation is an algorithm which computes critical pairs in a compositional, abstract interpretation style, running in quasi-exponential time. Our analyser is built in the INFER verification framework and has been in industrial deployment for over two years; it has seen over two hundred fixed deadlock reports with a report fix rate of ∼54%.
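    As a minimal illustration of the critical-pair idea (a made-up example, not code from the paper or from Infer's output), the classic two-lock pattern below gives thread 1 the critical pair (A, B) and thread 2 the inverted pair (B, A); the cross-thread condition over these sets is what the analysis checks when it reports a deadlock.

```java
// Hypothetical two-lock deadlock: each thread holds one lock while
// acquiring the other, in opposite orders.
public class DeadlockExample {
    private final Object lockA = new Object();
    private final Object lockB = new Object();

    void thread1() {
        synchronized (lockA) {        // lockA is held...
            synchronized (lockB) {    // ...when lockB is acquired: critical pair (A, B)
                /* work */
            }
        }
    }

    void thread2() {
        synchronized (lockB) {        // lockB is held...
            synchronized (lockA) {    // ...when lockA is acquired: critical pair (B, A)
                /* work */
            }
        }
    }
    // Thread 1 contributes {(A, B)} and thread 2 contributes {(B, A)}; the
    // inverted lock order across the two threads is the condition reported
    // as a potential deadlock.
}
```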

    Using JOANA for Information Flow Control in Java Programs - A Practical Guide


    Security Evaluation and Hardening of Free and Open Source Software (FOSS)

    Recently, Free and Open Source Software (FOSS) has emerged as an alternative to Commercial-Off-The-Shelf (COTS) software. FOSS is now perceived as a viable long-term solution that deserves careful consideration because of its potential for significant cost savings, improved reliability, and numerous advantages over proprietary software. However, the secure integration of FOSS in IT infrastructures is very challenging and demanding. Methodologies and technical policies must be adapted to reliably compose large FOSS-based software systems. A DRDC Valcartier-Concordia University feasibility study completed in March 2004 concluded that the most promising approach for securing FOSS is to combine advanced design patterns and Aspect-Oriented Programming (AOP). Following the recommendations of this study, a three-year project was conducted as a collaboration between Concordia University, DRDC Valcartier, and Bell Canada. This paper presents the main contributions of this project: a practical framework, with solid underlying semantic foundations, for the security evaluation and hardening of FOSS.
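    To make the AOP side of this approach concrete, the sketch below shows how an AspectJ aspect can wrap a security check around an existing component without modifying its source; the aspect, package, and class names are hypothetical and are not taken from the project.

```java
// Hypothetical hardening aspect: intercepts calls into a legacy FOSS
// component and rejects obviously unsafe string arguments.
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class InputSanitizingAspect {

    // Wrap every public String-taking method of a (hypothetical) legacy DAO.
    @Around("execution(public * legacy.dao.UserDao.*(String)) && args(input)")
    public Object sanitize(ProceedingJoinPoint pjp, String input) throws Throwable {
        if (input == null || input.matches(".*['\";].*")) {
            // Reject input containing characters commonly used in injection attacks.
            throw new IllegalArgumentException("Rejected potentially unsafe input");
        }
        return pjp.proceed(new Object[] { input });
    }
}
```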

    Field-Sensitive Program Slicing

    The granularity level of the program dependence graph (PDG) for composite data structures (tuples, lists, records, objects, etc.) is too coarse when slicing their inner elements. We present the constrained-edges PDG (CE-PDG), which addresses this accuracy problem. The CE-PDG enhances the representation of composite data structures by decomposing statements into a subgraph that represents the inner elements of the structure, and the inclusion and propagation of data constraints along the CE-PDG edges allows for accurate slicing of complex data structures. Both extensions are conservative with respect to the PDG, in the sense that all slicing criteria (and more) that can be specified in the PDG can also be specified in the CE-PDG, and the slices produced with the CE-PDG are always smaller than or equal to the slices produced by the PDG. An evaluation of our approach shows a reduction in slice size of 11.67%/5.49% for programs without/with loops.
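    A small made-up Java example of the accuracy problem (not taken from the paper): slicing on the use of one field should not drag in assignments to the other fields of the same object, which is exactly what a statement-level PDG node for the whole structure forces.

```java
// Hypothetical illustration of field-sensitive slicing.
class Point {
    int x;
    int y;
}

public class SlicingExample {
    static int example(int a, int b) {
        Point p = new Point();
        p.x = a * 2;   // relevant: flows into the slicing criterion
        p.y = b + 1;   // irrelevant: only p.x is used below
        return p.x;    // slicing criterion: the value of p.x here
    }
    // A PDG that represents p as a single node must include "p.y = b + 1"
    // in the slice for p.x. Decomposing p into per-field nodes, as the
    // CE-PDG does for composite structures, lets the slice omit it.
}
```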

    Lock sensitive analysis of parallel programs

    "Lock sensitive analysis of parallel programs" (Lock-Sensitive Analyse nebenläufiger Programme) Diese Dissertation behandelt einen Modellprüfungsalgorithmus für dynamische Pushdown-Netzwerke mit Monitoren (Monitor-DPNs). Monitor-DPNs sind ein Modell für parallele Programme mit rekursiven Prozeduren, Thread-Erzeugung, und wechselweisem Ausschluss durch Monitore. Betrachtet werden Vorgängermengenberechnungen, mit denen man viele interessante Eigenschaften ausdrücken kann, unter Anderem Race-Conditions, Bitvektoranalysen und das (EF,EX)-Fragment der branching-time Logik CTL

    Hyperscale Data Processing With Network-Centric Designs

    Today’s largest data processing workloads are hosted in cloud data centers. Due to unprecedented data growth and the end of Moore’s Law, these workloads have ballooned to the hyperscale level, encompassing billions to trillions of data items and hundreds to thousands of machines per query. These workloads are enabled by, and grow together with, highly scalable data center networks that connect up to hundreds of thousands of networked servers. These massive scales fundamentally challenge the designs of both data processing systems and data center networks, and the classic layered designs are no longer sustainable. Rather than optimize these massive layers in silos, we build systems across them with principled network-centric designs. In current networks, we redesign data processing systems with network awareness to minimize the cost of moving data in the network. In future networks, we propose new interfaces and services that the cloud infrastructure offers to applications and co-design data processing systems to achieve optimal query processing performance. To transform the network to future designs, we facilitate network innovation at scale. This dissertation presents a line of systems work that covers all three directions. It first discusses GraphRex, a network-aware system that combines classic database and systems techniques to push the performance of massive graph queries in current data centers. It then introduces data processing in disaggregated data centers, a promising new cloud proposal. It details TELEPORT, a compute pushdown feature that eliminates data processing performance bottlenecks in disaggregated data centers, and Redy, which provides high-performance caches using remote disaggregated memory. Finally, it presents MimicNet, a fine-grained simulation framework that evaluates network proposals at data center scale with machine learning approximation. These systems demonstrate that our ideas in network-centric designs achieve orders of magnitude higher efficiency compared to the state of the art at hyperscale.

    Symmetry-Aware Predicate Abstraction for Shared-Variable Concurrent Programs (Extended Technical Report)

    Predicate abstraction is a key enabling technology for applying finite-state model checkers to programs written in mainstream languages. It has been used very successfully for debugging sequential system-level C code. Although model checking was originally designed for analyzing concurrent systems, there is little evidence of fruitful applications of predicate abstraction to shared-variable concurrent software. The goal of this paper is to close this gap. We have developed a symmetry-aware predicate abstraction strategy: it takes into account the replicated structure of C programs that consist of many threads executing the same procedure, and generates a Boolean program template whose multi-threaded execution soundly overapproximates the concurrent C program. State explosion during model checking parallel instantiations of this template can now be absorbed by exploiting symmetry. We have implemented our method in the SATABS predicate abstraction framework, and demonstrate its superior performance over alternative approaches on a large range of synchronization programs.
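    Although the work targets C and the SATABS framework, the replicated structure it exploits is easy to picture in Java terms; the sketch below is a made-up analogue, not code from the paper. Every thread is an instantiation of one template procedure over shared state, so after predicate abstraction, states that differ only by a permutation of thread identities are equivalent and need not be explored separately.

```java
// Made-up analogue of the replicated-thread structure: n identical
// threads execute the same procedure over shared variables.
public class SymmetricWorkers {
    static final Object lock = new Object();
    static int ticket = 0;                     // shared variable

    static void worker() {                     // the single template procedure
        synchronized (lock) {
            ticket++;                          // abstracted by a predicate such as (ticket > 0)
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 4; i++) {          // parallel instantiations of the template
            new Thread(SymmetricWorkers::worker).start();
        }
    }
    // Because the threads are interchangeable instances of one template,
    // the model checker can treat their states as a multiset (counter
    // abstraction) instead of enumerating every assignment of identities.
}
```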

    Static dependency analysis of recursive structures for parallelisation
