6 research outputs found

    Computing homomorphic program invariants

    Program invariants are properties that hold at a particular program point or points. They are often undocumented assertions made by a programmer, and they hold the key to reasoning correctly about a software verification task. Unlike contemporary research, in which program invariants are defined to hold for all control flow paths, we propose homomorphic program invariants, which hold with respect to a relevant equivalence class of control flow paths. For a problem-specific task, homomorphic program invariants can form stricter assertions. This work demonstrates that computing homomorphic program invariants is both useful and practical. Toward this goal, we address the challenge posed by the astronomical number of paths in programs. Since reasoning about a class of program paths must be efficient in order to scale to real-world programs, we extend prior work to efficiently divide program paths into equivalence classes with respect to control flow events of interest. Our technique reasons about inter-procedural paths, which we then use to determine how to modify a program binary to abort execution at the start of an irrelevant program path. Using off-the-shelf components, we employ state-of-the-art fuzzing and dynamic invariant detection tools to mine homomorphic program invariants. To aid in identifying likely software anomalies, we develop human-in-the-loop analysis methodologies and a toolbox of human-centric static analysis tools. We present a statically-informed dynamic analysis that transitions efficiently from static analysis to dynamic analysis and leverages the strengths of each approach. To evaluate our approach, we apply our techniques to three case-study audits of challenge applications from DARPA's Space/Time Analysis for Cybersecurity (STAC) program. In the final case study, we discover an unintentional vulnerability that causes a denial of service (DoS) in space and time, despite the challenge application having been hardened against static and dynamic analysis techniques.
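
    The core idea lends itself to a compact illustration. The Python sketch below is not the paper's pipeline (which instruments program binaries and uses off-the-shelf fuzzers and invariant detectors); it only shows, with hypothetical trace objects and example event names, how concrete execution traces could be grouped into equivalence classes by projecting them onto control flow events of interest, with a toy range invariant mined per class:

        from collections import defaultdict
        from dataclasses import dataclass

        @dataclass
        class Trace:
            events: list      # control-flow events observed along one concrete path
            exit_state: dict  # variable -> value at the program point of interest

        # Assumed example events; in practice these are problem-specific.
        EVENTS_OF_INTEREST = {"acquire_lock", "release_lock"}

        def project(trace):
            # Traces with the same projection onto the events of interest
            # fall into the same equivalence class of control flow paths.
            return tuple(e for e in trace.events if e in EVENTS_OF_INTEREST)

        def mine_class_invariants(traces):
            # Toy invariant miner: per-variable value ranges over the class.
            ranges = {}
            for t in traces:
                for var, val in t.exit_state.items():
                    lo, hi = ranges.get(var, (val, val))
                    ranges[var] = (min(lo, val), max(hi, val))
            return {v: f"{lo} <= {v} <= {hi}" for v, (lo, hi) in ranges.items()}

        def homomorphic_invariants(traces):
            classes = defaultdict(list)
            for t in traces:
                classes[project(t)].append(t)
            # Per-class invariants can be strictly stronger than invariants
            # that must hold across all control flow paths.
            return {cls: mine_class_invariants(ts) for cls, ts in classes.items()}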

    Synthesizing a Progression of Subtasks for Block-Based Visual Programming Tasks

    Block-based visual programming environments play an increasingly important role in introducing computing concepts to K-12 students. In recent years, they have also gained popularity in neuro-symbolic AI, serving as a benchmark to evaluate general problem-solving and logical reasoning skills. The open-ended and conceptual nature of these visual programming tasks makes them challenging, both for state-of-the-art AI agents and for novice programmers. A natural approach to providing assistance for problem-solving is to break down a complex task into a progression of simpler subtasks; however, this is not trivial, given that the solution codes are typically nested and have non-linear execution behavior. In this paper, we formalize the problem of synthesizing such a progression for a given reference block-based visual programming task. We propose a novel synthesis algorithm that generates a progression of subtasks that are high-quality and well-spaced in terms of their complexity, such that solving the progression leads to solving the reference task. We show the utility of our synthesis algorithm in improving the efficacy of AI agents (in this case, neural program synthesizers) for solving tasks in the Karel programming environment. We then conduct a user study to demonstrate that our synthesized progression of subtasks can assist a novice programmer in solving tasks in the Hour of Code: Maze Challenge by Code.org.
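
    To make the "well-spaced in terms of complexity" criterion concrete, here is a minimal sketch under stated assumptions. The paper's actual algorithm is more involved (it must also guarantee subtask quality and that each subtask's solution relates structurally to the reference solution); all names and the complexity measure below are hypothetical:

        from dataclasses import dataclass

        @dataclass
        class Code:
            blocks: list            # flat list of block types in the solution
            max_nesting_depth: int  # deepest loop/conditional nesting

        @dataclass
        class Task:
            name: str
            solution: Code

        def complexity(code):
            # Toy proxy for solution complexity: block count plus nesting depth.
            return len(code.blocks) + code.max_nesting_depth

        def synthesize_progression(reference_task, candidates, steps=3):
            # Pick `steps` candidate subtasks whose solution complexities are
            # roughly evenly spaced below the reference task's own complexity.
            # Assumes len(candidates) >= steps.
            target = complexity(reference_task.solution)
            remaining = list(candidates)
            progression = []
            for i in range(1, steps + 1):
                goal = target * i / (steps + 1)  # desired complexity at step i
                best = min(remaining, key=lambda c: abs(complexity(c.solution) - goal))
                remaining.remove(best)
                progression.append(best)
            return progression + [reference_task]  # the progression ends at the reference task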

    Tools and Algorithms for the Construction and Analysis of Systems

    This open access two-volume set constitutes the proceedings of the 27th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2021, held during March 27 – April 1, 2021, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2021. The conference was planned to take place in Luxembourg but changed to an online format due to the COVID-19 pandemic. The 41 full papers presented in the proceedings were carefully reviewed and selected from 141 submissions. The volumes also contain 7 tool papers, 6 tool demo papers, and 9 SV-COMP competition papers. The papers are organized in topical sections as follows: Part I: Game Theory; SMT Verification; Probabilities; Timed Systems; Neural Networks; Analysis of Network Communication. Part II: Verification Techniques (not SMT); Case Studies; Proof Generation/Validation; Tool Papers; Tool Demo Papers; SV-COMP Tool Competition Papers.

    Towards cooperative software verification with test generation and formal verification

    There are two major methods for software verification: testing and formal verification. To increase our confidence in software on a large scale, we require tools that apply these methods automatically and reliably. Testing with manually written tests is widespread, but there is no standardized format for automatically generated tests for the C programming language. This makes the use and comparison of automated test generators expensive. In addition, testing can never provide full confidence in software: it can show the presence of bugs, but not their absence. In contrast, formal verification uses established, standardized formats and can prove the absence of bugs. Unfortunately, even successful formal-verification techniques suffer from various weaknesses. Compositions of multiple techniques try to combine the strengths of complementary techniques, but such combinations are often designed as cohesive, monolithic units. This makes them inflexible, and replacing components is costly. To improve on this state of the art, we work towards off-the-shelf cooperation between verification tools through standardized exchange formats. First, we work towards the standardization of automated test generation for C. We increase the comparability of test generators through a common benchmarking framework and reliable tooling, and we provide means to reliably compare the bug-finding capabilities of test generators and formal verifiers. Second, we introduce new concepts for off-the-shelf cooperation between verifiers (both test generators and formal verifiers). We show the flexibility of these concepts through an array of combinations and through an application to incremental verification. We also show how existing, strongly coupled techniques in software verification can be decomposed into stand-alone components that cooperate through clearly defined interfaces and standardized exchange formats. All our work is backed by rigorous implementations of the proposed concepts and thorough experimental evaluations that demonstrate the benefits of our work. Through these means, we improve the comparability of automated verifiers, enable cooperation between a large array of existing verifiers, increase the effectiveness of software verification, and create new opportunities for further research on cooperative verification.
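
    As a concrete illustration of cooperation through a standardized exchange format, the sketch below serializes one generated test case as a small XML file, loosely modeled on the XML-based test-case format used in the Test-Comp competition; the exact DTD and metadata headers are omitted, so treat the file layout here as an approximation rather than the normative format:

        from xml.sax.saxutils import escape

        def write_testcase(input_values, path):
            # Each <input> element holds the value to be returned by the
            # program's next nondeterministic input call during replay.
            lines = ['<?xml version="1.0" encoding="UTF-8"?>', "<testcase>"]
            lines += [f"  <input>{escape(str(v))}</input>" for v in input_values]
            lines.append("</testcase>")
            with open(path, "w", encoding="utf-8") as f:
                f.write("\n".join(lines) + "\n")

        # A test generator emits files like this; any replay harness or
        # verifier that understands the shared format can consume them
        # unchanged, enabling off-the-shelf cooperation between tools.
        write_testcase([42, -1, 0], "testcase-0.xml")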