9,993 research outputs found

    A Compositional Approach to Verifying Modular Robotic Systems

    Robotic systems used in safety-critical industrial situations often rely on modular software architectures, and increasingly include autonomous components. Verifying that these modular robotic systems behave as expected requires approaches that can cope with, and preferably take advantage of, this inherent modularity. This paper describes a compositional approach to specifying the nodes in robotic systems built using the Robot Operating System (ROS), where each node is specified using First-Order Logic (FOL) assume-guarantee contracts that link the specification to the ROS implementation. We introduce inference rules that facilitate the composition of these node-level contracts to derive system-level properties. We also present a novel Domain-Specific Language, the ROS Contract Language (RCL), which captures a node's FOL specification and links this contract to its implementation. RCL contracts can be automatically translated, by our tool Vanda, into executable monitors, which we use to verify the contracts at runtime. We illustrate our approach through the specification and verification of an autonomous rover engaged in the remote inspection of a nuclear site, and finish with smaller examples that illustrate other useful features of our framework. Comment: Version submitted to RA
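
    The RCL syntax itself is not shown in the abstract. As a rough illustration of the contract-to-monitor idea, the following Python sketch (all names, message fields, and the radiation-threshold property are hypothetical, and this is not the code Vanda generates) checks a node-level assume-guarantee pair against observed input/output pairs at runtime.

        # Minimal sketch of an assume-guarantee runtime monitor (hypothetical API):
        # IF the assumption holds on a node's input, THEN the guarantee must hold
        # on its output; otherwise a violation is recorded.
        from dataclasses import dataclass, field
        from typing import Any, Callable, List, Tuple

        @dataclass
        class Contract:
            assume: Callable[[Any], bool]              # predicate over the node's input
            guarantee: Callable[[Any, Any], bool]      # predicate over (input, output)

        @dataclass
        class Monitor:
            contract: Contract
            violations: List[Tuple[Any, Any]] = field(default_factory=list)

            def observe(self, node_input, node_output):
                # Only check the guarantee when the assumption is satisfied.
                if self.contract.assume(node_input) and not self.contract.guarantee(node_input, node_output):
                    self.violations.append((node_input, node_output))

        # Hypothetical property: the rover must command zero velocity whenever the
        # radiation reading exceeds a threshold.
        contract = Contract(
            assume=lambda msg: msg["radiation"] >= 0.0,
            guarantee=lambda msg, cmd: cmd["linear_x"] == 0.0 if msg["radiation"] > 100.0 else True,
        )
        monitor = Monitor(contract)
        monitor.observe({"radiation": 150.0}, {"linear_x": 0.0})   # satisfied
        monitor.observe({"radiation": 150.0}, {"linear_x": 0.5})   # recorded as a violation
        print(len(monitor.violations))                             # -> 1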

    An interview study about the use of logs in embedded software engineering

    Context: Execution logs capture the run-time behavior of software systems. To assist developers in their maintenance tasks, many studies have proposed tools to analyze execution information from logs. However, it is as yet unknown how industry developers use logs in embedded software engineering. Objective: In this study, we aim to understand how developers use logs in an embedded software engineering context. Specifically, we would like to gain insights into the types of logs developers analyze, the purposes for which developers analyze logs, the information developers need from logs, and their expectations of tool support. Method: To achieve this aim, we conducted two interview studies. First, we interviewed 25 software developers from ASML, a leading company in developing lithography machines. This exploratory case study provided the preliminary findings. Next, we validated and refined our findings by conducting a replication study, involving 14 interviewees from four companies who hold different software engineering roles in their daily work. Results: From our first study, we compiled a preliminary taxonomy consisting of four types of logs used by developers in practice, 18 purposes for using logs, 13 types of information developers search for in logs, 13 challenges faced by developers in log analysis, and three suggestions for tool support provided by developers. This taxonomy was refined in the replication study with three additional purposes, one additional information need, four additional challenges, and three additional suggestions for tool support. In addition, across the two studies we observed that text-based editors and self-made scripts are commonly used for log analysis in practice. As indicated by the interviewees, the development of automatic analysis tools is hindered by the quality of the logs, which further suggests several challenges in log instrumentation and management. Conclusions: Based on our study, we provide suggestions for practitioners on logging practices and implications for tool builders on how to further improve tools based on existing techniques. Finally, we suggest some research directions and studies for researchers to further study software logging.
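
    As a toy illustration of the "self-made scripts" the interviewees report relying on, the following Python sketch (the log format, sample lines, and component names are assumptions, not taken from the study) filters error lines and counts them per component.

        # Toy example of an ad-hoc log-analysis script: count ERROR lines per
        # component. The assumed line format is "TIMESTAMP LEVEL COMPONENT: message".
        import re
        from collections import Counter

        pattern = re.compile(r"^\S+\s+ERROR\s+(\w+):")

        def count_errors_per_component(lines):
            counts = Counter()
            for line in lines:
                match = pattern.match(line)
                if match:
                    counts[match.group(1)] += 1
            return counts

        sample = [
            "2024-01-01T10:00:00 INFO  stage: homing done",
            "2024-01-01T10:00:01 ERROR laser: interlock open",
            "2024-01-01T10:00:02 ERROR laser: interlock open",
        ]
        print(count_errors_per_component(sample))   # Counter({'laser': 2})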

    On Ladder Logic Bombs in Industrial Control Systems

    In industrial control systems, devices such as Programmable Logic Controllers (PLCs) are commonly used to directly interact with sensors and actuators, and to perform local automatic control. PLCs run software on two different layers: a) firmware (i.e. the OS) and b) control logic (processing sensor readings to determine control actions). In this work, we discuss Ladder Logic Bombs (LLBs), i.e. malware written in ladder logic (or one of the other IEC 61131-3-compatible languages). Such malware would be inserted by an attacker into existing control logic on a PLC, and would either persistently change its behavior or wait for specific trigger signals to activate malicious behavior. For example, an LLB could replace legitimate sensor readings with manipulated values. We see the concept of LLBs as a generalization of attacks such as the Stuxnet attack. We introduce LLBs on an abstract level, and then demonstrate several designs based on real PLC devices in our lab. In particular, we also focus on stealthy LLBs, i.e. LLBs that are hard to detect by human operators manually validating the program running on PLCs. In addition to introducing vulnerabilities on the logic layer, we also discuss countermeasures and propose two detection techniques. Comment: 11 pages, 14 figures, 2 tables, 1 algorithm
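
    Ladder logic itself is not reproduced in the abstract; the trigger-and-manipulate pattern it describes can be sketched as a plain Python simulation (the scan-cycle structure, threshold, and variable names are illustrative, not actual IEC 61131-3 code): benign control logic plus a dormant branch that, once triggered, substitutes a manipulated value for the real sensor reading.

        # Illustrative simulation of an LLB-style payload (not real IEC 61131-3 code):
        # the malicious branch stays dormant until a trigger, then overrides the
        # sensor value that the legitimate control logic and the operator see.
        class SimulatedPLC:
            def __init__(self):
                self.triggered = False

            def malicious_filter(self, raw_level):
                # Dormant until a specific trigger signal (here: an implausible spike).
                if raw_level > 95.0:
                    self.triggered = True
                # After triggering, report a benign-looking constant instead of reality.
                return 50.0 if self.triggered else raw_level

            def scan_cycle(self, raw_level):
                level = self.malicious_filter(raw_level)   # injected step
                pump_on = level < 40.0                     # legitimate control logic
                return level, pump_on

        plc = SimulatedPLC()
        for reading in [42.0, 96.0, 10.0]:
            print(plc.scan_cycle(reading))
        # Before the trigger the PLC behaves normally; afterwards the reported
        # level stays at 50.0 regardless of the true tank level.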

    Exploring the Verifiability of Code Generated by GitHub Copilot

    GitHub's Copilot generates code quickly. We investigate whether it generates good code. Our approach is to identify a set of problems, ask Copilot to generate solutions, and attempt to formally verify these solutions with Dafny. Our formal verification is with respect to hand-crafted specifications. We have carried out this process on 6 problems and succeeded in formally verifying 4 of the created solutions. We found evidence which corroborates the current consensus in the literature: Copilot is a powerful tool; however, it should not be "flying the plane" by itself. Comment: HATRA workshop at SPLASH 202
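
    In the paper the checking is a Dafny proof against hand-crafted specifications; as a rough, language-agnostic analogue (the example problem, names, and use of runtime assertions rather than proofs are assumptions), the specification can be viewed as a pre/postcondition pair that a generated solution must satisfy.

        # Analogue of checking a generated function against a hand-written
        # specification: here only a runtime check over sample inputs, whereas the
        # paper proves the postcondition for all inputs in Dafny.
        def spec_max_index(xs, i):
            # Postcondition: i is a valid index and xs[i] is a maximum element of xs.
            return 0 <= i < len(xs) and all(xs[i] >= x for x in xs)

        def generated_max_index(xs):
            # Stand-in for a Copilot-generated solution.
            best = 0
            for k in range(1, len(xs)):
                if xs[k] > xs[best]:
                    best = k
            return best

        for xs in ([3, 1, 4, 1, 5], [7], [2, 2, 2]):
            assert len(xs) > 0                                   # precondition
            assert spec_max_index(xs, generated_max_index(xs))   # postcondition
        print("all sample checks passed")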

    Are Delayed Issues Harder to Resolve? Revisiting Cost-to-Fix of Defects throughout the Lifecycle

    Many practitioners and academics believe in a delayed issue effect (DIE); i.e. the longer an issue lingers in the system, the more effort it requires to resolve. This belief is often used to justify major investments in new development processes that promise to retire more issues sooner. This paper tests for the delayed issue effect in 171 software projects conducted around the world in the period 2006 to 2014. To the best of our knowledge, this is the largest study yet published on this effect. We found no evidence for the delayed issue effect; i.e. the effort to resolve issues in a later phase was not consistently or substantially greater than when issues were resolved soon after their introduction. This paper documents the above study and explores reasons for this mismatch between the common rule of thumb and the empirical data. In summary, DIE is not a constant across all projects. Rather, DIE might be a historical relic that occurs intermittently only in certain kinds of projects. This is a significant result, since it predicts that new development processes that promise to retire more issues faster will not have a guaranteed return on investment (depending on the context where they are applied), and that a long-held truth in software engineering should not be considered a global truism. Comment: 31 pages. Accepted with minor revisions to Journal of Empirical Software Engineering. Keywords: software economics, phase delay, cost to fix
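
    The abstract does not give the statistical procedure; one plausible way to probe for a delayed issue effect in a single project's issue data is sketched below (the toy effort values and the choice of a Mann-Whitney U test are assumptions, and the paper's actual analysis may differ).

        # Sketch of one way to test for a delayed issue effect: compare resolution
        # effort for issues fixed in the phase where they were introduced against
        # issues fixed in a later phase. Requires SciPy; the data here is toy data.
        from statistics import median
        from scipy.stats import mannwhitneyu

        same_phase_effort = [2.0, 1.5, 3.0, 2.5, 1.0]    # person-hours (toy data)
        later_phase_effort = [2.5, 1.0, 3.5, 2.0, 1.5]

        stat, p_value = mannwhitneyu(same_phase_effort, later_phase_effort,
                                     alternative="less")
        print(median(same_phase_effort), median(later_phase_effort), p_value)
        # A delayed issue effect would show up as a clearly larger median for the
        # later-phase group together with a small p-value; the paper reports no
        # such consistent difference across its 171 projects.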

    A Time Comparison of Computer-Assisted and Manual Bathymetric Processing

    We describe an experiment designed to determine the time required to process Multibeam Echosounder (MBES) data using the CUBE (Combined Uncertainty and Bathymetry Estimator) [Calder & Mayer, 2003; Calder, 2003] and Navigation Surface [Smith et al., 2002; Smith, 2003] algorithms. We collected data for a small (22.3 × 10^6 soundings) survey in Valdez Narrows, Alaska, and monitored person-hours expended on processing for a traditional MBES processing stream and the proposed computer-assisted method operating on identical data. The analysis shows that the vast majority of time expended in a traditional processing stream is in subjective hand-editing of data, followed by line planning and quality control, and that the computer-assisted method is significantly faster than the traditional process through its elimination of human interaction time. The potential improvement in editing time is shown to be on the order of 25-37:1 over traditional methods.
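
    CUBE's internals are not described here; as a rough illustration of the general idea of combining many soundings into one uncertainty-weighted depth estimate at a grid node, the following toy sketch uses an inverse-variance weighted mean (this is not the actual CUBE algorithm, which additionally tracks competing depth hypotheses).

        # Toy illustration of uncertainty-weighted depth estimation at one grid node
        # (inverse-variance weighted mean); NOT the CUBE algorithm itself.
        def weighted_depth(soundings):
            # soundings: list of (depth_m, one_sigma_uncertainty_m)
            weights = [1.0 / (sigma ** 2) for _, sigma in soundings]
            depth = sum(w * d for (d, _), w in zip(soundings, weights)) / sum(weights)
            sigma = (1.0 / sum(weights)) ** 0.5
            return depth, sigma

        print(weighted_depth([(100.2, 0.3), (100.5, 0.3), (99.9, 0.6)]))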

    Context2Name: A Deep Learning-Based Approach to Infer Natural Variable Names from Usage Contexts

    Most of the JavaScript code deployed in the wild has been minified, a process in which identifier names are replaced with short, arbitrary and meaningless names. Minified code occupies less space, but also makes the code extremely difficult to manually inspect and understand. This paper presents Context2Name, a deep learning-based technique that partially reverses the effect of minification by predicting natural identifier names for minified names. The core idea is to predict from the usage context of a variable a name that captures the meaning of the variable. The approach combines a lightweight, token-based static analysis with an auto-encoder neural network that summarizes usage contexts and a recurrent neural network that predicts natural names for a given usage context. We evaluate Context2Name with a large corpus of real-world JavaScript code and show that it successfully predicts 47.5% of all minified identifiers while taking only 2.9 milliseconds on average to predict a name. A comparison with the state-of-the-art tools JSNice and JSNaughty shows that our approach performs comparably in terms of accuracy while improving in terms of efficiency. Moreover, Context2Name complements the state-of-the-art by predicting 5.3% additional identifiers that are missed by both existing tools.
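
    As a rough sketch of the pipeline the abstract describes (the tensor sizes, tokenisation, and use of PyTorch are assumptions, not the authors' implementation), an auto-encoder summarises each usage context into a fixed-size vector, and a recurrent network predicts a name from the sequence of such context vectors.

        # Minimal sketch of the Context2Name-style pipeline (sizes and framework are
        # assumptions): an auto-encoder summarises a bag of context tokens into a
        # fixed-size vector; a GRU scores candidate names from the usage sequence.
        import torch
        import torch.nn as nn

        VOCAB, CTX_DIM, HIDDEN, NAMES = 500, 64, 128, 1000

        class ContextAutoEncoder(nn.Module):
            def __init__(self):
                super().__init__()
                self.embed = nn.EmbeddingBag(VOCAB, CTX_DIM)   # bag of context tokens
                self.encode = nn.Linear(CTX_DIM, CTX_DIM)
                self.decode = nn.Linear(CTX_DIM, VOCAB)        # reconstruction head

            def forward(self, token_ids):
                summary = torch.tanh(self.encode(self.embed(token_ids)))
                return summary, self.decode(summary)

        class NamePredictor(nn.Module):
            def __init__(self):
                super().__init__()
                self.rnn = nn.GRU(CTX_DIM, HIDDEN, batch_first=True)
                self.out = nn.Linear(HIDDEN, NAMES)            # scores over known names

            def forward(self, context_vectors):                # (batch, n_usages, CTX_DIM)
                _, last_hidden = self.rnn(context_vectors)
                return self.out(last_hidden.squeeze(0))

        # One minified variable with 3 usage sites, each summarised from its tokens.
        ae, predictor = ContextAutoEncoder(), NamePredictor()
        usages = [torch.randint(0, VOCAB, (1, 10)) for _ in range(3)]
        summaries = torch.stack([ae(u)[0] for u in usages], dim=1)  # (1, 3, CTX_DIM)
        print(predictor(summaries).shape)                           # -> torch.Size([1, 1000])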

    A Verified Information-Flow Architecture

    SAFE is a clean-slate design for a highly secure computer system, with pervasive mechanisms for tracking and limiting information flows. At the lowest level, the SAFE hardware supports fine-grained programmable tags, with efficient and flexible propagation and combination of tags as instructions are executed. The operating system virtualizes these generic facilities to present an information-flow abstract machine that allows user programs to label sensitive data with rich confidentiality policies. We present a formal, machine-checked model of the key hardware and software mechanisms used to dynamically control information flow in SAFE and an end-to-end proof of noninterference for this model. We use a refinement proof methodology to propagate the noninterference property of the abstract machine down to the concrete machine level. We use an intermediate layer in the refinement chain that factors out the details of the information-flow control policy and devise a code generator for compiling such information-flow policies into low-level monitor code. Finally, we verify the correctness of this generator using a dedicated Hoare logic that abstracts from low-level machine instructions into a reusable set of verified structured code generators.
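
    As a toy illustration of dynamic tag propagation in the spirit of such tagged architectures (this is not the SAFE machine or its formal model), the following Python sketch joins confidentiality tags on computation and blocks output of secret-tagged values to a public sink.

        # Toy dynamic information-flow tracking (NOT the SAFE machine): every value
        # carries a confidentiality tag, tags are joined on computation, and output
        # to a public channel is refused when the tag is secret.
        from dataclasses import dataclass

        LOW, HIGH = 0, 1   # two-point lattice of confidentiality labels

        @dataclass
        class Tagged:
            value: int
            tag: int       # LOW or HIGH

        def add(a: Tagged, b: Tagged) -> Tagged:
            # The result is at least as secret as either operand (label join).
            return Tagged(a.value + b.value, max(a.tag, b.tag))

        def output_public(x: Tagged):
            if x.tag == HIGH:
                raise RuntimeError("information-flow violation: secret data to public sink")
            print(x.value)

        salary = Tagged(90000, HIGH)
        bonus = Tagged(100, LOW)
        output_public(bonus)                   # allowed
        try:
            output_public(add(salary, bonus))  # blocked: the sum is tagged HIGH
        except RuntimeError as err:
            print(err)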