
    Automated Software Testing in the DoD: Current Practices and Opportunities for Improvement

    The concept of automating the testing of software-intensive systems has been around for decades, but the practice of automating testing is scarce in many industries, especially in the government defense sector. A one-year project initiated by the Office of the Secretary of Defense (OSD), Scientific Test and Analysis Techniques Center of Excellence (STAT COE) and sponsored by Navy OPNAV N94 set out to: study the degree to which the Department of Defense (DoD) has adopted automated software testing (AST); share the best software practices used by industry; and develop and distribute an AST implementation guide intended for program management and novice DoD software test automators. The Current State of Automated Software Testing in the Department of Defense, AST Practices and Pitfalls Guide, and the AST Implementation Guide are available at www.afit.edu/stat

    A Generic Framework for Automated Quality Assurance of Software Models –Implementation of an Abstract Syntax Tree

    Abstract Syntax Trees (ASTs) are used in language tools such as compilers, language translators, transformers, and analysers to abstract away concrete syntax, which makes them an ideal construct for a language-independent tool. ASTs are also commonly used in static analysis. This increases the value of ASTs for use within a universal Quality Assurance (QA) tool. The Object Management Group (OMG) has outlined a Generic AST Meta-model (GASTM) which may be used to implement the internal representation (IR) for such a tool. This paper discusses the implementation of, and modifications made to, the previously published proposal to use the OMG's GASTM core components as the internal representation for an automated quality assurance framework. Keywords: software quality assurance; software testing; automated software engineering; programming language paradigms; language independence; abstract syntax tree; static analysis; dynamic analysis
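
    As a rough illustration of the idea, a language-independent AST node type can serve as the IR that QA checks run against. The sketch below is a minimal Python example with invented node kinds and an illustrative rule, not the GASTM core components themselves.

```python
# Minimal sketch of a language-independent AST used as the internal
# representation (IR) of a QA tool. Node kinds and the example rule are
# illustrative, not the OMG GASTM core components themselves.
from dataclasses import dataclass, field
from typing import Iterator, List


@dataclass
class AstNode:
    kind: str                      # e.g. "Function", "If", "Return"
    name: str = ""                 # identifier, if any
    children: List["AstNode"] = field(default_factory=list)

    def walk(self) -> Iterator["AstNode"]:
        yield self
        for child in self.children:
            yield from child.walk()


def check_function_length(root: AstNode, max_statements: int = 50) -> List[str]:
    """Example static-analysis rule: flag overly large functions."""
    findings = []
    for node in root.walk():
        if node.kind == "Function":
            body_size = sum(1 for _ in node.walk()) - 1
            if body_size > max_statements:
                findings.append(f"Function '{node.name}' has {body_size} nodes")
    return findings


# Any language front end can populate this IR, so the same rule runs
# unchanged over sources written in different languages.
tree = AstNode("Function", "parse_input", [AstNode("If"), AstNode("Return")])
print(check_function_length(tree, max_statements=1))
```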

    FixMiner: Mining Relevant Fix Patterns for Automated Program Repair

    Patching is a common activity in software development. It is generally performed on a source code base to address bugs or add new functionalities. In this context, given the recurrence of bugs across projects, the associated similar patches can be leveraged to extract generic fix actions. While the literature includes various approaches leveraging similarity among patches to guide program repair, these approaches often do not yield fix patterns that are tractable and reusable as actionable input to APR systems. In this paper, we propose a systematic and automated approach to mining relevant and actionable fix patterns based on an iterative clustering strategy applied to atomic changes within patches. The goal of FixMiner is thus to infer separate and reusable fix patterns that can be leveraged in other patch generation systems. Our technique, FixMiner, leverages the Rich Edit Script, a specialized tree structure of edit scripts that captures the AST-level context of code changes. FixMiner uses a different tree representation of Rich Edit Scripts for each round of clustering to identify similar changes: abstract syntax trees, edit action trees, and code context trees. We have evaluated FixMiner on thousands of software patches collected from open source projects. Preliminary results show that we are able to mine accurate patterns, efficiently exploiting the change information in Rich Edit Scripts. We further integrated the mined patterns into an automated program repair prototype, PARFixMiner, with which we are able to correctly fix 26 bugs of the Defects4J benchmark. Beyond this quantitative performance, we show that the mined fix patterns are sufficiently relevant to produce patches with a high probability of correctness: 81% of PARFixMiner's generated plausible patches are correct. Comment: 31 pages, 11 figures
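
    The following is a minimal sketch of the iterative-clustering idea only, with invented patch views rather than FixMiner's actual Rich Edit Script representations: each round clusters patches on a progressively more concrete tree view, so clusters formed on abstract shapes are refined in later rounds.

```python
# Sketch of the iterative clustering idea (illustrative, not FixMiner's
# implementation): each patch is reduced to successively richer tree views,
# and patches sharing the same views up to a given round fall into the same
# cluster, yielding a candidate fix pattern.
from collections import defaultdict
from typing import Dict, List, Tuple

# Three views per patch, from most abstract to most concrete (placeholders).
patches: Dict[str, Tuple[str, ...]] = {
    "patch-1": ("UPDATE IfStmt", "UPDATE IfStmt > InfixExpr", "if (x != null)"),
    "patch-2": ("UPDATE IfStmt", "UPDATE IfStmt > InfixExpr", "if (y != null)"),
    "patch-3": ("INSERT TryStmt", "INSERT TryStmt > Block", "try { ... }"),
}


def iterative_clustering(data: Dict[str, Tuple[str, ...]],
                         rounds: int) -> Dict[Tuple[str, ...], List[str]]:
    clusters: Dict[Tuple[str, ...], List[str]] = defaultdict(list)
    for name, views in data.items():
        # The cluster key grows by one view per round, so later rounds only
        # split clusters formed in earlier, more abstract rounds.
        clusters[views[:rounds]].append(name)
    return dict(clusters)


# Round 1 groups patches by abstract shape; round 3 separates concrete contexts.
print(iterative_clustering(patches, rounds=1))
print(iterative_clustering(patches, rounds=3))
```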

    Mining Fix Patterns for FindBugs Violations

    In this paper, we first collect and track a large number of fixed and unfixed violations across revisions of software. The empirical analyses reveal discrepancies in the distributions of violations that are detected and those that are fixed, in terms of occurrences, spread, and categories, which can provide insights into prioritizing violations. To automatically identify patterns in violations and their fixes, we propose an approach that utilizes convolutional neural networks to learn features and clustering to regroup similar instances. We then evaluate the usefulness of the identified fix patterns by applying them to unfixed violations. The results show that developers will accept and merge a majority (69/116) of fixes generated from the inferred fix patterns. It is also noteworthy that the yielded patterns are applicable to four real bugs in the Defects4J benchmark for software testing and automated repair. Comment: Accepted for IEEE Transactions on Software Engineering
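
    A toy sketch of the pipeline shape described above follows: violation snippets are mapped to fixed-length vectors (standing in for features learned by a convolutional neural network) and then clustered to regroup similar instances. The snippets, dimensions, and hashed embedding are placeholders, not the paper's setup.

```python
# Toy sketch of the feature-learning-plus-clustering pipeline: tokenized
# violation code is mapped to a fixed-length vector (a stand-in for learned
# CNN features) and then clustered to regroup similar violations.
import numpy as np
from sklearn.cluster import KMeans

violations = [
    "if (s == null) return;",
    "if (obj == null) return null;",
    "stream.close();",
    "reader.close();",
]


def embed(snippet: str, dim: int = 16) -> np.ndarray:
    """Hashed bag-of-tokens vector; a trained CNN encoder replaces this step."""
    vec = np.zeros(dim)
    for token in snippet.replace("(", " ").replace(")", " ").split():
        vec[hash(token) % dim] += 1.0
    return vec / max(np.linalg.norm(vec), 1e-9)


features = np.stack([embed(v) for v in violations])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
for snippet, label in zip(violations, labels):
    print(label, snippet)   # similar violations should share a cluster label
```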

    An automated model-based test oracle for access control systems

    In the context of XACML-based access control systems, intensive testing is among the most widely adopted means of assuring that sensitive information or resources are accessed correctly. Unfortunately, it requires a huge effort for manual inspection of results; thus, automated verdict derivation is a key aspect of improving the cost-effectiveness of testing. To this purpose, we introduce XACMET, a novel approach for automated model-based oracle definition. XACMET defines a typed graph, called the XAC-Graph, that models XACML policy evaluation. The expected verdict of a specific request execution can thus be automatically derived by executing the corresponding path in this graph. Our validation of the XACMET prototype implementation confirms the effectiveness of the proposed approach. Comment: 7 pages
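
    For illustration only, the sketch below models policy evaluation as an ordered path of rule nodes and derives the expected verdict by walking the path a request takes; the rules and combining logic are invented and are not XACMET's actual XAC-Graph.

```python
# Illustrative sketch (not XACMET's XAC-Graph): model policy evaluation as a
# sequence of rule nodes, walk the path a request takes through it, and use
# the terminal effect as the expected test verdict instead of inspecting
# results by hand.
from typing import Callable, Dict, List, Tuple

Request = Dict[str, str]
Rule = Tuple[Callable[[Request], bool], str]   # (condition, effect)

# First-applicable combining of rules, as an ordered path of nodes.
policy_path: List[Rule] = [
    (lambda r: r["role"] == "admin", "Permit"),
    (lambda r: r["role"] == "user" and r["action"] == "read", "Permit"),
    (lambda r: True, "Deny"),                  # default node
]


def expected_verdict(request: Request) -> str:
    """Derive the oracle verdict by executing the request's path."""
    for condition, effect in policy_path:
        if condition(request):
            return effect
    return "NotApplicable"


# The derived verdict is compared against the policy engine's actual response.
print(expected_verdict({"role": "user", "action": "read"}))   # Permit
print(expected_verdict({"role": "user", "action": "write"}))  # Deny
```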

    A Neural Model for Generating Natural Language Summaries of Program Subroutines

    Source code summarization -- creating natural language descriptions of source code behavior -- is a rapidly-growing research topic with applications to automatic documentation generation, program comprehension, and software maintenance. Traditional techniques relied on heuristics and templates built manually by human experts. Recently, data-driven approaches based on neural machine translation have largely overtaken template-based systems. But nearly all of these techniques rely almost entirely on programs having good internal documentation; without clear identifier names, the models fail to create good summaries. In this paper, we present a neural model that combines words from code with code structure from an AST. Unlike previous approaches, our model processes each data source as a separate input, which allows the model to learn code structure independently of the text in code. This process helps our approach provide coherent summaries in many cases even when zero internal documentation is provided. We evaluate our technique with a dataset we created from 2.1m Java methods. We find improvement over two baseline techniques from the SE literature and one from the NLP literature.
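
    A minimal sketch of the two-input idea, not the paper's architecture: code tokens and flattened AST node labels are encoded separately and then combined, so structural information is learned independently of identifier text. All embeddings and names below are placeholders.

```python
# Minimal sketch of the two-input idea: encode code tokens and AST node
# labels separately, then combine them into one context vector. A real
# model would use trained embeddings and an attentional decoder; every
# value here is a placeholder.
import numpy as np

DIM = 8
rng = np.random.default_rng(0)
embeddings: dict = {}


def embed(symbols, dim=DIM):
    """Average of per-symbol vectors (stand-in for a learned encoder)."""
    vecs = []
    for s in symbols:
        if s not in embeddings:
            embeddings[s] = rng.standard_normal(dim)
        vecs.append(embeddings[s])
    return np.mean(vecs, axis=0)


code_tokens = ["public", "int", "getSize", "(", ")", "return", "size"]
ast_nodes = ["MethodDeclaration", "PrimitiveType", "ReturnStatement", "SimpleName"]

# Separate encoders, concatenated into one context vector for a decoder.
context = np.concatenate([embed(code_tokens), embed(ast_nodes)])
print(context.shape)  # (16,) -- this vector would feed a summary decoder
```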

    Automated Functional Testing based on the Navigation of Web Applications

    Web applications are becoming more and more complex, and testing them is an intricate, hard, and time-consuming activity. Therefore, testing is often poorly performed or skipped by practitioners. Test automation can help to avoid this situation. Hence, this paper presents a novel approach to perform automated software testing for web applications based on their navigation. On the one hand, web navigation is the process of traversing a web application using a browser. On the other hand, functional requirements are actions that an application must perform. Therefore, evaluating the correct navigation of a web application amounts to assessing its specified functional requirements. The proposed method automates testing at four levels: test case generation, test data derivation, test case execution, and test case reporting. This method is driven by three kinds of inputs: i) UML models; ii) Selenium scripts; iii) XML files. We have implemented our approach in an open-source testing framework named Automatic Testing Platform. The validation of this work has been carried out by means of a case study, in which the target is a real invoice management system developed using a model-driven approach. Comment: In Proceedings WWV 2011, arXiv:1108.208
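
    As a hedged example of the kind of navigation-driven test such an approach generates and executes, the Selenium sketch below logs into a hypothetical invoice application and uses the reached page as the oracle; the URL, element IDs, and expected text are invented, and the real framework derives them from UML models, Selenium scripts, and XML files.

```python
# Hedged sketch of a navigation-driven functional test. URLs, element IDs,
# credentials, and expected texts are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
try:
    # Step 1: navigate to the entry page of the application under test.
    driver.get("http://localhost:8080/invoices/login")

    # Step 2: exercise a functional requirement by traversing the UI.
    driver.find_element(By.ID, "username").send_keys("tester")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "login-button").click()

    # Step 3: the reached page acts as the oracle for correct navigation.
    heading = driver.find_element(By.TAG_NAME, "h1").text
    assert "Invoices" in heading, f"Unexpected page after login: {heading}"
    print("Navigation test passed")
finally:
    driver.quit()
```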

    Learning to Prove Theorems via Interacting with Proof Assistants

    Humans prove theorems by relying on substantial high-level reasoning and problem-specific insights. Proof assistants offer a formalism that resembles human mathematical reasoning, representing theorems in higher-order logic and proofs as high-level tactics. However, human experts have to construct proofs manually by entering tactics into the proof assistant. In this paper, we study the problem of using machine learning to automate the interaction with proof assistants. We construct CoqGym, a large-scale dataset and learning environment containing 71K human-written proofs from 123 projects developed with the Coq proof assistant. We develop ASTactic, a deep learning-based model that generates tactics as programs in the form of abstract syntax trees (ASTs). Experiments show that ASTactic trained on CoqGym can generate effective tactics and can be used to prove new theorems not previously provable by automated methods. Code is available at https://github.com/princeton-vl/CoqGym. Comment: Accepted to ICML 2019
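
    The sketch below illustrates the interaction loop described in the abstract using a hypothetical environment interface; the ProofEnv class, tactic grammar, and suggestion function are stand-ins rather than the CoqGym API, showing only how tactics expressed as small ASTs could be serialized and sent to a proof assistant step by step.

```python
# Hypothetical sketch of model/proof-assistant interaction. ProofEnv and the
# tactic grammar are placeholders, not the CoqGym API: the model emits tactics
# as small ASTs that are serialized and applied one step at a time.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class TacticNode:
    """Tiny tactic AST: an operator applied to optional arguments."""
    op: str                         # e.g. "intro", "apply", "auto"
    args: List[str]

    def serialize(self) -> str:
        return " ".join([self.op, *self.args]).strip() + "."


class ProofEnv:
    """Placeholder environment; a real one would wrap the proof assistant."""
    def __init__(self, goals: int) -> None:
        self.goals = goals

    def step(self, tactic: str) -> bool:
        self.goals -= 1                      # pretend every tactic closes a goal
        return self.goals <= 0               # True when the proof is finished


def prove(env: ProofEnv, suggest) -> Optional[List[str]]:
    script: List[str] = []
    for _ in range(100):                     # bounded search
        tactic = suggest().serialize()
        script.append(tactic)
        if env.step(tactic):
            return script
    return None


print(prove(ProofEnv(goals=2), lambda: TacticNode("auto", [])))
```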