
    Automating test oracles generation

    Software systems play an increasingly important role in our everyday life, and many relevant human activities nowadays involve the execution of a piece of software. Software has to be reliable to deliver the expected behavior, and assessing the quality of software is of primary importance to reduce the risk of runtime errors. Software testing is the most common quality assessment technique for software. Testing consists of running the system under test on a finite set of inputs and checking the correctness of the results. Thoroughly testing a software system is expensive and requires a lot of manual work to define test inputs (the stimuli used to trigger different software behaviors) and test oracles (the decision procedures that check the correctness of the results). Researchers have addressed the cost of testing by proposing techniques to automatically generate test inputs. While the generation of test inputs is well supported, there is no way to generate cost-effective test oracles: existing techniques to produce test oracles are either too expensive to be applied in practice, or produce oracles with limited effectiveness that can only identify blatant failures such as system crashes.

    Our intuition is that cost-effective test oracles can be generated from information produced as a byproduct of normal development activities. The goal of this thesis is to create test oracles that can detect faults leading to semantic and non-trivial errors, and that have a reasonable generation cost. We propose two ways to generate test oracles: one derives oracles from software redundancy, the other from the natural language comments that document the source code of software systems.

    We present a technique that exploits redundant sequences of method calls, which encode the software redundancy, to automatically generate test oracles named CCOracles. We describe how CCOracles are automatically generated, deployed, and executed, and we demonstrate their effectiveness by measuring their fault-finding ability when combined with both automatically generated and hand-written test inputs. We also present Toradocu, a technique that derives executable specifications from the Javadoc comments of Java constructors and methods. From such specifications, Toradocu generates test oracles that are then deployed into existing test suites to assess the outputs of given test inputs. We empirically evaluate Toradocu, showing that it accurately translates Javadoc comments into procedure specifications and that its oracles effectively identify semantic faults in the SUT. CCOracles and Toradocu oracles stem from independent information sources and are complementary, in the sense that they check different aspects of the system under test.
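
    As a rough illustration of the two kinds of oracles the abstract describes, the Java sketch below hand-codes a cross-checking oracle over redundant java.util.List call sequences (in the spirit of CCOracles) and an exception oracle of the kind Toradocu derives from a Javadoc @throws clause. The class and the specific checks are hypothetical examples, not output of the actual tools.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Illustrative sketch only: hypothetical examples of the two oracle kinds,
// not code generated by CCOracles or Toradocu.
public class OracleSketch {

    // Cross-checking oracle: two redundant call sequences on java.util.List
    // are expected to leave the lists in equivalent states.
    static void crossCheckOracle() {
        List<String> direct = new ArrayList<>();
        direct.add("x");

        List<String> redundant = new ArrayList<>();
        redundant.addAll(Collections.singletonList("x"));

        // The oracle reports a failure if the two equivalent sequences diverge.
        if (!direct.equals(redundant)) {
            throw new AssertionError("cross-check failed: redundant sequences diverged");
        }
    }

    // Comment-derived oracle: based on the Javadoc of List.get, which documents
    // "@throws IndexOutOfBoundsException if the index is out of range".
    static void commentDerivedOracle() {
        List<String> list = new ArrayList<>();
        boolean threw = false;
        try {
            list.get(1); // index out of range for an empty list
        } catch (IndexOutOfBoundsException expected) {
            threw = true;
        }
        if (!threw) {
            throw new AssertionError("expected IndexOutOfBoundsException was not thrown");
        }
    }

    public static void main(String[] args) {
        crossCheckOracle();
        commentDerivedOracle();
        System.out.println("both oracles passed");
    }
}
```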

    R2O, an extensible and semantically based database-to-ontology mapping language

    We present R2O, an extensible and declarative language to describe mappings between relational database schemas and ontologies implemented in RDF(S) or OWL. R2O provides an extensible set of primitives with well-defined semantics. The language has been conceived to be expressive enough to cope with complex mapping cases arising from situations of low similarity between the ontology and the database models.
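
    The sketch below is not R2O syntax; it merely hand-codes, with Apache Jena, the kind of row-to-individual correspondence that an R2O document would declare declaratively. The table, columns, namespace, and property names are invented for the example.

```java
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.vocabulary.RDF;

// Hand-coded illustration of mapping one relational row to an ontology
// individual; an R2O mapping would express this correspondence declaratively.
public class RowToOntologySketch {
    public static void main(String[] args) {
        // Pretend this row came from a table PERSON(id, full_name).
        int id = 42;
        String fullName = "Ada Lovelace";

        String ns = "http://example.org/onto#";
        Model model = ModelFactory.createDefaultModel();
        model.setNsPrefix("ex", ns);

        Resource person = model.createResource(ns + "person/" + id);
        person.addProperty(RDF.type, model.createResource(ns + "Person"));
        person.addProperty(model.createProperty(ns, "fullName"), fullName);

        model.write(System.out, "TURTLE"); // emit the mapped triples
    }
}
```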

    Designing Programming Languages for Writing Maintainable Software

    Maintainability is crucial to the long-term success of software projects. Among other factors, it is affected by the programming language in which the software is written. Programming language designers should be conscious of how their design decisions can influence software maintainability. Non-functional properties of a language can affect the readability of source code in ways beyond the control of programmers. Language features can cause or prevent certain classes of bugs, and runtime issues in particular can require significant maintenance effort. Tools external to the language, especially those developed and distributed by language implementers, can aid in the creation of maintainable software. Languages designed with these aspects in mind will ease the burden placed on software maintainers by facilitating the development of robust, high-quality software.

    OpenJML: Software verification for Java 7 using JML, OpenJDK, and Eclipse

    OpenJML is a tool for checking the code and specifications of Java programs. We describe our experience building the tool on the foundation of JML, OpenJDK, and Eclipse, as well as on many advances in specification-based software verification. The implementation demonstrates the value of integrating specification tools directly in the software development IDE and of automating as many tasks as possible. The tool, though still in progress, has now been used for several college-level courses on software specification and verification and for small-scale studies on existing Java programs. (Comment: In Proceedings F-IDE 2014, arXiv:1404.578)
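
    For readers unfamiliar with JML, the small class below shows the kind of annotated Java code OpenJML can check against its specification, either statically or at runtime; the class and its contract are invented for this illustration, not taken from the paper.

```java
// Illustrative JML-annotated class; the contract is made up for this example.
public class Account {
    private /*@ spec_public @*/ int balance;

    //@ requires amount > 0 && balance <= Integer.MAX_VALUE - amount;
    //@ ensures balance == \old(balance) + amount;
    //@ ensures \result == balance;
    public int deposit(int amount) {
        balance = balance + amount;
        return balance;
    }
}
```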

    Deep Just-In-Time Inconsistency Detection Between Comments and Source Code

    Natural language comments convey key aspects of source code such as implementation, usage, and pre- and post-conditions. Failure to update comments accordingly when the corresponding code is modified introduces inconsistencies, which is known to lead to confusion and software bugs. In this paper, we aim to detect whether a comment becomes inconsistent as a result of changes to the corresponding body of code, in order to catch potential inconsistencies just-in-time, i.e., before they are committed to a code base. To achieve this, we develop a deep-learning approach that learns to correlate a comment with code changes. By evaluating on a large corpus of comment/code pairs spanning various comment types, we show that our model outperforms multiple baselines by significant margins. For extrinsic evaluation, we show the usefulness of our approach by combining it with a comment update model to build a more comprehensive automatic comment maintenance system which can both detect and resolve inconsistent comments based on code changes.Comment: Accepted in AAAI 202
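
    A hypothetical example of the kind of just-in-time inconsistency the model targets is sketched below: a later edit to the method body invalidates the existing comment, which should be flagged before the change is committed. The class and comment are invented for illustration and are not drawn from the paper's corpus.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.NoSuchElementException;

// Illustrative comment/code inconsistency: the Javadoc still promises null,
// but a later code change made the method throw instead.
public class Cache {
    private final Map<String, String> entries = new HashMap<>();

    /** Returns the cached value, or null if the key is absent. */
    public String lookup(String key) {
        String value = entries.get(key);
        if (value == null) {
            // This edit contradicts the comment above; a just-in-time checker
            // should flag the pair as inconsistent before commit.
            throw new NoSuchElementException("no entry for " + key);
        }
        return value;
    }
}
```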

    A study of the methodologies currently available for the maintenance of the knowledge-base in an expert system

    This research studies currently available maintenance methodologies for expert system knowledge bases and classifies them taxonomically according to the functions they perform. The classification falls into two broad categories: (1) methodologies for building a more maintainable expert system knowledge base, covering techniques applicable to the development phases, including software engineering approaches as well as other approaches; and (2) methodologies for maintaining an existing knowledge base, concerned with the continued maintenance of an existing knowledge base and divided into three subsections. The first subsection discusses tools and techniques that aid the understanding of a knowledge base, the second looks at tools that facilitate the actual modification of the knowledge base, and the last examines tools used for the verification or validation of the knowledge base. Every main methodology or tool selected for this study is analysed according to the function it was designed to perform (or its objective), the concept or principles behind it, and its implementation details; a general comment follows at the end of each analysis.

    Although expert systems as a rule contain a significant amount of information related to the user interface, database interface, integration with conventional software for numerical calculations, and integration with other knowledge bases through blackboard systems or network interactions, this research is confined to the maintenance of the knowledge base only and does not address the maintenance of these interfaces. Also not included in this thesis are Truth Maintenance Systems. While a Truth Maintenance System (TMS) automatically updates a knowledge base during execution time, these update operations are not considered 'maintenance' in the sense used in this thesis. Maintenance in the context of this thesis refers to perfective, adaptive, and corrective maintenance (see introduction to chapter 4); a TMS, on the other hand, refers to a collection of techniques for performing belief revision (Martin, 1990), that is, it maintains a set of beliefs or facts in the knowledge base to ensure that they remain consistent during execution time. From this perspective, a TMS is not regarded as a knowledge base maintenance tool for the purposes of this study.