
    END - A Lightweight Algorithm to Estimate the Number of Defects in Software

    Defect estimation provides information on how many defects a given software application is likely to have. Existing approaches are usually based on time-consuming model-based techniques. A viable alternative is the previously presented Abacus algorithm, which is based on Bayesian fault diagnosis. This paper presents a novel alternative approach, coined End, that uses the same input and produces the same output as the Abacus algorithm, but is considerably more time efficient. An experiment was conducted to compare both the accuracy and performance of the two algorithms. The End algorithm matched the accuracy of the Abacus algorithm, but ran faster in the majority of executions.
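
    The abstract does not describe how either algorithm works internally. As a rough, hedged illustration of lightweight defect-count estimation, the Python sketch below uses capture-recapture (Lincoln-Petersen) estimation, a classic technique for this problem; it is explicitly not the End or Abacus algorithm, and the defect sets are invented.

        def estimate_total_defects(found_by_a, found_by_b):
            # Lincoln-Petersen capture-recapture: two independent test
            # campaigns find overlapping defect sets; the size of the
            # overlap calibrates an estimate of the total defect count.
            overlap = len(found_by_a & found_by_b)
            if overlap == 0:
                raise ValueError("no common defects: estimate undefined")
            return len(found_by_a) * len(found_by_b) / overlap

        # Campaign A found 8 defects, campaign B found 6, with 4 in common:
        # the estimated total is 8 * 6 / 4 = 12 defects.
        print(estimate_total_defects({1, 2, 3, 4, 5, 6, 7, 8}, {3, 4, 5, 6, 9, 10}))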

    The role of limitations and SLAs in the API industry

    As software architecture design evolves toward a microservice paradigm, RESTful APIs are being established as the preferred choice for building applications. In this scenario, there is a shift towards a growing market of APIs where providers offer different service levels with tailored limitations, typically based on cost. In this context, while there are well-established standards to describe the functional elements of APIs (such as the OpenAPI Specification), a standard model for Service Level Agreements (SLAs) for APIs could boost an open ecosystem of tools that would improve the industry by automating tasks during development, such as SLA-aware scaffolding, SLA-aware testing, or SLA-aware requesters. Unfortunately, although there have been several proposals over the past decades to describe SLAs for software in general and web services in particular, there is still no widely used standard, due to the complex landscape of concepts surrounding the notion of SLAs and the multiple perspectives that can be addressed. In this paper, we aim to analyze the landscape of SLAs for APIs in two directions: i) clarifying the SLA-driven API development lifecycle, its activities, and its participants; ii) developing a catalog of relevant concepts and a subsequent prioritization based on different perspectives from both industry and academia. As a main result, we present a scored list of concepts that paves the way to a concrete road-map for a standard, industry-aligned specification to describe SLAs in APIs.

    Funding: Ministerio de Economía y Competitividad TIN2015-70560-R; Ministerio de Ciencia, Innovación y Universidades RTI2018-101204-B-C21; Ministerio de Educación, Cultura y Deporte FPU15/0298
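
    One tool category mentioned above is the SLA-aware requester. As a minimal sketch of that idea: the SLA_PLAN descriptor and its field names below are hypothetical stand-ins, not taken from the OpenAPI Specification or any published SLA proposal; the client simply spaces out its calls so the plan's rate limit is never exceeded.

        import time

        # Hypothetical SLA plan descriptor; the field names are illustrative
        # only, not part of any published SLA-for-APIs specification.
        SLA_PLAN = {
            "plan": "free",
            "rate_limit": {"max_requests": 5, "period_seconds": 1.0},
        }

        class SlaAwareRequester:
            """Client-side throttle that keeps calls within the plan's limit."""

            def __init__(self, plan):
                limit = plan["rate_limit"]
                self.min_interval = limit["period_seconds"] / limit["max_requests"]
                self.last_call = 0.0

            def request(self, send):
                # Delay just enough to never exceed max_requests per period.
                wait = self.min_interval - (time.monotonic() - self.last_call)
                if wait > 0:
                    time.sleep(wait)
                self.last_call = time.monotonic()
                return send()

        requester = SlaAwareRequester(SLA_PLAN)
        for _ in range(3):
            requester.request(lambda: print("request sent"))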

    Understanding Concurrency Vulnerabilities in Linux Kernel

    While there is a large body of work on analyzing concurrency-related software bugs and developing techniques for detecting and patching them, little attention has been given to concurrency-related security vulnerabilities. The two are different in that not all bugs are vulnerabilities: for a bug to be exploitable, there needs to be a way for attackers to trigger its execution and cause damage, e.g., by revealing sensitive data or running malicious code. To fill the gap, we conduct the first empirical study of concurrency vulnerabilities reported in the Linux operating system in the past ten years. We focus on analyzing the confirmed vulnerabilities archived in the Common Vulnerabilities and Exposures (CVE) database, which we categorize into groups based on bug types, exploit patterns, and the patch strategies adopted by developers. We use code snippets to illustrate individual vulnerability types and patch strategies, and statistics to illustrate the entire landscape, including the percentage of each vulnerability type. We hope to shed some light on the problem: concurrency vulnerabilities continue to pose a serious threat to system security, and it is difficult even for kernel developers to analyze and patch them. Therefore, more effort is needed to develop tools and techniques for analyzing and patching these vulnerabilities.
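
    As a minimal illustration of why concurrency bugs become exploitable, the sketch below recreates the check-then-act pattern behind many time-of-check-to-time-of-use (TOCTOU) races, using Python threads rather than the kernel C code the paper studies; the bank-account scenario is invented.

        import threading
        import time

        balance = 100

        def withdraw(amount):
            global balance
            # Check-then-act race: the check and the update are not atomic,
            # so both threads can pass the check before either subtracts.
            if balance >= amount:
                time.sleep(0.01)      # widen the race window for the demo
                balance -= amount

        threads = [threading.Thread(target=withdraw, args=(100,)) for _ in range(2)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print("balance:", balance)    # typically -100: the account is overdrawn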

    SAVCBS 2004 Specification and Verification of Component-Based Systems: Workshop Proceedings

    This is the proceedings of the 2004 SAVCBS workshop. The workshop is concerned with how formal (i.e., mathematical) techniques can be, or should be, used to establish a suitable foundation for the specification and verification of component-based systems. Component-based systems are a growing concern for the software engineering community. Specification and reasoning techniques are urgently needed to permit composition of systems from components. Component-based specification and verification are also vital for scaling advanced verification techniques, such as extended static analysis and model checking, to the size of real systems. The workshop considers the formalization of both functional and non-functional behavior, such as performance or reliability.
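
    To make the notion of a component specification concrete, here is a minimal design-by-contract sketch in Python: a precondition and postcondition attached to an operation and checked at run time. The contract decorator is a toy illustration, not a formalism taken from the proceedings.

        def contract(pre, post):
            """Attach a pre- and postcondition to a component operation."""
            def wrap(f):
                def checked(*args):
                    assert pre(*args), "precondition violated"
                    result = f(*args)
                    assert post(result, *args), "postcondition violated"
                    return result
                return checked
            return wrap

        @contract(pre=lambda xs: len(xs) > 0,
                  post=lambda r, xs: r in xs and all(r <= x for x in xs))
        def minimum(xs):
            return min(xs)

        print(minimum([3, 1, 2]))   # 1; the contract is checked at run time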

    Integration of safety risk assessment techniques into requirement elicitation

    Incomplete and incorrect requirements may cause safety-related software systems to fail to achieve their safety goals. It is crucial to ensure software safety by identifying proper software safety requirements during the requirements elicitation activity. Practitioners apply various Safety Risk Assessment Techniques (SRATs) to identify, analyze, and assess safety risk. Nevertheless, there is a lack of guidance on how appropriate SRATs and safety processes can be integrated into the requirements elicitation activity to bridge the gap between safety and requirements engineering practices. In this research, we propose an Integration Framework that integrates safety activities and techniques into the existing requirements elicitation activity.
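
    Many SRATs reduce to scoring hazards on a risk matrix. The sketch below shows that step with a hypothetical 1-5 severity/likelihood scale and invented thresholds; it illustrates the general technique, not the proposed Integration Framework.

        # Hypothetical risk matrix: severity and likelihood on 1-5 scales.
        # The thresholds are illustrative, not taken from the paper.
        def assess(severity, likelihood):
            score = severity * likelihood
            if score >= 15:
                return "intolerable: derive a safety requirement"
            if score >= 8:
                return "undesirable: mitigate or justify"
            return "acceptable"

        hazards = [("sensor dropout", 5, 3), ("late log write", 2, 2)]
        for name, severity, likelihood in hazards:
            print(name, "->", assess(severity, likelihood))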

    GJXDM Documents and Small Law Enforcement Agencies

    The purpose of this paper is to demonstrate that while the Global Justice XML Data Model (GJXDM) is a complete and effective solution for criminal justice agencies, it is complex to implement and difficult for smaller law enforcement agencies to put into practice. The paper presents the current implementation steps for a new GJXDM document and describes the process of implementing an existing GJXDM document. The paper also presents a tool that agencies can use to start using and processing GJXDM documents. Also offered within the paper is a design for a central repository to increase GJXDM information sharing and the dissemination of GJXDM software artifacts.
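
    Since GJXDM documents are XML, a small agency's first processing step is usually plain parsing. The sketch below does that with Python's standard library; the element names and namespace are illustrative placeholders, not the actual GJXDM schema.

        import xml.etree.ElementTree as ET

        # Toy GJXDM-style document; the element names and the namespace are
        # invented for illustration, not taken from the real GJXDM schema.
        doc = """
        <IncidentReport xmlns:j="http://example.org/gjxdm-like">
          <j:Incident>
            <j:ActivityID>2024-0117</j:ActivityID>
            <j:IncidentCategoryText>Burglary</j:IncidentCategoryText>
          </j:Incident>
        </IncidentReport>
        """

        ns = {"j": "http://example.org/gjxdm-like"}
        root = ET.fromstring(doc)
        for incident in root.findall("j:Incident", ns):
            print(incident.findtext("j:ActivityID", namespaces=ns),
                  incident.findtext("j:IncidentCategoryText", namespaces=ns))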

    Parallel bug-finding in concurrent programs via reduced interleaving instances

    Concurrency poses a major challenge for program verification, but it can also offer an opportunity to scale when subproblems can be analysed in parallel. We exploit this opportunity here and use a parametrizable code-to-code translation to generate a set of simpler program instances, each capturing a reduced set of the original program’s interleavings. These instances can then be checked independently in parallel. Our approach does not depend on the tool that is chosen for the final analysis, is compatible with weak memory models, and amplifies the effectiveness of existing tools, making them find bugs faster and with fewer resources. We use Lazy-CSeq as an off-the-shelf final verifier to demonstrate that our approach is able, already with a small number of cores, to find bugs in the hardest known concurrency benchmarks in a matter of minutes, whereas other dynamic and static tools fail to do so in hours.
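
    The sketch below mimics the idea at toy scale: enumerate the interleavings of two tiny threads, split them into reduced instances, and check the instances in parallel worker processes. It is a Python simplification of the general scheme, not the paper's code-to-code translation, and the 'program' and its assertion are invented.

        from itertools import combinations
        from multiprocessing import Pool

        # Toy shared-memory program: two threads, each a list of atomic steps.
        def t1_a(s): s["x"] = 1
        def t1_b(s): s["y"] = s["x"] + 1          # intends y == 2
        def t2_a(s): s["x"] = 5
        T1, T2 = [t1_a, t1_b], [t2_a]

        def schedules():
            # Every interleaving, encoded as the slots taken by T1's steps.
            n = len(T1) + len(T2)
            return list(combinations(range(n), len(T1)))

        def check_chunk(chunk):
            # Check one reduced instance (a subset of the interleavings).
            bugs = []
            for slots in chunk:
                state, i, j = {"x": 0, "y": 0}, 0, 0
                for pos in range(len(T1) + len(T2)):
                    if pos in slots:
                        T1[i](state); i += 1
                    else:
                        T2[j](state); j += 1
                if state["y"] != 2:               # the bug: T2 raced in between
                    bugs.append(slots)
            return bugs

        if __name__ == "__main__":
            all_schedules = schedules()
            chunks = [all_schedules[k::2] for k in range(2)]  # two instances
            with Pool(2) as pool:
                print("buggy schedules:", sum(pool.map(check_chunk, chunks), []))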

    A scalable mixed-level approach to dynamic analysis of C and C++ programs

    Thesis (M.Eng.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. Includes bibliographical references (p. 107-112).
    This thesis addresses the difficult task of constructing robust and scalable dynamic program analysis tools for programs written in memory-unsafe languages such as C and C++, especially tools that observe the contents of data structures at run time. In this thesis, I first introduce my novel mixed-level approach to dynamic analysis, which combines the advantages of both source- and binary-based approaches. Second, I present a tool framework that embodies the mixed-level approach. This framework provides memory safety guarantees, allows tools built upon it to access rich source- and binary-level information simultaneously at run time, and enables tools to scale to large, real-world C and C++ programs on the order of millions of lines of code. Third, I present two dynamic analysis tools built upon my framework, one for performing value profiling and the other for performing dynamic inference of abstract types, and describe how they far surpass previous analyses in terms of scalability, robustness, and applicability. Lastly, I present several case studies demonstrating how these tools aid both humans and automated tools in several program analysis tasks: improving human understanding of unfamiliar code, invariant detection, and data structure repair.
    By Philip Jia Guo. M.Eng.
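
    As a loose analogue of the value-profiling tool described above, the sketch below records the values that local variables take at run time via Python's sys.settrace hook. It only gestures at the idea; the thesis targets C and C++ through a mixed source/binary approach, which this toy does not attempt.

        import sys
        from collections import Counter

        profile = Counter()   # (function, variable, value) -> occurrences

        def tracer(frame, event, arg):
            # On each executed line, record the current local-variable values.
            if event == "line":
                for name, value in frame.f_locals.items():
                    if isinstance(value, (int, float, str)):
                        profile[(frame.f_code.co_name, name, value)] += 1
            return tracer

        def target(n):
            total = 0
            for i in range(n):
                total += i
            return total

        sys.settrace(tracer)
        target(3)
        sys.settrace(None)
        for key, count in profile.most_common(5):
            print(key, count)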

    Selecting fault revealing mutants

    Mutant selection refers to the problem of choosing, among a large number of mutants, the (few) ones that should be used by the testers. In view of this, we investigate the problem of selecting the fault revealing mutants, i.e., the mutants that are killable and lead to test cases that uncover unknown program faults. We formulate two variants of this problem: fault revealing mutant selection and fault revealing mutant prioritization. We argue and show that these problems can be tackled through a set of ‘static’ program features and propose a machine learning approach, named FaRM, that learns to select and rank killable and fault revealing mutants. Experimental results involving 1,692 real faults show the practical benefits of our approach on both problems. Our results show that FaRM achieves a good trade-off between application cost and effectiveness (measured in terms of faults revealed). We also show that FaRM outperforms all existing mutant selection methods, i.e., random mutant sampling, selective mutation, and defect prediction (mutating the code areas pointed to by defect prediction). In particular, our results show that with respect to mutant selection, our approach reveals 23% to 34% more faults than any of the baseline methods, while with respect to mutant prioritization, it achieves a higher average percentage of revealed faults, with a median difference between 4% and 9% over random mutant orderings.
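
    A minimal sketch of the learning step, assuming scikit-learn is available: train a classifier on 'static' mutant features and rank new mutants by the predicted probability that they are fault revealing. The three features and the labels below are synthetic placeholders, not FaRM's actual feature set.

        from sklearn.ensemble import RandomForestClassifier

        # Synthetic stand-ins for static mutant features, e.g. [nesting
        # depth, mutation-operator category, statement length]; the labels
        # mark historically fault-revealing mutants. Real features differ.
        X_train = [[1, 0, 12], [3, 1, 40], [2, 1, 25], [1, 2, 8], [4, 0, 55]]
        y_train = [0, 1, 1, 0, 1]

        model = RandomForestClassifier(n_estimators=100, random_state=0)
        model.fit(X_train, y_train)

        # Rank unseen mutants by predicted fault-revealing probability and
        # hand the top of the list to the testers first.
        candidates = [[2, 1, 30], [1, 0, 10], [4, 2, 60]]
        scores = model.predict_proba(candidates)[:, 1]
        for score, mutant in sorted(zip(scores, candidates), reverse=True):
            print(f"{score:.2f}", mutant)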