    An Empirical analysis of Open Source Software Defects data through Software Reliability Growth Models

    The purpose of this study is to analyze the reliability growth of Open Source Software (OSS) using Software Reliability Growth Models (SRGMs). The study uses defect data from twenty-five different releases of five OSS projects. For each release of the selected projects, two types of datasets have been created: datasets built with respect to the defect creation date (created date DS) and datasets built with respect to the defect update date (updated date DS). These defect datasets are modelled by eight SRGMs: Musa-Okumoto, Inflection S-shaped, Goel-Okumoto, Delayed S-shaped, Logistic, Gompertz, Yamada Exponential, and the Generalized Goel model, chosen for their widespread use in the literature. The SRGMs are fitted to both types of defect datasets for each project, and their fitting and prediction capabilities are analysed in order to study OSS reliability growth with respect to defect creation and defect update time, since defect analysis can serve as a constructive reliability predictor. Results show that the fitting and prediction quality of the SRGMs improves when the defect creation date is used to build the OSS defect datasets that characterize reliability growth. Hence, OSS reliability growth is better characterized by SRGMs if the defect creation date, rather than the defect update (fix) date, is used when building OSS defect datasets for reliability modelling.
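    The Goel-Okumoto model named above has the standard mean value function m(t) = a(1 - e^{-bt}), where a is the expected total number of defects and b the detection rate. The Python sketch below is only an illustration of fitting such a model to cumulative defect counts grouped by creation date; it is not the paper's actual pipeline, and the weekly counts are made up.

```python
# Illustrative sketch (not the paper's code): fit the Goel-Okumoto SRGM
# mean value function m(t) = a * (1 - exp(-b*t)) to cumulative defect
# counts grouped by defect creation date.
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    """Expected cumulative number of defects observed by time t."""
    return a * (1.0 - np.exp(-b * t))

# Hypothetical data: weeks since release and cumulative defects created.
weeks = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=float)
cum_defects = np.array([12, 25, 39, 50, 58, 65, 70, 74, 77, 79], dtype=float)

# a ~ total expected defects, b ~ detection rate; p0 is a rough initial guess.
(a_hat, b_hat), _ = curve_fit(goel_okumoto, weeks, cum_defects,
                              p0=[cum_defects[-1], 0.1], maxfev=10000)

predicted = goel_okumoto(weeks, a_hat, b_hat)
mse = np.mean((predicted - cum_defects) ** 2)  # one common goodness-of-fit measure
print(f"a={a_hat:.1f}, b={b_hat:.3f}, MSE={mse:.2f}")
```

    The same fitting step can be repeated for the other seven models and for both the created-date and updated-date datasets, comparing fit and prediction error across them.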

    Uncovering Bugs in Distributed Storage Systems during Testing (not in Production!)

    Testing distributed systems is challenging due to multiple sources of nondeterminism. Conventional testing techniques, such as unit, integration and stress testing, are ineffective in preventing serious but subtle bugs from reaching production. Formal techniques, such as TLA+, can only verify high-level specifications of systems at the level of logic-based models, and fall short of checking the actual executable code. In this paper, we present a new methodology for testing distributed systems. Our approach applies advanced systematic testing techniques to thoroughly check that the executable code adheres to its high-level specifications, which significantly improves coverage of important system behaviors. Our methodology has been applied to three distributed storage systems in the Microsoft Azure cloud computing platform. In the process, numerous bugs were identified, reproduced, confirmed and fixed. These bugs required a subtle combination of concurrency and failures, making them extremely difficult to find with conventional testing techniques. An important advantage of our approach is that a bug is uncovered in a small setting and witnessed by a full system trace, which dramatically increases the productivity of debugging.
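    The abstract does not spell out the tool used, so the sketch below is only a generic illustration of the underlying idea of systematic concurrency testing: drive the steps of concurrent components from a seed-controlled scheduler so that many interleavings are explored and any failing one can be replayed from its seed. The component names and the injected ordering bug are invented for the example.

```python
# Generic sketch of seed-controlled schedule exploration (not the paper's
# actual tool or API): a controlled scheduler interleaves the steps of two
# "processes"; a failing interleaving is reproducible from its seed.
import random

def run_schedule(seed):
    rng = random.Random(seed)
    shared = {"value": 0, "flushed": False}

    def writer():
        shared["flushed"] = True   # injected bug: acks durability before writing
        yield
        shared["value"] = 42

    def reader():
        if shared["flushed"] and shared["value"] != 42:
            raise AssertionError("ack observed before the data was written")
        yield

    procs = [writer(), reader()]
    while procs:
        p = rng.choice(procs)      # the controlled scheduler picks the next step
        try:
            next(p)
        except StopIteration:
            procs.remove(p)

for seed in range(1000):           # explore many distinct interleavings
    try:
        run_schedule(seed)
    except AssertionError as err:
        print(f"bug found; replay deterministically with seed={seed}: {err}")
        break
```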

    Developing a distributed electronic health-record store for India

    The DIGHT project is addressing the problem of building a scalable and highly available information store for the Electronic Health Records (EHRs) of the over one billion citizens of India.

    Survivable algorithms and redundancy management in NASA's distributed computing systems

    The design of survivable algorithms requires a solid foundation for executing them. While hardware techniques for fault-tolerant computing are relatively well understood, fault-tolerant operating systems, as well as fault-tolerant applications (survivable algorithms), are, by contrast, little understood, and much more work in this field is required. We outline some of our work that contributes to the foundation of ultrareliable operating systems and fault-tolerant algorithm design. We introduce our consensus-based framework for fault-tolerant system design. This is followed by a description of a hierarchical partitioning method for efficient consensus. A scheduler for redundancy management is introduced, and application-specific fault tolerance is described. We give an overview of our hybrid algorithm technique, which is an alternative to the formal approach given.
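    As a generic illustration of redundancy management with majority voting, not the paper's consensus framework or scheduler, the sketch below masks a single faulty replica in a triple-modular-redundancy arrangement; the replica functions and the injected fault are hypothetical.

```python
# Generic illustration (not the paper's algorithms): triple-modular
# redundancy, where three replicas compute a result and a majority vote
# masks a single faulty replica.
from collections import Counter

def square(x):
    return x * x

def faulty_square(x):
    return x * x + 1          # injected fault: always off by one

def majority_vote(results):
    """Return the value produced by a strict majority of replicas, or None."""
    value, count = Counter(results).most_common(1)[0]
    return value if count > len(results) // 2 else None

replicas = [square, faulty_square, square]
results = [replica(7) for replica in replicas]
print(results, "->", majority_vote(results))   # [49, 50, 49] -> 49
```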

    Lightweight and static verification of UML executable models

    Executable models play a key role in many software development methods by facilitating the (semi)automatic implementation/execution of the software system under development. This is possible because executable models promote a complete and fine-grained specification of the system behaviour. In this context, where models are the basis of the whole development process, the quality of the models has a high impact on the final quality of the software systems derived from them. Therefore, the existence of methods to verify the correctness of executable models is crucial. Otherwise, the quality of the executable models (and in turn the quality of the final system generated from them) will be compromised. In this paper a lightweight and static verification method to assess the correctness of executable models is proposed. This method allows us to check whether the operations defined as part of the behavioural model can be executed without breaking the integrity of the structural model, and it returns meaningful feedback that helps repair the detected inconsistencies.
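    As a hedged illustration of the kind of structural integrity such a verification targets (not the paper's static method, which reasons over the models themselves rather than over code), the sketch below shows a behavioural operation checked against a multiplicity constraint from the structural model; the classes and the bound are invented.

```python
# Hypothetical example: a structural constraint ("a Department employs at
# most 5 people") and a behavioural operation hire() that must not break it.
class Department:
    MAX_EMPLOYEES = 5                      # multiplicity upper bound from the structural model

    def __init__(self):
        self.employees = []

    def invariant_holds(self):
        return len(self.employees) <= self.MAX_EMPLOYEES

    def hire(self, name):
        # A statically verified operation would be proven never to reach a
        # violating state; here the violation is simply rejected at runtime.
        if len(self.employees) + 1 > self.MAX_EMPLOYEES:
            raise ValueError("hire() would violate the multiplicity constraint")
        self.employees.append(name)

dept = Department()
for i in range(5):
    dept.hire(f"person{i}")
assert dept.invariant_holds()
print("all hires kept the structural model consistent")
```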

    Sharpening Your Tools: Updating bulk_extractor for the 2020s

    Bulk_extractor is a high-performance digital forensics tool written in C++. Between 2018 and 2022 we updated the program from C++98 to C++17, performed a complete code refactoring, and adopted a unit test framework. The new version typically runs with 75% more throughput than the previous version, which we attribute to improved multithreading. We provide lessons and recommendations for other digital forensics tool maintainers.