    Synthetic Dataset Generation with Itemset-Based Generative Models

    This paper proposes three different data generators, tailored to transactional datasets, based on existing itemset-based generative models. All three generators are intuitive, easy to implement, and show satisfactory performance. The quality of each generator is assessed by means of three different methods that capture how well the original dataset structure is preserved.
    Comment: IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW@RSDA 2019), Oct 2019
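    The abstract does not include an implementation; a minimal sketch of one plausible itemset-based generator in this spirit, where the frequent itemsets and their weights are assumed to have been mined from the original dataset beforehand, could look like:

```python
import random

def generate_transactions(itemsets, weights, n_transactions, lam=2.0):
    """Sample synthetic transactions as unions of weighted itemsets.

    itemsets: list of frozensets (assumed mined from the original data).
    weights:  relative frequencies (e.g., supports) of those itemsets.
    lam:      average number of itemsets merged per transaction.
    """
    transactions = []
    for _ in range(n_transactions):
        # Merge one or more itemsets into a single synthetic transaction.
        k = max(1, round(random.expovariate(1.0 / lam)))
        chosen = random.choices(itemsets, weights=weights, k=k)
        transactions.append(frozenset().union(*chosen))
    return transactions

# Hypothetical itemsets and supports, for illustration only.
itemsets = [frozenset({"bread", "milk"}), frozenset({"beer"}),
            frozenset({"bread", "butter", "jam"})]
supports = [0.5, 0.3, 0.2]
print(generate_transactions(itemsets, supports, 5))
```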

    Empirical Notes on the Interaction Between Continuous Kernel Fuzzing and Development

    Fuzzing has been studied and applied ever since the 1990s. Automated and continuous fuzzing has recently also been applied to open source software projects, including the Linux and BSD kernels. This paper concentrates on the practical aspects of continuous kernel fuzzing in four open source kernels. According to the results, there are over 800 unresolved crashes reported for the four kernels by the syzkaller/syzbot framework, and many of these were reported a relatively long time ago. Interestingly, fuzzing-induced bugs have been resolved more rapidly in the BSD kernels. Furthermore, assertions and debug checks, use-after-frees, and general protection faults account for the majority of bug types in the Linux kernel. About 23% of the fixed bugs in the Linux kernel have gone through either code review or additional testing. Finally, only code churn provides a weak statistical signal for explaining the associated bug fixing times in the Linux kernel.
    Comment: The 4th IEEE International Workshop on Reliability and Security Data Analysis (RSDA), 2019 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW), Berlin, IEEE
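    The statistical analysis itself is not reproduced in the abstract; a minimal sketch of the kind of test implied, regressing bug fixing times on code churn with entirely hypothetical per-bug data, could look like:

```python
from scipy.stats import linregress

# Hypothetical per-bug data: churn = lines changed by the fixing commit,
# fix_days = days from syzbot report to resolution.
churn    = [12, 340, 55, 8, 120, 600, 30, 75, 210, 15]
fix_days = [30, 95, 7, 41, 60, 150, 12, 88, 34, 5]

res = linregress(churn, fix_days)
print(f"slope={res.slope:.3f}, r={res.rvalue:.2f}, p={res.pvalue:.4f}")
# A modest r with a borderline p would match the "weak statistical
# signal" the paper reports for code churn.
```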

    Security Assessment and Hardening of Fog Computing Systems

    In recent years, there has been a shift in computing architectures, moving away from centralized cloud computing towards decentralized edge and fog computing. This shift is driven by factors such as the increasing volume of data generated at the edge, the growing demand for real-time processing and low-latency applications, and the need for improved privacy and data locality. Although this new paradigm offers numerous advantages, it also introduces significant security and reliability challenges. This paper aims to review the architectures and technologies employed in fog computing and identify opportunities for developing novel security assessment and security hardening techniques. These techniques include secure configuration and debloating to enhance the security of middleware, testing techniques to assess secure communication mechanisms, and automated rehosting to speed up the security testing of embedded firmware.
    Comment: 4 pages, Accepted for publication at the 34th IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW)
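    As one concrete flavor of the secure-communication testing the paper calls for, a minimal sketch that probes which TLS version and cipher a fog endpoint negotiates (the host and port are placeholders, not from the paper) could look like:

```python
import socket
import ssl

def probe_tls(host: str, port: int) -> None:
    """Report the TLS version and cipher suite negotiated by an endpoint."""
    ctx = ssl.create_default_context()
    # For assessment we also want to observe weak configurations,
    # so do not reject old protocol versions up front.
    ctx.minimum_version = ssl.TLSVersion.TLSv1
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            print(host, tls.version(), tls.cipher())

# Hypothetical fog gateway; replace with an endpoint under test.
probe_tls("fog-gateway.example.com", 8443)
```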

    Enhancing Failure Propagation Analysis in Cloud Computing Systems

    In order to plan for failure recovery, the designers of cloud systems need to understand how their system can potentially fail. Unfortunately, analyzing the failure behavior of such systems can be very difficult and time-consuming, due to the large volume of events, non-determinism, and reuse of third-party components. To address these issues, we propose a novel approach that joins fault injection with anomaly detection to identify the symptoms of failures. We evaluated the proposed approach in the context of the OpenStack cloud computing platform. We show that our model can significantly improve the accuracy of failure analysis in terms of false positives and negatives, with a low computational cost.
    Comment: 12 pages, The 30th International Symposium on Software Reliability Engineering (ISSRE 2019)
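    The paper's own pipeline is not reproduced here; a minimal sketch of the general idea of joining fault injection with anomaly detection, where fault-free traces define a baseline of expected events and deviations observed during injection are flagged as failure symptoms (the event names are hypothetical), could look like:

```python
from collections import Counter

def baseline_profile(traces):
    """Estimate, per event, the fraction of fault-free runs it appears in."""
    counts = Counter()
    for trace in traces:
        counts.update(set(trace))
    return {event: n / len(traces) for event, n in counts.items()}

def failure_symptoms(profile, faulty_trace, min_support=0.9):
    """Flag expected-but-missing events and never-before-seen events."""
    seen = set(faulty_trace)
    missing = {e for e, p in profile.items() if p >= min_support and e not in seen}
    novel = seen - profile.keys()
    return missing, novel

# Hypothetical OpenStack-style event traces.
ok_runs = [["api.req", "sched.pick", "vm.boot"],
           ["api.req", "sched.pick", "vm.boot"]]
faulty  =  ["api.req", "rpc.timeout"]

profile = baseline_profile(ok_runs)
print(failure_symptoms(profile, faulty))
# -> ({'sched.pick', 'vm.boot'}, {'rpc.timeout'})
```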

    Dependability Evaluation of Middleware Technology for Large-scale Distributed Caching

    Distributed caching systems (e.g., Memcached) are widely used by service providers to satisfy accesses by millions of concurrent clients. Given their large scale, modern distributed systems rely on a middleware layer to manage caching nodes, to make applications easier to develop, and to apply load balancing and replication strategies. In this work, we performed a dependability evaluation of three popular middleware platforms, namely Twemproxy by Twitter, Mcrouter by Facebook, and Dynomite by Netflix, to assess availability and performance under faults, including failures of Memcached nodes and congestion due to unbalanced workloads and network link bandwidth bottlenecks. We point out the different availability and performance trade-offs achieved by the three platforms, and scenarios in which a few faulty components cause cascading failures of the whole distributed system.
    Comment: 2020 IEEE 31st International Symposium on Software Reliability Engineering (ISSRE 2020)
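    The paper's fault-injection tooling is not included here; a minimal sketch of one measurement it implies, probing a Memcached endpoint over the standard text protocol to estimate availability while faults are being injected (host and port are placeholders), could look like:

```python
import socket
import time

def probe_memcached(host="127.0.0.1", port=11211, timeout=1.0):
    """Return True if a set/get round trip succeeds (memcached text protocol)."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            # "set <key> <flags> <exptime> <bytes>" followed by the payload.
            s.sendall(b"set probe 0 60 2\r\nok\r\n")
            if not s.recv(1024).startswith(b"STORED"):
                return False
            s.sendall(b"get probe\r\n")
            return b"VALUE probe" in s.recv(1024)
    except OSError:
        return False

# Sample availability once per second during a fault-injection experiment.
ok = total = 0
for _ in range(60):
    ok += probe_memcached()
    total += 1
    time.sleep(1)
print(f"availability: {ok / total:.2%}")
```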

    LLaMA-Reviewer: Advancing Code Review Automation with Large Language Models through Parameter-Efficient Fine-Tuning (Practical Experience Report)

    The automation of code review activities, a long-standing pursuit in software engineering, has been primarily addressed by numerous domain-specific pre-trained models. Despite their success, these models frequently demand extensive resources for pre-training from scratch. In contrast, Large Language Models (LLMs) provide an intriguing alternative, given their remarkable capabilities when supplemented with domain-specific knowledge. However, their potential for automating code review tasks remains largely unexplored. In response to this research gap, we present LLaMA-Reviewer, an innovative framework that leverages the capabilities of LLaMA, a popular LLM, in the realm of code review. Mindful of resource constraints, this framework employs parameter-efficient fine-tuning (PEFT) methods, delivering high performance while using less than 1% of trainable parameters. An extensive evaluation of LLaMA-Reviewer is conducted on two diverse, publicly available datasets. Notably, even with the smallest LLaMA base model consisting of 6.7B parameters and a limited number of tuning epochs, LLaMA-Reviewer equals the performance of existing code-review-focused models. The ablation experiments provide insights into the influence of various fine-tuning process components, including input representation, instruction tuning, and different PEFT methods. To foster continuous progress in this field, the code and all PEFT-weight plugins have been made open-source.
    Comment: Accepted to the 34th IEEE International Symposium on Software Reliability Engineering (ISSRE 2023)
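    The abstract does not give the exact fine-tuning configuration; a minimal sketch of parameter-efficient fine-tuning with LoRA via the Hugging Face peft library, where the base model name and the hyperparameters are illustrative assumptions rather than the paper's settings, could look like:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

# Illustrative base checkpoint; LLaMA weights require separate access.
model = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")

# LoRA adapters on the attention projections; rank and targets are assumptions.
config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
)

model = get_peft_model(model, config)
# Typically reports well under 1% trainable parameters, in line with the
# resource-efficiency claim in the abstract.
model.print_trainable_parameters()
```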