
    Evaluation and Measurement of Software Process Improvement -- A Systematic Literature Review

    BACKGROUND: Software Process Improvement (SPI) is a systematic approach to increase the efficiency and effectiveness of a software development organization and to enhance software products. OBJECTIVE: This paper aims to identify and characterize evaluation strategies and measurements used to assess the impact of different SPI initiatives. METHOD: The systematic literature review includes 148 papers published between 1991 and 2008. The selected papers were classified according to SPI initiative, applied evaluation strategies, and measurement perspectives. Potential confounding factors interfering with the evaluation of the improvement effort were assessed. RESULTS: Seven distinct evaluation strategies were identified; the most common one, "Pre-Post Comparison", was applied in 49 percent of the inspected papers. Quality was the most measured attribute (62 percent), followed by Cost (41 percent) and Schedule (18 percent). Looking at measurement perspectives, "Project" represents the majority with 66 percent. CONCLUSION: The evaluation validity of SPI initiatives is challenged by the scarce consideration of potential confounding factors and by inaccurate descriptions of the evaluation context, particularly given that "Pre-Post Comparison" was identified as the most common evaluation strategy. Measurements that assess the short- and mid-term impact of SPI initiatives prevail, whereas long-term measurements in terms of customer satisfaction and return on investment are used less often.

    Exploitability prediction of software vulnerabilities

    The number of security failures discovered and publicly disclosed is increasing at a pace like never before, yet only a small fraction of the vulnerabilities encountered in the operational phase are exploited in the wild. It is difficult to find vulnerabilities during the early stages of the software development cycle, as security aspects are often not adequately known. To counter these security implications, firms usually provide patches so that these security flaws cannot be exploited. It is a daunting task for a security manager to prioritize patches for the vulnerabilities that are most likely to be exploitable. This paper fills this gap by applying different machine learning techniques to classify vulnerabilities based on previous exploit history. Our work indicates that vulnerability characteristics such as severity, vulnerability type, affected software configurations, and vulnerability scoring parameters are important features for judging exploitability. Using such methods, it is possible to predict exploit-prone vulnerabilities with an accuracy >85%. Finally, with this experiment, we conclude that supervised machine learning can be a useful technique for predicting exploit-prone vulnerabilities.
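
    The abstract does not list the paper's exact feature pipeline or algorithms; as a rough, hypothetical sketch of the supervised setup it describes, the snippet below trains a classifier on CVSS-style scoring features with exploit history as the label. The file name, feature names, and the choice of a random forest are assumptions for illustration only.

```python
# Hypothetical sketch of exploit-prone vulnerability classification;
# the data file, feature names, and model choice are illustrative only.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Assumed input: one row per vulnerability, with CVSS-style scoring
# parameters, an encoded vulnerability type, the number of affected
# configurations, and a binary label for observed in-the-wild exploits.
df = pd.read_csv("vulnerabilities.csv")
X = df[["severity_score", "vuln_type_code", "num_configurations",
        "access_complexity", "privileges_required"]]
y = df["exploited"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```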

    Variationally consistent computational homogenization of chemomechanical problems with stabilized weakly periodic boundary conditions

    A variationally consistent model-based computational homogenization approach for transient chemomechanically coupled problems is developed based on the classical assumption of first-order prolongation of the displacement, chemical potential, and (ion) concentration fields within a representative volume element (RVE). The presence of the chemical potential and the concentration as primary global fields represents a mixed formulation, which has definite advantages. Nonstandard diffusion, governed by a Cahn–Hilliard type of gradient model, is considered under the restriction of miscibility. Weakly periodic boundary conditions on the pertinent fields provide the general variational setting for the uniquely solvable RVE-problem(s). These boundary conditions are introduced with a novel approach in order to control the stability of the boundary discretization, thereby circumventing the need to satisfy the LBB-condition: the penalty-stabilized Lagrange multiplier formulation, which enforces stability at the cost of an additional Lagrange multiplier for each weakly periodic field (three fields for the current problem). In particular, a neat result is that the classical Neumann boundary condition is obtained when the penalty becomes very large. In the numerical examples, we investigate the following characteristics: the mesh convergence for different boundary approximations, the sensitivity for the choice of penalty parameter, and the influence of RVE-size on the macroscopic response.
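
    As a schematic illustration of the penalty-stabilized Lagrange multiplier idea mentioned above, written for a single weakly periodic field u on the RVE boundary (the notation and the exact form are assumptions here, not taken from the paper):

```latex
% Schematic penalty-stabilized functional for one weakly periodic field u;
% \Pi_\square is the RVE potential, \lambda the boundary multiplier
% (traction-like), \gamma the stabilization penalty, and u^+ - u^- the jump
% between image and mirror points of the RVE boundary.
\[
  \Pi_\gamma(u,\lambda)
    = \Pi_\square(u)
    + \int_{\Gamma_\square^{+}} \lambda \,\bigl(u^{+} - u^{-}\bigr)\,\mathrm{d}\Gamma
    - \frac{\gamma}{2} \int_{\Gamma_\square^{+}} \lambda^{2}\,\mathrm{d}\Gamma .
\]
```

    Stationarity with respect to the multiplier gives u⁺ − u⁻ = γλ, so a very large penalty γ drives λ towards zero, which is the Neumann-type limit mentioned in the abstract, whereas a vanishing penalty recovers the unstabilized weakly periodic constraint.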

    Efficiency and Automation in Threat Analysis of Software Systems

    Context: Security is a growing concern in many organizations. Industries developing software systems plan for security early on to minimize expensive code refactorings after deployment. In the design phase, teams of experts routinely analyze the system architecture and design to find potential security threats and flaws. After the system is implemented, the source code is often inspected to determine its compliance with the intended functionality. Objective: The goal of this thesis is to improve the performance of security design analysis techniques (in the design and implementation phases) and to support practitioners with automation and tool support. Method: We conducted empirical studies to build an in-depth understanding of existing threat analysis techniques (a systematic literature review and controlled experiments). We also conducted empirical case studies with industrial participants to validate our attempt at improving the performance of one technique. Further, we validated our proposal for automating the inspection of security design flaws by organizing workshops with participants (under controlled conditions) and subsequent performance analysis. Finally, we relied on a series of experimental evaluations to assess the quality of the proposed approach for automating security compliance checks. Findings: We found that the eSTRIDE approach can help focus the analysis and produce twice as many high-priority threats in the same time frame. We also found that reasoning about security in an automated fashion requires extending the existing notations with more precise security information. In a formal setting, the minimal model extensions for doing so include security contracts for system nodes handling sensitive information. The formally based analysis can, to some extent, provide completeness guarantees. For a graph-based detection of flaws, the minimal required model extensions include data types and security solutions. In such a setting, the automated analysis can help reduce the number of overlooked security flaws. Finally, we suggest defining a correspondence mapping between design model elements and implemented constructs. We found that such a mapping is a key enabler for automatically checking the security compliance of the implemented system with the intended design. The key to achieving this is two-fold. First, a heuristics-based search is paramount to limit the manual effort required to define the mapping. Second, it is important to analyze implemented data flows and compare them to the data flows stipulated by the design.
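
    As a minimal, hypothetical sketch of the compliance idea described above (comparing implemented data flows against those stipulated by the design via a correspondence mapping), where all element names and data structures are invented for illustration:

```python
# Hypothetical sketch: check implemented data flows against the flows
# stipulated by a design model via a design-to-code correspondence mapping.
# All element names and structures are invented for illustration.

# Flows stipulated by the design model (source element -> target element).
design_flows = {("WebClient", "AuthService"), ("AuthService", "UserDB")}

# Correspondence mapping from implemented constructs to design elements;
# in the thesis such a mapping is found with a heuristics-based search.
mapping = {"web.Client": "WebClient",
           "auth.Service": "AuthService",
           "db.Users": "UserDB"}

# Flows recovered from the implementation (e.g. by static analysis).
implemented_flows = {("web.Client", "auth.Service"),
                     ("auth.Service", "db.Users"),
                     ("web.Client", "db.Users")}

mapped = {(mapping[src], mapping[dst]) for src, dst in implemented_flows}

print("divergent flows (in code, not in design):", mapped - design_flows)
print("absent flows (in design, not in code):", design_flows - mapped)
```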

    Genetic Improvement of Software for Energy Efficiency in Noisy and Fragmented Eco-Systems

    Software has made its way into every aspect of our daily life. Users of smart devices expect almost continuous availability and uninterrupted service. However, such devices operate on restricted energy resources. As the energy efficiency of software is a relatively new concern for software practitioners, there is a lack of knowledge and tools to support the development of energy-efficient software. Optimising the energy consumption of software requires measuring or estimating its energy use and then optimising it. Generalised models of energy behaviour suffer from heterogeneous and fragmented eco-systems (i.e. diverse hardware and operating systems). The nature of such optimisation environments favours in-vivo optimisation, which provides the ground truth for the energy behaviour of an application on a given platform. One key challenge in in-vivo energy optimisation is noisy energy readings, because complete isolation of the effects of software optimisation is simply infeasible, owing to random and systematic noise from the platform. In this dissertation we explore in-vivo optimisation using Genetic Improvement of Software (GI) for energy efficiency in noisy and fragmented eco-systems. First, we document expected and unexpected technical challenges, and their solutions, when conducting energy optimisation experiments. These can serve as guidelines for software practitioners conducting energy-related experiments. Second, we demonstrate the technical feasibility of in-vivo energy optimisation using GI on smart devices. We implement a new approach for mitigating noisy readings based on simple code rewrites. Third, we propose a new conceptual framework to determine the minimum number of samples required to show significant differences between software variants competing in tournaments. We demonstrate that the number of samples can vary drastically between different platforms, as well as from one point in time to another within a single platform. It is crucial to take these observations into consideration when optimising in the wild or across several devices in a controlled environment. Finally, we implement a new validation approach for energy optimisation experiments. Through experiments, we demonstrate that the current validation approaches can mislead software practitioners into drawing wrong conclusions. Our approach outperforms the current validation techniques in terms of specificity and sensitivity in distinguishing differences between validation solutions.
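
    The dissertation's own sampling framework is not reproduced in the abstract; the sketch below only illustrates the general idea of resolving a tournament between two variants from noisy energy readings by sampling until a significance test separates them. The batch size, sample cap, significance level, and use of the Mann-Whitney U test are assumptions for illustration.

```python
# Hypothetical sketch: resolve a tournament between two software variants
# from noisy energy readings by sampling until a nonparametric test
# separates them or a sample cap is reached.
from scipy.stats import mannwhitneyu


def tournament(measure_a, measure_b, alpha=0.05, batch=10, max_samples=200):
    a, b = [], []
    while len(a) < max_samples:
        a.extend(measure_a() for _ in range(batch))  # readings for variant A
        b.extend(measure_b() for _ in range(batch))  # readings for variant B
        _, p = mannwhitneyu(a, b, alternative="two-sided")
        if p < alpha:
            # Lower mean energy wins once the difference is significant.
            winner = "A" if sum(a) / len(a) < sum(b) / len(b) else "B"
            return winner, len(a), p
    return "tie", len(a), p


if __name__ == "__main__":
    import random
    # Two simulated variants whose mean energy use differs slightly.
    print(tournament(lambda: random.gauss(5.0, 1.0),
                     lambda: random.gauss(5.3, 1.0)))
```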

    Text Similarity Between Concepts Extracted from Source Code and Documentation

    Context: Constant evolution in software systems often results in their documentation losing sync with the content of the source code. The traceability research field has long aimed to recover links between code and documentation when the two fall out of sync. Objective: The aim of this paper is to compare the concepts contained within the source code of a system with those extracted from its documentation, in order to detect how similar these two sets are. If vastly different, the difference between the two sets might indicate considerable ageing of the documentation and a need to update it. Methods: In this paper we reduce the source code of 50 software systems to a set of key terms, each containing the concepts of one of the sampled systems. At the same time, we reduce the documentation of each system to another set of key terms. We then use four different approaches for set comparison to detect how similar the sets are. Results: Using the well-known Jaccard index as the benchmark for the comparisons, we discovered that the cosine distance has excellent comparative power, depending on the pre-training of the machine learning model. In particular, the SpaCy and FastText embeddings offer similarity scores of up to 80% and 90%, respectively. Conclusion: For most of the sampled systems, the source code and the documentation tend to contain very similar concepts. Given the accuracy of one pre-trained model (e.g., FastText), it also becomes evident that a few systems show a measurable drift between the concepts contained in the documentation and in the source code.
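
    As a small, hypothetical illustration of the two comparison styles named above, the sketch below computes the Jaccard index on the raw key-term sets and the cosine similarity of embedding centroids; the embed() argument is a placeholder standing in for a pre-trained vector lookup such as a spaCy or FastText model.

```python
# Hypothetical sketch of the two comparison styles: Jaccard index on the
# raw key-term sets, and cosine similarity of embedding centroids.
import numpy as np


def jaccard(code_terms: set, doc_terms: set) -> float:
    return len(code_terms & doc_terms) / len(code_terms | doc_terms)


def centroid_cosine(code_terms, doc_terms, embed) -> float:
    # embed(term) -> np.ndarray, e.g. a spaCy or FastText vector lookup.
    a = np.mean([embed(t) for t in code_terms], axis=0)
    b = np.mean([embed(t) for t in doc_terms], axis=0)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


code_terms = {"parser", "token", "syntax", "tree"}
doc_terms = {"parser", "grammar", "syntax", "documentation"}
print("Jaccard similarity:", jaccard(code_terms, doc_terms))
```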

    Supplier Selection and Relationship Management: An Application of Machine Learning Techniques

    Managing supply chains is an extremely challenging task due to globalization, short product life cycles, and recent advancements in information technology. These changes make managing the relationship with suppliers increasingly important. However, the supplier selection literature mainly focuses on selecting suppliers based on previous performance and environmental and social criteria, and ignores supplier relationship management. Moreover, although the explosion of data and the capability of machine learning techniques to handle dynamic and fast-changing environments show promising results in customer relationship management, especially in customer lifetime value, this area has remained untouched on the upstream side of supply chains. This research is an attempt to address this gap by proposing a framework to predict supplier future value, incorporating contract history data, relationship value, and supply network properties. The proposed model is empirically tested on suppliers of Public Works and Government Services Canada. Methodologically, this thesis demonstrates the application of machine learning techniques to supplier selection and to developing effective strategies for managing relationships. Practically, the proposed framework equips supply chain managers with a proactive and forward-looking approach to managing supplier relationships.
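
    The thesis' concrete model is not described in the abstract; as a hypothetical sketch of the kind of framework it outlines, the snippet below fits a regressor on contract-history, relationship, and network features to predict supplier future value. The file name, feature names, target, and model choice are illustrative assumptions.

```python
# Hypothetical sketch: predict supplier future value from contract history,
# relationship, and supply-network features.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

# Assumed input: one row per supplier with aggregated contract history,
# relationship indicators, and network centrality measures.
df = pd.read_csv("supplier_history.csv")
X = df[["past_contract_value", "contract_count", "relationship_years",
        "on_time_delivery_rate", "network_degree", "network_betweenness"]]
y = df["future_contract_value"]

model = GradientBoostingRegressor(random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean())
```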