72 research outputs found

    Research Findings on Empirical Evaluation of Requirements Specifications Approaches

    Numerous software requirements specification (SRS) approaches have been proposed in software engineering. However, there has been little empirical evaluation of the use of these approaches in specific contexts. This paper describes the results of a mapping study, a key instrument of the evidence-based paradigm, in an effort to understand which aspects of SRS are evaluated, in which contexts, and using which research methods. On the basis of 46 identified and categorized primary studies, we found that understandability is the most commonly evaluated aspect of SRS, experiments are the most commonly used research method, and the academic environment is where most empirical evaluation takes place.

    An Exploratory Survey of Phase-wise Project Cost Estimation Techniques

    This article explores a number of existing project cost estimation techniques to investigate how estimation can be done more accurately and effectively. The survey looks into various estimation models that draw on theoretical techniques such as statistics, fuzzy logic, case-based reasoning, analogies, and neural networks. As the essence of conventional estimation inaccuracy lies in life-cycle cost drivers that are unsuitable for application across the whole project life cycle, this study introduces a phase-wise estimation technique that accounts for overhead and latency costs. Performance evaluation methods for the underlying phase-wise principle are also presented. This phase-wise approach improves estimation accuracy owing to lower latency cost and increases project visibility, which in turn helps project managers better scrutinize and administer project activities.
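    The abstract does not spell out the model's equations, but the core idea of decomposing cost by life-cycle phase can be sketched roughly as follows. This is a minimal, illustrative sketch: the phase names, overhead rate, and latency term are assumptions made here for demonstration, not the article's actual model.

    ```python
    # Rough sketch of phase-wise cost aggregation. The phases, rates,
    # and latency term are hypothetical; the article's model is not
    # reproduced here.

    PHASES = ["requirements", "design", "implementation", "testing"]

    def phase_wise_cost(base_costs, overhead_rate=0.10, latency_rate=0.05):
        """Per-phase cost = base estimate + phase-local overhead
        + latency cost carried over from the previous phase."""
        total = 0.0
        previous_base = 0.0
        for phase in PHASES:
            base = base_costs[phase]
            overhead = overhead_rate * base          # phase-local overhead
            latency = latency_rate * previous_base   # cost leaking in from the prior phase
            total += base + overhead + latency
            previous_base = base
        return total

    # Hypothetical base estimates in person-months.
    estimate = phase_wise_cost(
        {"requirements": 3.0, "design": 5.0, "implementation": 12.0, "testing": 6.0}
    )
    print(f"Estimated total cost: {estimate:.1f} person-months")  # 29.6
    ```

    Estimating per phase rather than with a single life-cycle model also gives managers a natural checkpoint at each phase boundary to compare actuals against the estimate and re-plan.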

    On Evaluating Commercial Cloud Services: A Systematic Review

    Background: Cloud Computing is booming in industry, with many competing providers and services. Accordingly, evaluation of commercial Cloud services is necessary. However, the existing evaluation studies are relatively chaotic: there is tremendous confusion and a gap between practice and theory in Cloud services evaluation. Aim: To help relieve this chaos, this work aims to synthesize the existing evaluation implementations to outline the state of the practice and to identify research opportunities in Cloud services evaluation. Method: Based on a conceptual evaluation model comprising six steps, the Systematic Literature Review (SLR) method was employed to collect relevant evidence and investigate Cloud services evaluation step by step. Results: This SLR identified 82 relevant evaluation studies. The overall data collected from these studies essentially represent the current practical landscape of implementing Cloud services evaluation, and can in turn be reused to facilitate future evaluation work. Conclusions: Evaluation of commercial Cloud services has become a worldwide research topic. Some of the findings of this SLR identify several research gaps in the area of Cloud services evaluation (e.g., the Elasticity and Security evaluation of commercial Cloud services could be a long-term challenge), while other findings suggest trends in applying commercial Cloud services (e.g., compared with PaaS, IaaS seems more suitable for customers and is particularly important in industry). This SLR study itself also confirms some previous experiences and reveals new Evidence-Based Software Engineering (EBSE) lessons.

    Quality measurement in agile and rapid software development: A systematic mapping

    Context: Despite agile and rapid software development (ARSD) being researched and applied extensively, managing quality requirements (QRs) is still challenging. As ARSD processes produce a large amount of data, measurement has become a strategy to facilitate QR management. Objective: This study aims to survey the literature related to QR management through metrics in ARSD, focusing on: bibliometrics, QR metrics, and quality-related indicators used in quality management. Method: The study design includes the definition of research questions, selection criteria, and snowballing as the search strategy. Results: We selected 61 primary studies (2001-2019). Despite a large body of knowledge and standards, there is no consensus regarding QR measurement. Terminology varies, as do the measurement models; however, seemingly different measurement models do contain similarities. Conclusion: The industrial relevance of the primary studies shows that practitioners need to improve quality measurement. Our collection of measures and data sources can serve as a starting point for practitioners to include quality measurement in their decision-making processes. Researchers could benefit from the identified similarities to start building a common framework for quality measurement. In addition, this could help researchers identify which quality aspects need more focus, e.g., security and usability, for which few metrics are reported. This work has been funded by the European Union's Horizon 2020 research and innovation program through the Q-Rapids project (grant no. 732253). This research was also partially supported by the Spanish Ministerio de Economía, Industria y Competitividad through the DOGO4ML project (grant PID2020-117191RB-I00). Silverio Martínez-Fernández worked at Fraunhofer IESE before January 2020.

    Statistical Process Control for Software: Fill the Gap

    The characteristic of software processes, unlike manufacturing ones, is that they have a very high human-centered component and are primarily based on cognitive activities. As such, each time a software process is executed, inputs and outputs may vary, as may the process performances. This phenomenon is identified in the literature by the term "Process Diversity" (IEEE, 2000). Given the characteristics of a software process, its intrinsic diversity makes it difficult to predict, monitor, and improve, unlike what happens in other contexts. In spite of these observations, Software Process Improvement (SPI) is a very important activity that cannot be neglected. To face these problems, the software engineering community stresses the use of measurement-based approaches such as QIP/GQM (Basili et al., 1994) and time series analysis: the first is usually used to determine what improvement is needed, while time series analysis is adopted to monitor process performance; as such, it supports decision making about when the process should be improved and provides a way to verify the effectiveness of the improvement itself. A technique for time series analysis that is well established in the literature and has given insightful results in manufacturing contexts, although not yet in software process ones, is Statistical Process Control (SPC) (Shewhart, 1980; Shewhart, 1986). The technique was originally developed by Shewhart in the 1920s and then used in many other contexts. Its basic idea is the use of so-called "control charts", together with their indicators, called run tests, to establish operational limits for acceptable process variation and to monitor and evaluate the evolution of process performance over time. In general, process performance variations are mainly due to two types of causes:
    - Common cause variations: the result of normal interactions of people, machines, environment, techniques used, and so on.
    - Assignable cause variations: arising from events that are not part of the process and make it unstable.
    In this sense, the statistically based approach of SPC helps determine whether a process is stable by discriminating between common cause variation and assignable cause variation: a process is classified as "stable" or "under control" if only common causes occur. More precisely, in SPC, data points representing measures of process performance are collected and then compared to a measure of central tendency and to the upper and lower limits of admissible performance variation. While SPC is a well-established technique in manufacturing contexts, only a few works in the literature (Card, 1994; Florac et al., 2000; Weller, 2000a; Weller, 2000b; Florence, 2001; Sargut & Demirors, 2006; Weller & Card, 2008; Raczynski & Curtis, 2008) present successful outcomes of SPC adoption for software. Not only are the successful applications few, but they also do not clearly illustrate the meaning of control charts and related indicators in the context of software processes.
    Given the above considerations, the aim of this work is to generalize and bring together the experiences collected by the authors in previous studies on the use of Statistical Process Control in the software context (Baldassarre et al., 2004; Baldassarre et al., 2005; Caivano, 2005; Boffoli, 2006; Baldassarre et al., 2008; Baldassarre et al., 2009) and to present the resulting stepwise approach, which starts from the stability tests known in the literature, selects the most suitable ones for software processes (tests set), reinterprets them from a software process perspective (tests interpretation), and suggests a recalculation strategy for tuning the SPC control limits. The paper is organized as follows: Section 2 briefly presents SPC concepts and peculiarities; Section 3 discusses the main differences and shortcomings of SPC for software and presents the approach proposed by the authors; finally, in Section 4, conclusions are drawn.
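    To make the control-chart mechanics described above concrete, the following is a minimal sketch of an XmR (individuals) chart, a chart type commonly used when each process execution yields a single measurement; it computes the centre line and control limits from the average moving range and flags candidate assignable-cause points. This illustrates standard SPC practice, not the authors' implementation, and the data are invented.

    ```python
    # Minimal sketch of an XmR (individuals) control chart for software
    # process measurements. Illustrative only: the data are invented and
    # this is not the chapter's implementation.

    def xmr_limits(values):
        """Centre line and control limits for an individuals (X) chart,
        derived from the average moving range between consecutive points."""
        n = len(values)
        center = sum(values) / n
        mr_bar = sum(abs(values[i] - values[i - 1]) for i in range(1, n)) / (n - 1)
        # 2.66 = 3 / d2, where d2 = 1.128 for moving ranges of size two.
        return center, center - 2.66 * mr_bar, center + 2.66 * mr_bar

    def assignable_causes(values):
        """Points outside the control limits: candidate assignable-cause
        variations; everything inside is treated as common-cause noise."""
        center, lcl, ucl = xmr_limits(values)
        return [(i, v) for i, v in enumerate(values) if v < lcl or v > ucl]

    # Hypothetical defect-density measurements from successive inspections.
    measurements = [4.0, 4.2, 3.9, 4.1, 4.0, 3.8, 4.1, 4.2, 4.0, 3.9,
                    8.5, 4.1, 4.0, 4.2, 3.9]
    print(xmr_limits(measurements))         # centre ~4.33, limits ~(2.18, 6.47)
    print(assignable_causes(measurements))  # [(10, 8.5)] -> unstable point
    ```

    In a real software process, a flagged point such as the 8.5 spike would prompt a search for an event outside the process (e.g., an unusually complex module or a staffing change) before the limits are trusted or recalculated, which is exactly where the tests-interpretation and limits-recalculation steps of the proposed approach come in.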

    Technical Debt Prioritization: State of the Art. A Systematic Literature Review

    Background. Software companies need to manage and refactor Technical Debt issues. Therefore, it is necessary to understand if and when refactoring of Technical Debt should be prioritized with respect to developing features or fixing bugs. Objective. The goal of this study is to investigate the existing body of knowledge in software engineering to understand what Technical Debt prioritization approaches have been proposed in research and industry. Method. We conducted a Systematic Literature Review of 557 unique papers published until 2019, following a consolidated methodology applied in software engineering. We included 44 primary studies. Results. Different approaches have been proposed for Technical Debt prioritization, all having different goals and proposing optimization regarding different criteria. The proposed measures capture only a small part of the plethora of factors used to prioritize Technical Debt qualitatively in practice. We present an impact map of such factors. However, there is a lack of empirically validated tools. Conclusion. We observed that Technical Debt prioritization research is preliminary and there is no consensus on what the important factors are and how to measure them. Consequently, we cannot consider current research conclusive. In this paper, we therefore outline different directions for necessary future investigations.