
    Naming the Pain in Requirements Engineering: A Design for a Global Family of Surveys and First Results from Germany

    For many years, we have observed industry struggling to define high-quality requirements engineering (RE) and researchers trying to understand industrial expectations and problems. Although the discipline has been investigated with a plethora of empirical studies, they still do not allow for empirical generalisations. To lay an empirically sound and externally valid foundation about the state of the practice in RE, we aim at a series of open and reproducible surveys that allow us to steer future research in a problem-driven manner. We designed a globally distributed family of surveys in joint collaboration with different researchers and completed the first run in Germany. The instrument is based on a theory in the form of a set of hypotheses inferred from our experiences and available studies. We test each hypothesis in our theory and identify further candidates to extend the theory by correlation and Grounded Theory analysis. In this article, we report on the design of the family of surveys, its underlying theory, and the full results obtained from Germany with participants from 58 companies. The results reveal, for example, a tendency to improve RE via internally defined qualitative methods rather than relying on normative approaches like CMMI. We also discovered various RE problems that are statistically significant in practice; for instance, we could corroborate communication flaws and moving targets as problems in practice. Our results are not yet fully representative, but they already give first insights into current practices and problems in RE, and they allow us to draw lessons learnt for future replications. Our results from this first run in Germany make us confident that the survey design and instrument are well suited to be replicated and, thereby, to create a generalisable empirical basis of RE in practice.

    Preventing Incomplete/Hidden Requirements: Reflections on Survey Data from Austria and Brazil

    Many software projects fail due to problems in requirements engineering (RE). The goal of this paper is to analyze a specific and relevant RE problem in detail: incomplete/hidden requirements. We replicated a global family of RE surveys with representatives of software organizations in Austria and Brazil. We used the data to (a) characterize the criticality of the selected RE problem and (b) analyze the reported main causes and mitigation actions. Based on the analysis, we discuss how to prevent the problem. The survey includes 14 different organizations in Austria and 74 in Brazil, covering small, medium, and large companies conducting both plan-driven and agile development processes. Respondents from both countries cited incomplete/hidden requirements as one of the most critical RE problems. We identified and graphically represented the main causes and documented solution options to address these causes. Further, we compiled a list of reported mitigation actions. From a practical point of view, this paper provides further insights into common causes of incomplete/hidden requirements and into how to prevent this problem.
    Comment: in Proceedings of the Software Quality Days, 201

    Making Progress in Forecasting

    Twenty-five years ago, the International Institute of Forecasters was established “to bridge the gap between theory and practice.” Its primary vehicle was initially the Journal of Forecasting and is now the International Journal of Forecasting. The Institute emphasizes empirical comparisons of reasonable forecasting approaches. Such studies can be used to identify the best forecasting procedures to use under given conditions, a process we call evidence-based forecasting. Unfortunately, evidence-based forecasting meets resistance from academics and practitioners when its findings differ from currently accepted beliefs. As a consequence, although much progress has been made in developing improved forecasting methods, the diffusion of useful forecasting methods has been disappointing. To bridge the gap between theory and practice, we recommend a stronger emphasis on the method of multiple hypotheses and on invited replications of important research. It is then necessary to translate the findings into principles that are easy to understand and apply. The Internet and software provide important opportunities for making the latest findings available to researchers and practitioners. Because researchers and practitioners believe that their areas are unique, we should organize findings so that they are relevant to each area and make them easily available when people search for information about forecasting in their area. Organisational barriers to change still remain to be overcome. Research into the specific issues faced when forecasting remains a priority.

    Replicability or reproducibility? On the replication crisis in computational neuroscience and sharing only relevant detail

    Replicability and reproducibility of computational models have been somewhat understudied by “the replication movement.” In this paper, we draw on methodological studies into the replicability of psychological experiments and on the mechanistic account of explanation to analyze the functions of model replications and model reproductions in computational neuroscience. We contend that model replicability, or independent researchers' ability to obtain the same output using original code and data, and model reproducibility, or independent researchers' ability to recreate a model without original code, serve different functions and fail for different reasons. This means that measures designed to improve model replicability may not enhance (and, in some cases, may actually damage) model reproducibility. We claim that although both are undesirable, low model reproducibility poses more of a threat to long-term scientific progress than low model replicability. In our opinion, low model reproducibility stems mostly from authors omitting crucial information from scientific papers, and we stress that sharing all computer code and data is not a solution. Reports of computational studies should remain selective and include all and only relevant bits of code.

    Does “Evaluating Journal Quality and the Association for Information Systems Senior Scholars Journal Basket…” Support the Basket with Bibliometric Measures?

    We re-examine “Evaluating Journal Quality and the Association for Information Systems Senior Scholars Journal Basket…” by Lowry et al. (2013). They sought to use bibliometric methods to validate the Basket as the eight top-quality journals that are “strictly speaking, IS journals” (Lowry et al., 2013, pp. 995, 997). They examined 21 journals out of 140 considered as possible IS journals. We expand the sample to 73 of the 140 journals. Our sample includes a wider range of approaches to IS, although all were suggested by IS scholars in a survey by Lowry and colleagues. We also use the same sample of 21 journals as Lowry et al. with the same methods of analysis so far as possible. With the narrow sample, we replicate Lowry et al. as closely as we can, whereas with the broader sample we employ a conceptual replication. This latter replication also employs alternative methods. For example, we consider citations (a quality measure) and centrality (a relevance measure in this context) as distinct, rather than merging them as in Lowry et al. High centrality scores from the sample of 73 journals do not necessarily indicate close connections with IS. Therefore, we determine which journals are of high quality and closely connected with the Basket and with their sample. These results support the broad purpose of Lowry et al. by identifying a wider set of relevant, top-quality journals than just MISQ and ISR.

    On Integrating Student Empirical Software Engineering Studies with Research and Teaching Goals

    Background: Many empirical software engineering studies use students as subjects and are conducted as part of university courses. Aim: We aim to report our experiences with using guidelines for integrating empirical studies with our research and teaching goals. Method: We document our experience from conducting three studies with graduate students in two software architecture courses. Results: Our results show some problems that we faced when following the guidelines and the deviations we made from the original guidelines. Conclusions: Based on our results, we propose recommendations for empirical software engineering studies that are integrated into university courses.