
    Guidelines for conducting interactive rapid reviews in software engineering -- from a focus on technology transfer to knowledge exchange

    Evidence-based software engineering (EBSE) aims to improve research utilization in practice. It relies on systematic methods (such as systematic literature reviews, systematic mapping studies, and rapid reviews) to identify, appraise, and synthesize existing research findings to answer questions of interest. However, the lack of practitioner involvement in the design, execution, and reporting of these methods indicates a lack of appreciation for knowledge exchange between researchers and practitioners. Within EBSE, the main reason for conducting these systematic studies is to answer practitioners' questions and impact practice. However, in many cases, academics have undertaken these studies without any direct involvement of practitioners. This report focuses on rapid review guidelines and presents practical advice on conducting rapid reviews with practitioner involvement to facilitate knowledge co-creation. Based on a literature review of rapid reviews and stakeholder engagement in medicine, and on our experience of using secondary studies in software engineering, we propose extensions to an existing proposal for rapid reviews in software engineering to increase researcher-practitioner knowledge exchange. We refer to the extended method as an interactive rapid review. An interactive rapid review is a streamlined approach to conducting agile literature reviews in close collaboration between researchers and practitioners in software engineering. This report describes the process and discusses possible usage scenarios and some reflections from the proposal's ongoing evaluation. The proposed guidelines will potentially boost knowledge co-creation through active researcher-practitioner interaction by streamlining practitioners' involvement and recognizing the need for an agile process.

    Tool support for systematic reviews in software engineering

    Background: Systematic reviews have become an established methodology in software engineering. However, they are labour-intensive, error-prone, and time-consuming. These and other challenges have led to the development of tools to support the process; however, there is limited evidence about their usefulness. Aim: To investigate the usefulness of tools to support systematic reviews in software engineering and to develop an evaluation framework for an overall support tool. Method: A literature review, taking the form of a mapping study, was undertaken to identify and classify tools supporting systematic reviews in software engineering. Motivated by its results, a feature analysis was performed to independently compare and evaluate a selection of tools that aimed to support the whole systematic review process. An initial version of an evaluation framework was developed to carry out the feature analysis and was later refined based on its results. To obtain a deeper understanding of the technology, a survey was undertaken to explore systematic review tools in other domains, including semi-structured interviews with researchers in healthcare and social science. Quantitative and qualitative data were collected, analysed, and used to further refine the framework. Results: The literature review showed an encouraging growth of tools to support systematic reviews in software engineering, although many had received limited evaluation. The feature analysis provided new insight into the usefulness of tools, determined the strongest and weakest candidates, and established the feasibility of an evaluation framework. The survey provided knowledge about tools used in other domains, which helped further refine the framework. Conclusions: Tools to support systematic reviews in software engineering are still immature. Their potential, however, remains high, and it is anticipated that the need for tools within the community will increase. The evaluation framework presented aims to support the future development, assessment, and selection of appropriate tools.

    Performance evaluation metrics for multi-objective evolutionary algorithms in search-based software engineering: Systematic literature review

    Many recent studies have shown that multi-objective evolutionary algorithms have been widely applied in the field of search-based software engineering (SBSE) to find optimal solutions. Most of these studies either focus on solving newly re-formulated problems or on proposing new approaches, while a number of them perform reviews and comparative studies of the performance of the proposed algorithms. Evaluating such performance requires performance metrics that play important roles in comparing the investigated algorithms based on their best simulated results. While there are hundreds of performance metrics in the literature that can be used for such tasks, no systematic review has been conducted to provide evidence of how these metrics are used, particularly in the software engineering problem domain. In this paper, we aim to review and quantify the types of performance metrics, the number of objectives, and the software engineering areas reported in primary studies, with the goal of inspiring the SBSE community to explore such approaches in more depth. To perform this task, a formal systematic review protocol was applied for planning, searching, and extracting the desired elements from the studies. After applying all the relevant inclusion and exclusion criteria to the search results, 105 relevant articles were identified from the targeted online databases as scientific evidence to answer the eight research questions. The preliminary results show that a remarkable number of studies were reported without considering performance metrics for the purpose of algorithm evaluation. Of the 27 performance metrics identified, hypervolume, inverted generational distance, generational distance, and hypercube-based diversity metrics appear to be the most widely adopted across studies in software requirements engineering, software design, software project management, software testing, and software verification. Additionally, there is increasing interest in the community in re-formulating many-objective problems with more than three objectives, yet current work is dominated by re-formulations with two to three objectives.
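    Two of the metrics named in this abstract have simple closed forms, so a brief illustration may help readers unfamiliar with them. The Python sketch below is not taken from the reviewed studies; the function names and toy data are illustrative assumptions. It computes generational distance (GD) and inverted generational distance (IGD) for a set of obtained objective vectors against a known reference Pareto front, using the classic Euclidean-distance formulations.

        import numpy as np

        def generational_distance(front, reference, p=2.0):
            """GD: distance from each obtained point to its nearest reference point,
            aggregated as (sum of d_i^p)^(1/p) divided by the number of points."""
            # front: (n, m) obtained objective vectors; reference: (r, m) reference front
            dists = np.min(np.linalg.norm(front[:, None, :] - reference[None, :, :], axis=2), axis=1)
            return (np.sum(dists ** p) ** (1.0 / p)) / len(front)

        def inverted_generational_distance(front, reference, p=2.0):
            """IGD: the same computation with the two sets swapped, so it also
            rewards how well the obtained front covers the reference front."""
            return generational_distance(reference, front, p)

        # Toy two-objective minimisation example (values are illustrative only).
        reference = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
        obtained = np.array([[0.1, 1.1], [0.6, 0.6], [1.1, 0.1]])
        print(f"GD  = {generational_distance(obtained, reference):.4f}")
        print(f"IGD = {inverted_generational_distance(obtained, reference):.4f}")

    Lower values are better for both metrics: GD captures convergence to the reference front, while IGD additionally penalises poor coverage, which is why the two are often reported together.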

    How reliable are systematic reviews in empirical software engineering?

    BACKGROUND – the systematic review is becoming a more commonly employed research instrument in empirical software engineering. Before undue reliance is placed on the outcomes of such reviews, it would seem useful to consider the robustness of the approach in this particular research context. OBJECTIVE – the aim of this study is to assess the reliability of systematic reviews as a research instrument. In particular, we wish to investigate the consistency of the process and the stability of the outcomes. METHOD – we compare the results of two independent reviews undertaken with a common research question. RESULTS – the two reviews find similar answers to the research question, although the means of arriving at those answers vary. CONCLUSIONS – in addressing a well-bounded research question, groups of researchers with similar domain experience can arrive at the same review outcomes, even though they may do so in different ways. This provides evidence that, in this context at least, the systematic review is a robust research method.

    Simulation in manufacturing and business: A review

    This paper reports the results of a review of simulation applications published in the peer-reviewed literature between 1997 and 2006, providing an up-to-date picture of the role of simulation techniques within manufacturing and business. The review is characterised by three factors: wide coverage, broad scope of the simulation techniques considered, and a focus on real-world applications. A structured methodology was followed to narrow down the search from around 20,000 papers to 281. The results include interesting trends and patterns. For instance, although discrete-event simulation is the most popular technique, it has lower stakeholder engagement than other techniques, such as system dynamics or gaming; this is highly correlated with modelling lead time and purpose. Considering application areas, modelling is mostly used in scheduling. Finally, the review shows an increasing interest in hybrid modelling as an approach to cope with complex enterprise-wide systems.

    Safety-Critical Systems and Agile Development: A Mapping Study

    In the last decades, agile methods have had a huge impact on how software is developed. In many cases, this has led to significant benefits, such as improved quality and speed of software deliveries to customers. However, safety-critical systems have widely been dismissed from benefiting from agile methods. Products that include safety-critical aspects therefore face a situation in which the development of the safety-critical parts can significantly limit the potential speed-up from agile methods, both for the full product and for its non-safety-critical parts. For such products, the ability to develop safety-critical software in an agile way will generate a competitive advantage. In order to enable future research in this important area, we present in this paper a mapping of the current state of practice based on a mixed-method approach. Starting from a workshop with experts from six large Swedish product development companies, we develop a lens for our analysis. We then present a systematic mapping study on safety-critical systems and agile development through this lens in order to map potential benefits, challenges, and solution candidates for guiding future research.
    Comment: Accepted at the Euromicro Conference on Software Engineering and Advanced Applications 2018, Prague, Czech Republic.