470,866 research outputs found

    A Unified Checklist for Observational and Experimental Research in Software Engineering (Version 1)

    Get PDF
    Current checklists for empirical software engineering cover either experimental research or case study research but ignore the many commonalities that exist across all kinds of empirical research. Identifying these commonalities, and explaining why they exist, would enhance our understanding of empirical research in general and of the differences between experimental and case study research in particular. In this report we design a unified checklist for empirical research and identify commonalities and differences between experimental and case study research. We design the unified checklist as a specialization of the general engineering cycle, which itself is a special case of the rational choice cycle. We then compare the resulting empirical research cycle with two checklists for experimental research and with one checklist for case study research. The resulting checklist identifies important questions to be answered in experimental and case study research designs and reports. The checklist provides insight into two different types of empirical research design and their relationships. Its limitations are that it ignores other research methods such as meta-research or surveys, and that it has so far been tested only in our own research designs and in teaching empirical methods. Future work includes expanding the comparison to other methods and applying the checklist in more cases, by researchers other than ourselves.

    Evolution of statistical analysis in empirical software engineering research: Current state and steps forward

    Full text link
    Software engineering research is evolving, and papers are increasingly based on empirical data from a multitude of sources, using statistical tests to determine if, and to what degree, empirical evidence supports their hypotheses. To investigate the practices and trends of statistical analysis in empirical software engineering (ESE), this paper presents a review of a large pool of papers from top-ranked software engineering journals. First, we manually reviewed 161 papers; in the second phase of our method, we conducted a more extensive semi-automatic classification of 5,196 papers spanning the years 2001--2015. Results from both review steps were used to: i) identify and analyze the predominant practices in ESE (e.g., using the t-test or ANOVA), as well as relevant trends in the usage of specific statistical methods (e.g., nonparametric tests and effect size measures), and ii) develop a conceptual model for a statistical analysis workflow with suggestions on how to apply different statistical methods as well as guidelines to avoid pitfalls. Lastly, we confirm existing claims that current ESE practices lack a standard for reporting the practical significance of results. We illustrate how practical significance can be discussed in terms of both the statistical analysis and the practitioner's context. (Comment: journal submission, 34 pages, 8 figures)
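    To make the kind of analysis surveyed here concrete, the sketch below (not taken from the paper; the defect counts and group labels are invented) pairs a nonparametric test with an effect-size measure, the combination whose growing use the review tracks, and shows where practical significance could be discussed alongside the p-value.

```python
# Illustrative sketch only: a nonparametric test plus an effect-size measure.
# All data below are invented for demonstration.
from scipy.stats import mannwhitneyu

def cliffs_delta(a, b):
    """Cliff's delta: (#pairs a>b - #pairs a<b) / (len(a) * len(b))."""
    gt = sum(1 for x in a for y in b if x > y)
    lt = sum(1 for x in a for y in b if x < y)
    return (gt - lt) / (len(a) * len(b))

# Hypothetical defect counts per module under two development practices.
baseline  = [12, 15, 9, 14, 11, 13, 16, 10]
treatment = [8, 11, 7, 10, 9, 12, 8, 9]

stat, p = mannwhitneyu(baseline, treatment, alternative="two-sided")
delta = cliffs_delta(treatment, baseline)

print(f"Mann-Whitney U = {stat:.1f}, p = {p:.4f}")
# The effect size, not the p-value, is what a practical-significance
# discussion would interpret in the practitioner's context.
print(f"Cliff's delta = {delta:.2f}")
```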

    Empirical Evidence of Large-Scale Diversity in API Usage of Object-Oriented Software

    Get PDF
    In this paper, we study how object-oriented classes are used across thousands of software packages. We concentrate on "usage diversity", defined as the different statically observable combinations of methods called on the same object. We present empirical evidence that there is significant usage diversity for many classes. For instance, we observe in our dataset that Java's String is used in 2,460 different ways. We discuss the reasons for this observed diversity and its consequences for software engineering knowledge and research.
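    The following toy sketch (with invented observations; the paper's analysis is static and runs over thousands of real packages) shows one way the notion of usage diversity could be computed: counting the distinct combinations of methods called on objects of the same class.

```python
# Illustrative sketch: "usage diversity" counted as the number of distinct
# combinations of methods called on objects of the same class.
# The observations are made up; the paper derives them statically.
from collections import defaultdict

# Each record: (class name, frozenset of methods called on one object).
observations = [
    ("String", frozenset({"length"})),
    ("String", frozenset({"substring", "length"})),
    ("String", frozenset({"substring", "length"})),  # duplicate combination
    ("String", frozenset({"split", "trim"})),
    ("List",   frozenset({"add", "size"})),
]

combinations = defaultdict(set)
for cls, methods in observations:
    combinations[cls].add(methods)

for cls, combos in combinations.items():
    print(f"{cls}: {len(combos)} distinct usage(s)")
# String: 3 distinct usage(s); List: 1 distinct usage(s)
```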

    The Scalability-Efficiency/Maintainability-Portability Trade-off in Simulation Software Engineering: Examples and a Preliminary Systematic Literature Review

    Full text link
    Large-scale simulations play a central role in science and industry. Several challenges occur when building simulation software, because simulations require complex software developed in a dynamic construction process. That is why simulation software engineering (SSE) has lately been emerging as a research focus. The dichotomous trade-off between scalability and efficiency (SE) on the one hand and maintainability and portability (MP) on the other is one of the core challenges. We report on the SE/MP trade-off in the context of an ongoing systematic literature review (SLR). After characterizing the issue of the SE/MP trade-off using two examples from our own research, we (1) review the 33 identified articles that assess the trade-off, (2) summarize the proposed solutions for the trade-off, and (3) discuss the findings for SSE and future work. Overall, we see evidence for the SE/MP trade-off and first solution approaches. However, a strong empirical foundation has yet to be established; general quantitative metrics and methods supporting software developers in addressing the trade-off have to be developed. We foresee considerable future work in SSE across scientific communities. (Comment: 9 pages, 2 figures; accepted for presentation at the Fourth International Workshop on Software Engineering for High Performance Computing in Computational Science and Engineering, SEHPCCSE 2016.)

    An Empirical Evaluation of a Historical Data Warehouse

    Get PDF
    Computing is widely regarded as a scientific discipline that emphasizes three different perspectives: mathematics, present in the development of formalisms, theories, and algorithms; engineering, linked to the goal of making things better, faster, smaller, and cheaper; and, finally, science, which can be defined as the activity of developing general and predictive theories that can be evaluated and tested. However, research in software engineering rarely describes its research paradigms explicitly, or the standards used to assess the quality of its results. Owing to a growing understanding in the computer science community that empirical studies are needed to improve processes, methods, and tools for the development and maintenance of software, an area has emerged within software engineering: Empirical Software Engineering. This subarea makes more modest claims to scientific rigor, but it aims to address this shortcoming. The objective of this work is to conduct an empirical corroboration of a method for developing a Historical Data Warehouse, its temporal data model, and the associated query interface. Sociedad Argentina de Informática e Investigación Operativa (SADIO)
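    The abstract does not detail the temporal data model or the query interface; as a purely illustrative sketch, one common way to realize a historical table is to attach validity intervals to each row and query them for a point in time. The table and column names below are invented and need not match the paper's design.

```python
# Minimal sketch of a temporal ("historical") table and a point-in-time query.
# Table, columns, and data are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE product_price (
    product_id  INTEGER,
    price       REAL,
    valid_from  TEXT,  -- ISO date when this version became current (inclusive)
    valid_to    TEXT   -- ISO date when it was superseded (exclusive; '9999-12-31' = current)
);
INSERT INTO product_price VALUES
    (1, 10.0, '2020-01-01', '2021-07-01'),
    (1, 12.5, '2021-07-01', '9999-12-31');
""")

# "What was the price of product 1 on 2021-03-15?"
row = conn.execute(
    "SELECT price FROM product_price "
    "WHERE product_id = ? AND valid_from <= ? AND ? < valid_to",
    (1, "2021-03-15", "2021-03-15"),
).fetchone()
print(row[0])  # 10.0
```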

    Effective and Appropriate Use of Controlled Experimentation in Software Development Research

    Get PDF
    Although there is a large body of research and publication on software development, only a limited amount of this work includes an empirical demonstration of its effectiveness. Yet it is this empirical understanding that will help move software development from a craft to an engineering discipline. Among the empirical research methods, controlled experiments are the most commonly associated with scientific studies, and yet the least used to study software development. This thesis begins with a brief review of the different empirical methods commonly used to study software development. This review provides a quick introduction to each method, compares their main advantages and weaknesses, and provides context for how controlled experimentation compares to the other empirical methods. Using empirical methods to study software development is not easy or straightforward. There are limitations that appear to be inherent in the nature of software, as well as issues due to the improper understanding or application of empirical methods. These limitations and issues are identified, specifically for controlled experiments, and approaches for dealing with them are proposed. A controlled experiment was designed and conducted to demonstrate the method and explore the limitations and issues of empirical research in software development. This experiment and its results are presented. This example experiment demonstrates that conducting even a simple experiment in software development is challenging. Lessons learned from this experience are reported. Controlled experiments require that the researcher have a high degree of control over the environment where the experiment is carried out, which can be costly and difficult to achieve. This thesis concludes by discussing how controlled experiments can be used effectively in studies of software development.
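    As a minimal illustration of the mechanics such a controlled experiment relies on (random assignment of subjects to treatment and control, followed by a statistical comparison of outcomes), the sketch below uses invented subjects and task times; it is not the experiment reported in the thesis.

```python
# Illustrative sketch of controlled-experiment mechanics: random assignment,
# then a statistical comparison. Subjects and outcome values are invented.
import random
from scipy.stats import ttest_ind

random.seed(42)
subjects = [f"dev{i:02d}" for i in range(20)]
random.shuffle(subjects)
treatment_group, control_group = subjects[:10], subjects[10:]
print("treatment:", treatment_group)
print("control:  ", control_group)

# Hypothetical outcome: minutes to complete a maintenance task.
treatment_times = [34, 29, 41, 38, 30, 36, 33, 40, 31, 35]
control_times   = [45, 39, 50, 42, 47, 44, 41, 48, 43, 46]

t_stat, p_value = ttest_ind(treatment_times, control_times)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value alone does not establish validity or practical relevance;
# threats such as selection and learning effects still need to be addressed.
```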

    Open Science in Software Engineering

    Full text link
    Open science describes the movement of making any research artefact available to the public and includes, but is not limited to, open access, open data, and open source. While open science is becoming generally accepted as a norm in other scientific disciplines, in software engineering we are still struggling to adapt open science to the particularities of our discipline, rendering progress in our scientific community cumbersome. In this chapter, we reflect upon the essentials of open science for software engineering, including what open science is, why we should engage in it, and how we should do it. We particularly draw from our experiences as conference chairs implementing open science initiatives and as researchers actively engaging in open science to critically discuss challenges and pitfalls, and to address more advanced topics such as how and under which conditions to share preprints, what infrastructure and licence model to choose, or how to do it within the limitations of different reviewing models, such as double-blind reviewing. Our hope is to help establish a common ground and to contribute to making open science a norm also in software engineering. (Comment: camera-ready version of a chapter published in the book Contemporary Empirical Methods in Software Engineering; fixed a layout issue with a side-note.)