
    “Computing” Requirements for Open Source Software: A Distributed Cognitive Approach

    Most requirements engineering (RE) research has been conducted in the context of structured and agile software development. Software, however, is increasingly developed in open source software (OSS) forms, which have several unique characteristics. In this study, we approach OSS RE as a sociotechnical, distributed cognitive process in which distributed actors “compute” requirements—i.e., transform requirements-related knowledge into forms that foster a shared understanding of what the software is going to do and how it can be implemented. Such computation takes place through the social sharing of knowledge and the use of heterogeneous artifacts. To illustrate the value of this approach, we conduct a case study of a popular OSS project, Rubinius—a runtime environment for the Ruby programming language—and identify ways in which the cognitive workload associated with RE becomes distributed socially, structurally, and temporally across actors and artifacts. We generalize our observations into an analytic framework of OSS RE, which delineates three stages of requirements computation: excavation, instantiation, and testing-in-the-wild. We show how the distributed, dynamic, and heterogeneous computational structure underlying OSS development builds an effective mechanism for managing requirements. Our study contributes to the sorely needed theorizing of appropriate RE processes within highly distributed environments, as it identifies and articulates several novel mechanisms that undergird the cognitive processes associated with distributed forms of RE.

    ‘Computing’ Requirements in Open Source Software Projects

    Due to its high dissimilarity with traditional software development, Requirements Engineering (RE) in Open Source Software (OSS) remains poorly understood, despite the visible success of many OSS projects. In this study, we approach OSS RE as a sociotechnical and distributed cognitive activity in which multiple actors deploy heterogeneous artifacts to ‘compute’ requirements so as to reach a collectively held understanding of what the software is going to do. We conduct a case study of a popular OSS project, Rubinius (a runtime environment for the Ruby programming language). Specifically, we investigate the ways in which this project distributes cognitive effort along social, structural, and temporal dimensions, and how its requirements computation takes place accordingly. In particular, we seek to generalize our observations into a theoretical framework that explains how three temporally ordered processes of distributed cognition in OSS projects, denoted excavation, instantiation, and testing-in-the-wild, tie together to form a powerful distributed computational structure for managing requirements.

    Social Network Analysis of Open Source Projects

    A large amount of widely used software today is either open source or includes open-source components. Much open-source software has proved to be of very high quality despite being developed through unconventional methods. The success of open-source products has sparked interest in the software industry in why these projects are so successful and how this seemingly unstructured development process can yield such great results. This thesis presents a study of the projects hosted by one of the largest and best-known open-source software communities in existence. The study involves gathering developer collaboration data and then using social network analysis to find trends in the data that might eventually be used to create benchmarks for open-source software development. The results show that several interesting trends can be found. By applying social network analysis to the collaboration of open-source developers across a wide variety of projects, a few observations can be made that give valuable insight into the development process of open-source projects.
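    To make this concrete, below is a minimal Python sketch of the kind of analysis the thesis describes, using the networkx library; the developer names, the edge-list input format, and the particular metrics are illustrative assumptions, not the thesis's actual tooling or data.

```python
# Hedged sketch: social network analysis over developer collaboration data.
# Assumes collaboration has already been extracted as pairs of developers
# who worked on the same project or files (names here are hypothetical).
import networkx as nx

collaborations = [
    ("alice", "bob"),
    ("alice", "carol"),
    ("bob", "carol"),
    ("carol", "dave"),
]

G = nx.Graph()
G.add_edges_from(collaborations)

# Standard metrics often used to characterize development communities.
print("Degree centrality:   ", nx.degree_centrality(G))
print("Betweenness:         ", nx.betweenness_centrality(G))
print("Average clustering:  ", nx.average_clustering(G))
print("Connected components:", nx.number_connected_components(G))
```

    Central, highly connected developers surface immediately in such metrics; tracked across many projects, they are the kind of trend that could feed the benchmarks mentioned above.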

    TOWARDS A THEORY ON THE SUSTAINABILITY AND PERFORMANCE OF FLOSS COMMUNITIES

    With the emergence of Free/Libre and Open Source Software (FLOSS) as a significant force reshaping the software industry, it becomes more important to reassess conventionally held wisdom about software development. Recent literature on the FLOSS development process suggests that our previously held knowledge about software development might be obsolete. We specifically highlight the tension between the views embodied by Linus' Law and Brooks' Law. Linus' Law, put forward by Eric Raymond, suggests that the FLOSS development process benefits greatly from large numbers of developers. Brooks' Law, which is part of currently held wisdom on software development, suggests that adding developers is detrimental to the progress of software projects. Raymond explains that the distributed nature of the FLOSS development process and the capacity of source code to convey rich information between developers are the main causes of the obsolescence of Brooks' Law in the FLOSS development context. By performing two separate studies, we show how both views of software development can be complementary. Using the lens of Transaction Cost Theory (TCT) in the first study, we identify the characteristics of development knowledge as the main factors constraining new members from contributing source code to FLOSS development projects. We also conceptualize these knowledge characteristics as analogous to what Brooks described as the ramp-up effect. We forward the argument, and offer empirical validation, that managing these characteristics of knowledge would increase the number of contributors to a FLOSS project. The second study is concerned with the impact of having these new members added to the development team of a FLOSS project. Using the lens of Organizational Information Processing Theory (OIPT), we forward the argument, and offer empirical validation, that more contributors can be detrimental to progress if the committers of a FLOSS project are overwhelmed. Our findings also suggest that large development teams are indeed possible in FLOSS; however, they must be supported by proper source code design and community structures.

    Studying the evolution of libre software projects using publicly available data

    Libre software projects offer abundant information about themselves in publicly available repositories (source code snapshots, CVS repositories, etc.), which are a good source of quantitative data about the project itself and the software it produces. The retrieval (and, partially, the analysis) of all those data can be automated, following a simple methodology aimed at characterizing the evolution of the project. Since the base information is public, and the tools used are libre and readily available, other groups can easily reproduce and review the results. Since the characterization offers some insight into the details of the project, it can be used as the basis for qualitative analysis (including correlations and comparative studies). In some cases, this methodology could also be used for proprietary software (although usually losing the benefits of peer review). This approach is shown, as an example, applied to MONO, a libre software project implementing parts of the .NET framework.
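    As a rough illustration of this kind of automated retrieval, the following Python sketch counts commits per month from a local clone; git is used here as a present-day stand-in for the CVS repositories mentioned above, and the repository path and commits-per-month metric are illustrative assumptions rather than the paper's exact methodology.

```python
# Hedged sketch: pull one evolution metric (commits per month) out of a
# publicly available repository. Requires git and a local clone.
import subprocess
from collections import Counter

def commits_per_month(repo_path):
    """Return a Counter mapping YYYY-MM to the number of commits."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log",
         "--pretty=format:%ad", "--date=format:%Y-%m"],
        capture_output=True, text=True, check=True,
    )
    return Counter(log.stdout.splitlines())

if __name__ == "__main__":
    for month, count in sorted(commits_per_month(".").items()):
        print(month, count)
```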

    Studying the laws of software evolution in a long-lived FLOSS project

    Some free, open-source software projects have been around for quite a long time, the longest-living ones dating from the early 1980s. For some of them, detailed information about their evolution is available in source code management systems that have tracked all their code changes for periods of more than 15 years. This paper examines in detail the evolution of one such project, glibc, with the main aim of understanding how it evolved and how well it matched Lehman's laws of software evolution. As a result, we have developed a methodology for studying the evolution of such long-lived projects based on the information in their source code management repository, described in detail several aspects of the history of glibc, including some activity and size metrics, and found that some of the laws of software evolution may not hold in this case.
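    Under similar assumptions, the sketch below extracts one size metric per release from a source code management repository: the number of files at each tag. Using git (glibc's early history predates it), treating tags as releases, and measuring size by file count are all simplifications for illustration, not the paper's actual protocol.

```python
# Hedged sketch: size-per-release data for inspecting Lehman's law of
# continuing growth. Requires git and a local clone with release tags.
import subprocess

def git(repo, *args):
    """Run a git command in `repo` and return its stdout as text."""
    return subprocess.run(
        ["git", "-C", repo, *args],
        capture_output=True, text=True, check=True,
    ).stdout

def size_per_release(repo="."):
    """Yield (tag, file_count) pairs in tag-creation order."""
    for tag in git(repo, "tag", "--sort=creatordate").split():
        files = git(repo, "ls-tree", "-r", "--name-only", tag).splitlines()
        yield tag, len(files)

if __name__ == "__main__":
    for tag, n_files in size_per_release():
        print(f"{tag}\t{n_files} files")
```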

    Faults in Linux 2.6

    In August 2011, Linux entered its third decade. Ten years before, Chou et al. published a study of faults found by applying a static analyzer to Linux versions 1.0 through 2.4.1. A major result of their work was that the drivers directory contained up to 7 times more of certain kinds of faults than other directories. This result inspired numerous efforts to improve the reliability of driver code. Today, Linux is used in a wider range of environments, provides a wider range of services, and has adopted a new development and release model. What has been the impact of these changes on code quality? To answer this question, we have transported Chou et al.'s experiments to all versions of Linux 2.6, released between 2003 and 2011. We find that Linux has more than doubled in size during this period, but the number of faults per line of code has been decreasing. Moreover, the fault rate of drivers is now below that of other directories, such as arch. These results can guide further development and research efforts for the decade to come. To allow these results to be updated as Linux evolves, we define our experimental protocol and make our checkers available.
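    As a back-of-the-envelope illustration of the fault-rate comparison, the following Python sketch computes faults per thousand lines of code by directory; every number below is a hypothetical placeholder, not data from the paper, whose actual checkers and protocol are published separately.

```python
# Hedged sketch: compare fault rates (faults per KLOC) across directories.
# All counts are made-up placeholders for illustration only.
loc_by_dir    = {"drivers": 5_000_000, "arch": 2_000_000, "fs": 700_000}
faults_by_dir = {"drivers": 450,       "arch": 300,       "fs": 90}

for d in loc_by_dir:
    rate = 1000 * faults_by_dir[d] / loc_by_dir[d]
    print(f"{d:8s} {rate:.3f} faults per KLOC")
```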