
    Dimensions of consistency in source versions and system compositions

    In building systems there are various levels at which we consider the problems of reasoning about consistency, and consistency means different things at those various levels. At the version management level, consistency means what it does in databases: no data is lost due to concurrency problems (e.g., race conditions). At the composition and substitution (or creation and evolution) levels it means something significantly different: namely, the syntactic and semantic consistency of the various pieces that make up the system. I first address the issue of what makes a system composition well-formed both syntactically and semantically. I then address the issue of substitution in well-formed system compositions, first in the context of simple substitution and then in the context of compound substitution (that is, the simultaneous substitution of multiple components). Note: This paper is derived and extended from several earlier papers: the well-formed system composition paper [9], which was published only as a technical report at CMU (though variously used without references or with misleading ones); the version control paper from ICSE9 [16]; the extended abstract for SCM3 [19]; and the shared dependency paper from SCM6 [20], all of which have been published only in conference or workshop versions. There may be parts of the other Inscape papers (ICSE9 [15], ICSE11 [17], and TAV3 [18]) included as well, all of which have been published only in conference versions.
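
    To make the notion of composition well-formedness concrete, here is a minimal sketch in Python under a deliberately simplified model in which each component merely names the interfaces it provides and requires; the Component class and the checking functions are illustrative inventions, not Inscape's actual machinery.

        # Toy illustration (not the Inscape model): a composition is well-formed
        # if every interface some component requires is provided by another
        # component in the composition; a simple substitution is acceptable if
        # the composition is still well-formed after the swap.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Component:
            name: str
            provides: frozenset  # interfaces this component offers
            requires: frozenset  # interfaces this component depends on

        def is_well_formed(composition):
            """Every required interface is provided somewhere in the composition."""
            provided = set().union(*(c.provides for c in composition))
            return all(c.requires <= provided for c in composition)

        def substitute(composition, old, new):
            """Simple substitution: replace `old` with `new` and recheck."""
            candidate = [new if c is old else c for c in composition]
            return candidate, is_well_formed(candidate)

        # Example: swapping in an io component that drops an interface breaks it.
        io = Component("io", frozenset({"read", "write"}), frozenset())
        parser = Component("parser", frozenset({"parse"}), frozenset({"read"}))
        main = Component("main", frozenset(), frozenset({"parse", "write"}))

        system = [io, parser, main]
        print(is_well_formed(system))              # True
        lean_io = Component("lean_io", frozenset({"read"}), frozenset())
        print(substitute(system, io, lean_io)[1])  # False: "write" no longer provided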

    Validity concerns in software engineering research

    Empirical studies that use software repository artifacts have become popular in the last decade due to the ready availability of open source project archives. In this paper, we survey empirical studies in the last three years of ICSE and FSE proceedings, and categorize these studies in terms of open source projects vs. proprietary source projects and the diversity of subject programs used in these studies. Our survey has shown that almost half (49%) of recent empirical studies used solely open source projects. Existing studies either draw general conclusions from these results or explicitly disclaim any conclusions that can extend beyond specific subject software. We conclude that researchers in empirical software engineering must consider the external validity concerns that arise from using only a small number of well-known open source software projects, and that discussion of data source selection is an important topic in software engineering research. Furthermore, we propose a community research infrastructure for software repository benchmarks and for sharing empirical analysis results, in order to address external validity concerns and to raise the bar for empirical software engineering research that analyzes software artifacts.

    Understanding software development: Processes, organisations and technologies

    Our primary goal is to understand what people do when they develop software and how long it takes them to do it. To get a proper perspective on software development processes we must study them in their context, that is, in their organizational and technological context. An extremely important means of gaining the needed understanding and perspective is to measure what goes on. Time and motion studies constitute a proven approach to understanding and improving any engineering process. We believe software processes are no different in this respect; however, the fact that software development yields a collaborative intellectual, as opposed to physical, output calls for careful and creative measurement techniques. In attempting to answer the question "What do people do in software development?" we have experimented with two novel forms of data collection in the software development field: time diaries and direct observation. We found both methods to be feasible and to yield useful information about time utilization, and we have quantified the effect of the social processes involved using the observational data. Among the insights gained from our time diary experiment are 1) developers switch between developments to minimize blocking and maximize overall throughput, and 2) there is a high degree of dynamic reassignment in response to changing project and organizational priorities. Among the insights gained from our direct observation experiment are 1) time diaries are a valid and accurate instrument with respect to their level of resolution, 2) unplanned interruptions constitute a significant time factor, and 3) the amount and kinds of communication are significant time and social factors.

    Are These Bugs Really "Normal"?

    Understanding the severity of reported bugs is important in both research and practice. In particular, a number of recently proposed mining-based software engineering techniques predict bug severity, bug report quality, and bug-fix time according to this information. Many bug tracking systems provide a field "severity" offering options such as "severe", "normal", and "minor", with "normal" as the default. However, there is a widespread perception that for many bug reports the label "normal" may not reflect the actual severity, because reporters may overlook setting the severity or may not feel confident enough to do so. In many cases, researchers ignore "normal" bug reports, and thus overlook a large percentage of the reports provided. On the other hand, treating them all together risks mixing reports that have very diverse properties. In this study, we investigate the extent to which "normal" bug reports actually have the "normal" severity. We find that many "normal" bug reports in practice are not normal. Furthermore, this misclassification can have a significant impact on the accuracy of mining-based tools and studies that rely on bug report severity information.
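
    As a rough, hypothetical illustration of the pitfall described above, the snippet below shows how a mining script might set aside "normal"-labelled reports rather than trusting the tracker's default; the field names and severity values are assumptions for the sketch, not taken from any particular bug tracking system or from this study.

        # Hypothetical sketch: treat the default "normal" label as possibly unset
        # instead of as a trustworthy severity value.

        def partition_by_severity(bug_reports, trust_normal=False):
            """Split reports into severe / non-severe, optionally setting aside
            "normal" reports whose label may simply be the tracker's default."""
            severe, non_severe, unreliable = [], [], []
            for report in bug_reports:
                severity = report.get("severity", "normal")
                if severity in {"blocker", "critical", "major", "severe"}:
                    severe.append(report)
                elif severity == "normal" and not trust_normal:
                    unreliable.append(report)  # default label: may not reflect reality
                else:
                    non_severe.append(report)
            return severe, non_severe, unreliable

        reports = [
            {"id": 1, "severity": "severe"},
            {"id": 2, "severity": "normal"},  # possibly never set by the reporter
            {"id": 3, "severity": "minor"},
        ]
        print([len(bucket) for bucket in partition_by_severity(reports)])  # [1, 1, 1]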

    System Compositions and Shared Dependencies

    Much of the work in configuration management has addressed the problems of version history and derivation. Little has been done to address the problems of reasoning about the consistency of composed components or the effects of substituting one version for another. In my paper, "Version Control in the Inscape Environment" [13], I defined a number of concepts to be used in reasoning about substituting one component for another. In this paper, I discuss the problem of shared dependencies (that is, substituting one or more interdependent components in a context), propose an approach for specifying such dependencies, and show how this approach can be used to reason about substitution in the context of interdependent components in a configuration. In building software systems from components, there are two important concerns that we must address: keeping track of how components in a system are derived, and determining that the components comprising a system are consistent…
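
    A small, hypothetical sketch of the shared-dependency idea (the data layout and function names are mine, not the paper's notation): substituting only one of two interdependent components leaves a dangling requirement, whereas a compound substitution of both is checked, and succeeds, as a unit.

        # Illustrative only: components that share a dependency must be replaced
        # together, so the compound substitution is checked as a single unit.

        def well_formed(composition):
            provided = set().union(*(c["provides"] for c in composition))
            return all(c["requires"] <= provided for c in composition)

        def compound_substitute(composition, replacements):
            """Replace several interdependent components simultaneously;
            `replacements` maps component names to their new versions."""
            candidate = [replacements.get(c["name"], c) for c in composition]
            return candidate, well_formed(candidate)

        # logger and client share the "log/v1" interface; replacing only the
        # logger is inconsistent, replacing both together is consistent.
        logger_v1 = {"name": "logger", "provides": {"log/v1"}, "requires": set()}
        client_v1 = {"name": "client", "provides": {"service"}, "requires": {"log/v1"}}
        app       = {"name": "app",    "provides": set(),       "requires": {"service"}}

        logger_v2 = {"name": "logger", "provides": {"log/v2"}, "requires": set()}
        client_v2 = {"name": "client", "provides": {"service"}, "requires": {"log/v2"}}

        system = [logger_v1, client_v1, app]
        print(compound_substitute(system, {"logger": logger_v2})[1])    # False
        print(compound_substitute(system, {"logger": logger_v2,
                                           "client": client_v2})[1])    # True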

    “Large” Abstractions for Software Engineering

    Abstraction is one of the primary intellectual tools we have for managing complexity in software systems. When we think of abstractions we usually think about “small” abstractions, such as data abstraction (parameterization), type abstraction (polymorphism), and procedural or functional abstraction. These are the everyday kinds of things we work with: finding the right concepts to make the expression of our software solutions easier to understand and easier to reason about. Here I propose we think about “large” abstractions: abstractions that provide critical distinctions about our field of software engineering as a whole; abstractions that enable us to see what we do in different and important ways and provide significant improvements in how we do software engineering. I give a number of examples and delineate why I think they have been, and still are, important.

    A Product Line Architecture for a Network Product

    Given a set of related (and existing) network products, the goal of this architectural exercise was to define a generic architecture sufficient to encompass existing and future products in such a way as to satisfy the following two requirements: 1) represent the range of products from single-board, centralized systems to multiple-board, distributed systems; and 2) support dynamic reconfigurability. We first describe the basic system abstractions and the typical organization for these kinds of projects. We then describe our generic architecture and show how these two requirements have been met. Our approach, using late binding, reflection, indirection, and location transparency, combines the two requirements neatly into an interdependent solution, though they could easily be separated into independent ones. We then address the ubiquitous problem of how to deal with multiple dimensions of organization. In many types of systems there are several competing ways in which the system might be organized. We show how architectural styles can be an effective mechanism for dealing with such issues as initialization and exception handling in a uniform way across the system components. Finally, we summarize the lessons learned from this experience.
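
    The combination of indirection, late binding, and location transparency mentioned above can be illustrated with a short, hypothetical Python sketch; the Registry and Proxy classes and the component names are invented for illustration and do not describe the product's actual design. Clients reach components through a name that is resolved at call time, so a component can be rebound at run time, even to a stub standing in for a remote board, without its clients changing.

        # Hedged sketch: a name registry (indirection), call-time resolution
        # (late binding), and a proxy that hides where the target runs
        # (location transparency) together allow dynamic reconfiguration.

        class Registry:
            def __init__(self):
                self._bindings = {}

            def bind(self, name, component):
                """(Re)bind a name; proxies pick up the change on their next call."""
                self._bindings[name] = component

            def lookup(self, name):
                return self._bindings[name]

        class Proxy:
            """Client-side handle: resolves the target on every call."""
            def __init__(self, registry, name):
                self._registry, self._name = registry, name

            def __getattr__(self, attr):
                def call(*args, **kwargs):
                    target = self._registry.lookup(self._name)  # resolved at call time
                    return getattr(target, attr)(*args, **kwargs)
                return call

        class LocalLineCard:
            def status(self):
                return "local card: ok"

        class RemoteLineCardStub:
            """Stands in for a component on another board; a real system would
            marshal the call over the network here."""
            def status(self):
                return "remote card: ok"

        registry = Registry()
        registry.bind("linecard", LocalLineCard())
        card = Proxy(registry, "linecard")
        print(card.status())                      # "local card: ok"

        # Dynamic reconfiguration: move the component without the client noticing.
        registry.bind("linecard", RemoteLineCardStub())
        print(card.status())                      # "remote card: ok"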