
    A Proposal for Analysis and Prediction for Software Projects using Collaborative Filtering, In-Process Measurements and a Benchmarks Database

    Proceedings of the International Conference on Software Process and Product Measurement (MENSURA 2006), Cádiz, Spain, November 6–8, 2006. Alain Abran, Reiner Dumke, Mercedes Ruiz (Eds.). The publication date is based on the conference dates.
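
    The proposal named in the title combines collaborative filtering with in-process measurements and a benchmarks database. A minimal sketch of that general idea, assuming a toy benchmarks table and a similarity-weighted (nearest-neighbour) predictor; this is illustrative only, not the authors' actual method:

```python
# Illustrative only: similarity-weighted prediction over a hypothetical
# benchmarks database, in the spirit of collaborative filtering.
import numpy as np

# Hypothetical benchmarks database: rows are finished projects,
# columns are in-process measurements (size, team size, defect rate).
benchmarks = np.array([
    [120.0, 5, 0.8],
    [300.0, 9, 1.5],
    [ 80.0, 3, 0.4],
    [210.0, 7, 1.1],
])
efforts = np.array([14.0, 40.0, 9.0, 27.0])  # known effort (person-months)

def predict_effort(target, k=2):
    """Predict effort as a similarity-weighted mean of the k most similar projects."""
    scale = benchmarks.max(axis=0)            # normalise columns so none dominates
    dists = np.linalg.norm(benchmarks / scale - target / scale, axis=1)
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + 1e-9)   # closer projects weigh more
    return float(np.average(efforts[nearest], weights=weights))

# Weighted estimate for a new, in-progress project with the given measurements.
print(predict_effort(np.array([150.0, 6, 0.9])))
```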

    Mega Software Engineering

    Technical Report of the Software Engineering Lab, Osaka University, SEL-Sep-22-200

    The consistency of empirical comparisons of regression and analogy-based software project cost prediction

    OBJECTIVE: to determine the consistency within and between results in empirical studies of software engineering cost estimation. We focus on regression and analogy techniques as these are commonly used. METHOD: we conducted an exhaustive search using predefined inclusion and exclusion criteria and identified 67 journal papers and 104 conference papers. From this sample we identified 11 journal papers and 9 conference papers that used both methods. RESULTS: our analysis found that about 25% of studies were internally inconclusive. We also found approximately equal evidence in favour of and against analogy-based methods. CONCLUSIONS: we confirm the lack of consistency in the findings and argue that this inconsistent pattern from 20 different studies comparing regression and analogy is somewhat disturbing. It suggests that we need to ask more detailed questions than just: “What is the best prediction system?”
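
    A minimal sketch of the two families of prediction systems the review compares, regression and analogy (nearest-neighbour) estimation, on invented project data; the data and functions are illustrative assumptions, not taken from any of the reviewed studies:

```python
# Illustrative contrast of regression-based vs. analogy-based effort prediction.
import numpy as np

size   = np.array([10.0, 25.0, 40.0, 60.0, 80.0, 120.0])  # e.g. function points
effort = np.array([ 4.0, 11.0, 17.0, 28.0, 35.0,  55.0])  # person-months

def regression_predict(x):
    # Fit effort = a * size + b by least squares and apply it to x.
    a, b = np.polyfit(size, effort, deg=1)
    return a * x + b

def analogy_predict(x, k=2):
    # Average the effort of the k projects whose size is closest to x.
    nearest = np.argsort(np.abs(size - x))[:k]
    return float(effort[nearest].mean())

new_project_size = 50.0
print(regression_predict(new_project_size), analogy_predict(new_project_size))
```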

    Quality of Design, Analysis and Reporting of Software Engineering Experiments: A Systematic Review

    Background: Like any research discipline, software engineering research must be of a certain quality to be valuable. High quality research in software engineering ensures that knowledge is accumulated and helpful advice is given to the industry. One way of assessing research quality is to conduct systematic reviews of the published research literature. Objective: The purpose of this work was to assess the quality of published experiments in software engineering with respect to the validity of inference and the quality of reporting. More specifically, the aim was to investigate the level of statistical power, the analysis of effect size, the handling of selection bias in quasi-experiments, and the completeness and consistency of the reporting of information regarding subjects, experimental settings, design, analysis, and validity. Furthermore, the work aimed at providing suggestions for improvements, using the potential deficiencies detected as a basis. Method: The quality was assessed by conducting a systematic review of the 113 experiments published in nine major software engineering journals and three conference proceedings in the decade 1993-2002. Results: The review revealed that software engineering experiments were generally designed with unacceptably low power and that inadequate attention was paid to issues of statistical power. Effect sizes were sparsely reported and not interpreted with respect to their practical importance for the particular context. There seemed to be little awareness of the importance of controlling for selection bias in quasi-experiments. Moreover, the review revealed a need for more complete and standardized reporting of information, which is crucial for understanding software engineering experiments and judging their results. Implications: The consequence of low power is that the actual effects of software engineering technologies will not be detected to an acceptable extent. The lack of reporting of effect sizes and the improper interpretation of effect sizes result in ignorance of the practical importance, and thereby the relevance to industry, of experimental results. The lack of control for selection bias in quasi-experiments may make these experiments less credible than randomized experiments. This is an unsatisfactory situation, because quasi-experiments serve an important role in investigating cause-effect relationships in software engineering, for example, in industrial settings. Finally, the incomplete and unstandardized reporting makes it difficult for the reader to understand an experiment and judge its results. Conclusions: Insufficient quality was revealed in the reviewed experiments. This has implications for inferences drawn from the experiments and might in turn lead to the accumulation of erroneous information and the offering of misleading advice to the industry. Ways to improve this situation are suggested.
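
    A small worked example of the two statistical notions the review centres on, effect size (Cohen's d) and statistical power, using invented scores and assuming the statsmodels package is available:

```python
# Worked sketch of effect size and power for a two-group experiment.
import numpy as np
from statsmodels.stats.power import TTestIndPower

# Hypothetical scores from two groups in a software engineering experiment
# (e.g. defects found with technique A vs. technique B).
a = np.array([12.0, 15.0, 14.0, 10.0, 13.0, 16.0])
b = np.array([ 9.0, 11.0, 10.0,  8.0, 12.0, 10.0])

# Cohen's d: mean difference divided by the pooled standard deviation.
pooled_sd = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                    / (len(a) + len(b) - 2))
d = (a.mean() - b.mean()) / pooled_sd

power_analysis = TTestIndPower()
# Achieved power of a two-sample t-test with 6 subjects per group at alpha = 0.05,
achieved_power = power_analysis.power(effect_size=d, nobs1=len(a), alpha=0.05)
# and the per-group sample size needed to reach the conventional 0.8 power level.
needed_n = power_analysis.solve_power(effect_size=d, alpha=0.05, power=0.8)
print(round(d, 2), round(achieved_power, 2), round(needed_n, 1))
```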

    Going Beyond the Blueprint: Unravelling the Complex Reality of Software Architectures

    The term Software Architecture captures a complex amalgam of representations and uses, real and figurative, that is rendered and utilized by different stakeholders throughout the software development process. Current approaches to documenting Software Architecture, in contrast, rely on the notion of a blueprint that may not be sufficient to capture this multi-faceted concept. We argue that it might not even be feasible in practice to have such a unified understanding of this concept for a given setting. We demonstrate, with the help of in-depth case studies, that four key metaphors govern the creation and use of software architecture by different communities: “blueprint”, “literature”, “language”, and “decision”. The results challenge the current, somewhat narrow, understanding of the concept of software architecture that focuses on description languages, suggesting new directions for more effective representation and use of software architecture in practice.

    Experimental Evaluation of a Tool for Change Impact Prediction in Requirements Models: Design, Results and Lessons Learned

    There are commercial tools like IBM Rational RequisitePro and DOORS that support semi-automatic change impact analysis for requirements. These tools capture the requirements relations and allow tracing the paths they form. In most of these tools, relation types do not say anything about the meaning of the relations except the direction. When a change is introduced to a requirement, the requirements engineer analyzes the impact of the change on related requirements. When the semantic information needed to determine precisely how requirements are related to each other is missing, the requirements engineer generally has to assume worst-case dependencies based on the available syntactic information only. We developed a tool that uses formal semantics of requirements relations to support change impact analysis and prediction in requirements models. The tool TRIC (Tool for Requirements Inferencing and Consistency checking) works on models that explicitly represent requirements and the relations among them with their formal semantics. In this paper we report on the evaluation of how TRIC improves the quality of change impact predictions. A quasi-experiment is systematically designed and executed to empirically validate the impact of TRIC. We conduct the quasi-experiment with 21 master’s degree students predicting change impact for five change scenarios in a real software requirements specification. The participants are assigned Microsoft Excel, IBM RequisitePro, or TRIC to perform change impact prediction for the change scenarios. It is hypothesized that using TRIC would positively impact the quality of change impact predictions. Two formal hypotheses are developed. As a result of the experiment, we are not able to reject the null hypotheses, and thus we are not able to show experimentally the effectiveness of our tool. In the paper we discuss reasons for the failure to reject the null hypotheses in the experiment.
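
    The following sketch illustrates the underlying idea, not TRIC itself: when relation types carry semantics, only some relations propagate a change, so the predicted impact set is smaller than the worst-case set obtained by following every link. The requirement names, relation types, and propagation rule are invented for the example:

```python
# Illustrative change impact propagation over typed requirement relations.
RELATIONS = [
    # (source, relation type, target): directed relations between requirements.
    ("R2", "refines",   "R1"),
    ("R3", "requires",  "R1"),
    ("R4", "conflicts", "R1"),
    ("R5", "refines",   "R3"),
]

# Assumed rule for the sketch: only these relation types propagate a change.
PROPAGATING = {"refines", "requires"}

def impacted(changed, use_semantics=True):
    """Transitively collect requirements possibly impacted by changing `changed`."""
    result, frontier = set(), {changed}
    while frontier:
        current = frontier.pop()
        for source, rel_type, target in RELATIONS:
            if target == current and (not use_semantics or rel_type in PROPAGATING):
                if source not in result:
                    result.add(source)
                    frontier.add(source)
    return result

print(sorted(impacted("R1", use_semantics=False)))  # worst case: R2, R3, R4, R5
print(sorted(impacted("R1", use_semantics=True)))   # semantics-aware: R2, R3, R5
```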

    Interpreting Online Discussions: Connecting Artifacts and Experiences in User Studies

    This paper presents a methodological effort to connect the specifics of technologies to the details of social practices, in an attempt to deepen our understanding of evolving sociotechnical cultures. More specifically, this paper describes a methodological framework that makes use of online discussions as a vital source of data. The reason the paper focuses on online discussion is that the Internet has become a natural habitat for discussions of high-end technologies, be they physical products or online services. The framework combines interpretative research and attribute-consequence-value (ACV) chain theory – a theory commonly applied to market and consumer research – to conceptualize and explore evolving prosumer cultures through online discussions. The benefit of using ACV chain theory is that it explicitly connects products and services to practices and values. The proposed methodological framework identifies three central techniques to elicit and analyse ACV chains from online prosumer discussions: (1) attribute analysis, (2) Internet forum data collection, and (3) thematic analysis. The paper goes on to exemplify the application of this framework by examining the sociotechnical co-evolution of the friend list – a backbone feature of many social networking services. In summary, this paper shows how ACV chains can be fruitfully applied to explore evolving prosumer cultures and make the vital connection between technical features and emerging cultures.
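
    A minimal sketch of how an elicited attribute-consequence-value chain could be recorded and aggregated during thematic analysis; the feature, chains, and value labels are invented for illustration and do not come from the paper:

```python
# Illustrative representation and aggregation of ACV chains coded from forum posts.
from collections import Counter
from dataclasses import dataclass

@dataclass
class ACVChain:
    attribute: str     # concrete product/service feature mentioned in a post
    consequence: str   # practical consequence the user associates with it
    value: str         # underlying personal or social value

# Hypothetical chains coded from posts about a friend-list feature.
chains = [
    ACVChain("friend list grouping", "separate work and private contacts", "privacy"),
    ACVChain("friend list size cap", "keeps the feed manageable", "control"),
    ACVChain("friend list grouping", "share photos with family only", "privacy"),
]

# Thematic analysis in miniature: which values dominate the discussion?
print(Counter(chain.value for chain in chains))
```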

    Visualizing the customization endeavor in product-based-evolving software product lines: a case of action design research

    Software Product Lines (SPLs) aim at systematically reusing software assets and deriving products (a.k.a. variants) out of those assets. However, it is not always possible to handle SPL evolution directly through these reusable assets. Time-to-market pressure, expedited bug fixes, or product specifics lead the evolution to first happen at the product level, and to be later merged back into the SPL platform where the core assets reside. This is referred to as product-based evolution. In this scenario, deciding when and what should go into the next SPL release is far from trivial. Distinct questions arise. How much effort are developers spending on product customization? Which are the most customized core assets? To what extent is the core asset code being reused for a given product? We refer to this endeavor as Customization Analysis, i.e., understanding the functional increments in adjusting products from the last SPL platform release. The scale of the SPLs' code-base calls for customization analysis to be conducted through Visual Analytics tools. This work addresses the design principles for such tools through a joint effort between academia and industry, specifically, Danfoss Drives, a company division in charge of the P400 SPL. Accordingly, we adopt an Action Design Research approach where answers are sought by interacting with the practitioners in the studied situations. We contribute by providing informed goals for customization analysis as well as an intervention in terms of a visual analytics tool. We conclude by discussing to what extent this experience can be generalized to product-based evolving SPL organizations other than Danfoss Drives. Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. This work is supported by the Spanish Ministry of Science, Innovation and Universities grant number RTI2018099818-B-I00 and MCIU-AEI TIN2017-90644-REDT (TASOVA). ONEKIN enjoys support from the program 'Grupos de Investigacion del Sistema Universitario Vasco 2019-2021' under contract IT1235-19. Raul Medeiros enjoys a doctoral grant from the Spanish Ministry of Science and Innovation.
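
    One way a visual analytics tool could quantify the customization endeavor is a per-asset measure of how much core-asset code survives unchanged in a product variant. The sketch below uses a simple line-based similarity ratio on placeholder snippets; it is an assumption for illustration, not Danfoss Drives' or the authors' actual metric:

```python
# Illustrative customization measure: share of a core asset changed in a variant.
import difflib

core_asset = """\
def start_motor(speed):
    check_limits(speed)
    apply_voltage(speed)
"""

product_variant = """\
def start_motor(speed, ramp):
    check_limits(speed)
    ramp_up(ramp)
    apply_voltage(speed)
"""

# Compare the two versions line by line.
matcher = difflib.SequenceMatcher(None, core_asset.splitlines(),
                                  product_variant.splitlines())
reuse_ratio = matcher.ratio()       # 1.0 = identical, 0.0 = fully rewritten
customization = 1.0 - reuse_ratio   # share of the asset that was customized
print(round(reuse_ratio, 2), round(customization, 2))
```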