33,035 research outputs found

    Are Delayed Issues Harder to Resolve? Revisiting Cost-to-Fix of Defects throughout the Lifecycle

    Many practitioners and academics believe in a delayed issue effect (DIE); i.e. the longer an issue lingers in the system, the more effort it requires to resolve. This belief is often used to justify major investments in new development processes that promise to retire more issues sooner. This paper tests for the delayed issue effect in 171 software projects conducted around the world between 2006 and 2014. To the best of our knowledge, this is the largest study yet published on this effect. We found no evidence for the delayed issue effect; i.e. the effort to resolve issues in a later phase was not consistently or substantially greater than when issues were resolved soon after their introduction. This paper documents the above study and explores reasons for the mismatch between this common rule of thumb and empirical data. In summary, DIE is not some constant across all projects. Rather, DIE might be a historical relic that occurs intermittently only in certain kinds of projects. This is a significant result, since it predicts that new development processes that promise to retire more issues faster will not have a guaranteed return on investment (depending on the context in which they are applied), and that a long-held truth in software engineering should not be considered a global truism.
    Comment: 31 pages. Accepted with minor revisions to Journal of Empirical Software Engineering. Keywords: software economics, phase delay, cost to fix
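The comparison the paper describes, effort to resolve issues fixed soon after introduction versus issues that lingered into a later phase, can be illustrated with a small effect-size check. The sketch below is purely illustrative: the effort values are invented, and Cliff's delta is one common nonparametric effect size, not necessarily the statistic the study used.

```python
# Illustrative only: effort values are synthetic, not the study's data.

def cliffs_delta(xs, ys):
    """Cliff's delta: P(x > y) - P(x < y) over all pairs.
    |delta| below ~0.147 is conventionally a negligible effect."""
    gt = sum(1 for x in xs for y in ys if x > y)
    lt = sum(1 for x in xs for y in ys if x < y)
    return (gt - lt) / (len(xs) * len(ys))

same_phase_effort = [2, 3, 3, 4, 5, 5, 6, 8]   # hours (synthetic)
delayed_effort    = [2, 3, 4, 4, 5, 6, 6, 7]   # hours (synthetic)

delta = cliffs_delta(delayed_effort, same_phase_effort)
print(f"Cliff's delta = {delta:.3f}")   # 0.078: negligible difference
```

A "no DIE" outcome like the one reported would correspond to effect sizes near zero across projects and phase gaps.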

    Method-Level Bug Severity Prediction using Source Code Metrics and LLMs

    Over the past couple of decades, significant research effort has been devoted to the prediction of software bugs. However, most existing work in this domain treats all bugs the same, which is not the case in practice. It is important for a defect prediction method to estimate the severity of the identified bugs so that the higher-severity ones get immediate attention. In this study, we investigate source code metrics, source code representation using large language models (LLMs), and their combination for predicting the bug severity labels of two prominent datasets. We leverage several source code metrics at method-level granularity to train eight different machine-learning models. Our results suggest that the Decision Tree and Random Forest models outperform the other models on several evaluation metrics. We then use the pre-trained CodeBERT LLM to study the effectiveness of source code representations in predicting bug severity. Fine-tuning CodeBERT improves bug severity prediction significantly, in the range of 29%-140% across several evaluation metrics, compared to the best classic prediction model trained on source code metrics. Finally, we integrate source code metrics into CodeBERT as an additional input, using our two proposed architectures, both of which enhance the effectiveness of the CodeBERT model.
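The fusion idea described here, feeding hand-crafted method-level metrics into a classifier alongside a learned code representation, can be sketched in miniature. Everything below is invented for illustration: the 4-dimensional "embeddings", the metric values, the scaling constants, and the nearest-centroid classifier. The actual work uses 768-dimensional CodeBERT embeddings and trained models such as Decision Tree and Random Forest.

```python
# Toy late-fusion sketch (all values invented, not the paper's method).

def fuse(embedding, metrics, metric_scale):
    # scale each raw metric to roughly [0, 1] so it is comparable in
    # magnitude to the embedding dimensions, then concatenate
    return embedding + [m / s for m, s in zip(metrics, metric_scale)]

def nearest_centroid(x, centroids):
    # classify by squared Euclidean distance to the closest class centroid
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda label: dist(x, centroids[label]))

# two invented metrics per method: (lines of code, cyclomatic complexity)
scale = [200.0, 10.0]
centroids = {
    "low":  fuse([0.1, 0.2, 0.1, 0.0], [20, 1], scale),
    "high": fuse([0.7, 0.1, 0.6, 0.4], [150, 8], scale),
}
method = fuse([0.6, 0.15, 0.5, 0.35], [120, 7], scale)
print(nearest_centroid(method, centroids))   # prints "high" for this input
```

The design point is simply that concatenation lets one classifier see both feature families; the paper's architectures wire the metrics into the fine-tuned model itself.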

    Validation of top-down, intelligent reservoir modeling using numerical reservoir simulation

    The technique, named Top-Down Intelligent Reservoir Modeling (not to be confused with BP's TDRM history-matching technique), integrates traditional reservoir engineering analysis with Artificial Intelligence & Data Mining (AI&DM) technology to generate a full field model. The distinguishing feature of this novel technique is its low data requirement, which saves the time and research resources needed to obtain accurate predictions. It requires only field production rates and some well log data, such as porosity, thickness, and initial water saturation, to start the analysis and provide complete development strategies for the field, although it can incorporate almost any type and amount of available data to increase the accuracy and validity of the developed model.
    In this work, three reservoir models with different characteristics and operational conditions were generated using a commercial simulator and also using the proposed Top-Down Modeling method. The models were built with different PVT and initial reservoir conditions (saturated or under-saturated), different numbers of wells, and different distributions of reservoir characteristics (introducing heterogeneity).
    The production rates and well log data that had been used in the commercial simulator to produce these models were then imported into the Top-Down Modeling software (IPDA & IDEA) to develop a new empirical reservoir model, in order to validate the capabilities of Top-Down Modeling in predicting production issues of an oil reservoir against the commercial simulator.
    Investigation and validation of Top-Down Modeling's capabilities included identification of gas cap development within the formation, identification of infill locations by mapping the remaining reserves, and prediction of the production performance of newly drilled wells. The results of the Top-Down Modeling analysis closely matched the commercial simulation models and their results.

    Understanding requirements engineering process: a challenge for practice and education

    Reviews of the state of professional practice in Requirements Engineering (RE) stress that the RE process is both complex and hard to describe, and suggest there is a significant difference between competent and "approved" practice. "Approved" practice is reflected in (and, in all likelihood, has its genesis in) RE education, so that the knowledge and skills taught to students do not match the knowledge and skills required and applied by competent practitioners. A new understanding of the RE process has emerged from our recent study. RE is revealed as inherently creative, involving cycles of building and major reconstruction of the models developed, significantly different from the systematic and smoothly incremental process generally described in the literature. The process is better characterised as highly creative, opportunistic and insight-driven. This mismatch between approved and actual practice presents a challenge to RE education: RE requires insight and creativity as well as technical knowledge. Traditional learning models applied to RE focus, however, on notation and prescribed processes acquired through repetition. We argue that traditional learning models fail to support the learning required for RE, and propose both a new model based on cognitive flexibility and a framework for RE education to support this model.

    Estimating Project Performance through a System Dynamics Learning Model

    This is the author accepted manuscript. The final version is available from Wiley via the DOI in this record.
    Monitoring the technical progress of projects is difficult, especially for complex projects where the current state may be obscured by the use of traditional project metrics. Late detection of technical problems leads to high resolution costs and delayed delivery of projects. To counter this, we report on the development of an updated technical-metrics process designed to help ensure the on-time delivery, to both cost and schedule, of high-quality products by a U.K. systems engineering company. Published best practice suggests the necessity of using planned parameter profiles crafted to support technical metrics, but these have proven difficult to create due to the variance in project types and noise within individual project systems. This paper presents research findings relevant to the creation of a model that helps set valid planned parameter profiles for a diverse range of systems engineering products, and to establishing how to help project users derive meaningful insight from these profiles. We present a solution using a System Dynamics (SD) model capable of generating suitable planned parameter profiles. The final validated and verified model overlays a learning “S-curve” abstraction onto a rework-cycle system archetype. The resulting SD model matched the mental models of experienced engineering managers within the company, and triangulates with validated empirical data from the literature.
    This has delivered three key benefits in practice: the development of a heuristic for understanding the flow of work within projects, as a result of the interaction between a project learning system and defect discovery; the ability to produce morphologically accurate performance baselines for metrics; and an approach that enables teams to generate benefit from the model via a problem-structuring methodology.
    Funding: Engineering and Physical Sciences Research Council (EPSRC).
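The core modelling idea, a learning "S-curve" overlaid on a rework cycle, can be sketched as a toy simulation. All parameters below (rates, error fraction, discovery rate) are invented for illustration and are not taken from the paper's validated model.

```python
# Toy sketch of an S-curve learning rate driving a rework cycle.
# All parameter values are invented; not the paper's calibrated model.
import math

def simulate(total_work=100.0, steps=80, k=0.2, midpoint=30,
             base_rate=4.0, error_fraction=0.15, discovery_rate=0.1):
    """Simulate apparent progress under learning and undiscovered rework."""
    to_do = total_work        # backlog of work remaining
    done = 0.0                # correctly completed work
    undiscovered = 0.0        # flawed work currently believed complete
    perceived = []            # apparent progress: done + undiscovered
    for t in range(steps):
        # learning "S-curve": productivity ramps up over the project
        learning = 1.0 / (1.0 + math.exp(-k * (t - midpoint)))
        rate = min(base_rate * learning, to_do)
        to_do -= rate
        flawed = rate * error_fraction      # rework generated silently
        done += rate - flawed
        undiscovered += flawed
        # defect discovery returns flawed work to the backlog
        discovered = discovery_rate * undiscovered
        undiscovered -= discovered
        to_do += discovered
        perceived.append(done + undiscovered)
    return perceived

progress = simulate()
```

Plotting `progress` against time yields an S-shaped baseline of the kind usable as a planned parameter profile; the gap between perceived progress (`done + undiscovered`) and true progress (`done`) is the undiscovered rework that traditional metrics obscure.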

    The power and vulnerability of the ‘new professional’: Web management in UK universities

    Research paper. Purpose: To explore the character of an emergent occupational role, that of the university web manager. Design/methodology/approach: The primary data were 15 semi-structured interviews conducted in 2004. These were analysed partly for factual and attitudinal data, but also for the discursive interpretative repertoires in use. Findings: The paper examines the diverse backgrounds, occupational trajectories, organisational positions, job roles and status of practitioners working in ‘web management’ in UK Higher Education. The discursive divide between the marketing and IT approaches to the web is investigated. Two case studies explore further the complexity and creativity involved in individuals’ construction of coherent and successful occupational identities. Research implications/limitations: The paper examines the position of web managers within the framework of the marginal but powerful ‘new professional’ or ‘broker’ technician. It gives a vivid insight into how the web, as a dynamic and open technology, opens up opportunities for new forms of expertise, but also explores the potential vulnerabilities of such new roles. In order to examine personal experience in depth, data were gathered for only a relatively small number of individuals. The research was also limited to the UK university sector and to those with broad responsibility for the web site of a whole institution, i.e. excluding library web managers and other web authors who work primarily on a departmental web presence. These limits suggest obvious ways in which the research could be extended. Practical implications: There are implications for how institutions support people in such roles, and for how such practitioners can support each other. Originality: There is a vast literature about the web, but little about the new work roles that have grown up around it.
