    Is It Safe to Uplift This Patch? An Empirical Study on Mozilla Firefox

    Full text link
    In rapid release development processes, patches that fix critical issues or implement high-value features are often promoted directly from the development channel to a stabilization channel, potentially skipping one or more stabilization channels. This practice is called patch uplift. Patch uplift is risky, because patches that are rushed through the stabilization phase can end up introducing regressions in the code. This paper examines patch uplift operations at Mozilla, with the aim of identifying the characteristics of uplifted patches that introduce regressions. Through statistical and manual analyses, we quantitatively and qualitatively investigate the reasons behind patch uplift decisions and the characteristics of uplifted patches that introduced regressions. Additionally, we interviewed three Mozilla release managers to understand the organizational factors that affect patch uplift decisions and outcomes. Results show that most patches are uplifted because of wrong functionality or a crash. Uplifted patches that lead to faults tend to be larger, and most of the faults are due to semantic or memory errors in the patches. Also, release managers are more inclined to accept patch uplift requests that concern certain specific components and/or that are submitted by certain specific developers. Comment: In proceedings of the 33rd International Conference on Software Maintenance and Evolution (ICSME 2017)
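
    As a rough illustration of the statistical analysis this abstract alludes to, the sketch below fits a logistic model relating patch size to regression risk. Everything in it is an assumption for illustration: the abstract does not name logistic regression, and the toy records stand in for the Mozilla data used in the study.

        # Minimal sketch (hypothetical data): does patch size predict whether
        # an uplifted patch introduced a regression?
        import pandas as pd
        from sklearn.linear_model import LogisticRegression

        # Toy records; the real study mined Mozilla patch and bug data.
        df = pd.DataFrame({
            "size_loc":          [12, 300, 25, 850, 40, 500],  # changed lines
            "caused_regression": [0,  1,   0,  1,   0,  1],
        })

        model = LogisticRegression().fit(df[["size_loc"]], df["caused_regression"])
        # A positive coefficient means larger patches carry higher regression
        # risk, in line with the finding that fault-inducing uplifts are larger.
        print("coefficient on patch size:", float(model.coef_[0][0]))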

    Integrating automated support for a software management cycle into the TAME system

    Get PDF
    Software managers are interested in the quantitative management of software quality, cost and progress. An integrated software management methodology, which can be applied throughout the software life cycle for any number of purposes, is required. The TAME (Tailoring A Measurement Environment) methodology is based on the improvement paradigm and the goal/question/metric (GQM) paradigm. This methodology helps generate a software engineering process and measurement environment based on the project characteristics. The SQMAR (software quality measurement and assurance technology) is a software quality metric system and methodology applied to the development processes. It is based on the feed-forward control principle. Quality target setting is carried out before the plan-do-check-action activities are performed. These methodologies are integrated to realize goal-oriented measurement, process control and visual management. A metric setting procedure based on the GQM paradigm, a management system called the software management cycle (SMC), and its application to a case study based on NASA/SEL data are discussed. The expected effects of SMC are quality improvement, managerial cost reduction, accumulation and reuse of experience, and a highly visual management reporting system.
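
    The GQM paradigm mentioned above derives metrics top-down: each goal is refined into questions, and each question into concrete metrics. A minimal sketch of that structure follows; the goal, questions, and metrics shown are invented examples, not taken from the TAME or SQMAR material.

        # GQM sketch: goals -> questions -> metrics. Content is illustrative only.
        from dataclasses import dataclass, field

        @dataclass
        class Question:
            text: str
            metrics: list[str] = field(default_factory=list)

        @dataclass
        class Goal:
            purpose: str
            questions: list[Question] = field(default_factory=list)

        goal = Goal(
            purpose="Improve defect detection during code review",
            questions=[
                Question("How many defects escape review?",
                         ["post-release defect count", "review escape rate"]),
                Question("How much effort does review cost?",
                         ["review hours per KLOC"]),
            ],
        )

        for q in goal.questions:
            print(q.text, "->", ", ".join(q.metrics))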

    Measures for assessing the impact of ICT use on attainment

    Get PDF
    "Building on ImpaCT2, this study aims to design a measure or measures capable of tracking 'snapshot' data, such that it will be possible to monitor the development of ICT use to support attainment" -- page 4

    Software cost estimation

    Get PDF
    The paper gives an overview of the state of the art of software cost estimation (SCE). The main questions to be answered in the paper are: (1) What are the reasons for overruns of budgets and planned durations? (2) What are the prerequisites for estimating? (3) How can software development effort be estimated? (4) What can software project management expect from SCE models, how accurate are estimations made using these kinds of models, and what are the pros and cons of cost estimation models?
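
    To make question (3) concrete: many classic SCE models are algorithmic, taking the form effort = a * size^b. The sketch below evaluates a basic COCOMO-style curve using the published "organic mode" coefficients; the 32 KLOC project size is an invented example input, and real estimates would also apply cost drivers that the basic model omits.

        # Basic COCOMO-style estimate (organic-mode coefficients a=2.4, b=1.05).
        def cocomo_organic(kloc: float) -> tuple[float, float]:
            effort = 2.4 * kloc ** 1.05        # person-months
            duration = 2.5 * effort ** 0.38    # calendar months
            return effort, duration

        effort, duration = cocomo_organic(32.0)   # hypothetical 32 KLOC project
        print(f"effort ~ {effort:.0f} person-months over ~ {duration:.0f} months")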

    On Evidence-based Risk Management in Requirements Engineering

    Full text link
    Background: The sensitivity of Requirements Engineering (RE) to the context makes it difficult to efficiently control problems therein, thus hampering an effective risk management that would allow for early corrective or even preventive measures. Problem: There is still little empirical knowledge about context-specific RE phenomena, which would be necessary for an effective context-sensitive risk management in RE. Goal: We propose and validate an evidence-based approach to assess risks in RE using cross-company data about problems, causes and effects. Research Method: We use survey data from 228 companies and build a probabilistic network that supports the forecast of context-specific RE phenomena. We implement this approach using spreadsheets to support a lightweight risk assessment. Results: Our results from an initial validation in 6 companies strengthen our confidence that the approach increases the awareness of individual risk factors in RE, and the feedback further allows for disseminating our approach into practice. Comment: 20 pages, submitted to 10th Software Quality Days conference, 201
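
    The heart of the approach is a probabilistic network over problems, causes, and effects learned from cross-company data. The sketch below shows the simplest version of that idea, estimating P(effect | problem) from co-occurrence counts; the survey records and category names are invented placeholders, not the data from the 228 surveyed companies.

        # Evidence-based forecast sketch: P(effect | problem) from survey counts.
        # Records below are invented placeholders.
        from collections import Counter

        surveys = [
            {"problem": "incomplete requirements", "effect": "rework"},
            {"problem": "incomplete requirements", "effect": "rework"},
            {"problem": "incomplete requirements", "effect": "delay"},
            {"problem": "moving targets",          "effect": "delay"},
        ]

        def forecast(problem: str) -> dict[str, float]:
            effects = Counter(r["effect"] for r in surveys if r["problem"] == problem)
            total = sum(effects.values())
            return {effect: n / total for effect, n in effects.items()}

        print(forecast("incomplete requirements"))  # rework ~0.67, delay ~0.33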

    European White Book on Real-Time Power Hardware in the Loop Testing: DERlab Report No. R-005.0

    Get PDF
    The European White Book on Real-Time Power Hardware-in-the-Loop testing is intended to serve as a reference document on the future of testing of electrical power equipment, with specific focus on the emerging hardware-in-the-loop activities and application thereof within testing facilities and procedures. It will provide an outlook of how this powerful tool can be utilised to support the development, testing and validation of specifically DER equipment. It aims to report on international experience gained thus far and provides case studies on developments and specific technical issues, such as the hardware/software interface. This white book complements the already existing series of DERlab European white books, covering topics such as grid-inverters and grid-connected storage.

    Community-Based Production of Open Source Software: What Do We Know About the Developers Who Participate?

    Get PDF
    This paper seeks to close an empirical gap regarding the motivations, personal attributes and behavioral patterns among free/libre and open source (FLOSS) developers, especially those involved in community-based production, and to relate its findings to the existing literature and to future directions for research. Respondents to an extensive web survey (FLOSS-US 2003) are classified, on the basis of their answers to questions about their reasons for working on FLOSS, into distinct "motivational profiles" by hierarchical cluster analysis. Over half of them are also matched to projects of known membership sizes, revealing that although some members from each of the clusters are present in the small, medium and large ranges of the distribution of project sizes, the mixing fractions for the large and the very small project ranges are statistically different. Among developers who changed projects, there is a discernible flow from the very small towards the large projects, some of which is motivated by individuals seeking to improve their programming skills. It is found that the profile of early motivation, along with other individual attributes, significantly affects individual developers' selections of projects from different regions of the size range. Keywords: open source software, FLOSS project, community-based peer production, population heterogeneity, micro-motives, motivational profiles, web-cast surveys, hierarchical cluster analysis
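
    For readers unfamiliar with the clustering step, the sketch below runs a hierarchical cluster analysis of the kind the abstract describes on a toy motivation matrix. The ratings and the choice of Ward linkage are illustrative assumptions, not details of the FLOSS-US 2003 analysis.

        # Hierarchical clustering of respondents into "motivational profiles".
        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage

        # Rows = respondents; columns = toy motivation ratings on a 0-5 scale
        # (e.g. skill-building, fun, community, career).
        ratings = np.array([
            [5, 4, 1, 0],
            [5, 5, 0, 1],
            [1, 0, 5, 4],
            [0, 1, 4, 5],
        ])

        Z = linkage(ratings, method="ward")             # agglomerative merge tree
        profiles = fcluster(Z, t=2, criterion="maxclust")
        print(profiles)                                 # e.g. [1 1 2 2]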

    Entrepreneurship, Entry and Exit in Creative Industries: an explorative Survey

    Get PDF
    Series: Creative Industries in Vienna: Development, Dynamics and Potential

    Are Delayed Issues Harder to Resolve? Revisiting Cost-to-Fix of Defects throughout the Lifecycle

    Full text link
    Many practitioners and academics believe in a delayed issue effect (DIE); i.e. the longer an issue lingers in the system, the more effort it requires to resolve. This belief is often used to justify major investments in new development processes that promise to retire more issues sooner. This paper tests for the delayed issue effect in 171 software projects conducted around the world in the period 2006-2014. To the best of our knowledge, this is the largest study yet published on this effect. We found no evidence for the delayed issue effect; i.e. the effort to resolve issues in a later phase was not consistently or substantially greater than when issues were resolved soon after their introduction. This paper documents the above study and explores reasons for this mismatch between this common rule of thumb and empirical data. In summary, DIE is not some constant across all projects. Rather, DIE might be a historical relic that occurs intermittently only in certain kinds of projects. This is a significant result, since it predicts that new development processes that promise to retire more issues faster will not have a guaranteed return on investment (depending on the context where applied), and that a long-held truth in software engineering should not be considered a global truism. Comment: 31 pages. Accepted with minor revisions to Journal of Empirical Software Engineering. Keywords: software economics, phase delay, cost to fix
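
    A minimal way to run the kind of test the abstract describes: group issues by how long they lingered, then compare resolution effort across the groups. The issue records below are invented; the actual study drew on data from 171 projects.

        # Delayed-issue-effect check: does resolution effort grow with the
        # number of phases an issue lingered? Records are invented examples.
        from statistics import median

        issues = [
            {"phase_delay": 0, "effort_hours": 3.0},
            {"phase_delay": 0, "effort_hours": 5.0},
            {"phase_delay": 2, "effort_hours": 4.0},
            {"phase_delay": 2, "effort_hours": 6.0},
            {"phase_delay": 4, "effort_hours": 5.0},
        ]

        # DIE would predict a steep, consistent rise down this table; the
        # paper reports no such consistent pattern across its projects.
        for delay in sorted({i["phase_delay"] for i in issues}):
            efforts = [i["effort_hours"] for i in issues if i["phase_delay"] == delay]
            print(f"delay {delay} phases: median effort {median(efforts):.1f} h")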