    An industrial study on the risk of software changes

    Modelling and understanding bugs has been the focus of much software engineering research to date. However, organizations are interested in more than just bugs. In particular, they are more concerned with managing risk, i.e., the likelihood that a code or design change will have a negative impact on their products and processes, regardless of whether or not it introduces a bug. In this paper, we conduct a year-long study involving more than 450 developers of a large enterprise, spanning more than 60 teams, to better understand risky changes, i.e., changes for which developers believe that additional attention is needed in the form of careful code or design review and/or more testing. Our findings show that different developers and different teams have their own criteria for determining risky changes. Using factors extracted from the changes and from the history of the files modified by the changes, we are able to accurately identify risky changes with a recall of more than 67% and a precision improvement over a random model of 87% (using developer-specific models) and 37% (using team-specific models). We find that the number of lines and chunks of code added by the change, the bugginess of the files being changed, the number of bug reports linked to a change, and the developer's experience are the best indicators of change risk. In addition, we find that when a change has many related changes, the reliability of developers in marking risky changes is negatively affected. Our findings and models are being used in practice today to manage the risk of software projects.
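    The kind of model the abstract describes can be approximated with standard tooling. Below is a minimal sketch of a per-developer risky-change classifier over the factors the authors name; the feature names, the random-forest learner, and the pandas/scikit-learn pipeline are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a risky-change classifier, assuming a pandas DataFrame
# of historical changes. Feature names and the random-forest learner are
# illustrative; the paper does not prescribe this exact pipeline.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score, precision_score

FEATURES = [
    "lines_added",         # size of the change
    "chunks_added",        # number of added code chunks
    "file_bugginess",      # prior bug count of the touched files
    "linked_bug_reports",  # bug reports linked to the change
    "dev_experience",      # prior changes by this developer
]

def evaluate_risk_model(changes: pd.DataFrame) -> dict:
    """Fit a model and compare its precision to a random baseline."""
    X_train, X_test, y_train, y_test = train_test_split(
        changes[FEATURES], changes["is_risky"], test_size=0.3, random_state=0
    )
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    # A random model's expected precision equals the base rate of risky changes.
    baseline = y_test.mean()
    precision = precision_score(y_test, pred)
    return {
        "recall": recall_score(y_test, pred),
        "precision": precision,
        "improvement_over_random": (precision - baseline) / baseline,
    }
```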

    Investigating requirements volatility during software development projects: an empirical study

    University of Technology, Sydney, Faculty of Information Technology. Changes to software requirements are inevitable during the development process. Despite advances in software engineering over the past three decades, requirements changes remain a source of project risk, particularly when businesses and technologies are evolving rapidly. This so-called requirements volatility has attracted much attention, but its extent and consequences are not well understood. The research literature lacks empirical studies investigating requirements volatility, particularly its underlying causes and consequences, and there are no effective strategies for dealing with the associated problems throughout software development. We address these issues with a long-term case study in an industrial software development setting, identifying and characterising the causes of requirements volatility, its impacts on the software development process, and the strategies used by current system development practitioners to deal with requirements volatility problems. We analysed requirements change request data from two software project releases and investigated the organisation's handling of requirements changes. Our data include the change request database, project documents, interviews, observations, and regular discussions with key informants from among the project members. We used a combination of qualitative and quantitative research techniques. We first present a critical review of the literature on requirements volatility issues, from which we derive an analytic synthesis providing the comprehensive coverage of requirements volatility phenomena that has so far been lacking. The review clarifies the terms used, the sources and adverse impacts of requirements volatility, and the strategies available to current software development teams. We also provide a detailed description of a repeatable research design that researchers and practitioners could use to conduct similar investigations of requirements volatility in any industry setting. We developed requirements change classifications from the change request data, and project members also classified requirements change requests using a card-sorting technique. The resulting categories play a vital role in the empirical analysis of several aspects of requirements volatility: its extent can be characterised by classification attributes such as the type of change (addition, deletion, or modification), the reason for the change, and the change origin. The classification is also useful in analysing the cost of requirements changes in terms of the rework or effort required. Based on an empirical analysis using the proposed classification, effective strategies were defined to match organisational needs. The organisation was able to use these results to improve its change control process and its change request form, thereby improving management and reducing the impacts of requirements volatility.
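    To make the cost analysis concrete, here is a small sketch of tallying rework effort per classification category (type of change, reason, origin); the record fields and values are hypothetical, as the thesis's actual change-request schema is not reproduced in the abstract.

```python
# Hypothetical change-request records; field names and values are invented.
from collections import defaultdict

change_requests = [
    {"type": "addition",     "reason": "scope change",     "effort_hours": 40},
    {"type": "modification", "reason": "defect in spec",   "effort_hours": 12},
    {"type": "deletion",     "reason": "descoped feature", "effort_hours": 3},
    {"type": "modification", "reason": "scope change",     "effort_hours": 25},
]

def effort_by(attribute: str) -> dict:
    """Sum rework effort per classification category (type, reason, ...)."""
    totals = defaultdict(int)
    for cr in change_requests:
        totals[cr[attribute]] += cr["effort_hours"]
    return dict(totals)

print(effort_by("type"))    # {'addition': 40, 'modification': 37, 'deletion': 3}
print(effort_by("reason"))
```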

    Application of an Agent Based Model to Study the Resource Exchanges within Eco-industrial Parks

    Industrial symbiosis (IS) emerges when diverse organizations interact to share resources with each other in order to increase their overall economic outcomes while simultaneously reducing their overall environmental impact. However, it is difficult for companies to identify waste streams and potential resources. The European Union project SHAREBOX is developing an online platform that helps companies identify each other's resources and nucleate industrial symbiosis. When such opportunities are energy related, conversion technologies are typically required, depending on the nature of the energy resource, and a mismatch between the time of supply and user needs may necessitate energy storage. This research work focused on forecasting supply and demand time series, as this data is important but typically difficult to obtain. To model demand and supply time series, the Réseau agent-based model was developed. Here the agents, namely factories (internal agents) and market buyers and market sellers (external agents), represent the players in the industrial ecosystem. The agents have dynamic behaviour (e.g. varying price) and heterogeneous characteristics (e.g. production method), and they combine complex decision rationale with process models (here simplified as input-output models maintaining the material and energy balance). The decision strategies implemented in the model are: random seller selection and best-price seller selection, and random price changes and risk-based price changes. The model was demonstrated on three case studies of increasing complexity. Case study one demonstrated random decision strategies on a single-input single-output industrial ecosystem, validating the software concept. Case study two evaluated all combinations of decision strategies in an industrial ecosystem with multiple-input multiple-output factories. This showed that the risk-based seller decision strategy developed in this work provides significantly more realistic demand and supply time series, independent of whether the buyer chooses the seller randomly or based on best price. For the third case study, Réseau was extended with multiple-period contracts between factories within the ecosystem, and we compared scenarios with and without such contracts. This showed that the industrial ecosystem is more stable, and that the Symbiosis Relationship Index (the ratio between internal and external transactions) increases significantly, when long-duration contracts are available. To summarise, I created Réseau, a demand and supply simulation tool, to model the manufacturing processes and the decision rationale of players (agents) in the industrial ecosystem. The three case studies validate the software concept, demonstrate that the risk-based seller decision criteria developed in this work generate the most realistic supply and demand time series, and show that contract-based relationships between factories significantly increase the duration of industrial symbiosis. The output of Réseau is used in SHAREBOX to support the identification of feasible industrial symbiosis projects.
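    A toy sketch of the core loop of such an agent-based market illustrates the best-price seller-selection strategy and the Symbiosis Relationship Index (internal/external transaction ratio) defined above; all agent parameters are invented, and this is not the Réseau implementation.

```python
import random

# Toy agent-based market for one resource. Agent parameters are made up;
# this mimics the best-price seller-selection strategy, not Réseau itself.
class Agent:
    def __init__(self, name, internal, supply, price):
        self.name, self.internal = name, internal   # internal = inside the park
        self.supply, self.price = supply, price

def simulate(buy_demand, sellers, periods=100, seed=0):
    rng = random.Random(seed)
    internal_tx = external_tx = 0
    for _ in range(periods):
        for s in sellers:                    # random price-change strategy
            s.price *= 1 + rng.uniform(-0.05, 0.05)
        remaining = buy_demand
        for s in sorted(sellers, key=lambda a: a.price):   # best price first
            bought = min(remaining, s.supply)
            remaining -= bought
            if s.internal:
                internal_tx += bought
            else:
                external_tx += bought
            if remaining <= 0:
                break
    # Symbiosis Relationship Index: internal over external transactions.
    return internal_tx / external_tx if external_tx else float("inf")

sellers = [Agent("factory_A", True, 50, 10.0), Agent("market", False, 1000, 12.0)]
print("Symbiosis Relationship Index:", simulate(buy_demand=80, sellers=sellers))
```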

    Involving External Stakeholders in Project Courses

    Problem: The involvement of external stakeholders in capstone projects and project courses is desirable due to its potential positive effects on the students. Capstone projects particularly profit from the inclusion of an industrial partner that makes the project relevant and helps students acquire professional skills. In addition, there is an increasing push towards education that is aligned with industry and incorporates industrial partners. However, the involvement of external stakeholders in teaching moments can create friction and could, in the worst case, lead to frustration of all involved parties. Contribution: We developed a model for analysing the involvement of external stakeholders in university courses, both in a retrospective fashion, to gain insights from past course instances, and in a constructive fashion, to plan the involvement of external stakeholders. Key Concepts: The conceptual model and the accompanying guideline guide teachers in their analysis of stakeholder involvement. The model comprises several activities (define, execute, and evaluate the collaboration), and the guideline provides questions that the teachers should answer for each of these activities. In constructive use, the model allows teachers to define an action plan based on an analysis of potential stakeholders and the pedagogical objectives. In retrospective use, the model allows teachers to identify issues that appeared during the project and their underlying causes. Drawing on ideas of the reflective practitioner, the model places an emphasis on reflection on, and interpretation of, the observations made by the teacher and the other groups involved in the courses. Key Lessons: Applying the model retrospectively to a total of eight courses shows that it is possible to reveal hitherto implicit risks and assumptions and to gain a better insight into the interaction... Comment: Abstract shortened since arxiv.org limits the length of abstracts; see the paper/PDF for the full abstract. Paper is forthcoming, accepted August 2017. ArXiv version 2 corrects a misspelled author name.

    On Evidence-based Risk Management in Requirements Engineering

    Background: The sensitivity of Requirements Engineering (RE) to its context makes it difficult to efficiently control problems therein, hampering effective risk management that would allow for early corrective or even preventive measures. Problem: There is still little empirical knowledge about context-specific RE phenomena, which would be necessary for effective context-sensitive risk management in RE. Goal: We propose and validate an evidence-based approach to assessing risks in RE using cross-company data about problems, causes, and effects. Research Method: We use survey data from 228 companies and build a probabilistic network that supports the forecast of context-specific RE phenomena. We implement this approach using spreadsheets to support a lightweight risk assessment. Results: Our results from an initial validation in 6 companies strengthen our confidence that the approach increases awareness of individual risk factors in RE, and the feedback further allows for disseminating our approach into practice. Comment: 20 pages, submitted to 10th Software Quality Days conference, 201
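    The building block of such an evidence-based forecast is a conditional probability estimated from cross-company survey rows. The sketch below shows this for one invented context variable and one invented RE problem; the paper's actual probabilistic network and data are richer than this.

```python
# Invented survey rows: (context_factor, problem_observed). A real
# cross-company dataset would have many context variables and problems.
survey = [
    ("agile", True), ("agile", False), ("agile", True),
    ("plan_driven", True), ("plan_driven", True), ("plan_driven", True),
]

def p_problem_given_context(context: str) -> float:
    """Estimate P(problem | context), the building block of the risk network."""
    rows = [observed for ctx, observed in survey if ctx == context]
    return sum(rows) / len(rows) if rows else 0.0

for ctx in ("agile", "plan_driven"):
    print(f"P(incomplete requirements | {ctx}) = {p_problem_given_context(ctx):.2f}")
```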

    Toxic release dispersion modelling with PHAST: parametric sensitivity analysis

    Recent changes to French legislation concerning the prevention of technological and natural risk require industrial sites to calculate safety perimeters for different accident scenarios, based on a detailed probabilistic risk assessment. It is important that the safety perimeters resulting from risk assessment studies are based on the best scientific knowledge available, and that the level of uncertainty is minimised. A significant contribution to the calculation of safety perimeters comes from the modelling of atmospheric dispersion, particularly of accidental releases of toxic products. One of the most widely used tools for dispersion modelling in several European countries is PHAST™ [1]. This software application is quite flexible, allowing the user to alter values for a wide range of model parameters. Users of the software have found that simulation results may depend quite strongly on the values chosen for some of these parameters. While this flexibility is useful, it can lead different users to calculate effect distances that vary considerably, even when studying the same scenario. In order to better understand the influence of these input parameters, we have carried out a parametric sensitivity study of the PHAST dispersion models. This allows us to obtain global sensitivity indices for the input parameters, which quantify the level of influence of each parameter on the output of the model, as well as their interactions. The FAST (Fourier Amplitude Sensitivity Test) method that we have applied (using the SimLab software tool [2]) provides both first-order indices (which characterise a parameter's influence on the model output when it varies in isolation) and total indices (which characterise a parameter's influence including its joint interactions with the other input parameters). We present results of this analysis on a number of toxic gas dispersion scenarios. The analysis considers parameters related to the physical release scenario (release rate, release height, etc.), to weather conditions (wind speed, stability class, atmospheric temperature, etc.), and to the numerical resolution (step size, etc.). We compare the results of several sensitivity analysis methods, both local one-at-a-time methods and global methods. We discuss the importance of selecting an appropriate model output value when studying the model's sensitivity (output measures considered include the concentration of the released gas at a long distance, the concentration at a short distance, and the maximal distance at which a specified concentration is attained). Our experimental results assume that the input parameters to the dispersion model are independent. However, correlations exist between several of the input parameters that we have analysed, such as wind speed and atmospheric stability class, and we discuss various approaches to calculating sensitivity indices that take this correlation into account. [1] DNV Software, London, UK. [2] Saltelli, A., Chan, K., Scott, E. M., Sensitivity Analysis, 2004, John Wiley & Sons.
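    For readers who want to reproduce the FAST computation without SimLab, the open-source SALib library exposes the same first-order and total indices; the sketch below applies it to a toy analytic stand-in for a PHAST run, since PHAST itself is commercial. The input names, bounds, and surrogate function are illustrative only.

```python
import numpy as np
from SALib.sample import fast_sampler
from SALib.analyze import fast

# Three illustrative PHAST-like inputs with made-up bounds.
problem = {
    "num_vars": 3,
    "names": ["release_rate", "wind_speed", "release_height"],
    "bounds": [[0.5, 5.0], [1.0, 10.0], [1.0, 20.0]],
}

def toy_dispersion(x: np.ndarray) -> np.ndarray:
    """Stand-in for a PHAST run: concentration at a fixed downwind distance."""
    rate, wind, height = x[:, 0], x[:, 1], x[:, 2]
    return rate / (wind * (1.0 + 0.1 * height) ** 2)

X = fast_sampler.sample(problem, 1000)          # FAST sampling design
Si = fast.analyze(problem, toy_dispersion(X))   # first-order and total indices
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name}: first-order={s1:.2f}, total={st:.2f}")
```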

    Mapping environmental injustices: pitfalls and potential of geographic information systems in assessing environmental health and equity.

    Geographic Information Systems (GIS) have been used increasingly to map instances of environmental injustice, the disproportionate exposure of certain populations to environmental hazards. Some of the technical and analytic difficulties of mapping environmental injustice are outlined in this article, along with suggestions for using GIS to better assess and predict environmental health and equity. I examine 13 GIS-based environmental equity studies conducted within the past decade and use a study of noxious land use locations in the Bronx, New York, to illustrate and evaluate the differences between two common methods of determining exposure extent and the characteristics of proximate populations. Unresolved issues in mapping environmental equity and health include the lack of comprehensive hazards databases; the inadequacy of current exposure indices; the need to develop realistic methodologies for determining the geographic extent of exposure and the characteristics of the affected populations; and the paucity and insufficiency of health assessment data. GIS have great potential to help us understand the spatial relationship between pollution and health. Refinements in exposure indices, the use of dispersion modeling and advanced proximity analysis, the application of neighborhood-scale analysis, and the consideration of other factors such as zoning and planning policies will enable more conclusive findings. The environmental equity studies reviewed in this article found a disproportionate environmental burden based on race and/or income. It is critical now to demonstrate a correspondence between environmental burdens and adverse health impacts, showing the disproportionate effects of pollution rather than just the disproportionate distribution of pollution sources.
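    One of the two common exposure methods alluded to above is a simple buffer (proximity) analysis. The sketch below counts the population of census-tract centroids within a fixed radius of hazard sites; the coordinates and populations are invented, and a real GIS study would use polygon buffers and areal interpolation rather than centroid-to-point distances.

```python
import math

# Invented hazard sites (lat, lon) and tract centroids (lat, lon, population).
hazards = [(40.82, -73.90), (40.85, -73.88)]
tracts = [(40.821, -73.901, 5200), (40.86, -73.87, 4100), (40.70, -73.95, 6000)]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(a))

def population_within(radius_km: float) -> int:
    """Total population of tracts whose centroid lies within the buffer."""
    return sum(
        pop for lat, lon, pop in tracts
        if any(haversine_km(lat, lon, hl, ho) <= radius_km for hl, ho in hazards)
    )

print("Population within 1 km of a hazard:", population_within(1.0))
```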

    Surfacing ERP exploitation risks through a risk ontology

    Purpose – The purpose of this paper is to develop a risk identification checklist to help user companies surface, organise and manage potential risks associated with the post-adoption of Enterprise Resource Planning (ERP) systems. Design/methodology/approach – A desktop study, based on a critical literature review, was conducted by the researchers. The review covered IS and business research papers, books, case studies, and theoretical articles. Findings – By systematically and critically analysing and synthesising the literature reviewed, the researchers identified and proposed a total of 40 ERP post-implementation risks related to diverse operational, analytical, organisation-wide and technical aspects. A risk ontology was subsequently established to highlight these ERP risks and to present their potential causal relationships. Research limitations/implications – For researchers, the established ERP risk ontology represents a starting point for further research and provides early insights into a field that will become increasingly important as more and more companies progress from implementation to exploitation of ERP systems. Practical implications – For practitioners, the risk ontology is an important tool and checklist to support risk identification, prevention, management and control, as well as to facilitate strategic planning and decision making. Originality/value – There is a scarcity of studies focusing on ERP post-implementation, in contrast with an overabundance of studies focusing on system implementation and project management aspects. This paper aims to fill this significant research gap by presenting a risk ontology for ERP post-adoption. It represents a first attempt at producing a comprehensive model in its area; no other such models could be found in the literature reviewed.
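    One way to operationalise such a risk ontology is as a directed causal graph that can be queried for downstream effects. The sketch below uses invented risk names spanning the four aspects the paper names (operational, analytical, organisation-wide, technical); it does not reproduce the paper's 40 risks.

```python
# Invented ERP post-implementation risks as a directed causal graph.
# Edges point from cause to effect, mirroring the ontology's causal links.
causes = {
    "insufficient_user_training": ["operational_data_entry_errors"],
    "operational_data_entry_errors": ["unreliable_analytical_reports"],
    "unreliable_analytical_reports": ["poor_strategic_decisions"],
    "vendor_support_withdrawn": ["unpatched_technical_defects"],
}

def downstream_effects(risk: str, seen=None) -> set:
    """All risks transitively caused by the given risk (depth-first walk)."""
    seen = seen if seen is not None else set()
    for effect in causes.get(risk, []):
        if effect not in seen:
            seen.add(effect)
            downstream_effects(effect, seen)
    return seen

print(downstream_effects("insufficient_user_training"))
# -> the three analytical/organisation-wide effects above (order may vary)
```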