
    Predicting and Evaluating Software Model Growth in the Automotive Industry

    The size of a software artifact influences software quality and impacts the development process. In industry, when software size exceeds certain thresholds, memory errors accumulate and development tools may no longer cope, resulting in lengthy program start-up times, failing builds, or memory problems at unpredictable times. Foreseeing critical growth in software modules is therefore in high demand in industrial practice. Predicting when an artifact's size will reach the level at which maintenance is needed prevents unexpected effort and helps spot problematic artifacts before they become critical. Although the number of prediction approaches in the literature is vast, it is unclear how well they fit the prerequisites and expectations of practice. In this paper, we perform an industrial case study at an automotive manufacturer to explore the applicability and usability of prediction approaches in practice. As a first step, we collect the most relevant prediction approaches from the literature, covering both statistical and machine-learning techniques. We then elicit practitioners' expectations of such predictions using a survey and stakeholder workshops. In parallel, we measure the size of 48 software artifacts by mining four years of revision history, yielding 4,547 data points. Finally, we assess the applicability of state-of-the-art prediction approaches by systematically analyzing how well they fulfill the practitioners' expectations on the collected data. Our main contribution is a comparison of commonly used prediction approaches in a real-world industrial setting that takes stakeholder expectations into account. We show that the approaches differ significantly in prediction accuracy and that the statistical approaches fit our data best.
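
    A statistical approach of the kind the abstract alludes to can be as simple as fitting a trend to an artifact's size history and extrapolating to a critical threshold. The sketch below illustrates that idea only; the size series, threshold, and choice of a linear model are illustrative assumptions, not the paper's actual data or method.

```python
# A minimal sketch, assuming a linear growth trend: extrapolate an
# artifact's size history to estimate when it crosses a tool limit.
# The revision history and CRITICAL_SIZE below are hypothetical.
import numpy as np

# Hypothetical history: (days since first revision, model size in elements)
days = np.array([0, 90, 180, 270, 360, 450, 540], dtype=float)
size = np.array([1200, 1350, 1490, 1640, 1800, 1930, 2080], dtype=float)

# Fit a first-degree polynomial (linear trend) to the size history.
slope, intercept = np.polyfit(days, size, deg=1)

CRITICAL_SIZE = 3000  # assumed tool limit; the real threshold is project-specific

if slope > 0:
    # Days from the last observation until the fitted trend hits the threshold.
    fitted_now = slope * days[-1] + intercept
    days_until_critical = (CRITICAL_SIZE - fitted_now) / slope
    print(f"Projected to reach {CRITICAL_SIZE} elements in ~{days_until_critical:.0f} days")
else:
    print("No growth trend detected; threshold not projected to be reached")
```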

    Making Software Cost Data Available for Meta-Analysis

    In this paper we consider the increasing need for meta-analysis within empirical software engineering. We note, however, that a necessary precondition for such analysis is to have both the results in an appropriate format and sufficient contextual information to avoid misleading inferences. We consider the implications for the field of software project effort estimation and show that, for a sample of 12 seemingly similar published studies, the results are difficult to compare, let alone combine, because of differing reporting conventions. We argue that a reporting protocol is required and make some suggestions as to what it should contain.
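
    One way to picture the kind of reporting protocol argued for here is a structured record that every primary study would populate, so results can be pooled mechanically. The sketch below is hypothetical; the field names and example values are assumptions for illustration, not the paper's actual suggestions.

```python
# A hypothetical record format for reporting effort-estimation results
# in a meta-analysis-friendly way; all field names are illustrative.
from dataclasses import dataclass

@dataclass
class EffortEstimationResult:
    study_id: str              # citation key of the primary study
    dataset: str               # e.g. "Desharnais" (illustrative)
    method: str                # estimation technique evaluated
    n_projects: int            # sample size behind the accuracy figure
    validation: str            # e.g. "leave-one-out", "holdout"
    accuracy_metric: str       # e.g. "MMRE", "Pred(25)"
    accuracy_value: float      # reported value of that metric
    raw_residuals_available: bool = False  # enables re-analysis and pooling
    notes: str = ""            # contextual information to avoid misreading

# Example record (values are invented for illustration only).
record = EffortEstimationResult(
    study_id="example2001", dataset="Desharnais", method="stepwise regression",
    n_projects=77, validation="leave-one-out",
    accuracy_metric="MMRE", accuracy_value=0.42,
)
print(record)
```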

    Quantile Regression Estimates of Confidence Intervals for WASDE Price Forecasts

    This study uses quantile regressions to estimate historical forecast error distributions for WASDE forecasts of corn, soybean, and wheat prices, and then computes confidence limits for the forecasts based on the empirical distributions. Quantile regressions with fit errors expressed as a function of forecast lead time are consistent with theoretical forecast variance expressions while avoiding assumptions of normality and optimality. Based on out-of-sample accuracy tests over 1995/96–2006/07, quantile regression methods produced intervals consistent with the target confidence level. Overall, this study demonstrates that empirical approaches may be used to construct accurate confidence intervals for WASDE corn, soybean, and wheat price forecasts.
    Keywords: commodity, evaluating forecasts, government forecasting, judgmental forecasting, prediction intervals, price forecasting, Crop Production/Industries, Demand and Price Analysis.
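
    The core technique, regressing empirical forecast-error quantiles on forecast lead time and reading interval limits off the fitted quantiles, can be sketched as follows. The synthetic error data, quantile levels, and use of statsmodels here are illustrative assumptions, not the study's actual data or setup.

```python
# A minimal sketch of quantile-regression confidence intervals, assuming
# synthetic forecast errors whose spread grows with lead time (as
# forecast-variance theory suggests). Data and quantiles are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical history: lead time in months and (realized - forecast) errors.
lead = rng.integers(1, 13, size=300).astype(float)
errors = rng.normal(0.0, 0.02 * np.sqrt(lead))

X = sm.add_constant(lead)  # model error quantiles as a function of lead time

# Fit the 5th and 95th percentile regressions -> an empirical 90% interval.
lower = sm.QuantReg(errors, X).fit(q=0.05)
upper = sm.QuantReg(errors, X).fit(q=0.95)

new_lead = sm.add_constant(np.array([1.0, 6.0, 12.0]), has_constant="add")
print("90% error bands by lead time:")
for lt, lo, hi in zip([1, 6, 12], lower.predict(new_lead), upper.predict(new_lead)):
    print(f"  {lt:>2} months ahead: [{lo:+.3f}, {hi:+.3f}]")
```

    Adding the fitted band to a point forecast at the matching lead time yields the confidence limits; the widening of the band with lead time is exactly what the lead-time regressors capture.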