
    A Systematic Mapping of Factors Affecting Accuracy of Software Development Effort Estimation

    Software projects often do not meet their scheduling and budgeting targets. Inaccurate estimates are often responsible for this mismatch. This study investigates extant research on factors that affect accuracy of software development effort estimation. The purpose is to synthesize existing knowledge, propose directions for future research, and improve estimation accuracy in practice. A systematic mapping study (a comprehensive review of existing research) is conducted to identify such factors and their impact on estimation accuracy. Thirty-two factors assigned to four categories (estimation process, estimator’s characteristics, project to be estimated, and external context) are identified in a variety of research studies. Although the significant impact of several factors has been shown, results are limited by the lack of insight into the extent of these impacts. Our results imply a shift in research focus and design to gather more in-depth insights. Moreover, our results emphasize the need to argue for specific design decisions to enable a better understanding of possible influences of the study design on the credibility of the results. For software developers, our results provide a useful map to check the assumptions that undergird their estimates, to build comprehensive experience databases, and to adequately staff design projects

    Software project economics: A roadmap

    The objective of this paper is to consider research progress in the field of software project economics with a view to identifying important challenges and promising research directions. I argue that this is an important sub-discipline since it underpins any cost-benefit analysis used to justify the resourcing, or otherwise, of a software project. To accomplish this I conducted a bibliometric analysis of peer reviewed research articles to identify major areas of activity. My results indicate that the primary goal of more accurate cost prediction systems remains largely unachieved. However, there are a number of new and promising avenues of research, including how to combine results from primary studies, how to integrate multiple predictions, and how to place greater emphasis on the human aspects of prediction tasks. I conclude that the field is likely to remain very challenging due to the people-centric nature of software engineering, since it is in essence a design task. Nevertheless the need for good economic models will grow rather than diminish as software becomes increasingly ubiquitous

    Assimilating Non-Probabilistic Assessments of the Estimation of Uncertainty Bias in Expert Judgment Elicitation Using an Evidence Based Approach in High Consequence Conceptual Designs

    One of the major challenges in conceptual designs of complex systems is the identification of uncertainty embedded in the information due to lack of historic data. This becomes of increased concern especially in high-risk industries. This document reports a developed methodology that allows for the cognitive bias, estimation of uncertainty, to be elucidated to improve the quality of elicited data. It consists of a comprehensive literature review that begins by defining a 'High Consequence Conceptual Engineering Environment' and identifies the high-risk industries in which these environments are found. It proceeds with a discussion that differentiates risk and uncertainty in decision-making in these environments. An argument was built around the identified epistemic category of uncertainty, the impact on hard data for decision-making, and from whom we obtain this data. The review shifts to defining and selecting the experts, the elicitation process in terms of the components, the process phases and steps involved, and an examination of a probabilistic and a fuzzy example. This sets the stage for this methodology that uses evidence theory for the mathematical analysis after the data is elicited using a tailored elicitation process. Yager's combination rule is used to combine evidence and fully recognize the ignorance without ignoring available information. Engineering and management teams from NASA Langley Research Center were the population from which the experts for this study were identified. NASA officials were interested in obtaining uncertainty estimates, and a comparison of these estimates, associated with their Crew Launch Vehicle (CLV) designs: the existing Exploration Systems Architecture Study Crew Launch Vehicle (ESAS CLV) and the Parallel-Staged Crew Launch Vehicle (P-S CLV) which is currently being worked. This evidence-based approach identified that the estimation of cost parameters' uncertainty is not specifically over or underestimated in High Consequence Conceptual Engineering Environments; rather, there is more uncertainty present than what is being anticipated. From the perspective of maturing designs, it was concluded that the range of cost parameters' uncertainty at different error-state-values was interchangeably larger or smaller when compared to each other even as the design matures
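    As a concrete illustration of the combination step named in this abstract, the following minimal Python sketch applies Yager's rule to two mass functions. The frame of discernment, focal elements, and numbers are hypothetical stand-ins, not values from the study; the point is only that conflicting mass is assigned to the whole frame (ignorance) rather than normalized away, as it would be under Dempster's rule.

```python
from itertools import product

def yager_combine(m1, m2, frame):
    """Combine two mass functions with Yager's rule.

    Unlike Dempster's rule, conflicting mass is not renormalized away;
    it is added to the frame of discernment (total ignorance).
    Masses are dicts mapping frozenset focal elements to belief mass.
    """
    theta = frozenset(frame)
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # disagreement between the two sources
    # Yager: assign the conflicting mass to the whole frame (ignorance)
    combined[theta] = combined.get(theta, 0.0) + conflict
    return combined

# Illustrative example: two experts assess whether a cost estimate is
# "under", "over", or "about right" (hypothetical numbers).
frame = {"under", "over", "about_right"}
expert1 = {frozenset({"under"}): 0.6, frozenset(frame): 0.4}
expert2 = {frozenset({"over"}): 0.5, frozenset(frame): 0.5}
print(yager_combine(expert1, expert2, frame))
```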

    Integration of Statistical Methods and Judgment for Time Series

    We consider how judgment and statistical methods should be integrated for time-series forecasting. Our review of published empirical research identified 47 studies, all but four published since 1985. Five procedures were identified: revising judgment; combining forecasts; revising extrapolations; rule-based forecasting; and econometric forecasting. This literature suggests that integration generally improves accuracy when the experts have domain knowledge and when significant trends are involved. Integration is valuable to the extent that judgments are used as inputs to the statistical methods, that they contain additional relevant information, and that the integration scheme is well structured. The choice of an integration approach can have a substantial impact on the accuracy of the resulting forecasts. Integration harms accuracy when judgment is biased or its use is unstructured. Equal-weights combining should be regarded as the benchmark and it is especially appropriate where series have high uncertainty or high instability. When the historical data involve high uncertainty or high instability, we recommend revising judgment, revising extrapolations, or combining. When good domain knowledge is available for the future as well as for the past, we recommend rule-based forecasting or econometric methods.
    Keywords: statistical methods, statistics, time series, forecasting, empirical research
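    The benchmark integration scheme mentioned in this abstract, equal-weights combining of a judgmental forecast with a statistical extrapolation, is simple enough to sketch directly. The series, the trend extrapolation, and the judgmental forecast below are illustrative inventions, not data from the reviewed studies.

```python
import numpy as np

# Toy monthly series (illustrative numbers only)
history = np.array([100, 104, 109, 115, 118, 124], dtype=float)

# Statistical extrapolation: naive linear trend fitted to the history
t = np.arange(len(history))
slope, intercept = np.polyfit(t, history, 1)
extrapolation = intercept + slope * np.arange(len(history), len(history) + 3)

# Judgmental forecast for the same three periods, e.g. from a domain
# expert who knows a promotion is planned (hypothetical values)
judgment = np.array([126.0, 130.0, 140.0])

# Equal-weights combination: the benchmark integration scheme
combined = 0.5 * extrapolation + 0.5 * judgment
print(np.round(combined, 1))
```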

    Demand Forecasting: Evidence-Based Methods

    In recent decades, much comparative testing has been conducted to determine which forecasting methods are more effective under given conditions. This evidence-based approach leads to conclusions that differ substantially from current practice. This paper summarizes the primary findings on what to do – and what not to do. When quantitative data are scarce, impose structure by using expert surveys, intentions surveys, judgmental bootstrapping, prediction markets, structured analogies, and simulated interaction. When quantitative data are abundant, use extrapolation, quantitative analogies, rule-based forecasting, and causal methods. Among causal methods, use econometrics when prior knowledge is strong, data are reliable, and few variables are important. When there are many important variables and extensive knowledge, use index models. Use structured methods to incorporate prior knowledge from experiments and experts’ domain knowledge as inputs to causal forecasts. Combine forecasts from different forecasters and methods. Avoid methods that are complex, that have not been validated, and that ignore domain knowledge; these include intuition, unstructured meetings, game theory, focus groups, neural networks, stepwise regression, and data mining
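    The selection guidance summarized in this abstract can be loosely encoded as a lookup, as in the Python sketch below. The flags, groupings, and abbreviated method lists are a simplification for illustration, not the paper's full set of conditions.

```python
def suggest_methods(quant_data_abundant, strong_prior_knowledge=False,
                    many_important_variables=False):
    """Rough, simplified encoding of the method-selection guidance."""
    if not quant_data_abundant:
        # Impose structure on judgment when hard data are scarce
        return ["expert surveys", "intentions surveys",
                "judgmental bootstrapping", "prediction markets",
                "structured analogies", "simulated interaction"]
    methods = ["extrapolation", "quantitative analogies",
               "rule-based forecasting"]
    if strong_prior_knowledge and not many_important_variables:
        methods.append("econometric models")
    elif many_important_variables:
        methods.append("index models")
    return methods

print(suggest_methods(quant_data_abundant=False))
print(suggest_methods(True, strong_prior_knowledge=True))
```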

    Autoencoders for strategic decision support

    In the majority of executive domains, a notion of normality is involved in most strategic decisions. However, few data-driven tools that support strategic decision-making are available. We introduce and extend the use of autoencoders to provide strategically relevant granular feedback. A first experiment indicates that experts are inconsistent in their decision making, highlighting the need for strategic decision support. Furthermore, using two large industry-provided human resources datasets, the proposed solution is evaluated in terms of ranking accuracy, synergy with human experts, and dimension-level feedback. This three-point scheme is validated using (a) synthetic data, (b) the perspective of data quality, (c) blind expert validation, and (d) transparent expert evaluation. Our study confirms several principal weaknesses of human decision-making and stresses the importance of synergy between a model and humans. Moreover, unsupervised learning and in particular the autoencoder are shown to be valuable tools for strategic decision-making
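    As an illustration of how an autoencoder can supply dimension-level feedback, the sketch below trains a small single-hidden-layer autoencoder with NumPy on synthetic stand-in data (not the industry-provided HR datasets used in the paper) and scores each observation by per-feature reconstruction error; the features with the largest errors are the dimensions on which a case deviates most from learned normality.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 500 "cases" described by 8 numeric features
X = rng.normal(size=(500, 8))
X[:, 3] = 0.7 * X[:, 0] + 0.3 * rng.normal(size=500)  # correlated columns
X = (X - X.mean(axis=0)) / X.std(axis=0)

n_features, n_hidden, lr = X.shape[1], 3, 0.01
W1 = rng.normal(scale=0.1, size=(n_features, n_hidden))  # encoder weights
W2 = rng.normal(scale=0.1, size=(n_hidden, n_features))  # decoder weights

for epoch in range(2000):
    H = np.tanh(X @ W1)          # encode into a low-dimensional "normality" space
    X_hat = H @ W2               # decode back to the original features
    err = X_hat - X
    # Backpropagation for mean squared reconstruction error
    grad_W2 = H.T @ err / len(X)
    grad_H = err @ W2.T * (1 - H ** 2)
    grad_W1 = X.T @ grad_H / len(X)
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2

# Dimension-level feedback: squared reconstruction error per feature
recon = np.tanh(X @ W1) @ W2
per_dim_error = (recon - X) ** 2           # shape (n_samples, n_features)
anomaly_score = per_dim_error.sum(axis=1)  # overall deviation from normality
most_atypical = np.argsort(anomaly_score)[-5:]
print(anomaly_score[most_atypical])
print(per_dim_error[most_atypical].argmax(axis=1))  # feature driving each score
```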

    NASA Lewis Research Center Futuring Workshop

    On October 21 and 22, 1986, the Futures Group ran a two-day Futuring Workshop on the premises of NASA Lewis Research Center. The workshop had four main goals: to acquaint participants with the general history of technology forecasting; to familiarize participants with the range of forecasting methodologies; to acquaint participants with the range of applicability, strengths, and limitations of each method; and to offer participants some hands-on experience by working through both judgmental and quantitative case studies. Among the topics addressed during this workshop were: information sources; judgmental techniques; quantitative techniques; merger of judgment with quantitative measurement; data collection methods; and dealing with uncertainty