Is it Possible to Disregard Obsolete Requirements? A Family of Experiments in Software Effort Estimation
Context: Expert judgement is a common method for software effort estimation
in practice today. Estimators are often shown obsolete requirements
together with the real ones to be implemented. Only one previous study has
examined whether such practices bias the estimates. Objective: We conducted
six experiments with both students and practitioners to study, and quantify,
the effect of obsolete requirements on software effort estimation. Method: By
conducting a family of six experiments using both students and practitioners as
research subjects (N = 461), and by using a Bayesian data analysis approach, we
investigated different aspects of this effect. We also argue for, and show an
example of, how a Bayesian approach lets us be more confident in our
results and enables further studies with small sample sizes. Results: We found
that the presence of obsolete requirements triggered an overestimation of
effort across all experiments. The effect, however, was smaller in a field
setting than with students as subjects. Still, the overestimation
triggered by the obsolete requirements was systematically around twice the
percentage of included obsolete requirements, albeit with a large 95% credible
interval. Conclusions: The results have implications for both research and
practice: the systematic error we found should be accounted for both in
studies on software estimation and, perhaps more importantly, in estimation
practices, to avoid overestimation due to this error. We partly
attribute this error to the cognitive bias of
anchoring-and-adjustment, i.e. the obsolete requirements anchored estimators
on a much larger system. However, further studies are needed in order to
predict this effect accurately.
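As an illustration of the kind of small-sample Bayesian analysis the abstract argues for, the sketch below computes a posterior credible interval for a mean overestimation effect under a conjugate normal model. The data and all parameter values are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical relative effort overestimates (0.2 = 20% over),
# standing in for experimental observations; not the paper's data.
obs = rng.normal(loc=0.2, scale=0.15, size=12)

# Conjugate normal model with known observation variance:
# prior mu ~ N(mu0, tau0^2), likelihood x_i ~ N(mu, sigma^2).
mu0, tau0, sigma = 0.0, 1.0, 0.15
n = len(obs)
post_var = 1.0 / (1.0 / tau0**2 + n / sigma**2)
post_mean = post_var * (mu0 / tau0**2 + obs.sum() / sigma**2)

# 95% credible interval for the mean overestimation.
lo, hi = post_mean + np.array([-1.96, 1.96]) * np.sqrt(post_var)
print(f"posterior mean {post_mean:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

Even with only a dozen observations, the posterior combines prior and data coherently, which is the advantage for small-sample studies the abstract points to.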
Predicting with sparse data
It is well known that effective prediction of project cost related factors is an important aspect of software engineering. Unfortunately, despite extensive research over more than 30 years, this remains a significant problem for many practitioners. A major obstacle is the absence of reliable and systematic historic data, yet this is a sine qua non for almost all proposed methods: statistical, machine learning or calibration of existing models. In this paper we describe our sparse data method (SDM), based upon a pairwise comparison technique and Saaty's Analytic Hierarchy Process (AHP). Our minimum data requirement is a single known point. The technique is supported by a software tool known as DataSalvage. We show, for data from two companies, how our approach — itself based upon expert judgement — adds value to expert judgement by producing significantly more accurate and less biased results. A sensitivity analysis shows that our approach is robust to pairwise comparison errors. We then describe the results of a small usability trial with a practising project manager. From this empirical work we conclude that the technique is promising and may help overcome some of the present barriers to effective project prediction.
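The pairwise-comparison step of AHP can be sketched as follows: priority weights are taken from the principal eigenvector of a reciprocal comparison matrix, and a single known effort value anchors them to an absolute scale, matching the method's stated minimum data requirement of one known point. The comparison matrix and the known effort are assumptions for illustration; this is not the DataSalvage implementation.

```python
import numpy as np

# Hypothetical reciprocal pairwise comparison matrix for three
# projects' relative effort (Saaty scale): A[i, j] = effort_i / effort_j.
A = np.array([
    [1.0,   3.0,   5.0],
    [1/3.0, 1.0,   2.0],
    [1/5.0, 1/2.0, 1.0],
])

# Priority weights are the normalized principal eigenvector of A.
vals, vecs = np.linalg.eig(A)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = w / w.sum()

# A single known data point anchors the relative weights
# to absolute effort (value assumed for the example).
known_effort_project0 = 40.0  # person-days
estimates = w / w[0] * known_effort_project0
print(estimates)
```

Consistency of the comparison matrix can additionally be checked via Saaty's consistency ratio, which the abstract's sensitivity analysis relates to robustness against comparison errors.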
Statistical inference with anchored Bayesian mixture of regressions models: A case study analysis of allometric data
We present a case study in which we use a mixture of regressions model to
improve on an ill-fitting simple linear regression model relating log brain
mass to log body mass for 100 placental mammalian species. The slope of this
regression model is of particular scientific interest because it corresponds to
a constant that governs a hypothesized allometric power law relating brain mass
to body mass. A specific line of investigation is to determine whether the
regression parameters vary across subgroups of related species.
We model these data using an anchored Bayesian mixture of regressions model,
which modifies the standard Bayesian Gaussian mixture by pre-assigning small
subsets of observations to given mixture components with probability one. These
observations (called anchor points) break the relabeling invariance typical of
exchangeable model specifications (the so-called label-switching problem). A
careful choice of which observations to pre-classify to which mixture
components is key to the specification of a well-fitting anchor model.
In the article we compare three strategies for the selection of anchor
points. The first assumes that the underlying mixture of regressions model
holds and assigns anchor points to different components to maximize the
information about their labeling. The second makes no assumption about the
relationship between x and y and instead identifies anchor points using a
bivariate Gaussian mixture model. The third strategy begins with the assumption
that there is only one mixture regression component and identifies anchor
points that are representative of a clustering structure based on case-deletion
importance sampling weights. We compare the performance of the three strategies
on the allometric data set and use auxiliary taxonomic information about the
species to evaluate the model-based classifications estimated from these
models.
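A minimal sketch of the anchoring idea, using a two-component mixture of regressions fitted by EM on synthetic data: the anchor points' responsibilities are clamped to one component with probability one at every iteration, which is what breaks the relabeling invariance. The data, the anchor choice, and the use of EM (rather than full Bayesian sampling) are simplifications for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for the allometric data: two regression regimes.
n = 100
x = rng.uniform(0, 10, n)
z_true = (rng.random(n) < 0.5).astype(int)
y = np.where(z_true == 0, 1.0 + 0.5 * x, 4.0 + 0.2 * x) + rng.normal(0, 0.3, n)

# Anchor points: a few observations pre-assigned to each component with
# probability one (chosen here from the ground truth, for illustration).
anchors = {0: np.where(z_true == 0)[0][:2], 1: np.where(z_true == 1)[0][:2]}

def em_anchored(x, y, anchors, iters=50):
    n = len(x)
    X = np.column_stack([np.ones(n), x])
    resp = np.full((n, 2), 0.5)
    beta, sigma, pi = np.zeros((2, 2)), np.ones(2), np.array([0.5, 0.5])
    for _ in range(iters):
        # Clamp anchor responsibilities: probability one for their component.
        for k, idx in anchors.items():
            resp[idx] = 0.0
            resp[idx, k] = 1.0
        # M-step: weighted least squares and residual scale per component.
        for k in range(2):
            w = resp[:, k]
            beta[k] = np.linalg.solve((X * w[:, None]).T @ X, X.T @ (w * y))
            r = y - X @ beta[k]
            sigma[k] = np.sqrt((w * r**2).sum() / w.sum())
        pi = resp.mean(axis=0)
        # E-step: posterior responsibilities (anchors re-clamped next pass).
        dens = np.stack([
            pi[k] * np.exp(-0.5 * ((y - X @ beta[k]) / sigma[k])**2) / sigma[k]
            for k in range(2)], axis=1)
        resp = dens / dens.sum(axis=1, keepdims=True)
    return beta, sigma, pi

beta, sigma, pi = em_anchored(x, y, anchors)
print("slopes:", sorted(beta[:, 1]))
```

Without the clamping step the two components could be relabeled without changing the likelihood; fixing the anchors' assignments removes that symmetry, mirroring the role anchor points play in the article's Bayesian model.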
How and Why Decision Models Influence Marketing Resource Allocations
We study how and why model-based Decision Support Systems (DSSs) influence managerial decision making, in the context of marketing budgeting and resource allocation. We consider several questions: (1) What does it mean for a DSS to be "good"? (2) What is the relationship between an anchor or reference condition, a DSS-supported recommendation, and decision quality? (3) How does a DSS influence the decision process, and how does the process influence outcomes? (4) Is the effect of the DSS on the decision process and outcome robust, or context specific? We test hypotheses about the effects of DSSs in a controlled experiment with two award-winning DSSs and find that (1) DSSs improve users' objective decision outcomes (an index of likely realized revenue or profit); (2) DSS users often do not report enhanced subjective perceptions of outcomes; and (3) DSSs that provide feedback in the form of specific recommendations and their associated projected benefits have a stronger effect both on the decision-making process and on the outcomes. Our results suggest that although managers actually achieve improved outcomes from DSS use, they may not perceive that the DSS has improved the outcomes. Therefore, there may be limited interest in managerial uses of DSSs unless they are designed to: (1) encourage discussion (e.g., by providing explanations and support for the recommendations), (2) provide feedback to users on likely marketplace results, and (3) help reduce the perceived complexity of the problem so that managers will consider more alternatives and invest more cognitive effort in searching for improved outcomes.
Keywords: marketing models; resource allocation; DSS; decision process; decision quality
Estimating the Underground Economy using MIMIC Models
MIMIC models are being used to estimate the size of the underground economy or the tax gap in various countries. In this paper I examine critically both the method in general and three applications of the method by Giles and Tedds (2002), Bajada and Schneider (2005) and Dell'Anno and Schneider (2003). Connections are shown to familiar econometric models of linear regression and simultaneous equations. I also investigate the auxiliary procedures used in this literature, including differencing as a treatment for unit roots and the calibration of results using other data. The three applications demonstrate how the method is subjective and pliable in practice. I conclude that the MIMIC method is unfit for the purpose.
Keywords: underground economy; MIMIC; structural modelling; LISREL® software
ESTIMATING WILLINGNESS-TO-PAY USING A POLYCHOTOMOUS CHOICE FUNCTION: AN APPLICATION TO PORK PRODUCTS WITH ENVIRONMENTAL ATTRIBUTES
This paper utilizes a polychotomous choice function to investigate the relationship between socioeconomic characteristics and willingness-to-pay for embedded environmental attributes. Specifically, a two-stage estimation procedure with an ordered probit selection rule is used to predict the premium payers and the magnitude of the premium they are willing to pay.
Keywords: Research Methods/Statistical Methods
Estimating Willingness to Pay Using a Polychotomous Choice Function: An Application to Pork Products with Environmental Attributes
Bid data from a Vickrey auction for pork chops with embedded environmental attributes were analyzed. It was found that approximately 62% of the participants had a positive WTP for the most "environmentally friendly" package of pork. Thirty percent of the participants had no WTP, and 8% had a negative WTP. A polychotomous choice model was used to accommodate data having an anchoring point within the distribution of the data. Standard variables found in the WTP literature coupled with this model were used to predict participants who were premium payers and non-premium payers using an estimated ordered probit equation.
Keywords: anchoring points; environmental attributes; ordered probit; polychotomous choice functions; pork; Vickrey auction; willingness to pay; Consumer/Household Economics; Demand and Price Analysis
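An ordered probit of the kind used in this pair of papers can be sketched by maximum likelihood: a latent WTP index, two ordered cutpoints separating the negative, zero, and positive premium categories (the zero anchoring point motivating the polychotomous treatment), and a normal-CDF likelihood for each observed category. All data and parameter values below are simulated for illustration.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(2)
# Simulated data: a single covariate (e.g. income) drives a latent WTP
# index; categories 0/1/2 = negative / zero / positive premium.
n = 300
income = rng.normal(0, 1, n)
latent = 0.8 * income + rng.normal(0, 1, n)
ycat = np.digitize(latent, [-1.0, 0.0])  # true cutpoints of the process

def negloglik(params):
    b, c1, gap = params
    c2 = c1 + np.exp(gap)           # enforce ordered cutpoints c1 < c2
    cuts = np.array([-np.inf, c1, c2, np.inf])
    lo = cuts[ycat] - b * income    # lower cut for each observation
    hi = cuts[ycat + 1] - b * income
    p = norm.cdf(hi) - norm.cdf(lo)
    return -np.log(np.clip(p, 1e-12, None)).sum()

res = minimize(negloglik, x0=[0.0, -1.0, 0.0], method="Nelder-Mead")
b_hat = res.x[0]
print("estimated covariate effect:", round(b_hat, 2))
```

The estimated coefficient recovers the sign and rough magnitude of the simulated effect; in the papers' two-stage procedure, fitted category probabilities from such an equation feed the premium-magnitude stage.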
Comparability of Health Care Responsiveness in Europe using anchoring vignettes from SHARE
The aim of this paper is to measure and correct for the potential incomparability of responses to the SHARE survey on health care responsiveness. A parametric approach based on the use of anchoring vignettes is applied to cross-sectional data (2006-07) from ten European countries. More than 6,000 respondents aged 50 and over were asked to assess the quality of health care responsiveness in three domains: waiting time for medical treatment, quality of the conditions in visited health facilities, and communication and involvement in decisions about the treatment. Chopit model estimates suggest that reporting heterogeneity is influenced by both individual (socio-economic, health) and national characteristics. Although correction for differential item functioning does not considerably modify country rankings after controlling for the usual covariates, about two thirds of the respondents' self-assessments were re-scaled in each domain. Our results suggest that reporting heterogeneity tends to overestimate health care responsiveness for "time to wait for treatment", whereas it seems to underestimate people's self-assessments in the two other domains.
Keywords: anchoring vignettes; cross-country comparison; Chopit model
Evolutionary Selection of Individual Expectations and Aggregate Outcomes
In recent 'learning to forecast' experiments with human subjects (Hommes et al., 2005), three different patterns in aggregate asset price behavior have been observed: slow monotonic convergence, permanent oscillations, and dampened fluctuations. We construct a simple model of individual learning, based on performance-based evolutionary selection or reinforcement learning among heterogeneous expectations rules, that explains these different aggregate outcomes. The out-of-sample predictive power of our switching model is higher than that of the rational or other homogeneous expectations benchmarks. Our results show that heterogeneity in expectations is crucial to describe individual forecasting behavior as well as aggregate price behavior.
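A toy version of such a performance-based switching model can be sketched as follows: two forecasting rules (adaptive and trend-following) compete, the fraction using each rule updates via a logit of accumulated forecast performance (a discrete-choice switching mechanism in the spirit of this literature), and the price aggregates the rules' forecasts. All parameters and the pricing rule are assumptions for illustration, not the authors' model.

```python
import numpy as np

beta = 2.0            # intensity of choice in the logit switching rule
prices = [1.0, 1.1]   # two initial prices to seed the trend rule
frac_trend = 0.5      # initial fraction using the trend rule
perf = np.zeros(2)    # accumulated (negative squared-error) performance

for t in range(2, 50):
    p1, p2 = prices[-1], prices[-2]
    f_adapt = p1                  # naive/adaptive expectation
    f_trend = p1 + (p1 - p2)      # trend extrapolation
    # Toy pricing rule: fraction-weighted average of the forecasts.
    p_new = frac_trend * f_trend + (1 - frac_trend) * f_adapt
    # Update performance with memory, then logit fractions.
    perf = 0.9 * perf
    perf[0] -= (f_adapt - p_new) ** 2
    perf[1] -= (f_trend - p_new) ** 2
    e = np.exp(beta * (perf - perf.max()))  # shift for numerical stability
    frac_trend = e[1] / e.sum()
    prices.append(p_new)

print("final price:", round(prices[-1], 4), "trend fraction:", round(frac_trend, 3))
```

With these parameters the price converges monotonically, one of the three aggregate patterns the abstract lists; raising the intensity of choice or the trend extrapolation strength can instead produce oscillatory dynamics.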