Use of Equivalent Relative Utility (ERU) to Evaluate Artificial Intelligence-Enabled Rule-Out Devices
We investigated the use of equivalent relative utility (ERU) to evaluate the
effectiveness of artificial intelligence (AI)-enabled rule-out devices that use
AI to identify and autonomously remove non-cancer patient images from
radiologist review in screening mammography. We reviewed two performance metrics
that can be used to compare the diagnostic performance between the
radiologist-with-rule-out-device and radiologist-without-device workflows:
positive/negative predictive values (PPV/NPV) and equivalent relative utility
(ERU). To demonstrate the use of the two evaluation metrics, we applied both
methods to a recent US-based study that reported an improved performance of the
radiologist-with-device workflow compared to the one without the device by
retrospectively applying their AI algorithm to a large mammography dataset. We
further applied the ERU method to a European study utilizing their reported
recall rates and cancer detection rates at different thresholds of their AI
algorithm to compare the potential utility among different thresholds. For the
study using US data, neither the PPV/NPV nor the ERU method supports a conclusion of
significant improvement in diagnostic performance at any of the reported algorithm
thresholds. For the study using European data, ERU values at lower AI
thresholds are found to be higher than those at a higher threshold, because more
false-negative cases would be ruled out at the higher threshold, reducing
overall diagnostic performance. Both PPV/NPV and ERU methods can be used to
compare the diagnostic performance between the radiologist-with-device workflow
and that without. One limitation of the ERU method is the need to measure the
baseline, standard-of-care relative utility (RU) value for mammography
screening in the US. Once the baseline value is known, the ERU method can be
applied to large US datasets without knowing the true prevalence of the
dataset
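To make the bookkeeping behind the PPV/NPV comparison concrete, the sketch below scores the two workflows from confusion-matrix counts, treating every case the AI autonomously rules out as a negative call. The counts, the function name ppv_npv, and the scoring rule are illustrative assumptions, not figures from either study.

```python
# Minimal sketch (illustrative numbers only) of the PPV/NPV comparison between
# the radiologist-without-device and radiologist-with-rule-out-device workflows.
# Cases the AI removes from review are scored as negative calls, so a ruled-out
# cancer becomes an extra false negative.

def ppv_npv(tp, fp, tn, fn):
    """Positive and negative predictive values from confusion-matrix counts."""
    return tp / (tp + fp), tn / (tn + fn)

# Hypothetical counts for 10,000 screening exams (not data from either study).
without_device = dict(tp=45, fp=950, tn=8990, fn=15)
with_device = dict(tp=44, fp=700, tn=9240, fn=16)  # one cancer ruled out, 250 fewer recalls

for label, counts in (("without device", without_device), ("with device", with_device)):
    ppv, npv = ppv_npv(**counts)
    print(f"{label}: PPV = {ppv:.3f}, NPV = {npv:.4f}")
```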
On reminder effects, drop-outs and dominance: evidence from an online experiment on charitable giving
We present the results of an experiment that (a) shows the usefulness of screening out drop-outs and (b) tests whether different methods of payment and reminder intervals affect charitable giving. Following a lab session, participants could make online donations to charity over a total duration of three months. Our procedure for justifying the exclusion of drop-outs consists of requiring participants to collect their payments in person, under flexible arrangements that were known in advance and later highlighted to them. Our interpretation is that participants who failed to collect their positive payments under these circumstances are unlikely to satisfy dominance. If we restrict the sample to subjects who did not drop out, but not otherwise, reminders significantly increase the overall amount of charitable giving. We also find that weekly reminders are no more effective than monthly reminders in increasing charitable giving and that, in our three-month experiment, standing orders do not increase giving relative to one-off donations
Beating the random walk: a performance assessment of long-term interest rate forecasts
This article assesses the performance of a number of long-term interest rate forecasting approaches, namely time series models, structural economic models, expert forecasts and combinations thereof. The predictive performance of these approaches is compared using out-of-sample forecast errors, with a random walk forecast acting as the benchmark. It is found that for five major Organization for Economic Co-operation and Development (OECD) countries, namely the US, Germany, the UK, the Netherlands and Japan, the other forecasting approaches do not outperform the random walk on a 3-month forecast horizon. On a 12-month forecast horizon, the random walk model is outperformed by a model that combines economic data and expert forecasts. Several methods of combination are considered: equal weights, optimized weights and weights based on the forecast error. It seems that the additional information content of the structural models and expert knowledge adds considerably to forecasting performance 12 months ahead. © 2013 Taylor & Francis
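The evaluation logic described here (a no-change random-walk benchmark against an equal-weight forecast combination, judged on out-of-sample errors) can be illustrated with a toy simulation; the yield series, the two stand-in forecasters, and the error levels below are invented for illustration and do not reproduce the article's data or models.

```python
# Toy sketch of the evaluation logic described above: a no-change random-walk
# benchmark versus an equal-weight combination of two forecasters, scored on
# out-of-sample root-mean-square error. All series are simulated.
import numpy as np

rng = np.random.default_rng(0)
rate = 5.0 + np.cumsum(rng.normal(0.0, 0.1, 120))   # hypothetical monthly long-term yield
horizon = 12                                         # 12-month-ahead forecasts

actual = rate[horizon:]
random_walk = rate[:-horizon]                        # forecast = last observed value

# Stand-ins for the combined approach: a noisy model built around the random
# walk and an "expert" with genuine but noisy information about the future.
model_fc = random_walk + rng.normal(0.0, 0.15, actual.size)
expert_fc = actual + rng.normal(0.0, 0.25, actual.size)
combined = 0.5 * model_fc + 0.5 * expert_fc          # equal-weight combination

def rmse(forecast, target):
    return float(np.sqrt(np.mean((forecast - target) ** 2)))

print("RMSE, random walk:", rmse(random_walk, actual))
print("RMSE, combination:", rmse(combined, actual))
```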
Evidence and Ideology in Macroeconomics: The Case of Investment Cycles
The paper reports the principal findings of a long-term research project on the description and explanation of business cycles. The research strongly confirmed the older view that business cycles have large systematic components that take the form of investment cycles. These quasi-periodic movements can be represented as low-order stochastic dynamic processes with complex eigenvalues. Specifically, there is a fixed-investment cycle of about 8 years and an inventory cycle of about 4 years. Maximum entropy spectral analysis was employed for the description of the cycles and continuous-time econometrics for the explanatory models. The central explanatory mechanism is the second-order accelerator, which incorporates adjustment costs both in relation to the capital stock and the rate of investment. By means of parametric resonance it was possible to show, both theoretically and empirically, how cycles aggregate from the micro to the macro level. The same mathematical tool was also used to explain the international convergence of cycles. I argue that the theory of investment cycles was abandoned for ideological, not for evidential reasons. Methodological issues are also discussed
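As a schematic illustration of the claim that quasi-periodic cycles can be captured by a low-order stochastic process with complex eigenvalues, the sketch below simulates an AR(2) whose roots are set to give a cycle of roughly 8 years; the damping, cycle length, and the use of a plain periodogram (in place of the paper's maximum entropy spectral analysis and continuous-time econometrics) are assumptions for illustration only.

```python
# Schematic illustration (not the paper's estimated model): an AR(2) with
# complex roots r*exp(+/- i*theta) produces damped oscillations of period
# 2*pi/theta, set here to roughly 8 years of annual data.
import numpy as np

r, period = 0.9, 8.0                        # assumed damping and cycle length (years)
theta = 2 * np.pi / period
phi1, phi2 = 2 * r * np.cos(theta), -r**2   # AR(2) coefficients with complex roots

rng = np.random.default_rng(1)
x = np.zeros(200)                           # 200 years of simulated annual data
for t in range(2, x.size):
    x[t] = phi1 * x[t - 1] + phi2 * x[t - 2] + rng.normal()

freqs = np.fft.rfftfreq(x.size, d=1.0)      # cycles per year
spectrum = np.abs(np.fft.rfft(x - x.mean())) ** 2
print("dominant cycle length ~", 1.0 / freqs[spectrum.argmax()], "years")
```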
The Origin of Behavior
We propose a single evolutionary explanation for the origin of several behaviors that have been observed in organisms ranging from ants to human subjects, including risk-sensitive foraging, risk aversion, loss aversion, probability matching, randomization, and diversification. Given an initial population of individuals, each assigned a purely arbitrary behavior with respect to a binary choice problem, and assuming that offspring behave identically to their parents, only those behaviors linked to reproductive success will survive, and less reproductively successful behaviors will disappear at exponential rates. When the uncertainty in reproductive success is systematic, natural selection yields behaviors that may be individually sub-optimal but are optimal from the population perspective; when reproductive uncertainty is idiosyncratic, the individual and population perspectives coincide. This framework generates a surprisingly rich set of behaviors, and the simplicity and generality of our model suggest that these derived behaviors are primitive and nearly universal within and across species
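The claim that systematic reproductive uncertainty selects individually sub-optimal but population-optimal behaviors can be checked with a small growth-rate calculation. The setup below is a stylized version of the binary-choice framework, not the paper's exact model: the state probability p, the offspring count m, and the rule that only the choice matching the realized state reproduces are assumptions made here for illustration.

```python
# Toy growth-rate check: each generation one of two states occurs (state A with
# probability p); only the choice matching the state yields m offspring; a
# behavior is the probability f of choosing A, inherited exactly by offspring.
import numpy as np

p, m = 0.7, 2.0   # assumed state probability and offspring count

def log_growth(f, p=p, m=m):
    """Long-run exponential growth rate of a lineage with behavior f when the
    uncertainty is systematic (every individual faces the same state)."""
    f = np.clip(f, 1e-12, 1 - 1e-12)   # keep the arithmetic finite at f = 0 or 1
    return p * np.log(f * m) + (1 - p) * np.log((1 - f) * m)

for f in (0.0, 1.0, 0.5, p):
    print(f"f = {f:.2f}: growth rate = {log_growth(f):+.3f}")
# The maximum sits at f = p ("probability matching"); the deterministic
# behaviors f = 0 and f = 1 are effectively ruinous, since the whole lineage
# produces no offspring whenever the unfavorable state occurs.
```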
Effect of training-sample size and classification difficulty on the accuracy of genomic predictors
Introduction: As part of the MicroArray Quality Control (MAQC)-II project, this analysis examines how the choice of univariate feature-selection methods and classification algorithms may influence the performance of genomic predictors under varying degrees of prediction difficulty represented by three clinically relevant endpoints.
Methods: We used gene-expression data from 230 breast cancers (grouped into training and independent validation sets), and we examined 40 predictors (five univariate feature-selection methods combined with eight different classifiers) for each of the three endpoints. Their classification performance was estimated on the training set by using two different resampling methods and compared with the accuracy observed in the independent validation set.
Results: A ranking of the three classification problems was obtained, and the performance of 120 models was estimated and assessed on an independent validation set. The bootstrapping estimates were closer to the validation performance than were the cross-validation estimates. The required sample size for each endpoint was estimated, and both gene-level and pathway-level analyses were performed on the obtained models.
Conclusions: We showed that genomic predictor accuracy is determined largely by an interplay between sample size and classification difficulty. Variations on univariate feature-selection methods and choice of classification algorithm have only a modest impact on predictor performance, and several statistically equally good predictors can be developed for any given classification problem
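For readers who want a concrete picture of what one such predictor looks like, here is a generic scikit-learn sketch of a univariate feature-selection method combined with a classifier, with a resampling estimate compared against an independent validation set; the synthetic data, the SelectKBest-plus-logistic-regression choice, and all sizes are placeholders rather than the MAQC-II data or the 40 predictors actually examined.

```python
# Generic sketch of a genomic predictor: univariate feature selection plus a
# classifier, with a cross-validation estimate compared to held-out validation.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import make_pipeline

# Synthetic "expression" matrix: many features, few of them informative.
X, y = make_classification(n_samples=230, n_features=2000, n_informative=20,
                           random_state=0)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.35,
                                                      random_state=0)

# Feature selection lives inside the pipeline so the resampling estimate does
# not leak information from held-out folds.
predictor = make_pipeline(SelectKBest(f_classif, k=50),
                          LogisticRegression(max_iter=1000))

cv_estimate = cross_val_score(predictor, X_train, y_train, cv=5).mean()
validation_accuracy = predictor.fit(X_train, y_train).score(X_valid, y_valid)
print(f"resampling (cross-validation) estimate: {cv_estimate:.2f}")
print(f"independent validation accuracy:        {validation_accuracy:.2f}")
```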
LeChatelier-Samuelson Principle in Games and Pass-Through of Shocks
The LeChatelier-Samuelson principle ("the principle") states that, as a reaction to a shock, an agent's short-run adjustment of an action is smaller than the long-run adjustment of that action when the other related actions can also be adjusted. We extend the principle to strategic environments and to shocks that affect more than one action directly. We define the long run as an adjustment that also includes the affected player adjusting its other actions and other players adjusting their strategies. We show that the principle holds for 1) supermodular games (strategic complements) and 2) submodular games (strategic substitutes), for shocks that affect only one player's action directly and when the players' payoffs depend only on their own strategies and the sum of the rivals' strategies (for example, homogeneous Cournot oligopoly). We also provide other sufficient conditions for the principle to hold in games of strategic substitutes. Our results imply that when the principle holds, a multiproduct oligopoly might have lower cost pass-through in the short run than in the long run. Hence, we argue that the principle might explain the empirical findings of cost and unit-tax overshifting by multiproduct firms
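In symbols (the notation here is assumed, not taken from the paper): for a shock that moves a parameter from θ to θ′, write x_i*(θ) for the pre-shock equilibrium action, x_i^SR(θ′) for the short-run response in which only the directly affected action adjusts, and x_i^LR(θ′) for the long-run response in which the player's other actions and the rivals' strategies adjust as well. The principle then asserts

```latex
% Schematic statement of the principle under the notation described above.
\[
  \bigl|\, x_i^{\mathrm{SR}}(\theta') - x_i^{*}(\theta) \,\bigr|
  \;\le\;
  \bigl|\, x_i^{\mathrm{LR}}(\theta') - x_i^{*}(\theta) \,\bigr|.
\]
```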
Happiness economics
There is enough evidence to be confident that individuals are able and willing to provide a meaningful answer when asked to value on a finite scale their satisfaction with their own lives, a question that psychologists have long and often posed to respondents of large questionnaires. Without taking its limitations and criticisms too lightly, some economists have been using this measure of self-reported satisfaction as a proxy for utility so as to contribute to a better understanding of individuals' tastes and hopefully behavior. By means of satisfaction questions we can elicit information on individual likes and dislikes over a large set of relevant issues, such as income, working status and job amenities, the risk of becoming unemployed, inflation, and health status. This information can be used to evaluate existing ideas from a new perspective, understand individual behavior, evaluate and design public policies, study poverty and inequality, and develop a preference-based valuation method. In this article I first critically assess the pros and cons of using satisfaction variables, and then discuss their main applications
Transparency systems: do businesses in North Rhine-Westphalia (Germany) regret the cancellation of the Smiley scheme?
Our paper explores how participants of voluntary transparency systems react to the cancellation of such programmes. We concern ourselves with participants of the voluntary transparency scheme known as the “North Rhine-Westphalia Smiley”. The Smiley system, which rewarded the compliant behavior of businesses that joined it, was established in 2007 but cancelled in 2013 due to lack of participants. In our survey, the vast majority of the respondents express regret at the cancellation of the scheme. The goals of this paper are to (i) econometrically explain how socio-demographic, monetary, and non-monetary determinants influence participants’ willingness to continue with the voluntary transparency system and (ii) find reasons for the inconsistency between the lack of participants and the expression of regret within our survey. We find evidence that the non-monetary variables “revenue” and “award” and the monetary variable “revenue” influence participants’ regret. We speculate that status quo bias and loss aversion are the reasons why businesses favour maintaining the Smiley scheme once they have experienced it