On Cognitive Preferences and the Plausibility of Rule-based Models
It is conventional wisdom in machine learning and data mining that logical
models such as rule sets are more interpretable than other models, and that
among such rule-based models, simpler models are more interpretable than more
complex ones. In this position paper, we question this latter assumption by
focusing on one particular aspect of interpretability, namely the plausibility
of models. Roughly speaking, we equate the plausibility of a model with the
likeliness that a user accepts it as an explanation for a prediction. In
particular, we argue that, all other things being equal, longer explanations
may be more convincing than shorter ones, and that the predominant bias for
shorter models, which is typically necessary for learning powerful
discriminative models, may not be suitable when it comes to user acceptance of
the learned models. To that end, we first recapitulate evidence for and against
this postulate, and then report the results of an evaluation in a
crowd-sourcing study based on about 3,000 judgments. The results do not reveal
a strong preference for simple rules, whereas we can observe a weak preference
for longer rules in some domains. We then relate these results to well-known
cognitive biases such as the conjunction fallacy, the representativeness heuristic,
or the recognition heuristic, and investigate their relation to rule length and
plausibility.
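A toy illustration may help connect rule length to the conjunction fallacy mentioned above: every added condition can only shrink a conjunctive rule's coverage, so a longer rule is logically never more probable, even when users find it more plausible. The sketch below uses invented attribute names and data purely for illustration; it is not taken from the study.

```python
# Illustrative sketch (hypothetical rules and data): adding conditions to a
# conjunctive rule can only shrink its coverage, mirroring the conjunction
# fallacy, where P(A and B) <= P(A) yet the conjunction is often judged
# more plausible.

records = [
    {"income": 60, "employed": True,  "owns_home": True},
    {"income": 70, "employed": True,  "owns_home": False},
    {"income": 55, "employed": False, "owns_home": True},
    {"income": 40, "employed": True,  "owns_home": True},
]

short_rule = lambda r: r["income"] > 50                          # one condition
long_rule = lambda r: short_rule(r) and r["employed"] and r["owns_home"]

coverage = lambda rule: sum(rule(r) for r in records) / len(records)

print(coverage(short_rule))  # 0.75
print(coverage(long_rule))   # 0.25 -- can never exceed the short rule's coverage
```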
Measuring time preferences
We review research that measures time preferences, i.e., preferences over intertemporal tradeoffs. We distinguish between studies using financial flows, which we call “money earlier or later” (MEL) decisions, and studies that use time-dated consumption/effort. Under different structural models, we show how to translate what MEL experiments directly measure (required rates of return for financial flows) into a discount function over utils. We summarize empirical regularities found in MEL studies and the predictive power of those studies. We explain why MEL choices are driven in part by some factors that are distinct from underlying time preferences.
Funding: National Institutes of Health (NIA R01AG021650 and P01AG005842) and the Pershing Square Fund for Research in the Foundations of Human Behavior.
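To make the MEL-to-discount-function translation concrete, here is a minimal back-of-the-envelope sketch under textbook assumptions (linear utility, exponential or quasi-hyperbolic discounting); the indifference points are invented, and the shortcut ignores the survey's caution that financial flows are not consumption.

```python
# Minimal sketch: recovering discount parameters from MEL indifference points,
# assuming linear utility (a strong simplification; not the survey's method).

def exponential_delta(x_sooner, y_later, t):
    """Exponential discounting: x = delta**t * y at indifference,
    so delta = (x / y) ** (1 / t), with t counted in periods."""
    return (x_sooner / y_later) ** (1.0 / t)

def quasi_hyperbolic_beta(x_now, y_later, t, delta):
    """Quasi-hyperbolic (beta-delta): for a now-vs-later indifference,
    x = beta * delta**t * y, so beta = x / (delta**t * y).
    delta itself is identified from later-vs-even-later choices,
    where the present-bias term beta cancels out."""
    return x_now / (delta ** t * y_later)

# Invented indifference points: $90 in 12 months ~ $100 in 24 months
delta = exponential_delta(90.0, 100.0, 12)            # monthly delta ~ 0.991
# $80 today ~ $100 in 12 months, given that delta
beta = quasi_hyperbolic_beta(80.0, 100.0, 12, delta)  # ~ 0.889, present bias
print(delta, beta)
```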
The Performance of Performance Standards
This paper examines the performance of the JTPA performance system, a widely emulated model for inducing efficiency in government organizations. We present a model of how performance incentives may distort bureaucratic decisions. We define cream skimming within the model. Two major empirical findings are (a) that the short-run measures used to monitor performance are weakly, and sometimes perversely, related to long-run impacts and (b) that the efficiency gains or losses from cream skimming are small. We find evidence that centers respond to performance standards.
The weak rationality principle in economics
The weak rationality principle is not an empirical statement but a heuristic rule for how to proceed in the social sciences. It is a necessary ingredient of any “understanding” social science in the Weberian sense. In this paper, this principle and its role in economic theorizing are discussed first. It is also explained why it makes sense to use a micro-foundation and, therefore, to employ the rationality assumption in economic models. Then, with reference to the “bounded rationality” approach, the informational assumptions are discussed. Third, we address the assumption of self-interest, which is often seen as part of the rationality assumption. We conclude with some remarks on handling the problems of “free will” as well as “weakness of the will” within the economic approach.
Predicting Health Impacts of the World Trade Center Disaster: 1. Halogenated hydrocarbons, symptom syndromes, secondary victimization, and the burdens of history
The recent attack on the World Trade Center, in addition to direct injury and psychological trauma, has exposed a vast population to dioxins, dibenzofurans, related endocrine disruptors, and a multitude of other physiologically active chemicals arising from the decomposition of the massive quantities of halogenated hydrocarbons and other plastics within the affected buildings. The impacts of these chemical species have been compounded by exposure to asbestos, fiberglass, crushed glass, concrete, plastic, and other irritating dusts. To address the manifold complexities of this incident we combine recent theoretical perspectives on immune, CNS, and sociocultural cognition with empirical studies on survivors of past large toxic fires, other community-scale chemical exposure incidents, and the aftereffects of war. Our analysis suggests the appearance of complex, but distinct and characteristic, spectra of synergistically linked social, psychosocial, psychological and physical symptoms among the 100,000 or so persons most directly affected by the WTC attack. The different 'eigenpatterns' should become increasingly comorbid as a function of exposure. The expected outcome greatly transcends a simple 'Post Traumatic Stress Disorder' model, and may resemble a particularly acute form of Gulf War Syndrome. We explore the role of external social factors in subsequent exacerbation of the syndrome -- secondary victimization -- and study the path-dependent influence of individual and community-level historical patterns of stress. We suggest that workplace and other organizations can act as ameliorating intermediaries. Those without access to such buffering structures appear to face a particularly bleak future.
The Cyclical Behavior of Industrial Labor Markets: A Comparison of the Pre-War and Post-War Eras
This paper studies the cyclical behavior of a number of industrial labor markets of the pre-war (1923-1939) and post-war (1954-1982) eras. In the spirit of Burns and Mitchell we do not test a specific structural model of the labor market but instead concentrate on describing the qualitative features of the (monthly, industry-level) data. The two principal questions we ask are: First, how is labor input (as measured by the number of workers, the hours of work, and the intensity of utilization) varied over the cycle? Second, what is the cyclical behavior of labor compensation (as measured by real wages, product wages, and real weekly earnings)? We study these questions in both the frequency domain and the time domain. Many of our findings simply reinforce, or perhaps refine, existing perceptions of cyclical labor market behavior. However, we do find some interesting differences between the pre-war and the post-war periods in the relative use of layoffs and short hours in downturns, and in the cyclical behavior of the real wage.
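For a concrete sense of the two toolkits mentioned, the sketch below applies both to a synthetic monthly series: lead-lag correlations in the time domain and a periodogram in the frequency domain. The series, lag structure, and cycle length are invented for illustration and are not the paper's data or procedure.

```python
# Hedged sketch (synthetic data, not the paper's series): time-domain and
# frequency-domain views of a cyclical monthly labor-market variable.
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(360)                               # 30 years of monthly data
cycle = np.sin(2 * np.pi * months / 60)               # stylized 5-year cycle
output = cycle + 0.3 * rng.standard_normal(360)
hours = np.roll(cycle, 3) + 0.3 * rng.standard_normal(360)  # hours lag output

# Time domain: lead-lag correlations of hours with output.
for lag in range(-6, 7):
    corr = np.corrcoef(output[6:-6], np.roll(hours, -lag)[6:-6])[0, 1]
    print(f"lag {lag:+d}: {corr:+.2f}")               # peaks near lag +3

# Frequency domain: periodogram of output; power concentrates at the
# business-cycle frequency (a period of roughly 60 months here).
power = np.abs(np.fft.rfft(output - output.mean())) ** 2
freqs = np.fft.rfftfreq(len(output), d=1.0)           # cycles per month
print("dominant period in months:", 1 / freqs[np.argmax(power[1:]) + 1])
```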
Growth Econometrics
This paper provides a survey and synthesis of econometric tools that have been employed to study economic growth. While these tools range across a variety of statistical methods, they are united in the common goals of first, identifying interesting contemporaneous patterns in growth data and second, drawing inferences on long-run economic outcomes from cross-section and temporal variation in growth. We describe the main stylized facts that have motivated the development of growth econometrics, the major statistical tools that have been employed to provide structural explanations for these facts, and the primary statistical issues that arise in the study of growth data. An important aspect of the survey is attention to the limits that exist in drawing conclusions from growth data, limits that reflect model uncertainty and the general weakness of available data relative to the sorts of questions for which they are employed.
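As a pointer to the kind of tool the survey covers, the sketch below runs the canonical cross-country growth regression behind the convergence debate: average growth regressed on log initial income, where a negative coefficient indicates conditional beta-convergence. The data are simulated; the variable names and magnitudes are illustrative only, not the survey's specification.

```python
# Minimal sketch of a cross-country "Barro-style" growth regression on
# simulated data (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
n = 90                                        # hypothetical countries
log_y0 = rng.normal(8.5, 1.0, n)              # log initial income per capita
schooling = rng.normal(6.0, 2.0, n)           # years of schooling (control)

# Simulate growth with conditional convergence built in:
# poorer countries grow faster, all else equal.
growth = 0.12 - 0.01 * log_y0 + 0.002 * schooling + rng.normal(0, 0.01, n)

X = np.column_stack([np.ones(n), log_y0, schooling])
coef, *_ = np.linalg.lstsq(X, growth, rcond=None)
print("coefficient on log initial income:", coef[1])  # ~ -0.01 (convergence)
```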
