Risk Estimation and Expert Judgment: The Case of Yucca Mountain
Professor Shrader-Frechette discusses the factors responsible for the acute disagreement between the federal government and Nevada citizens over potential risks at Yucca Mountain, focusing on the use of expert judgment and concluding that some expert judgments appear to exemplify bad science. Beyond that, she argues that 1,000-year predictions cannot be made from current knowledge of geology or, e.g., institutional behavior, and concludes that permanent disposal of radioactive waste is currently impossible.
A quantum theoretical explanation for probability judgment errors
A quantum probability model is introduced and used to explain human probability judgment errors, including the conjunction, disjunction, inverse, and conditional fallacies, as well as unpacking effects and partitioning effects. Quantum probability theory is a general and coherent theory based on a set of (von Neumann) axioms which relax some of the constraints underlying classical (Kolmogorov) probability theory. The quantum model is compared and contrasted with other competing explanations for these judgment errors, including the representativeness heuristic, the averaging model, and a memory retrieval model for probability judgments. The quantum model also provides ways to extend Bayesian, fuzzy set, and fuzzy trace theories. We conclude that quantum information processing principles provide a viable and promising new way to understand human judgment and reasoning.
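As a concrete illustration (my sketch, in the spirit of the model the abstract describes; the angles and event labels are hypothetical, chosen only to make the effect visible), the conjunction "fallacy" arises in a two-dimensional quantum model when the conjunction is judged as a sequence of projections, because projectors onto non-commuting event axes can make the sequential probability exceed the direct single-event probability:

import numpy as np

def unit(deg):
    # Unit vector at the given angle (degrees) in a 2-D feature plane.
    t = np.deg2rad(deg)
    return np.array([np.cos(t), np.sin(t)])

psi = unit(80)   # belief state evoked by a "Linda"-like description (hypothetical angle)
f = unit(45)     # axis of event A, e.g. "feminist"
b = unit(0)      # axis of event B, e.g. "bank teller"

p_b = np.dot(psi, b) ** 2              # direct judgment of B: ~0.03
proj_a = np.dot(psi, f) * f            # first project the belief state onto A
p_a_then_b = np.dot(proj_a, b) ** 2    # then onto B: ~0.34, exceeding P(B)

print(f"P(B) = {p_b:.3f}, P(A then B) = {p_a_then_b:.3f}")

Because the A and B projectors do not commute, P(A then B) is not bounded above by P(B); Kolmogorov probability imposes exactly that bound, and relaxing it is what lets a model of this kind reproduce the conjunction fallacy.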
On Cognitive Preferences and the Plausibility of Rule-based Models
It is conventional wisdom in machine learning and data mining that logical models such as rule sets are more interpretable than other models, and that among such rule-based models, simpler models are more interpretable than more complex ones. In this position paper, we question the latter assumption by focusing on one particular aspect of interpretability, namely the plausibility of models. Roughly speaking, we equate the plausibility of a model with the likelihood that a user accepts it as an explanation for a prediction. In particular, we argue that, all other things being equal, longer explanations may be more convincing than shorter ones, and that the predominant bias for shorter models, which is typically necessary for learning powerful discriminative models, may not be suitable when it comes to user acceptance of the learned models. To that end, we first recapitulate evidence for and against this postulate, and then report the results of an evaluation in a crowd-sourcing study based on about 3,000 judgments. The results do not reveal a strong preference for simple rules, whereas we can observe a weak preference for longer rules in some domains. We then relate these results to well-known cognitive biases such as the conjunction fallacy, the representativeness heuristic, or the recognition heuristic, and investigate their relation to rule length and plausibility.
Simplicity Effects in the Experience of Near-Miss
Near-miss experiences are one of the main sources of intense emotions. Despite people's consistency when judging near-miss situations and when communicating about them, there is no integrated theoretical account of the phenomenon. In particular, individuals' reactions to near-miss situations are not correctly predicted by rationality-based or probability-based optimization. The present study suggests that emotional intensity in the case of a near-miss is in part predicted by Simplicity Theory.
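As background (my gloss, not text from the abstract): Simplicity Theory, developed by Jean-Louis Dessalles, scores the unexpectedness U of a situation as the gap between its generation complexity C_w (the complexity of the causal path needed to produce it) and its description complexity C_d:

U = C_w - C_d

On this account a near-miss is emotionally intense because it is exceptionally simple to describe ("off by one number", "one second later"), so C_d drops while C_w stays high, driving U, and with it the emotional reaction, upward.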
How should peer-review panels behave?
Many governments wish to assess the quality of their universities. A prominent example is the UK's new Research Excellence Framework (REF) 2014. In the REF, peer-review panels will be provided with information on publications and citations. This paper suggests a way in which panels could choose the weights to attach to these two indicators. The analysis draws in an intuitive way on the concept of Bayesian updating (where citations gradually reveal information about the initially imperfectly observed importance of the research). Our study should not be interpreted as arguing that only mechanistic measures ought to be used in a REF.
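To make the updating intuition concrete, here is a minimal sketch under assumed normal-normal conjugacy (an illustration, not the paper's actual model): a paper's latent importance gets a noisy publication-based peer signal at once, while citation evidence has an effective noise that shrinks as citations accumulate, so the posterior weight on citations grows with time.

def indicator_weights(v_pub, v_cit, t):
    # Posterior weights are proportional to each signal's precision;
    # citation precision is assumed to grow linearly with elapsed time t.
    prec_pub = 1.0 / v_pub
    prec_cit = t / v_cit
    total = prec_pub + prec_cit
    return prec_pub / total, prec_cit / total

for t in [1, 3, 5, 10]:
    w_pub, w_cit = indicator_weights(v_pub=1.0, v_cit=4.0, t=t)
    print(f"t={t:2d}: publications={w_pub:.2f}, citations={w_cit:.2f}")

With these toy variances the panel would start at roughly an 80/20 split in favour of the publication reading and move past 70 percent weight on citations by year ten, mirroring the idea that citations gradually reveal the initially imperfectly observed importance of the research.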