Uncertainty in Ontologies: Dempster-Shafer Theory for Data Fusion Applications
Ontologies are attracting growing interest in Data Fusion applications: they are
seen as a semantic tool for describing and reasoning about sensor data, objects,
relations and general domain theories. In addition, uncertainty is one of the
most important characteristics of the data and information handled by Data
Fusion. However, by their fundamental nature, ontologies describe only asserted,
veracious facts about the world. Various probabilistic, fuzzy and evidential
approaches already exist to fill this gap; this paper recaps the most popular
tools. However, none of these tools exactly meets our purposes. We therefore
constructed a Dempster-Shafer ontology that can be imported into any
domain-specific ontology and that allows its instances to be created in an
uncertain manner. We also developed a Java application that reasons about these
uncertain ontological instances.

Comment: Workshop on Theory of Belief Functions, Brest, France (2010)
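The core operation behind this kind of evidential reasoning is Dempster's rule of combination, which fuses two independent mass functions and renormalizes away the conflicting mass. The following is not the paper's Java implementation, only a minimal Python sketch of the rule itself; the two-element frame of discernment {a, b} and the example mass assignments are assumptions for illustration.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    Each mass function is a dict mapping frozenset focal elements
    to their mass; masses over each function should sum to 1.
    """
    combined = {}
    conflict = 0.0
    for (b, w1), (c, w2) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:  # non-empty intersection: mass supports it
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:      # empty intersection: conflicting evidence
            conflict += w1 * w2
    if conflict >= 1.0:
        raise ValueError("total conflict: combination undefined")
    norm = 1.0 - conflict  # renormalize over the non-conflicting mass
    return {a: w / norm for a, w in combined.items()}

# Hypothetical example over the frame {a, b}:
m1 = {frozenset({"a"}): 0.6, frozenset({"a", "b"}): 0.4}
m2 = {frozenset({"b"}): 0.5, frozenset({"a", "b"}): 0.5}
m = combine(m1, m2)  # conflict K = 0.6 * 0.5 = 0.3
```

Here the conflicting mass K = 0.3 is discarded and the remainder is rescaled, yielding m({a}) = 3/7, m({b}) = m({a, b}) = 2/7.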
Mathematical models of games of chance: Epistemological taxonomy and potential in problem-gambling research
Games of chance are developed into their physical, consumer-ready form on the basis of mathematical models, which stand as the premises of their existence and represent their physical processes. Statistical and probabilistic models predominate in the interest of all parties involved in the study of gambling – researchers, game producers and operators, and players – while functional models are of interest more to math-inclined players than to problem-gambling researchers. In this paper I present a structural analysis of the knowledge attached to mathematical models of games of chance and to the act of modeling, arguing that such knowledge holds potential in the prevention and cognitive treatment of excessive gambling, and I propose further research in this direction.
Moral Uncertainty for Deontologists
Defenders of deontological constraints in normative ethics face a challenge: how should an agent decide what to do when she is uncertain whether some course of action would violate a constraint? The most common response to this challenge has been to defend a threshold principle on which it is subjectively permissible to act iff the agent's credence that her action would be constraint-violating is below some threshold t. But the threshold approach seems arbitrary and unmotivated: what could possibly determine where the threshold should be set, and why should there be any precise threshold at all? Threshold views also seem to violate ought agglomeration, since a pair of actions each of which is below the threshold for acceptable moral risk can, in combination, exceed that threshold. In this paper, I argue that stochastic dominance reasoning can vindicate and lend rigor to the threshold approach: given characteristically deontological assumptions about the moral value of acts, it turns out that morally safe options will stochastically dominate morally risky alternatives when and only when the likelihood that the risky option violates a moral constraint is greater than some precisely definable threshold (in the simplest case, .5). I also show how, in combination with the observation that deontological moral evaluation is relativized to particular choice situations, this approach can overcome the agglomeration problem. This allows the deontologist to give a precise and well-motivated response to the problem of uncertainty.
Bounded Rationality and Heuristics in Humans and in Artificial Cognitive Systems
In this paper I present an analysis of the impact that the notion of “bounded
rationality”, introduced by Herbert Simon in his book “Administrative Behavior”,
has had on the field of Artificial Intelligence (AI). In particular, by focusing
on the field of Automated Decision Making (ADM), I show how introducing the
cognitive dimension into the study of the choices of a rational (natural) agent
indirectly led, in the AI field, to a line of research aimed at building
artificial systems whose decisions rest on powerful shortcut strategies (known
as heuristics) that accept “satisficing” - i.e. non-optimal - solutions to
problems. I show how this “heuristic approach” to problem solving allowed AI to
face problems of combinatorial complexity in real-life situations, and how it
still represents an important strategy for the design and implementation of
intelligent systems.
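Simon's satisficing strategy can be sketched in a few lines: rather than evaluating every option to find the optimum, the agent accepts the first option that meets an aspiration level. This is a hypothetical minimal illustration, not code from any system discussed in the paper; the option list, scoring function and aspiration level are all assumed.

```python
def satisfice(options, evaluate, aspiration):
    """Return the first option whose value meets the aspiration level.

    Unlike exhaustive optimization, the search stops as soon as a
    "good enough" option is found (Simon's satisficing heuristic).
    """
    for opt in options:
        if evaluate(opt) >= aspiration:
            return opt
    return None  # no option meets the aspiration level

# Hypothetical example: accept the first score of at least 8,
# even though a later option (10) would be the true optimum.
choice = satisfice([3, 7, 9, 10], evaluate=lambda x: x, aspiration=8)
```

On the example above the heuristic returns 9, skipping the optimal 10; the trade-off is exactly the one the paper describes, giving up optimality to keep search tractable.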