    Depth-bounded Belief functions

    This paper introduces and investigates Depth-bounded Belief functions, a logic-based representation of quantified uncertainty. Depth-bounded Belief functions are based on the framework of Depth-bounded Boolean logics [4], which provide a hierarchy of approximations to classical logic. Similarly, Depth-bounded Belief functions give rise to a hierarchy of increasingly tighter lower and upper bounds over classical measures of uncertainty. This has the rather welcome consequence that “higher logical abilities” lead to sharper uncertainty quantification. In particular, our main results identify the conditions under which Dempster-Shafer Belief functions and probability functions can be represented as a limit of a suitable sequence of Depth-bounded Belief functions.
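
    As background for the Dempster-Shafer functions mentioned in the abstract, the sketch below shows how a mass function induces a lower bound (belief) and an upper bound (plausibility) on an event. This is a generic illustration of Dempster-Shafer theory, not the paper's depth-bounded construction; the frame and mass values are invented.

```python
# Generic Dempster-Shafer illustration; frame and mass values are invented.

def belief(mass, event):
    """Bel(A): total mass of focal sets contained in A (lower bound on A)."""
    return sum(m for focal, m in mass.items() if focal <= event)

def plausibility(mass, event):
    """Pl(A): total mass of focal sets intersecting A (upper bound on A)."""
    return sum(m for focal, m in mass.items() if focal & event)

# Toy mass function on the frame {a, b, c}.
mass = {
    frozenset({"a"}): 0.4,
    frozenset({"a", "b"}): 0.3,
    frozenset({"a", "b", "c"}): 0.3,   # mass on the whole frame = ignorance
}

event = frozenset({"a", "b"})
print(belief(mass, event), plausibility(mass, event))   # 0.7 1.0
```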

    Believing Probabilistic Contents: On the Expressive Power and Coherence of Sets of Sets of Probabilities

    Moss (2018) argues that rational agents are best thought of not as having degrees of belief in various propositions but as having beliefs in probabilistic contents, or probabilistic beliefs. Probabilistic contents are sets of probability functions. Probabilistic belief states, in turn, are modeled by sets of probabilistic contents, or sets of sets of probability functions. We argue that this Mossean framework is of considerable interest quite independently of its role in Moss’ account of probabilistic knowledge or her semantics for epistemic modals and probability operators. It is an extremely general model of uncertainty. Indeed, it is at least as general and expressively powerful as every other current imprecise probability framework, including lower probabilities, lower previsions, sets of probabilities, sets of desirable gambles, and choice functions. In addition, we partially answer an important question that Moss leaves open, viz., why should rational agents have consistent probabilistic beliefs? We show that an important subclass of Mossean believers avoid Dutch bookability iff they have consistent probabilistic beliefs.
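
    To make the “sets of sets of probability functions” structure concrete, here is a minimal sketch with invented numbers, not drawn from Moss or the paper: probabilistic contents over a three-world space are encoded as predicates on probability vectors, and a belief state counts as consistent iff a single probability function lies in every content (checked here by naive grid search).

```python
import itertools

# Worlds w0, w1, w2; a probability function is a vector (p0, p1, p2) summing to 1.
# A probabilistic content is a set of probability functions, encoded as a predicate.
content_a = lambda p: p[0] >= 0.5           # "it is at least .5 likely we are in w0"
content_b = lambda p: p[1] + p[2] <= 0.6    # "w1-or-w2 is at most .6 likely"

belief_state = [content_a, content_b]       # a set of probabilistic contents

def consistent(contents, step=0.01):
    """Naive grid search: does one probability function lie in every content?"""
    n = round(1 / step)
    for i, j in itertools.product(range(n + 1), repeat=2):
        if i + j <= n:
            p = (i * step, j * step, 1 - (i + j) * step)
            if all(c(p) for c in contents):
                return True
    return False

print(consistent(belief_state))  # True: e.g. p = (0.5, 0.3, 0.2) satisfies both
```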

    Learning backward induction: a neural network agent approach

    This paper addresses the question of whether neural networks (NNs), a realistic cognitive model of human information processing, can learn to backward induce in a two-stage game with a unique subgame-perfect Nash equilibrium. The NNs were found to predict the Nash equilibrium approximately 70% of the time in new games. Like humans, the neural network agents were also found to suffer from subgame and truncation inconsistency, supporting the contention that they are appropriate models of general learning in humans. The agents were found to behave in a boundedly rational manner as a result of the endogenous emergence of decision heuristics. In particular, a very simple heuristic, socialmax, which chooses the cell with the highest social payoff, explains their behavior approximately 60% of the time, whereas the ownmax heuristic, which simply chooses the cell with the maximum payoff for that agent, fares worse, explaining behavior roughly 38% of the time, albeit still significantly better than chance. These two heuristics were found to be ecologically valid for the backward induction problem, as they predicted the Nash equilibrium in 67% and 50% of the games respectively. Compared to various standard classification algorithms, the NNs were found to be only slightly more accurate than standard discriminant analyses. However, the latter do not model the dynamic learning process and have an ad hoc postulated functional form. In contrast, an NN agent’s behavior evolves with experience and is capable of taking on any functional form according to the universal approximation theorem.
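
    The two heuristics are simple enough to state directly; the sketch below applies them to a single grid of payoff cells for two players, with payoff values invented for illustration (the paper's games and exact encoding are not reproduced here).

```python
# Each cell of the game maps to a (row_player_payoff, column_player_payoff)
# pair. Payoff values are invented for illustration.
cells = {
    "A": (8, 2),
    "B": (5, 5),
    "C": (3, 9),
    "D": (1, 1),
}

def socialmax(cells):
    """Pick the cell with the highest joint (social) payoff."""
    return max(cells, key=lambda c: sum(cells[c]))

def ownmax(cells, player=0):
    """Pick the cell with the highest payoff for the given player alone."""
    return max(cells, key=lambda c: cells[c][player])

print(socialmax(cells))         # "C": 3 + 9 = 12 is the largest joint payoff
print(ownmax(cells, player=0))  # "A": 8 is the row player's largest own payoff
```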

    Approximations from Anywhere and General Rough Sets

    Not all approximations arise from information systems. The problem of fitting approximations, subject to some rules (and related data), to information systems in a rough scheme of things is known as the inverse problem. The inverse problem is more general than the duality (or abstract representation) problems and was introduced by the present author in her earlier papers. From the practical perspective, a few (as opposed to one) theoretical frameworks may be suitable for formulating the problem itself. Granular operator spaces have recently been introduced and investigated by the present author in the context of antichain-based and dialectical semantics for general rough sets. The nature of the inverse problem is examined from number-theoretic and combinatorial perspectives in a higher-order variant of granular operator spaces, and some necessary conditions are proved. The results and the novel approach would be useful in a number of unsupervised and semi-supervised learning contexts and algorithms. (Comment: 20 pages. Scheduled to appear in IJCRS'2017 LNCS Proceedings, Springer.)
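
    For orientation, the sketch below shows the standard granule-based lower and upper approximations whose direction the inverse problem reverses. It illustrates plain partition-based rough sets with invented data, not the paper's higher-order granular operator spaces.

```python
def lower_approximation(granules, target):
    """Union of all granules wholly contained in the target set."""
    return {x for g in granules if g <= target for x in g}

def upper_approximation(granules, target):
    """Union of all granules that intersect the target set."""
    return {x for g in granules if g & target for x in g}

# Illustrative partition of {1,...,6} into granules, and a target set X.
granules = [frozenset({1, 2}), frozenset({3, 4}), frozenset({5, 6})]
X = {1, 2, 3}

print(lower_approximation(granules, X))  # {1, 2}: only {1,2} fits inside X
print(upper_approximation(granules, X))  # {1, 2, 3, 4}: {3,4} also overlaps X
```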

    Dilating and contracting arbitrarily

    Standard accuracy-based approaches to imprecise credences have the consequence that it is rational to move between precise and imprecise credences arbitrarily, without gaining any new evidence. Building on the Educated Guessing Framework of Horowitz (2019), we develop an alternative accuracy-based approach to imprecise credences that does not have this shortcoming. We argue that it is always irrational to move from a precise state to an imprecise state arbitrarily; however, it can be rational to move from an imprecise state to a precise state arbitrarily.
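
    A standard textbook illustration of dilation, not the paper's Educated Guessing construction, may help fix ideas: a fair coin H paired with a proposition E whose correlation with H is completely unknown yields a precise prior credence in H that dilates to the full unit interval upon conditioning on E.

```python
# Classic dilation example (illustrative, not from the paper): H is a fair
# coin; E is a proposition whose correlation with H is completely unknown.
# Each prior in the credal set fixes P(H and E) = t with P(H) = P(E) = 1/2,
# where t may lie anywhere in [0, 1/2]; t is sampled on a grid below.

def prob_H_given_E(t):
    """P(H | E) = P(H and E) / P(E) under the prior indexed by t."""
    return t / 0.5

ts = [0.5 * i / 100 for i in range(101)]       # grid over t in [0, 1/2]

p_H = {0.5}                                    # every prior agrees on P(H)
p_H_given_E = {prob_H_given_E(t) for t in ts}  # spreads across [0, 1]

print(min(p_H), max(p_H))                      # 0.5 0.5 : precise before E
print(min(p_H_given_E), max(p_H_given_E))      # 0.0 1.0 : dilated after E
```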

    A statistical inference method for the stochastic reachability analysis

    The main contribution of this paper is the characterization of the reachability problem associated with stochastic hybrid systems in terms of imprecise probabilities. This provides the connection between the reachability problem and Bayesian statistics. Using generalised Bayesian statistical inference, a new concept of conditional reach set probabilities is defined. Possible algorithms to compute the reach set probabilities are then derived.
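
    As a rough point of reference, reach-set probabilities of the kind discussed here can be estimated generically by Monte Carlo simulation. The sketch below uses toy random-walk dynamics and an invented target set; it is not the paper's Bayesian inference method.

```python
import random

def reach_probability(n_trajectories=10_000, horizon=50, x0=0.0,
                      target=(2.0, 3.0), seed=0):
    """Monte Carlo estimate of P(trajectory enters target within horizon).
    Toy dynamics: a Gaussian random walk; all parameters are illustrative."""
    rng = random.Random(seed)
    lo, hi = target
    hits = 0
    for _ in range(n_trajectories):
        x = x0
        for _ in range(horizon):
            x += rng.gauss(0.0, 0.3)   # one noisy step of the toy system
            if lo <= x <= hi:          # trajectory reached the target set
                hits += 1
                break
    return hits / n_trajectories

print(reach_probability())
```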

    Epistemic virtues, metavirtues, and computational complexity

    I argue that considerations about computational complexity show that all finite agents need characteristics like those that have been called epistemic virtues. The necessity of these virtues follows in part from the nonexistence of shortcuts, or of efficient ways of finding shortcuts, to cognitively expensive routines. It follows that agents must possess the capacities – metavirtues – of developing in advance the cognitive virtues they will need when time and memory are at a premium.

    Technology Adoption in Poorly Specified Environments

    This article extends the characteristics-based choice framework of technology adoption to account for decisions taken by boundedly rational individuals in environments where traits are not fully observed. It is applied to an agricultural setting and introduces the concept of ambiguity into the agricultural technology adoption literature by relaxing strict informational and cognitive assumptions implied by traditional Bayesian analysis. The main results confirm that ambiguity increases as local conditions become less homogeneous and as computational ability, own experience and nearby adoption rates decrease. Measurement biases associated with full-rationality assumptions are found to increase when decision makers have low computational ability and low experience, and when their farming conditions differ widely from those of the average adopter. A complementary empirical paper (Useche 2006) finds that models assuming low confidence in observed data, ambiguity and pessimistic expectations about traits predict sample shares better than models which assume that farmers do not face ambiguity or are optimistic about the traits of new varieties.
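
    One standard way to formalize choice under ambiguity, sketched below with invented payoffs and priors rather than the paper's econometric model, is maxmin expected utility over a credal set of trait distributions: the decision maker picks the technology with the best worst-case expected payoff.

```python
# Two technologies; the payoff of the new one depends on an unobserved trait
# state. Under ambiguity the farmer entertains a *set* of distributions over
# states rather than a single Bayesian prior. All numbers are invented.

payoffs = {
    "traditional": {"good": 5.0, "bad": 5.0},   # known, trait-insensitive
    "new_variety": {"good": 9.0, "bad": 2.0},   # sensitive to the unknown trait
}

# Credal set: P(good) is only known to lie between 0.3 and 0.7.
priors = [{"good": p, "bad": 1 - p} for p in (0.3, 0.5, 0.7)]

def worst_case_eu(tech):
    """Minimum expected payoff of `tech` over every prior in the credal set."""
    return min(sum(prior[s] * payoffs[tech][s] for s in prior)
               for prior in priors)

# Maxmin (ambiguity-averse) choice: best worst-case expected payoff.
choice = max(payoffs, key=worst_case_eu)
print({t: worst_case_eu(t) for t in payoffs}, "->", choice)
# traditional: 5.0; new_variety: 0.3*9 + 0.7*2 = 4.1 -> traditional
```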