
    Characterizing perfect recall using next-step temporal operators in S5 and sub-S5 Epistemic Temporal Logic

    We review the notion of perfect recall in the literature on interpreted systems, game theory, and epistemic logic. In the context of Epistemic Temporal Logic (ETL), we give a (to our knowledge) novel frame condition for perfect recall, which is local and can straightforwardly be translated to a defining formula in a language that only has next-step temporal operators. This frame condition also gives rise to a complete axiomatization for S5 ETL frames with perfect recall. We then consider how to extend and consolidate the notion of perfect recall in sub-S5 settings, where the various notions discussed are no longer equivalent.
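As an illustration of the kind of frame condition at issue, the standard perfect-recall property from the interpreted-systems literature can be checked on a finite frame. This is a minimal sketch of that classical condition, not the paper's own (novel, local) formulation: if two worlds are epistemically indistinguishable, their temporal predecessors must already have been indistinguishable. The function name and the encoding of relations as pair sets are our own assumptions.

```python
def has_perfect_recall(step, indist):
    """Check the classical perfect-recall frame condition.

    step:   set of (w, w') pairs, meaning w' is a temporal successor of w.
    indist: set of (w, v) pairs, the agent's epistemic indistinguishability.

    Perfect recall: whenever successors w' and v' are indistinguishable,
    their predecessors w and v must be indistinguishable too.
    """
    for (w, w1) in step:
        for (v, v1) in step:
            if (w1, v1) in indist and (w, v) not in indist:
                return False
    return True


# Worlds 0, 1 step to worlds 2, 3 respectively.
step = {(0, 2), (1, 3)}
reflexive = {(0, 0), (1, 1), (2, 2), (3, 3)}

# 2 and 3 are confused, but 0 and 1 were distinguishable: recall fails.
bad = has_perfect_recall(step, reflexive | {(2, 3), (3, 2)})

# Making 0 and 1 indistinguishable as well restores perfect recall.
good = has_perfect_recall(step, reflexive | {(2, 3), (3, 2), (0, 1), (1, 0)})
```

Intuitively, the condition says uncertainty can never appear out of nowhere: any confusion the agent has now must trace back to confusion she already had at the previous step.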

    Wasserstein Introspective Neural Networks

    We present Wasserstein introspective neural networks (WINN) that are both a generator and a discriminator within a single model. WINN provides a significant improvement over the recent introspective neural networks (INN) method by enhancing INN's generative modeling capability. WINN has three interesting properties: (1) A mathematical connection between the formulation of the INN algorithm and that of Wasserstein generative adversarial networks (WGAN) is made. (2) The explicit adoption of the Wasserstein distance into INN results in a large enhancement to INN, achieving compelling results even with a single classifier --- e.g., providing nearly a 20 times reduction in model size over INN for unsupervised generative modeling. (3) When applied to supervised classification, WINN also gives rise to improved robustness against adversarial examples in terms of the error reduction. In the experiments, we report encouraging results on unsupervised learning problems including texture, face, and object modeling, as well as a supervised classification task against adversarial attacks. Comment: Accepted to CVPR 2018 (Oral).
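The Wasserstein objective that WINN adopts from WGAN can be illustrated on a toy problem. This is a sketch of the generic WGAN critic objective (maximize the critic's expected value on real samples minus its value on fake samples, with a Lipschitz constraint via weight clipping, as in the original WGAN), not WINN's actual training procedure; the linear critic and all constants here are our own simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)
real = rng.normal(2.0, 1.0, size=(256, 1))   # samples from the "data" distribution
fake = rng.normal(0.0, 1.0, size=(256, 1))   # samples from the "model" distribution

# A linear critic f(x) = w.x; its value gap under a Lipschitz constraint
# lower-bounds (a scaled) Wasserstein-1 distance between the two distributions.
w = np.zeros((1, 1))
for _ in range(200):
    grad = real.mean(axis=0) - fake.mean(axis=0)  # gradient of E[f(real)] - E[f(fake)]
    w += 0.1 * grad.reshape(1, 1)
    w = np.clip(w, -0.01, 0.01)                   # weight clipping keeps f Lipschitz

gap = (real @ w.T).mean() - (fake @ w.T).mean()   # positive: distributions differ
```

Since the real distribution is centered at 2 and the fake one at 0, the trained critic separates them and the gap is strictly positive.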

    The impact of Eysenck's extraversion-introversion personality dimension on prospective memory

    Prospective memory (PM) is memory for future events. PM is a developing area of research (e.g., Brandimonte, Einstein & McDaniel, 1996) with recent work linking personality types and their utilisation of PM (Goschke & Kuhl, 1996; Searleman, 1996). The present study compared 28 extraverts and 28 introverts on their short- and long-term prospective memory using the Prospective Memory Scale developed by Hannon, Adams, Harrington, Fries-Dias & Gibson (1995). The main finding was that extraverts reported significantly fewer errors on short- and long-term PM than introverts, and this difference could not be explained in terms of the number of strategies used to support prospective remembering. These findings are discussed in relation to differences between the personality types.

    A Processive View of Perceptual Experience

    The goal of this piece is to put some pressure on Brian O’Shaughnessy’s claim that perceptual experiences are necessarily mental processes. The author targets two motivations behind the development of that view. First, O’Shaughnessy resorts to pure conceptual analysis to argue that perceptual experiences are processes. The author argues that this line of reasoning is inconclusive. Secondly, he repeatedly invokes a thought experiment concerning the total freeze of a subject’s experiential life. Even if this case is coherent, however, it does not show that perceptual experiences are processes.

    Forgetting complex propositions

    This paper uses possible-world semantics to model the changes that may occur in an agent's knowledge as she loses information. This builds on previous work in which the agent may forget the truth-value of an atomic proposition, to a more general case where she may forget the truth-value of a propositional formula. The generalization poses some challenges, since in order to forget whether a complex proposition π is the case, the agent must also lose information about the propositional atoms that appear in it, and there is no unambiguous way to go about this. We resolve this situation by considering expressions of the form [‡π]φ, which quantify over all possible (but minimal) ways of forgetting whether π. Propositional atoms are modified non-deterministically, although uniformly, in all possible worlds. We then represent this within action model logic in order to give a sound and complete axiomatization for a logic with knowledge and forgetting. Finally, some variants are discussed, such as when an agent forgets π (rather than forgets whether π) and when the modification of atomic facts is done non-uniformly throughout the model.
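The atomic case that this paper generalizes can be sketched in a few lines. This is a toy illustration under our own encoding (worlds as frozensets of true atoms, an agent's epistemic state as the set of worlds she considers possible), not the paper's action-model construction: forgetting the truth value of an atom p splits each world into a p-true and a p-false variant, so the agent can no longer settle whether p.

```python
def forget_atom(worlds, p):
    """Forget whether atom p: every world gets a p-true and a p-false variant."""
    return {w | {p} for w in worlds} | {w - {p} for w in worlds}

def knows(worlds, p):
    """The agent knows p iff p holds in every world she considers possible."""
    return all(p in w for w in worlds)


# The agent initially knows both p and q (a single possible world).
state = {frozenset({'p', 'q'})}
after = forget_atom(state, 'p')   # now two worlds: one with p, one without
```

After forgetting, knowledge of p is lost while knowledge of q survives; the challenge the paper addresses is that for a complex π there are many inequivalent ways of performing such atomic splits, which the [‡π] modality quantifies over.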