
    How to entrain your evil demon

    The notion that the brain is a prediction error minimizer entails, via the notions of Markov blankets and self-evidencing, a form of global scepticism — an inability to rule out evil demon scenarios. This type of scepticism is viewed by some as a sign of a fatally flawed conception of mind and cognition. Here I discuss whether this scepticism is ameliorated by acknowledging the role of action in the most ambitious approach to prediction error minimization, namely under the free energy principle. I argue that the scepticism remains but that the role of action in the free energy principle constrains the demon’s work. This yields new insights about the free energy principle, epistemology, and the place of mind in nature.

    Literal Perceptual Inference

    In this paper, I argue that theories of perception that appeal to Helmholtz’s idea of unconscious inference (“Helmholtzian” theories) should be taken literally, i.e. that the inferences appealed to in such theories are inferences in the full sense of the term, as employed elsewhere in philosophy and in ordinary discourse. In the course of the argument, I consider constraints on inference based on the idea that inference is a deliberate action, and on the idea that inferences depend on the syntactic structure of representations. I argue that inference is a personal-level but sometimes unconscious process that cannot in general be distinguished from association on the basis of the structures of the representations over which it’s defined. I also critique arguments against representationalist interpretations of Helmholtzian theories, and argue against the view that perceptual inference is encapsulated in a module.

    Models, Brains, and Scientific Realism

    Prediction Error Minimization theory (PEM) is one of the most promising attempts to model perception in current science of mind, and it has recently been advocated by prominent philosophers such as Andy Clark and Jakob Hohwy. Briefly, PEM maintains that “the brain is an organ that on average and over time continually minimizes the error between the sensory input it predicts on the basis of its model of the world and the actual sensory input” (Hohwy 2014, p. 2). An interesting debate has arisen with regard to which is the more adequate epistemological interpretation of PEM. Hohwy maintains that given that PEM supports an inferential view of perception and cognition, PEM has to be considered as conveying an internalist epistemological perspective. Contrary to this view, Clark maintains that it would be incorrect to interpret the indirectness of the link between the world and our inner model of it in such a way, and that PEM may well be combined with an externalist epistemological perspective. The aim of this paper is to assess those two opposite interpretations of PEM. Moreover, it will be suggested that Hohwy’s position may be considerably strengthened by adopting Carlo Cellucci’s view on knowledge (2013).
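The core PEM claim quoted above can be illustrated with a toy sketch: an internal estimate is revised by gradient descent until the predicted sensory input matches the actual input. The function name, the identity generative model, and all numbers are illustrative assumptions, not a model from Hohwy or Clark.

```python
# Toy illustration of prediction error minimization (PEM): an internal
# estimate `mu` is nudged, step by step, to shrink the squared error
# between predicted and actual sensory input. Hypothetical names; this
# is a sketch of the core idea only.

def minimize_prediction_error(sensory_input, mu=0.0, lr=0.1, steps=100):
    """Update the internal estimate `mu` so the prediction matches input."""
    for _ in range(steps):
        prediction = mu          # simplest generative model: identity
        error = sensory_input - prediction
        mu += lr * error         # descend the squared-error gradient
    return mu

estimate = minimize_prediction_error(3.5)
# The estimate converges toward the input as the error is driven down.
```

Each step multiplies the residual error by (1 - lr), so with lr = 0.1 the error decays geometrically over the 100 steps.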

    Complexity of Nested Circumscription and Nested Abnormality Theories

    The need for a circumscriptive formalism that allows for simple yet elegant modular problem representation has led Lifschitz (AIJ, 1995) to introduce nested abnormality theories (NATs) as a tool for modular knowledge representation, tailored for applying circumscription to minimize exceptional circumstances. Abstracting from this particular objective, we propose L_{CIRC}, which is an extension of generic propositional circumscription by allowing propositional combinations and nesting of circumscriptive theories. As shown, NATs are naturally embedded into this language, and are in fact of equal expressive capability. We then analyze the complexity of L_{CIRC} and NATs, and in particular the effect of nesting. The latter is found to be a source of complexity, which climbs the Polynomial Hierarchy as the nesting depth increases and reaches PSPACE-completeness in the general case. We also identify meaningful syntactic fragments of NATs which have lower complexity. In particular, we show that the generalization of Horn circumscription in the NAT framework remains coNP-complete, and that Horn NATs without fixed letters can be efficiently transformed into an equivalent Horn CNF, which implies polynomial solvability of principal reasoning tasks. Finally, we also study extensions of NATs and briefly address the complexity in the first-order case. Our results give insight into the “cost” of using L_{CIRC} (resp. NATs) as a host language for expressing other formalisms such as action theories, narratives, or spatial theories.
    Comment: A preliminary abstract of this paper appeared in Proc. Seventeenth International Joint Conference on Artificial Intelligence (IJCAI-01), pages 169–174. Morgan Kaufmann, 200
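The minimization that circumscription performs can be sketched, for the flat propositional case with all atoms minimized and no fixed or varying letters, by brute-force enumeration of set-inclusion-minimal models. Nesting, which drives the complexity results above, is not modeled; all names are illustrative.

```python
# Brute-force propositional circumscription (a sketch, not the paper's
# L_{CIRC} machinery): keep only the models of a formula that are minimal
# under set inclusion of true atoms.
from itertools import product

def models(formula, atoms):
    """Yield satisfying assignments as frozensets of true atoms.
    `formula` is a Python predicate over a dict of truth values."""
    for bits in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, bits))
        if formula(v):
            yield frozenset(a for a in atoms if v[a])

def circumscribe(formula, atoms):
    """Discard any model with a strictly smaller model below it."""
    ms = list(models(formula, atoms))
    return [m for m in ms if not any(n < m for n in ms)]

# Example: for T = (a or b), circumscription keeps {a} and {b}
# but discards {a, b}, since smaller models satisfy T.
minimal = circumscribe(lambda v: v["a"] or v["b"], ["a", "b"])
```

The exponential enumeration here mirrors why even flat circumscriptive reasoning sits at the second level of the Polynomial Hierarchy, and why nesting pushes the complexity further up.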

    Premise Selection for Mathematics by Corpus Analysis and Kernel Methods

    Smart premise selection is essential when using automated reasoning as a tool for large-theory formal proof development. A good method for premise selection in complex mathematical libraries is the application of machine learning to large corpora of proofs. This work develops learning-based premise selection in two ways. First, a newly available minimal dependency analysis of existing high-level formal mathematical proofs is used to build a large knowledge base of proof dependencies, providing precise data for ATP-based re-verification and for training premise selection algorithms. Second, a new machine learning algorithm for premise selection based on kernel methods is proposed and implemented. To evaluate the impact of both techniques, a benchmark consisting of 2078 large-theory mathematical problems is constructed, extending the older MPTP Challenge benchmark. The combined effect of the techniques results in a 50% improvement on the benchmark over the Vampire/SInE state-of-the-art system for automated reasoning in large theories.
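The kernel-method idea described above can be sketched as follows: rank candidate premises for a new conjecture by kernel similarity to previously proved theorems, weighted by which premises those theorems' proofs actually depended on. The Gaussian kernel, the feature vectors, and all names and data are hypothetical illustrations, not the paper's algorithm or corpus.

```python
# Toy kernel-based premise selection: premises earn kernel-weighted
# "votes" from past proofs whose theorems resemble the new conjecture.
# All data and names are illustrative assumptions.
import math

def gaussian_kernel(x, y, sigma=1.0):
    dist2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-dist2 / (2 * sigma ** 2))

def rank_premises(conjecture, past_theorems, proof_deps, premises):
    """Score each premise by similarity-weighted votes from past proofs."""
    scores = {p: 0.0 for p in premises}
    for name, features in past_theorems.items():
        weight = gaussian_kernel(conjecture, features)
        for p in proof_deps[name]:
            scores[p] += weight
    return sorted(premises, key=scores.get, reverse=True)

# Hypothetical corpus: two past theorems with symbol-count features,
# each proved from one lemma.
theorems = {"thm1": [1.0, 0.0], "thm2": [0.0, 1.0]}
deps = {"thm1": ["lemma_a"], "thm2": ["lemma_b"]}
ranking = rank_premises([0.9, 0.1], theorems, deps, ["lemma_a", "lemma_b"])
# The conjecture is closer to thm1, so lemma_a is ranked first.
```

The proof-dependency knowledge base described in the abstract plays the role of `deps` here: it supplies the precise premise-usage labels that the learner votes with.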