
    The situated common-sense knowledge in FunGramKB

    It has been widely demonstrated that expectation-based schemata, along the lines of Lakoff's propositional Idealized Cognitive Models, play a crucial role in text comprehension. Discourse inferences are grounded in the shared generalized knowledge activated from the situational model underlying the surface dimension of the text. Taking a cognitively plausible and linguistically aware approach to knowledge representation, FunGramKB stands out as a dynamic repository of lexical, constructional and conceptual knowledge that helps simulate human-level reasoning. The objective of this paper is to present a script model as a carrier of the situated common-sense knowledge required to help knowledge engineers construct more "intelligent" natural language processing systems.
    Periñán Pascual, J. C. (2012). The situated common-sense knowledge in FunGramKB. Review of Cognitive Linguistics, 10(1), 184-214. doi:10.1075/rcl.10.1.06per

    The Roles of Symbols in Neural-based AI: They are Not What You Think!

    We propose that symbols are first and foremost external communication tools used between intelligent agents that allow knowledge to be transferred in a more efficient and effective manner than having to experience the world directly. But they are also used internally within an agent, through a form of self-communication, to help formulate, describe and justify subsymbolic patterns of neural activity that truly implement thinking. Symbols, and the languages that make use of them, not only allow us to explain our thinking to others and ourselves, but also provide beneficial constraints (inductive bias) on learning about the world. In this paper we present relevant insights from neuroscience and cognitive science about how the human brain represents symbols and the concepts they refer to, and how today's artificial neural networks can do the same. We then present a novel neuro-symbolic hypothesis and a plausible architecture for intelligent agents that combines subsymbolic representations of symbols and concepts for learning and reasoning. Our hypothesis and associated architecture imply that symbols will remain critical to the future of intelligent systems NOT because they are the fundamental building blocks of thought, but because they are characterizations of subsymbolic processes that constitute thought.
    Comment: 28 pages

    Spinoza's Panpsychism


    Aristotle on natural slavery

    Aristotle's claim that natural slaves do not possess autonomous rationality (Pol. 1.5, 1254b20-23) cannot plausibly be interpreted in an unrestricted sense, since this would conflict with what Aristotle knew about non-Greek societies. Aristotle's argument requires only a lack of autonomous practical rationality. An impairment of the capacity for integrated practical deliberation, resulting from an environmentally induced excess or deficiency in thumos (Pol. 7.7, 1327b18-31), would be sufficient to make natural slaves incapable of eudaimonia without being obtrusively implausible relative to what Aristotle is likely to have believed about non-Greeks. Since Aristotle seems to have believed that the existence of people who can be enslaved without injustice is a hypothetical necessity, if those capable of eudaimonia are to achieve it, the existence of natural slaves has implications for our understanding of Aristotle's natural teleology.

    Puzzles of Anthropic Reasoning Resolved Using Full Non-indexical Conditioning

    I consider the puzzles arising from four interrelated problems involving "anthropic" reasoning, and in particular the "Self-Sampling Assumption" (SSA) - that one should reason as if one were randomly chosen from the set of all observers in a suitable reference class. The problem of Freak Observers might appear to force acceptance of SSA if any empirical evidence is to be credited. The Sleeping Beauty problem arguably shows that one should also accept the "Self-Indication Assumption" (SIA) - that one should take one's own existence as evidence that the number of observers is more likely to be large than small. But this assumption produces apparently absurd results in the Presumptuous Philosopher problem. Without SIA, however, a definitive refutation of the counterintuitive Doomsday Argument seems difficult. I show that these problems are satisfyingly resolved by applying the principle that one should always condition on all evidence - not just on the fact that you are an intelligent observer, or that you are human, but on the fact that you are a human with a specific set of memories. This "Full Non-indexical Conditioning" (FNC) approach usually produces the same results as assuming both SSA and SIA, with a sufficiently broad reference class, while avoiding their ad hoc aspects. I argue that the results of FNC are correct using the device of hypothetical "companion" observers, whose existence clarifies what principles of reasoning are valid. I conclude by discussing how one can use FNC to infer how densely we should expect intelligent species to occur, and by examining recent anthropic arguments in inflationary and string theory cosmology.
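    The SIA/FNC contrast in the Sleeping Beauty problem can be sketched numerically. The toy model below is an illustration, not the paper's formal argument: it assumes heads yields one awakening and tails two, and (for FNC) that each awakening independently produces a specific detailed memory state with some small assumed probability `eps`.

    ```python
    # Toy Sleeping Beauty calculation: SIA-style awakening counting vs.
    # Full Non-indexical Conditioning (FNC). The value of eps is an
    # illustrative assumption, not taken from the paper.
    eps = 1e-6                     # assumed chance of one exact memory state
    p_heads, p_tails = 0.5, 0.5    # fair coin

    # SIA / "thirder": weight each world by its number of awakenings.
    w_heads = p_heads * 1          # one awakening if heads
    w_tails = p_tails * 2          # two awakenings if tails
    sia_heads = w_heads / (w_heads + w_tails)

    # FNC: condition on "some awakening has exactly these memories".
    # P(memory occurs | heads) = eps; P(memory occurs | tails) = 1 - (1 - eps)^2.
    like_heads = eps
    like_tails = 1 - (1 - eps) ** 2
    fnc_heads = (p_heads * like_heads) / (p_heads * like_heads + p_tails * like_tails)

    print(round(sia_heads, 4))   # 0.3333
    print(round(fnc_heads, 4))   # approaches 1/3 as eps -> 0
    ```

    For small `eps` the tails likelihood is approximately `2 * eps`, so FNC reproduces the thirder answer without an explicit self-indication postulate, matching the abstract's claim that FNC usually agrees with SSA plus SIA.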

    Emotion, deliberation, and the skill model of virtuous agency

    A recent skeptical challenge denies that deliberation is essential to virtuous agency: what looks like genuine deliberation is just a post hoc rationalization of a decision already made by automatic mechanisms (Haidt 2001; Doris 2015). Annas’s account of virtue seems well-equipped to respond: by modeling virtue on skills, she can agree that virtuous actions are deliberation-free while insisting that their development requires significant thought. But Annas’s proposal is flawed: it over-intellectualizes deliberation’s developmental role and under-intellectualizes its significance once virtue is acquired. Doing better requires paying attention to a distinctive form of anxiety—one that functions to engage deliberation in the face of decisions that automatic mechanisms alone cannot resolve.

    Continuous Improvement Through Knowledge-Guided Analysis in Experience Feedback

    Continuous improvement in industrial processes is increasingly a key element of competitiveness for industrial systems. The management of experience feedback in this framework is designed to build, analyze and facilitate knowledge sharing among the problem-solving practitioners of an organization in order to improve process and product achievement. During Problem Solving Processes, the intellectual investment of experts is often considerable and the opportunities for expert knowledge exploitation are numerous: decision making, problem solving under uncertainty, and expert configuration. In this paper, our contribution relates to the structuring of a cognitive experience feedback framework, which allows a flexible exploitation of expert knowledge during Problem Solving Processes and the reuse of such collected experience. To that purpose, the proposed approach uses the general principles of root cause analysis for identifying the root causes of problems or events, the conceptual graphs formalism for the semantic conceptualization of the domain vocabulary, and the Transferable Belief Model for the fusion of information from different sources. The underlying formal reasoning mechanisms (logic-based semantics) in conceptual graphs enable intelligent information retrieval for the effective exploitation of lessons learned from past projects. An example illustrates the application of the proposed approach to the formalization of experience feedback processes in the transport industry sector.
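    The fusion step mentioned above rests on the Transferable Belief Model's conjunctive combination rule, which merges mass functions from several sources and, unlike normalized Dempster combination, keeps the conflict mass on the empty set. The sketch below is a minimal illustration under assumed inputs; the expert opinions and the frame {machine, operator} are hypothetical, not taken from the paper.

    ```python
    from itertools import product

    def conjunctive_combine(m1, m2):
        """TBM conjunctive rule: combine two mass functions, given as dicts
        mapping frozenset -> mass. Mass landing on the empty set (conflict)
        is kept, following the TBM's open-world stance."""
        out = {}
        for (a, wa), (b, wb) in product(m1.items(), m2.items()):
            inter = a & b
            out[inter] = out.get(inter, 0.0) + wa * wb
        return out

    # Hypothetical expert opinions about a root cause in {machine, operator}.
    M, O = frozenset({"machine"}), frozenset({"operator"})
    theta = M | O                       # the whole frame (total ignorance)
    m1 = {M: 0.6, theta: 0.4}           # expert 1: mostly blames the machine
    m2 = {O: 0.5, theta: 0.5}           # expert 2: leans toward the operator

    m12 = conjunctive_combine(m1, m2)
    # conflict m(empty) = 0.6*0.5 = 0.30, m({machine}) = 0.30,
    # m({operator}) = 0.20, m(theta) = 0.20
    ```

    The conflict mass (0.30 here) quantifies the disagreement between the two experts, which the framework can then use when weighing lessons learned from different sources.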

    Irreducible incoherence and intelligent design : a look into the conceptual toolbox of a pseudoscience

    The concept of Irreducible Complexity (IC) has played a pivotal role in the resurgence of the creationist movement over the past two decades. Evolutionary biologists and philosophers have unambiguously rejected the purported demonstration of “intelligent design” in nature, but there have been several, apparently contradictory, lines of criticism. We argue that this is in fact due to Michael Behe's own incoherent definition and use of IC. This paper offers an analysis of several equivocations inherent in the concept of Irreducible Complexity and discusses the way in which advocates of Intelligent Design Creationism (IDC) have conveniently turned IC into a moving target. An analysis of these rhetorical strategies helps us to understand why IC has gained such prominence in the IDC movement, and why, despite its complete lack of scientific merit, it has even convinced some knowledgeable persons of the impending demise of evolutionary theory.