
    Reasoning with conditionals

    This paper reviews the psychological investigation of reasoning with conditionals, with an emphasis on recent work. The first part presents a few methodological remarks. The second part considers the main theories of deductive reasoning (mental rules, mental models, and the probabilistic approach) in turn: their content is summarised, and the semantics they assume for "if" and the way they explain formal conditional reasoning are discussed, in particular in the light of experimental work on the probability of conditionals. The last part presents the recent shift of interest towards the study of conditional reasoning in context, that is, with large knowledge bases and uncertain premises.
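    A concrete way to see what is at stake in the experimental work on the probability of conditionals is the contrast between two readings of "if A then B": the conditional-probability reading P(B|A), which the probabilistic approach favours, and the material-conditional reading P(not-A or B). The Python sketch below computes both from an invented frequency table; none of the numbers come from the paper.

```python
# Two readings of "if the card is red, it is a square", computed from
# an invented frequency table (1 red square, 3 red circles, 16 blue
# shapes); the numbers are illustrative only.

counts = {
    ("red", "square"): 1,
    ("red", "circle"): 3,
    ("blue", "square"): 8,
    ("blue", "circle"): 8,
}
total = sum(counts.values())

# Conditional-probability reading: P(square | red), the Ramsey test.
n_red = sum(n for (c, _), n in counts.items() if c == "red")
p_cond = counts[("red", "square")] / n_red

# Material-conditional reading: P(not-red or square).
p_material = sum(n for (c, s), n in counts.items()
                 if c != "red" or s == "square") / total

print(p_cond)      # 0.25
print(p_material)  # 0.85
```

    Judgments that track 0.25 rather than 0.85 favour the conditional-probability semantics; this is the shape of the experimental evidence such reviews discuss.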

    Four essays in mathematical philosophy


    Statistical language learning

    Theoretical arguments based on the "poverty of the stimulus" have denied a priori the possibility that abstract linguistic representations can be learned inductively from exposure to the environment, given that the linguistic input available to the child is both underdetermined and degenerate. I reassess such learnability arguments by exploring (a) the type and amount of statistical information implicitly available in the input in the form of distributional and phonological cues; (b) psychologically plausible inductive mechanisms for constraining the search space; and (c) the nature of linguistic representations, algebraic or statistical. To do so I use three methodologies: experimental procedures, linguistic analyses based on large corpora of naturally occurring speech and text, and computational models implemented in computer simulations. In Chapters 1, 2, and 5, I argue that long-distance structural dependencies - traditionally hard to explain with simple distributional analyses based on n-gram statistics - can indeed be learned associatively, provided the intervening material is either highly variable or invariant (the Variability effect). In Chapter 3, I show that simple associative mechanisms instantiated in Simple Recurrent Networks can replicate the experimental findings under the same conditions of variability. Chapter 4 presents successes and limits of such results across perceptual modalities (visual vs. auditory) and modes of presentation (temporal vs. sequential), as well as the impact of long and short training procedures. In Chapter 5, I show that generalisation to abstract categories from stimuli framed in non-adjacent dependencies is also modulated by the Variability effect. In Chapter 6, I show that the putative separation of algebraic and statistical styles of computation, based on successful speech segmentation versus unsuccessful generalisation experiments (as published in a recent Science paper), is premature and is the effect of a preference for phonological properties of the input. In Chapter 7, computer simulations of learning irregular constructions suggest that it is possible to learn from positive evidence alone, despite Gold's celebrated arguments on the unlearnability of natural languages. Evolutionary simulations in Chapter 8 show that irregularities in natural languages can emerge from full regularity and remain stable across generations of simulated agents. In Chapter 9, I conclude that the brain may be endowed with a powerful statistical device for detecting structure, generalising, segmenting speech, and recovering from overgeneralisations. The experimental and computational evidence gathered here suggests that statistical language learning is more powerful than heretofore acknowledged in the current literature.
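    One statistical rationale behind the Variability effect can be sketched with transitional probabilities. In the hypothetical toy corpus below (the vocabulary and the hidden pairing are invented, not the thesis materials), the non-adjacent A_X_B frame stays perfectly predictive, while adjacent statistics flatten out as the number of intervening item types grows, plausibly making the frame easier to detect under high variability.

```python
# Toy illustration of why high variability of the intervening item can
# make a non-adjacent dependency (A _ B) stand out; vocabulary and the
# hidden pairing are invented for illustration.
import random
from collections import Counter

random.seed(0)
PAIRING = {"pel": "rud", "vot": "jic"}   # the hidden A -> B dependency

def make_corpus(n_middle_types, n_frames=500):
    """Generate A-X-B frames; n_middle_types controls X's variability."""
    middles = [f"x{i}" for i in range(n_middle_types)]
    starts = random.choices(list(PAIRING), k=n_frames)
    return [(a, random.choice(middles), PAIRING[a]) for a in starts]

def nonadjacent_tp(corpus, a):
    """P(correct final word | initial word a), skipping the middle."""
    frames = [t for t in corpus if t[0] == a]
    return sum(t[2] == PAIRING[a] for t in frames) / len(frames)

def peak_adjacent_tp(corpus, a):
    """Highest P(middle | a); flattens as variability grows."""
    middles = Counter(t[1] for t in corpus if t[0] == a)
    return max(middles.values()) / sum(middles.values())

for k in (2, 24):   # low vs. high variability of the intervening material
    corpus = make_corpus(k)
    print(k, round(nonadjacent_tp(corpus, "pel"), 2),
          round(peak_adjacent_tp(corpus, "pel"), 2))
```

    With two middle types the strongest adjacent transition is about as strong as the frame; with twenty-four it collapses, while the frame's predictiveness stays at 1.0.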

    Computational Complexity of Strong Admissibility for Abstract Dialectical Frameworks

    Abstract dialectical frameworks (ADFs) have been introduced as a formalism for modeling and evaluating argumentation that allows general logical satisfaction conditions. The different criteria used to settle the acceptance of arguments are called semantics. Semantics of ADFs have so far mainly been defined based on the concept of admissibility. Recently, the notion of strong admissibility has been introduced for ADFs. In the current work we study the computational complexity of the following reasoning tasks under strong admissibility semantics: (1) the credulous/skeptical decision problem; (2) the verification problem; (3) the strong justification problem; and (4) the problem of finding a smallest witness of strong justification of a queried argument.
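    ADF acceptance conditions are arbitrary propositional formulas, which is one source of the complexity studied here. As a simpler illustration of the underlying notion, the Python sketch below works at the level of plain Dung-style argumentation frameworks, which ADFs generalise: there, the grounded extension, reached by iterating the characteristic function from the empty set, is the largest strongly admissible set. The example framework is invented.

```python
# Illustration at the level of plain Dung-style argumentation frameworks
# (ADFs generalize these with per-argument acceptance conditions). In the
# Dung setting, the grounded extension is the largest strongly admissible
# set; the example framework below is invented.

def grounded_extension(args, attacks):
    """Iterate the characteristic function F(S) = {a | S defends a}
    from the empty set until a fixed point is reached."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in args}
    s = set()
    while True:
        defended = {a for a in args
                    if all(any((z, b) in attacks for z in s)
                           for b in attackers[a])}
        if defended == s:
            return s
        s = defended

args = {"a", "b", "c", "d"}
attacks = {("a", "b"), ("b", "c"), ("c", "d")}
print(sorted(grounded_extension(args, attacks)))   # ['a', 'c']
```

    In the full ADF setting, with general acceptance conditions, the corresponding decision and verification tasks become markedly harder, which is what the complexity results quantify.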

    Metasemantics and fuzzy mathematics

    The present thesis is an inquiry into the metasemantics of natural languages, with a particular focus on the philosophical motivations for countenancing degreed formal frameworks for both psychosemantics and truth-conditional semantics. Chapter 1 sets out to offer a bird's-eye view of our overall research project and the key questions that we set out to address. Chapter 2 provides a self-contained overview of the main empirical findings in the cognitive science of concepts and categorisation. This scientific background is offered in light of the fact that most variants of psychologically informed semantics see our network of concepts as providing the raw materials on which lexical and sentential meanings supervene. Consequently, the metaphysical study of internalistically construed meanings and the empirical study of our mental categories are overlapping research projects. Chapter 3 closely investigates a selection of species of conceptual semantics, together with reasons for adopting or disavowing them. We note that our ultimate aim is not to defend these perspectives on the study of meaning, but to argue that the project of making them formally precise naturally invites the adoption of degreed mathematical frameworks (e.g. probabilistic or fuzzy). In Chapter 4, we switch to the orthodox framework of truth-conditional semantics, and we present the limitations of a philosophical position that we call "classicism about vagueness". In the process, we come up with an empirical hypothesis for the psychological pull of the inductive soritical premiss, and we make an original objection against the epistemicist position, based on computability theory. Chapter 5 makes a different case for the adoption of degreed semantic frameworks, based on their (quasi-)superior treatment of the paradoxes of vagueness. Hence, the adoption of tools that allow for graded membership is well motivated under both semantic internalism and semantic externalism. At the end of this chapter, we defend an unexplored view of vagueness that we call "practical fuzzicism". Chapter 6, the final chapter, is a metamathematical enquiry into both the fuzzy model-theoretic semantics and the fuzzy Davidsonian semantics for formal languages of type-free truth in which precise truth-predications can be expressed.
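    A minimal illustration of what a degreed framework buys for the sorites: with graded membership and Lukasiewicz connectives, each instance of the inductive premiss "if h cm is tall, then h - 1 cm is tall" comes out almost, but not perfectly, true. The linear membership function below is an invented toy; degree theories differ over how such degrees would actually be fixed.

```python
# Minimal sketch of graded membership in a degreed (fuzzy) framework.
# The linear "tall" membership function is an invented toy example.

def tall(height_cm, lo=150.0, hi=190.0):
    """Degree in [0, 1] to which a height counts as 'tall'."""
    return min(1.0, max(0.0, (height_cm - lo) / (hi - lo)))

def luk_implies(p, q):
    """Lukasiewicz implication: truth degree of 'if p then q'."""
    return min(1.0, 1.0 - p + q)

# The soritical inductive premiss, instantiated step by step. Each
# instance is almost, but not perfectly, true, which is the
# degree-theoretic diagnosis of the paradox's pull.
for h in (190, 180, 170, 160):
    print(h, round(tall(h), 3), round(luk_implies(tall(h), tall(h - 1)), 3))
```

    Each instance of the premiss scores 0.975 here, so a long chain of modus ponens steps degrades the conclusion's degree gradually instead of licensing the paradoxical conclusion outright.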

    Reasoning with uncertainty using Nilsson's probabilistic logic and the maximum entropy formalism

    An expert system must reason with both certain and uncertain information. This thesis is concerned with the process of reasoning with uncertainty. Nilsson's elegant model of "Probabilistic Logic" has been chosen as the framework for this investigation, and the information-theoretical aspect of the maximum entropy formalism as the inference engine. These two formalisms, although semantically compelling, pose major complexity problems for the implementor. Probabilistic Logic models the complete uncertainty space, and the maximum entropy formalism finds the least-commitment probability distribution within that space. The main finding of this thesis is that Nilsson's Probabilistic Logic can be successfully developed beyond the structure proposed by Nilsson. Some deficiencies in Nilsson's model have been uncovered in the area of probabilistic representation, making Probabilistic Logic less powerful than Bayesian inference techniques. These deficiencies are examined, and a new model of entailment is presented which overcomes them, giving Probabilistic Logic the full representational power of Bayesian inference. The new model also preserves an important extension that Nilsson's Probabilistic Logic has over Bayesian inference: the ability to use uncertain evidence. Traditionally, the probabilistic solution proposed by the maximum entropy formalism is arrived at by solving non-linear simultaneous equations for the aggregate factors of the non-linear terms. In the new model the maximum entropy algorithms are shown to have the highly desirable property of tractability. Although these problems have been solved for probabilistic entailment, the problems of complexity are still prevalent in large databases of expert rules. This thesis therefore also considers the use of heuristics and meta-level reasoning in a complex knowledge base. Finally, a description of an expert system using these techniques is given.
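    Nilsson's probabilistic entailment can be phrased as linear programming over possible worlds: premise probabilities constrain an underdetermined distribution over worlds, and the entailed probability of a query sentence comes out as an interval. The Python sketch below uses illustrative premise probabilities P(A) = 0.7 and P(A -> B) = 0.9, not figures from the thesis.

```python
# Sketch of Nilsson-style probabilistic entailment as linear programming
# over possible worlds; the premise probabilities are illustrative.
import numpy as np
from scipy.optimize import linprog

# Worlds over {A, B}: (A&B, A&~B, ~A&B, ~A&~B).
worlds = [(1, 1), (1, 0), (0, 1), (0, 0)]
A_true = [1.0 if a else 0.0 for a, b in worlds]
A_impl_B = [0.0 if (a and not b) else 1.0 for a, b in worlds]  # material
B_true = [1.0 if b else 0.0 for a, b in worlds]

# Constraints: P(A) = 0.7, P(A -> B) = 0.9, world probabilities sum to 1
# (nonnegativity is linprog's default bound).
A_eq = np.array([A_true, A_impl_B, [1.0] * 4])
b_eq = np.array([0.7, 0.9, 1.0])

lo = linprog(c=B_true, A_eq=A_eq, b_eq=b_eq)                # minimise P(B)
hi = linprog(c=[-x for x in B_true], A_eq=A_eq, b_eq=b_eq)  # maximise P(B)
print(round(lo.fun, 3), round(-hi.fun, 3))   # 0.6 0.9: entailed bounds
```

    The maximum entropy formalism then selects the least-commitment distribution consistent with the same constraints, returning a single point inside that interval (here 0.75) rather than bounds.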
