
    Not all syllogisms are created equal: Varying premise believability reveals differences between conditional and categorical syllogisms

    Deductive reasoning is a fundamental cognitive skill and consequently has been the focus of much research over the past several decades. In the realm of syllogistic reasoning—judging the validity of a conclusion given two premises—a robust finding is the belief bias effect: broadly, the tendency for reasoners to judge believable conclusions as valid more often than unbelievable ones. How the believability of conclusion content influences syllogistic reasoning has been the subject of hundreds of experiments and has informed several theories of deductive reasoning; however, how the content of premises influences reasoning has been largely overlooked. In this thesis, I present five experiments that examine how premise content influences reasoning about categorical (i.e., statements with the words ‘some’ and ‘not’) and conditional (i.e., ‘if/then’ statements) syllogisms, which tend to be treated as interchangeable in the deductive reasoning literature. It is demonstrated that premise content influences reasoning in these two types of syllogism in fundamentally different ways. Specifically, Experiment 1 replicates and extends previous findings and demonstrates that for conditional syllogisms, belief bias occurs whether premises are believable or unbelievable; however, reasoners are more likely to judge a conclusion valid when it follows from believable than from unbelievable premises. Conversely, belief bias for categorical syllogisms occurs only when premises are believable; conclusion believability does not influence conclusion endorsement when premises are unbelievable. Based on these preliminary findings, I propose a theory that categorical and conditional syllogisms differ in the extent to which reasoners initially assume the premises to be true, and that this difference determines when in the reasoning process reasoners evaluate the believability of premises. Specifically, I propose that reasoners automatically assume that conditional, but not categorical, premises are true: because the word “if” in conditional statements elicits hypothetical thinking, conditional premises are assumed to be true for the duration of the reasoning process. After reasoning, premises can be “disbelieved” in a time-consuming process, and initial judgments about the conclusion may be revised, with a bias toward accepting as valid conclusions that follow from believable premises. By contrast, because categorical premises are phrased as factual propositions, reasoners judge their believability before reasoning about the conclusion. Unbelievable premises lead the reasoner to disregard the content of the rest of the syllogism, perhaps because the information in the problem is expected to be unhelpful in solving it. This theory is tested and supported by four additional experiments. Experiment 2 demonstrates that reasoners take longer to reason about conditional syllogisms with unbelievable than with believable premises, consistent with the proposal that unbelievable premises are “disbelieved” in a time-consuming process. Further, participants demonstrate belief bias for categorical syllogisms with unbelievable premises when they are instructed to assume that the premises are true (Experiment 3) or when the word ‘if’ precedes the categorical premises (Experiment 4). Finally, Experiment 5 uses eye tracking to demonstrate that premise believability influences post-conclusion premise looking durations for conditional syllogisms and pre-conclusion premise looking durations for categorical syllogisms. This finding supports the hypothesis that reasoners evaluate the believability of conditional premises after reasoning about the conclusion but evaluate the believability of categorical premises before reasoning about the conclusion. Further, Experiment 5 reveals that participants have poorer memory for the content of categorical syllogisms with unbelievable than with believable premises, whereas memory does not differ for conditional syllogisms with believable and unbelievable premises. This suggests that unbelievable premise content in categorical syllogisms is suppressed or ignored. These results and the proposed theory of premise evaluation are discussed in the context of contemporary theories of deductive reasoning.
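    By way of illustration only (these examples are not taken from the thesis), the two syllogism types discussed above can be contrasted as follows, using hypothetical contents:

        Conditional (‘if/then’) syllogism:
            Premise 1:  If an animal is a dog, then it is a mammal.
            Premise 2:  Rex is a dog.
            Conclusion: Therefore, Rex is a mammal.  (logically valid)

        Categorical syllogism (using ‘some’):
            Premise 1:  Some pets are dogs.
            Premise 2:  All dogs are mammals.
            Conclusion: Therefore, some pets are mammals.  (logically valid)

    In belief bias research of this kind, premise believability is typically varied by swapping a believable premise (e.g., “All dogs are mammals”) for an unbelievable one of the same logical form (e.g., “All dogs are reptiles”); the specific materials used in the thesis are not reproduced here.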

    Theories of the syllogism: A meta-analysis.


    Assessing the belief bias effect with ROCs: It's a response bias effect.


    The intersection between Descriptivism and Meliorism in reasoning research: further proposals in support of 'soft normativism'

    The rationality paradox centres on the observation that people are highly intelligent, yet show evidence of errors and biases in their thinking when measured against normative standards. Elqayam and Evans (e.g., 2011) reject normative standards in the psychological study of thinking, reasoning and deciding in favour of a ‘value-free’ descriptive approach to studying high-level cognition. In reviewing Elqayam and Evans’ position, we defend an alternative to descriptivism in the form of ‘soft normativism’, which allows for normative evaluations alongside the pursuit of descriptive research goals. We propose that normative theories have considerable value provided that researchers: (1) are alert to the philosophical quagmire of strong relativism; (2) are mindful of the biases that can arise from utilising normative benchmarks; and (3) engage in a focused analysis of the processing approach adopted by individual reasoners. We address the controversial ‘is–ought’ inference in this context and appeal to a ‘bridging solution’ to this contested inference that is based on the concept of ‘informal reflective equilibrium’. Furthermore, we draw on Elqayam and Evans’ recognition of a role for normative benchmarks in research programmes that are devised to enhance reasoning performance, and we argue that such Meliorist research programmes have a valuable reciprocal relationship with descriptivist accounts of reasoning. In sum, we believe that descriptions of reasoning processes are fundamentally enriched by evaluations of reasoning quality, and argue that if such standards are discarded altogether then our explanations and descriptions of reasoning processes are severely undermined.

    Effects of Dyslexia on Problem Solving - Strategies and Interventions for Syllogistic Reasoning

    When solving syllogisms, people can adopt either a spatial strategy, where spatial representations are used to illustrate relations between terms, or a verbal strategy, where the problem is represented in terms of letters and relational rules (Ford, 1995). People with dyslexia tend to adopt a spatial strategy when solving syllogisms, while people without dyslexia tend to adopt a verbal strategy (Bacon, Handley & McDonald, 2007). But how fixed are these strategic approaches? This thesis examines whether training that focuses on verbal or spatial representations of the problems affects performance for people with and without dyslexia, and whether the effectiveness of this training varies according to whether the syllogisms are categorised, on the basis of Ford’s (1995) results, as easiest to solve for verbal reasoners, easiest for spatial reasoners, or equally difficult for both types of reasoner. Five studies compared the performance of people with and without dyslexia to examine (1) individual differences in spontaneous reasoning strategies, (2) effects of figure and belief bias, (3) performance after being taught a verbal strategy, (4) performance after being taught a spatial strategy, and (5) the pattern of eye movements, to observe where attention is focused while solving the syllogisms. The results supported previous research showing that people do tend to reason spontaneously with a verbal or spatial strategy, but found no evidence of a difference between participants with and without dyslexia. The studies further showed that participants with dyslexia are affected by the figure of the syllogism (the placement of the middle term in relation to the end terms). Training was effective in encouraging all participants to switch solution strategies, but this effect appears independent of dyslexia status. Teaching a spatial strategy affected learning but did not improve problem solving and was not particularly helpful for the participants with dyslexia; it appears to make problems that are easier with a verbal strategy harder to solve. Examination of eye movements revealed that attention during problem solving was focused more on the terms in the premises than on the quantifiers. The pattern of eye fixations was the same regardless of the figure or problem type. There was, however, an interaction between problem type and area of interest (AOI), indicating longer processing time on premise 2 for problems that are difficult to solve with either a verbal or a spatial strategy. Overall, the studies suggest that participants with dyslexia face a burden in problem solving that is not alleviated by training in either spatial or verbal strategies, but that particular problems may be easier or harder to solve depending on whether a spatial or verbal strategy is spontaneously used by the participant, and that these differences in problem type are marked by eye fixation patterns during problem solving.

    The face of research: Do first impressions based on the facial appearance of scientists affect the selection and evaluation of science communication?

    First impressions based on facial appearance alone predict a large number of important social outcomes in areas of interest to the general public, such as politics, justice and economics. The current project aims to expand these findings to science communication, investigating both the impressions that the public forms of a scientist based on their facial appearance, and the impact that these impressions may have on the public’s selection and evaluation of the research conducted by the scientist in question. First, we investigated what social judgement traits predict looking like a “good scientist” (someone who does high-quality research) and an “interesting scientist” (someone whose research people show interest in). Three studies showed that looking competent and moral were positively related both to looking like a good scientist and to interest ratings, whereas looking physically attractive positively predicted interest ratings but was negatively related to looking like a good scientist. Subsequently, we investigated whether these perceptions translated into real-life consequences. Three studies examined the impact of first impressions on the public’s choice of scientific communications, and found that people were more likely to choose real science news stories to read or watch when they were paired with scientists high on interest judgements. Another three studies looked at whether the appearance of the researcher influenced people’s evaluations of real science news stories. We found that people judged the research to be of higher quality when it was associated with “good” scientists. Our findings provide novel insights into the social psychology of science communication, and flag a potential source of bias in the dissemination of scientific findings to the general public, stemming solely from the facial appearance of the scientist.

    A Defense of Pure Connectionism

    Connectionism is an approach to neural-networks-based cognitive modeling that encompasses the recent deep learning movement in artificial intelligence. It came of age in the 1980s, with its roots in cybernetics and earlier attempts to model the brain as a system of simple parallel processors. Connectionist models center on statistical inference within neural networks with empirically learnable parameters, which can be represented as graphical models. More recent approaches focus on learning and inference within hierarchical generative models. Contra influential and ongoing critiques, I argue in this dissertation that the connectionist approach to cognitive science possesses in principle (and, as is becoming increasingly clear, in practice) the resources to model even the richest and most distinctly human cognitive capacities, such as abstract, conceptual thought and natural language comprehension and production. Consonant with much previous philosophical work on connectionism, I argue that a core principle—that proximal representations in a vector space have similar semantic values—is the key to a successful connectionist account of the systematicity and productivity of thought, language, and other core cognitive phenomena. My work here differs from preceding work in philosophy in several respects: (1) I compare a wide variety of connectionist responses to the systematicity challenge and isolate two main strands that are both historically important and reflected in ongoing work today: (a) vector symbolic architectures and (b) (compositional) vector space semantic models; (2) I consider very recent applications of these approaches, including their deployment on large-scale machine learning tasks such as machine translation; (3) I argue, again on the basis mostly of recent developments, for a continuity in representation and processing across natural language, image processing and other domains; (4) I explicitly link broad, abstract features of connectionist representation to recent proposals in cognitive science similar in spirit, such as hierarchical Bayesian and free energy minimization approaches, and offer a single rebuttal of criticisms of these related paradigms; (5) I critique recent alternative proposals that argue for a hybrid Classical (i.e., serial symbolic)/statistical model of mind; (6) I argue that defending the most plausible form of a connectionist cognitive architecture requires rethinking certain distinctions that have figured prominently in the history of the philosophy of mind and language, such as that between word- and phrase-level semantic content, and between inference and association.
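    To make the vector-space principle mentioned in the abstract above concrete, here is a minimal, hypothetical sketch in Python (not drawn from the dissertation): words with similar meanings are assigned nearby vectors, so their cosine similarity is high, while unrelated words lie further apart. The embedding values are invented for demonstration; real connectionist models learn such representations from data.

        import numpy as np

        def cosine_similarity(a, b):
            # Cosine of the angle between two vectors: near 1 when they point
            # in similar directions, near 0 when they are unrelated.
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

        # Hypothetical 4-dimensional word embeddings, hand-picked for illustration;
        # real models use hundreds of dimensions learned from text corpora.
        embeddings = {
            "cat": np.array([0.90, 0.80, 0.10, 0.00]),
            "dog": np.array([0.85, 0.75, 0.20, 0.05]),
            "car": np.array([0.10, 0.05, 0.90, 0.80]),
        }

        print(cosine_similarity(embeddings["cat"], embeddings["dog"]))  # high: semantically close
        print(cosine_similarity(embeddings["cat"], embeddings["car"]))  # low: semantically distant

    In this geometric sense, "proximal representations in a vector space have similar semantic values": semantic relatedness is modelled as proximity between learned vectors.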