
    Manipulating Belief Bias Across the Lifespan

    In today’s political climate, when basic facts and reasoning are seemingly up for debate, it is increasingly important to be able to identify well-reasoned arguments, regardless of one’s political leanings, and to retain this skill throughout the lifespan. Research has shown, however, a persistent belief bias: a tendency to judge an argument’s validity based on its conclusion’s agreement with one’s beliefs rather than its logical quality. Other findings suggest that belief bias can be reduced by explicit instruction to avoid it. The current project explores whether older adults, who are believed to be more prone to biased reasoning, respond differently to such instruction, and seeks to identify other individual differences in belief bias. Participants (41 young adults, 33 older adults) completed an online survey in which they evaluated valid and invalid syllogisms about political topics, both before and after instruction to avoid belief bias. Contrary to the literature, there was no significant age-group difference in bias scale scores or in correction after the manipulation; however, responsiveness to de-biasing instructions was inversely related to political conservatism. The findings call into doubt the general claim that older adults are categorically more biased, and further research is suggested.

    'At least one' problem with 'some' formal reasoning paradigms

    In formal reasoning, the quantifier "some" means "at least one and possibly all." In contrast, reasoners often pragmatically interpret "some" to mean "some, but not all" on both immediate-inference and Euler circle tasks. It is still unclear whether pragmatic interpretations can explain the high rates of errors normally observed on syllogistic reasoning tasks. To address this issue, in the present experiments we presented participants (reasoners) with either standard quantifiers or clarified quantifiers designed to precisely articulate the quantifiers' logical interpretations. In Experiment 1, reasoners made significantly more logical responses and significantly fewer pragmatic responses on an immediate-inference task when presented with logically clarified as opposed to standard quantifiers. In Experiment 2, this finding was extended to a variant of the immediate-inference task in which reasoners were asked to deduce what followed from premises they were to assume to be false. In Experiment 3, we used a syllogistic reasoning task and observed that logically clarified premises reduced pragmatic and increased logical responses relative to standard ones, providing strong evidence that pragmatic responses can explain some aspects of the errors made in the syllogistic reasoning task. These findings suggest that standard quantifiers should be replaced with logically clarified quantifiers in teaching and in future research.
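    The contrast between the logical and pragmatic readings of "some" described in this abstract can be made concrete with set semantics. A minimal sketch (the terms and example sets are illustrative, not from the study's materials):

```python
# Terms are modelled as Python sets. "Some A are B" has two readings:
# the logical reading ("at least one and possibly all") and the
# pragmatic reading ("some, but not all").

def some_logical(a, b):
    """Logical reading: at least one member of A is in B (possibly all)."""
    return len(a & b) >= 1

def some_pragmatic(a, b):
    """Pragmatic reading: at least one member of A is in B, but not all."""
    return len(a & b) >= 1 and not a <= b

dogs = {"rex", "fido", "spot"}
mammals = {"rex", "fido", "spot", "dumbo"}  # every dog is a mammal

# "Some dogs are mammals" is true on the logical reading but false on the
# pragmatic reading, because in fact all dogs are mammals.
print(some_logical(dogs, mammals))    # True
print(some_pragmatic(dogs, mammals))  # False
```

    On this picture, a "logically clarified" quantifier simply spells out the `some_logical` truth conditions in the premise wording, so that reasoners do not default to the `some_pragmatic` interpretation.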

    The source of the truth bias: Heuristic processing?

    People believe others are telling the truth more often than they actually are; this is called the truth bias. Surprisingly, when a speaker is judged at multiple points across their statement, the truth bias declines. Previous claims argue this is evidence of a shift from (biased) heuristic processing to (reasoned) analytical processing. In four experiments we contrast the heuristic-analytic model (HAM) with alternative accounts. In Experiment 1, the decrease in truth responding was not the result of speakers appearing more deceptive, but was instead attributable to the rater's processing style. Yet contrary to HAMs, across three experiments we found the decline in bias was not related to the amount of processing time available (Experiments 1–3) or the communication channel (Experiment 2). In Experiment 4 we found support for a new account: that the bias reflects whether raters perceive the statement to be internally consistent.

    Belief bias in deductive reasoning with syllogisms

    The aim of this paper is to analyze belief bias in an argument evaluation task. Belief bias has been characterized as the tendency to judge arguments with believable conclusions as valid and arguments with unbelievable conclusions as invalid. We designed and applied a syllogism evaluation task in which the response times used to evaluate each argument were recorded, in order to test certain predictions of mental model theory and dual-process theories. The results show a strong belief bias, more accentuated in the evaluation of invalid syllogisms than of valid ones. Regarding response times, the data obtained are consistent with dual-process theories, in particular with the serial model. However, these results contradict the prediction of mental model theory that latency increases when evaluating valid syllogisms.

    Using forced choice to test belief bias in syllogistic reasoning.

    In deductive reasoning, believable conclusions are more likely to be accepted regardless of their validity. Although many theories argue that this belief bias reflects a change in the quality of reasoning, distinguishing qualitative changes from simple response biases can be difficult (Dube, Rotello, & Heit, 2010). We introduced a novel procedure that controls for response bias. In Experiments 1 and 2, the task required judging which of two simultaneously presented syllogisms was valid. Surprisingly, there was no evidence for belief bias with this forced choice procedure. In Experiment 3, the procedure was modified so that only one set of premises was viewable at a time. An effect of beliefs emerged: unbelievable conclusions were judged more accurately, supporting the claim that beliefs affect the quality of reasoning. Experiments 4 and 5 replicated and extended this finding, showing that the effect was mediated by individual differences in cognitive ability and analytic cognitive style. Although the positive findings of Experiments 3–5 are most relevant to the debate about the mechanisms underlying belief bias, the null findings of Experiments 1 and 2 offer insight into how the presentation of an argument influences the manner in which people reason.

    Matching bias in syllogistic reasoning: Evidence for a dual-process account from response times and confidence ratings

    We examined matching bias in syllogistic reasoning by analysing response times, confidence ratings, and individual differences. Roberts’ (2005) “negations paradigm” was used to generate conflict between the surface features of problems and the logical status of conclusions. The experiment replicated matching bias effects in conclusion evaluation (Stupple & Waterhouse, 2009), revealing increased processing times for matching/logic “conflict problems”. Results paralleled chronometric evidence from the belief bias paradigm indicating that logic/belief conflict problems take longer to process than non-conflict problems (Stupple, Ball, Evans, & Kamal-Smith, 2011). Individuals’ response times for conflict problems also showed patterns of association with the degree of overall normative responding. Acceptance rates, response times, metacognitive confidence judgements, and individual differences all converged in supporting dual-process theory. This is noteworthy because dual-process predictions about heuristic/analytic conflict in syllogistic reasoning generalised from the belief bias paradigm to a situation where matching features of conclusions, rather than beliefs, were set in opposition to logic.

    Fluency and belief bias in deductive reasoning: new indices for old effects.

    Models based on signal detection theory (SDT) have occupied a prominent role in domains such as perception, categorization, and memory. Recent work by Dube et al. (2010) suggests that the framework may also offer important insights in the domain of deductive reasoning. Belief bias in reasoning has traditionally been examined using indices based on raw endorsement rates, indices that critics have claimed are highly problematic. We discuss a new set of SDT indices fit for the investigation of belief bias and apply them to new data examining the effect of perceptual disfluency on belief bias in syllogisms. In contrast to the traditional approach, the SDT indices do not violate important statistical assumptions, resulting in a decreased Type 1 error rate. Based on analyses using these novel indices we demonstrate that perceptual disfluency leads to decreased reasoning accuracy, contrary to predictions. Disfluency also appears to eliminate the typical link found between cognitive ability and the effect of beliefs on accuracy. Finally, replicating previous work, we demonstrate that cognitive ability leads to an increase in reasoning accuracy and a decrease in the response bias component of belief bias.
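    The SDT approach sketched in this abstract replaces raw endorsement rates with two standard indices: sensitivity (d′) and response criterion (c), computed from a hit rate (valid syllogisms endorsed) and a false-alarm rate (invalid syllogisms endorsed). A minimal sketch of the standard equal-variance computation; the endorsement rates below are invented for illustration and are not the paper's data:

```python
from statistics import NormalDist

def sdt_indices(hit_rate, fa_rate):
    """Equal-variance SDT: d' = z(H) - z(F), c = -(z(H) + z(F)) / 2,
    where H is the hit rate (valid problems endorsed) and F is the
    false-alarm rate (invalid problems endorsed)."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, c

# Hypothetical rates: believable conclusions are endorsed more often
# whether or not the syllogism is valid.
d_bel, c_bel = sdt_indices(hit_rate=0.85, fa_rate=0.40)
d_unb, c_unb = sdt_indices(hit_rate=0.70, fa_rate=0.20)

print(f"believable:   d' = {d_bel:.2f}, c = {c_bel:.2f}")
print(f"unbelievable: d' = {d_unb:.2f}, c = {c_unb:.2f}")
```

    On this analysis, a belief effect confined to the response-bias component shows up as a more liberal criterion (lower c) for believable problems alongside a similar d′, which is how the "response bias component of belief bias" mentioned above is separated from genuine accuracy differences.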

    The intersection between Descriptivism and Meliorism in reasoning research: further proposals in support of 'soft normativism'

    The rationality paradox centres on the observation that people are highly intelligent, yet show evidence of errors and biases in their thinking when measured against normative standards. Elqayam and Evans (e.g., 2011) reject normative standards in the psychological study of thinking, reasoning and deciding in favour of a ‘value-free’ descriptive approach to studying high-level cognition. In reviewing Elqayam and Evans’ position, we defend an alternative to descriptivism in the form of ‘soft normativism’, which allows for normative evaluations alongside the pursuit of descriptive research goals. We propose that normative theories have considerable value provided that researchers: (1) are alert to the philosophical quagmire of strong relativism; (2) are mindful of the biases that can arise from utilising normative benchmarks; and (3) engage in a focused analysis of the processing approach adopted by individual reasoners. We address the controversial ‘is–ought’ inference in this context and appeal to a ‘bridging solution’ to this contested inference that is based on the concept of ‘informal reflective equilibrium’. Furthermore, we draw on Elqayam and Evans’ recognition of a role for normative benchmarks in research programmes that are devised to enhance reasoning performance, and we argue that such Meliorist research programmes have a valuable reciprocal relationship with descriptivist accounts of reasoning. In sum, we believe that descriptions of reasoning processes are fundamentally enriched by evaluations of reasoning quality, and argue that if such standards are discarded altogether then our explanations and descriptions of reasoning processes are severely undermined.

    Slower is not always better: Response-time evidence clarifies the limited role of miserly information processing in the Cognitive Reflection Test

    We report a study examining the role of ‘cognitive miserliness’ as a determinant of poor performance on the standard three-item Cognitive Reflection Test (CRT). The cognitive miserliness hypothesis proposes that people often respond incorrectly on CRT items because of an unwillingness to go beyond default, heuristic processing and invest time and effort in analytic, reflective processing. Our analysis (N = 391) focused on people's response times to CRT items to determine whether predicted associations are evident between miserly thinking and the generation of incorrect, intuitive answers. Evidence indicated only a weak correlation between CRT response times and accuracy. Item-level analyses also failed to demonstrate predicted response time differences between correct analytic and incorrect intuitive answers for two of the three CRT items. We question whether participants who give incorrect intuitive answers on the CRT can legitimately be termed cognitive misers, and whether the three CRT items measure the same general construct.