Why Do Humans Reason? Arguments for an Argumentative Theory
Reasoning is generally seen as a means to improve knowledge and make better decisions. However, much evidence shows that reasoning often leads to epistemic distortions and poor decisions. This suggests that the function of reasoning should be rethought. Our hypothesis is that the function of reasoning is argumentative. It is to devise and evaluate arguments intended to persuade. Reasoning so conceived is adaptive given the exceptional dependence of humans on communication and their vulnerability to misinformation. A wide range of evidence in the psychology of reasoning and decision making can be reinterpreted and better explained in the light of this hypothesis. Poor performance in standard reasoning tasks is explained by the lack of argumentative context. When the same problems are placed in a proper argumentative setting, people turn out to be skilled arguers. Skilled arguers, however, are not after the truth but after arguments supporting their views. This explains the notorious confirmation bias. This bias is apparent not only when people are actually arguing, but also when they are reasoning proactively from the perspective of having to defend their opinions. Reasoning so motivated can distort evaluations and attitudes and allow erroneous beliefs to persist. Proactively used reasoning also favors decisions that are easy to justify but not necessarily better. In all these instances traditionally described as failures or flaws, reasoning does exactly what can be expected of an argumentative device: it looks for arguments that support a given conclusion and, ceteris paribus, favors conclusions for which arguments can be found.
Information enforcement in learning with graphics: improving syllogistic reasoning skills
This thesis is an investigation into the factors that contribute to good choices among graphical systems used in teaching, and the feasibility of implementing teaching software that uses this knowledge. The thesis describes a mathematical metric derived from a cognitive theory of human diagram processing. The theory characterises differences among representations by their ability to express information. The theory provides the factors and relationships needed to build the metric. It says that good representations are easily processed because they are more vivid, more tractable, and less expressive than poor representations. The metric is applied to abstract systems for teaching and learning syllogistic reasoning: TARSKI'S WORLD, EULER CIRCLES, VENN DIAGRAMS, and CARROLL'S GAME OF LOGIC. A rank ordering reflects the value of each system predicted by the theory and the metric. The theory, the metric, and the systems are then tested in empirical studies. Five studies involving sixty-eight learners examined the benefit of software based on these abstract systems. The studies showed the theory correctly predicted learners' success with the circle systems and poorer performance with TARSKI'S WORLD. The metric showed small but clear differences in expressivity between the circle systems. Differences between the results of learners using the circle systems contradicted the predictions of the metric. Learners with mathematical training were better equipped and more successful at learning syllogistic reasoning with the systems. Performance of learners without mathematical training declined after using the software systems. Diagrams drawn by learners, together with video footage collected during problem solving, led to a catalogue of errors, misconceptions, and some helpful strategies for learning from graphical systems. A cognitive style test investigated the poor performance of non-mathematically trained learners.
Learners with mathematical training showed serialist and versatile learning styles, while learners without this training showed a holist learning style. This is consistent with the hypothesis that non-mathematically trained learners emphasise the use of semantic cues during learning and problem solving. A card-sorting task investigated learners' preferences for parts of the graphical lexicon used in the diagram systems. Learners' preference for the EULER lexicon made the system's poor results in earlier studies harder to explain. Video footage of learners using the systems in the final study illustrated useful learning strategies and improved performance with EULER while individual instruction was available. Further work describes a preliminary design for an adaptive syllogism tutor and other related work.
Logical models for bounded reasoners
This dissertation aims at the logical modelling of aspects of human reasoning, informed by facts on the bounds of human cognition. We break down this challenge into three parts. In Part I, we discuss the place of logical systems for knowledge and belief in the Rationality Debate and we argue for systems that formalize an alternative picture of rationality -- one wherein empirical facts have a key role (Chapter 2). In Part II, we design logical models that explicitly encode the deductive reasoning of a single bounded agent and the variety of processes underlying it. This is achieved through the introduction of a dynamic, resource-sensitive, impossible-worlds semantics (Chapter 3). We then show that this type of semantics can be combined with plausibility models (Chapter 4) and that it can be instrumental in modelling the logical aspects of System 1 (“fast”) and System 2 (“slow”) cognitive processes (Chapter 5). In Part III, we move from single- to multi-agent frameworks. This unfolds in three directions: (a) the formation of beliefs about others (e.g. due to observation, memory, and communication), (b) the manipulation of beliefs (e.g. via acts of reasoning about oneself and others), and (c) the effect of the above on group reasoning. These questions are addressed, respectively, in Chapters 6, 7, and 8. We finally discuss directions for future work and reflect on the contribution of the thesis as a whole (Chapter 9).
Proceedings of the 1st Doctoral Consortium at the European Conference on Artificial Intelligence (DC-ECAI 2020)
1st Doctoral Consortium at the European Conference on Artificial Intelligence (DC-ECAI 2020), 29-30 August 2020, Santiago de Compostela, Spain. The DC-ECAI 2020 provides a unique opportunity for PhD students who are close to finishing their doctoral research to interact with experienced researchers in the field. Senior members of the community are assigned as mentors to each group of students based on the students' research or similarity of research interests. The DC-ECAI 2020, held virtually this year, allows students from all over the world to present their research, discuss their ongoing research and career plans with their mentor, network with other participants, and receive training and mentoring about career planning and career options.
Proceedings of the 11th International Conference on Cognitive Modeling: ICCM 2012
The International Conference on Cognitive Modeling (ICCM) is the premier conference for research on computational models and computation-based theories of human behavior. ICCM is a forum for presenting, discussing, and evaluating the complete spectrum of cognitive modeling approaches, including connectionism, symbolic modeling, dynamical systems, Bayesian modeling, and cognitive architectures. ICCM includes basic and applied research across a wide variety of domains, ranging from low-level perception and attention to higher-level problem-solving and learning. Online version published by Universitätsverlag der TU Berlin (www.univerlag.tu-berlin.de).
Understanding and supporting belief accuracy in a digital world
Advances in computing capacities have given rise to a “digital world” in which information can be accessed and shared at a faster pace, larger scale, and lower cost than was previously possible. While this new digital world has promised a more informed public, research over the past decade has raised major concerns about the accuracy of people’s beliefs, pointing to increasing polarisation, anti-intellectualism, and conspiratorial thinking. Efforts to understand why the promise of the digital world has not been realised often follow one of two perspectives. On one hand, psychological studies argue that humans process information irrationally to believe what they want to believe. On the other hand, studies of new digital media argue that structural features of the digital world present distorted information to users. In this thesis, I challenge these literatures by highlighting the limitations of widely accepted research methods, and provide initial evidence that the same technologies denounced for undermining the integrity of our beliefs can be redesigned to promote accurate decision making. Using Herbert Simon’s theory of bounded rationality as an organising framework, I present three studies examining (1) optimistic belief updating as a psychological account of belief inaccuracy “in the mind,” (2) moral contagion as a structural account of belief inaccuracy “in the world,” and (3) rewiring algorithms as a novel digital tool to support belief accuracy online. Theoretical, methodological, and practical implications are discussed.
A computational model of focused attention meditation and its transfer to a sustained attention task
Explaining Imagination
Imagination will remain a mystery—we will not be able to explain imagination—until we can break it into simpler parts that are more easily understood. Explaining Imagination is a guidebook for doing just that, where the simpler parts are other familiar mental states like beliefs, desires, judgments, decisions, and intentions. In different combinations and contexts, these states constitute cases of imagining. This reductive approach to imagination is at direct odds with the current orthodoxy, which sees imagination as an irreducible, sui generis mental state or process—one that influences our judgments, beliefs, desires, and so on, without being constituted by them. Explaining Imagination looks closely at the main contexts where imagination is thought to be at work and argues that, in each case, the capacity is best explained by appeal to a person’s beliefs, judgments, desires, intentions, or decisions. The proper conclusion is not that there are no imaginings after all, but that these other states simply constitute the relevant cases of imagining. Contexts explored in depth include: hypothetical and counterfactual reasoning, engaging in pretense, appreciating fictions, and generating creative works. The special role of mental imagery within states like beliefs, desires, and judgments is explained in a way that is compatible with reducing imagination to more basic folk psychological states. A significant upshot is that, in order to create an artificial mind with an imagination, we need only give it these more ordinary mental states.