Logic: A Modern Guide
This book is written for those who wish to learn the basic principles of formal logic and, more importantly, some straightforward methods for unpicking arguments and assessing their truth and validity.
The first section explains the ideas behind traditional logic, which was formulated well over two thousand years ago by the ancient Greeks. Terms such as 'categorical syllogism', 'premise', 'deduction' and 'validity' may appear inscrutable at first sight, but are easily understood with examples that bring the subject to life. Traditionally, Venn diagrams have been employed to test arguments. These are very useful, but their application is limited and they are not open to quantification. The mid-section of this book introduces a methodology that makes the analysis of arguments accessible through a new form of diagram, modified from those of the mathematician Leonhard Euler. These new diagrammatic methods are employed to demonstrate an addition to the basic form of the syllogism, including a refined definition of the terms 'most' and 'some' within propositions. This may seem a little obscure at the moment, but one will readily apprehend these new methods and principles of a more modern logic.
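The diagrammatic testing of arguments described above has a direct computational analogue: a syllogism is valid exactly when no assignment of "inhabited"/"empty" to the eight regions of a three-circle Venn diagram satisfies both premises while falsifying the conclusion. A minimal brute-force sketch of this idea (the function and variable names are illustrative, not from the book):

```python
from itertools import product

# Regions of a three-circle Venn diagram, encoded as (in_S, in_M, in_P) triples.
REGIONS = list(product([False, True], repeat=3))  # 8 regions
S, M, P = 0, 1, 2

def all_are(world, a, b):
    # "All a are b": every inhabited region inside a is also inside b.
    return all(not world[r] for r in REGIONS if r[a] and not r[b])

def some_are(world, a, b):
    # "Some a are b": at least one inhabited region lies in both.
    return any(world[r] for r in REGIONS if r[a] and r[b])

def valid(premises, conclusion):
    # Valid iff no "world" (assignment of inhabited/empty to regions)
    # satisfies every premise while falsifying the conclusion.
    for bits in product([False, True], repeat=len(REGIONS)):
        world = dict(zip(REGIONS, bits))
        if all(p(world) for p in premises) and not conclusion(world):
            return False
    return True

# Barbara: All M are P, All S are M  |-  All S are P  (valid)
print(valid([lambda w: all_are(w, M, P), lambda w: all_are(w, S, M)],
            lambda w: all_are(w, S, P)))  # True
# Undistributed middle: All P are M, All S are M  |-  All S are P  (invalid)
print(valid([lambda w: all_are(w, P, M), lambda w: all_are(w, S, M)],
            lambda w: all_are(w, S, P)))  # False
```

The search over 2^8 = 256 worlds is tiny, which is why diagrammatic methods work so well for classical syllogisms; quantified extensions like 'most', as the book notes, need more than emptiness/non-emptiness per region.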
Development of fuzzy syllogistic algorithms and applications to distributed reasoning approaches
Thesis (Master)--Izmir Institute of Technology, Computer Engineering, Izmir, 2010. Includes bibliographical references (leaves 44-45). Text in English; abstract in Turkish and English. x, 65 leaves.
A syllogism, also known as a rule of inference or logical appeal, is a formal logical scheme used to draw a conclusion from a set of premises. It is a form of deductive reasoning in which the conclusion is inferred from the stated premises. The syllogistic system consists of premises and conclusions systematically combined into so-called figures and moods. It is a theory of reasoning developed by Aristotle, who is known as one of the most important contributors to Western thought and logic. Since Aristotle, philosophers and sociologists have successfully modelled human thought and reasoning with syllogistic structures. A major gap, however, is that the mathematical properties of the whole syllogistic system have not yet been fully revealed. Being able to calculate any syllogistic property exactly, using a single algorithm, could facilitate the modelling of virtually any sort of consistent, inconsistent or approximate human reasoning. In this work, generic fuzzifications of sample invalid syllogisms and formal proofs of their validity with set-theoretic representations are presented. The study further discusses the mapping of sample real-world statements onto those syllogisms, along with statistics on the results of the algorithm applied to them. This syllogistic framework can be used in various fields that employ syllogisms as inference mechanisms, such as the semantic web, object-oriented programming and data-mining reasoning processes.
Essence and Necessity, and the Aristotelian Modal Syllogistic: A Historical and Analytical Study
The following is a critical and historical account of Aristotelian Essentialism informed by recent work on Aristotle's modal syllogistic. The semantics of the modal syllogistic are interpreted in a way that is motivated by Aristotle and that makes his validity claims in the Prior Analytics consistent to a higher degree than previously developed interpretative models. In Chapter One, ancient and contemporary objections to the Aristotelian modal syllogistic are discussed. A resolution to apparent inconsistencies in Aristotle's modal syllogistic is proposed and developed out of recent work by Patterson, Rini, and Malink. In particular, I argue that the semantics of negation in modal contexts are distinct from those of assertoric negative claims. Given my interpretive model of Aristotle's semantics, in Chapter Two I provide proofs for each of the mixed apodictic syllogisms and propose a method of using Venn diagrams to visualize the validity claims Aristotle makes in the Prior Analytics. Chapter Three explores how Aristotle's syllogistic fits within his philosophy of science and demonstration, particularly within the context of the Posterior Analytics. Consideration is given to the Aristotelian understanding of the relationship among necessity, explanation, definition, and essence. Chapter Four applies Aristotelian modal logic in contemporary contexts. I contrast Aristotelian modality and essentialism with contemporary modalism based upon the semantics of possible worlds, e.g. Kripke and Putnam. I also develop an account of how Aristotelian modal logic can ground a sortal-dependent theory of identity, as discussed by Wiggins.
Approximate syllogistic reasoning: a contribution to inference patterns and use cases
In this thesis, two models of syllogistic reasoning are proposed for dealing with arguments that involve fuzzy quantified statements and approximate chaining. The modelling of quantified statements is based on the Theory of Generalized Quantifiers, which allows us to manage different kinds of quantifiers simultaneously, and the inference process is interpreted in terms of a mathematical optimization problem, which allows us to deal with more arguments than standard deductive ones. For the case of approximate chaining, we propose using synonymy, as recorded in a thesaurus, to calculate the degree of confidence of the argument according to the degree of similarity between chaining terms. As use cases, different types of Bayesian reasoning (Generalized Bayes' Theorem, Bayesian networks and probabilistic reasoning in legal argumentation) are analysed with a view to expressing them through syllogisms.
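In the Theory of Generalized Quantifiers mentioned above, a statement "Q A are B" is evaluated against the proportion |A∩B|/|A|, with fuzzy quantifiers such as 'most' given a membership function over that proportion. A hedged sketch of this evaluation step (the trapezoidal thresholds for 'most' are illustrative assumptions, not the thesis's definitions):

```python
def proportion(a, b):
    # Fraction of elements of set a that also belong to set b.
    return len(a & b) / len(a) if a else 0.0

def most(p, lo=0.6, hi=0.9):
    # Hypothetical trapezoidal membership for "most":
    # degree 0 at or below lo, rising linearly, degree 1 at or above hi.
    if p <= lo:
        return 0.0
    if p >= hi:
        return 1.0
    return (p - lo) / (hi - lo)

students = set(range(100))
passed = set(range(75))   # 75 of the 100 students passed
deg = most(proportion(students, passed))
print(round(deg, 2))      # 0.5 -- "most students passed" holds to degree 0.5
```

An inference engine of the kind described would then search for quantifier degrees in the conclusion compatible with the premise degrees, which is where the optimization formulation comes in.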
Information enforcement in learning with graphics: improving syllogistic reasoning skills
This thesis is an investigation into the factors that contribute to good choices among graphical systems used in teaching, and into the feasibility of implementing teaching software that uses this knowledge.
The thesis describes a mathematical metric derived from a cognitive theory of human diagram processing. The theory characterises differences among representations by their ability to express information, and provides the factors and relationships needed to build the metric. It says that good representations are easily processed because they are more vivid, more tractable and less expressive than poor representations.
The metric is applied to abstract systems for teaching and learning syllogistic reasoning: TARSKI'S WORLD, EULER CIRCLES, VENN DIAGRAMS and CARROLL'S GAME OF LOGIC. A rank ordering reflects the value of each system predicted by the theory and the metric. The theory, the metric and the systems are then tested in empirical studies: five studies involving sixty-eight learners examined the benefit of software based on these abstract systems.
The studies showed that the theory correctly predicted learners' success with the circle systems and their poorer performance with TARSKI'S WORLD. The metric showed small but clear differences in expressivity between the circle systems, but differences between the results of learners using the circle systems contradicted its predictions.
Learners with mathematical training were better equipped for, and more successful at, learning syllogistic reasoning with the systems; the performance of learners without mathematical training declined after using the software systems. Diagrams drawn by learners, together with video footage collected during problem solving, led to a catalogue of errors, misconceptions and some helpful strategies for learning from graphical systems.
A cognitive style test investigated the poor performance of non-mathematically trained learners. Learners with mathematics training showed serialist and versatile learning styles, while learners without this training showed a holist learning style. This is consistent with the hypothesis that non-mathematically trained learners emphasise the use of semantic cues during learning and problem solving.
A card-sorting task investigated learners' preferences for parts of the graphical lexicon used in the diagram systems. Learners' preference for the EULER lexicon made that system's poor results in the earlier studies harder to explain. Video footage of learners using the systems in the final study illustrated useful learning strategies, and performance with EULER improved while individual instruction was available.
Further work describes a preliminary design for an adaptive syllogism tutor and other related work.
Debiasing reasoning: a signal detection analysis
This thesis focuses on deductive reasoning and how the belief bias effect can be reduced or ameliorated. Belief bias is a phenomenon whereby the evaluation of the logical validity of an argument is skewed by the degree to which the reasoner believes the conclusion. There has been little research examining ways of reducing such bias, or whether some effective intervention can make people reason more on the basis of logic. Traditional analyses of these data have focused on simple measures of accuracy, typically deducting the number of incorrect answers from the number of correct answers to give an accuracy score. However, recent theoretical developments have shown that this approach fails to separate reasoning biases from response biases. A reasoning bias is one which affects individuals' ability to discriminate between valid and invalid arguments, whereas a response bias is simply an individual's tendency to give a particular answer, independent of reasoning. A Signal Detection Theory (SDT) approach is used to calculate measures of reasoning accuracy and response bias, and these measures are then analysed using mixed effects models. Chapter 1 gives a general introduction to the topic and outlines the content of subsequent chapters. In Chapter 2, I review the psychological literature around belief bias, the growth of the use of SDT models, and approaches to reducing bias. Chapter 3 covers the methodology and includes a thorough description of the calculation of the SDT measures and an explanation of the mixed effects models used to analyse them. Chapter 4 presents an experiment in which the effect of feedback on reducing belief bias is examined. Chapter 5 shifts the focus towards individual differences and looks at the effect of different instructions given to participants, and Chapter 6 examines the effects of both feedback and specific training. Chapter 7 provides a general discussion of the implications of the previous three chapters.
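The SDT measures described here are standard: treating "valid" responses to valid arguments as hits and "valid" responses to invalid arguments as false alarms, discrimination is d' = z(H) − z(F) and response bias is c = −(z(H) + z(F))/2, where z is the inverse normal CDF. A sketch using only the standard library (the log-linear correction is a common convention, assumed here; the thesis's exact correction may differ):

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse CDF of the standard normal

def sdt_measures(hits, misses, fas, crs):
    # Log-linear correction avoids infinite z-scores at rates of 0 or 1
    # (a common convention, assumed here rather than taken from the thesis).
    h = (hits + 0.5) / (hits + misses + 1)   # hit rate
    f = (fas + 0.5) / (fas + crs + 1)        # false-alarm rate
    d_prime = z(h) - z(f)        # sensitivity: valid/invalid discrimination
    c = -0.5 * (z(h) + z(f))     # response bias: tendency to answer "valid"
    return d_prime, c

# Hypothetical participant: 24 valid and 24 invalid syllogisms.
d, c = sdt_measures(hits=20, misses=4, fas=8, crs=16)
print(round(d, 2), round(c, 2))
```

A positive d' indicates genuine discrimination of validity; a negative c indicates a liberal bias towards responding "valid", which is the response-bias component that simple accuracy scores conflate with reasoning ability.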
Individual differences and strategies for human reasoning
Theories of human reasoning have tended to assume cognitive universality, i.e. that all individuals reason in basically the same way. However, some research (e.g. that of Ford, 1995) has found evidence of individual differences in the strategies people use for syllogistic reasoning. This thesis presents a series of experiments which aimed to identify individual differences in strategies for human reasoning and to investigate their nature and aetiology. Experiment 1 successfully replicated and extended Ford (1995) and provided further evidence that most individuals prefer to reason with either verbal-propositional or visuo-spatial representations. Data from verbal and written protocols showed that verbal reasoners tended to use a method of substitution, whereby they obtain a value for the common term from one premise and then simply substitute it in the other premise to obtain a conclusion. Spatial reasoners, on the other hand, presented protocols which resembled Euler circles and described the syllogistic premises in terms of sets and subsets. Experiment 2 provided some further qualitative evidence about the nature of such strategies, especially for the verbal reasoners, showing that within-strategy variations occurred. Experiment 3 extended this line of research, identifying a strong association between verbal and spatial strategies for syllogistic reasoning and abstract and concrete strategies for transitive inference (the latter having originally been identified by Egan and Grimes-Farrow, 1982). Experiments 1-3 also showed that inter-strategic differences in accuracy are generally not observed; hence, reasoners present an outward appearance of uniformity despite underlying differences in reasoning processes. Experiments 5 and 6 investigated individual differences in cognitive factors which may underpin strategy preference. Whilst no apparent effects of verbal and spatial ability or cognitive style were found, reasoners did appear to draw differentially on the verbal and spatial components of working memory. Confirmatory factor analysis showed that whilst verbal reasoners draw primarily on the verbal memory resource, spatial reasoners draw both on this and on the spatial resource. Overall, these findings have important implications for theories of human reasoning, which need to take into account possible individual differences in strategies if they are to present a truly comprehensive account of how people reason.
Economic and Social Research Council
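The substitution strategy attributed to verbal reasoners can be caricatured computationally: take the middle term's "value" from one premise and splice it into the other. A toy sketch for the simplest universal-affirmative case only (the representation and function names are illustrative, not from the thesis):

```python
# A premise is a triple (quantifier, subject, predicate), e.g.
# ("All", "artists", "beekeepers") for "All artists are beekeepers".
def substitute(premise1, premise2):
    # Verbal-substitution caricature: if premise1 says the subject's
    # value is the middle term, replace that middle term with its
    # value from premise2.
    q1, a, b = premise1
    q2, b2, c = premise2
    if b != b2:
        return None          # middle term must chain across the premises
    if q1 == "All" and q2 == "All":
        return ("All", a, c)  # All A are B, All B are C  ->  All A are C
    return None              # other moods need more careful rules

print(substitute(("All", "artists", "beekeepers"),
                 ("All", "beekeepers", "chemists")))
# ('All', 'artists', 'chemists')
```

Spatial reasoners' Euler-circle protocols correspond instead to reasoning over set inclusion, which is why the two strategies can yield the same accuracy through quite different processes.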