
    Thinking Things Through

    A Photocopy of Thinking Things Through, Princeton University Press, 198

    Computer simulation of syllogism solving using restricted mental models


    Essays on the Logical

    Already in ancient philosophy there was a transition from the implicit and hidden action of the Logical (lógos) in nature (phýsis) to the scientific and explicit expression of the logical structures of thought, action, the world and language. Heraclitus' heno-logic, with Logos as the hidden implicit principle of the homologization of opposites (tà enantía) in nature, differs from Parmenides' paraconsistent logic developed in a hypothetical hemi-dialectics given in the formula "All is One" (hén pánta eînai). Plato's concept of dia-logic (dialektikè téchne), with a new concept of Logos as the one genus of beings (hén ti génos tôn óntôn) in which the word not-Being (negation) got its place, enabled the production of a dyadic logical structure by the granulation of genera into the opposite species and sub-species they contain. Aristotle's concept of triadic logic as syl-logistics (syllogismós) and demonstrative science (epistéme apodeiktikê) gives a new approach through a new granulation of the concept of Logos into a triadic logical structure: (1) the structure of being (the substratum-attributes relation), (2) the structure of thought (the substance-second substances relation), and (3) the structure of propositions (the subject-predicate relation). Plato's dialectic and Aristotle's syllogistic both deconstructed the implicit ontological unity of the world (pan, kosmos, sphairos) given through the concept of Logos in Pre-Socratic philosophy, in order to recreate that unity in explicit form through the logical and semantical structures of propositions about the world, about thought and about language. The hidden implicit of nature, which had to be known intuitively, was transformed into unhidden explicit inferential logical structures given in the semantics and pragmatics of scientific demonstration.

    Metalogic and the psychology of reasoning.

    The central topic of the thesis is the relationship between logic and the cognitive psychology of reasoning. This topic is treated in large part through a detailed examination of the recent work of P. N. Johnson-Laird, who has elaborated a widely-read and influential theory in the field. The thesis is divided into two parts: the first is a more general and philosophical coverage of some of the most central issues to be faced in relating psychology to logic, while the second draws upon this as introductory material for a critique of Johnson-Laird's 'Mental Model' theory, particularly as it applies to syllogistic reasoning. An approach similar to Johnson-Laird's is taken to cognitive psychology, which centrally involves the notion of computation. On this view, a cognitive model presupposes an algorithm which can be seen as specifying the behaviour of a system in ideal conditions. Such behaviour is closely related to the notion of 'competence' in reasoning, and this in turn is often described in terms of logic. Insofar as a logic is taken to specify the competence of reasoners in some domain, it forms a set of conditions on the 'input-output' behaviour of the system, to be accounted for by the algorithm. Cognitive models, however, must also be subjected to empirical test, and indeed are commonly built in a highly empirical manner. A strain can therefore develop between the empirical and the logical pressures on a theory of reasoning. Cognitive theories thus become entangled in a web of recently much-discussed issues concerning the rationality of human reasoners and the justification of a logic as a normative system. There has been an increased interest in the view that logic is subject to revision and development, in which there is a recognised place for the influence of psychological investigation.
It is held, in this thesis, that logic and psychology are revealed by these considerations to be interdetermining in interesting ways, under the general a priori requirement that people are, in an important and particular sense, rational. Johnson-Laird's theory is a paradigm case of the sort of cognitive theory dealt with here. It is especially significant in view of the strong claims he makes about its relation to logic, and the role the latter plays in its justification and interpretation. The theory is claimed to be revealing about fundamental issues in semantics and the nature of rationality. These claims are examined in detail, and several crucial ones refuted. Johnson-Laird's models are found to be wanting in the level of empirical support provided, and in their ability to found the considerable structure of explanation they are required to bear. Most importantly, they fail to be distinguishable from certain other kinds of models at a level of theory where the putative differences are critical. The conclusion to be drawn is that the difficulties in this field are not yet properly appreciated: psychological explanation requires a complexity which is hard to reconcile with the clarity and simplicity required for logical insights.
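    The computational view sketched above, in which a cognitive model presupposes an algorithm, can be made concrete with a toy illustration. This is not Johnson-Laird's actual program; the representation (individuals as property sets) and the example syllogism are illustrative assumptions only. A mental model is taken to be a small collection of imagined individuals, and a universal premise such as "All X are Y" updates every individual bearing X:

    ```python
    # Toy mental-models sketch (illustrative, not Johnson-Laird's algorithm):
    # a model is a list of "individuals", each a set of properties.

    def assert_all(model, x, y):
        """Premise 'All x are y': every individual with x also gets y."""
        return [ind | {y} if x in ind else ind for ind in model]

    def holds_all(model, x, y):
        """Does 'All x are y' hold in the model as built?"""
        return all(y in ind for ind in model if x in ind)

    # Premises: All artists are beekeepers; all beekeepers are chemists.
    model = [{"artist"}, {"artist"}]              # tokens for the subject term
    model = assert_all(model, "artist", "beekeeper")
    model = assert_all(model, "beekeeper", "chemist")

    print(holds_all(model, "artist", "chemist"))  # True: "All artists are chemists"
    ```

    Even this toy shows the shape of the competence/algorithm distinction discussed above: the logic fixes what conclusion must come out, while the model-building procedure is what gets tested empirically.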

    Linguistic probability theory

    In recent years, probabilistic knowledge-based systems such as Bayesian networks and influence diagrams have come to the fore as a means of representing and reasoning about complex real-world situations. Although some of the probabilities used in these models may be obtained statistically, where this is impossible or simply inconvenient, modellers rely on expert knowledge. Experts, however, typically find it difficult to specify exact probabilities, and conventional representations cannot reflect any uncertainty they may have. In this way, the use of conventional point probabilities can damage the accuracy, robustness and interpretability of acquired models. With these concerns in mind, psychometric researchers have demonstrated that fuzzy numbers are good candidates for representing the inherent vagueness of probability estimates, and the fuzzy community has responded with two distinct theories of fuzzy probabilities. This thesis, however, identifies formal and presentational problems with these theories which render them unable to represent even very simple scenarios. This analysis leads to the development of a novel and intuitively appealing alternative: a theory of linguistic probabilities patterned after the standard Kolmogorov axioms of probability theory. Since fuzzy numbers lack algebraic inverses, the resulting theory is weaker than, but generalises, its classical counterpart. Nevertheless, it is demonstrated that analogues of classical probabilistic concepts such as conditional probability and random variables can be constructed. In the classical theory, representation theorems mean that most of the time the distinction between mass/density distributions and probability measures can be ignored. Similar results are proven for linguistic probabilities. From these results it is shown that directed acyclic graphs annotated with linguistic probabilities (under certain identified conditions) represent systems of linguistic random variables.
It is then demonstrated that these linguistic Bayesian networks can utilise adapted best-of-breed Bayesian network algorithms (junction-tree-based inference and Bayes' ball irrelevancy calculation). These algorithms are implemented in ARBOR, an interactive design, editing and querying tool for linguistic Bayesian networks. To explore the applications of these techniques, a realistic example drawn from the domain of forensic statistics is developed. In this domain the knowledge engineering problems cited above are especially pronounced, and expert estimates are commonplace. Moreover, robust conclusions are of unusually critical importance. An analysis of the resulting linguistic Bayesian network for assessing evidential support in glass-transfer scenarios highlights the potential utility of the approach.
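    The point that fuzzy numbers lack algebraic inverses, which is why the linguistic theory above must be weaker than its classical counterpart, can be illustrated with a minimal sketch. Representing fuzzy numbers as triangular (left, peak, right) triples with interval-style arithmetic is an illustrative assumption here, not the thesis's exact construction:

    ```python
    # Minimal sketch: triangular fuzzy numbers as (left, peak, right) triples.
    # Interval-style arithmetic shows the failure of additive inverses:
    # a - a is not the crisp number 0, so no fuzzy number cancels a exactly.

    def tfn_add(a, b):
        return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

    def tfn_sub(a, b):
        # Subtraction widens: worst case pairs left ends with right ends.
        return (a[0] - b[2], a[1] - b[1], a[2] - b[0])

    about_half = (0.4, 0.5, 0.6)          # a vague estimate of "about 0.5"
    print(tfn_sub(about_half, about_half))  # widens to roughly (-0.2, 0.0, 0.2)
    ```

    Because subtraction only ever widens the support, the group structure that classical Kolmogorov probability relies on is unavailable, which matches the abstract's claim that the linguistic theory generalises but weakens the classical one.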

    Automated Validation of State-Based Client-Centric Isolation with TLA+

    Clear consistency guarantees on data are paramount for the design and implementation of distributed systems. When implementing distributed applications, developers require approaches to verify the data consistency guarantees of an implementation choice. Crooks et al. define a state-based and client-centric model of database isolation. This paper formalizes this state-based model in TLA+, reproduces their examples, and shows how to model check runtime traces and algorithms with this formalization. The formalized model in TLA+ enables semi-automatic model checking of different implementation alternatives for transactional operations and allows checking of conformance to isolation levels. We reproduce examples from the original paper and confirm the isolation guarantees of the combination of the well-known two-phase locking and two-phase commit algorithms. Using model checking, this formalization can also help find bugs in incorrect specifications. This improves the feasibility of automated checking of isolation guarantees in synthesized synchronization implementations, and it provides an environment for experimenting with new designs.
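    The state-based, client-centric idea can be sketched outside TLA+ as well. The following brute-force Python check is an illustrative assumption, not the paper's TLA+ specification: a tiny history is deemed serializable iff some total order of the transactions lets every read be answered by the state produced by the preceding transactions' writes.

    ```python
    from itertools import permutations

    # Client-centric serializability sketch (after Crooks et al., illustrative):
    # each transaction is (reads, writes), both {key: value} dicts. A history is
    # serializable iff some total order satisfies every read against the state
    # left by the writes before it. Brute force; fine only for tiny histories.

    def serializable(txns, initial):
        for order in permutations(txns):
            state = dict(initial)
            ok = True
            for reads, writes in order:
                if any(state.get(k) != v for k, v in reads.items()):
                    ok = False
                    break
                state.update(writes)
            if ok:
                return True
        return False

    init = {"x": 10, "y": 10}

    # t2 can only read x=5 after t1 commits, so this history is serializable.
    t1 = ({"x": 10}, {"x": 5})
    t2 = ({"x": 5},  {"y": 15})
    print(serializable([t1, t2], init))  # True

    # Lost-update anomaly: both read x=10, both write x, no order satisfies both.
    t3 = ({"x": 10}, {"x": 11})
    t4 = ({"x": 10}, {"x": 12})
    print(serializable([t3, t4], init))  # False
    ```

    A TLC-style model checker does essentially this state-space enumeration, but symbolically and with weaker isolation levels (read committed, snapshot isolation) expressed as relaxed conditions on which prior state each read may observe.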

    Mental representations underlying syllogistic reasoning
