
    A probabilistic analysis of argument cogency

    This paper offers a probabilistic treatment of the conditions for argument cogency as endorsed in informal logic: acceptability, relevance, and sufficiency. Treating a natural language argument as a reason-claim-complex, our analysis identifies content features of defeasible argument on which the RSA conditions depend, namely: change in the commitment to the reason, the reason’s sensitivity and selectivity to the claim, one’s prior commitment to the claim, and the contextually determined thresholds of acceptability for reasons and for claims. Results contrast with, and may indeed serve to correct, the informal understanding and applications of the RSA criteria concerning their conceptual dependence, their function as update-thresholds, and their status as obligatory rather than permissive norms, but they also show how these formal and informal normative approaches can in fact align.
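The RSA conditions described above lend themselves to a small numerical sketch. The following is my own illustrative rendering, not the paper's formalism: all function names and threshold values are assumptions, with sensitivity read as P(reason | claim), selectivity as P(reason | not-claim), and the updated commitment to the claim computed by Bayes' rule.

```python
# Hedged sketch (not the paper's formalism): a Bayesian reading of the
# RSA conditions for a single reason r offered for a claim c.
# Function names and threshold values are illustrative assumptions.

def posterior_claim(prior_c: float, sensitivity: float, selectivity: float) -> float:
    """P(c | r) via Bayes' rule, where sensitivity = P(r | c)
    and selectivity = P(r | not-c)."""
    joint_true = sensitivity * prior_c
    joint_false = selectivity * (1.0 - prior_c)
    return joint_true / (joint_true + joint_false)

def cogent(commitment_r: float, prior_c: float, sensitivity: float,
           selectivity: float, accept_r: float = 0.7, accept_c: float = 0.7) -> bool:
    """Toy RSA test: the reason is acceptable (commitment above a contextual
    threshold), relevant (it raises the claim's probability), and sufficient
    (the updated commitment to the claim crosses its own threshold)."""
    post = posterior_claim(prior_c, sensitivity, selectivity)
    acceptable = commitment_r >= accept_r
    relevant = post > prior_c
    sufficient = post >= accept_c
    return acceptable and relevant and sufficient

# A well-committed, sensitive, selective reason for an initially uncertain claim:
print(cogent(commitment_r=0.9, prior_c=0.5, sensitivity=0.8, selectivity=0.2))  # True
```

On these toy numbers the posterior rises from 0.5 to 0.8, so all three conditions pass; lowering the commitment to the reason below its threshold would make the same update non-cogent.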

    Redefining logical constants as inference markers

    There is currently no universally accepted general definition of logical constanthood. With a view to addressing this issue, we follow a pragmatist rationale, according to which some notion can be identified as a logical constant by considering the way in which it is used in our everyday reasoning practices, and argue that a logical constant has to be seen as encoding some kind of dynamic meaning, which marks the presence of an inferential transition among propositional contents. We then put forth a characterisation of logical constants that takes into account their syntactic, semantic, and pragmatic roles. What follows from our proposal is that logical constanthood can be best understood as a functional property that is satisfied only by certain uses of the relevant notions.

    Bayesianism for Non-ideal Agents

    Orthodox Bayesianism is a highly idealized theory of how we ought to live our epistemic lives. One of the most widely discussed idealizations is that of logical omniscience: the assumption that an agent’s degrees of belief must be probabilistically coherent to be rational. It is widely agreed that this assumption is problematic if we want to reason about bounded rationality, logical learning, or other aspects of non-ideal epistemic agency. Yet, we still lack a satisfying way to avoid logical omniscience within a Bayesian framework. Some proposals merely replace logical omniscience with a different logical idealization; others sacrifice all traits of logical competence on the altar of logical non-omniscience. We think a better strategy is available: by enriching the Bayesian framework with tools that allow us to capture what agents can and cannot infer given their limited cognitive resources, we can avoid logical omniscience while retaining the idea that rational degrees of belief are in an important way constrained by the laws of probability. In this paper, we offer a formal implementation of this strategy, show how the resulting framework solves the problem of logical omniscience, and compare it to orthodox Bayesianism as we know it.
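The strategy sketched in the abstract can be made concrete with a toy model. This is my own minimal sketch, not the authors' formal system: coherence constraints are only enforced along inferences the agent can actually carry out within a fixed step budget, and all names (`accessible`, `boundedly_coherent`) are hypothetical.

```python
# Hedged sketch, not the paper's framework: a resource-bounded coherence
# check. The agent must respect a probability constraint (credence cannot
# drop along entailment) only for inferences reachable within a step budget.

def accessible(premises, rules, budget):
    """Close a set of sentences under one-premise rules, at most `budget` steps."""
    derived = set(premises)
    for _ in range(budget):
        new = {concl for prem, concl in rules if prem in derived} - derived
        if not new:
            break
        derived |= new
    return derived

def boundedly_coherent(credence, rules, budget):
    """Require credence(B) >= credence(A) whenever the agent can derive B
    from A within the budget -- a weak, resource-sensitive surrogate for
    full probabilistic coherence."""
    for a in credence:
        for b in accessible({a}, rules, budget):
            if b in credence and credence[b] < credence[a]:
                return False
    return True

rules = [("p", "q"), ("q", "r")]     # p |- q and q |- r, so p |- r in two steps
credence = {"p": 0.9, "r": 0.6}      # no opinion about the intermediate q

print(boundedly_coherent(credence, rules, budget=1))  # True: r is out of reach
print(boundedly_coherent(credence, rules, budget=2))  # False: now p |- r is visible
```

The same credences count as rational for a one-step reasoner but not for a two-step reasoner, which is the intended effect: probabilistic constraints scale with cognitive resources instead of vanishing or applying absolutely.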

    Truthmaker Semantics for Relevant Logic

    I develop and defend a truthmaker semantics for the relevant logic R. The approach begins with a simple philosophical idea and develops it in various directions, so as to build a technically adequate relevant semantics. The central philosophical idea is that truths are true in virtue of specific states. Developing the idea formally results in a semantics on which truthmakers are relevant to what they make true. A very natural notion of conditionality is added, giving us relevant implication. I then investigate ways to add conjunction, disjunction, and negation; and I discuss how to justify contraposition and excluded middle within a truthmaker semantics.
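The core idea that truthmakers stay relevant to what they make true can be illustrated with a generic exact-truthmaking setup. This is my own illustrative sketch of the standard clauses, not the specific semantics for R developed in the paper: states are modeled as sets of atomic facts, and fusion is set union.

```python
# Hedged illustration of exact truthmaking over a toy state space:
# states are frozensets of atomic facts, fusion is union. This is the
# generic truthmaker machinery, not the paper's semantics for R.

from itertools import product

def fuse(s, t):
    """Fusion of two states."""
    return frozenset(s) | frozenset(t)

def verifiers_and(vp, vq):
    """Exact verifiers of a conjunction: all fusions of a p-verifier
    with a q-verifier."""
    return {fuse(s, t) for s, t in product(vp, vq)}

def verifiers_or(vp, vq):
    """Exact verifiers of a disjunction: verifiers of either disjunct
    (the simplest, non-inclusive clause)."""
    return set(vp) | set(vq)

v_p = {frozenset({"rain"})}   # states exactly verifying "it rains"
v_q = {frozenset({"wind"})}   # states exactly verifying "it is windy"

print(verifiers_and(v_p, v_q))  # one fused state containing both facts
```

Every verifier of the conjunction is built from material of both conjuncts, so no wholly irrelevant state (say, one about snow) can make the conjunction true; that is the relevance constraint in miniature.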

    Substructural Logics and Pragmatic Enrichment

    In this dissertation, we argue for a Pragmatic Logical Pluralism, a pluralist thesis about logic which endorses Classical, Relevant, Linear, and Ordered logic. We justify that the formal languages of these four logics are legitimate codifications of the logical vocabulary and capture legitimate senses of logical consequence. This is justified under a particular interpretation of the four formal languages: the logical consequence relation and the conditional, disjunction, and conjunction of the four logics codify different and legitimate senses of ‘follows from’, ‘if...then’, ‘or’, and ‘and’, which diverge in their pragmatic enrichments. The dissertation is twofold. First, we explore the effect that the lack of structural rules has on logical connectives in four substructural logics, and its connection with certain pragmatic enrichments. Second, we defend a pluralist thesis according to which pragmatics has an important role in capturing the inferential role of logical vocabulary, both the notion of ‘follows from’ and the logical constants, although classical logic preserves truth and captures their literal meaning. In sum, we defend a version of logical pluralism based on the plurality of legitimate translations from natural language to formal languages, arguing that more than one translation is legitimate for logical vocabulary, which makes it possible to adopt more than one logic.
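The structural rules that separate these logics can be sketched as constraints on how a derivation consumes its premises. This is my own illustrative model, not the dissertation's: premises are a multiset, and each logic is characterized by which discrepancies between supplied and consumed premises it tolerates (exchange is ignored, so ordered logic is only approximated by its linear fragment).

```python
# Hedged sketch (mine, not the dissertation's apparatus): weakening and
# contraction modeled as tolerances on premise use. `premises` is what was
# supplied; `used` records how often a derivation consumed each premise.

from collections import Counter

def premises_ok(premises, used, logic):
    """Check whether a derivation's premise usage is licensed by the
    structural rules of the given logic. Order (exchange) is ignored here."""
    p, u = Counter(premises), Counter(used)
    if logic == "linear":         # no weakening, no contraction:
        return p == u             # every premise used exactly once
    if logic == "relevant":       # contraction allowed, weakening rejected:
        return set(u) == set(p)   # every premise used, none invented
    if logic == "classical":      # weakening and contraction both allowed:
        return set(u) <= set(p)   # idle or repeated premises are fine
    raise ValueError(f"unknown logic: {logic}")

# Deriving B from A and A->B while consuming A twice:
print(premises_ok(["A", "A->B"], ["A", "A", "A->B"], "linear"))    # False
print(premises_ok(["A", "A->B"], ["A", "A", "A->B"], "relevant"))  # True
# An idle premise C that plays no role in the derivation:
print(premises_ok(["A", "C"], ["A"], "relevant"))   # False: C goes unused
print(premises_ok(["A", "C"], ["A"], "classical"))  # True: weakening allows it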

    A frequentist framework of inductive reasoning

    Reacting against the limitation of statistics to decision procedures, R. A. Fisher proposed for inductive reasoning the use of the fiducial distribution, a parameter-space distribution of epistemological probability transferred directly from limiting relative frequencies rather than computed according to the Bayes update rule. The proposal is developed as follows using the confidence measure of a scalar parameter of interest. (With the restriction to a one-dimensional parameter space, a confidence measure is essentially a fiducial probability distribution free of complications involving ancillary statistics.) A betting game establishes a sense in which confidence measures are the only reliable inferential probability distributions. The equality between the probabilities encoded in a confidence measure and the coverage rates of the corresponding confidence intervals ensures that the measure’s rule for assigning confidence levels to hypotheses is uniquely minimax in the game. Although a confidence measure can be computed without any prior distribution, previous knowledge can be incorporated into confidence-based reasoning. To adjust a p-value or confidence interval for prior information, the confidence measure from the observed data can be combined with one or more independent confidence measures representing previous agent opinion. (The former confidence measure may correspond to a posterior distribution with frequentist matching of coverage probabilities.) The representation of subjective knowledge in terms of confidence measures rather than prior probability distributions preserves approximate frequentist validity.
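The match between confidence-measure probabilities and frequentist coverage can be checked numerically in the simplest setting. This is my own toy example, not from the article: for one observation x ~ N(theta, 1), the confidence measure for theta is N(x, 1), and the probability it assigns to an interval should equal the long-run coverage of the corresponding confidence interval.

```python
# Hedged numerical illustration (my example): confidence-measure probability
# vs. frequentist coverage for a normal mean with known unit variance.

import random
from statistics import NormalDist

def confidence_in_interval(x, lo, hi):
    """Probability the confidence measure N(x, 1) assigns to [lo, hi]."""
    cd = NormalDist(mu=x, sigma=1.0)
    return cd.cdf(hi) - cd.cdf(lo)

def coverage(theta, half_width, trials=20_000, seed=0):
    """Long-run frequency with which x +/- half_width captures theta."""
    rng = random.Random(seed)
    hits = sum(abs(rng.gauss(theta, 1.0) - theta) <= half_width
               for _ in range(trials))
    return hits / trials

z = 1.959964   # ~97.5% normal quantile, so x +/- z is a 95% interval
x = 0.3        # one observed data point (arbitrary)
print(round(confidence_in_interval(x, x - z, x + z), 3))  # 0.95
print(round(coverage(theta=1.7, half_width=z), 2))        # ~0.95 as well
```

The agreement holds for any true theta and any observed x, which is the exactness property the abstract's betting-game argument turns on.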

    Bayesian computational methods

    In this chapter, we will first present the most standard computational challenges met in Bayesian Statistics, focussing primarily on mixture estimation and on model choice issues, and then relate these problems to computational solutions. Of course, this chapter is only a terse introduction to the problems and solutions related to Bayesian computations. For more complete references, see Robert and Casella (2004, 2009), or Marin and Robert (2007), among others. We also refrain from providing an introduction to Bayesian Statistics per se and, for comprehensive coverage, refer the reader to Robert (2007), (again) among others. (This is a revised version of a chapter written for the Handbook of Computational Statistics, edited by J. Gentle, W. Härdle and Y. Mori in 2003, in preparation for the second edition.)
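One workhorse behind the computational solutions the chapter surveys is Markov chain Monte Carlo. The following is a minimal sketch of a random-walk Metropolis sampler on a toy target of my own choosing (the posterior of a normal mean under a diffuse normal prior), not an example taken from the chapter.

```python
# Hedged minimal sketch: random-walk Metropolis targeting the posterior of a
# normal mean with unit-variance likelihood and a N(0, 10^2) prior (toy target).

import math
import random

def log_post(mu, data, prior_sd=10.0):
    """Unnormalized log posterior: normal log-likelihood plus normal log-prior."""
    loglik = sum(-0.5 * (x - mu) ** 2 for x in data)
    return loglik - 0.5 * (mu / prior_sd) ** 2

def metropolis(data, n_iter=5000, step=0.5, seed=1):
    rng = random.Random(seed)
    mu, lp = 0.0, log_post(0.0, data)
    draws = []
    for _ in range(n_iter):
        prop = mu + rng.gauss(0.0, step)          # symmetric proposal
        lp_prop = log_post(prop, data)
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):  # accept/reject
            mu, lp = prop, lp_prop
        draws.append(mu)
    return draws

data = [2.1, 1.9, 2.4, 2.2, 1.8]                  # sample mean 2.08
draws = metropolis(data)
post_mean = sum(draws[1000:]) / len(draws[1000:]) # discard burn-in
print(post_mean)  # should land near the sample mean of about 2.08
```

With a diffuse prior the posterior mean nearly coincides with the sample mean, which gives a quick sanity check on the sampler; real applications (mixtures, model choice) need more careful proposals and diagnostics.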

    Logical Pluralism and Vicious Regresses

    The material in this dissertation is divided into two parts. The first part is a preliminary discussion of vicious regress arguments in the philosophy of logic in the 20th century. The second part focuses on three different versions of logical pluralism, i.e., the view that there are many correct logics. In each case an argument is developed to show that these versions of logical pluralism result in a vicious regress. The material in part one is divided into three chapters, and there are a few reasons for having a preliminary discussion of vicious regress arguments in the philosophy of logic. Many vicious regress arguments have been raised in the past with the aim of making some kind of point in the philosophy of logic. Looking at some of these historical examples will serve multiple purposes. Primarily it will provide an opportunity to think carefully about the structure of vicious regress arguments. Vicious regress arguments can be distinguished in terms of what their underlying assumptions are and what they ultimately aim to demonstrate. The success of a vicious regress argument will always be a function of these two things. But thinking about the structure of vicious regress arguments will also be beneficial for another reason. It will provide a useful comparison to see how and in what way my own arguments relate to or differ from previous vicious regress arguments. Having cases to compare and contrast will help to clarify what assumptions I am making and where it will be important to reply to objections. In chapter one, I’ll look at a vicious regress that unfolds in Lewis Carroll’s dialogue “What the Tortoise Said to Achilles” (1895). I’ll look at a few different ways that people have tried to extract a moral from Carroll’s text, and I’ll argue that our thinking about the moral should be guided by a prior question about how to understand the nature of the regress.
    I’ll look at some different interpretations of the regress in Carroll, and I’ll comment on why some interpretations may be more plausible than others. Chapter two focuses on a vicious regress argument that is developed in Willard Van Orman Quine’s “Truth by Convention” (1936). In some ways, it is easier to see what the intended moral of Quine’s vicious regress argument is because he explicitly characterizes the view he aims to criticize. He aims to criticize a conventionalist thesis about logic where logical truths are fully explained in terms of linguistic conventions for logical connectives. I’ll assume that Quine took himself to be criticizing a view held by Rudolf Carnap (1934/37). (I’ll also note some views that challenge this assumption.) I won’t focus on the exegetical question of whether Quine’s interpretation of Carnap is accurate, but I will look at a few passages from the Carnap material that Quine cited in his critique. Whatever the case may be with Carnap’s actual view, Quine distinguished between two forms of conventionalism about logic. He only intended his regress argument to apply to a version of conventionalism about logic where conventions are understood as being somehow explicit. Quine developed a separate argument for a version of the conventionalist thesis about logic where conventions are understood as implicit. I’ll discuss both of these arguments and the operative notions of convention. Much of the discussion in this chapter will concern the nature of Quine’s regress argument and the extent to which its success depends on the notion of convention at play. Quine saw his regress argument as based on the same kind of considerations as in Carroll’s dialogue. I’ll note that there are both epistemic and non-epistemic interpretations of Quine’s regress argument, and I’ll argue that there are reasons to prefer a non-epistemic reading.
    I’ll also look at a view from Jared Warren, who develops an implicit-convention version of the conventionalist thesis about logic. Warren responds to Quine’s critique of implicit conventionalism about logic, and I’ll provide some reasons for thinking that Warren’s response isn’t successful. I’ll end the chapter by making two observations about versions of conventionalism about logic that employ a conception of conventions on which they are understood as implicit. I’ll suggest that these views don’t obviously avoid vicious regress worries, and that they also face worries concerning underdetermination (although I’ll discuss these latter two points in more detail in chapter five). In chapter three, I’ll look at an argument from Saul Kripke (1974a/74b) that is also supposed to be inspired by the regress considerations in Carroll’s dialogue. Kripke’s argument is directed towards Quine’s own views regarding the idea that logical hypotheses can be empirically revised. I’ll explain Quine’s view about the empirical revision of logical hypotheses and Kripke’s criticism. I’ll also comment on some of the similarities and differences between Kripke’s argument against Quine and Quine’s argument against Carnap (largely to argue that they are based on the same kind of underlying point). I’ll also argue that there are some reasons for thinking that Kripke’s argument may be based on a misinterpretation of Quine. I’ll look at the interpretation that is needed in order for Kripke’s challenge to be successful, and I’ll criticize some arguments in support of this interpretation from Romina Padró (2015). The second part begins with chapter four, where I will look at a version of logical pluralism from Jc Beall and Greg Restall (2006). Beall and Restall’s version of logical pluralism is based on a case-theoretic analysis of logical validity.
    I’ll give an exegesis of their view, and then I’ll argue that it results in a vicious regress. I’ll spend some time talking about how I understand the nature of vicious regresses in this chapter, and the discussion of vicious regresses will be informed by a view from John Passmore (1961). I’ll give an exegesis of Passmore’s view, and I’ll also devote quite a bit of space to an objection-and-reply section. Part of the objection-and-reply section will contribute to making a case for the claim that the vicious regress point is a consideration in favor of logical monism. In particular, I’ll respond to an objection claiming that there is an analogous regress for logical monism. Without responding to an objection like this, the ultimate objective of my thesis would be incomplete (since there is only a consideration in favor of logical monism if it doesn’t face an analogous puzzle). Chapter four will end with a discussion of a view from Colin Caret (2017). Caret develops a view where the details of Beall and Restall’s theory are articulated in terms of an indexical contextualist semantic theory for expressions like ‘logically valid’. In chapters four and six, my use of theoretical labels like ‘indexical contextualist’, ‘non-indexical contextualist’, and ‘assessment sensitive’ will follow the usage of MacFarlane (2014). Shapiro (2014) cites MacFarlane when explaining his usage of these technical terms, and Caret’s view fits the definition of indexical contextualism that is given by MacFarlane (although Caret cites other sources when developing his view). I’ll argue that a vicious regress can still be developed for a view like Caret’s, where the details of Beall and Restall’s theory are understood in this way. In the fifth chapter, I’ll look at a version of logical pluralism from Hartry Field. Field’s logical pluralism is developed by conjoining a normative conception of logical validity with a relativistic conception of normativity.
    I’ll devote a good deal of space to explaining Field’s normative conception of validity and how his form of relativism is understood. The main upshot of combining these two components is a view where validity attributions are understood as being somehow relative to policies. I’ll argue that Field’s view also results in a vicious regress, and I’ll look at a few objections to my argument. The objections will mostly concern an issue about whether the regress argument only works for certain conceptions of policies (in particular, whether they are conceived of as being somehow explicit). So the issues here will be analogous to some of the concerns that are discussed in chapter two regarding Quine’s criticism of logical conventionalism. I’ll also raise a separate puzzle for Field’s view that is based on considerations of underdetermination. The details of this argument are informed by a criticism of dispositional analyses of rule-following from Kripke (1982). So I’ll spend some time in this chapter looking at responses to Kripke’s criticism from Tomoji Shogenji (1993) and Jared Warren (2018). I’ll argue that neither of these accounts will help to dissolve the underdetermination issue. The points in this chapter (concerning regress and underdetermination) are also the ones I mentioned I would come back to in chapter two. In the sixth chapter, I’ll look at Stewart Shapiro’s version of logical pluralism (2014). It is also based on a form of relativism, but it is distinctive in that it is developed in terms of considerations in the philosophy of mathematics. I’ll provide an exegesis of Shapiro’s view, and I’ll argue that it also faces a vicious regress puzzle. It’s worth noting that Shapiro gives a semantic characterization of his view. He gives a detailed description of his view about the meaning of logical connectives and expressions like ‘logically valid’.
    He describes his view on the semantics of expressions like ‘logically valid’ as a form of indexical contextualism (although there are some important qualifications to this claim which I’ll discuss). I’ll argue that a vicious regress can be developed for Shapiro’s view even when the details of his indexical contextualist semantic theory are taken into consideration. I’ll also argue that Shapiro’s view faces an underdetermination puzzle. These points about underdetermination will be similar to what is discussed in the chapter on Field, but the argument will concern details that are specific to Shapiro’s semantic theory. A key point of focus will be Shapiro’s view of contexts and the role that contexts are supposed to play in his indexical contextualist theory of ‘logically valid’.

    Legal linked data ecosystems and the rule of law

    This chapter introduces the notions of meta-rule of law and socio-legal ecosystems to both foster and regulate linked democracy. It explores the way of stimulating innovative regulations and building a regulatory quadrant for the rule of law. The chapter summarises briefly (i) the notions of responsive, better and smart regulation; (ii) requirements for legal interchange languages (legal interoperability); (iii) and cognitive ecology approaches. It shows how the protections of the substantive rule of law can be embedded into the semantic languages of the web of data and reflects on the conditions that make possible their enactment and implementation as a socio-legal ecosystem. The chapter suggests in the end a reusable multi-levelled meta-model and four notions of legal validity: positive, composite, formal, and ecological.

    Feasible Computation in Symbolic and Numeric Integration

    Two central concerns in scientific computing are the reliability and efficiency of algorithms. We introduce the term feasible computation to describe algorithms that are reliable and efficient given the contextual constraints imposed in practice. The main focus of this dissertation, then, is to bring greater clarity to the forms of error introduced in computation and modeling, and, in the limited context of symbolic and numeric integration, to contribute to integration algorithms that better account for error while providing results efficiently. Chapter 2 considers the problem of spurious discontinuities in the symbolic integration problem, proposing a new method to restore continuity based on a pair of unwinding numbers. Computable conditions for the unwinding numbers are specified, allowing the computation of a variety of continuous integrals. Chapter 3 introduces two structure-preserving algorithms for the symbolic-numeric integration of rational functions on exact input. A structured backward and forward error analysis for the algorithms shows that they are a posteriori backward and forward stable, with both forms of error exhibiting tolerance proportionality. Chapter 4 identifies the basic logical structure of feasible inference by presenting a logical model of stable approximate inference, illustrated by examples of modeling and numerical integration. In terms of this model, it is seen that a necessary condition for the feasibility of methods of abstraction in modeling and complexity reduction in computational mathematics is the preservation of inferential structure, in a sense that is made precise. Chapter 5 identifies a robust pattern in the mathematical sciences of transforming problems to make solutions feasible. It is shown that computational complexity reduction methods in computational science involve chains of such transformations. It is argued that the structured and approximate nature of such strategies indicates the need for a higher-order model of computation and a new definition of computational complexity.
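The spurious discontinuities addressed in Chapter 2 are easy to exhibit numerically. The sketch below uses a standard textbook example of my own choosing, not the pair-of-unwinding-numbers method from the dissertation: the naive antiderivative of 1/(2 + cos x) jumps at odd multiples of pi, and a piecewise-constant correction term restores continuity.

```python
# Hedged illustration of the unwinding idea on a classic example (not the
# Chapter 2 method): F(x) = (2/sqrt(3)) * atan(tan(x/2)/sqrt(3)) is a valid
# antiderivative of 1/(2 + cos x) on each branch, but jumps at x = pi + 2*pi*k.

import math

SQRT3 = math.sqrt(3.0)

def naive_antiderivative(x):
    """Computer-algebra-style result, discontinuous at x = pi + 2*pi*k."""
    return (2.0 / SQRT3) * math.atan(math.tan(x / 2.0) / SQRT3)

def continuous_antiderivative(x):
    """Same expression plus a piecewise-constant correction that counts
    branch crossings (an unwinding-style term)."""
    k = math.floor(x / (2.0 * math.pi) + 0.5)
    return naive_antiderivative(x) + (2.0 / SQRT3) * math.pi * k

eps = 1e-6
jump_naive = naive_antiderivative(math.pi + eps) - naive_antiderivative(math.pi - eps)
jump_fixed = continuous_antiderivative(math.pi + eps) - continuous_antiderivative(math.pi - eps)
print(abs(jump_naive) > 1.0)    # True: spurious jump of about 2*pi/sqrt(3)
print(abs(jump_fixed) < 1e-4)   # True: the corrected antiderivative is continuous
```

The integrand itself is smooth and positive everywhere, so any jump in the antiderivative is an artifact of branch choice rather than of the integral; the correction term cancels it exactly at each crossing.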