
    Not Wanted: On Scharp's Solution to the Liar

    Kevin Scharp argues that the concept of truth is defective and is therefore unable to play its intended role in natural language truth-conditional semantics. For this theoretical purpose, Scharp constructs two replacement concepts: ascending truth and descending truth. Scharp applies the resultant theory, AD semantics, to the liar sentence, thereby obtaining a novel solution to the liar paradox. The aim of the present paper is fourfold. First, I show that, contrary to Scharp’s claims, AD semantics in fact yields an inconsistency when applied to standard liar sentences. Second, I diagnose the problem: AD semantics mishandles negation. I propose an alternative treatment, resulting in what I call AD* semantics. Third, I show that AD* semantics gives Scharp the resources required to respond to an alleged revenge paradox that has been raised against his view. Finally, I argue that, these consequences notwithstanding, it remains unclear whether AD* semantics provides an adequate account of alethic paradoxes more generally.
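
    For background, here is a minimal sketch, in LaTeX, of the principles at issue. The exact axiomatization of ascending and descending truth in Scharp's framework is more involved than what is shown here, so treat this as an illustration rather than a statement of his theory.

    % The classic liar: a sentence \lambda asserting its own untruth.
    % Together with the unrestricted T-schema, it yields a contradiction.
    \begin{align*}
      &\lambda \leftrightarrow \neg T(\ulcorner \lambda \urcorner)
        && \text{(liar sentence)}\\
      &T(\ulcorner \varphi \urcorner) \leftrightarrow \varphi
        && \text{(naive T-schema; inconsistent given } \lambda \text{)}
    \end{align*}

    % Scharp splits the T-schema between two replacement concepts:
    % descending truth keeps the "release" direction, ascending truth
    % keeps the "capture" direction, and neither keeps both.
    \begin{align*}
      &D(\ulcorner \varphi \urcorner) \rightarrow \varphi
        && \text{(descending truth: release only)}\\
      &\varphi \rightarrow A(\ulcorner \varphi \urcorner)
        && \text{(ascending truth: capture only)}\\
      &D(\ulcorner \varphi \urcorner) \rightarrow A(\ulcorner \varphi \urcorner)
        && \text{(descending truth implies ascending truth)}
    \end{align*}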

    Shrieking in the face of vengeance

    Paraconsistent dialetheism is the view that some contradictions are true and that the inference rule ex falso quodlibet (a.k.a. explosion) is invalid. A long-standing problem for paraconsistent dialetheism is that it has difficulty making sense of situations where people use locutions like ‘just true’ and ‘just false’. Jc Beall recently advocated a general strategy, which he terms shrieking, for solving this problem and thereby strengthening the case for paraconsistent dialetheism. However, Beall’s strategy fails, and seeing why it fails brings into greater focus just how daunting the just-true problem is for the dialetheist.
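
    As a rough illustration only (Beall's own presentation is proof-theoretic and more careful than this, and the predicate F and term a below are placeholders), a shriek rule for a chosen theoretical predicate adds a restricted form of explosion for that predicate, even though explosion remains invalid in general:

    % Shriek rule for a predicate F: gluts involving F are ruled out,
    % so F behaves classically within the shrieked theory.
    \[
      F(a),\ \neg F(a) \vdash \bot
      \qquad \text{(shriek rule for } F \text{)}
    \]
    % By contrast, the fully general rule of explosion is still rejected:
    \[
      A,\ \neg A \vdash B
      \qquad \text{(explosion; invalid for the dialetheist)}
    \]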

    Conceptual engineering for truth: aletheic properties and new aletheic concepts

    What is the property of being true like? To answer this question, begin with a Canberra-plan analysis of the concept of truth. That is, assemble the platitudes for the concept of truth, and then investigate which property might satisfy them. This project is aided by Friedman and Sheard’s groundbreaking analysis of twelve logical platitudes for truth. It turns out that, because of paradoxes like the liar, the platitudes for the concept of truth are inconsistent. Moreover, there are so many distinct paradoxes that only small subsets of the platitudes for truth are consistent. The result is that there is no property of being true. The failure of the Canberra-plan analysis of the concept of truth points the way toward a new methodology: a conceptual engineering project for the concept of truth. Conceptual engineering is the practice of assessing the quality of our concepts and, when they are found defective, offering new and better concepts to replace them for certain purposes. Still, there are many aletheic properties, which are properties satisfied by reasonably large subsets of the platitudes for the concept of truth. We can treat these aletheic properties as a guide to the multitude of new aletheic concepts, which are concepts similar to, but distinct from, the concept of truth. Any new aletheic concept or team of concepts might be called on to replace the concept of truth. In particular, the concepts of ascending truth and descending truth are recommended, but the most important point is that we need a full-scale investigation into the space of aletheic properties and new aletheic concepts—that is, we need an Aletheic Principles Project (APP).

    Impact of New Teacher Induction on Beginning Teachers

    Teacher retention and its much more emphasized antithesis, attrition, affect the Federation of Affiliated Christian Churches (FACC) school system. The FACC created the New Teacher Induction (NTI) program, modeled on the New Teacher Center induction model based in Santa Cruz, California, to assist new teachers and increase retention rates in its school system. Mentoring, professional development opportunities, and principal engagement were the three prongs of the NTI approach to teacher support, which aimed at increasing new teacher self-efficacy. This qualitative case study examines new teacher perceptions of the NTI program and its impact on their decisions to remain in or leave the teaching profession. The sample population was drawn from new teachers who graduated from a private teacher training college in the Midwest during a three-year span and who had participated in the NTI program. Data consisted of questionnaires, semi-structured interviews, and semi-structured focus group sessions. The findings revealed both positive and negative perceptions of the FACC NTI program. Many new teachers reported high levels of self-efficacy prior to entering the program, and they commented on the impact their mentors and principals had on that self-efficacy. Some new teachers expressed disappointment with certain aspects and perceived the need for improvements in the areas of policies and procedures and mentor proximity. The new teachers did not perceive a connection between the NTI program and their intentions to remain in the profession.

    Analytic Pragmatism and universal LX vocabulary

    In his recent John Locke Lectures, published as Between Saying and Doing, Brandom extends and refines his views on the nature of language and philosophy by developing a position that he calls Analytic Pragmatism. Although Brandom’s project bears on an extraordinarily rich array of different philosophical issues, we focus here on the contention that certain vocabularies have a privileged status within our linguistic practices, and that, when adequately understood, the practices in which these vocabularies figure can help furnish us with an account of semantic intentionality. Brandom’s claim is that such vocabularies are privileged because they are a species of what he calls universal LX vocabulary: roughly, vocabulary whose mastery is implicit in any linguistic practice whatsoever. We show that, contrary to Brandom’s claim, logical vocabulary per se fails to satisfy the conditions that must be met for something to count as universal LX vocabulary. Further, we show that exactly analogous considerations undermine his claim that modal vocabulary is universal LX. If our arguments are sound, then, contrary to what Brandom maintains, intentionality cannot be explicated as a “pragmatically mediated semantic phenomenon”, at any rate not of the sort that he proposes.

    A defense of QUD reasons contextualism

    In this article, we defend the semantic theory Question Under Discussion (QUD) Contextualism about Reasons, which we develop in our monograph Semantics for Reasons, against a series of objections concerning whether our semantics can deliver predictions for some common examples, how we defend the semantic theory, and how we assess it compared to its competitors.

    The end of vagueness: technological epistemicism, surveillance capitalism, and explainable Artificial Intelligence

    Artificial Intelligence (AI) pervades humanity in 2022, and it is notoriously difficult to understand how certain aspects of it work. There is a movement—Explainable Artificial Intelligence (XAI)—to develop new methods for explaining the behaviours of AI systems. We aim to highlight one important philosophical significance of XAI—it has a role to play in the elimination of vagueness. To show this, consider that the use of AI in what has been labeled surveillance capitalism has resulted in humans quickly gaining the capability to identify and classify most of the occasions in which languages are used. We show that the knowability of this information is incompatible with what a certain theory of vagueness—epistemicism—says about vagueness. We argue that one way the epistemicist could respond to this threat is to claim that this process brought about the end of vagueness. However, we suggest an alternative interpretation, namely that epistemicism is false, but that there is a weaker doctrine, which we dub technological epistemicism: the view that vagueness is due to ignorance of linguistic usage, but that this ignorance can be overcome. The idea is that knowing more of the relevant data and how to process it enables us to know the semantic values of our words and sentences with higher confidence and precision. Finally, we argue that humans are probably not going to believe what future AI algorithms tell us about the sharp boundaries of our vague words unless the AI involved can be explained in terms understandable by humans. That is, if people are going to accept that AI can tell them about the sharp boundaries of the meanings of their words, then it is going to have to be XAI.