
    Synchronous Online Philosophy Courses: An Experiment in Progress

    There are two main ways to teach a course online: synchronously or asynchronously. In an asynchronous course, students can log on at their convenience and do the course work. In a synchronous course, all students are required to be online at specific times, to allow for a shared course environment. In this article, the author discusses the strengths and weaknesses of synchronous online learning for the teaching of undergraduate philosophy courses. He describes specific strategies and technologies he uses in teaching online philosophy courses; in particular, how he uses videoconferencing to create a classroom-like environment in an online class.

    Hyperfine-Grained Meanings in Classical Logic

    This paper develops a semantics for a fragment of English that is based on the idea of 'impossible possible worlds'. This idea was formulated earlier by authors such as Montague, Cresswell, Hintikka, and Rantala, but the present set-up shows how it can be formalized in a completely unproblematic logic: the ordinary classical theory of types. The theory is put to use in an account of propositional attitudes that is 'hyperfine-grained', i.e. one that does not suffer from the well-known problems involved in replacing expressions by their logical equivalents.

    On the Rationality of Escalation

    Escalation is a typical feature of infinite games. Tools conceived for studying infinite mathematical structures, namely those deriving from coinduction, are therefore essential. Here we use coinduction, or backward coinduction (to show its connection with the same concept for finite games), to study infinite games carefully and formally, especially the so-called dollar auctions, which are considered the paradigm of escalation. Contrary to what is commonly accepted, we show that, provided one assumes that the other agent will always stop, bidding is rational, because it results in a subgame perfect equilibrium. We show that this is not the only rational strategy profile (i.e., the only subgame perfect equilibrium). Indeed, if an agent stops and will stop at every step, we claim that he is rational as well, provided one admits that his opponent will never stop, because this also corresponds to a subgame perfect equilibrium. Surprisingly, in the infinite dollar auction game, the behavior in which both agents stop at each step is not a Nash equilibrium, hence not a subgame perfect equilibrium, and hence not rational.
    Comment: 19 p. This paper is a duplicate of arXiv:1004.525
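
    The dollar auction the abstract analyzes can be made concrete with a toy finite simulation (a sketch only: the prize, increment, and budget cap below are invented parameters, and a finite cutoff cannot capture the coinductive, genuinely infinite analysis of the paper). In this classic auction format both the winner and the runner-up pay their final bids, so two agents who each keep outbidding escalate past the prize's value:

    ```python
    # Toy sketch of a dollar auction: a $1.00 prize is auctioned in 5-cent
    # increments, and BOTH the winner and the runner-up pay their final
    # bids. Each agent keeps outbidding until its next bid would exceed
    # an arbitrary budget cap, so the pair escalates past the prize value.

    PRIZE = 100       # prize value in cents
    STEP = 5          # bid increment in cents
    LIMIT = 300       # budget cap so the toy simulation terminates

    def simulate(limit=LIMIT):
        """Return the final committed bids of the two agents and the winner."""
        bids = [0, 0]           # current committed bid of agent 0 and agent 1
        turn = 0                # whose turn it is to outbid
        while True:
            next_bid = bids[1 - turn] + STEP
            # An agent quits only when the next bid would exceed its cap.
            if next_bid > limit:
                return bids, 1 - turn   # the other agent wins the prize
            bids[turn] = next_bid
            turn = 1 - turn

    bids, winner = simulate()
    total_paid = sum(bids)
    print(f"winner: agent {winner}, bids: {bids}, total paid: {total_paid}")
    print(f"auctioneer profit: {total_paid - PRIZE}")
    ```

    In the toy run the two agents together pay far more than the $1.00 prize, which is the escalation phenomenon; the paper's point is that in the truly infinite game such continued bidding can nonetheless be subgame-perfect under the right belief about the opponent.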

    Logical omniscience and classical logic


    A Defeasible Calculus for Zetetic Agents

    The study of defeasible reasoning unites epistemologists with those working in AI, in part because both are interested in epistemic rationality. While epistemic rationality is traditionally thought to govern the formation and (with)holding of beliefs, it may also apply to the interrogative attitudes associated with our core epistemic practice of inquiry, such as wondering, investigating, and curiosity. Since generally intelligent systems should be capable of rational inquiry, AI researchers have a natural interest in the norms that govern interrogative attitudes. Following its recent coinage, we use the term 'zetetic' to refer to the properties and norms associated with the capacity to inquire. In this paper, we argue that zetetic norms can be modeled via defeasible inferences to and from questions, a.k.a. erotetic inferences, in a manner similar to the way norms of epistemic rationality are represented by defeasible inference rules. We offer a sequent calculus that accommodates the unique features of 'erotetic defeat' and that exhibits the computational properties needed to inform the design of zetetic agents. The calculus presented here is an improved version of the one presented in Millson (2019), extended to cover a new class of defeasible erotetic inferences.
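
    The defeasible-rule idea in the abstract can be illustrated with a deliberately naive sketch (this is not the paper's sequent calculus; the `Rule` representation, the `closure` procedure, and the umbrella example are all invented for illustration). A rule fires only when its premises are established and none of its defeaters are, so adding information can block a conclusion; a conclusion prefixed with `?` stands in for a raised question, i.e. an erotetic conclusion:

    ```python
    # Minimal sketch of defeasible inference: a rule licenses its conclusion
    # from its premises unless one of its defeaters is also established.

    from dataclasses import dataclass

    @dataclass
    class Rule:
        premises: frozenset                  # facts required to fire
        conclusion: str                      # what the rule tentatively licenses
        defeaters: frozenset = frozenset()   # facts that block the rule

    def closure(facts, rules):
        """Forward-chain defeasibly: apply undefeated rules to a fixed point."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for r in rules:
                if (r.premises <= derived
                        and not (r.defeaters & derived)
                        and r.conclusion not in derived):
                    derived.add(r.conclusion)
                    changed = True
        return derived

    rules = [
        # Seeing clouds licenses raising a question ("?...") rather than a
        # belief -- unless the question is already settled.
        Rule(frozenset({"clouds"}), "?where_is_umbrella",
             defeaters=frozenset({"umbrella_in_hand"})),
    ]

    print(closure({"clouds"}, rules))                      # question is raised
    print(closure({"clouds", "umbrella_in_hand"}, rules))  # question is defeated
    ```

    The nonmonotonicity shows in the two calls: the question is licensed from clouds alone but blocked once the umbrella is in hand, which is the sense in which the inference to the question is defeasible.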

    Methodology of Algorithm Engineering

    Research on algorithms has drastically increased in recent years. Various sub-disciplines of computer science investigate algorithms according to different objectives and standards. This plurality of the field has led to various methodological advances that have not yet been transferred to neighboring sub-disciplines. The central roadblock to better knowledge exchange is the lack of a common methodological framework integrating the perspectives of these sub-disciplines. The objective of this paper is to develop a research framework for algorithm engineering. Our framework builds on three areas discussed in the philosophy of science: ontology, epistemology, and methodology. In essence, the ontology describes algorithm engineering as being concerned with algorithmic problems, algorithmic tasks, algorithm designs, and algorithm implementations. The epistemology describes the body of knowledge of algorithm engineering as a collection of prescriptive and descriptive knowledge, residing in World 3 of Popper's Three Worlds model. The methodology refers to the steps by which we can systematically enhance our knowledge of specific algorithms. The framework helps us to identify and discuss various validity concerns relevant to any algorithm engineering contribution. In this way, our framework has important implications for researching algorithms in various areas of computer science.

    Exposing Fake Logic

    Exposing Fake Logic by Avi Sion is a collection of essays written after the publication of his book A Fortiori Logic, in which he critically responds to derivative work by other authors who claim to know better. This is more than just polemics: it allows further clarification of a fortiori logic and of logic in general. The collection includes essays on: a fortiori argument (in general and in Judaism); Luis Duarte D'Almeida; Mahmoud Zeraatpishe; Michael Avraham (et al.); an anonymous reviewer of BDD (a Bar Ilan University journal); and self-publishing.

    Human-Intelligence and Machine-Intelligence Decision Governance Formal Ontology

    Since the beginning of the human race, decision making and rational thinking have played a pivotal role in whether mankind exists and succeeds or fails and becomes extinct. Self-awareness, cognitive thinking, creativity, and emotional capacity have allowed us to advance civilization and to take further steps toward achieving previously unreachable goals. From the invention of the wheel to the rocket, and from the telegraph to the satellite, all technological ventures have gone through many upgrades and updates. Recently, increasing computer CPU power and memory capacity have contributed to smarter and faster computing appliances that, in turn, have accelerated the integration and use of artificial intelligence (AI) in organizational processes and everyday life. Artificial intelligence can now be found in a wide range of organizational systems, including healthcare and medical diagnosis, automated stock trading, robotic production, telecommunications, space exploration, and homeland security. Self-driving cars and drones are just the latest extensions of AI. This thrust of AI into organizations and daily life rests on the AI community's unstated assumption that human learning and intelligence can be completely replicated in AI. Unfortunately, even today the AI community is not close to completely coding and emulating human intelligence in machines. Despite the digital and technological revolution at the applications level, there has been little to no research addressing the question of decision-making governance in human-intelligence and machine-intelligence (HI-MI) systems. There also exist no foundational, core reference, or domain ontologies for HI-MI decision governance systems.
    Further, in the absence of an expert reference base or body of knowledge (BoK) integrated with an ontological framework, decision makers must rely on best practices or standards that differ from organization to organization and from government to government, contributing to system failures in complex, mission-critical situations. It is still debatable whether and when human or machine decision capacity should govern, or when a joint human-intelligence and machine-intelligence (HI-MI) decision capacity is required in a given decision situation. To address this deficiency, this research establishes a formal, top-level foundational ontology of HI-MI decision governance, in parallel with a grounded-theory-based body of knowledge, which together form the theoretical foundation of a systemic HI-MI decision governance framework.