4,514 research outputs found

    Course of value distinguishes the intensionality of programming languages

    In this contribution, we propose to study the transformation of first-order programs by course-of-value recursion. Our motivation is to show that this transformation provides a separation criterion for the intensionality of sets of programs. As an illustration, we consider two variants of the multiset path ordering: in the first, terms in recursive calls are compared with respect to the subterm property; in the second, with respect to embedding. Under a quasi-interpretation, both characterize Ptime, the latter characterization being a new result. Once the transformation is applied, we get respectively Ptime and Pspace, thus proving that the latter set of programs contains more algorithms.
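As a rough illustration of the recursion scheme the abstract transforms, here is a minimal Python sketch of course-of-value recursion, where the step for f(n) may consult the whole history f(0), ..., f(n-1) rather than only f(n-1). The evaluator and its names are my own; the paper works over first-order term rewriting systems, not Python.

```python
def course_of_value(step, base, n):
    """Evaluate a course-of-value definition: step(i, history) sees all
    earlier values history[0..i-1], not just the previous one."""
    history = [base]
    for i in range(1, n + 1):
        history.append(step(i, history))
    return history[n]

# Fibonacci is the classic example: each value reads two earlier
# entries of the history.
def fib_step(i, history):
    return 1 if i == 1 else history[i - 1] + history[i - 2]

def fib(n):
    return course_of_value(fib_step, 0, n)
```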

    Apperceptive patterning: Artefaction, extensional beliefs and cognitive scaffolding

    In “Psychopower and Ordinary Madness” my ambition, as it relates to Bernard Stiegler’s recent literature, was twofold: 1) critiquing Stiegler’s work on exosomatization and artefactual posthumanism—or, more specifically, nonhumanism—to problematize approaches to media archaeology that rely upon technical exteriorization; 2) challenging how Stiegler engages with Giuseppe Longo and Francis Bailly’s conception of negative entropy. These efforts were directed by a prevalent techno-cultural qualifier: the rise of Synthetic Intelligence (including neural nets, deep learning, predictive processing and Bayesian models of cognition). This paper continues this project but first directs a critical analytic lens at the Derridean practice of the ontologization of grammatization from which Stiegler emerges, while also distinguishing how metalanguages operate in relation to object-oriented environmental interaction by way of inferentialism. Stalking continental (Kapp, Simondon, Leroi-Gourhan, etc.) and analytic traditions (e.g., Carnap, Chalmers, Clark, Sutton, Novaes, etc.), we move from artefacts to AI and Predictive Processing so as to link theories related to technicity with philosophy of mind. Simultaneously drawing forth Robert Brandom’s conceptualization of the roles that commitments play in retrospectively reconstructing the social experiences that lead to our endorsement(s) of norms, we complement this account with Reza Negarestani’s deprivatized account of intelligence while analyzing the equipollent role between language and media (both digital and analog).

    Information-Theoretic Philosophy of Mind


    Semantics and the Computational Paradigm in Cognitive Psychology

    There is a prevalent notion among cognitive scientists and philosophers of mind that computers are merely formal symbol manipulators, performing the actions they do solely on the basis of the syntactic properties of the symbols they manipulate. This view of computers has allowed some philosophers to divorce semantics from computational explanations. Semantic content, then, becomes something one adds to computational explanations to get psychological explanations. Other philosophers, such as Stephen Stich, have taken a stronger view, advocating doing away with semantics entirely. This paper argues that a correct account of computation requires us to attribute content to computational processes in order to explain which functions are being computed. This entails that computational psychology must countenance mental representations. Since anti-semantic positions are incompatible with computational psychology thus construed, they ought to be rejected. Lastly, I argue that in an important sense, computers are not formal symbol manipulators.

    Implementations, interpretative malleability, value-ladenness and the moral significance of agent-based social simulations

    The focus of social simulation on representing the social world calls for an investigation of whether its implementations are inherently value-laden. In this article, I investigate what kind of thing implementation is in social simulation and consider the extent to which it has moral significance. When the purpose of a computational artefact is simulating human institutions, designers with different value judgements may have rational reasons for developing different implementations. I provide three arguments to show that different implementations amount to taking moral stands via the artefact. First, the meaning of a social simulation is not homogeneous among its users, which indicates that simulations have high interpretive malleability. I treat malleability as the condition for a simulation to serve as a metaphorical vehicle for representing the social world, allowing for different value judgements about the institutional world that the artefact is expected to simulate. Second, simulating the social world involves distinguishing between malfunction of the artefact and representation gaps, which reflect the role of meaning in simulating the social world and how meaning may or may not remain coherent among the models that constitute a single implementation. Third, social simulations are akin to Kroes’ (2012) techno-symbolic artefacts, in which the artefact’s effectiveness relative to a purpose hinges not only on the functional effectiveness of the artefact’s structure, but also on the artefact’s meaning. Meaning, not just technical function, makes implementations morally appraisable relative to a purpose. I investigate Schelling’s model of ethnic residential segregation as an example, in which different implementations amount to taking different moral stands via the artefact.
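As a concrete reference point for the Schelling example, here is a minimal Python sketch of one step of a Schelling-style segregation dynamic. The grid layout, the satisfaction threshold, and the relocation rule are all my own illustrative choices, not the article's; the article's point is precisely that such implementation choices carry value judgements.

```python
import random

def like_fraction(grid, r, c):
    """Fraction of occupied neighbours that share the agent's group."""
    me = grid[r][c]
    same = total = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            rr, cc = r + dr, c + dc
            if 0 <= rr < len(grid) and 0 <= cc < len(grid[0]) and grid[rr][cc] is not None:
                total += 1
                same += grid[rr][cc] == me
    return same / total if total else 1.0

def step(grid, threshold, rng):
    """Move each unhappy agent (like-fraction below threshold) to a
    random empty cell; return the number of moves made."""
    empties = [(r, c) for r, row in enumerate(grid)
               for c, v in enumerate(row) if v is None]
    moves = 0
    for r, row in enumerate(grid):
        for c, v in enumerate(row):
            if v is not None and like_fraction(grid, r, c) < threshold and empties:
                er, ec = empties.pop(rng.randrange(len(empties)))
                grid[er][ec], grid[r][c] = v, None
                empties.append((r, c))
                moves += 1
    return moves
```

Even this tiny sketch has to decide what counts as a neighbour, how tolerant agents are, and where unhappy agents go; a different implementation of any of these is a different claim about the institutional world being simulated.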

    Synchronous Online Philosophy Courses: An Experiment in Progress

    There are two main ways to teach a course online: synchronously or asynchronously. In an asynchronous course, students can log on at their convenience and do the course work. In a synchronous course, there is a requirement that all students be online at specific times, to allow for a shared course environment. In this article, the author discusses the strengths and weaknesses of synchronous online learning for the teaching of undergraduate philosophy courses. The author discusses specific strategies and technologies he uses in the teaching of online philosophy courses. In particular, the author discusses how he uses videoconferencing to create a classroom-like environment in an online class.

    Specialization in i* strategic rationale diagrams

    ER 2012 Best Student Paper Award. The specialization relationship is offered by the i* modeling language through the is-a construct defined over actors (a subactor is-a superactor). Although the overall meaning of this construct is highly intuitive, its semantics at the fine-grained level of strategic rationale (SR) diagrams is not defined, seriously hampering its appropriate use. In this paper we provide a formal definition of the specialization relationship at the level of i* SR diagrams. We root our proposal in existing work in conceptual modeling in general, and object orientation in particular. We also use the results of a survey conducted in the i* community that provides some hints about what i* modelers expect from specialization. As a consequence of this twofold analysis, we identify, define and specify two specialization operations, extension and refinement, that can be applied over SR diagrams. Correctness conditions for them are also clearly stated. The result of our work is a formal proposal of specialization for i* that allows its use in a well-defined manner.
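To make the two operations concrete, here is a hypothetical set-based sketch in Python. Representing an SR diagram as a flat set of intentional elements, and the names `extend` and `refine`, are my own simplification for illustration, not the paper's formal definitions, which operate over full SR diagrams with links and correctness conditions.

```python
from dataclasses import dataclass, field

@dataclass
class Actor:
    """An actor owning a (simplified) set of intentional elements."""
    name: str
    elements: set = field(default_factory=set)

def extend(parent, name, new_elements):
    """Extension: the subactor inherits everything and adds new elements."""
    return Actor(name, parent.elements | set(new_elements))

def refine(parent, name, replacements):
    """Refinement: the subactor replaces an inherited element by a more
    specific one (mapping: general element -> specific element)."""
    elems = set(parent.elements)
    for general, specific in replacements.items():
        if general not in elems:
            raise ValueError(f"{general!r} is not inherited from {parent.name}")
        elems.remove(general)
        elems.add(specific)
    return Actor(name, elems)
```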

    On Automating the Doctrine of Double Effect

    The doctrine of double effect (DDE) is a long-studied ethical principle that governs when actions that have both positive and negative effects are to be allowed. The goal in this paper is to automate DDE. We briefly present DDE, and use a first-order modal logic, the deontic cognitive event calculus, as our framework to formalize the doctrine. We present formalizations of increasingly stronger versions of the principle, including what is known as the doctrine of triple effect. We then use our framework to successfully simulate scenarios that have been used to test for the presence of the principle in human subjects. Our framework can be used in two different modes: one can use it to build DDE-compliant autonomous systems from scratch, or one can use it to verify that a given AI system is DDE-compliant, by applying a DDE layer on an existing system or model. For the latter mode, the underlying AI system can be built using any architecture (planners, deep neural networks, Bayesian networks, knowledge-representation systems, or a hybrid); as long as the system exposes a few parameters in its model, such verification is possible. The role of the DDE layer here is akin to a (dynamic or static) software verifier that examines existing software modules. Finally, we end by presenting initial work on how one can apply our DDE layer to the STRIPS-style planning model, and to a modified POMDP model. This is preliminary work to illustrate the feasibility of the second mode, and we hope that our initial sketches can be useful for other researchers in incorporating DDE in their own frameworks. Comment: 26th International Joint Conference on Artificial Intelligence 2017; Special Track on AI & Autonomy.
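As a toy rendering of the clauses the paper formalizes, here is a propositional Python sketch. The field names and the proportionality check are my informal reading of the classical doctrine, not the paper's deontic cognitive event calculus formalization.

```python
def dde_permits(act):
    """Toy check of the classical DDE clauses over a candidate action."""
    return (act["act_not_forbidden"]                  # the act itself is neutral or good
            and act["good_intended"]                  # the good effect is what is intended
            and not act["harm_intended"]              # the harm is foreseen, never intended
            and not act["harm_is_means"]              # the harm does not produce the good
            and act["good_utility"] >= act["harm_cost"])  # proportionality

# The classic test pair: diverting a trolley (harm is a side effect)
# versus pushing a bystander (harm is the means to the good).
switch = dict(act_not_forbidden=True, good_intended=True, harm_intended=False,
              harm_is_means=False, good_utility=5, harm_cost=1)
push = dict(switch, harm_is_means=True)
```

A verification layer of the kind the paper describes would ask an existing planner or model for these judgements about each candidate action, rather than hard-coding them as dictionaries.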