Course of value distinguishes the intensionality of programming languages
In this contribution, we propose to study the transformation of first-order programs by course-of-value recursion. Our motivation is to show that this transformation provides a separation criterion for the intensionality of sets of programs. As an illustration, we consider two variants of the multiset path ordering: in the first, terms in recursive calls are compared with respect to the subterm property; in the second, with respect to embedding. Under a quasi-interpretation, both characterize Ptime, the latter characterization being a new result. Once the transformation is applied, we obtain respectively Ptime and Pspace, thus proving that the latter set of programs contains more algorithms.
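Course-of-value recursion allows each step of a definition to consult the entire history of previously computed values, not only the immediate predecessor. As a minimal illustration of the scheme (a toy sketch, not the paper's formal first-order term-rewriting setting), Fibonacci computed against an explicit history table:

```python
def fib_course_of_value(n):
    """Fibonacci via course-of-value recursion: each step may
    look back at any previously computed value, not just f(n-1)."""
    history = []  # history[k] holds fib(k) for every k already computed
    for k in range(n + 1):
        if k < 2:
            history.append(k)
        else:
            # the recursive step consults earlier entries of the history
            history.append(history[k - 1] + history[k - 2])
    return history[n]
```

For example, `fib_course_of_value(10)` returns 55. The point of the abstract's transformation is that simulating this history access within a first-order program changes which algorithms, not which functions, the ordering-based criteria accept.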
Apperceptive patterning: Artefaction, extensional beliefs and cognitive scaffolding
In “Psychopower and Ordinary Madness” my ambition, as it relates to Bernard Stiegler’s recent literature, was twofold: 1) critiquing Stiegler’s work on exosomatization and artefactual posthumanism—or, more specifically, nonhumanism—to problematize approaches to media archaeology that rely upon technical exteriorization; 2) challenging how Stiegler engages with Giuseppe Longo and Francis Bailly’s conception of negative entropy. These efforts were directed by a prevalent techno-cultural qualifier: the rise of Synthetic Intelligence (including neural nets, deep learning, predictive processing and Bayesian models of cognition). This paper continues this project but first directs a critical analytic lens at the Derridean practice of the ontologization of grammatization from which Stiegler emerges, while also distinguishing how metalanguages operate in relation to object-oriented environmental interaction by way of inferentialism. Stalking continental (Kapp, Simondon, Leroi-Gourhan, etc.) and analytic traditions (e.g., Carnap, Chalmers, Clark, Sutton, Novaes, etc.), we move from artefacts to AI and Predictive Processing so as to link theories related to technicity with philosophy of mind. Simultaneously drawing forth Robert Brandom’s conceptualization of the roles that commitments play in retrospectively reconstructing the social experiences that lead to our endorsement(s) of norms, we complement this account with Reza Negarestani’s deprivatized account of intelligence while analyzing the equipollent role between language and media (both digital and analog).
Semantics and the Computational Paradigm in Cognitive Psychology
There is a prevalent notion among cognitive scientists and philosophers of mind that computers are merely formal symbol manipulators, performing the actions they do solely on the basis of the syntactic properties of the symbols they manipulate. This view of computers has allowed some philosophers to divorce semantics from computational explanations. Semantic content, then, becomes something one adds to computational explanations to get psychological explanations. Other philosophers, such as Stephen Stich, have taken a stronger view, advocating doing away with semantics entirely. This paper argues that a correct account of computation requires us to attribute content to computational processes in order to explain which functions are being computed. This entails that computational psychology must countenance mental representations. Since anti-semantic positions are incompatible with computational psychology thus construed, they ought to be rejected. Lastly, I argue that in an important sense, computers are not formal symbol manipulators.
Implementations, interpretative malleability, value-ladenness and the moral significance of agent-based social simulations
The focus of social simulation on representing the social world calls for an investigation of whether its implementations are inherently value-laden. In this article, I investigate what kind of thing implementation is in social simulation and consider the extent to which it has moral significance. When the purpose of a computational artefact is simulating human institutions, designers with different value judgements may have rational reasons for developing different implementations. I provide three arguments to show that different implementations amount to taking moral stands via the artefact. First, the meaning of a social simulation is not homogeneous among its users, which indicates that simulations have high interpretive malleability. I place malleability as the condition of simulation to be a metaphorical vehicle for representing the social world, allowing for different value judgements about the institutional world that the artefact is expected to simulate. Second, simulating the social world involves distinguishing between malfunction of the artefact and representation gaps, which reflect the role of meaning in simulating the social world and how meaning may or may not remain coherent among the models that constitute a single implementation. Third, social simulations are akin to Kroes’ (2012) techno-symbolic artefacts, in which the artefact’s effectiveness relative to a purpose hinges not only on the functional effectiveness of the artefact’s structure, but also on the artefact’s meaning. Meaning, not just technical function, makes implementations morally appraisable relative to a purpose. I investigate Schelling’s model of ethnic residential segregation as an example, in which different implementations amount to taking different moral stands via the artefact.
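Since the article uses Schelling's segregation model as its running example, a toy implementation helps make "different implementations" concrete. The following is a deliberately minimal one-dimensional sketch; the grid shape, the `threshold` parameter, and the relocation rule are illustrative assumptions, not any of the implementations the article analyzes:

```python
import random

def schelling_step(grid, threshold=0.5):
    """One update round of a minimal 1-D Schelling model.
    Cells hold 0 (empty) or an agent type (1 or 2). An agent is
    unhappy when the fraction of like agents among its occupied
    neighbours falls below `threshold`, and then relocates to a
    randomly chosen empty cell."""
    size = len(grid)
    empties = [i for i, c in enumerate(grid) if c == 0]
    for i, agent in enumerate(grid):
        if agent == 0:
            continue
        neigh = [grid[j] for j in (i - 1, i + 1)
                 if 0 <= j < size and grid[j] != 0]
        if neigh and sum(n == agent for n in neigh) / len(neigh) < threshold:
            if empties:
                j = random.choice(empties)
                grid[j], grid[i] = agent, 0  # relocate the unhappy agent
                empties.remove(j)
                empties.append(i)
    return grid
```

Even at this scale the design choices the article discusses appear immediately: what counts as a neighbour, how unhappiness is thresholded, and where a displaced agent may move are all implementation decisions that shape what the artefact "means" as a representation of residential segregation.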
Synchronous Online Philosophy Courses: An Experiment in Progress
There are two main ways to teach a course online: synchronously or asynchronously. In an asynchronous course, students can log on at their convenience and do the course work. In a synchronous course, there is a requirement that all students be online at specific times, to allow for a shared course environment. In this article, the author discusses the strengths and weaknesses of synchronous online learning for the teaching of undergraduate philosophy courses. The author discusses specific strategies and technologies he uses in the teaching of online philosophy courses. In particular, the author discusses how he uses videoconferencing to create a classroom-like environment in an online class.
Specialization in i* strategic rationale diagrams
ER 2012 Best Student Paper Award
The specialization relationship is offered by the i* modeling language through the is-a construct defined over actors (a subactor is-a superactor). Although the overall meaning of this construct is highly intuitive, its semantics at the fine-grained level of strategic rationale (SR) diagrams is not defined, seriously hampering its appropriate use. In this paper, we provide a formal definition of the specialization relationship at the level of i* SR diagrams. We root our proposal in existing work in conceptual modeling in general, and object-orientation in particular. Also, we use the results of a survey conducted in the i* community that provides some hints about what i* modelers expect from specialization. As a consequence of this twofold analysis, we identify, define and specify two specialization operations, extension and refinement, that can be applied over SR diagrams. Correctness conditions for them are also clearly stated. The result of our work is a formal proposal of specialization for i* that allows its use in a well-defined manner.
The Dynamics of Intentions in Collaborative Intentionality
An adequate formulation of collective intentionality is crucial for understanding group activity and for modeling the mental state of participants in such activities. Although work on collective intentionality in philosophy, artificial intelligence, and cognitive science has many points of agreement, several key issues remain under debate. This paper argues that the dynamics of intention – in particular, the inter-related processes of plan-related group decision making and intention updating – play crucial roles in an explanation of collective intentionality. Furthermore, it is in these dynamic aspects that coordinated group activity differs most from individual activity. The paper specifies a model of the dynamics of agent intentions in the context of collaborative activity. Its integrated treatment of group decision making and coordinated updating of group-related intentions fills an important gap in prior accounts of collective intentionality, thus helping to resolve a long-standing debate about the nature of intentions in group activity. The paper also defines an architecture for collaboration-capable computer agents that satisfies the constraints of the model and is a natural extension of the standard architecture for resource-bounded agents operating as individuals. The new architecture is both more principled and more complete than prior architectures for collaborative multi-agent systems.
On Automating the Doctrine of Double Effect
The doctrine of double effect (DDE) is a long-studied ethical principle that governs when actions that have both positive and negative effects are to be allowed. The goal in this paper is to automate DDE. We briefly present DDE, and use a first-order modal logic, the deontic cognitive event calculus, as our framework to formalize the doctrine. We present formalizations of increasingly stronger versions of the principle, including what is known as the doctrine of triple effect. We then use our framework to successfully simulate scenarios that have been used to test for the presence of the principle in human subjects. Our framework can be used in two different modes: one can use it to build DDE-compliant autonomous systems from scratch, or one can use it to verify that a given AI system is DDE-compliant, by applying a DDE layer on an existing system or model. For the latter mode, the underlying AI system can be built using any architecture (planners, deep neural networks, Bayesian networks, knowledge-representation systems, or a hybrid); as long as the system exposes a few parameters in its model, such verification is possible. The role of the DDE layer here is akin to a (dynamic or static) software verifier that examines existing software modules. Finally, we end by presenting initial work on how one can apply our DDE layer to the STRIPS-style planning model, and to a modified POMDP model. This is preliminary work to illustrate the feasibility of the second mode, and we hope that our initial sketches can be useful for other researchers in incorporating DDE in their own frameworks.
Comment: 26th International Joint Conference on Artificial Intelligence 2017; Special Track on AI & Autonom
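As a rough illustration of what a DDE-compliance check might examine, the sketch below encodes the classic clauses of the doctrine as a boolean test over a candidate action. The field names and the scalar utility comparison are simplifying assumptions for illustration only; the paper itself works in a first-order modal logic (the deontic cognitive event calculus), not in this propositional style:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A candidate action with its forecast effects.
    Fields are illustrative placeholders, not the paper's formalization."""
    intrinsically_wrong: bool   # clause 1: the act itself is forbidden
    good_effects: set           # beneficial outcomes the act produces
    bad_effects: set            # harmful outcomes the act produces
    intended_effects: set       # what the agent actually intends
    harm_is_means_to_good: bool # clause 3: harm produces the good
    net_utility: float          # good minus harm on some chosen scale

def dde_permissible(a: Action) -> bool:
    """Informal rendering of the classic DDE clauses:
    (1) the act itself is not forbidden;
    (2) only the good effects are intended, harms merely foreseen;
    (3) the harm is not the means by which the good is produced;
    (4) the good sufficiently outweighs the harm."""
    return (not a.intrinsically_wrong
            and a.intended_effects <= a.good_effects  # no harm intended
            and not a.harm_is_means_to_good
            and a.net_utility > 0)
```

On the stock trolley cases, flipping the switch (the one death is foreseen but not a means) passes this test, while pushing a bystander from the footbridge (the death is the means to saving the five) fails clause (3). A verification layer of the kind the paper describes would extract such effect and intention parameters from an existing planner or POMDP model rather than have them hand-declared.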