On the Role of AI in the Ongoing Paradigm Shift within the Cognitive Sciences
This paper supports the view that the ongoing shift from orthodox to embodied-embedded cognitive science has been significantly influenced by the experimental results generated by AI research. Recently, there has also been a noticeable shift toward enactivism, a paradigm which radicalizes the embodied-embedded approach by placing autonomous agency and lived subjectivity at the heart of cognitive science. Some first steps toward a clarification of the relationship of AI to this further shift are outlined. It is concluded that the success of enactivism in establishing itself as a mainstream cognitive science research program will depend less on progress made in AI research and more on the development of a phenomenological pragmatics.
Companion robots: the hallucinatory danger of human-robot interactions
The advent of the so-called Companion Robots is raising many ethical concerns among scholars and in public opinion. Focusing mainly on robots caring for the elderly, in this paper we analyze these concerns to distinguish those directly ascribable to robotics from those that are pre-existing. One of these is the "deception objection", namely the ethical unacceptability of deceiving the user about the simulated nature of the robot's behaviors. We argue that this charge, as currently formulated, is inconsistent. We then underline the risk that human-robot interaction becomes a hallucinatory relation in which the human subjectifies the robot in a dynamic of meaning-overload. Finally, we analyze the definition of "quasi-other" in relation to the notion of the "uncanny". The goal of this paper is to argue that the main concern about Companion Robots is the simulation of a human-like interaction in the absence of an autonomous robotic horizon of meaning, and that this absence could lead the human to build a hallucinatory reality based on the relation with the robot.
The Unbearable Heaviness of Being in Phenomenologist AI
The aim of this paper is to pin down the misuse of Heidegger's philosophical insights within the discipline of artificial intelligence (AI) and robotics. We argue that a central thesis of phenomenology, in Husserl's words "putting the world between brackets", has led to a positioning in embodied AI that deeply neglects fundamental representational aspects that are necessary for the purpose of building a theory of cognition. The unification of representational and being-in-the-world aspects is necessary for the explanation and realization of complex consciousness phenomena in a cognizer, whether animal or machine. The emphasis on the self (post-cognitivists), on the being (phenomenologists), and on Being (Heidegger's followers) has contributed interesting insights concerning the puzzle of cognition and consciousness. However, it has neglected the necessity, and even denied the possibility, of providing a scientific theory of cognition.
On the other hand, the phenomenologist's separation of the world into two different ones, the scientific and objective world and that of our common and lived experience, is untenable. The claim that any scientific-theoretical world must find its foundation in the so-called life-world is ill-founded. In this paper we propose the basis of a theoretical framework in which only one world, with entities and processes, exists and can be known to a certain degree by the cognitive system. This calls for a unified vision of both ontology and epistemology.
Should Robots Be Like Humans? A Pragmatic Approach to Social Robotics
This paper describes the instrumentalizing aspects of social robots, which motivate the term "pragmatic social robot". In contrast to humanoid robots, pragmatic social robots (PSRs) are defined by their instrumentalizing aspects, which consist of language, skill, and artificial intelligence. These technical aspects of social robots have led to a tendency to attribute selfhood to them, that is, anthropomorphism. Anthropomorphism can raise problems of responsibility as well as ontological problems in human-technology relations. As a result, there is an antinomy in the research and development of pragmatic social robotics, considering that PSRs are expected to achieve similarity with humans in completing tasks. How can we avoid anthropomorphism in the research and development of PSRs while ensuring their flexibility? In response to this issue, I suggest that intuition should be instrumentalized to advance PSRs' social skills. Intuition, as theorized by Henri Bergson and Efraim Fischbein, exceeds the capacity of logical analysis to solve problems. Robots should be like humans in the sense that their instrumentalizing aspects meet the criteria for the value of human social skills.
Cognition in Context: Phenomenology, Situated Robotics and the Frame Problem
The frame problem is the difficulty of explaining how non-magical systems think and act in ways that are adaptively sensitive to context-dependent relevance. Influenced centrally by Heideggerian phenomenology, Hubert Dreyfus has argued that the frame problem is, in part, a consequence of the assumption (made by mainstream cognitive science and artificial intelligence) that intelligent behaviour is representation-guided behaviour. Dreyfus' Heideggerian analysis suggests that the frame problem dissolves if we reject representationalism about intelligence and recognize that human agents realize the property of thrownness (the property of being always already embedded in a context). I argue that this positive proposal is incomplete until we understand exactly how the properties in question may be instantiated in machines like us. So, working within a broadly Heideggerian conceptual framework, I pursue the character of a representation-shunning thrown machine. As part of this analysis, I suggest that the frame problem is, in truth, a two-headed beast. The intra-context frame problem challenges us to say how a purely mechanistic system may achieve appropriate, flexible and fluid action within a context. The inter-context frame problem challenges us to say how a purely mechanistic system may achieve appropriate, flexible and fluid action in worlds in which adaptation to new contexts is open-ended and in which the number of potential contexts is indeterminate. Drawing on the field of situated robotics, I suggest that the intra-context frame problem may be neutralized by systems of special-purpose adaptive couplings, while the inter-context frame problem may be neutralized by systems that exhibit the phenomenon of continuous reciprocal causation. I also defend the view that while continuous reciprocal causation is in conflict with representational explanation, special-purpose adaptive coupling, as well as its associated agential phenomenology, may feature representations.
My proposal has recently been criticized by Dreyfus, who accuses me of propagating a cognitivist misreading of Heidegger, one that, because it maintains a role for representation, leads me seriously astray in my handling of the frame problem. I close by responding to Dreyfus' concerns.
"The end of The Dreyfus affair": (Post)Heideggerian meditations on man, machine and meaning
In this paper, the possibility of developing a Heideggerian solution to the Schizophrenia Problem associated with cognitive technologies is investigated. This problem arises as a result of the computer bracketing emotion from cognition during human-computer interaction and results in human psychic self-amputation. It is argued that in order to solve the Schizophrenia Problem, it is necessary to first solve the 'hard problem' of consciousness, since emotion is at least partially experiential. Heidegger's thought, particularly as interpreted by Hubert Dreyfus, appears relevant in this regard since it ostensibly provides the basis for solving the 'hard problem' via the construction of artificial systems capable of the emergent generation of conscious experience. However, it will be shown that Heidegger's commitment to a non-experiential conception of nature renders this whole approach problematic, thereby necessitating consideration of alternative, post-Heideggerian approaches to solving the Schizophrenia Problem.
Double elevation: Autonomous weapons and the search for an irreducible law of war
What should be the role of law in response to the spread of artificial intelligence in war? Fuelled by both public and private investment, military technology is accelerating towards increasingly autonomous weapons, as well as the merging of humans and machines. Contrary to much of the contemporary debate, this is not a paradigm change; it is the intensification of a central feature in the relationship between technology and war: double elevation, above one's enemy and above oneself. Elevation above one's enemy aspires to spatial, moral, and civilizational distance. Elevation above oneself reflects a belief in rational improvement that sees humanity as the cause of inhumanity and de-humanization as our best chance for humanization. The distance of double elevation is served by the mechanization of judgement. To the extent that judgement is seen as reducible to algorithm, law becomes the handmaiden of mechanization. In response, neither a focus on questions of compatibility nor a call for a 'ban on killer robots' helps in articulating a meaningful role for law. Instead, I argue that we should turn to a long-standing philosophical critique of artificial intelligence, which highlights not the threat of omniscience, but that of impoverished intelligence. Therefore, if there is to be a meaningful role for law in resisting double elevation, it should be law encompassing subjectivity, emotion and imagination, law irreducible to algorithm, a law of war that appreciates situated judgement in the wielding of violence for the collective.
Science Friction: Phenomenology, Naturalism and Cognitive Science
Recent years have seen growing evidence of a fruitful engagement between phenomenology and cognitive science. This paper confronts an in-principle problem that stands in the way of this (perhaps unlikely) intellectual coalition, namely the fact that a tension exists between the transcendentalism that characterizes phenomenology and the naturalism that accompanies cognitive science. After articulating the general shape of this tension, I respond as follows. First, I argue that, if we view things through a kind of neo-McDowellian lens, we can open up a conceptual space in which phenomenology and cognitive science may exert productive constraints on each other. Second, I describe some examples of phenomenological cognitive science that illustrate such constraints in action. Third, I use the mutually constraining relationship at work here as the platform from which to bring to light a domesticated version of the transcendental and a minimal form of naturalism that are compatible with each other.
Naturalizing Dasein. Aporias of the Neo-Heideggerian Approach in Cognitive Science
ABSTRACT: This paper deals with the neo-Heideggerian approach in cognitive science as espoused by Michael Wheeler in his Reconstructing the Cognitive World: The Next Step (2005). According to Wheeler, this next step amounts to incorporating Heideggerian insights bearing on online intelligence: the kind of intelligence exhibited by human agents in embedded, embodied coping. However, this phenomenological reception also implies stripping Heideggerian phenomenology of its overt antinaturalistic and transcendental tendencies. The approach is indeed "neo-Heideggerian" inasmuch as it amounts to a naturalization of phenomenological themes. I attempt to put this naturalizing aspiration to the test, and show that the approach remains "Heideggerian" only superficially.
- …