165 research outputs found

    Dennett’s Theory of the Folk Theory of Consciousness

    It is not uncommon to find assumptions being made about folk psychology in discussions of phenomenal consciousness in philosophy of mind. In this article I consider one example, focusing on what Dan Dennett says about the "folk theory of consciousness." I show that he holds that the folk believe that qualities like colors, which we are acquainted with in ordinary perception, are phenomenal qualities. Nonetheless, the shape of the folk theory is an empirical matter, and in the absence of empirical investigation there is ample room for doubt. Fortunately, experimental evidence on the topic is now being produced by experimental philosophers and psychologists. This article contributes to this growing literature, presenting the results of six new studies on the folk view of colors and pains. I argue that the results count against Dennett's theory of the folk theory of consciousness.

    Consciousness, Meaning and the Future Phenomenology

    Phenomenological states are generally considered sources of intrinsic motivation for autonomous biological agents. In this paper we address the issue of exploiting these states for robust goal-directed systems. We provide an analysis of consciousness in terms of a precise definition of how an agent "understands" the informational flows entering the agent. This model of consciousness and understanding is based on the analysis and evaluation of phenomenological states along potential trajectories in the phase space of the agents. This implies that a possible strategy for building autonomous but useful systems is to embed them with the particular, ad-hoc phenomenology that captures the requirements defining the system's usefulness from a strict requirements-engineering viewpoint.

    The Search for Invertebrate Consciousness

    There is no agreement on whether any invertebrates are conscious, and no agreement on a methodology that could settle the issue. How can the debate move forward? I distinguish three broad types of approach: theory-heavy, theory-neutral and theory-light. Theory-heavy and theory-neutral approaches face serious problems, motivating a middle path: the theory-light approach. At the core of the theory-light approach is a minimal commitment about the relation between phenomenal consciousness and cognition that is compatible with many specific theories of consciousness: the hypothesis that phenomenally conscious perception of a stimulus facilitates, relative to unconscious perception, a cluster of cognitive abilities in relation to that stimulus. This "facilitation hypothesis" can productively guide inquiry into invertebrate consciousness. What is needed? At this stage, not more theory, and not more undirected data gathering. What is needed is a systematic search for consciousness-linked cognitive abilities, their relationships to each other, and their sensitivity to masking.

    The Co-evolution of Matter and Consciousness

    Theories about the evolution of consciousness relate in an intimate way to theories about the distribution of consciousness, which range from the view that only human beings are conscious to the view that all matter is in some sense conscious. Broadly speaking, such theories can be classified into discontinuity theories and continuity theories. Discontinuity theories propose that consciousness emerged only when material forms reached a given stage of evolution, but propose different criteria for the stage at which this occurred. Continuity theories argue that in some primal form consciousness always accompanies matter, and that as matter evolved in form and complexity, consciousness co-evolved, for example into the forms that we now recognise in human beings. Given our limited knowledge of the necessary and sufficient conditions for the presence of human consciousness in human brains, all options remain open. On balance, however, continuity theory appears to be more elegant than discontinuity theory.

    Non-locality of the phenomenon of consciousness according to Roger Penrose

    Roger Penrose is known for his proposals, in collaboration with Stuart Hameroff, for quantum action in the brain. These proposals, which are still recent, have a prior, less-known basis, which is studied in the following work. First, the paper situates the framework from which a mathematical physicist like Penrose proposes to speak about consciousness. It then shows how he understands the possible relationships between computation and consciousness and which criticisms from other authors he endorses. Next, it focuses on the concept of non-locality, so essential to his understanding of consciousness. Through examples such as impossible objects and aperiodic tilings, the study addresses the concept of non-locality as Penrose understands it, and then shows how far he intends to go with that concept. Throughout, the approach is more philosophical than physical.

    Experiments in artificial theory of mind: From safety to story-telling

    © 2018 Winfield. Theory of mind is the term given by philosophers and psychologists for the ability to form a predictive model of self and others. In this paper we focus on synthetic models of theory of mind. We contend, firstly, that such models, especially when tested experimentally, can provide useful insights into cognition, and secondly, that artificial theory of mind can provide intelligent robots with powerful new capabilities, in particular social intelligence for human-robot interaction. This paper advances the hypothesis that simulation-based internal models offer a powerful and realisable, theory-driven basis for artificial theory of mind. Proposed as a computational model of the simulation theory of mind, our simulation-based internal model equips a robot with an internal model of itself and its environment, including other dynamic actors, which can test (i.e., simulate) the robot's next possible actions and hence anticipate the likely consequences of those actions both for itself and others. Although it falls far short of a full artificial theory of mind, our model does allow us to test several interesting scenarios: in some of these, a robot equipped with the internal model interacts with other robots without an internal model but acting as proxy humans; in others, two robots, each with a simulation-based internal model, interact with each other. We outline a series of experiments which each demonstrate some aspect of artificial theory of mind.
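    The anticipation loop the abstract describes (simulate each candidate action, predict its consequences, then act) can be sketched minimally as follows. All names, the toy dynamics, and the safety score are illustrative assumptions for exposition, not Winfield's implementation.

    ```python
    # Minimal sketch of a simulation-based internal model (illustrative only).
    # A robot scores each candidate action by internally simulating its outcome
    # with respect to another actor, then selects the best-scoring action.

    def simulate(world, action):
        """Hypothetical internal world model: predict the next world state."""
        robot_pos, other_pos = world
        return (robot_pos + action, other_pos)  # toy dynamics: action shifts position

    def evaluate(world):
        """Toy safety score: larger separation from the other actor is safer."""
        robot_pos, other_pos = world
        return abs(robot_pos - other_pos)

    def choose_action(world, actions):
        # Simulate every candidate action and anticipate its consequences,
        # picking the action whose predicted outcome scores highest.
        return max(actions, key=lambda a: evaluate(simulate(world, a)))

    world = (0, 2)  # robot at position 0, other actor at position 2
    best = choose_action(world, [-1, 0, +1])
    print(best)     # -1: moving away maximizes predicted separation
    ```

    The same loop generalizes by swapping in a richer simulator and an evaluation function that also scores consequences for the other actors, which is the step that makes the internal model a model of others as well as of self.
    
    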

    With Clear Intention - An Ethical Responsibility Model for Robot Governance

    There is much discussion about super artificial intelligence (AI) and autonomous machine learning (ML) systems, or learning machines (LMs). Yet the reality of thinking robotics still seems far on the horizon. It is one thing to define AI in light of human intelligence, citing the remoteness between ML and human intelligence, but another to understand issues of ethics, responsibility, and accountability in relation to the behavior of autonomous robotic systems within a human society. Due to the apparent gap between a society in which autonomous robots are a reality and present-day reality, many of the efforts placed on establishing robotic governance, and indeed robot law, fall outside the fields of valid scientific research. Work within this area has concentrated on manifestos, special interest groups and popular culture. This article takes a cognitive scientific perspective toward characterizing the nature of what true LMs would entail, i.e., intentionality and consciousness. It then proposes the Ethical Responsibility Model for Robot Governance (ER-RoboGov) as an initial platform, or first iteration, of a model for robot governance that takes the standpoint of LMs being conscious entities. The article utilizes past AI governance model research to map out the key factors of governance from the perspective of autonomous machine learning systems. © 2022 Rousi.

    The Argument from Consciousness and Divine Consciousness

    The paper aims to improve the so-called argument from consciousness, focusing on the first-person perspective as a unique feature of consciousness that opens the floor for a theistic explanation. As a side effect of knowledge arguments, which are necessary to keep a posteriori materialism off bounds, the paper proposes an interpretation of divine knowledge as knowledge of things rather than knowledge of facts.

    Technologies on the stand: Legal and ethical questions in neuroscience and robotics


    From Biological to Synthetic Neurorobotics Approaches to Understanding the Structure Essential to Consciousness (Part 2)

    We have been left with a big challenge: to articulate consciousness and also to prove it in an artificial agent against a biological standard. After introducing Boltuc's h-consciousness in the last paper, we briefly reviewed some salient neurology in order to sketch less a standard than a series of targets for artificial consciousness, "most-consciousness" and "myth-consciousness." With these targets on the horizon, we began reviewing the research program pursued by Jun Tani and colleagues in the isolation of the formal dynamics essential to either. In this paper, we describe Tani's research program in detail, in order to make the clearest case for artificial consciousness in these systems. In the next paper, the third in the series, we will return to Boltuc's naturalistic non-reductionism in light of the neurorobotics models introduced (alongside some others), and evaluate them more completely.