
    Why and How We Are Not Zombies

    A robot that is functionally indistinguishable from us may or may not be a mindless zombie. There will never be any way to know, yet its functional principles will be as close as we can ever get to explaining the mind.

    A Case for Machine Ethics in Modeling Human-Level Intelligent Agents

    This paper focuses on the research field of machine ethics and how it relates to a technological singularity: a hypothesized, futuristic event in which artificial machines attain greater-than-human-level intelligence. One problem related to the singularity centers on whether human values and norms would survive such an event. To help ensure that they do, a number of artificial intelligence researchers have focused on the development of artificial moral agents, that is, machines capable of moral reasoning, judgment, and decision-making. To date, different frameworks for arriving at these agents have been put forward; however, there is no firm consensus on which framework is likely to yield a positive result. With the body of work they have contributed to the study of moral agency, philosophers can add to the growing literature on artificial moral agency. In doing so, they could also consider how that concept might affect other important philosophical concepts.

    On the Matter of Robot Minds

    The view that phenomenally conscious robots are on the horizon often rests on a certain philosophical view about consciousness, one we call "nomological behaviorism." The view entails that, as a matter of nomological necessity, if a robot had exactly the same patterns of dispositions to peripheral behavior as a phenomenally conscious being, then the robot would be phenomenally conscious; indeed, it would have all and only the states of phenomenal consciousness that the phenomenally conscious being in question has. We experimentally investigate whether the folk think that certain (hypothetical) robots made of silicon and steel would have the same conscious states as certain familiar biological beings with the same patterns of dispositions to peripheral behavior as the robots. Our findings provide evidence that the folk largely reject the view that silicon-based robots would have the sensations that they, the folk, attribute to the biological beings in question.

    The Systemic Erasure of the Black/Dark-Skinned Body in Catholic Ethics

    One of the questions I address in my scholarly work is this: What would Catholic theological ethics look like if it took the Black Experience seriously as a dialogue partner? To raise the question, however, is to signal the reality of absence, erasure, and missing voices. The question is necessary only because the Black Experience -- the collective story of African American survival and achievement in a hostile, exploitative, and racist environment -- and the bodies who are the subjects of this experience have all too often been rendered invisible and therefore missing in U.S. Catholic ethical reflection.

    Is consciousness necessary to high-level control systems?

    Building on Bringsjord's (1992, 1994) and Searle's (1992) work, I take it for granted that computational systems cannot be conscious. In order to discuss the possibility that they might nevertheless be able to pass refined versions of the Turing Test, I consider three possible relationships between consciousness and control systems in human-level adaptive agents.

    Commentary on “An alternative to working on machine consciousness”, by Aaron Sloman

    A commentary on a recent paper by Aaron Sloman in which he argues that, in order to make progress in AI, consciousness (and other such unclear common-sense concepts regarding the mind) "should be replaced by more precise and varied architecture-based concepts better suited to specify what needs to be explained by scientific theories". This original vision of philosophical inquiry as the mapping out of 'design-spaces' for a contested concept seeks a holistic, synthetic understanding of what possibilities such spaces embody and how different parameters might structure them in nomic and highly interconnected ways. It therefore reduces neither to "relations of ideas" nor to "matters of fact" in Hume's famous dichotomy. It is also shown to be, in interesting ways, the exact opposite of the current vogue for 'experimental philosophy'.

    Minds, Brains and Programs

    This article can be viewed as an attempt to explore the consequences of two propositions. (1) Intentionality in human beings (and animals) is a product of causal features of the brain. I assume this is an empirical fact about the actual causal relations between mental processes and brains. It says simply that certain brain processes are sufficient for intentionality. (2) Instantiating a computer program is never by itself a sufficient condition of intentionality. The main argument of this paper is directed at establishing this claim. The form of the argument is to show how a human agent could instantiate the program and still not have the relevant intentionality. These two propositions have the following consequences: (3) The explanation of how the brain produces intentionality cannot be that it does it by instantiating a computer program. This is a strict logical consequence of 1 and 2. (4) Any mechanism capable of producing intentionality must have causal powers equal to those of the brain. This is meant to be a trivial consequence of 1. (5) Any attempt literally to create intentionality artificially (strong AI) could not succeed just by designing programs but would have to duplicate the causal powers of the human brain. This follows from 2 and 4.

    A Catholic Perspective on the Ethics of Artificially Providing Food and Water


    The Contribution of Society to the Construction of Individual Intelligence

    It is argued that society is a crucial factor in the construction of individual intelligence; in other words, it is important that intelligence be socially situated in a way analogous to the physical situatedness of robots. Evidence that this may be the case is drawn from developmental linguistics, the social intelligence hypothesis, the complexity of society, the need for self-reflection, and autism. The consequences for the development of artificial social agents are briefly considered. Finally, some challenges for research into socially situated intelligence are highlighted.