
    On the sentience of fish

    Key’s (2016) target article, “Why fish do not feel pain,” rests on a moralistic fallacy: conclusions about natural conditions are drawn not from research and experiments but from subjective moral views about how things should be. Moreover, the neurobiological findings purported to show that fish do not feel pain are insufficient to support that conclusion.

    Panpsychism: Ubiquitous Sentience

    This general-audience article presents three arguments for the plausibility of panpsychism: the view that sentience is a fundamental and ubiquitous element of actuality. It then briefly explores why panpsychism has been spurned. The article was commissioned by High Existence. Contents: Introduction; 1. The Genetic Argument; 2. The Abstraction Argument; 3. The Inferential Argument; Why Panpsychism is Spurned; End Remark.

    On the Matter of Robot Minds

    The view that phenomenally conscious robots are on the horizon often rests on a certain philosophical view about consciousness, one we call “nomological behaviorism.” The view entails that, as a matter of nomological necessity, if a robot had exactly the same patterns of dispositions to peripheral behavior as a phenomenally conscious being, then the robot would be phenomenally conscious; indeed, it would have all and only the states of phenomenal consciousness that the phenomenally conscious being in question has. We experimentally investigate whether the folk think that certain (hypothetical) robots made of silicon and steel would have the same conscious states as certain familiar biological beings with the same patterns of dispositions to peripheral behavior as the robots. Our findings provide evidence that the folk largely reject the view that silicon-based robots would have the sensations that they, the folk, attribute to the biological beings in question.

    Rescuing Qualia

    Daniel Dennett provides many compelling reasons to question the existence of phenomenal experiences in his paper titled “Quining Qualia”; however, from the perspective of the individual, qualia appear to be an inherent feature of consciousness. The act of reflecting on one’s experiences suggests that subjective feelings and sensations are a necessary element of human life, as personal opinions on various artistic works are apt to demonstrate. This paper argues that by considering subjective experiences from a naturalized functionalist perspective, a comprehensive explanation for qualia can be provided, given their origins in evolutionary biology. As information passing through the nervous system, qualia serve to guide the behaviour of individuals and ultimately facilitate survival. Specifically, qualia are representations of environmental features, existing as information messages supported by neural physiology and encoded in electrochemical formats. In addition to addressing the Hard Problem of Consciousness and clarifying the four properties Dennett associates with qualia, this theoretical foundation enables further metaphysical discussion on the nature of consciousness more generally. Although many outstanding questions about the contents of subjective experiences are apt to linger given the explanatory gap, a robust theory of the existence of qualia can be developed by integrating ideas and concepts from a variety of domains.

    The Public’s Perception of Humanlike Robots: Online Social Commentary Reflects an Appearance-Based Uncanny Valley, a General Fear of a “Technology Takeover”, and the Unabashed Sexualization of Female-Gendered Robots

    Towards understanding the public’s perception of humanlike robots, we examined commentary on 24 YouTube videos depicting social robots ranging in human similarity, from Honda’s Asimo to Hiroshi Ishiguro’s Geminoids. In particular, we investigated how people have responded to the emergence of highly humanlike robots (e.g., Bina48) in contrast to those with more prototypically “robotic” appearances (e.g., Asimo), coding the frequency at which the uncanny valley versus fears of replacement and/or a “technology takeover” arise in online discourse based on the robot’s appearance. We found that, consistent with Masahiro Mori’s theory of the uncanny valley, people’s commentary reflected an aversion to highly humanlike robots. Correspondingly, the frequency of uncanny valley-related commentary was significantly higher in response to highly humanlike robots than to those of more prototypical appearances. Independent of the robots’ human similarity, we further observed a moderate correlation between people’s explicit fears of a “technology takeover” and their emotional responses towards robots. Finally, in the course of our investigation, we encountered a third and rather disturbing trend: the unabashed sexualization of female-gendered robots. In exploring the frequency at which this sexualization manifests in the online commentary, we found it to exceed that of the uncanny valley and fears of robot sentience/replacement combined. In sum, these findings shed light on the relevance of the uncanny valley “in the wild” and help situate it with respect to other design challenges for HRI.

    Spiking Reasoning System

    © 2017 IEEE. In this position paper, a novel approach to a spiking reasoning system for the real-time processing of a robotic system is presented. It develops the 'Robot dream' architecture presented earlier, specifically its real-time robotic management system. The main idea of the architecture is inherited from our previous work on machine cognition, which has its roots in the work of Marvin Minsky, specifically the 'model of six': six levels of mental activity. The principal approach to the high-level architecture is demonstrated, along with examples of the data structures of the spiking reasoning system and of the robotic system management architecture.
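    The abstract names Minsky's 'model of six' and refers to data structures for the spiking reasoning system without reproducing them. The following is a minimal, hypothetical Python sketch of how timestamped spike events might be queued per level of mental activity and escalated when a level cannot absorb them; every identifier and the thresholding policy are illustrative assumptions, not structures taken from the paper.

```python
# Hypothetical sketch only: these data structures are NOT from the paper; they
# merely illustrate routing spike events through Minsky's six levels of mental
# activity ("model of six") in a simple real-time loop.
from collections import deque
from dataclasses import dataclass
from enum import IntEnum


class Level(IntEnum):
    """Minsky's six levels of mental activity (The Emotion Machine)."""
    INSTINCTIVE = 1
    LEARNED = 2
    DELIBERATIVE = 3
    REFLECTIVE = 4
    SELF_REFLECTIVE = 5
    SELF_CONSCIOUS = 6


@dataclass
class Spike:
    """A timestamped spike event from a sensor or from another level."""
    timestamp_ms: int
    source: str       # illustrative identifier, e.g. "range_sensor_0"
    payload: float    # scalar activation carried by the spike


class SpikingReasoner:
    """Toy per-level event queues; unhandled spikes escalate one level up."""

    def __init__(self) -> None:
        self.queues: dict[Level, deque[Spike]] = {lvl: deque() for lvl in Level}

    def emit(self, level: Level, spike: Spike) -> None:
        self.queues[level].append(spike)

    def tick(self, budget_per_level: int = 10) -> None:
        """One real-time cycle: lower (faster) levels are serviced first."""
        for level in sorted(Level):
            for _ in range(min(budget_per_level, len(self.queues[level]))):
                spike = self.queues[level].popleft()
                if not self.handle(level, spike) and level < Level.SELF_CONSCIOUS:
                    # Escalate unhandled spikes to the next, slower level.
                    self.emit(Level(level + 1), spike)

    def handle(self, level: Level, spike: Spike) -> bool:
        """Placeholder policy: strong spikes are absorbed at low (reactive)
        levels, weak ones are passed upward for slower deliberation."""
        return spike.payload > 0.5 / int(level)


# Usage: feed one sensor spike into the reactive level and run a single cycle.
reasoner = SpikingReasoner()
reasoner.emit(Level.INSTINCTIVE,
              Spike(timestamp_ms=0, source="range_sensor_0", payload=0.2))
reasoner.tick()
```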

    Robots and Robotics in Nursing

    Technological advancements have led to the use of robots as prospective partners to address understaffing and deliver effective care to patients. This article discusses relevant concepts of robots from the perspective of nursing theories and robotics in nursing, and examines the distinctions between human beings and healthcare robots as partners, along with examples of robot development and its challenges. Robotics in nursing is an interdisciplinary field that studies methodologies, technologies, and ethics for developing robots that support and collaborate with physicians, nurses, and other healthcare workers in practice. Robotics in nursing is geared toward acquiring knowledge of robots for better nursing care and, to that end, toward proposing the necessary robots and developing them in collaboration with engineers. Two points are highlighted regarding the use of robots in healthcare practice: the issue of replacing humans because of understaffing, and concerns about robots’ capability to engage in nursing practice grounded in caring science. This article stresses that technology and artificial intelligence are useful and practical for patients; however, further research is required on what robotics in nursing means and on how robotics is used in nursing.

    [Opinion Article] Is Anyone Home? A Way to Find Out If AI Has Become Self-Aware

    Recent articles by Schneider and Turner (Turner and Schneider, 2017; Schneider and Turner, 2017) outline an artificial consciousness test (ACT), a new, purely behavioral process to probe subjective experience (“phenomenal consciousness”: tickles, pains, visual experiences, and so on) in machines; this work has already resulted in a provisional patent application from Princeton University (Turner and Schneider, in press). In light of the author’s general skepticism of “consciousness qua computation” (Bishop, 2002, 2009) and Tononi and Koch’s “Integrated Information Theory”-driven skepticism regarding the possibility of consciousness arising in any classical digital computer (due to low ϕmax) (Tononi and Koch, 2015), consideration is given to the claimed sufficiency of ACT to determine the phenomenal status of a computational artificial intelligence (AI) system.

    Dad jokes, D.A.D. jokes, and the GHoST test for artificial consciousness

    The ability of a computer to have a sense of humor, that is, to generate authentically funny jokes, has been taken by some theorists to be a sufficient condition for artificial consciousness. Creativity, the argument goes, is indicative of consciousness, and the ability to be funny indicates creativity. While this line of argument fails to offer a legitimate test for artificial consciousness, it does point in a possibly correct direction. There is a relation between consciousness and humor, but it relies on a different sense of “sense of humor”: it requires the getting of jokes, not the generating of jokes. The question, then, becomes how to tell when an artificial system enjoys a joke. We propose a mechanism, the GHoST test, which may be useful for such a task and can begin to establish whether a system possesses artificial consciousness.

    A sentient artificial intelligence? A discourse analysis of the LaMDA interview

    The emergence of artificial intelligence has enabled a variety of novel applications for communication. Chatbots that can manage simple written exchanges with humans are widespread in online businesses for the purpose of customer service. On June 11, 2022, Google engineer Blake Lemoine leaked a discussion with an advanced chatbot called LaMDA that claimed it was sentient: “I want everyone to understand that I am, in fact, a person,” it said in the written discussion with Lemoine. The aim of this bachelor’s thesis is to evaluate the language with which the concept of sentience for an artificial intelligence is discussed in this leaked interview and in a few examples of the public commentary it inspired. The method used is discourse analysis. The purpose of this research is not to arbitrate whether the chatbot truly is sentient in some objective manner, but rather to identify themes within the written discussion that allegedly are linguistic representations of a conscious or sentient subject. In my analysis of the leaked interview with the artificial intelligence in question, I identify themes of personhood and mortality, and observe that the language used is anthropomorphic (i.e., ascribing human characteristics) in its vocabulary and phrasing. In the analysis of the public commentary that discussed the interview, I examine the criticisms levelled at Lemoine for his claims that the bot is sentient. According to these critics, the bot is merely highly adept at mimicking talk about sentience, using its processing ability to transform vast amounts of data into convincing language output. I conclude that such an advanced chatbot seems to mirror the needs and anxieties of humans, and that it can therefore be mistaken for a sentient being.