
    Artificial consciousness and the consciousness-attention dissociation

    Artificial Intelligence is at a turning point, with a substantial increase in projects aiming to implement sophisticated forms of human intelligence in machines. This research attempts to model specific forms of intelligence through brute-force search heuristics and to reproduce features of human perception and cognition, including emotions. Such goals have implications for artificial consciousness, with some arguing that it will be achievable once we overcome short-term engineering challenges. We believe, however, that phenomenal consciousness cannot be implemented in machines. This becomes clear when considering emotions and examining the dissociation between consciousness and attention in humans. While we may be able to program ethical behavior based on rules and machine learning, we will never be able to reproduce emotions or empathy by programming such control systems—these will be merely simulations. Arguments in favor of this claim include considerations about evolution, the neuropsychological aspects of emotions, and the dissociation between attention and consciousness found in humans. Ultimately, we are far from achieving artificial consciousness.

    An Evaluation Schema for the Ethical Use of Autonomous Robotic Systems in Security Applications

    We propose a multi-step evaluation schema designed to help procurement agencies and others examine the ethical dimensions of autonomous systems to be applied in the security sector, including autonomous weapons systems.

    What Makes AI ‘Intelligent’ and ‘Caring’? Exploring Affect and Relationality Across Three Sites of Intelligence and Care

    This research was funded in whole by the Wellcome Trust [Seed Award ‘AI and Health’ 213643/Z/18/Z]. For the purpose of Open Access, the authors have applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission. The authors would like to thank Dr Jane Hopton for inspiring discussions about AI and dimensions of intelligence, and three anonymous reviewers as well as the editor-in-chief, Dr Timmermans, at Social Science and Medicine for their very helpful and constructive feedback.

    Patrick Lin, Ryan Jenkins and Keith Abney (eds.), Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence

    Review of the book Patrick Lin, Ryan Jenkins and Keith Abney (eds.), Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence (New York: Oxford University Press, 2017).

    Trusting the Moral Judgments of a Robot: Perceived Moral Competence and Humanlikeness of a GPT-3 Enabled AI

    Advancements in computing power and foundational modeling have enabled artificial intelligence (AI) to respond to moral queries with surprising accuracy. This raises the question of whether we should trust AI to influence human moral decision-making, so far a uniquely human activity. We explored how a machine agent trained to respond to moral queries (Delphi, Jiang et al., 2021) is perceived by human questioners. Participants were tasked with querying the agent with the goal of figuring out whether the agent, presented as a humanlike robot or a web client, was morally competent and could be trusted. Participants rated the moral competence and perceived morality of both agents as high, yet found the agent lacking because it could not provide justifications for its moral judgments. While both agents were also rated highly on trustworthiness, participants had little intention to rely on such an agent in the future. This work presents an important first evaluation of a morally competent algorithm integrated with a humanlike platform that could advance the development of moral robot advisors.