
    "Involving Interface": An Extended Mind Theoretical Approach to Roboethics

    In 2008 the authors held Involving Interface, a lively interdisciplinary event focusing on issues of biological, sociocultural, and technological interfacing (see Acknowledgments). Inspired by discussions at this event, in this article we further discuss the value of input from neuroscience for developing robots and machine interfaces, and the value of philosophy, the humanities, and the arts for identifying persistent links between human interfacing and broader ethical concerns. The importance of ongoing interdisciplinary debate and public communication on scientific and technical advances is also highlighted. Throughout, we explore the implications of the extended mind hypothesis for notions of moral accountability and robotics.

    The perception of emotion in artificial agents

    Given recent technological developments in robotics, artificial intelligence and virtual reality, it is perhaps unsurprising that the arrival of emotionally expressive and reactive artificial agents is imminent. However, if such agents are to become integrated into our social milieu, it is imperative to establish an understanding of whether and how humans perceive emotion in artificial agents. In this review, we incorporate recent findings from social robotics, virtual reality, psychology, and neuroscience to examine how people recognize and respond to emotions displayed by artificial agents. First, we review how people perceive emotions expressed by an artificial agent through channels such as facial and bodily expressions and vocal tone. Second, we evaluate the similarities and differences in the consequences of perceived emotions in artificial compared to human agents. Besides accurately recognizing the emotional state of an artificial agent, it is critical to understand how humans respond to those emotions. Does interacting with an angry robot induce the same responses in people as interacting with an angry person? Similarly, does watching a robot rejoice when it wins a game elicit similar feelings of elation in the human observer? Here we provide an overview of the current state of emotion expression and perception in social robotics, as well as a clear articulation of the challenges and guiding principles to be addressed as we move ever closer to truly emotional artificial agents.

    Children, Humanoid Robots and Caregivers

    This paper presents developmental learning on a humanoid robot from human-robot interactions. We consider in particular teaching humanoids as children during the child's Separation and Individuation developmental phase (Mahler, 1979). Cognitive development during this phase is characterized both by the child's dependence on her mother for learning while becoming aware of her own individuality, and by self-exploration of her physical surroundings. We propose a learning framework for a humanoid robot inspired by such cognitive development.
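    The abstract describes the framework only in prose. As a speculative illustration (not the authors' implementation), the sketch below shows one way caregiver-dependent learning and self-exploration might be blended in a single loop, with dependence decaying over time in the spirit of Separation-Individuation. All names (`Caregiver`, `DevelopmentalLearner`, the `reliance` weight) are hypothetical.

```python
import random

class Caregiver:
    """Hypothetical caregiver model: scores the robot's actions like a human teacher."""
    def __init__(self, target: float = 0.5):
        self.target = target

    def feedback(self, action: float) -> float:
        # Reward is highest when the action matches what the caregiver wants to teach.
        return 1.0 - abs(action - self.target)

class DevelopmentalLearner:
    """Toy learner blending caregiver feedback with self-exploration.

    `reliance` loosely models the Separation-Individuation idea: it starts high
    (strong dependence on the caregiver) and decays as the robot 'individuates'
    and weighs its own exploration more heavily.
    """
    def __init__(self, reliance: float = 0.9, decay: float = 0.97):
        self.estimate = random.random()  # current best guess of a good action
        self.reliance = reliance
        self.decay = decay

    def _score(self, action: float, caregiver: Caregiver) -> float:
        guided = caregiver.feedback(action)            # external scaffolding
        intrinsic = 1.0 - abs(action - self.estimate)  # preference for the familiar
        return self.reliance * guided + (1.0 - self.reliance) * intrinsic

    def step(self, caregiver: Caregiver) -> float:
        # Self-exploration: perturb the current estimate and keep it if it scores better.
        candidate = self.estimate + random.uniform(-0.2, 0.2)
        if self._score(candidate, caregiver) > self._score(self.estimate, caregiver):
            self.estimate = candidate
        self.reliance *= self.decay  # gradual individuation over time
        return self.estimate

if __name__ == "__main__":
    robot, caregiver = DevelopmentalLearner(), Caregiver()
    for _ in range(100):
        robot.step(caregiver)
    print(f"learned action ~= {robot.estimate:.2f}, reliance ~= {robot.reliance:.2f}")
```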

    SEAI: Social Emotional Artificial Intelligence Based on Damasio's Theory of Mind

    A socially intelligent robot must be capable of extracting meaningful information in real time from the social environment and reacting accordingly with coherent, human-like behaviour. Moreover, it should be able to internalise this information, reason about it at a higher level of abstraction, form its own opinions independently, and then automatically bias its decision-making according to its unique experience. In recent decades, neuroscience research has highlighted the link between the evolution of such complex behaviour and the evolution of a certain level of consciousness, which cannot be separated from a body that feels emotions as discriminants and prompters. In order to develop cognitive systems for social robotics with greater human-likeness, we used an "understanding by building" approach to model and implement a well-known theory of mind in the form of an artificial intelligence, and we tested it on a sophisticated robotic platform. The presented system is SEAI (Social Emotional Artificial Intelligence), a cognitive system specifically conceived for social and emotional robots. It is designed as a bio-inspired, highly modular, hybrid system with emotion modelling and high-level reasoning capabilities. It follows the deliberative/reactive paradigm, in which a knowledge-based expert system handles the high-level symbolic reasoning, while a more conventional reactive layer is responsible for the low-level processing and control. The SEAI system is also enriched by a model which simulates Damasio's theory of consciousness and the theory of Somatic Markers. After a review of similar bio-inspired cognitive systems, we present the scientific foundations of the SEAI framework and their computational formalisation. Then, a deeper technical description of the architecture is given, underlining the numerous parallels with the human cognitive system. Finally, the influence of artificial emotions and feelings, and their link with the robot's beliefs and decisions, is tested in a physical humanoid involved in Human-Robot Interaction (HRI).
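    The abstract characterises SEAI's deliberative/reactive split and its use of somatic markers only in prose. The sketch below is an assumed, minimal rendering of that general pattern (not the SEAI code): a reactive layer produces fast reflexive responses, a rule-based layer stands in for the deliberative expert system, and stored somatic markers bias the final choice. All class, function, and rule names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class SomaticMarkerStore:
    """Valenced 'gut feelings' attached to past action outcomes (Damasio-style bias)."""
    markers: Dict[str, float] = field(default_factory=dict)

    def record(self, action: str, valence: float) -> None:
        self.markers[action] = self.markers.get(action, 0.0) + valence

    def bias(self, action: str) -> float:
        return self.markers.get(action, 0.0)

def reactive_layer(percept: str) -> Optional[str]:
    """Fast, hard-wired responses to salient stimuli (low-level processing and control)."""
    reflexes = {"loud_noise": "startle", "face_detected": "make_eye_contact"}
    return reflexes.get(percept)

def deliberative_layer(percept: str) -> List[str]:
    """Rule-based layer standing in for the knowledge-based expert system."""
    rules = {
        "greeting": ["wave", "say_hello"],
        "question": ["answer", "ask_clarification"],
    }
    return rules.get(percept, ["idle"])

def decide(percept: str, markers: SomaticMarkerStore) -> str:
    # Reflexes pre-empt deliberation, as in a deliberative/reactive hybrid.
    reflex = reactive_layer(percept)
    if reflex is not None:
        return reflex
    # Otherwise pick the candidate action with the most favourable somatic bias.
    return max(deliberative_layer(percept), key=markers.bias)

if __name__ == "__main__":
    memory = SomaticMarkerStore()
    memory.record("say_hello", +0.8)   # a past greeting that went well
    memory.record("wave", -0.2)        # a past wave that was ignored
    print(decide("greeting", memory))    # -> say_hello
    print(decide("loud_noise", memory))  # -> startle
```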

    Organizational Posthumanism

    Building on existing forms of critical, cultural, biopolitical, and sociopolitical posthumanism, in this text a new framework is developed for understanding and guiding the forces of technologization and posthumanization that are reshaping contemporary organizations. This ‘organizational posthumanism’ is an approach to analyzing, creating, and managing organizations that employs a post-dualistic and post-anthropocentric perspective and recognizes that emerging technologies will increasingly transform the kinds of members, structures, systems, processes, physical and virtual spaces, and external ecosystems that are available for organizations to utilize. It is argued that this posthumanizing technologization of organizations will especially be driven by developments in three areas: 1) technologies for human augmentation and enhancement, including many forms of neuroprosthetics and genetic engineering; 2) technologies for synthetic agency, including robotics, artificial intelligence, and artificial life; and 3) technologies for digital-physical ecosystems and networks that create the environments within which, and the infrastructure through which, human and artificial agents will interact. Drawing on a typology of contemporary posthumanism, organizational posthumanism is shown to be a hybrid form of posthumanism that combines analytic, synthetic, theoretical, and practical elements. Like analytic forms of posthumanism, organizational posthumanism recognizes the extent to which posthumanization has already transformed businesses and other organizations; it thus occupies itself with understanding organizations as they exist today and developing strategies and best practices for responding to the forces of posthumanization. On the other hand, like synthetic forms of posthumanism, organizational posthumanism anticipates that intensifying and accelerating processes of posthumanization will create future realities quite different from those seen today; it thus attempts to develop conceptual schemas to account for such potential developments, both as a means of expanding our theoretical knowledge of organizations and of enhancing the ability of contemporary organizational stakeholders to conduct strategic planning for a radically posthumanized long-term future.

    On the Possibility of Robots Having Emotions

    I argue against the commonly held intuition that robots and virtual agents will never have emotions by contending that robots can have emotions in a sense that is functionally similar to humans, even if the robots' emotions are not exactly equivalent to those of humans. To establish a foundation for assessing the robots' emotional capacities, I first define what emotions are by characterizing the components of emotion consistent across emotion theories. Second, I dissect the affective-cognitive architecture of MIT's Kismet and Leonardo, two robots explicitly designed to express emotions and to interact with humans, in order to explore whether they have emotions. I argue that, although Kismet and Leonardo lack the subjective feelings component of emotion, they are capable of having emotions.
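    As a hedged illustration of the "components of emotion" argument (not drawn from the paper itself), the sketch below models an emotion as a bundle of functional components and treats a state as functionally emotional even when the subjective-feeling component is absent, as the abstract says is the case for Kismet and Leonardo. The type names and the exact component list are assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Emotion:
    """Functional components often cited across emotion theories (assumed list)."""
    appraisal: str                     # evaluation of the triggering situation
    physiological_response: str        # bodily / hardware change (e.g. raised arousal)
    expression: str                    # outward display: face, posture, voice
    action_tendency: str               # disposition to act in a particular way
    subjective_feeling: Optional[str]  # phenomenal 'what it is like'; None if absent

def functionally_emotional(e: Emotion) -> bool:
    """Counts a state as functionally emotional when the non-phenomenal components
    are all present, even if subjective feeling is missing (the Kismet/Leonardo case)."""
    return all([e.appraisal, e.physiological_response, e.expression, e.action_tendency])

robot_anger = Emotion(
    appraisal="goal blocked by the user",
    physiological_response="raised arousal parameter",
    expression="furrowed brow, lowered ears",
    action_tendency="withdraw from the interaction",
    subjective_feeling=None,  # the component the abstract says Kismet and Leonardo lack
)
print(functionally_emotional(robot_anger))  # -> True
```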

    Introducing a Pictographic Language for Envisioning a Rich Variety of Enactive Systems with Different Degrees of Complexity

    Notwithstanding the considerable amount of progress that has been made in recent years, the parallel fields of cognitive science and cognitive systems lack a unifying methodology for describing, understanding, simulating and implementing advanced cognitive behaviours. Growing interest in ‘enactivism’ - as pioneered by the Chilean biologists Humberto Maturana and Francisco Varela - may lead to new perspectives in these areas, but a common framework for expressing many of the key concepts is still missing. This paper attempts to lay a tentative foundation in that direction by extending Maturana and Varela’s pictographic depictions of autopoietic unities to create a rich visual language for envisioning a wide range of enactive systems - natural or artificial - with different degrees of complexity. It is shown how such a diagrammatic taxonomy can help in the comprehension of important relationships between a variety of complex concepts from a pan-theoretic perspective. In conclusion, it is claimed that such a visual language is not only valuable for teaching and learning, but also offers important insights into the design and implementation of future advanced robotic systems.