
    Human-Robot Dichotomy

    This paper belongs to the area of roboethics and responsible robotics. It discusses the conceptual and practical separation of humans and robots in designing and deploying robots in real-world environments. We argue that humans are often treated as a merely optional component in design thinking, and in some cases even as an obstacle to successful robot performance. Such an approach ranges from viewing humans as a factor that does not belong in the robotics domain, through attempts to 'adjust' humans to robot requirements, to the outright replacement of humans with robots. This separation or exclusion of humans poses serious ethical challenges, including the exclusion of ethics itself from our thinking about robots.

    Shall I trust you? From child-robot interaction to trusting relationships

    Studying trust in the context of human-robot interaction is of great importance given the increasing relevance and presence of robotic agents in various social settings, from educational to clinical. In the present study, we investigated the acquisition, loss, and restoration of trust when preschool and school-age children played in vivo with either a human or a humanoid robot. The relationship between trust and the representation of the quality of attachment relationships, Theory of Mind, and executive function skills was also investigated. Additionally, to outline children's beliefs about the mental competencies of the robot, we evaluated the attribution of mental states to the interactive agent. In general, no substantial differences were found in children's trust in the play partner as a function of agency (human or robot). Nevertheless, 3-year-olds showed a trend toward trusting the human more than the robot, whereas 7-year-olds displayed the reverse pattern. These findings align with results showing that, for children aged 3 and 7 years, the cognitive ability to switch was significantly associated with trust restoration in the human and the robot, respectively. Additionally, supporting previous findings, a dichotomy emerged between the attribution of mental states and children's behavior: while attributing significantly lower mental states to the robot than to the human, in the trust game children behaved similarly toward both agents. Altogether, the results of this study highlight that comparable psychological mechanisms are at play when children establish a novel trustful relationship with a human or a robot partner.
    Furthermore, the findings shed light on the interplay, during development, between children's quality of attachment relationships and the development of a Theory of Mind, which act differently on trust dynamics as a function of the children's age as well as the interactive partner's nature (human vs. robot).

    Scalable Co-Optimization of Morphology and Control in Embodied Machines

    Evolution sculpts both the body plans and nervous systems of agents together over time. In contrast, in AI and robotics, a robot's body plan is usually designed by hand, and control policies are then optimized for that fixed design. The task of simultaneously co-optimizing the morphology and controller of an embodied robot has remained a challenge. In psychology, the theory of embodied cognition posits that behavior arises from a close coupling between body plan and sensorimotor control, which suggests why co-optimizing these two subsystems is so difficult: most evolutionary changes to morphology tend to adversely impact sensorimotor control, leading to an overall decrease in behavioral performance. Here, we further examine this hypothesis and demonstrate a technique for "morphological innovation protection", which temporarily reduces selection pressure on recently morphologically-changed individuals, giving evolution time to "readapt" to the new morphology through subsequent control policy mutations. We show the potential for this method to avoid local optima and converge to similar, highly fit morphologies across widely varying initial conditions, while sustaining fitness improvements further into optimization. While this technique is admittedly only the first of many steps that must be taken to achieve scalable optimization of embodied machines, we hope that theoretical insight into the cause of evolutionary stagnation in current methods will help enable the automation of robot design and behavioral training, while simultaneously providing a testbed to investigate the theory of embodied cognition.
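    The protection scheme described above can be sketched as a toy evolutionary loop. This is a minimal illustration, not the authors' implementation: the one-gene morphology and controller, the fitness function, the mutation rates, and the protect_gens grace period are all invented for the example.

```python
import random

def fitness(morph, ctrl):
    # Toy stand-in for a robot evaluation: controllers score best
    # when matched to the current body plan.
    return -(morph - ctrl) ** 2

def evolve(pop_size=20, generations=100, protect_gens=5, seed=0):
    rng = random.Random(seed)
    # Individual: [morphology gene, controller gene, protection timer]
    pop = [[rng.uniform(-1, 1), rng.uniform(-1, 1), 0] for _ in range(pop_size)]
    for _ in range(generations):
        for ind in pop:
            if rng.random() < 0.2:            # morphological mutation
                ind[0] += rng.gauss(0, 0.3)
                ind[2] = protect_gens         # shield the new body plan
            else:                             # control-policy mutation
                ind[1] += rng.gauss(0, 0.3)
        # Selection ignores protected individuals, giving the controller
        # time to readapt to the mutated morphology before the individual
        # can be culled.
        unprotected = [ind for ind in pop if ind[2] == 0]
        if len(unprotected) >= 2:
            worst = min(unprotected, key=lambda i: fitness(i[0], i[1]))
            best = max(pop, key=lambda i: fitness(i[0], i[1]))
            worst[:] = [best[0], best[1], 0]
        for ind in pop:
            ind[2] = max(0, ind[2] - 1)
    return max(fitness(ind[0], ind[1]) for ind in pop)
```

    Without the timer, a freshly mutated body would usually be culled before any control mutation could compensate; the grace period is what lets body and brain co-adapt.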

    Formulating Consciousness: A Comparative Analysis of Searle's and Dennett's Theory of Consciousness

    This research argues which of Searle's and Dennett's theories of mind better explains human consciousness. Initially, distinctions between dualism and materialism are discussed, covering substance dualism, property dualism, physicalism, and functionalism. In this part, the main issue tackled by the various theories of mind is revealed: the missing connection between input stimulus (neuronal reactions) and behavioral disposition, namely consciousness. The discussion then focuses on Searle's biological naturalism and Dennett's multiple drafts model, as the two attempted to answer this issue. The differences between them are highlighted and analyzed in relation to their roots in dualism and materialism. Finally, the two theories are examined on how each answers the questions on consciousness.

    Symbol grounding and its implications for artificial intelligence

    In response to Searle's well-known Chinese room argument against Strong AI (and more generally, computationalism), Harnad proposed that if the symbols manipulated by a robot were sufficiently grounded in the real world, then the robot could be said to literally understand. In this article, I expand on the notion of symbol groundedness in three ways. Firstly, I show how a robot might select the best set of categories describing the world, given that fundamentally continuous sensory data can be categorised in an almost infinite number of ways. Secondly, I discuss the notion of grounded abstract (as opposed to concrete) concepts. Thirdly, I give an objective criterion for deciding when a robot's symbols become sufficiently grounded for "understanding" to be attributed to it. This deeper analysis of what symbol groundedness actually is weakens Searle's position in significant ways; in particular, whilst Searle may be able to refute Strong AI in the specific context of present-day digital computers, he cannot refute computationalism in general
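    The first point, selecting a good set of discrete categories from fundamentally continuous sensory data, can be illustrated with a clustering sketch. The article's actual selection criterion is not reproduced here; this toy uses 1-D k-means with within-category variance as a stand-in score for how well a symbol set fits the world.

```python
import random

def kmeans_1d(data, k, iters=50):
    """Carve one continuous sensory dimension into k discrete categories."""
    srt = sorted(data)
    # Initialise category prototypes at evenly spaced quantiles.
    centers = [srt[i * len(srt) // k] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in data:
            clusters[min(range(k), key=lambda i: abs(x - centers[i]))].append(x)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    # Within-category variance: a crude score of how well this
    # symbol set captures the structure of the sensory stream.
    cost = sum(min((x - c) ** 2 for c in centers) for x in data)
    return centers, cost

# Two well-separated "sensory" regimes: two categories describe this
# world far better than one, so the two-symbol vocabulary is selected.
rng = random.Random(1)
data = ([rng.gauss(0.0, 0.1) for _ in range(50)]
        + [rng.gauss(5.0, 0.1) for _ in range(50)])
centers, cost_two = kmeans_1d(data, 2)
_, cost_one = kmeans_1d(data, 1)
```

    The design point is only that "best" category sets can be compared by an objective score over the continuous input, even though infinitely many categorisations are possible.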

    Alienation and Recognition - The Δ Phenomenology of the Human–Social Robot Interaction (HSRI)

    A crucial philosophical problem of social robots is to what extent they perform a kind of sociality in interacting with humans. Scholarship diverges between those who maintain that humans and social robots cannot by default have social interactions and those who argue for the possibility of an asymmetric sociality. Against this dichotomy, we argue in this paper for a holistic approach called the "Δ phenomenology" of HSRI (Human–Social Robot Interaction). In the first part of the paper, we analyse the semantics of an HSRI: what leads a human being (x) to assign or receive a meaning of sociality (z) by interacting with a social robot (y). We then question the ontological structure underlying HSRIs, suggesting that HSRIs may lead to a peculiar kind of user alienation. By combining these variables, we formulate some final recommendations for an ethics of social robots.

    A novel plasticity rule can explain the development of sensorimotor intelligence

    Grounding autonomous behavior in the nervous system is a fundamental challenge for neuroscience. In particular, self-organized behavioral development provides more questions than answers. Are there special functional units for curiosity, motivation, and creativity? This paper argues that these features can be grounded in synaptic plasticity itself, without requiring any higher-level constructs. We propose differential extrinsic plasticity (DEP) as a new synaptic rule for self-learning systems and apply it to a number of complex robotic systems as a test case. Without specifying any purpose or goal, seemingly purposeful and adaptive behavior develops, displaying a certain level of sensorimotor intelligence. These surprising results require no system-specific modifications of the DEP rule; rather, they arise from the underlying mechanism of spontaneous symmetry breaking due to the tight brain-body-environment coupling. The new synaptic rule is biologically plausible and would be an interesting target for neurobiological investigation. We also argue that this neuronal mechanism may have been a catalyst in natural evolution.
    Comment: 18 pages, 5 figures, 7 videos
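    The flavor of such a rule can be conveyed with a highly simplified, hypothetical sketch: the weight update correlates derivatives of motor outputs with derivatives of sensor values (a differential-Hebbian term), rather than the raw signals. This is not the published DEP rule, which includes normalization and an inverse-model term omitted here; the network size, gains, and toy environment below are invented for illustration.

```python
import math

def dep_step(W, x_prev, x_curr, kappa=1.0, tau=0.1):
    """One differential-Hebbian update: weight change follows the product
    of motor-output velocity and sensor velocity, with decay toward zero."""
    n = len(x_curr)
    motor = lambda x: [math.tanh(sum(W[i][j] * x[j] for j in range(n)))
                       for i in range(n)]      # bounded motor commands
    y_prev, y = motor(x_prev), motor(x_curr)
    dx = [c - p for c, p in zip(x_curr, x_prev)]   # sensor velocities
    dy = [c - p for c, p in zip(y, y_prev)]        # motor velocities
    for i in range(n):
        for j in range(n):
            W[i][j] += tau * (kappa * dy[i] * dx[j] - W[i][j])
    return y

# Toy closed loop: sensors simply echo the previous motor command,
# standing in for the real brain-body-environment coupling.
W = [[0.0, 0.01], [0.01, 0.0]]
x_prev, x = [0.1, -0.1], [0.12, -0.08]
for _ in range(200):
    y = dep_step(W, x_prev, x)
    x_prev, x = x, y
```

    Because the update feeds on change rather than state, any perturbation that moves the sensors gets amplified through the loop, which is the intuition behind activity emerging without an explicit goal.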