
    A Novel Reinforcement-Based Paradigm for Children to Teach the Humanoid Kaspar Robot

    © The Author(s) 2019. This is the final published version of an article published in Psychological Research, licensed under a Creative Commons Attribution 4.0 International License. Available online at: https://doi.org/10.1007/s12369-019-00607-x
    This paper presents a contribution to the active field of robotics research aimed at supporting the development of social and collaborative skills of children with Autism Spectrum Disorders (ASD). We present a novel experiment in which the classical roles are reversed: the children are the teachers, providing positive or negative reinforcement to the Kaspar robot so that it learns arbitrary associations between toy names and the locations where the toys are positioned. The objective of this work is to develop games that help children with ASD build collaborative skills and also give them a tangible example showing that learning sometimes requires several repetitions. To facilitate this game we developed a reinforcement learning algorithm that enables Kaspar to verbally convey its level of uncertainty during the learning process, so as to better inform the children of the reasons behind the robot's successes and failures. Overall, 30 Typically Developing (TD) children aged 7 to 8 (19 girls, 11 boys) and 6 children with ASD performed 22 sessions of the experiment in groups (16 sessions for TD; 6 for ASD), and managed to teach Kaspar all the associations in 2 to 7 trials. Over the course of the study Kaspar made only rare unexpected associations (2 perseverative errors and 1 win-shift out of 272 trials), primarily due to exploratory choices, and eventually reached minimal uncertainty. The robot's behavior was thus clear and consistent for the children, who all expressed enthusiasm for the experiment. Peer reviewed.
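The teaching loop the abstract describes (a learner that acquires name-to-location associations from positive/negative reinforcement, occasionally explores, and can report its own uncertainty) can be sketched as follows. This is a minimal illustrative sketch, not the authors' actual algorithm: the class name, the learning-rate update, and the entropy-based uncertainty measure are all assumptions introduced here for illustration.

```python
import math
import random

class AssociationLearner:
    """Illustrative sketch: learn toy-name -> location associations from
    positive/negative reinforcement, reporting uncertainty as normalised
    entropy. Not the algorithm from the paper; parameters are assumed."""

    def __init__(self, toys, locations, lr=0.5):
        self.locations = list(locations)
        self.lr = lr  # assumed reinforcement strength
        # Start with a uniform belief over locations for each toy name.
        self.beliefs = {t: {loc: 1.0 / len(self.locations)
                            for loc in self.locations} for t in toys}

    def guess(self, toy, explore=0.1):
        # Usually pick the most likely location; occasionally explore,
        # which is what produced the rare unexpected choices in the study.
        if random.random() < explore:
            return random.choice(self.locations)
        return max(self.beliefs[toy], key=self.beliefs[toy].get)

    def reinforce(self, toy, location, positive):
        # Positive feedback boosts the chosen location, negative suppresses it,
        # then the belief is renormalised to a probability distribution.
        b = self.beliefs[toy]
        b[location] *= (1 + self.lr) if positive else (1 - self.lr)
        total = sum(b.values())
        for loc in b:
            b[loc] /= total

    def uncertainty(self, toy):
        # Normalised entropy in [0, 1]: 1 = no idea, 0 = fully certain.
        # This is the quantity the robot could verbalise to the children.
        probs = self.beliefs[toy].values()
        h = -sum(p * math.log(p) for p in probs if p > 0)
        return h / math.log(len(self.locations))
```

With repeated positive reinforcement of the correct location, the learner's guess stabilises and its reported uncertainty falls toward zero, mirroring the "several repetitions" lesson the game is meant to convey.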

    Autonomous Weapons and the Nature of Law and Morality: How Rule-of-Law-Values Require Automation of the Rule of Law

    While Autonomous Weapons Systems have obvious military advantages, there are prima facie moral objections to using them. By way of general reply to these objections, I point out similarities between the structure of law and morality on the one hand and of automata on the other. I argue that these, plus the fact that automata can be designed to lack the biases and other failings of humans, require us to automate the formulation, administration, and enforcement of law as much as possible, including the elements of law and morality that are operated by combatants in war. I suggest that, ethically speaking, deploying a legally competent robot in some legally regulated realm is not much different from deploying a more or less well-armed, vulnerable, obedient, or morally discerning soldier or general into battle, a police officer onto patrol, or a lawyer or judge into a trial. All feature automaticity in the sense of deputation to an agent we do not then directly control. Such relations are well understood and well regulated in morality and law; so there is little that is philosophically challenging in having robots be some of these agents, excepting the implications of the limits of robot technology at a given time for responsible deputation. I then consider this proposal in light of the differences between two conceptions of law, distinguished by whether each sees law as a set of unambiguous rules whose application is inherently uncontroversial, and I consider the prospects for robotizing law on each. Likewise for the prospects of robotizing moral theorizing and moral decision-making. Finally I identify certain elements of law and morality, noted by the philosopher Immanuel Kant, in which robots can participate only upon being able to set ends and emotionally invest in their attainment. One conclusion is that while affectless autonomous devices might be fit to rule us, they would not be fit to vote with us. For voting is a process for summing felt preferences, and affectless devices would have none to weigh into the sum. Since they do not care which outcomes obtain, they do not get to vote on which ones to bring about.

    Assistive robotics: research challenges and ethics education initiatives

    Assistive robotics is a fast-growing field aimed at helping caregivers in hospitals, rehabilitation centers and nursing homes, as well as empowering people with reduced mobility at home, so that they can autonomously carry out their daily living activities. The need to function in dynamic human-centered environments poses new research challenges: robotic assistants need to have friendly interfaces, be highly adaptable and customizable, be compliant and intrinsically safe around people, and be able to handle deformable materials. Besides technical challenges, assistive robotics also raises ethical challenges, which have led to the emergence of a new discipline: Roboethics. Several institutions are developing regulations and standards, and many ethics education initiatives include content on human-robot interaction and human dignity in assistive situations. In this paper, the state of the art in assistive robotics is briefly reviewed, and educational materials from a university course on Ethics in Social Robotics and AI focusing on the assistive context are presented. Peer reviewed. Postprint (author's final draft).

    The complexity of respecting together: From the point of view of one participant of the 2012 Vancouver NAACI conference

    Dedication: I would like to dedicate this essay to Mort Morehouse, whose intelligence, warmth, and good humour sustain NAACI to this day. I would like, too, to dedicate this essay to Nadia Kennedy who, in her paper “Respecting the Complexity of CI,” suggests that respect for the rich non-reductive emergent memories and understandings that evolve out of participating in the sort of complex communicative interactions that we experienced at the 2012 NAACI conference requires “a turning around and looking back so that we might understand it better.” Thus, though “we cannot grasp the essence of the system in some determinate way, since each description provides a limited view, and portrays some aspect of the system from a specific position inside or outside it, and at a specific point in time,” nonetheless respect requires that we try “to take different ‘snapshots’ of such systems and attempt to make sense of them.” It is as a result of this urging that the following snapshot was attempted. My thanks to Nadia for being such an inspiration, and to all the participants for making this conference such a memorable occasion.

    Achieving Corresponding Effects on Multiple Robotic Platforms: Imitating in Context Using Different Effect Metrics

    Original paper can be found at: www.aisb.org.uk/publications/proceedings/aisb05/3_Imitation_Final.pdf
    One of the fundamental problems in imitation is the correspondence problem: how to map between the actions, states and effects of the model and imitator agents when the embodiment of the agents is dissimilar. In our approach, matching is performed according to different metrics and at different levels of granularity. This paper presents JABBERWOCKY, a system that uses captured data from a human demonstrator to generate appropriate action commands, addressing the correspondence problem in imitation. Toward a characterization of the space of effect metrics, we explore absolute/relative angle and displacement aspects, focusing on the overall arrangement and trajectory of the manipulated objects. Using a captured human demonstration as an example, the system produces a correspondence solution for a given selection of effect metrics and, starting from dissimilar initial object positions, generates action commands that are then executed by two imitator target platforms (in simulation) to imitate successfully.
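The distinction between absolute and relative displacement metrics mentioned in the abstract can be illustrated in a few lines. This sketch is an assumption-laden simplification of the idea (function names and the 2-D position representation are invented here, not taken from JABBERWOCKY): an absolute metric compares final object positions directly, while a relative metric compares displacement vectors, so an imitation that starts from a different initial position can still count as a perfect match.

```python
import math

def absolute_error(demo_end, imit_end):
    """Absolute effect metric (illustrative): distance between the
    demonstrator's and imitator's final object positions."""
    return math.dist(demo_end, imit_end)

def relative_error(demo_start, demo_end, imit_start, imit_end):
    """Relative effect metric (illustrative): distance between the two
    displacement vectors, ignoring where each trajectory started."""
    demo_disp = (demo_end[0] - demo_start[0], demo_end[1] - demo_start[1])
    imit_disp = (imit_end[0] - imit_start[0], imit_end[1] - imit_start[1])
    return math.dist(demo_disp, imit_disp)
```

For example, if the demonstrator moves an object from (0, 0) to (1, 0) and the imitator moves it from (5, 5) to (6, 5), the relative error is zero even though the absolute error is large; which metric is appropriate depends on whether the imitation should reproduce positions or movements.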

    Equal Rights for Zombies?: Phenomenal Consciousness and Responsible Agency

    Intuitively, moral responsibility requires conscious awareness of what one is doing, and why one is doing it, but what kind of awareness is at issue? Neil Levy argues that phenomenal consciousness (the qualitative feel of conscious sensations) is entirely unnecessary for moral responsibility. He claims that only access consciousness (the state in which information, e.g. from perception or memory, is available to an array of mental systems, such that an agent can deliberate and act upon that information) is relevant to moral responsibility. I argue that numerous ethical, epistemic, and neuroscientific considerations entail that the capacity for phenomenal consciousness is necessary for moral responsibility. I focus in particular on considerations inspired by P. F. Strawson, who puts a range of qualitative moral emotions, the reactive attitudes, front and center in the analysis of moral responsibility.

    Social Situatedness: Vygotsky and Beyond

    The concept of ‘social situatedness’, i.e. the idea that the development of individual intelligence requires a social (and cultural) embedding, has recently received much attention in cognitive science and artificial intelligence research. The work of Lev Vygotsky, who put forward this view as early as the 1920s, has influenced the discussion to some degree, but still remains far from well known. This paper therefore aims to give an overview of his cognitive development theory and to discuss its relation to more recent work in primatology and socially situated artificial intelligence, in particular humanoid robotics.
