
    Adaptive Optimal Control in Physical Human-Robot Interaction

    What if prosthetics could be integrated seamlessly with the human body, and robots could help improve the lives of children with disabilities? With physical human-robot interaction appearing in many areas of life, including industry, medicine, and social settings, how these robots interact with humans becomes ever more important: how smoothly the robot can interact with a person will determine how safe and efficient the relationship will be. This thesis investigates an adaptive control method that allows a robot to adapt to the human's actions based on the interaction force, making the relationship more effortless and less strained when the robot has a different goal than the human, as seen in game theory, using multiple techniques that adapt the system. Applications include robots in physical therapy, manufacturing robots that can adapt to a changing environment, and robots teaching people something new, like dancing or learning how to walk again after surgery. The experience gained is an understanding of how the cost function of a system works, including the tracking error, the speed of the system, the robot's effort, and the human's effort. This two-agent system results in a two-agent adaptive impedance model with an input for each agent, which leads to a nontraditional linear quadratic regulator (LQR) that must be separated and then added together, creating a traditional LQR. This experience can be used in the future to help build better safety protocols for manufacturing robots, and the knowledge gained from this research could be used to develop technologies that allow a robot to adapt in order to counteract human error.
    Dissertation/Thesis. Masters Thesis Engineering 201
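    To make the two-agent cost structure concrete, here is a minimal sketch of stacking both agents' inputs so that the two-input problem can be solved as one traditional LQR. This is an illustration under assumed dynamics and weights, not the thesis's actual model; `B_robot`, `B_human`, and the weight matrices are placeholders.

```python
import numpy as np
from scipy.linalg import solve_continuous_are, block_diag

# Illustrative 1-DOF impedance-style dynamics: state x = [tracking error, velocity].
A = np.array([[0.0, 1.0],
              [0.0, -0.5]])            # assumed damping
B_robot = np.array([[0.0], [1.0]])     # robot input channel (assumed)
B_human = np.array([[0.0], [0.8]])     # human input channel (assumed)

# Stack the two inputs: the "nontraditional" two-agent LQR separates into
# per-agent terms but can be solved as one traditional LQR in u = [u_r; u_h].
B = np.hstack([B_robot, B_human])
Q = np.diag([10.0, 1.0])               # weights on tracking error and speed
R = block_diag(1.0, 2.0)               # weights on robot effort vs. human effort

P = solve_continuous_are(A, B, Q, R)   # Riccati solution for the combined problem
K = np.linalg.solve(R, B.T @ P)        # combined optimal gain

K_robot, K_human = K[0:1, :], K[1:2, :]  # split the gain back per agent
print("robot gain:", K_robot)
print("human gain:", K_human)
```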

    Reward Shaping for Building Trustworthy Robots in Sequential Human-Robot Interaction

    Trust-aware human-robot interaction (HRI) has received increasing research attention, as trust has been shown to be a crucial factor for effective HRI. Research in trust-aware HRI has uncovered a dilemma: maximizing task rewards often leads to decreased human trust, while maximizing human trust would compromise task performance. In this work, we address this dilemma by formulating the HRI process as a two-player Markov game and utilizing the reward-shaping technique to improve human trust while limiting the performance loss. Specifically, we show that when the shaping reward is potential-based, the performance loss can be bounded by the potential functions evaluated at the final states of the Markov game. We apply the proposed framework to the experience-based trust model, resulting in a linear program that can be efficiently solved and deployed in real-world applications. We evaluate the proposed framework in a simulation scenario where a human-robot team performs a search-and-rescue mission. The results demonstrate that the proposed framework successfully modifies the robot's optimal policy, enabling it to increase human trust at a minimal task performance cost.
    Comment: In Proceedings of the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
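    As a rough illustration of the potential-based shaping idea (a generic sketch, not the paper's trust model or linear program), the shaped reward adds F(s, s') = gamma * phi(s') - phi(s) to the task reward; in a standard MDP this leaves the greedy optimal policy unchanged. The toy chain MDP and potential function below are assumptions for demonstration only.

```python
import numpy as np

# Toy 5-state chain MDP (assumed): actions 0=left, 1=right, deterministic moves,
# reward 1.0 for being at the rightmost (goal) state.
n_states, gamma = 5, 0.9

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    return s2, (1.0 if s2 == n_states - 1 else 0.0)

phi = np.linspace(0.0, 0.5, n_states)  # assumed potential: higher near the goal

def greedy_policy(shaped):
    V = np.zeros(n_states)
    for _ in range(200):               # value iteration
        Q = np.zeros((n_states, 2))
        for s in range(n_states):
            for a in (0, 1):
                s2, r = step(s, a)
                if shaped:             # shaping term F(s, s') = gamma*phi(s') - phi(s)
                    r += gamma * phi[s2] - phi[s]
                Q[s, a] = r + gamma * V[s2]
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

# Shaping changes the rewards along the way but not the resulting greedy policy.
print(greedy_policy(shaped=False), greedy_policy(shaped=True))
```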

    Regulating Highly Automated Robot Ecologies: Insights from Three User Studies

    Highly automated robot ecologies (HARE), or societies of independent autonomous robots or agents, are rapidly becoming an important part of much of the world's critical infrastructure. As with human societies, regulation, wherein a governing body designs rules and processes for the society, plays an important role in ensuring that HARE meet societal objectives. However, to date, a careful study of interactions between a regulator and HARE has been lacking. In this paper, we report on three user studies that give insights into how to design systems that allow people, acting as the regulatory authority, to effectively interact with HARE. As in the study of political systems in which governments regulate human societies, our studies analyze how interactions between HARE and regulators are affected by regulatory power and individual (robot or agent) autonomy. Our results show that regulator power, decision support, and adaptive autonomy can each diminish the social welfare of HARE, and hint at how these seemingly desirable mechanisms can be designed so that they become part of successful HARE.
    Comment: 10 pages, 7 figures, to appear in the 5th International Conference on Human Agent Interaction (HAI-2017), Bielefeld, Germany

    Playing Pairs with Pepper

    As robots become increasingly prevalent in almost all areas of society, the factors affecting human trust in those robots become increasingly important. This paper investigates the factor of robot attributes, looking specifically at the relationship between anthropomorphism and the human development of trust. To achieve this, an interaction game, Matching the Pairs, was designed and implemented on two robots of varying levels of anthropomorphism, Pepper and Husky. Participants completed both pre- and post-test questionnaires that were compared and analyzed predominantly with quantitative methods, such as paired-sample t-tests. Post-test analyses suggested a positive relationship between trust and anthropomorphism, with 80% of participants confirming that the robots' adoption of facial features assisted in establishing trust. The results also indicated a positive relationship between interaction and trust, with 90% of participants confirming this for both robots post-test.
    Comment: Presented at AI-HRI AAAI-FSS, 2018 (arXiv:1809.06606)
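    For readers unfamiliar with the paired-sample analysis used here, this is a minimal sketch of comparing pre- and post-test trust ratings for the same participants; the scores below are invented placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical pre/post trust ratings (1-7 Likert scale) for the same 10 participants.
pre = np.array([3, 4, 2, 5, 3, 4, 3, 2, 4, 3])
post = np.array([5, 5, 4, 6, 4, 5, 4, 4, 5, 4])

# Paired-sample t-test: tests whether the mean within-participant change differs from zero.
t_stat, p_value = ttest_rel(post, pre)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, mean change = {np.mean(post - pre):.2f}")
```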

    Interaction Histories and Short-Term Memory: Enactive Development of Turn-Taking Behaviours in a Childlike Humanoid Robot

    In this article, an enactive architecture is described that allows a humanoid robot to learn to compose simple actions into turn-taking behaviours while playing interaction games with a human partner. The robot's action choices are reinforced by social feedback from the human in the form of visual attention and measures of behavioural synchronisation. We demonstrate that the system can acquire and switch between behaviours learned through interaction based on social feedback from the human partner. The role of reinforcement based on a short-term memory of the interaction was experimentally investigated. Results indicate that feedback based only on the immediate experience was insufficient to learn longer, more complex turn-taking behaviours. Therefore, some history of the interaction must be considered in the acquisition of turn-taking, which can be handled efficiently through the use of short-term memory.
    Peer reviewed. Final published version.
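    One simple way to picture why a short-term memory helps is an eligibility-trace-style update, where social feedback reinforces the last few actions rather than only the most recent one. The sketch below is a generic illustration under that assumption, not the article's actual architecture; the action names, window size, and reward signal are placeholders.

```python
import random
from collections import deque

actions = ["look", "nod", "wave", "wait"]
values = {a: 0.0 for a in actions}        # learned action preferences
memory = deque(maxlen=4)                  # short-term memory of recent actions
alpha, decay = 0.1, 0.6                   # learning rate and credit decay (assumed)

def choose():
    """Epsilon-greedy choice over current preferences."""
    if random.random() < 0.2:
        return random.choice(actions)
    return max(values, key=values.get)

def reinforce(feedback):
    """Spread the social feedback over the remembered action history."""
    credit = 1.0
    for a in reversed(memory):            # the most recent action gets the most credit
        values[a] += alpha * credit * feedback
        credit *= decay

for step in range(100):
    a = choose()
    memory.append(a)
    feedback = 1.0 if a == "nod" else 0.0  # stand-in for the human attention signal
    reinforce(feedback)

print(values)
```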

    Post-Turing Methodology: Breaking the Wall on the Way to Artificial General Intelligence

    This article offers comprehensive criticism of the Turing test and develops quality criteria for new artificial general intelligence (AGI) assessment tests. It is shown that the prerequisites A. Turing drew upon when reducing personality and human consciousness to “suitable branches of thought” reflected the engineering level of his time. In fact, the Turing “imitation game” employed only symbolic communication and ignored the physical world. This paper suggests that by restricting thinking ability to symbolic systems alone Turing unknowingly constructed “the wall” that excludes any possibility of transition from a complex observable phenomenon to an abstract image or concept. It is, therefore, sensible to factor in new requirements for AI (artificial intelligence) maturity assessment when approaching the Turing test. Such AI must support all forms of communication with a human being, and it should be able to comprehend abstract images and specify concepts as well as participate in social practices.

    A Novel Reinforcement-Based Paradigm for Children to Teach the Humanoid Kaspar Robot

    © The Author(s) 2019. This is the final published version of an article published in Psychological Research, licensed under a Creative Commons Attribution 4.0 International License. Available online at: https://doi.org/10.1007/s12369-019-00607-x
    This paper presents a contribution to the active field of robotics research aimed at supporting the development of social and collaborative skills of children with Autism Spectrum Disorders (ASD). We present a novel experiment in which the classical roles are reversed: in this scenario the children are the teachers, providing positive or negative reinforcement to the Kaspar robot so that the robot learns arbitrary associations between different toy names and the locations where they are positioned. The objective of this work is to develop games that help children with ASD develop collaborative skills and provide them with a tangible example showing that learning sometimes requires several repetitions. To facilitate this game we developed a reinforcement learning algorithm enabling Kaspar to verbally convey its level of uncertainty during the learning process, so as to better inform the children interacting with Kaspar about the reasons behind the robot's successes and failures. Overall, 30 Typically Developing (TD) children aged between 7 and 8 (19 girls, 11 boys) and 6 children with ASD performed 22 sessions (16 for TD; 6 for ASD) of the experiment in groups, and managed to teach Kaspar all the associations in 2 to 7 trials. During the course of the study Kaspar made only rare unexpected associations (2 perseverative errors and 1 win-shift, within a total of 272 trials), primarily due to exploratory choices, and eventually reached minimal uncertainty. Thus the robot's behaviour was clear and consistent for the children, who all expressed enthusiasm for the experiment.
    Peer reviewed
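    A minimal sketch of the idea behind teaching-by-reinforcement with verbalized uncertainty (a generic illustration under assumed names, updates, and thresholds, not the actual Kaspar algorithm): the robot keeps a preference weight for each toy-location association, updates it from the child's feedback, and reports how unsure it still is via the entropy of its preferences.

```python
import math
import random

toys = ["ball", "car", "duck"]
locations = ["box", "shelf", "table"]
# Preference weights for each toy-location association (assumed representation).
prefs = {t: {loc: 1.0 for loc in locations} for t in toys}

def guess(toy):
    """Sample a location in proportion to current preferences (exploratory choice)."""
    weights = prefs[toy]
    r, acc = random.random() * sum(weights.values()), 0.0
    for loc, w in weights.items():
        acc += w
        if r <= acc:
            return loc
    return loc

def uncertainty(toy):
    """Normalized entropy of the preference distribution: 1 = no idea, 0 = certain."""
    total = sum(prefs[toy].values())
    ps = [w / total for w in prefs[toy].values()]
    return -sum(p * math.log(p) for p in ps) / math.log(len(ps))

def feedback(toy, loc, positive):
    prefs[toy][loc] *= 2.0 if positive else 0.5   # child's reinforcement (assumed update)

true_assoc = {"ball": "box", "car": "shelf", "duck": "table"}
for trial in range(30):
    toy = random.choice(toys)
    loc = guess(toy)
    feedback(toy, loc, positive=(loc == true_assoc[toy]))
    u = uncertainty(toy)
    phrase = "I'm still not sure" if u > 0.5 else "I think I know this one"
    print(f"{toy} -> {loc}: {phrase} (uncertainty {u:.2f})")
```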