13 research outputs found

    Human aware robot navigation

    Abstract. Human-aware robot navigation refers to the navigation of a robot in an environment shared with humans such that the humans feel comfortable and natural with the robot's presence. In addition, the robot's navigation should comply with the social norms of the environment. The robot can interact with humans in the environment, for example by avoiding them, approaching them, or following them. In this thesis, we focus specifically on the approach behavior of the robot, while keeping the other use cases in mind. Studying and analyzing how humans move around other humans gives us an idea of the kind of navigation behaviors we expect robots to exhibit. Most previous research does not focus on understanding such behavioral aspects of approaching people. Moreover, straightforward mathematical modeling of complex human behaviors is very difficult. Therefore, in this thesis, we propose an Inverse Reinforcement Learning (IRL) framework based on Guided Cost Learning (GCL) to learn these behaviors from demonstration. After analyzing the CongreG8 dataset, we found that the incoming human tends to form an O-space (circle) with the rest of the group, and that the approaching velocity slows down as the approaching human gets closer to the group. We utilize these findings in our framework, which can learn the optimal reward and policy from the example demonstrations and imitate similar human motion.
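    The thesis' own implementation is not shown here; as a rough, self-contained sketch of the GCL-style maximum-entropy IRL update the abstract describes, the toy below learns a linear cost over two hand-picked features (mean distance to the group and mean speed, motivated by the O-space and slowing-down findings). The trajectory generator, feature choice, and hyperparameters are illustrative assumptions, not the paper's code.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def features(traj):
        """Hypothetical features: mean distance to the group centre (origin)
        and mean speed, reflecting the O-space and deceleration findings."""
        pos, vel = traj
        dist = np.linalg.norm(pos, axis=1).mean()
        speed = np.linalg.norm(vel, axis=1).mean()
        return np.array([dist, speed])

    def cost(theta, traj):
        # Linear cost; GCL proper uses a neural-network cost instead.
        return theta @ features(traj)

    def sample_traj(slow_near_group):
        """Toy trajectories approaching the origin along a fixed direction."""
        t = np.linspace(1.0, 0.2, 10)[:, None]
        pos = t * np.array([[1.0, 1.0]]) + 0.01 * rng.standard_normal((10, 2))
        vel = np.diff(pos, axis=0, prepend=pos[:1])
        if slow_near_group:              # demonstrations decelerate near the group
            vel *= t
        return pos, vel

    demos = [sample_traj(True) for _ in range(20)]     # "expert" demonstrations
    samples = [sample_traj(False) for _ in range(20)]  # naive sampler rollouts

    theta = np.zeros(2)
    for _ in range(200):
        # Maximum-entropy IRL gradient: expert feature mean minus the sampler
        # feature mean, with sampler rollouts re-weighted by exp(-cost)
        # (the importance-sampling correction used in GCL).
        f_demo = np.mean([features(d) for d in demos], axis=0)
        w = np.array([np.exp(-cost(theta, s)) for s in samples])
        w /= w.sum()
        f_samp = np.sum([wi * features(s) for wi, s in zip(w, samples)], axis=0)
        theta -= 0.1 * (f_demo - f_samp)   # ascend the demo log-likelihood

    c_demo = np.mean([cost(theta, d) for d in demos])
    c_samp = np.mean([cost(theta, s) for s in samples])
    ```

    After training, the learned cost should rank the decelerating demonstrations below the naive constant-speed rollouts, which is the signal a policy-optimization step would then exploit.
    
    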

    Our Puppets, Our Selves: Puppetry's Changing Paradigms

    Taking up the topic of puppetry, Orenstein forges connections between Craig's vision of the übermarionette and the rise of "New Puppetry" today. She examines the use of puppets to explore similarities and differences between the technological anxieties of modernists and those of contemporary artists. In addition, she calls for more careful and contextualized attention to Craig's puppet theory, with a close reading of the übermarionette passage in On the Art of the Theatre. Orenstein returns to some of the best-known and most-studied passages and theories from Craig's early work, but considers them from the fresh vantage point of contemporary puppetry scholars and practitioners.

    Technologies on the stand: Legal and ethical questions in neuroscience and robotics


    Seven HCI Grand Challenges

    This article investigates the Grand Challenges that arise in the current and emerging landscape of rapid technological evolution towards more intelligent interactive technologies, coupled with increased and widened societal needs, as well as the individual and collective expectations that HCI, as a discipline, is called upon to address. A perspective oriented to humane and social values is adopted, formulating the challenges in terms of the impact of emerging intelligent interactive technologies on human life at both the individual and societal levels. Seven Grand Challenges are identified and presented in this article: Human-Technology Symbiosis; Human-Environment Interactions; Ethics, Privacy and Security; Well-being, Health and Eudaimonia; Accessibility and Universal Access; Learning and Creativity; and Social Organization and Democracy. Although not exhaustive, they summarize the views and research priorities of an international interdisciplinary group of experts, reflecting different scientific perspectives, methodological approaches and application domains. Each identified Grand Challenge is analyzed in terms of: concept and problem definition; main research issues involved and state of the art; and associated emerging requirements.

    Intent-aligned AI systems deplete human agency: the need for agency foundations research in AI safety

    The rapid advancement of artificial intelligence (AI) systems suggests that artificial general intelligence (AGI) systems may soon arrive. Many researchers are concerned that AIs and AGIs will harm humans via intentional misuse (AI-misuse) or through accidents (AI-accidents). With respect to AI-accidents, there is an increasing effort focused on developing algorithms and paradigms that ensure AI systems are aligned to what humans intend, e.g. AI systems that yield actions or recommendations that humans might judge as consistent with their intentions and goals. Here we argue that alignment to human intent is insufficient for safe AI systems, and that preservation of the long-term agency of humans may be a more robust standard, one that needs to be separated explicitly and a priori during optimization. We argue that AI systems can reshape human intention, and discuss the lack of biological and psychological mechanisms that protect humans from loss of agency. We provide the first formal definition of agency-preserving AI-human interactions, which focuses on forward-looking agency evaluations, and argue that AI systems - not humans - must be increasingly tasked with making these evaluations. We show how agency loss can occur in simple environments containing embedded agents that use temporal-difference learning to make action recommendations. Finally, we propose a new area of research called "agency foundations" and pose four initial topics designed to improve our understanding of agency in AI-human interactions: benevolent game theory, algorithmic foundations of human rights, mechanistic interpretability of agency representation in neural networks, and reinforcement learning from internal states.
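    The paper's embedded-agent environments are not reproduced here; purely to ground the temporal-difference mechanism the abstract names, below is a minimal tabular Q-learning sketch of an agent that learns action recommendations from TD updates. The two-state environment, rewards, and hyperparameters are illustrative assumptions, not the paper's setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy 2-state, 2-action MDP: action 1 always moves to state 1 and pays
    # reward 1; action 0 moves to state 0 and pays nothing.
    n_states, n_actions = 2, 2

    def step(s, a):
        return (1, 1.0) if a == 1 else (0, 0.0)

    Q = np.zeros((n_states, n_actions))
    alpha, gamma, eps = 0.1, 0.9, 0.1   # learning rate, discount, exploration
    s = 0
    for _ in range(2000):
        # epsilon-greedy action selection
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s2, r = step(s, a)
        # TD / Q-learning update: Q(s,a) += alpha * (r + gamma*max Q(s',.) - Q(s,a))
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

    # The recommender's greedy policy: the action it would suggest per state.
    recommend = Q.argmax(axis=1)
    ```

    In the paper's framing, a recommender of this kind optimizes its own reward signal; whether the recommendations preserve or deplete the human's agency is exactly the evaluation the authors argue must be made explicitly rather than left to intent alignment.
    
    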

    2018 FSDG Combined Abstracts
