334 research outputs found

    Analyzing the Effects of Human-Aware Motion Planning on Close-Proximity Human-Robot Collaboration

    Get PDF
    Objective: The objective of this work was to examine human response to motion-level robot adaptation to determine its effect on team fluency, human satisfaction, and perceived safety and comfort. Background: The evaluation of human response to adaptive robotic assistants has been limited, particularly in the realm of motion-level adaptation. The lack of true human-in-the-loop evaluation has made it impossible to determine whether such adaptation would lead to efficient and satisfying human–robot interaction. Method: We conducted an experiment in which participants worked with a robot to perform a collaborative task. Participants worked with an adaptive robot incorporating human-aware motion planning and with a baseline robot using shortest-path motions. Team fluency was evaluated through a set of quantitative metrics, and human satisfaction and perceived safety and comfort were evaluated through questionnaires. Results: When working with the adaptive robot, participants completed the task 5.57% faster, with 19.9% more concurrent motion, 2.96% less human idle time, 17.3% less robot idle time, and a 15.1% greater separation distance. Questionnaire responses indicated that participants felt safer and more comfortable when working with an adaptive robot and were more satisfied with it as a teammate than with the standard robot. Conclusion: People respond well to motion-level robot adaptation, and significant benefits can be achieved from its use in terms of both human–robot team fluency and human worker satisfaction. Application: Our conclusion supports the development of technologies that could be used to implement human-aware motion planning in collaborative robots and the use of this technique for close-proximity human–robot collaboration.
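
    To make the reported fluency metrics concrete, the following is a minimal Python sketch, not the study's analysis code, showing how concurrent motion and idle-time percentages could be computed from hypothetical timestamped activity intervals for each agent.

    ```python
    # Hypothetical sketch: fluency metrics from activity intervals (illustrative only).
    from dataclasses import dataclass

    @dataclass
    class Interval:
        start: float  # seconds
        end: float

    def total(intervals):
        return sum(iv.end - iv.start for iv in intervals)

    def overlap(a, b):
        """Total time during which both agents are active (concurrent motion)."""
        t = 0.0
        for x in a:
            for y in b:
                t += max(0.0, min(x.end, y.end) - max(x.start, y.start))
        return t

    def fluency_metrics(human_active, robot_active, task_start, task_end):
        task_time = task_end - task_start
        return {
            "task_time_s": task_time,
            "concurrent_motion_pct": 100 * overlap(human_active, robot_active) / task_time,
            "human_idle_pct": 100 * (task_time - total(human_active)) / task_time,
            "robot_idle_pct": 100 * (task_time - total(robot_active)) / task_time,
        }

    # Example with made-up intervals:
    human = [Interval(0, 4), Interval(6, 10)]
    robot = [Interval(2, 7), Interval(8, 12)]
    print(fluency_metrics(human, robot, 0, 12))
    ```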

    Towards A Theory-Of-Mind-Inspired Generic Decision-Making Framework

    Full text link
    Simulation is widely used to make model-based predictions, but few approaches have attempted this technique in dynamic physical environments of medium to high complexity or in general contexts. After an introduction to the cognitive science concepts from which this work is inspired and current developments in the use of simulation as a decision-making technique, we propose a generic framework based on theory of mind, which allows an agent to reason and perform actions using multiple simulations of automatically created or externally inputted models of the perceived environment. A description of a partial implementation is given, which aims to solve a popular game within the IJCAI 2013 AIBirds contest. Results of our approach are presented in comparison with the competition benchmark. Finally, future developments regarding the framework are discussed. Comment: 7 pages, 5 figures, IJCAI 2013 Symposium on AI in Angry Birds
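
    The core idea of selecting actions by rolling candidates forward through internal simulations of the environment can be illustrated with a small, self-contained Python sketch; the toy world model and scoring below are hypothetical stand-ins, not the paper's framework.

    ```python
    # Illustrative sketch: pick the action whose simulated rollouts score best.
    import random

    def simulate(model, state, action, horizon=10):
        """Roll a copy of the world model forward and return a predicted score."""
        s = dict(state)
        total_reward = 0.0
        for _ in range(horizon):
            s, r = model(s, action)
            total_reward += r
            action = None  # after the chosen action, let the model evolve on its own
        return total_reward

    def choose_action(models, state, candidate_actions, samples=5):
        """Average predicted outcomes across several simulations per model."""
        best_action, best_score = None, float("-inf")
        for a in candidate_actions:
            score = sum(simulate(m, state, a) for m in models for _ in range(samples))
            score /= len(models) * samples
            if score > best_score:
                best_action, best_score = a, score
        return best_action

    # A toy stochastic model: state is a 1-D position; the action is a velocity.
    def toy_model(state, action):
        v = action if action is not None else state.get("v", 0.0) * 0.9
        x = state["x"] + v + random.gauss(0, 0.1)
        reward = -abs(x - 5.0)  # goal: stay near x = 5
        return {"x": x, "v": v}, reward

    print(choose_action([toy_model], {"x": 0.0}, candidate_actions=[0.0, 0.5, 1.0]))
    ```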

    Simulating Human-AI Collaboration with ACT-R and Project Malmo

    Get PDF
    We use the ACT-R cognitive architecture (Anderson, 2007) to explore human-AI collaboration. Computational models of human and AI behavior, and their interaction, allow for more effective development of collaborative artificial intelligent agents. With these computational models and simulations, one may be better equipped to predict the situations in which certain classes of intelligent agents may be more suited to collaborate with people. One can more tractably understand and predict how different AI agents affect task behavior in these situations. To simulate human-AI collaboration, we are developing ACT-R models that work with more traditional AI agents to solve a task in Project Malmo (Johnson et al., 2016). We use existing AI agents that were originally developed to serve as the AI side of the human-AI collaboration. In addition, creating a model in ACT-R to simulate human behavior gives us the opportunity to play out these interactions much faster than would be possible in real time.
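
    To make the kind of loop being simulated concrete, here is a purely illustrative Python sketch in which a noisy cognitive-model stand-in and an AI agent take turns on a shared toy task; it is not ACT-R or the Project Malmo API, whose real interfaces differ.

    ```python
    # Illustrative stand-in for coupling a human-behavior model with an AI teammate.
    import random

    class SharedTask:
        """Toy shared task: two agents collect numbered items."""
        def __init__(self, n_items=5):
            self.items = set(range(n_items))

        def step(self, choice):
            if choice in self.items:
                self.items.remove(choice)
                return 1
            return 0

        def done(self):
            return not self.items

    def cognitive_agent(task):
        # Stand-in for a model of human behavior: noisy, sometimes picks a stale target.
        return random.choice(list(task.items) + [99])

    def ai_agent(task):
        # Stand-in for the AI teammate: always picks a remaining item.
        return min(task.items)

    def run_episode(max_steps=50):
        task, score = SharedTask(), 0
        for t in range(max_steps):
            for agent in (cognitive_agent, ai_agent):
                if task.done():
                    return score, t
                score += task.step(agent(task))
        return score, max_steps

    print(run_episode())
    ```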

    Belief-Desire-Intention in RoboCup

    Get PDF
    The Belief-Desire-Intention (BDI) model of a rational agent proposed by Bratman has strongly influenced the research of intelligent agents in Multi-Agent Systems (MAS). Jennings extended Bratman’s concept of a single rational agent into MAS in the form of joint-intention and joint-responsibility. Kitano et al. initiated RoboCup Soccer Simulation as a standard problem in MAS analogous to the Blocks World problem in traditional AI. This has motivated many researchers from various areas of study such as machine learning, planning, and intelligent agent research. The first RoboCup team to incorporate the BDI concept was the ATHumboldt98 team by Burkhard et al. In this thesis we present a novel collaborative BDI architecture modeled for RoboCup 2D Soccer Simulation, called the TA09 team, which is based on Bratman’s rational agent, influenced by Cohen and Levesque’s commitment, and incorporating Jennings’ joint-intention. The TA09 team features observation-based coordination, layered planning, and dynamic formation positioning.
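
    A generic Bratman-style BDI control cycle of the sort underlying such architectures can be sketched in a few lines of Python; the perceive, option-generation, and planning functions below are hypothetical placeholders, not the TA09 implementation.

    ```python
    # Hedged sketch of a basic BDI control loop with a toy "striker" instantiation.
    def bdi_loop(perceive, options, filter_intentions, plan, execute, max_cycles=100):
        beliefs, intentions = {}, []
        for _ in range(max_cycles):
            beliefs.update(perceive())                                    # belief revision
            desires = options(beliefs)                                    # candidate goals
            intentions = filter_intentions(beliefs, desires, intentions)  # commitment
            for intention in intentions:
                for action in plan(beliefs, intention):                   # means-ends reasoning
                    execute(action)

    # Toy instantiation: chase the ball, then shoot once it is reached.
    state = {"ball_dist": 30}

    def perceive():
        state["ball_dist"] = max(0, state["ball_dist"] - 10)
        return {"ball_dist": state["ball_dist"]}

    def options(beliefs):
        return ["score_goal"]

    def filter_intentions(beliefs, desires, intentions):
        return desires  # single-minded commitment to the only desire

    def plan(beliefs, intention):
        return ["shoot"] if beliefs["ball_dist"] == 0 else ["run_to_ball"]

    def execute(action):
        print(action)

    bdi_loop(perceive, options, filter_intentions, plan, execute, max_cycles=4)
    ```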

    FABRIC: A Framework for the Design and Evaluation of Collaborative Robots with Extended Human Adaptation

    Full text link
    A limitation for collaborative robots (cobots) is their lack of ability to adapt to human partners, who typically exhibit an immense diversity of behaviors. We present an autonomous framework as a cobot's real-time decision-making mechanism to anticipate a variety of human characteristics and behaviors, including human errors, toward a personalized collaboration. Our framework handles such behaviors on two levels: 1) short-term human behaviors are adapted through our novel Anticipatory Partially Observable Markov Decision Process (A-POMDP) models, covering a human's changing intent (motivation), availability, and capability; 2) long-term changes in human characteristics are adapted by our novel Adaptive Bayesian Policy Selection (ABPS) mechanism, which selects a short-term decision model, e.g., an A-POMDP, according to an estimate of a human's workplace characteristics, such as her expertise and collaboration preferences. To design and evaluate our framework over a diversity of human behaviors, we propose a pipeline where we first train and rigorously test the framework in simulation over novel human models. Then, we deploy and evaluate it on our novel physical experiment setup that induces cognitive load on humans to observe their dynamic behaviors, including their mistakes, and their changing characteristics such as their expertise. We conduct user studies and show that our framework effectively collaborates non-stop for hours and adapts to various changing human behaviors and characteristics in real-time. This increases the efficiency and naturalness of the collaboration, with higher perceived collaboration, positive teammate traits, and human trust. We believe that such an extended human adaptation is key to the long-term use of cobots. Comment: The article is in review for publication in International Journal of Robotics Research
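
    The long-term adaptation idea behind ABPS, maintaining a belief over a partner's characteristics and selecting a matching short-term policy, can be illustrated with a small Bayesian sketch in Python; the human "types", likelihoods, and policies below are invented for illustration and are not the authors' models.

    ```python
    # Illustrative Bayesian policy selection over hypothetical human "types".
    import random

    # Each candidate type predicts how often the human will need help and
    # names the short-term policy to use when that type is most probable.
    TYPES = {
        "expert":   {"p_needs_help": 0.1, "policy": "stay_back"},
        "beginner": {"p_needs_help": 0.7, "policy": "assist_proactively"},
    }

    def update_posterior(posterior, needed_help):
        new = {}
        for t, spec in TYPES.items():
            lik = spec["p_needs_help"] if needed_help else 1 - spec["p_needs_help"]
            new[t] = posterior[t] * lik
        z = sum(new.values())
        return {t: p / z for t, p in new.items()}

    def select_policy(posterior):
        best_type = max(posterior, key=posterior.get)
        return TYPES[best_type]["policy"]

    posterior = {t: 1 / len(TYPES) for t in TYPES}
    for step in range(10):
        needed_help = random.random() < 0.7  # simulate a beginner-like partner
        posterior = update_posterior(posterior, needed_help)
        print(step, select_policy(posterior),
              {t: round(p, 2) for t, p in posterior.items()})
    ```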