6 research outputs found

    Human-robot co-navigation using anticipatory indicators of human walking motion

    Mobile, interactive robots that operate in human-centric environments need the capability to safely and efficiently navigate around humans. This requires the ability to sense and predict human motion trajectories and to plan around them. In this paper, we present a study that supports the existence of statistically significant biomechanical turn indicators of human walking motions. Further, we demonstrate the effectiveness of these turn indicators as features in the prediction of human motion trajectories. Human motion capture data is collected with predefined goals to train and test a prediction algorithm. Use of anticipatory features results in improved performance of the prediction algorithm. Lastly, we demonstrate the closed-loop performance of the prediction algorithm using an existing algorithm for motion planning within dynamic environments. The anticipatory indicators of human walking motion can be used with different prediction and/or planning algorithms for robotics; the chosen planning and prediction algorithm demonstrates one such implementation for human-robot co-navigation.
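    A minimal sketch of how an anticipatory turn indicator of the kind this abstract describes might be used as a predictive feature. The specific feature here (the signed offset between torso orientation and instantaneous velocity heading) and the threshold rule are illustrative assumptions, not the paper's actual indicators or classifier:

    ```python
    import numpy as np

    # Hypothetical anticipatory feature: the angle between the walker's torso
    # orientation and the instantaneous velocity heading. The torso tends to
    # rotate toward a turn before the path itself curves, so a persistent
    # offset can serve as an early predictor of turn direction.

    def heading(velocity):
        """Heading angle (radians) of a 2D velocity vector."""
        return np.arctan2(velocity[1], velocity[0])

    def turn_indicator(torso_yaw, velocity):
        """Signed torso-minus-heading angle, wrapped to (-pi, pi]."""
        diff = torso_yaw - heading(velocity)
        return (diff + np.pi) % (2 * np.pi) - np.pi

    def predict_turn(torso_yaw, velocity, threshold=0.2):
        """Naive rule: classify the upcoming motion from the feature alone."""
        f = turn_indicator(torso_yaw, velocity)
        if f > threshold:
            return "left"
        if f < -threshold:
            return "right"
        return "straight"
    ```

    In a full prediction pipeline this feature would be one input among several to a learned trajectory predictor rather than a stand-alone rule.
    
    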

    An Architecture for Online Affordance-based Perception and Whole-body Planning

    The DARPA Robotics Challenge Trials held in December 2013 provided a landmark demonstration of dexterous mobile robots executing a variety of tasks aided by a remote human operator using only data from the robot's sensor suite transmitted over a constrained, field-realistic communications link. We describe the design considerations, architecture, implementation, and performance of the software that Team MIT developed to command and control an Atlas humanoid robot. Our design emphasized human interaction with an efficient motion planner, where operators expressed desired robot actions in terms of affordances fit using perception and manipulated in a custom user interface. We highlight several important lessons we learned while developing our system on a highly compressed schedule.

    Principles and Guidelines for Evaluating Social Robot Navigation Algorithms

    A major challenge to deploying robots widely is navigation in human-populated environments, commonly referred to as social robot navigation. While the field of social navigation has advanced tremendously in recent years, the fair evaluation of algorithms that tackle social navigation remains hard because it involves not just robotic agents moving in static environments but also dynamic human agents and their perceptions of the appropriateness of robot behavior. In contrast, clear, repeatable, and accessible benchmarks have accelerated progress in fields like computer vision, natural language processing, and traditional robot navigation by enabling researchers to fairly compare algorithms, revealing limitations of existing solutions and illuminating promising new directions. We believe the same approach can benefit social navigation. In this paper, we pave the road towards common, widely accessible, and repeatable benchmarking criteria to evaluate social robot navigation. Our contributions include (a) a definition of a socially navigating robot as one that respects the principles of safety, comfort, legibility, politeness, social competency, agent understanding, proactivity, and responsiveness to context, (b) guidelines for the use of metrics, development of scenarios, benchmarks, datasets, and simulators to evaluate social navigation, and (c) a design of a social navigation metrics framework to make it easier to compare results from different simulators, robots, and datasets. (43 pages, 11 figures, 6 tables.)
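    To make the idea of a social navigation metric concrete, here is an illustrative sketch (not the paper's actual framework) of two metrics commonly computed from synchronized robot and human trajectory logs: the minimum human-robot distance, and the fraction of time steps in which the robot intrudes on an assumed personal-space radius:

    ```python
    import numpy as np

    # Illustrative social navigation metrics over logged trajectories.
    # robot_xy: (T, 2) robot positions; humans_xy: (T, N, 2) positions of
    # N humans over the same T time steps. The 0.5 m personal-space radius
    # is an assumed parameter, not a value from the paper.

    def social_metrics(robot_xy, humans_xy, personal_radius=0.5):
        """Return (minimum human-robot distance, personal-space intrusion rate)."""
        # Distance from the robot to every human at every time step: (T, N).
        dists = np.linalg.norm(humans_xy - robot_xy[:, None, :], axis=-1)
        min_dist = dists.min()
        # A time step counts as an intrusion if any human is inside the radius.
        intrusion_rate = (dists.min(axis=1) < personal_radius).mean()
        return min_dist, intrusion_rate
    ```

    A benchmarking framework of the kind the paper proposes would standardize such metrics so that numbers from different simulators, robots, and datasets are directly comparable.
    
    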

    Fast target prediction of human reaching motion for cooperative human-robot manipulation tasks using time series classification

    Interest in human-robot coexistence, in which humans and robots share a common work volume, is increasing in manufacturing environments. Efficient work coordination requires both awareness of the human pose and a plan of action for both human and robot agents in order to compute robot motion trajectories that synchronize naturally with human motion. In this paper, we present a data-driven approach that synthesizes anticipatory knowledge of both human motions and subsequent action steps in order to predict in real time the intended target of a human performing a reaching motion. Motion-level anticipatory models are constructed using multiple demonstrations of human reaching motions. We produce a library of motions from human demonstrations, based on a statistical representation of the degrees of freedom of the human arm, using time series analysis, wherein each time step is encoded as a multivariate Gaussian distribution. We demonstrate the benefits of this approach through offline statistical analysis of human motion data. The results indicate a considerable improvement over prior techniques in early prediction, achieving 70% or higher correct classification on average for the first third of the trajectory (< 500 ms). We also indicate proof-of-concept through the demonstration of a human-robot cooperative manipulation task performed with a PR2 robot. Finally, we analyze the quality of task-level anticipatory knowledge required to improve prediction performance early in the human motion trajectory.
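    A minimal sketch of the representation this abstract describes: each time step of each candidate target's reaching motion is encoded as a multivariate Gaussian over arm degrees of freedom, and a partial observed trajectory is classified by the accumulated log-likelihood under each target's model. The function names and the maximum-likelihood decision rule are assumptions for illustration, not the paper's exact method:

    ```python
    import numpy as np

    def gaussian_log_likelihood(x, mean, cov):
        """Log-density of x under a multivariate Gaussian N(mean, cov)."""
        d = len(mean)
        diff = x - mean
        _, logdet = np.linalg.slogdet(cov)
        return -0.5 * (d * np.log(2 * np.pi) + logdet
                       + diff @ np.linalg.solve(cov, diff))

    def predict_target(partial_traj, models):
        """Classify a partial reaching trajectory against a demonstration library.

        models maps each candidate target to a list of (mean, cov) pairs,
        one Gaussian per time step, fit from multiple demonstrations.
        Returns the target whose per-step Gaussians best explain the
        observed time steps so far.
        """
        scores = {}
        for target, steps in models.items():
            scores[target] = sum(
                gaussian_log_likelihood(x, mean, cov)
                for x, (mean, cov) in zip(partial_traj, steps)
            )
        return max(scores, key=scores.get)
    ```

    Because the score accumulates per time step, a prediction is available as soon as the first observations arrive, which is what enables early classification within the first third of the motion.
    
    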

    A Theatrical Mobile-Dexterous Robot Directed through Shared Autonomy

    We present the deployment of a 16-DoF dual-arm mobile manipulator as an on-stage actor in the MIT2016 Pageant, a 60-minute live play performed for the centennial celebration of the Massachusetts Institute of Technology campus move from Boston to Cambridge. The robot performed using expressive motions, navigated a 250-foot-long thrust stage through a wireless connection, and was directed remotely by a human operator using a shared autonomy system. We report on the technical framework and human-robot interaction that enabled the performance, including motion planning, coordination of action with human actors, and the challenges in navigation, manipulation, perception, and system reliability.