
    Human Intention Inference using Fusion of Gaze and Motion Information

    Enabling robots to quickly and accurately determine the intention of their human counterparts is an important problem in Human-Robot Collaboration (HRC). The focus of this work is to provide a framework wherein multiple modalities of information, available to the robot through different sensors, are fused to estimate a human's action intent. In this thesis, two human intention estimation schemes are presented. In both cases, human intention is defined as a motion profile associated with a single goal location. The first scheme presents the first human intention estimator to fuse pupil-tracking data with skeletal-tracking data during each iteration of an Interacting Multiple Model (IMM) filter in order to predict the goal location of a reaching motion. In the second scheme, two variable-structure IMM (VS-IMM) filters, which track gaze and skeletal motion respectively, run in parallel and their associated model probabilities are fused. This method is advantageous over the first because it scales easily to more models and yields greater separation between the most likely model and the others. For each VS-IMM filter, a model selection algorithm is proposed that chooses the most likely models in each iteration based on physical constraints of the human body. Experimental results are provided to validate the proposed human intention estimation schemes.
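    The fusion step in the second scheme lends itself to a short illustration. Below is a minimal Python sketch assuming the gaze and skeletal observations are conditionally independent given the intended goal, so the per-goal model probabilities from the two parallel filters can be fused by a normalized elementwise product; the abstract does not specify the fusion rule, so the rule and the function name here are hypothetical.

```python
import numpy as np

def fuse_model_probabilities(mu_gaze, mu_skel):
    """Fuse per-goal model probabilities from two parallel IMM filters.

    Assumes the gaze and skeletal observations are conditionally
    independent given the intended goal, so the fused posterior is the
    normalized elementwise product (a hypothetical fusion rule, not
    necessarily the one used in the thesis).
    """
    fused = np.asarray(mu_gaze) * np.asarray(mu_skel)
    return fused / fused.sum()

# Example with three candidate goal locations:
mu_gaze = [0.5, 0.3, 0.2]   # model probabilities from the gaze VS-IMM
mu_skel = [0.6, 0.3, 0.1]   # model probabilities from the skeletal VS-IMM
print(fuse_model_probabilities(mu_gaze, mu_skel))  # goal 0 dominates
```

    One appeal of fusing at the model-probability level, as the abstract notes, is that adding a new goal hypothesis only lengthens the probability vectors rather than changing the filter structure.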

    Human Motion Trajectory Prediction: A Survey

    With growing numbers of intelligent autonomous systems in human environments, the ability of such systems to perceive, understand, and anticipate human behavior becomes increasingly important. Specifically, predicting the future positions of dynamic agents, and planning with such predictions in mind, are key tasks for self-driving vehicles, service robots, and advanced surveillance systems. This paper provides a survey of human motion trajectory prediction. We review, analyze, and structure a large selection of work from different communities and propose a taxonomy that categorizes existing methods based on the motion modeling approach and the level of contextual information used. We provide an overview of existing datasets and performance metrics. We discuss limitations of the state of the art and outline directions for further research.
    Comment: Submitted to the International Journal of Robotics Research (IJRR), 37 pages
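    Two performance metrics that recur throughout this literature are average displacement error (ADE) and final displacement error (FDE). The sketch below computes both for a single predicted trajectory; it is a minimal illustration of the metrics, not code from the survey.

```python
import numpy as np

def ade_fde(pred, gt):
    """Average and final displacement error between a predicted and a
    ground-truth trajectory, both of shape (T, 2), e.g. in metres.

    ADE: mean Euclidean distance over all time steps.
    FDE: Euclidean distance at the final time step.
    """
    d = np.linalg.norm(np.asarray(pred) - np.asarray(gt), axis=1)
    return d.mean(), d[-1]

pred = [[0.0, 0.0], [1.0, 0.1], [2.0, 0.3]]
gt   = [[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]]
print(ade_fde(pred, gt))  # (ADE, FDE)
```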

    Car that Knows Before You Do: Anticipating Maneuvers via Learning Temporal Driving Models

    Advanced Driver Assistance Systems (ADAS) have made driving safer over the last decade. They prepare vehicles for unsafe road conditions and alert drivers if they perform a dangerous maneuver. However, many accidents are unavoidable because, by the time drivers are alerted, it is already too late. Anticipating maneuvers beforehand can alert drivers before they perform the maneuver and also give the ADAS more time to avoid or prepare for the danger. In this work we anticipate driving maneuvers a few seconds before they occur. For this purpose we equip a car with cameras and a computing device to capture the driving context from both inside and outside the car. We propose an Autoregressive Input-Output HMM to model the contextual information along with the maneuvers. We evaluate our approach on a diverse data set with 1180 miles of natural freeway and city driving and show that we can anticipate maneuvers 3.5 seconds before they occur with over 80% F1-score in real time.
    Comment: ICCV 2015, http://brain4cars.co
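    A full Autoregressive Input-Output HMM is beyond a short snippet, but the anticipation idea can be illustrated with a much-simplified stand-in: a plain discrete HMM forward filter whose posterior over latent maneuver states rises before the maneuver itself begins. All state names, matrices, and observations below are illustrative placeholders, not the paper's learned model.

```python
import numpy as np

# Simplified stand-in for the paper's AIO-HMM: a discrete HMM forward
# filter over latent maneuver-preparation states.
states = ["drive_straight", "left_turn_prep", "right_turn_prep"]
T = np.array([[0.90, 0.05, 0.05],   # state transition probabilities
              [0.10, 0.90, 0.00],
              [0.10, 0.00, 0.90]])
# Emission probabilities for a discretized driver-facing-camera cue;
# columns = observed head pose {0: center, 1: left, 2: right}.
E = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.7, 0.1],
              [0.2, 0.1, 0.7]])

def forward_filter(obs, pi):
    """Return the posterior over maneuver states after each observation."""
    belief = pi.copy()
    posteriors = []
    for o in obs:
        belief = E[:, o] * (T.T @ belief)  # predict, then correct
        belief /= belief.sum()
        posteriors.append(belief.copy())
    return posteriors

# The driver glances left repeatedly: belief shifts toward
# left_turn_prep well before the turn would begin.
for p in forward_filter([0, 1, 1, 1], pi=np.array([1/3, 1/3, 1/3])):
    print(dict(zip(states, np.round(p, 2))))
```

    An anticipation system would raise an alert once the posterior for a maneuver state crosses a threshold; the paper's model additionally conditions on outside-context inputs and autoregressive connections, which this sketch omits.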

    Rebellion and Obedience: The Effects of Intention Prediction in Cooperative Handheld Robots

    Within this work, we explore intention inference for user actions in the context of a handheld robot setup. Handheld robots share the shape and properties of handheld tools while being able to process task information and aid manipulation. Here, we propose an intention prediction model to enhance cooperative task solving. The model derives intention from the user's gaze pattern, which is captured using a robot-mounted remote eye tracker. The proposed model runs in real time and achieves reliable accuracy up to 1.5 s before the predicted action is executed. We assess the model in an assisted pick-and-place task and show how the robot's intention obedience or rebellion affects cooperation with the robot.
    Comment: Submitted to IROS 2019. arXiv admin note: substantial text overlap with arXiv:1810.0646
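    As a rough illustration of gaze-driven intention inference, the sketch below predicts a target from the share of fixations it attracts inside a trailing time window. This is a hypothetical dwell-time heuristic standing in for the paper's model; the function name, window length, and threshold are all assumptions.

```python
from collections import Counter

def predict_intent(fixations, window=1.5, now=None, min_share=0.6):
    """Predict the user's intended target from recent gaze fixations.

    fixations: list of (timestamp_s, target_id) from the eye tracker.
    Returns the target that accumulated at least `min_share` of the
    fixations inside the trailing window, else None.
    """
    if now is None:
        now = fixations[-1][0]
    recent = [t for ts, t in fixations if now - ts <= window]
    if not recent:
        return None
    target, count = Counter(recent).most_common(1)[0]
    return target if count / len(recent) >= min_share else None

gaze = [(0.0, "A"), (0.4, "B"), (0.8, "B"), (1.2, "B"), (1.6, "B")]
print(predict_intent(gaze))  # "B"
```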