18 research outputs found

    Robot control based on qualitative representation of human trajectories

    A major challenge for future social robots is the high-level interpretation of human motion, and the consequent generation of appropriate robot actions. This paper describes some fundamental steps towards the real-time implementation of a system that allows a mobile robot to transform quantitative information about human trajectories (i.e., coordinates and speed) into qualitative concepts, and from these to generate appropriate control commands. The problem is formulated using a simple version of qualitative trajectory calculus, then solved using an inference engine based on fuzzy temporal logic and situation graph trees. Preliminary results are discussed and future directions of the current research are drawn.
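
    As a minimal illustration of the kind of qualitative abstraction described above, the sketch below derives a basic QTC-style two-symbol state from consecutive positions of a human and a robot. The function names and the distance threshold `eps` are illustrative assumptions, not the authors' implementation.

```python
import math

def qtc_b_state(k_prev, k_curr, l_prev, l_curr, eps=1e-3):
    """Return a simplified QTC_B label (two symbols) for agents k and l.

    Each symbol is '-' (moving towards the other agent), '+' (moving away)
    or '0' (distance roughly unchanged). This is an illustrative reduction
    of the calculus, not the paper's full implementation.
    """
    def symbol(moving_prev, moving_curr, other_ref):
        d_before = math.dist(moving_prev, other_ref)
        d_after = math.dist(moving_curr, other_ref)
        if d_after < d_before - eps:
            return '-'          # getting closer to the other agent
        if d_after > d_before + eps:
            return '+'          # moving away from the other agent
        return '0'              # approximately stable

    # First symbol: k relative to l's previous position;
    # second symbol: l relative to k's previous position.
    return symbol(k_prev, k_curr, l_prev) + symbol(l_prev, l_curr, k_prev)

# Example: the human (k) walks towards the robot (l) while the robot stays still.
print(qtc_b_state(k_prev=(0.0, 0.0), k_curr=(0.5, 0.0),
                  l_prev=(2.0, 0.0), l_curr=(2.0, 0.0)))   # -> "-0"
```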

    Stam: a framework for spatio-temporal affordance maps

    Affordances have been introduced in the literature as action opportunities that objects offer, and used in robotics to semantically represent their interconnection. However, when considering an environment instead of an object, the problem becomes more complex due to the dynamism of its state. To tackle this issue, we introduce the concept of Spatio-Temporal Affordances (STA) and Spatio-Temporal Affordance Map (STAM). Using this formalism, we encode action semantics related to the environment to improve task execution capabilities of an autonomous robot. We experimentally validate our approach to support the execution of robot tasks by showing that affordances encode accurate semantics of the environment.
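
    A toy sketch of the idea, assuming a simple grid discretisation and integer time slots (both illustrative choices, much simpler than the paper's STA/STAM formalism), is a lookup from (cell, time slot) to the set of afforded actions:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class Cell:
    """Discretised spatial cell of the environment (grid indices)."""
    x: int
    y: int

class SpatioTemporalAffordanceMap:
    """Toy spatio-temporal affordance map: (cell, time slot) -> afforded actions.

    Illustrative only; the paper's STA/STAM formalism is richer than a lookup table.
    """
    def __init__(self):
        self._table = defaultdict(set)

    def add(self, cell: Cell, time_slot: int, action: str) -> None:
        self._table[(cell, time_slot)].add(action)

    def affordances(self, cell: Cell, time_slot: int) -> set:
        return self._table[(cell, time_slot)]

# Example: a corridor cell affords "pass_through" during office hours only.
stam = SpatioTemporalAffordanceMap()
stam.add(Cell(3, 7), time_slot=9, action="pass_through")
print(stam.affordances(Cell(3, 7), 9))    # {'pass_through'}
print(stam.affordances(Cell(3, 7), 22))   # set()
```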

    An extension of GHMMs for environments with occlusions and automatic goal discovery for person trajectory prediction

    This work is partially funded by the EC-FP7 under grant agreement no. 611153 (TERESA) and the project PAIS-MultiRobot, funded by the Junta de Andalucía (TIC-7390). I. Perez-Hurtado is also supported by the Postdoctoral Junior Grant 2013 co-funded by the Spanish Ministry of Economy and Competitiveness and the Pablo de Olavide University. Robots navigating in a social way should use some knowledge about common motion patterns of people in the environment. Moreover, it is known that people move intending to reach certain points of interest, and machine learning techniques have been widely used for acquiring this knowledge by observation. Learning algorithms such as Growing Hidden Markov Models (GHMMs) usually assume that points of interest are located at the end of human trajectories, but complete trajectories cannot always be observed by a mobile robot due to occlusions and people going out of sensor range. This paper extends GHMMs to deal with partially observed trajectories where people's goals are not known a priori. A novel technique based on hypothesis testing is also used to discover the points of interest (goals) in the environment. The approach is validated by predicting people's motion in three different datasets.
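
    The sketch below is not the GHMM extension itself but a much simpler illustrative stand-in: given a partially observed trajectory and a set of candidate goals, it scores each goal by how well the observed headings point towards it. The `kappa` concentration parameter and the heading-based likelihood are assumptions made purely for this example.

```python
import numpy as np

def goal_posterior(partial_traj, goals, kappa=4.0):
    """Posterior over candidate goals given a partially observed trajectory.

    Illustrative stand-in for the paper's GHMM-based predictor: each step's
    heading is scored against the direction to each goal with a von Mises-like
    likelihood controlled by `kappa` (an assumed concentration parameter).
    """
    traj = np.asarray(partial_traj, dtype=float)
    goals = np.asarray(goals, dtype=float)
    steps = np.diff(traj, axis=0)                      # observed motion vectors
    headings = np.arctan2(steps[:, 1], steps[:, 0])

    log_post = np.zeros(len(goals))
    for g, goal in enumerate(goals):
        to_goal = goal - traj[:-1]                     # direction to goal at each step
        goal_dirs = np.arctan2(to_goal[:, 1], to_goal[:, 0])
        log_post[g] = np.sum(kappa * np.cos(headings - goal_dirs))

    post = np.exp(log_post - log_post.max())           # normalise for stability
    return post / post.sum()

# Example: a person walking right is most likely heading to the right-hand goal,
# even though the trajectory is cut short (e.g. by an occlusion).
traj = [(0, 0), (1, 0.1), (2, 0.0), (3, 0.2)]
goals = [(10, 0), (0, 10), (-10, 0)]
print(goal_posterior(traj, goals))                     # highest mass on goal (10, 0)
```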

    Extended Object Tracking: Introduction, Overview and Applications

    This article provides an elaborate overview of current research in extended object tracking. We provide a clear definition of the extended object tracking problem and discuss its delimitation from other types of object tracking. Next, different aspects of extended object modelling are extensively discussed. Subsequently, we give a tutorial introduction to two basic and widely used extended object tracking approaches: the random matrix approach and the Kalman filter-based approach for star-convex shapes. The next part treats the tracking of multiple extended objects and elaborates how the large number of feasible association hypotheses can be tackled using both Random Finite Set (RFS) and non-RFS multi-object trackers. The article concludes with a summary of current applications, where four example applications involving camera, X-band radar, light detection and ranging (lidar), and red-green-blue-depth (RGB-D) sensors are highlighted. Comment: 30 pages, 19 figures.
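
    As a rough illustration of the random-matrix idea (jointly estimating a centroid and an extent matrix from a cluster of detections), the sketch below performs a Kalman-style centroid update plus an exponential-forgetting extent update. The noise level `R` and forgetting factor `rho` are illustrative, and the inverse-Wishart machinery of the actual approach is deliberately omitted.

```python
import numpy as np

def extended_object_update(x, P, X, measurements, R=0.1, rho=0.9):
    """One simplified update step for an extended object (centroid + extent).

    The centroid `x` (2-D position) is updated with a Kalman step against the
    measurement mean; the extent matrix `X` is smoothed towards the scatter of
    the current scan. This mimics the spirit of the random-matrix approach but
    omits its Wishart machinery; all parameters here are illustrative.
    """
    Z = np.asarray(measurements, dtype=float)
    z_bar = Z.mean(axis=0)                      # centroid measurement
    n = len(Z)

    # Kalman update of the centroid (identity measurement model).
    S = P + (R / n) * np.eye(2)                 # innovation covariance
    K = P @ np.linalg.inv(S)                    # Kalman gain
    x_new = x + K @ (z_bar - x)
    P_new = (np.eye(2) - K) @ P

    # Extent: exponential forgetting towards the scan's scatter matrix.
    scatter = (Z - z_bar).T @ (Z - z_bar) / max(n - 1, 1)
    X_new = rho * X + (1.0 - rho) * scatter
    return x_new, P_new, X_new

# Example: a lidar scan returning several points on one car-sized object.
x, P, X = np.zeros(2), np.eye(2), 0.5 * np.eye(2)
scan = [(4.8, 1.9), (5.2, 2.1), (5.0, 2.4), (5.1, 1.7)]
x, P, X = extended_object_update(x, P, X, scan)
print(x)   # centroid pulled towards ~(5, 2)
```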

    Robot companion: a social-force based approach with human awareness-navigation in crowded environments

    Accompanying humans is one of the core capacities every service robot deployed in urban settings should have. We present a novel robot companion approach based on the so-called Social Force Model (SFM). A new model of robot-person interaction is obtained using the SFM, suited for our robots Tibi and Dabo. Additionally, we propose an interactive scheme for the robot's human-aware navigation using the SFM and prediction information. Moreover, we present a new metric to evaluate robot companion performance based on vital spaces and comfortableness criteria. Also, multimodal human feedback is proposed to enhance the behavior of the system. The validation of the model is accomplished through an extensive set of simulations and real-life experiments.
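
    A minimal sketch of a Social Force Model force computation, assuming Helbing-style exponential repulsion and illustrative gains (not the parameters tuned for Tibi and Dabo), looks as follows:

```python
import numpy as np

def social_force(robot_pos, robot_goal, people, A=2.0, B=1.0, k_goal=1.0):
    """Resultant force on the robot under a basic Social Force Model.

    One attractive term pulls the robot towards its goal; each nearby person
    adds an exponentially decaying repulsive term (Helbing-style). The gains
    A, B and k_goal are illustrative, not the values used in the paper.
    """
    robot_pos = np.asarray(robot_pos, float)
    goal_dir = np.asarray(robot_goal, float) - robot_pos
    force = k_goal * goal_dir / (np.linalg.norm(goal_dir) + 1e-9)

    for p in people:
        diff = robot_pos - np.asarray(p, float)
        dist = np.linalg.norm(diff) + 1e-9
        force += A * np.exp(-dist / B) * diff / dist   # push away from the person
    return force

# Example: a person standing just off the straight path bends the force
# vector sideways while it still points roughly towards the goal.
print(social_force(robot_pos=(0, 0), robot_goal=(5, 0), people=[(2, 0.5)]))
```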

    Human Motion Trajectory Prediction: A Survey

    With growing numbers of intelligent autonomous systems in human environments, the ability of such systems to perceive, understand and anticipate human behavior becomes increasingly important. Specifically, predicting future positions of dynamic agents and planning with such predictions in mind are key tasks for self-driving vehicles, service robots and advanced surveillance systems. This paper provides a survey of human motion trajectory prediction. We review, analyze and structure a large selection of work from different communities and propose a taxonomy that categorizes existing methods based on the motion modeling approach and the level of contextual information used. We provide an overview of the existing datasets and performance metrics. We discuss limitations of the state of the art and outline directions for further research. Comment: Submitted to the International Journal of Robotics Research (IJRR), 37 pages.
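
    For context, the physics-based baseline that most surveyed predictors are compared against can be sketched in a few lines; the time step and horizon below are arbitrary illustrative choices.

```python
import numpy as np

def constant_velocity_prediction(track, horizon, dt=0.4):
    """Predict future positions by extrapolating the last observed velocity.

    A constant-velocity model is the usual physics-based baseline against which
    pattern- and planning-based predictors are evaluated; `dt` and the horizon
    length here are arbitrary illustrative choices.
    """
    track = np.asarray(track, dtype=float)
    velocity = (track[-1] - track[-2]) / dt            # finite-difference velocity
    steps = np.arange(1, horizon + 1).reshape(-1, 1)
    return track[-1] + steps * dt * velocity

# Example: extrapolate a pedestrian observed moving at 2.5 m/s for 3 future steps.
observed = [(0.0, 0.0), (1.0, 0.0)]
print(constant_velocity_prediction(observed, horizon=3, dt=0.4))
```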

    Comparison of interaction modalities for mobile indoor robot guidance : direct physical interaction, person following, and pointing control

    Three advanced natural interaction modalities for mobile robot guidance in an indoor environment were developed and compared using two tasks and quantitative metrics to measure performance and workload. The first interaction modality is based on direct physical interaction, requiring the human user to push the robot in order to displace it. The second and third interaction modalities exploit 3-D vision-based human-skeleton tracking, allowing the user to guide the robot either by walking in front of it or by pointing toward a desired location. In the first task, the participants were asked to guide the robot between different rooms in a simulated physical apartment, requiring rough movement of the robot through designated areas. The second task evaluated robot guidance in the same environment through a set of waypoints, which required accurate movements. The three interaction modalities were implemented on a generic differential-drive mobile platform equipped with a pan-tilt system and a Kinect camera. Task completion time and accuracy were used as metrics to assess the users' performance, while the NASA-TLX questionnaire was used to evaluate the users' workload. A study with 24 participants indicated that the choice of interaction modality had a significant effect on completion time (F(2,61)=84.874, p<0.001), accuracy (F(2,29)=4.937, p=0.016), and workload (F(2,68)=11.948, p<0.001). The direct physical interaction required less time, provided more accuracy and imposed less workload than the two contactless interaction modalities. Between the two contactless interaction modalities, the person-following modality was systematically better than the pointing-control one: the participants completed the tasks faster and with less workload.
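
    The reported F-statistics come from one-way ANOVAs across the three modalities; the sketch below shows how such a test is formed with SciPy, using made-up completion times purely for illustration (they are not the study's data, and the resulting degrees of freedom differ from those reported above).

```python
from scipy import stats

# Hypothetical completion times (seconds) for the three interaction modalities;
# these numbers are invented solely to demonstrate the test, not the study's data.
physical  = [52, 48, 55, 50, 47, 53]
following = [68, 72, 65, 70, 74, 69]
pointing  = [85, 90, 88, 92, 84, 87]

# One-way ANOVA across modalities, analogous in form to the F-tests in the text.
f_stat, p_value = stats.f_oneway(physical, following, pointing)
print(f"F = {f_stat:.1f}, p = {p_value:.4f}")
```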

    Robot social-aware navigation framework to accompany people walking side-by-side

    The final publication is available at link.springer.com. We present a novel robot social-aware navigation framework to walk side-by-side with people in crowded urban areas in a safe and natural way. The new system includes the following key issues: to propose a new robot social-aware navigation model to accompany a person; to extend the Social Force Model, …
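
    A minimal sketch of the side-by-side accompaniment goal, assuming a fixed 0.8 m lateral offset on the person's right (an illustrative choice, not the paper's social-force formulation), is:

```python
import math

def side_by_side_goal(person_pos, person_heading, lateral_offset=0.8, side="right"):
    """Goal position for a robot walking side-by-side with a person.

    The robot aims at a point offset perpendicular to the person's heading;
    the 0.8 m offset and the fixed side are illustrative assumptions, not the
    values used in the paper's social-force formulation.
    """
    sign = -1.0 if side == "right" else 1.0
    ox = person_pos[0] + sign * lateral_offset * -math.sin(person_heading)
    oy = person_pos[1] + sign * lateral_offset * math.cos(person_heading)
    return ox, oy

# Person at the origin walking along +x: the robot's goal sits 0.8 m to their right.
print(side_by_side_goal((0.0, 0.0), person_heading=0.0))   # -> (0.0, -0.8)
```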