211 research outputs found

    Human Motion Trajectory Prediction: A Survey

    With growing numbers of intelligent autonomous systems in human environments, the ability of such systems to perceive, understand and anticipate human behavior becomes increasingly important. Specifically, predicting the future positions of dynamic agents, and planning with such predictions in mind, are key tasks for self-driving vehicles, service robots and advanced surveillance systems. This paper provides a survey of human motion trajectory prediction. We review, analyze and structure a large selection of work from different communities and propose a taxonomy that categorizes existing methods based on the motion modeling approach and the level of contextual information used. We provide an overview of existing datasets and performance metrics. We discuss limitations of the state of the art and outline directions for further research.
    Comment: Submitted to the International Journal of Robotics Research (IJRR), 37 pages
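    For evaluation, the trajectory-prediction literature commonly reports average and final displacement error (ADE/FDE) between a predicted and a ground-truth path. A minimal sketch (function name and toy data ours):

    ```python
    import numpy as np

    def ade_fde(pred, gt):
        """Average and Final Displacement Error between a predicted and a
        ground-truth trajectory, each of shape (T, 2), e.g. in metres."""
        pred, gt = np.asarray(pred, float), np.asarray(gt, float)
        dists = np.linalg.norm(pred - gt, axis=1)  # per-step Euclidean error
        return dists.mean(), dists[-1]

    # Toy example: the predicted path drifts 0.1 m per step to the side.
    gt   = [[t, 0.0]     for t in range(5)]
    pred = [[t, 0.1 * t] for t in range(5)]
    ade, fde = ade_fde(pred, gt)   # ADE averages the drift, FDE takes the last step
    ```

    ADE summarizes error over the whole horizon, while FDE isolates the endpoint, which is why surveys typically report both.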

    Detection of People Boarding/Alighting a Metropolitan Train using Computer Vision

    This paper has been presented at the 9th International Conference on Pattern Recognition Systems (ICPRS 2018). Pedestrian detection and tracking have seen major progress in the last two decades. Nevertheless, there are always application areas that either require further improvement, have not been sufficiently explored, or where production-level performance (accuracy and computing efficiency) has not been demonstrated. One such area is pedestrian monitoring and counting on metropolitan railway platforms. In this paper we first present a new, partly annotated dataset of a full-size laboratory observation of people boarding and alighting from a public transport vehicle. We then present baseline results for the automatic detection of such passengers, based on computer vision, which could open the way to computing variables of interest to traffic engineers and vehicle designers, such as counts and flows and how they relate to vehicle and platform layout.
    The authors gratefully acknowledge the Chilean National Science and Technology Council (Conicyt) for its funding under grants CONICYT-Fondecyt Regular nos. 1140209 (“OBSERVE”), 1120219, and 1080381. S.A. Velastin is grateful for funding received from the Universidad Carlos III de Madrid, the European Union's Seventh Framework Programme for research, technological development and demonstration under grant agreement no. 600371, the Ministerio de Economía y Competitividad (COFUND2013-51509) and Banco Santander. Finally, we are grateful to NVIDIA for its donation as part of its academic GPU Grant Program.

    Few-Shot Deep Adversarial Learning for Video-based Person Re-identification

    Video-based person re-identification (re-ID) refers to matching people across camera views from arbitrary, unaligned video footage. Existing methods rely on supervision signals to optimise a projected space in which inter-/intra-video distances are maximised/minimised. However, this demands exhaustively labelling people across camera views, so such methods cannot scale to large camera networks. Moreover, learning effective video representations with view invariance is not explicitly addressed, even though features otherwise exhibit different distributions across views. Matching videos for person re-ID therefore demands flexible models that capture the dynamics of time-series observations and learn view-invariant representations from limited labelled training samples. In this paper, we propose a novel few-shot deep learning approach to video-based person re-ID that learns comparable representations which are discriminative and view-invariant. The proposed method is built on variational recurrent neural networks (VRNNs) and trained adversarially to produce latent variables with temporal dependencies that are highly discriminative yet view-invariant in matching persons. Through extensive experiments on three benchmark datasets, we empirically show our method's capability to create view-invariant temporal features and the state-of-the-art performance it achieves.
    Comment: Appearing in IEEE Transactions on Image Processing
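    The core matching idea, encoding each video into a temporal embedding and ranking gallery candidates by similarity, can be sketched with a toy recurrent encoder (a plain tanh cell with random weights here; this is our illustration, not the authors' VRNN or its adversarial training):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def embed(seq, W, U):
        """Fold a frame-feature sequence of shape (T, d) into one unit-norm
        code with a plain recurrent cell (stand-in for a VRNN encoder)."""
        h = np.zeros(U.shape[0])
        for x in seq:                    # temporal dependency across frames
            h = np.tanh(W @ x + U @ h)
        return h / np.linalg.norm(h)

    def rank_gallery(query, gallery, W, U):
        """Return the gallery index most similar to the query embedding."""
        q = embed(query, W, U)
        return int(np.argmax([q @ embed(g, W, U) for g in gallery]))

    d, hdim = 8, 16
    W = 0.1 * rng.normal(size=(hdim, d))
    U = 0.1 * rng.normal(size=(hdim, hdim))
    person_a = rng.normal(size=(10, d))   # frame features of person A, view 1
    person_b = rng.normal(size=(10, d))   # frame features of a different person
    query = person_a + 0.01 * rng.normal(size=(10, d))  # person A, another view
    best = rank_gallery(query, [person_b, person_a], W, U)
    ```

    In the toy setup the "other view" is just a perturbed copy, so the query correctly ranks person A first; the paper's contribution is learning an encoder for which this holds under real view changes and few labels.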

    Human-robot co-navigation using anticipatory indicators of human walking motion

    Mobile, interactive robots that operate in human-centric environments need the capability to navigate safely and efficiently around humans. This requires the ability to sense and predict human motion trajectories and to plan around them. In this paper, we present a study that supports the existence of statistically significant biomechanical turn indicators in human walking motion. Further, we demonstrate the effectiveness of these turn indicators as features in the prediction of human motion trajectories. Human motion capture data is collected with predefined goals to train and test a prediction algorithm. The use of anticipatory features results in improved performance of the prediction algorithm. Lastly, we demonstrate the closed-loop performance of the prediction algorithm using an existing algorithm for motion planning within dynamic environments. The anticipatory indicators of human walking motion can be used with different prediction and/or planning algorithms for robotics; the chosen planning and prediction algorithms demonstrate one such implementation for human-robot co-navigation.
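    The value of an anticipatory turn feature can be illustrated with a toy extrapolation (our own sketch, not the paper's predictor): rotating the rollout velocity by an observed turn rate beats a constant-velocity baseline on a curving path.

    ```python
    import numpy as np

    def predict_cv(pos, vel, steps):
        """Constant-velocity baseline: extrapolate the last velocity."""
        return np.array([pos + vel * (k + 1) for k in range(steps)])

    def predict_turn(pos, vel, turn_rate, steps, dt=1.0):
        """Extrapolate while rotating the velocity by an anticipatory
        turn-rate feature (e.g. derived from body or head yaw)."""
        c, s = np.cos(turn_rate * dt), np.sin(turn_rate * dt)
        R = np.array([[c, -s], [s, c]])       # per-step rotation matrix
        out, p, v = [], pos.copy(), vel.copy()
        for _ in range(steps):
            v = R @ v
            p = p + v * dt
            out.append(p.copy())
        return np.array(out)

    # Ground truth: a walker curving left at 0.2 rad per step, unit speed.
    omega = 0.2
    theta = omega * np.arange(12)
    gt = np.cumsum(np.stack([np.cos(theta), np.sin(theta)], axis=1), axis=0)

    pos, vel = gt[5], gt[5] - gt[4]           # state when prediction starts
    future = gt[6:]
    err_cv = np.linalg.norm(
        predict_cv(pos, vel, len(future)) - future, axis=1).mean()
    err_turn = np.linalg.norm(
        predict_turn(pos, vel, omega, len(future)) - future, axis=1).mean()
    ```

    On this path the turn-aware rollout reproduces the curve almost exactly while the constant-velocity baseline drifts off it, which mirrors the paper's finding that anticipatory features improve prediction before the turn is visible in position alone.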