
    Multisensor-based human detection and tracking for mobile service robots

    One of the fundamental issues for service robots is human-robot interaction. In order to perform such tasks and provide the desired services, these robots need to detect and track people in their surroundings. In this paper, we propose a solution for human tracking with a mobile robot that implements multisensor data fusion techniques. The system utilizes a new algorithm for laser-based leg detection using the on-board LRF. The approach is based on the recognition of typical leg patterns extracted from laser scans, which are shown to be highly discriminative even in cluttered environments. These patterns can be used to localize both static and walking persons, even while the robot is moving. Furthermore, faces are detected using the robot's camera, and this information is fused with the leg positions using a sequential implementation of the Unscented Kalman Filter. The proposed solution is feasible for service robots with a similar device configuration and has been successfully implemented on two different mobile platforms. Several experiments illustrate the effectiveness of our approach, showing that robust human tracking can be performed in complex indoor environments.
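
A minimal sketch (not the authors' implementation) of the sequential fusion step described above, assuming the filterpy library, a constant-velocity state [x, y, vx, vy], and that both the leg detector and the ground-projected face detector deliver planar position measurements with different noise levels:

```python
# Hedged sketch: sequentially fusing leg and face position measurements
# into one person track with an Unscented Kalman Filter (filterpy).
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

dt = 0.1  # assumed scan period of the laser range finder

def fx(x, dt):
    """Constant-velocity motion model."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]])
    return F @ x

def hx(x):
    """Both sensors are assumed to observe the planar position of the person."""
    return x[:2]

points = MerweScaledSigmaPoints(n=4, alpha=1e-3, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=4, dim_z=2, dt=dt, fx=fx, hx=hx, points=points)
ukf.P = np.eye(4)                 # initial state uncertainty
ukf.Q = np.eye(4) * 0.01          # process noise (tuning value, assumed)
R_legs = np.eye(2) * 0.05         # laser leg detector: relatively accurate
R_face = np.eye(2) * 0.25         # camera face detector projected to ground: noisier

def track_step(z_legs=None, z_face=None):
    """One tracking cycle: predict, then sequentially update with each
    available measurement (legs first, then face)."""
    ukf.predict()
    if z_legs is not None:
        ukf.update(np.asarray(z_legs), R=R_legs)
    if z_face is not None:
        ukf.update(np.asarray(z_face), R=R_face)
    return ukf.x[:2]              # current position estimate

# Example: a person seen by both sensors, then by the laser only.
print(track_step(z_legs=[1.0, 2.0], z_face=[1.1, 2.1]))
print(track_step(z_legs=[1.05, 2.1]))
```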

    Pedestrian Mobility Mining with Movement Patterns

    In street-based mobility mining, pedestrian volume estimation receives increasing attention, as it enables important applications such as billboard evaluation, attraction ranking and emergency support systems. In practice, empirical measurements are sparse due to budget limitations and constrained mounting options. Therefore, estimation of pedestrian quantity is required to perform pedestrian mobility analysis at unobserved locations. Accurate pedestrian mobility analysis is difficult to achieve due to the non-random path selection of individual pedestrians (resulting from motivated movement behaviour), which causes pedestrian volumes to be distributed non-uniformly over the traffic network. Existing approaches (pedestrian simulations and data mining methods) are hard to adjust to sensor measurements or require more expensive input data (e.g. high-fidelity floor plans or the total number of pedestrians on the site) and are thus infeasible. In order to achieve a mobility model that encodes pedestrian volumes accurately, we propose two methods under the regression framework which overcome the limitations of existing methods. These two methods incorporate not just topological information and episodic sensor readings, but also prior knowledge on movement preferences and movement patterns. The first is based on Least Squares Regression (LSR). The advantages of this method are the easy inclusion of route choice heuristics and robustness towards contradicting measurements. The second method is Gaussian Process Regression (GPR). The advantages of this method are the possibility to include expert knowledge on pedestrian movement and to estimate the uncertainty in predicting the unknown frequencies. Furthermore, the kernel matrix of the pedestrian frequencies returned by the method supports sensor placement decisions. Major benefits of the regression approach are (1) seamless integration of expert data and (2) simple reproduction of sensor measurements. Further advantages are (3) invariance of the results against traffic network homeomorphism and (4) a computational complexity that depends not on the number of modeled pedestrians but on the complexity of the traffic network. We compare our novel approaches to a state-of-the-art pedestrian simulation (Generalized Centrifugal Force Model) as well as existing data mining methods for traffic volume estimation (Spatial k-Nearest Neighbour) and commonly used graph kernels for the Gaussian Process Regression (Squared Exponential, Regularized Laplacian and Diffusion Kernel) in terms of prediction performance (measured with mean absolute error). Our methods showed significantly lower error rates. Since pattern knowledge is not easy to obtain, we present algorithms for pattern acquisition and analysis from Episodic Movement Data. The proposed analysis of Episodic Movement Data involves spatio-temporal aggregation of visits and flows, cluster analyses and dependency models. For pedestrian mobility data collection we further developed and successfully applied the recently emerged Bluetooth tracking technology. The introduced methods are combined into a system for pedestrian mobility analysis which comprises three layers. The Sensor Layer (1) collects geo-coded sensor recordings of people's presence and passes this episodic movement data as input to the next layer. By using standardized Open Geospatial Consortium (OGC) compliant interfaces for data collection, we support seamless integration of various sensor technologies depending on the application requirements.
    The Query Layer (2) interacts with the user, who can request analyses for a given region and time interval. Results are returned to the user in OGC-conformant Geography Markup Language (GML) format. The user query triggers the Analysis Layer (3), which utilizes the mobility model for pedestrian volume estimation. The proposed approach is promising for location performance evaluation and attractor identification. It has been successfully applied in numerous industrial applications: Zurich central train station, the zoo of Duisburg (Germany) and a football stadium (Stade des Costières Nîmes, France).
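
A minimal sketch of the GPR idea with a regularized Laplacian graph kernel for volume prediction on a toy traffic network; the graph, counts, noise level and kernel parameter below are illustrative assumptions, not values from the thesis:

```python
# Hedged sketch: GP regression over a small street network using a
# regularized Laplacian graph kernel to predict pedestrian volumes
# at unobserved nodes from a few sensor measurements.
import numpy as np

# Adjacency matrix of a toy street network with 5 locations (assumed).
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 1],
              [0, 1, 1, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A                     # graph Laplacian
beta = 1.0                                         # kernel regularization (assumed)
K = np.linalg.inv(np.eye(len(A)) + beta * L)       # regularized Laplacian kernel

obs = np.array([0, 2, 4])                          # nodes with sensor measurements
unobs = np.array([1, 3])                           # nodes to predict
y = np.array([120.0, 300.0, 80.0])                 # measured pedestrian counts (synthetic)
noise = 10.0 ** 2                                  # sensor noise variance (assumed)

# Standard GP posterior with a constant prior mean at the observed average.
m = y.mean()
K_oo = K[np.ix_(obs, obs)] + noise * np.eye(len(obs))
K_uo = K[np.ix_(unobs, obs)]
mean = m + K_uo @ np.linalg.solve(K_oo, y - m)
cov = K[np.ix_(unobs, unobs)] - K_uo @ np.linalg.solve(K_oo, K_uo.T)

print("predicted volumes:", mean)                  # estimates at unobserved nodes
print("predictive std dev:", np.sqrt(np.diag(cov)))  # uncertainty, as described above
```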

    Comparative analysis and fusion of spatiotemporal information for footstep recognition

    R. Vera-Rodriguez, J. S. D. Mason, J. Fierrez, and J. Ortega-Garcia, "Comparative analysis and fusion of spatiotemporal information for footstep recognition", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 4, pp. 823-834, August 2012.
    Footstep recognition is a relatively new biometric which aims to discriminate people using walking characteristics extracted from floor-based sensors. This paper reports for the first time a comparative assessment of the spatiotemporal information contained in footstep signals for person recognition. Experiments are carried out on the largest footstep database collected to date, with almost 20,000 valid footstep signals from more than 120 people. Results show very similar performance for the spatial and temporal approaches (5 to 15 percent EER depending on the experimental setup), and a significant improvement is achieved by their fusion (2.5 to 10 percent EER). The assessment protocol focuses on the influence of the quantity of data used in the reference models, which serves to simulate the conditions of different potential applications such as smart homes or security access scenarios. Ruben Vera-Rodriguez, Julian Fierrez and Javier Ortega-Garcia are supported by the projects Contexts (S2009/TIC-1485), Bio-Challenge (TEC2009-11186), TeraSense (CSD2008-00068) and ‘Catedra UAM-Telefonica’.
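
A minimal sketch of the score-level fusion idea with a simple Equal Error Rate computation; the matcher scores and the fusion weight are synthetic assumptions, not the paper's data or protocol:

```python
# Hedged sketch: weighted-sum fusion of a "spatial" and a "temporal"
# footstep matcher, evaluated with an approximate Equal Error Rate (EER).
import numpy as np

def eer(genuine, impostor):
    """Approximate EER by sweeping a decision threshold over all scores."""
    ts = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([np.mean(impostor >= t) for t in ts])  # false accept rate
    frr = np.array([np.mean(genuine < t) for t in ts])    # false reject rate
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2.0

rng = np.random.default_rng(0)
# Synthetic matcher scores: higher = more likely the same person.
gen_spatial, imp_spatial = rng.normal(2.0, 1.0, 500), rng.normal(0.0, 1.0, 500)
gen_temporal, imp_temporal = rng.normal(1.8, 1.0, 500), rng.normal(0.0, 1.0, 500)

w = 0.5                                     # fusion weight (assumed)
gen_fused = w * gen_spatial + (1 - w) * gen_temporal
imp_fused = w * imp_spatial + (1 - w) * imp_temporal

print("spatial EER :", eer(gen_spatial, imp_spatial))
print("temporal EER:", eer(gen_temporal, imp_temporal))
print("fused EER   :", eer(gen_fused, imp_fused))   # fusion lowers the EER here
```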

    Advances towards behaviour-based indoor robotic exploration

    The main contributions of this research work lie in object recognition by computer vision on the one hand, and in robot localisation and mapping on the other. The first contribution area of the research addresses object recognition in mobile robots. In this area, door handle recognition is of great importance, as it helps the robot to identify doors in places where the camera cannot view the whole door. In this research, a new two-step algorithm based on feature extraction is presented, which refines the extracted features to reduce the number of superfluous keypoints to be compared, while increasing efficiency by improving accuracy and reducing computation time. In contrast to segmentation-based paradigms, the feature-extraction-based two-step method can easily be generalized to other types of handles, or even to other kinds of objects such as road signs. Experiments have shown very good accuracy when tested in real environments with different kinds of door handles. With respect to the second contribution, a new technique is presented to construct a topological map during the exploration phase a robot performs in an unseen office-like environment. First, a preliminary approach is proposed that merges Markovian localisation into a distributed system, which requires little storage and few computational resources and is well suited to dynamic environments. In the same area, a second contribution to terrain-inspection-level, behaviour-based navigation concerns the development of an automatic mapping method for acquiring the procedural topological map. The new approach is based on a typicality test called INCA to perform the so-called loop-closing action. The method was integrated in a behaviour-based control architecture and tested in both simulated and real robot/environment systems. The developed system also proved useful for localisation purposes.
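
A rough illustration (not the thesis algorithm) of a two-step, feature-extraction-based recognizer, here using OpenCV ORB features: step 1 prunes superfluous keypoints by response strength, step 2 matches the surviving descriptors against a door-handle template. File names and thresholds are placeholders:

```python
# Hedged sketch: two-step keypoint pruning and matching for door-handle recognition.
import cv2

orb = cv2.ORB_create(nfeatures=1000)
bf = cv2.BFMatcher(cv2.NORM_HAMMING)

def extract_pruned(img, keep=200):
    """Step 1: detect ORB keypoints and keep only the strongest responses,
    reducing the number of descriptors compared later."""
    kps = orb.detect(img, None)
    kps = sorted(kps, key=lambda k: k.response, reverse=True)[:keep]
    return orb.compute(img, kps)

def handle_detected(scene, template, min_good=15):
    """Step 2: match pruned descriptors with a ratio test and decide whether
    the door-handle template appears in the scene."""
    _, des_s = extract_pruned(scene)
    _, des_t = extract_pruned(template)
    if des_s is None or des_t is None:
        return False
    matches = bf.knnMatch(des_t, des_s, k=2)
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    return len(good) >= min_good

scene = cv2.imread("office_door.png", cv2.IMREAD_GRAYSCALE)        # placeholder path
template = cv2.imread("handle_template.png", cv2.IMREAD_GRAYSCALE) # placeholder path
if scene is None or template is None:
    raise SystemExit("placeholder images not found")
print(handle_detected(scene, template))
```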

    Joint Probabilistic People Detection in Overlapping Depth Images

    Privacy-preserving, high-quality people detection is a vital computer vision task for various indoor scenarios, e.g. people counting, customer behavior analysis, ambient assisted living or smart homes. In this work, a novel approach for people detection in multiple overlapping depth images is proposed. We present a probabilistic framework utilizing a generative scene model to jointly exploit the multi-view image evidence, allowing us to detect people from arbitrary viewpoints. Our approach makes use of mean-field variational inference not only to estimate the maximum a posteriori (MAP) state but also to approximate the posterior probability distribution of the people present in the scene. Evaluation shows state-of-the-art results on a novel data set for indoor people detection and tracking in depth images from the top view with high perspective distortions. Furthermore, it is demonstrated that our approach (compared to the mono-view setup) successfully exploits the multi-view image evidence and robustly converges in only a few iterations.
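
A toy sketch of fully factorized mean-field variational inference for per-cell "person present" probabilities; the evidence terms and pairwise weights are synthetic stand-ins for the paper's generative scene model, not its actual factors:

```python
# Hedged sketch: mean-field coordinate updates over Bernoulli beliefs on a
# small ground-plane grid, combining evidence from two overlapping depth views.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n = 6                                      # ground-plane cells (toy size)
# Per-cell log-likelihood ratios from two overlapping views (synthetic values).
view1 = np.array([2.0, -1.0, 0.5, -2.0, 1.5, -1.0])
view2 = np.array([1.5, -0.5, 1.0, -1.5, 2.0, -2.0])
evidence = view1 + view2                   # joint multi-view evidence

# Pairwise potential: neighbouring cells repel, since one person occupies ~one cell.
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = -3.0

q = np.full(n, 0.5)                        # initial Bernoulli beliefs
for it in range(20):                       # mean-field updates; typically few iterations
    q_new = sigmoid(evidence + W @ q)
    if np.max(np.abs(q_new - q)) < 1e-4:
        q = q_new
        break
    q = q_new

print("approximate posterior presence probabilities:", np.round(q, 3))
print("MAP-like detections (cells with q > 0.5):", np.where(q > 0.5)[0])
```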