3,939 research outputs found

    Spatial footstep recognition by convolutional neural networks for biometric applications

    We propose a Convolutional Neural Network model to learn spatial footstep features end-to-end from a floor sensor system for biometric applications. Our model's generalization performance is assessed by independent validation and evaluation datasets from the largest footstep database to date, containing nearly 20,000 footstep signals from 127 users. We report footstep recognition performance as Equal Error Rate in the range of 9% to 13% depending on the test set. This improves previously reported footstep recognition rates in the spatial domain by up to 4% EER.
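    The Equal Error Rate (EER) quoted above is the operating point where the false acceptance and false rejection rates coincide. A minimal sketch (not from the paper; synthetic scores for illustration only) of how an EER can be estimated from genuine and impostor match scores:

```python
import numpy as np

def equal_error_rate(genuine_scores, impostor_scores):
    """Return the EER and the threshold where FAR and FRR cross.

    Higher scores are assumed to indicate a better match.
    """
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    eer, eer_threshold, best_gap = 1.0, thresholds[0], np.inf
    for t in thresholds:
        far = np.mean(impostor_scores >= t)   # impostors wrongly accepted
        frr = np.mean(genuine_scores < t)     # clients wrongly rejected
        gap = abs(far - frr)
        if gap < best_gap:
            best_gap, eer, eer_threshold = gap, (far + frr) / 2, t
    return eer, eer_threshold

# Toy usage with synthetic score distributions
rng = np.random.default_rng(0)
genuine = rng.normal(1.0, 0.5, 1000)
impostor = rng.normal(0.0, 0.5, 5000)
print(equal_error_rate(genuine, impostor))
```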

    Material Recognition CNNs and Hierarchical Planning for Biped Robot Locomotion on Slippery Terrain

    In this paper we tackle the problem of visually predicting surface friction for environments with diverse surfaces, and integrating this knowledge into biped robot locomotion planning. The problem is essential for autonomous robot locomotion since diverse surfaces with varying friction abound in the real world, from wood to ceramic tiles, grass, or ice, which may cause difficulties or large energy costs for robot locomotion if not considered. We propose to estimate friction and its uncertainty from visual estimation of material classes using convolutional neural networks, together with probability distribution functions of friction associated with each material. We then robustly integrate the friction predictions into a hierarchical (footstep and full-body) planning method using chance constraints, and optimize the same trajectory costs at both levels of the planning method for consistency. Our solution achieves fully autonomous perception and locomotion on slippery terrain, considering not only friction and its uncertainty, but also collision, stability, and trajectory cost. We show promising friction prediction results on real pictures of outdoor scenarios, and planning experiments on a real robot facing surfaces with different friction.
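    A rough sketch (not the paper's implementation; the per-material friction statistics and numbers below are assumptions) of how material-class probabilities from a CNN could be turned into a friction estimate with uncertainty, and how a simple chance constraint on a footstep could be checked:

```python
import numpy as np
from scipy import stats

# Hypothetical per-material friction-coefficient distributions (mean, std);
# the paper's actual values are not reproduced here.
FRICTION_PRIORS = {
    "wood": (0.60, 0.10),
    "tile": (0.45, 0.08),
    "grass": (0.55, 0.15),
    "ice": (0.10, 0.05),
}

def friction_from_materials(class_probs):
    """Mean/std of a mixture of per-material friction Gaussians weighted by CNN softmax output."""
    mean = sum(p * FRICTION_PRIORS[m][0] for m, p in class_probs.items())
    second = sum(p * (FRICTION_PRIORS[m][1] ** 2 + FRICTION_PRIORS[m][0] ** 2)
                 for m, p in class_probs.items())
    return mean, np.sqrt(max(second - mean ** 2, 0.0))

def footstep_is_safe(required_mu, class_probs, confidence=0.95):
    """Chance constraint: P(mu >= required_mu) >= confidence, assuming a Gaussian friction estimate."""
    mean, std = friction_from_materials(class_probs)
    return stats.norm.sf(required_mu, loc=mean, scale=std) >= confidence

probs = {"wood": 0.7, "tile": 0.2, "grass": 0.05, "ice": 0.05}
print(footstep_is_safe(required_mu=0.3, class_probs=probs))
```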

    A novel approach of gait recognition through fusion with footstep information

    R. Vera-Rodríguez, J. Fiérrez, J. S. D. Mason, J. Ortega-García, "A novel approach of gait recognition through fusion with footstep information", in International Conference on Biometrics (ICB), Madrid (Spain), 2013, pp. 1-6. This paper focuses on two closely linked biometric modes: gait and footstep biometrics. Footstep recognition is a relatively new biometric based on signals extracted from floor sensors, while gait has been researched more extensively and is based on video sequences of people walking. This paper reports a directly comparative assessment of both biometrics using the same database (SFootBD) and experimental protocols. A fusion of the two modes leads to enhanced gait recognition performance, as the information from the two modes comes from different capturing devices and is not strongly correlated. This fusion could find application in indoor scenarios where a gait recognition system is present, such as security access (e.g. security gates at airports) or smart homes. The gait and footstep systems achieve 8.4% and 10.7% EER respectively, which improves significantly to 4.8% EER with their fusion at the score level into a walking biometric. This work has been partially supported by projects Bio-Shield (TEC2012-34881), Contexts (S2009/TIC-1485), TeraSense (CSD2008-00068) and "Cátedra UAM-Telefónica".
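    A minimal sketch of score-level fusion of two biometric matchers (illustrative only; the normalization ranges and weights below are assumptions, not the paper's values):

```python
import numpy as np

def minmax_normalize(scores, lo, hi):
    """Map raw matcher scores into [0, 1] using statistics taken from a development set."""
    return np.clip((scores - lo) / (hi - lo), 0.0, 1.0)

def fuse_scores(gait_score, footstep_score, w_gait=0.6, w_footstep=0.4):
    """Weighted-sum fusion of normalized scores from two matchers (weights assumed)."""
    return w_gait * gait_score + w_footstep * footstep_score

# Example: normalized scores from the two matchers for one probe sample
gait = minmax_normalize(np.array([12.3]), lo=0.0, hi=20.0)
footstep = minmax_normalize(np.array([0.71]), lo=0.0, hi=1.0)
print(fuse_scores(gait[0], footstep[0]))
```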

    Comparative analysis and fusion of spatiotemporal information for footstep recognition

    R. Vera-Rodriguez, J. S. D. Mason, J. Fierrez, and J. Ortega-Garcia, "Comparative analysis and fusion of spatiotemporal information for footstep recognition", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 4, pp. 823-834, August 2012. Footstep recognition is a relatively new biometric which aims to discriminate people using walking characteristics extracted from floor-based sensors. This paper reports for the first time a comparative assessment of the spatiotemporal information contained in footstep signals for person recognition. Experiments are carried out on the largest footstep database collected to date, with almost 20,000 valid footstep signals and more than 120 people. Results show very similar performance for the spatial and temporal approaches (5 to 15 percent EER depending on the experimental setup), and a significant improvement is achieved by their fusion (2.5 to 10 percent EER). The assessment protocol focuses on the influence of the quantity of data used in the reference models, which serves to simulate conditions of different potential applications such as smart homes or security access scenarios. Ruben Vera-Rodriguez, Julian Fierrez and Javier Ortega Garcia are supported by projects Contexts (S2009/TIC-1485), Bio-Challenge (TEC2009-11186), TeraSense (CSD2008-00068) and 'Catedra UAM-Telefonica'.
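    A rough sketch (array shapes and sampling are assumptions, not the paper's preprocessing) of deriving simple spatial and temporal views from a raw footstep recording sampled over a floor-sensor grid:

```python
import numpy as np

def spatial_temporal_features(footstep):
    """Split a raw footstep recording into spatial and temporal views.

    footstep : array of shape (T, H, W) -- pressure samples over a sensor grid
               (the shape is an assumption for illustration).
    """
    # Temporal view: total pressure over time, akin to a ground reaction force profile
    grf = footstep.sum(axis=(1, 2))
    # Spatial view: pressure accumulated over the whole footstep, one value per sensor
    accumulated = footstep.sum(axis=0)
    return grf, accumulated

# Toy usage with random data standing in for one footstep signal
step = np.random.rand(800, 20, 20)
grf, spatial_map = spatial_temporal_features(step)
print(grf.shape, spatial_map.shape)   # (800,) (20, 20)
```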

    Analysis of spatio-temporal representations for robust footstep recognition with deep residual neural networks

    Human footsteps can provide a unique behavioural pattern for robust biometric systems. We propose spatio-temporal footstep representations from floor-only sensor data in advanced computational models for automatic biometric verification. Our models deliver an artificial intelligence capable of effectively differentiating the fine-grained variability of footsteps between legitimate users (clients) and impostor users of the biometric system. The methodology is validated on the largest footstep database to date, containing nearly 20,000 footstep signals from more than 120 users. The database is organized by considering a large cohort of impostors and a small set of clients to verify the reliability of biometric systems. We provide experimental results in three critical data-driven security scenarios, according to the amount of footstep data made available for model training: airport security checkpoints (smallest training set), workspace environments (medium training set) and home environments (largest training set). We report state-of-the-art footstep recognition rates with an optimal equal false acceptance and false rejection rate of 0.7% (equal error rate), an improvement ratio of 371% over the previous state-of-the-art. We perform a feature analysis of the deep residual neural networks, showing effective clustering of clients' footstep data, and provide insights into the feature learning process. This work has been partially supported by Cognimetrics TEC2015-70627-R MINECO/FEDER.
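    A minimal sketch of the residual-block idea underlying deep residual networks (assuming PyTorch; channel counts and input sizes are illustrative, not the paper's architecture):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block: the input is added back onto the conv branch output."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)   # skip connection

# Toy usage on a batch of spatial footstep "images" (shapes are assumptions)
block = ResidualBlock(channels=16)
x = torch.randn(4, 16, 88, 88)
print(block(x).shape)   # torch.Size([4, 16, 88, 88])
```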

    Special issue on smart interactions in cyber-physical systems: Humans, agents, robots, machines, and sensors

    In recent years, there has been increasing interaction between humans and non-human systems as we move beyond the industrial age and the information age and into the fourth-generation society. The ability to distinguish between human and non-human capabilities has become more difficult to discern. Given this, it is common that cyber-physical systems (CPSs) are rapidly integrated with human functionality, and humans have become increasingly dependent on CPSs to perform their daily routines. The prospect of a future in which humans and non-human CPSs consistently interact and allow each other to navigate through a set of non-trivial goals is an interesting and rich area of research, discovery, and practical work. The evidence of convergence has rapidly gained clarity, demonstrating that we can use complex combinations of sensors, artificial intelligence, and data to augment human life and knowledge. To expand the knowledge in this area, we should explain how to model, design, validate, implement, and experiment with these complex systems of interaction, communication, and networking, which will be developed and explored in this special issue. This special issue includes ideas of the future that are relevant for understanding, discerning, and developing the relationship between humans and non-human CPSs, as well as the practical nature of systems that facilitate the integration between humans, agents, robots, machines, and sensors (HARMS). Authors: Donghan Kim (Kyung Hee University); Sebastian Alberto Rodriguez (Universidad Tecnológica Nacional and Centro Científico Tecnológico Conicet - Tucumán, CONICET, Argentina); Eric T. Matson (Purdue University, United States); Gerard Jounghyun Kim (Korea University).

    Optimizing Player and Viewer Amusement in Suspense Video Games

    Broadcast video games need to provide amusement to both players and audience. To achieve this, one of the most consumed genres is suspense, due to the psychological effects it has on both roles. Suspense is typically achieved in video games by controlling the amount of information delivered about the location of the threat. However, previous research suggests that players need more frequent information than viewers to reach a similar level of amusement, even at the cost of jeopardizing viewers' engagement. In order to obtain models that maximize amusement for both interactive and passive audiences, we conducted an experiment in which one group of subjects played a suspenseful video game while another group watched it remotely. The subjects were asked to report their perceived suspense and amusement, and the data were used to obtain regression models for two common strategies to evoke suspense in video games: alerting when the threat is approaching, and giving random circumstantial indications about the location of the threat. The results suggest that the optimal level is reached by randomly providing the minimal amount of information that still allows players to counteract the threat. We believe these results can be applied to a broad range of narrative media, beyond interactive games.
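    A rough sketch of the kind of regression modeling described (the numbers below are made up for illustration, not the study's data), fitting reported amusement against information frequency and locating the optimum for each role:

```python
import numpy as np

# Hypothetical data: information frequency (events per minute) vs. reported
# amusement on a 1-7 scale, standing in for the experiment's responses.
freq = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 6.0])
player_amusement = np.array([3.1, 4.2, 5.6, 5.9, 5.4, 4.3])
viewer_amusement = np.array([4.8, 5.5, 5.7, 5.1, 4.4, 3.2])

# Quadratic regression per role; the vertex of each fitted parabola estimates
# the information frequency that maximizes amusement for that role.
for name, y in [("player", player_amusement), ("viewer", viewer_amusement)]:
    a, b, c = np.polyfit(freq, y, deg=2)
    print(name, "optimum at", -b / (2 * a), "events/min")
```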