
    DeepWalking: Enabling Smartphone-based Walking Speed Estimation Using Deep Learning

    Walking speed estimation is an essential component of mobile apps in fields such as fitness, transportation, navigation, and healthcare. Most existing solutions focus on specialized medical applications that rely on body-worn motion sensors. These approaches do not effectively serve the common case in which a user holding a smartphone wants to estimate his or her walking speed from smartphone sensors alone, and existing smartphone-based approaches fail to provide acceptable precision for walking speed estimation. This raises a question: can a smartphone achieve speed estimation accuracy comparable to obtrusive wearable-sensor-based solutions? We find the answer in advanced neural networks. In this paper, we present DeepWalking, the first deep learning-based walking speed estimation scheme for smartphones. A deep convolutional neural network (DCNN) is applied to automatically identify and extract the most effective features from the smartphone's accelerometer and gyroscope data and to train the network model for accurate speed estimation. Experiments are performed with 10 participants on a treadmill. The average root-mean-squared error (RMSE) of the estimated walking speed is 0.16 m/s, which is comparable to the results obtained by state-of-the-art approaches based on multiple body-worn sensors (RMSE of 0.11 m/s). The results indicate that a smartphone can be a strong tool for walking speed estimation if the sensor data are effectively calibrated and supported by advanced deep learning techniques. (Comment: 6 pages, 9 figures, published in IEEE Global Communications Conference (GLOBECOM).)
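    The core operation such a DCNN applies to inertial data can be sketched as a 1D convolution over a multichannel sensor window followed by pooling and a regression head. The window length, filter count, and weights below are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def conv1d(window, kernels):
    """Valid-mode 1D convolution over a multichannel sensor window.

    window:  (channels, time) array, e.g. 3 accelerometer + 3 gyroscope axes
    kernels: (filters, channels, width) array of learned filters
    returns: (filters, time - width + 1) feature map
    """
    f, c, w = kernels.shape
    t = window.shape[1] - w + 1
    out = np.empty((f, t))
    for i in range(t):
        patch = window[:, i:i + w]                        # (channels, width) slice
        out[:, i] = np.tensordot(kernels, patch, axes=([1, 2], [0, 1]))
    return out

rng = np.random.default_rng(0)
window = rng.standard_normal((6, 100))        # 100 samples of 6-axis IMU data
kernels = rng.standard_normal((8, 6, 5))      # 8 filters of width 5
features = np.maximum(conv1d(window, kernels), 0.0)       # ReLU activation
pooled = features.mean(axis=1)                # global average pooling -> (8,)
speed = float(pooled @ rng.standard_normal(8))            # linear regression head
```

    In a trained model the kernels and the readout weights would be learned from labeled treadmill sessions rather than sampled randomly.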

    Evaluation of CNN architectures for gait recognition based on optical flow maps

    This work targets people identification in video based on the way they walk (i.e. gait) using deep learning architectures. We explore the use of convolutional neural networks (CNN) for learning high-level descriptors from low-level motion features (i.e. optical flow components). The low number of training samples per subject and the use of a test set containing subjects different from the training ones make the search for a good CNN architecture a challenging task.

    Automatic learning of gait signatures for people identification

    This work targets people identification in video based on the way they walk (i.e. gait). While classical methods typically derive gait signatures from sequences of binary silhouettes, in this work we explore the use of convolutional neural networks (CNN) for learning high-level descriptors from low-level motion features (i.e. optical flow components). We carry out a thorough experimental evaluation of the proposed CNN architecture on the challenging TUM-GAID dataset. The experimental results indicate that using spatio-temporal cuboids of optical flow as input to the CNN yields state-of-the-art results on the gait task at an image resolution eight times lower than previously reported (i.e. 80x60 pixels). (Comment: Proof-of-concept paper; technical report on the use of ConvNets (CNN) for gait recognition. Data and code: http://www.uco.es/~in1majim/research/cnngaitof.htm)
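    The "spatio-temporal cuboid" input can be sketched as stacking the two optical flow components of consecutive frame pairs into one tensor. The 80x60 resolution comes from the abstract; the temporal depth of 25 frames is an illustrative assumption:

```python
import numpy as np

def flow_cuboid(flow_frames):
    """Stack per-frame optical flow maps into a spatio-temporal CNN input.

    flow_frames: list of (H, W, 2) arrays, each holding the horizontal and
                 vertical flow components of one frame pair.
    returns:     (2 * T, H, W) cuboid, both flow channels stacked over time.
    """
    return np.concatenate([f.transpose(2, 0, 1) for f in flow_frames], axis=0)

# 25 consecutive flow maps at the 80x60 resolution mentioned in the abstract
frames = [np.zeros((60, 80, 2)) for _ in range(25)]
cuboid = flow_cuboid(frames)
# cuboid.shape == (50, 60, 80)
```

    The 2D CNN then treats the temporal axis as input channels, which is how motion information reaches the convolutional filters without a 3D architecture.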

    Learning optimised representations for view-invariant gait recognition

    Gait recognition can be performed without subject cooperation under harsh conditions, making it an important tool in forensic gait analysis, security control, and other commercial applications. One critical issue that prevents gait recognition systems from being widely adopted is the performance drop when the camera viewpoint differs between the registered templates and the query data. In this paper, we explore the potential of combining feature optimisers with representations learned by convolutional neural networks (CNN) to achieve efficient view-invariant gait recognition. The experimental results indicate that the CNN learns highly discriminative representations across moderate view variations, and that these representations can be further improved using view-invariant feature selectors, achieving high matching accuracy across views.
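    One simple form a view-invariant feature selector can take is ranking feature dimensions by how little they change when the same subjects are seen from two viewpoints. This is a minimal sketch of that idea, not the paper's optimiser; the data and the stability criterion are assumptions:

```python
import numpy as np

def select_view_stable(feats_a, feats_b, k):
    """Pick the k feature dimensions that change least between two viewpoints.

    feats_a, feats_b: (subjects, dims) CNN descriptors of the same subjects
                      captured from two different camera views.
    returns: indices of the k most view-stable dimensions.
    """
    gap = np.mean(np.abs(feats_a - feats_b), axis=0)   # per-dim cross-view gap
    return np.argsort(gap)[:k]

rng = np.random.default_rng(1)
view_a = rng.standard_normal((10, 32))
noise = rng.standard_normal((10, 32))
noise[:, :8] *= 0.01            # first 8 dims barely move across views
view_b = view_a + noise
stable = select_view_stable(view_a, view_b, 8)
```

    Matching would then be performed only on the selected dimensions, discarding the view-sensitive ones.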

    IntoxiGait Deep Learning

    Alcohol abuse is a pervasive problem worldwide, causing 88,000 deaths annually. Recently, several projects have attempted to estimate a user's level of intoxication by measuring gait with mobile sensors. The goal of this project was to compare a deep learning approach with previous methods for predicting a user's blood alcohol concentration (BAC), by training a convolutional neural network and creating a mobile app that can accurately determine intoxication level. We gathered data from 38 participants over the course of 12 weeks, collecting accelerometer and gyroscope data simultaneously from a smartwatch and a smartphone. In classifying an input containing two seconds of data into 5 BAC ranges, our neural network's accuracy is roughly 64% on the test set and 69% on the training set.
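    The preprocessing for such a classifier amounts to two steps: discretizing continuous BAC into ordinal class labels and cutting the IMU stream into fixed two-second windows. The bin edges and sampling rate below are hypothetical; the project's actual ranges are not given in the abstract:

```python
import numpy as np

# Hypothetical BAC bin edges (g/dL) delimiting 5 ordinal classes.
BAC_EDGES = [0.00, 0.04, 0.08, 0.12, 0.16]

def bac_to_class(bac):
    """Map a blood alcohol concentration to one of 5 class labels (0-4)."""
    return int(np.searchsorted(BAC_EDGES, bac, side="right")) - 1

def two_second_windows(samples, rate_hz=50):
    """Split an (n, channels) IMU stream into non-overlapping 2 s windows."""
    size = 2 * rate_hz
    n = len(samples) // size
    return samples[: n * size].reshape(n, size, -1)
```

    Each window, paired with the class label of the session's measured BAC, becomes one training example for the convolutional network.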

    A multilevel paradigm for deep convolutional neural network features selection with an application to human gait recognition

    Human gait recognition (HGR) is of high importance in the area of video surveillance due to remote access and security threats. HGR is a technique commonly used for identifying a person's walking style in daily life. However, typical situations such as changes of clothing and variations in view angle degrade system performance. Lately, different machine learning (ML) techniques have been introduced for video surveillance that give promising results, among which deep learning (DL) shows the best performance in complex scenarios. In this article, an integrated framework is proposed for HGR using a deep neural network and a fuzzy entropy controlled skewness (FEcS) approach. The proposed technique works in two phases: in the first phase, deep convolutional neural network (DCNN) features are extracted by pre-trained CNN models (VGG19 and AlexNet) and their information is combined by a parallel fusion approach. In the second phase, entropy and skewness vectors are calculated from the fused feature vector (FV) to select the best subsets of features using the suggested FEcS approach. The selected feature subsets are finally fed to multiple classifiers, and the best one is chosen on the basis of accuracy. The experiments were carried out on four well-known datasets, namely AVAMVG gait and CASIA A, B, and C. The achieved accuracies were 99.8%, 99.7%, 93.3%, and 92.2%, respectively. The overall recognition results lead to the conclusion that the proposed system is very promising.
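    The two phases can be sketched in outline: fusing the two pre-trained networks' feature vectors, then ranking fused dimensions by a statistic such as skewness. Both the max-based fusion rule and the selection criterion below are assumptions standing in for the paper's actual FEcS formulation:

```python
import numpy as np

def parallel_fuse(fv1, fv2):
    """Parallel fusion sketch: pad the shorter vector with zeros and take
    element-wise maxima, so both networks contribute to one vector."""
    n = max(len(fv1), len(fv2))
    a = np.pad(fv1, (0, n - len(fv1)))
    b = np.pad(fv2, (0, n - len(fv2)))
    return np.maximum(a, b)

def skewness(x):
    """Sample skewness of a feature matrix, computed per column."""
    mu, sd = x.mean(axis=0), x.std(axis=0) + 1e-12
    return (((x - mu) / sd) ** 3).mean(axis=0)

def select_by_skewness(features, k):
    """Keep the k columns with the largest absolute skewness."""
    return np.argsort(-np.abs(skewness(features)))[:k]

# Toy usage: fuse two unequal-length feature vectors
fused = parallel_fuse(np.array([1.0, 5.0, 2.0]), np.array([3.0, 1.0]))
```

    The selected columns would then be passed to the candidate classifiers, with the final classifier chosen by validation accuracy as the abstract describes.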