
    Wearable device-based gait recognition using angle embedded gait dynamic images and a convolutional neural network

    The widespread installation of inertial sensors in smartphones and other wearable devices provides a valuable opportunity to identify people by analyzing their gait patterns, in either cooperative or non-cooperative circumstances. However, it is still a challenging task to reliably extract discriminative features for gait recognition from the noisy and complex data sequences collected by casually worn wearable devices such as smartphones. To cope with this problem, we propose a novel image-based gait recognition approach using a Convolutional Neural Network (CNN), without the need to manually extract discriminative features. The CNN’s input image, which is encoded straightforwardly from the inertial sensor data sequences, is called the Angle Embedded Gait Dynamic Image (AE-GDI). The AE-GDI is a new two-dimensional representation of gait dynamics that is invariant to rotation and translation. The performance of the proposed approach in gait authentication and gait labeling is evaluated using two datasets: (1) the McGill University dataset, which is collected under realistic conditions; and (2) the Osaka University dataset, which has the largest number of subjects. Experimental results show that the proposed approach achieves recognition accuracy competitive with existing approaches and provides an effective parametric solution for identifying among a large number of subjects by their gait patterns.
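The abstract does not spell out the AE-GDI encoding, but its core idea — a two-dimensional image of angle relations between inertial samples, which is unchanged when all samples undergo a common rotation — can be sketched as below. The function names `angle` and `angle_image`, the pairing scheme (sample `i` vs. sample `i + d + 1`), and the `width` parameter are illustrative assumptions, not the paper's exact construction.

```python
import math

def angle(u, v):
    """Angle in radians between two 3-D vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    # Clamp to guard against floating-point drift outside [-1, 1].
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

def angle_image(seq, width):
    """Encode a sequence of 3-axis inertial samples as a 2-D array of
    pairwise angles: row i, column d holds the angle between sample i
    and sample i + d + 1. Rotating every sample by the same rotation
    leaves every angle (and hence the image) unchanged."""
    rows = []
    for i in range(len(seq) - width):
        rows.append([angle(seq[i], seq[i + d + 1]) for d in range(width)])
    return rows
```

Because angles between vectors are preserved under any common rotation, such an image is insensitive to how the device is oriented in the pocket — the property the abstract claims for the AE-GDI.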


    Stereo-vision-based navigation of a six-legged walking robot in unknown rough terrain

    In this paper we present a visual navigation algorithm for the six-legged walking robot DLR Crawler in rough terrain. The algorithm is based on stereo images from which depth images are computed using the semi-global matching (SGM) method. Further, visual odometry is calculated along with an error measure. Pose estimates are obtained by fusing inertial data with relative leg odometry and visual odometry measurements using an indirect information filter. The visual odometry error measure is used in the filtering process to put lower weights on erroneous visual odometry data, hence improving the robustness of pose estimation. From the estimated poses and the depth images, a dense digital terrain map is created by applying the locus method. The traversability of the terrain is estimated by a plane-fitting approach, and paths are planned using a D* Lite planner, taking the traversability of the terrain and the current motion capabilities of the robot into account. Motion commands and the traversability measures of the upcoming terrain are sent to the walking layer of the robot so that it can choose an appropriate gait for the terrain. Experimental results show the accuracy of the navigation algorithm and its robustness against visual disturbances.
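The down-weighting of erroneous visual odometry described above amounts to inverse-variance (information) fusion: a measurement with a large error measure contributes a large variance and therefore a small weight. A minimal scalar sketch, assuming a hypothetical `fuse` helper rather than the paper's full indirect information filter:

```python
def fuse(estimates):
    """Inverse-variance fusion of scalar estimates of the same quantity.
    Each entry is (value, variance); a large visual-odometry error
    measure maps to a large variance and hence a low weight.
    Returns the fused value and its variance."""
    info = sum(1.0 / var for _, var in estimates)          # total information
    mean = sum(val / var for val, var in estimates) / info  # weighted mean
    return mean, 1.0 / info
```

With equal variances the result is a plain average; as one source's variance grows, the fused estimate converges toward the other sources, which is exactly the robustness behavior the abstract describes.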

    Walking Recognition in Mobile Devices

    Presently, smartphones are used more and more for purposes that have nothing to do with phone calls or simple data transfers. One example is the recognition of human activity, which is relevant information for many applications in the domains of medical diagnosis, elderly assistance, indoor localization, and navigation. The information captured by the inertial sensors of the phone (accelerometer, gyroscope, and magnetometer) can be analyzed to determine the activity performed by the person who is carrying the device, in particular the activity of walking. Nevertheless, the development of a standalone application able to detect the walking activity using only the data provided by these inertial sensors is a complex task. This complexity lies in the hardware disparity, the noise in the data, and above all the many movements that the smartphone can experience which have nothing to do with the physical displacement of the owner. In this work, we explore and compare several approaches for identifying the walking activity. We categorize them into two main groups: the first one uses features extracted from the inertial data, whereas the second one analyzes the characteristic shape of the time series made up of the sensor readings. Due to the lack of public datasets of smartphone inertial data for the recognition of human activity under no constraints, we collected data from 77 different people who were not connected to this research. Using this dataset, which we published online, we performed an extensive experimental validation and comparison of our proposals.
    This research has received financial support from AEI/FEDER (European Union) grant number TIN2017-90135-R, as well as the Consellería de Cultura, Educación e Ordenación Universitaria of Galicia (accreditation 2016–2019, ED431G/01 and ED431G/08, reference competitive group ED431C2018/29, and grant ED431F2018/02), and the European Regional Development Fund (ERDF). It has also been supported by the Ministerio de Educación, Cultura y Deporte of Spain in the FPU 2017 program (FPU17/04154), and the Ministerio de Economía, Industria y Competitividad in the Industrial PhD 2014 program (DI-14-06920).

    Sensor Data Fusion for Body State Estimation in a Hexapod Robot With Dynamical Gaits

    We report on a hybrid 12-dimensional full body state estimator for a hexapod robot executing a jogging gait in steady state on level terrain, with regularly alternating ground-contact and aerial phases of motion. We use a repeating sequence of continuous-time dynamical models that are switched in and out of an extended Kalman filter to fuse measurements from a novel leg pose sensor and inertial sensors. Our inertial measurement unit supplements the traditionally paired three-axis rate gyro and three-axis accelerometer with a set of three additional three-axis accelerometer suites, thereby providing additional angular acceleration measurements, avoiding the need to locate the accelerometer at the center of mass of the robot’s body, and simplifying installation and calibration. We implement this estimation procedure offline, using data extracted from numerous repeated runs of the hexapod robot RHex (bearing the appropriate sensor suite), and evaluate its performance with reference to a visual ground-truth measurement system, comparing as well the relative performance of different fusion approaches implemented via different model sequences.
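At the heart of the estimator above is the Kalman predict/update cycle. A minimal scalar sketch of that cycle is shown below; the paper's filter is a 12-dimensional extended Kalman filter with switched dynamical models, so the hypothetical `kf_step` here only illustrates the underlying recursion, not the authors' implementation.

```python
def kf_step(x, p, u, q, z, r):
    """One predict/update cycle of a scalar Kalman filter.
    x, p : prior state estimate and its variance
    u, q : model-predicted state change and process-noise variance
    z, r : measurement and measurement-noise variance
    Returns the posterior state estimate and variance."""
    # Predict: propagate the state with the motion model.
    x_pred = x + u
    p_pred = p + q
    # Update: blend in the measurement according to relative certainty.
    k = p_pred / (p_pred + r)          # Kalman gain in [0, 1]
    x_post = x_pred + k * (z - x_pred)
    p_post = (1.0 - k) * p_pred
    return x_post, p_post
```

Switching models, as the paper does across ground-contact and aerial phases, corresponds to changing the prediction step (`u`, `q`) per phase while the update structure stays the same.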