On gait as a biometric: progress and prospects
There is increasing interest in automatic recognition by gait, given its unique capability to recognize people at a distance when other biometrics are obscured. Application domains are those of any noninvasive biometric, but with particular advantage in surveillance scenarios. Its recognition capability is supported by studies in other domains such as medicine (biomechanics), mathematics and psychology, which also suggest that gait is unique. Further, examples of recognition by gait can be found in the literature, with an early reference by Shakespeare concerning recognition by the way people walk. Many of the current approaches confirm the early results that suggested gait could be used for identification, and now do so on much larger databases. This has been especially influenced by DARPA's Human ID at a Distance research program, with its wide range of data and approaches. Gait has benefited from developments in other biometrics and has led to new insight, particularly with regard to covariates. Equally, gait-recognition approaches concern the extraction and description of moving articulated shapes, and this has wider implications than biometrics alone.
The way we walk
Mark Nixon and John Carter reveal how developments in biometrics could mean the increasing use of biometric evidence, such as ear shape and gait, to identify defendants.
'Attack Of The Drones': Exploration Of The Sound Power Levels Emitted And The Impact Drones Could Have Upon Rural Areas
This study considers the acoustic emission from a DJI Phantom 4 commercial drone using different rotor blades. Measurements were taken from a hovering drone with four commercial product blade configurations, in accordance with BS EN ISO 3745:2009, 'Acoustics – Determination of sound power levels and sound energy levels of noise sources using sound pressure – Precision methods for anechoic rooms and hemi-anechoic rooms'. The aim of the project was to consider the sound characteristics emitted, specifically tonality, and to determine the distance from which a drone could be heard, with the different blade configurations, in a rural setting. By considering the different blade configurations within a rural setting, the role drones have within society is considered.
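As a rough illustration of the sound power determination the standard prescribes, the sketch below estimates a sound power level from sound pressure levels measured at microphone positions on a hemispherical surface, using the energy average of the readings and the area of the measurement hemisphere. The function names, microphone readings and radius are illustrative assumptions, and the environmental and background-noise corrections required by BS EN ISO 3745 are omitted.

```python
import math

def surface_average_spl(spl_db):
    """Energy-average sound pressure level over N microphone positions (dB)."""
    n = len(spl_db)
    return 10.0 * math.log10(sum(10.0 ** (lp / 10.0) for lp in spl_db) / n)

def sound_power_level(spl_db, radius_m):
    """Estimate sound power level (dB re 1 pW) from SPLs measured on a
    hemispherical surface of the given radius in a hemi-anechoic room.
    Corrections (K1, K2) from the standard are deliberately left out."""
    s = 2.0 * math.pi * radius_m ** 2   # area of measurement hemisphere (m^2)
    s0 = 1.0                            # reference area (m^2)
    return surface_average_spl(spl_db) + 10.0 * math.log10(s / s0)

# Illustrative SPL readings (dB) at several microphone positions, 1 m radius.
readings = [62.1, 63.4, 61.8, 64.0, 62.7]
print(f"Estimated L_W = {sound_power_level(readings, radius_m=1.0):.1f} dB")
```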
Towards automated visual surveillance using gait for identity recognition and tracking across multiple non-intersecting cameras
Despite the fact that personal privacy has become a major concern, surveillance technology is now becoming ubiquitous in modern society. This is mainly due to the increasing number of crimes as well as the need to provide a secure and safer environment. Recent research studies have now confirmed the possibility of recognizing people by the way they walk, i.e. gait. The aim of this research study is to investigate the use of gait for people detection as well as identification across different cameras. We present a new approach for people tracking and identification between different non-intersecting, uncalibrated stationary cameras based on gait analysis. A vision-based markerless extraction method is deployed for the derivation of gait kinematics as well as anthropometric measurements in order to produce a gait signature. The novelty of our approach is motivated by the recent research in biometrics and forensic analysis using gait. The experimental results affirmed the robustness of our approach to successfully detect walking people, as well as its capacity to extract gait features from different camera viewpoints, achieving an identity recognition rate of 73.6% over 2,270 processed video sequences. Furthermore, experimental results confirmed the potential of the proposed method for identity tracking in real surveillance systems to recognize walking individuals across different views, with an average recognition rate of 92.5% for cross-camera matching across two different non-overlapping views.
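The abstract does not give the exact feature set or matcher, so the following is only a minimal sketch of how a gait signature combining kinematic and anthropometric measurements might be matched across two non-overlapping cameras; the function names and the nearest-neighbour matcher are assumptions for illustration.

```python
import numpy as np

def gait_signature(joint_angles, anthropometrics, n_harmonics=5):
    """Illustrative gait signature: magnitudes of the lower-order Fourier
    components of each joint-angle sequence, concatenated with static
    anthropometric measurements (e.g. height, limb lengths)."""
    feats = []
    for series in joint_angles:                        # one series per joint
        spectrum = np.fft.rfft(np.asarray(series) - np.mean(series))
        feats.extend(np.abs(spectrum[1:n_harmonics + 1]))
    feats.extend(anthropometrics)
    return np.asarray(feats, dtype=float)

def match_across_cameras(probe, gallery):
    """Nearest-neighbour matching of a probe signature from camera A against
    a gallery of (label, signature) pairs from a non-overlapping camera B."""
    labels, sigs = zip(*gallery)
    sigs = np.vstack(sigs)
    # z-score normalisation so kinematic and anthropometric features are comparable
    mu, sigma = sigs.mean(axis=0), sigs.std(axis=0) + 1e-9
    d = np.linalg.norm((sigs - mu) / sigma - (probe - mu) / sigma, axis=1)
    return labels[int(np.argmin(d))]
```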
Gait Recognition By Walking and Running: A Model-Based Approach
Gait is an emerging biometric for which some techniques, mainly holistic, have been developed to recognise people by their walking patterns. However, the possibility of recognising people by the way they run remains largely unexplored. The new analytical model presented in this paper is based on the biomechanics of walking and running, and will serve as the foundation of an automatic person recognition system that is invariant to these distinct gaits. A bilateral and dynamically coupled oscillator is the key concept underlying this work. Analysis shows that this new model can be used to automatically describe walking and running subjects without parameter selection. Temporal template matching that takes into account the whole sequence of a gait cycle is applied to extract the angles of thigh and lower-leg rotation. The phase-weighted magnitudes of the lower-order Fourier components of these rotations form the gait signature. Classification of walking and running subjects is performed using the k-nearest-neighbour classifier. Recognition rates are similar to those achieved by other techniques with a similarly sized database. Future work will investigate feature set selection to improve the recognition rate and will determine the inter- and intra-class invariance attributes of both walking and running.
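A minimal sketch of the signature and classifier described here, assuming the joint rotation angles have already been extracted for one gait cycle: the phase-weighted magnitudes of the lower-order Fourier components form the feature vector, and classification uses k-nearest-neighbour. The particular phase weighting used below (magnitude multiplied by phase) and the function names are illustrative assumptions.

```python
import numpy as np

def phase_weighted_signature(angle_series, n_components=5):
    """Gait signature sketch: phase-weighted magnitudes of the lower-order
    Fourier components of a joint rotation sequence over one gait cycle.
    Each component contributes |F_k| * phase(F_k) as a simple stand-in for
    the paper's phase-weighted magnitude description."""
    spectrum = np.fft.rfft(np.asarray(angle_series) - np.mean(angle_series))
    comps = spectrum[1:n_components + 1]
    return np.abs(comps) * np.angle(comps)

def knn_classify(probe, gallery, k=1):
    """k-nearest-neighbour classification of a probe signature against a
    gallery of (label, signature) pairs using Euclidean distance."""
    dists = sorted((np.linalg.norm(probe - sig), label) for label, sig in gallery)
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)
```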
Accurate object reconstruction by statistical moments
Statistical moments can offer a powerful means for object description in object sequences. Moments used in this way provide a description of the changing shape of the object with time. Using these descriptions to predict temporal views of the object requires efficient and accurate reconstruction of the object from a limited set of moments, but accurate reconstruction from moments has as yet received only limited attention. We show how we can improve accuracy not only by consideration of formulation, but also by a new adaptive thresholding technique that removes one parameter needed in reconstruction. Both approaches are equally applicable for Legendre and other orthogonal moments to improve accuracy in reconstruction
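For concreteness, a small sketch of reconstruction from 2-D Legendre moments under the standard discrete approximation is given below; it does not reproduce the paper's refined formulation or its adaptive thresholding, and the function names are illustrative.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def legendre_moments(img, order):
    """Discrete approximation of 2-D Legendre moments up to the given order.
    Pixel coordinates are mapped onto [-1, 1] in both directions."""
    n, m = img.shape
    x, y = np.linspace(-1, 1, m), np.linspace(-1, 1, n)
    Px = np.array([Legendre.basis(p)(x) for p in range(order + 1)])
    Py = np.array([Legendre.basis(q)(y) for q in range(order + 1)])
    dx, dy = 2.0 / m, 2.0 / n
    lam = np.zeros((order + 1, order + 1))
    for p in range(order + 1):
        for q in range(order + 1):
            norm = (2 * p + 1) * (2 * q + 1) / 4.0
            lam[p, q] = norm * dx * dy * (Py[q] @ img @ Px[p])
    return lam

def reconstruct(lam, shape):
    """Reconstruct the image from its Legendre moments via the inverse
    series f(x, y) ~ sum_pq lambda_pq P_p(x) P_q(y)."""
    n, m = shape
    order = lam.shape[0] - 1
    x, y = np.linspace(-1, 1, m), np.linspace(-1, 1, n)
    Px = np.array([Legendre.basis(p)(x) for p in range(order + 1)])
    Py = np.array([Legendre.basis(q)(y) for q in range(order + 1)])
    return Py.T @ lam.T @ Px
```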
The image ray transform for structural feature detection
The use of analogies to physical phenomena is an exciting paradigm in computer vision that allows unorthodox approaches to feature extraction, creating new techniques with unique properties. A technique known as the "image ray transform" has been developed based upon an analogy to the propagation of light as rays. The transform analogises an image to a set of glass blocks with refractive index linked to pixel properties and then casts a large number of rays through the image. The course of these rays is accumulated into an output image. The technique can successfully extract tubular and circular features and we show successful circle detection, ear biometrics and retinal vessel extraction. The transform has also been extended through the use of multiple rays arranged as a beam to increase robustness to noise, and we show quantitative results for fully automatic ear recognition, achieving 95.2% rank one recognition across 63 subjects
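A much-simplified sketch of the light-propagation analogy is given below: pixel intensity is mapped to a refractive index, rays are cast from random positions, refracted or reflected at index changes using Snell's law with the local intensity gradient standing in for the surface normal, and the pixels each ray visits are accumulated. The parameter values and the gradient-based normal are assumptions for illustration, not the published transform.

```python
import numpy as np

def image_ray_transform(img, n_rays=1000, n_max=2.0, max_steps=500, seed=None):
    """Simplified image ray transform sketch: accumulate the paths of rays
    refracting through an image treated as glass blocks whose refractive
    index rises with pixel intensity."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    scale = float(img.max()) if img.max() > 0 else 1.0
    idx = 1.0 + (img.astype(float) / scale) * (n_max - 1.0)   # index per pixel
    gy, gx = np.gradient(idx)                                 # for interface normals
    acc = np.zeros_like(idx)

    for _ in range(n_rays):
        pos = rng.uniform([0, 0], [h - 1, w - 1])             # (y, x) start
        ang = rng.uniform(0, 2 * np.pi)
        d = np.array([np.sin(ang), np.cos(ang)])              # unit direction
        for _ in range(max_steps):
            y, x = int(round(pos[0])), int(round(pos[1]))
            if not (0 <= y < h and 0 <= x < w):
                break
            acc[y, x] += 1.0
            nxt = pos + d
            yn, xn = int(round(nxt[0])), int(round(nxt[1]))
            if 0 <= yn < h and 0 <= xn < w and idx[yn, xn] != idx[y, x]:
                n_vec = np.array([gy[y, x], gx[y, x]])
                norm = np.linalg.norm(n_vec)
                if norm > 1e-9:
                    n_vec /= norm
                    if np.dot(d, n_vec) > 0:                  # normal must oppose the ray
                        n_vec = -n_vec
                    r = idx[y, x] / idx[yn, xn]
                    cos_i = -np.dot(d, n_vec)
                    sin2_t = r * r * (1.0 - cos_i * cos_i)
                    if sin2_t > 1.0:                          # total internal reflection
                        d = d - 2.0 * np.dot(d, n_vec) * n_vec
                    else:                                     # Snell refraction
                        d = r * d + (r * cos_i - np.sqrt(1.0 - sin2_t)) * n_vec
                    d /= np.linalg.norm(d)
            pos = pos + d
    return acc
```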
Force field feature extraction for ear biometrics
The overall objective in defining a feature space is to reduce the dimensionality of the original pattern space whilst maintaining discriminatory power for classification. To meet this objective in the context of ear biometrics, a new force field transformation treats the image as an array of mutually attracting particles that act as the source of a Gaussian force field. Underlying the force field there is a scalar potential energy field, which in the case of an ear takes the form of a smooth surface that resembles a small mountain with a number of peaks joined by ridges. The peaks correspond to potential energy wells and, to extend the analogy, the ridges correspond to potential energy channels. Since the transform also turns out to be invertible, and since the surface is otherwise smooth, information theory suggests that much of the information is transferred to these features, thus confirming their efficacy. We previously described how field line feature extraction, using an algorithm similar to gradient descent, exploits the directional properties of the force field to automatically locate these channels and wells, which then form the basis of characteristic ear features. We now show how an analysis of the mechanism of this algorithmic approach leads to a closed analytical description based on the divergence of force direction, which reveals that channels and wells are really manifestations of the same phenomenon. We further show that this new operator, with its own distinct advantages, has a striking similarity to the Marr-Hildreth operator, but with the important difference that it is non-linear. As well as addressing faster implementation, invertibility, and brightness sensitivity, the technique is validated by performing recognition on a database of ears selected from the XM2VTS face database, and by comparing the results with the more established technique of Principal Components Analysis. This confirms not only that ears do indeed appear to have potential as a biometric, but also that the new approach is well suited to their description, being robust especially in the presence of noise, and having the advantage that the ear does not need to be explicitly extracted from the background.
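As an illustrative sketch only, the potential energy surface underlying such a force field can be approximated by convolving the image with a 1/r kernel, and a divergence-of-force-direction style operator can then be derived from its gradient; the function names and the FFT-based convolution route below are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

def potential_energy_field(img):
    """Sketch of the force-field transform's potential energy surface: every
    pixel acts as a mutually attracting particle with 'mass' equal to its
    intensity, so the potential at a point is the intensity-weighted sum of
    1/r contributions from all other pixels. Computed as an FFT-based
    convolution with a 1/r kernel whose origin is zeroed."""
    h, w = img.shape
    yy, xx = np.mgrid[-(h - 1):h, -(w - 1):w]
    r = np.hypot(yy, xx)
    kernel = np.zeros_like(r, dtype=float)
    kernel[r > 0] = 1.0 / r[r > 0]
    return fftconvolve(img.astype(float), kernel, mode='same')

def force_direction_divergence(potential):
    """Divergence of the normalised force direction, approximated here from
    the gradient of the potential energy field; wells and channels appear as
    strongly negative regions of this map."""
    gy, gx = np.gradient(potential)
    mag = np.hypot(gy, gx) + 1e-9
    fy, fx = gy / mag, gx / mag                 # unit force direction field
    return np.gradient(fy, axis=0) + np.gradient(fx, axis=1)
```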
- …