
    On gait as a biometric: progress and prospects

    There is increasing interest in automatic recognition by gait, given its unique capability to recognize people at a distance when other biometrics are obscured. Application domains are those of any noninvasive biometric, but with particular advantage in surveillance scenarios. Its recognition capability is supported by studies in other domains such as medicine (biomechanics), mathematics and psychology, which also suggest that gait is unique. Further, examples of recognition by gait can be found in the literature, with an early reference by Shakespeare to recognition by the way people walk. Many of the current approaches confirm the early results that suggested gait could be used for identification, now on much larger databases. This has been especially influenced by DARPA's Human ID at a Distance research program, with its wide range of data and approaches. Gait has benefited from developments in other biometrics and has led to new insight, particularly with regard to covariates. Equally, gait-recognition approaches concern the extraction and description of moving articulated shapes, and this has wider implications than biometrics alone.

    Graphical model based facial feature point tracking in a vehicle environment

    Facial feature point tracking is a research area with applications in human-computer interaction (HCI), facial expression analysis, fatigue detection, etc. In this paper, a statistical method for facial feature point tracking is proposed. Feature point tracking is a challenging topic in the case of uncertain data caused by noise and/or occlusions. With this motivation, a graphical model is built that incorporates not only temporal information about feature point movements, but also information about the spatial relationships between such points. Based on this model, an algorithm that achieves feature point tracking through a video observation sequence is implemented. The proposed method is applied on 2D gray scale real video sequences taken in a vehicle environment and the superiority of this approach over existing techniques is demonstrated.
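    The abstract's graphical model couples temporal dynamics with spatial relationships between points. As a generic illustration of the temporal component alone (not the paper's actual model), a constant-velocity Kalman filter can track a single 2D feature point through missing, i.e. occluded, observations; the noise levels `q` and `r` and the motion model are illustrative assumptions.

    ```python
    import numpy as np

    def track_point(observations, q=1e-2, r=1.0):
        """Constant-velocity Kalman filter for one 2D feature point.

        observations: list of (x, y) tuples, or None where the point is
        occluded / unobserved. Returns a filtered (x, y) per frame.
        Noise levels q (process) and r (measurement) are assumed values.
        """
        # State: [x, y, vx, vy] with a constant-velocity transition model.
        F = np.array([[1., 0., 1., 0.],
                      [0., 1., 0., 1.],
                      [0., 0., 1., 0.],
                      [0., 0., 0., 1.]])
        H = np.array([[1., 0., 0., 0.],
                      [0., 1., 0., 0.]])
        Q = q * np.eye(4)
        R = r * np.eye(2)

        first = next(o for o in observations if o is not None)
        x = np.array([first[0], first[1], 0.0, 0.0])
        P = np.eye(4)
        out = []
        for z in observations:
            # Predict with the temporal (motion) model.
            x = F @ x
            P = F @ P @ F.T + Q
            if z is not None:
                # Correct only when the point is actually observed;
                # during occlusion the prediction carries the track.
                y = np.asarray(z, dtype=float) - H @ x
                S = H @ P @ H.T + R
                K = P @ H.T @ np.linalg.inv(S)
                x = x + K @ y
                P = (np.eye(4) - K @ H) @ P
            out.append((x[0], x[1]))
        return out
    ```

    A full graphical model of the kind the paper describes would additionally constrain each point by the estimated positions of its neighbours; this sketch shows only the per-point temporal filtering.
    
    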

    A robust and efficient video representation for action recognition

    This paper introduces a state-of-the-art video representation and applies it to efficient action recognition and detection. We first propose to improve the popular dense trajectory features by explicit camera motion estimation. More specifically, we extract feature point matches between frames using SURF descriptors and dense optical flow. The matches are used to estimate a homography with RANSAC. To improve the robustness of homography estimation, a human detector is employed to remove outlier matches from the human body, as human motion is not constrained by the camera. Trajectories consistent with the homography are considered to be due to camera motion, and thus removed. We also use the homography to cancel out camera motion from the optical flow. This results in significant improvement on motion-based HOF and MBH descriptors. We further explore the recent Fisher vector as an alternative feature encoding approach to the standard bag-of-words histogram, and consider different ways to include spatial layout information in these encodings. We present a large and varied set of evaluations, considering (i) classification of short basic actions on six datasets, (ii) localization of such actions in feature-length movies, and (iii) large-scale recognition of complex events. We find that our improved trajectory features significantly outperform previous dense trajectories, and that Fisher vectors are superior to bag-of-words encodings for video recognition tasks. In all three tasks, we show substantial improvements over the state-of-the-art results.
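    The camera-motion step described above (point matches → RANSAC homography → discard consistent trajectories) can be sketched in outline. The following numpy-only sketch fits a homography with the direct linear transform inside a minimal RANSAC loop; the threshold, iteration count, and synthetic setup are assumptions, and a real pipeline would typically call an optimized library routine (e.g. OpenCV's `findHomography`) instead.

    ```python
    import numpy as np

    def fit_homography(src, dst):
        """Direct linear transform: H mapping src -> dst (Nx2 arrays, N >= 4)."""
        A = []
        for (x, y), (u, v) in zip(src, dst):
            A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        # The homography is the null vector of A, i.e. the last right
        # singular vector; normalize so H[2, 2] == 1.
        _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
        H = Vt[-1].reshape(3, 3)
        return H / H[2, 2]

    def project(H, pts):
        """Apply homography H to Nx2 points (homogeneous divide included)."""
        p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
        return p[:, :2] / p[:, 2:3]

    def ransac_homography(src, dst, iters=500, thresh=3.0, seed=0):
        """Estimate H from noisy matches; matches far from the model are
        outliers (in the paper's setting, e.g. points on moving people)."""
        rng = np.random.default_rng(seed)
        best_inliers = np.zeros(len(src), dtype=bool)
        for _ in range(iters):
            idx = rng.choice(len(src), 4, replace=False)
            H = fit_homography(src[idx], dst[idx])
            err = np.linalg.norm(project(H, src) - dst, axis=1)
            inliers = err < thresh
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        # Refit on all inliers for the final estimate.
        return fit_homography(src[best_inliers], dst[best_inliers]), best_inliers
    ```

    In the paper's pipeline the inlier/outlier split plays a double role: outlier matches flag independently moving objects (helped further by the human detector), while the fitted homography is used to warp one frame onto the next and subtract camera motion from the optical flow.
    
    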

    Automatic object classification for surveillance videos.

    The recent popularity of surveillance video systems, especially in urban scenarios, demands the development of visual techniques for monitoring purposes. A primary step towards intelligent surveillance video systems consists of automatic object classification, which still remains an open research problem and the keystone for the development of more specific applications. Typically, object representation is based on inherent visual features. However, psychological studies have demonstrated that human beings can routinely categorise objects according to their behaviour. The gap in understanding between the features automatically extracted by a computer, such as appearance-based features, and the concepts perceived by human beings but unattainable for machines, i.e. behaviour features, is most commonly known as the semantic gap. Consequently, this thesis proposes to narrow the semantic gap and bring together machine and human understanding towards object classification. Thus, a Surveillance Media Management framework is proposed to automatically detect and classify objects by analysing the physical properties inherent in their appearance (machine understanding) and the behaviour patterns which require a higher level of understanding (human understanding). Finally, a probabilistic multimodal fusion algorithm bridges the gap, performing an automatic classification that considers both machine and human understanding. The performance of the proposed Surveillance Media Management framework has been thoroughly evaluated on outdoor surveillance datasets. The experiments conducted demonstrated that the combination of machine and human understanding substantially enhanced the object classification performance. Finally, the inclusion of human reasoning and understanding provides the essential information to bridge the semantic gap towards smart surveillance video systems.
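    The abstract does not specify the probabilistic multimodal fusion rule. As a hedged sketch of one common choice, not the thesis's method, the per-class posteriors from the appearance ("machine understanding") and behaviour ("human understanding") streams can be combined by a weighted log-linear (product-of-experts) rule; the class names and the weight `w` below are illustrative assumptions.

    ```python
    import numpy as np

    def fuse_posteriors(p_appearance, p_behaviour, w=0.5):
        """Weighted log-linear fusion of two class-posterior vectors.

        p_appearance, p_behaviour: per-class probabilities from the two
        streams. The rule and weight w are illustrative assumptions.
        """
        # Combine in log space; the epsilon guards against log(0).
        log_p = (w * np.log(p_appearance + 1e-12)
                 + (1 - w) * np.log(p_behaviour + 1e-12))
        p = np.exp(log_p - log_p.max())  # subtract max for numerical stability
        return p / p.sum()               # renormalise to a distribution

    # Hypothetical example: appearance strongly favours "car", behaviour
    # mildly favours "person"; the fused posterior weighs both streams.
    classes = ["person", "car", "cyclist"]
    fused = fuse_posteriors(np.array([0.1, 0.8, 0.1]),
                            np.array([0.5, 0.3, 0.2]))
    ```

    With equal weights this is the geometric mean of the two posteriors, so a class must score reasonably under both appearance and behaviour to dominate, which matches the thesis's goal of letting the two kinds of understanding corroborate each other.
    
    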