12 research outputs found

    On Using High-Definition Body Worn Cameras for Face Recognition from a Distance

    Recognition of human faces from a distance is highly desirable for law enforcement. This paper evaluates the use of low-cost, high-definition (HD) body-worn video cameras for face recognition from a distance, presenting a comparison of HD versus standard-definition (SD) video. HD and SD videos of 20 subjects were acquired under different conditions and at varying distances. The evaluation uses three benchmark algorithms: Eigenfaces, Fisherfaces and wavelet transforms. The study indicates that when gallery and probe images consist of faces captured from a distance, HD video results in better recognition accuracy than SD video; this scenario resembles the real-life conditions of video surveillance and law-enforcement activities. At close range, however, face data obtained from SD video result in similar, if not better, recognition accuracy than HD face data at the same range.
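Of the three benchmark algorithms the abstract names, Eigenfaces is the simplest to illustrate. The following is a minimal sketch (not the paper's implementation), assuming faces are already cropped, grayscale, and flattened into equal-length numpy vectors; the function names and the nearest-neighbor matching rule are illustrative assumptions.

```python
import numpy as np

def train_eigenfaces(gallery, n_components=5):
    """Compute the mean face and top principal components (eigenfaces)
    from a (n_images, n_pixels) gallery matrix."""
    mean = gallery.mean(axis=0)
    centered = gallery - mean
    # SVD of the centered data; rows of vt are the eigenfaces
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def project(faces, mean, eigenfaces):
    """Project face vectors into the eigenface subspace."""
    return (faces - mean) @ eigenfaces.T

def recognize(probe, gallery, labels, n_components=5):
    """Return the label of the gallery face nearest to the probe
    in eigenface space (simple nearest-neighbor matching)."""
    mean, eigenfaces = train_eigenfaces(gallery, n_components)
    g = project(gallery, mean, eigenfaces)
    p = project(probe[None, :], mean, eigenfaces)
    dists = np.linalg.norm(g - p, axis=1)
    return labels[int(np.argmin(dists))]
```

The HD-vs-SD comparison in the paper would then amount to running such a pipeline twice, once per video source, with gallery and probe faces extracted at matched distances.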

    Selection of Wavelet Subbands Using Genetic Algorithm for Face Recognition

    In this paper, a novel representation called the subband face is proposed for face recognition. The subband face is generated from selected subbands obtained using wavelet decomposition of the original face image. It is surmised that certain subbands contain information that is more significant for discriminating faces than other subbands. The problem of subband selection is cast as a combinatorial optimization problem, and a genetic algorithm (GA) is used to find the optimum subband combination by maximizing the Fisher ratio of the training features. The performance of the GA-selected subband face is evaluated using three face databases and compared with other wavelet-based representations.
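The core of the method described above is a GA search over binary subband masks scored by the Fisher ratio. The sketch below assumes the wavelet decomposition has already been done (e.g., with PyWavelets) and that each subband's features are precomputed as a `(n_samples, d_i)` array; the function names, GA parameters, and selection/crossover scheme are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def fisher_ratio(features, labels):
    """Between-class over within-class scatter, summed over feature dims."""
    classes = np.unique(labels)
    overall_mean = features.mean(axis=0)
    sb = sw = 0.0
    for c in classes:
        x = features[labels == c]
        m = x.mean(axis=0)
        sb += len(x) * np.sum((m - overall_mean) ** 2)
        sw += np.sum((x - m) ** 2)
    return sb / (sw + 1e-12)

def ga_select_subbands(subband_feats, labels, pop=20, gens=30, seed=0):
    """Genetic search over binary subband masks, maximizing the Fisher
    ratio of the concatenated selected-subband features."""
    rng = np.random.default_rng(seed)
    n = len(subband_feats)

    def fitness(mask):
        if not mask.any():
            return -np.inf
        feats = np.concatenate(
            [f for f, m in zip(subband_feats, mask) if m], axis=1)
        return fisher_ratio(feats, labels)

    population = rng.integers(0, 2, size=(pop, n)).astype(bool)
    for _ in range(gens):
        scores = np.array([fitness(m) for m in population])
        order = np.argsort(scores)[::-1]
        parents = population[order[: pop // 2]]   # keep the fittest half
        children = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)              # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n) < 0.1            # bit-flip mutation
            children.append(child ^ flip)
        population = np.vstack([parents, children])
    scores = np.array([fitness(m) for m in population])
    return population[int(np.argmax(scores))]
```

Because a noisy subband inflates within-class scatter without adding between-class separation, the fitness function naturally penalizes masks that include it.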

    Multi-view based estimation of human upper-body orientation

    Knowledge of the body orientation of humans can improve the speed and performance of many service components of a smart room. Since many such components run in parallel, an estimator for acquiring this knowledge needs very low computational complexity. In this paper we address these two points with a fast and efficient algorithm using the smart room's multiple camera outputs. The estimation is based on silhouette information only and is performed for each camera view separately; the single-view results are fused within a Bayesian filter framework. We evaluate our system on a subset of videos from the CLEAR 2007 dataset [1] and achieve an average correct classification rate of 87.8%, while the estimation itself takes just 12 ms when four cameras are used.
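The fusion step described above can be sketched as one Bayesian filter update over discrete orientation classes. This is a minimal illustration under assumed conventions (the number of classes, the conditional independence of views, and the transition model are all assumptions for the sketch, not details from the paper).

```python
import numpy as np

def fuse_and_filter(view_likelihoods, prior, transition):
    """One Bayesian filter step over discrete orientation classes.

    view_likelihoods: (n_views, n_classes) per-camera class likelihoods.
    prior:            (n_classes,) belief from the previous frame.
    transition:       (n_classes, n_classes) orientation transition model.
    """
    # predict: propagate last frame's belief through the motion model
    predicted = transition.T @ prior
    # update: treat the camera views as conditionally independent
    # and multiply their likelihoods
    joint = np.prod(view_likelihoods, axis=0)
    posterior = predicted * joint
    return posterior / posterior.sum()
```

Even when each single-view estimate only mildly favors the correct orientation, multiplying the per-view likelihoods sharpens the fused posterior, which is the benefit of combining multiple camera views.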

    Image Processing Applications with a PCNN


    Towards high-level human activity recognition through computer vision and temporal logic

    Most approaches to the visual perception of humans do not include high-level activity recognition. This paper presents a system that fuses and interprets the outputs of several computer vision components, as well as speech recognition, to obtain a high-level understanding of the perceived scene. Our laboratory for investigating new ways of human-machine interaction and teamwork support is equipped with an assemblage of cameras, some close-talking microphones, and a videowall as the main interaction device. Here, we develop state-of-the-art real-time computer vision systems to track and identify users, and to estimate their visual focus of attention and gesture activity. We also monitor the users' speech activity in real time. This paper explains our approach to high-level activity recognition based on these perceptual components and a temporal logic engine.
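The idea of layering temporal logic over perceptual components can be illustrated with a toy example: frame-level observations become predicates, and temporal operators combine them into activity definitions. The predicates, observation format, and operators below are illustrative assumptions, not the engine the paper describes.

```python
# Toy temporal-logic combinators over a timeline of perceptual observations
# (one dict of boolean predicates per frame).

def eventually(pred, timeline):
    """True if pred holds at some time step."""
    return any(pred(obs) for obs in timeline)

def always(pred, timeline):
    """True if pred holds at every time step."""
    return all(pred(obs) for obs in timeline)

def followed_by(pred_a, pred_b, timeline):
    """True if pred_a holds at some step and pred_b at a later step."""
    for i, obs in enumerate(timeline):
        if pred_a(obs):
            return eventually(pred_b, timeline[i + 1:])
    return False

# Hypothetical activity rule: a "presentation" starts when a user is
# first seen at the videowall and later begins speaking.
def is_presentation(timeline):
    at_wall = lambda o: o["at_videowall"]
    speaks = lambda o: o["speaking"]
    return followed_by(at_wall, speaks, timeline)
```

A rule like `is_presentation` shows how per-frame outputs of trackers and speech-activity detectors can be lifted into a higher-level activity label without retraining any of the underlying components.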