
    Video-based driver identification using local appearance face recognition

    In this paper, we present a person identification system for vehicular environments. The proposed system uses face images of the driver and performs local appearance-based face recognition over the video sequence. To perform local appearance-based face recognition, the input face image is decomposed into non-overlapping blocks, and a discrete cosine transform is applied to each local block to extract local features. The extracted local features are then combined to construct the overall feature vector. This process is repeated for each video frame. At the training stage, the distribution of the feature vectors over the video is modelled using a Gaussian distribution function. During testing, the feature vector extracted from each frame is compared to each person's distribution, and individual likelihood scores are generated. Finally, the person is identified as the one with the maximum joint-likelihood score. To assess the performance of the developed system, extensive experiments are conducted on different identification scenarios, such as closed-set identification, open-set identification and verification. For the experiments, a subset of the CIAIR-HCC database, an in-vehicle data corpus collected at Nagoya University, Japan, is used. We show that, despite the varying environment and illumination conditions that commonly exist in vehicular environments, it is possible to identify individuals robustly from their face images.
    Index Terms — Local appearance face recognition, vehicle environment, discrete cosine transform, fusion.
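    As a rough illustration of the pipeline described above, the sketch below implements block-wise DCT feature extraction, per-person Gaussian modelling and joint-likelihood identification in Python. The 8x8 block size, the retained low-frequency corner and the diagonal-covariance Gaussian are illustrative assumptions, not details taken from the paper.

        import numpy as np
        from scipy.fftpack import dct

        def block_dct_features(face, block=8, keep=3):
            """Split a grayscale face image into non-overlapping blocks,
            apply a 2-D DCT to each block, keep the low-frequency corner
            and concatenate everything into one feature vector."""
            h, w = face.shape
            feats = []
            for y in range(0, h - h % block, block):
                for x in range(0, w - w % block, block):
                    patch = face[y:y + block, x:x + block].astype(float)
                    c = dct(dct(patch, axis=0, norm='ortho'), axis=1, norm='ortho')
                    feats.append(c[:keep, :keep].ravel())  # low frequencies only
            return np.concatenate(feats)

        def fit_person_model(training_vectors):
            """Model one person's feature vectors with a single Gaussian
            (diagonal covariance, for simplicity)."""
            X = np.stack(training_vectors)
            return X.mean(axis=0), X.var(axis=0) + 1e-6

        def log_likelihood(x, model):
            mean, var = model
            return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

        def identify(video_frames, models):
            """Sum per-frame log-likelihoods over the video sequence and
            return the identity with the maximum joint-likelihood score."""
            scores = {pid: 0.0 for pid in models}
            for frame in video_frames:
                x = block_dct_features(frame)
                for pid, model in models.items():
                    scores[pid] += log_likelihood(x, model)
            return max(scores, key=scores.get)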

    The effect of time on gait recognition performance

    Many studies have shown that it is possible to recognize people by the way they walk. However, a number of covariate factors affect recognition performance, and the time elapsed between capturing the gallery and the probe has been reported to affect recognition the most. To date, no study has shown the isolated effect of time, irrespective of other covariates. Here we present the first principled study that examines the effect of elapsed time on gait recognition. Using empirical evidence, we show for the first time that elapsed time does not affect recognition significantly in the short to medium term. By controlling the clothing worn by the subjects and the environment, a Correct Classification Rate (CCR) of 95% has been achieved over 9 months on a dataset of 2280 gait samples. Our results show that gait can be used as a reliable biometric over time and at a distance. We have created a new multimodal temporal database to enable the research community to investigate various gait and face covariates. We have also investigated the effect of different types of clothing, variations in speed, and footwear on recognition performance. We have demonstrated that clothing drastically affects performance regardless of elapsed time, and significantly more than any of the other covariates considered here. The research therefore suggests a move towards developing appearance-invariant recognition algorithms.
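    For reference, the Correct Classification Rate quoted above is simply the fraction of probe samples assigned to the correct gallery identity; a minimal Python illustration with hypothetical label arrays follows.

        import numpy as np

        def correct_classification_rate(predicted_ids, true_ids):
            """CCR: the fraction of probe samples matched to the correct
            gallery identity."""
            predicted_ids = np.asarray(predicted_ids)
            true_ids = np.asarray(true_ids)
            return float(np.mean(predicted_ids == true_ids))

        # Hypothetical check: 2166 correct matches out of 2280 probes
        # corresponds to the ~95% CCR reported above.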

    Finger texture verification systems based on multiple spectrum lighting sensors with four fusion levels

    Finger Texture (FT) is one of the most recent attractive biometric characteristics. It refers to the area of finger skin between the fingerprint and the palm print (just after including the lower knuckle). Different FT specifications can be obtained by employing images captured under multiple light spectra. In this paper, personal verification systems are established using multiple-spectrum FT specifications. The key idea is that by combining two different spectral lightings of FTs, high personal recognition performance can be attained. Four types of fusion are listed and explained: Sensor Level Fusion (SLF), Feature Level Fusion (FLF), Score Level Fusion (ScLF) and Decision Level Fusion (DLF). Each fusion method is employed, examined under different rules and analysed, and the best-performing procedure is then benchmarked. FT images were collected from the Multiple Spectrum CASIA (MSCASIA) database. Two types of spectral lighting were exploited (a wavelength of 460 nm, representing Blue (BLU) light, and White (WHT) light). Supporting comparisons, including with the state of the art, were performed. The best recognition performance was recorded for the FLF concatenation rule, which improved the Equal Error Rate (EER) from 5% for BLU and 7% for WHT to 2%.
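    The sketch below illustrates the FLF concatenation rule and a simple Equal Error Rate estimate in Python. It assumes precomputed FT feature vectors and similarity scores; the feature extraction and matching details are assumptions, not the paper's exact procedure.

        import numpy as np

        def feature_level_fusion(feat_blu, feat_wht):
            """FLF by concatenation: join the FT feature vectors obtained
            under the blue (460 nm) and white spectra."""
            return np.concatenate([feat_blu, feat_wht])

        def equal_error_rate(genuine, impostor):
            """Sweep a decision threshold over genuine and impostor
            similarity scores and return the rate at which the False
            Rejection Rate and False Acceptance Rate meet."""
            genuine = np.asarray(genuine, dtype=float)
            impostor = np.asarray(impostor, dtype=float)
            thresholds = np.sort(np.concatenate([genuine, impostor]))
            best_gap, eer = np.inf, 1.0
            for t in thresholds:
                frr = np.mean(genuine < t)    # genuine comparisons rejected
                far = np.mean(impostor >= t)  # impostor comparisons accepted
                if abs(frr - far) < best_gap:
                    best_gap, eer = abs(frr - far), (frr + far) / 2
            return eer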