Face Recognition in compressed domain based on wavelet transform and kd-tree matching
This paper presents a novel approach to face recognition in the compressed domain. A major advantage of the proposed approach is that the face recognition system works directly on JPEG and JPEG2000 compressed images, i.e. it takes the entropy points provided by the compression standards as input without fully decompressing the image before recognition. A kd-tree is used to match the images, which reduces the computational time of the overall approach. The proposed method significantly improves recognition rates while greatly reducing computational time and storage requirements.
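The kd-tree matching step above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the gallery features, dimensionality, and the noisy probe are all made-up placeholders, and SciPy's `cKDTree` stands in for whatever kd-tree variant the authors used.

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical setup: each enrolled face is represented by a feature
# vector (here random 32-D vectors stand in for compressed-domain features).
rng = np.random.default_rng(0)
gallery = rng.normal(size=(100, 32))                   # 100 enrolled faces
probe = gallery[17] + rng.normal(scale=0.01, size=32)  # noisy probe of face 17

# Build the kd-tree once over the gallery, then answer each probe
# query in roughly O(log n) time instead of a linear scan.
tree = cKDTree(gallery)
dist, idx = tree.query(probe, k=1)
print(idx)  # -> 17, the index of the closest gallery face
```

The tree is built once at enrollment time, so the per-probe cost is dominated by the logarithmic query, which is where the reported speed-up would come from.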
A Robust Face Recognition Algorithm for Real-World Applications
The proposed face recognition algorithm represents local facial regions with the discrete cosine transform (DCT). The local representation provides robustness against appearance variations in local regions caused by partial occlusion or facial expression, while the frequency information provides robustness against changes in illumination. The algorithm also bypasses the facial feature localization step by formulating face alignment as an optimization problem in the classification stage.
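A block-wise DCT representation of local regions can be sketched as below. The block size, the number of retained coefficients, and the top-left coefficient selection are illustrative assumptions, not the paper's exact parameters (zig-zag ordering is more common in practice).

```python
import numpy as np
from scipy.fft import dctn

def local_dct_features(img, block=8, keep=5):
    """Divide an image into non-overlapping blocks, take the 2-D DCT of
    each block, and keep the first few low-frequency coefficients as a
    local descriptor (assumed parameters, for illustration only)."""
    h, w = img.shape
    feats = []
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            coeffs = dctn(img[y:y + block, x:x + block], norm='ortho')
            feats.append(coeffs.flatten()[:keep])
    return np.concatenate(feats)

face = np.random.default_rng(1).random((64, 64))
print(local_dct_features(face).shape)  # (320,) = 64 blocks x 5 coefficients
```

Keeping only low-frequency coefficients per block is what gives such descriptors their partial insensitivity to illumination, since slow lighting gradients concentrate in very few coefficients.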
Discriminative Appearance Models for Face Alignment
The proposed face alignment algorithm uses local gradient features, obtained by pixel value comparisons, as the appearance representation. These comparisons provide robustness against changes in illumination, and the locality of the features provides robustness against partial occlusion and local deformation. The features are modeled with three discriminative methods, each corresponding to a different alignment cost function. This discriminative appearance modeling alleviates the generalization problem to some extent.
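A feature built from pixel value comparisons can be sketched as below. This is an LBP-style code used as a stand-in for the paper's exact feature (the neighbourhood and bit ordering are assumptions); only the comparison signs are kept, which is why a monotone brightness change leaves the code unchanged.

```python
import numpy as np

def comparison_code(patch):
    """Compare each interior pixel with its 8 neighbours and pack the
    comparison signs into an 8-bit code per pixel (illustrative sketch)."""
    c = patch[1:-1, 1:-1]  # centre pixels
    code = np.zeros_like(c, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = patch[1 + dy:patch.shape[0] - 1 + dy,
                      1 + dx:patch.shape[1] - 1 + dx]
        code |= (neigh >= c).astype(np.uint8) << bit
    return code

patch = np.arange(25, dtype=float).reshape(5, 5)
codes = comparison_code(patch)
print(codes.shape)  # (3, 3): one code per interior pixel
```

Because only comparisons enter the code, `comparison_code(2 * patch + 10)` yields the same result as `comparison_code(patch)`, illustrating the claimed illumination robustness.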
Contextual Person Identification in Multimedia Data
We propose methods to improve automatic person identification, regardless of face visibility, by integrating multiple cues, including multiple modalities and contextual information. We propose a joint learning approach that uses contextual information from videos to improve learned face models, and we further integrate additional modalities in a global fusion framework. We evaluate our approaches on a novel TV series data set consisting of over 100,000 annotated faces.
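One common way to combine modalities in such a fusion framework is weighted score-level fusion, sketched below. The modalities, scores, and weights are invented for illustration; in the described framework the combination would be learned, not hand-set.

```python
import numpy as np

def fuse(scores, weights):
    """Weighted-sum score fusion: scores maps each modality to an
    array of per-identity scores; returns the winning identity index."""
    fused = sum(w * np.asarray(scores[m]) for m, w in weights.items())
    return int(np.argmax(fused))

# Hypothetical per-identity scores from three cues for three identities.
scores = {
    'face':    [0.2, 0.7, 0.1],   # face model favours identity 1
    'clothes': [0.6, 0.3, 0.1],   # clothing cue favours identity 0
    'context': [0.1, 0.8, 0.1],   # scene context also favours identity 1
}
weights = {'face': 0.5, 'clothes': 0.2, 'context': 0.3}
print(fuse(scores, weights))  # -> 1
```

Even when the face is occluded (i.e. the face scores are uninformative), the remaining cues can still carry the decision, which is the point of fusing modalities.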