
    A robust illumination-invariant face recognition based on fusion of thermal IR, maximum filter and visible image

    Face recognition poses many challenges, especially in real-life detection, where maintaining consistently accurate recognition is almost impossible. Even well-established state-of-the-art algorithms produce low recognition accuracy under poor lighting. To create a more robust, illumination-invariant face recognition system, this paper proposes an algorithm using a triple-fusion approach. We also implement a hybrid method that combines an active approach, thermal infrared imaging, with a passive approach, the Maximum Filter applied to the visible image. These approaches improve image pre-processing, feature extraction, and face detection, even when a person's face is captured in total darkness. In our experiments, the Extended Yale B database is tested with the Maximum Filter and compared against other state-of-the-art filters. We conducted several experiments on mid-wave and long-wave thermal infrared performance during pre-processing and found that it can improve recognition beyond what meets the eye. We also found that PCA eigenfaces cannot be produced under poor illumination. Mid-wave thermal imaging captures the body's heat signature, and the Maximum Filter preserves the fine edges that are easily used by classifiers such as SVM or kNN (as implemented in OpenCV) together with Euclidean distance to perform face recognition. These configurations were assembled into a portable, robust face recognition system, and the results showed that fusing these illumination-invariant processed images during pre-processing gives far better results than using the visible, thermal, or maximum-filtered image alone.
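The abstract does not give the authors' exact pipeline, but the combination it names — maximum-filter preprocessing followed by nearest-neighbour matching with Euclidean distance — can be sketched minimally in numpy. The filter size, edge padding, and 1-NN matching below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def max_filter(img, size=3):
    """Grey-level maximum filter: each pixel becomes the maximum of its
    size x size neighbourhood (edge-padded). Illustrative stand-in for
    the paper's Maximum Filter step."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + size, j:j + size].max()
    return out

def knn_identify(probe, gallery, labels):
    """1-NN face matching with Euclidean distance on flattened,
    maximum-filtered images."""
    p = max_filter(probe).ravel()
    dists = [np.linalg.norm(p - max_filter(g).ravel()) for g in gallery]
    return labels[int(np.argmin(dists))]
```

In practice the filtered images would be fed to a trained classifier (SVM or kNN with k > 1) rather than matched against raw gallery pixels as shown here.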

    Fast Landmark Localization with 3D Component Reconstruction and CNN for Cross-Pose Recognition

    Two approaches are proposed for cross-pose face recognition: one is based on the 3D reconstruction of facial components, and the other on a deep Convolutional Neural Network (CNN). Unlike most 3D approaches, which consider holistic faces, the proposed approach considers 3D facial components. It segments a 2D gallery face into components, reconstructs the 3D surface for each component, and recognizes a probe face by component features. The segmentation is based on landmarks located by a hierarchical algorithm that combines the Faster R-CNN for face detection and the Reduced Tree Structured Model for landmark localization. The core of the CNN-based approach is a revised VGG network. We study performance with different training-set settings, including synthesized data from 3D reconstruction, real-life data from an in-the-wild database, and both types of data combined. We also investigate the performance of the network when it is employed as a classifier or designed as a feature extractor. The two recognition approaches and the fast landmark localization are evaluated in extensive experiments and compared to state-of-the-art methods to demonstrate their efficacy.
    Comment: 14 pages, 12 figures, 4 tables

    PersonRank: Detecting Important People in Images

    In events such as a presentation, a basketball game, or a speech, some individuals in an image are more important or attractive than others. However, it is challenging to find the important people among all individuals in an image directly from their spatial or appearance information, owing to the diverse variations of pose, action, and appearance of persons and the variety of occasions. We overcome this difficulty by constructing a multiple Hyper-Interaction Graph that treats each individual in an image as a node and infers the most active node from interactions estimated by various types of cues. We model pairwise interactions between persons as edge messages communicated between nodes, resulting in a bidirectional pairwise-interaction graph. To enrich the person-person interaction estimation, we further introduce a unidirectional hyper-interaction graph that models the consensus of interaction between a focal person and any person in a surrounding local region. Finally, we modify the PageRank algorithm to infer the activeness of persons on the multiple Hybrid-Interaction Graph (HIG), the union of the pairwise-interaction and hyper-interaction graphs; we call our algorithm PersonRank. To provide public datasets for evaluation, we have contributed a new dataset called the Multi-scene Important People Image Dataset and gathered an NCAA Basketball Image Dataset from sports game sequences. We demonstrate that the proposed PersonRank clearly and substantially outperforms related methods.
    Comment: 8 pages, conference
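The abstract describes modifying PageRank to score node "activeness" from pairwise interaction messages. The authors' exact update rule is not given; a generic PageRank-style power iteration over an interaction-strength matrix, as a sketch of the idea, could look like this (the damping value and row-normalization are standard PageRank assumptions, not the paper's):

```python
import numpy as np

def person_rank(W, damping=0.85, iters=100):
    """PageRank-style activeness scores for n persons.

    W[i, j] = strength of the interaction message person i sends to
    person j (a stand-in for the paper's edge messages).
    Returns a length-n vector of scores summing to 1."""
    n = W.shape[0]
    out = W.sum(axis=1, keepdims=True)
    out[out == 0] = 1.0          # guard against persons with no out-edges
    P = W / out                  # row-stochastic transition matrix
    r = np.full(n, 1.0 / n)      # uniform initial scores
    for _ in range(iters):
        # Each person keeps a (1 - damping) base score and receives
        # damped rank along incoming interaction edges.
        r = (1.0 - damping) / n + damping * (P.T @ r)
    return r
```

A person who receives strong interaction messages from many others accumulates the highest score, matching the intuition that the most "interacted-with" individual is the important one.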

    Fast and Accurate Algorithm for Eye Localization for Gaze Tracking in Low Resolution Images

    Iris centre localization in low-resolution visible images is a challenging problem in the computer vision community owing to noise, shadows, occlusions, pose variations, eye blinks, etc. This paper proposes an efficient method for determining the iris centre in low-resolution images in the visible spectrum, so that even low-cost consumer-grade webcams can be used for gaze tracking without any additional hardware. A two-stage algorithm is proposed for iris centre localization that exploits the geometrical characteristics of the eye. In the first stage, a fast convolution-based approach obtains the coarse location of the iris centre (IC). The IC location is then refined in the second stage using boundary tracing and ellipse fitting. The algorithm has been evaluated on public databases such as BioID and Gi4E and is found to outperform state-of-the-art methods.
    Comment: 12 pages, 10 figures, IET Computer Vision, 201
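The first stage described above — a fast convolution-based coarse estimate of the iris centre — can be illustrated by sliding a dark-disk template over the grey image and picking the position with the lowest summed intensity (the iris being the darkest roughly circular region). The template shape, radius, and brute-force search below are assumptions for illustration; the paper's actual kernel and search strategy may differ:

```python
import numpy as np

def coarse_iris_centre(gray, radius=5):
    """Coarse iris-centre estimate: correlate the image with a binary
    disk template and return the (row, col) centre of the window whose
    disk-masked intensity sum is smallest (i.e. the darkest disk)."""
    size = 2 * radius + 1
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    disk = (xx**2 + yy**2 <= radius**2).astype(float)
    h, w = gray.shape
    best, centre = np.inf, (0, 0)
    for i in range(h - size + 1):
        for j in range(w - size + 1):
            # Sum of intensities under the disk: small means dark region.
            score = (gray[i:i + size, j:j + size] * disk).sum()
            if score < best:
                best, centre = score, (i + radius, j + radius)
    return centre
```

A real implementation would compute this with an FFT-based convolution for speed, and the second stage (boundary tracing and ellipse fitting) would then refine the estimate to sub-pixel accuracy.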