
    Race classification using gaussian-based weight K-nn algorithm for face recognition

    One of the greatest challenges in facial recognition systems is recognizing faces across different races and illumination conditions. Chromaticity, the intensity of colour in a pixel, is an essential factor in facial recognition and can vary greatly depending on the lighting conditions. The race classification scheme proposed in this paper, a Gaussian-based weighted K-Nearest Neighbor classifier, is highly sensitive to illumination intensity. The main idea is first to identify the minority-class instances in the training data and then generalize them with a Gaussian function as the concept for the minority class, combining the K-NN algorithm with a Gaussian weighting formula for race classification. In this paper, image processing is divided into two phases. The first is the preprocessing phase, which comprises three steps: auto contrast balance, noise reduction, and auto colour balancing. The second is the face processing phase, which contains six steps: face detection, illumination normalization, feature extraction, skin segmentation, race classification, and face recognition. Two datasets are used: the FERET dataset, whose images involve illumination variations, and the Caltech dataset, whose images contain noise.
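The core classifier described above, K-NN with Gaussian-weighted votes, can be sketched in a few lines. This is a minimal illustration of the general technique, not the paper's implementation; the distance metric, `k`, and the Gaussian bandwidth `sigma` are assumptions.

```python
import numpy as np

def gaussian_weighted_knn(train_X, train_y, query, k=5, sigma=1.0):
    """Classify `query` by its k nearest neighbours, weighting each
    neighbour's vote with a Gaussian of its distance, so closer
    neighbours (and sparse minority-class instances nearby) count more."""
    dists = np.linalg.norm(train_X - query, axis=1)
    nn = np.argsort(dists)[:k]
    weights = np.exp(-dists[nn] ** 2 / (2 * sigma ** 2))
    votes = {}
    for label, w in zip(train_y[nn], weights):
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)
```

With `k=3`, a distant third neighbour of the wrong class contributes an exponentially small weight, so it cannot outvote two nearby neighbours of the correct class.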

    SfSNet: Learning Shape, Reflectance and Illuminance of Faces in the Wild

    We present SfSNet, an end-to-end learning framework for producing an accurate decomposition of an unconstrained human face image into shape, reflectance and illuminance. SfSNet is designed to reflect a physical Lambertian rendering model. SfSNet learns from a mixture of labeled synthetic and unlabeled real-world images. This allows the network to capture low-frequency variations from synthetic images and high-frequency details from real images through the photometric reconstruction loss. SfSNet consists of a new decomposition architecture with residual blocks that learns a complete separation of albedo and normal. This is used along with the original image to predict lighting. SfSNet produces significantly better quantitative and qualitative results than state-of-the-art methods for inverse rendering and independent normal and illumination estimation. Comment: Accepted to CVPR 2018 (Spotlight).
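The Lambertian model the abstract refers to recomposes an image as per-pixel albedo times shading from the surface normal and the light. A minimal sketch of that forward model, assuming a single directional light rather than the spherical-harmonics lighting such networks typically predict:

```python
import numpy as np

def lambertian_render(albedo, normals, light_dir):
    """Recompose an image under the Lambertian model:
    I = albedo * max(0, n . l), per pixel and colour channel.

    albedo:  (H, W, 3) reflectance
    normals: (H, W, 3) unit surface normals
    light_dir: 3-vector direction towards the light
    """
    l = np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)
    shading = np.clip(normals @ l, 0.0, None)   # (H, W), clamp back-facing
    return albedo * shading[..., None]          # broadcast over channels
```

A photometric reconstruction loss then compares this re-rendered image against the input photograph.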

    Face Recognition in Color Using Complex and Hypercomplex Representation

    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-540-72847-4_29. Color carries plenty of discriminative information that can be used to improve the performance of face recognition algorithms, although it is difficult to exploit because of its high variability. In this paper we investigate the use of the quaternion representation of a color image for face recognition. We also propose a new representation for color images based on complex numbers. These two color representation methods are compared with the traditional grayscale and RGB representations using an eigenfaces-based algorithm for identity verification. The experimental results show that the proposed method gives a very significant improvement compared to using only the illuminance information. Work supported by the Spanish Project DPI2004-08279-C02-02 and the Generalitat Valenciana - Consellería d'Empresa, Universitat i Ciència under an FPI scholarship. Villegas, M.; Paredes Palacios, R. (2007). Face Recognition in Color Using Complex and Hypercomplex Representation. In Pattern Recognition and Image Analysis: Third Iberian Conference, IbPRIA 2007, Girona, Spain, June 6-8, 2007, Proceedings, Part I. Springer Verlag (Germany). 217-224. https://doi.org/10.1007/978-3-540-72847-4_29
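The quaternion representation mentioned above typically encodes each RGB pixel as a pure quaternion q = R·i + G·j + B·k, so a whole colour pixel is manipulated as one algebraic object. A minimal sketch of the encoding and the Hamilton product it relies on (the details of the paper's eigenfaces pipeline are not reproduced here):

```python
import numpy as np

def rgb_to_quaternion(img):
    """Encode an H x W x 3 RGB image as an H x W x 4 array of pure
    quaternions (w, x, y, z) = (0, R, G, B)."""
    h, w, _ = img.shape
    q = np.zeros((h, w, 4), dtype=float)
    q[..., 1:] = img
    return q

def quat_mul(p, q):
    """Hamilton product of two quaternion arrays of shape (..., 4)."""
    pw, px, py, pz = np.moveaxis(np.asarray(p, float), -1, 0)
    qw, qx, qy, qz = np.moveaxis(np.asarray(q, float), -1, 0)
    return np.stack([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ], axis=-1)
```

Because the Hamilton product is non-commutative (i·j = k but j·i = -k), quaternion linear algebra, e.g. quaternion PCA, treats the three colour channels jointly rather than as independent scalars.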

    Design of automatic vision-based inspection system for solder joint segmentation

    Purpose: Computer vision has been widely used in the inspection of electronic components. This paper proposes a computer vision system for the automatic detection, localisation, and segmentation of solder joints on Printed Circuit Boards (PCBs) under different illumination conditions. Design/methodology/approach: An illumination normalization approach is applied to an image, which can effectively and efficiently eliminate the effect of uneven illumination while keeping the properties of the processed image the same as in the corresponding image under normal lighting conditions. Consequently, special lighting and instrumentation setups can be reduced in order to detect solder joints. These normalised images are insensitive to illumination variations and are used for the subsequent solder joint detection stages. In the segmentation approach, the PCB image is transformed from the RGB color space to the YIQ color space for the effective detection of solder joints against the background. Findings: The segmentation results show that the proposed approach improves the performance significantly for images under varying illumination conditions. Research limitations/implications: This paper proposes a front-end system for the automatic detection, localisation, and segmentation of solder joint defects. Further research is required to complete the full system, including the classification of solder joint defects. Practical implications: The methodology presented in this paper can be an effective method to reduce cost and improve quality in the production of PCBs in the manufacturing industry. Originality/value: This research proposes the automatic location, identification and segmentation of solder joints under different illumination conditions.
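The RGB-to-YIQ transform used in the segmentation step is a fixed linear map (the standard NTSC matrix), which separates luminance (Y) from the two chrominance channels (I, Q) where solder reflections stand out. A minimal sketch:

```python
import numpy as np

# Standard NTSC RGB -> YIQ transform matrix.
RGB2YIQ = np.array([
    [0.299,  0.587,  0.114],   # Y: luminance
    [0.596, -0.274, -0.322],   # I: orange-blue chrominance
    [0.211, -0.523,  0.312],   # Q: purple-green chrominance
])

def rgb_to_yiq(img):
    """Convert an H x W x 3 RGB image (floats in [0, 1]) to YIQ."""
    return img @ RGB2YIQ.T
```

Achromatic pixels (R = G = B) map to I = Q = 0, which is why thresholding in the I/Q planes can separate coloured solder-joint regions from a grey board background.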

    Evaluating color texture descriptors under large variations of controlled lighting conditions

    The recognition of color texture under varying lighting conditions is still an open issue. Several features have been proposed for this purpose, ranging from traditional statistical descriptors to features extracted with neural networks. Still, it is not completely clear under what circumstances one feature performs better than the others. In this paper we report an extensive comparison of old and new texture features, with and without a color normalization step, with a particular focus on how they are affected by small and large variations in the lighting conditions. The evaluation is performed on a new texture database including 68 samples of raw food acquired under 46 conditions that present single and combined variations of light color, direction and intensity. The database makes it possible to systematically investigate the robustness of texture descriptors across a large range of variations in imaging conditions. Comment: Submitted to the Journal of the Optical Society of America.
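The abstract does not say which color normalization step is used; a common baseline for discounting the illuminant color before computing texture features is gray-world normalization, sketched here purely as an illustrative example:

```python
import numpy as np

def gray_world(img, eps=1e-8):
    """Gray-world colour normalisation: rescale each channel so its
    mean equals the global mean, assuming the average scene colour
    is grey and any channel imbalance comes from the illuminant."""
    means = img.reshape(-1, 3).mean(axis=0)    # per-channel means
    gain = means.mean() / (means + eps)        # per-channel gains
    return img * gain
```

After normalization all three channel means coincide, so a texture descriptor computed on the result is invariant to a per-channel (diagonal) change of light color.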

    Biometric Authentication System on Mobile Personal Devices

    We propose a secure, robust, and low-cost biometric authentication system on the mobile personal device for the personal network. The system consists of the following five key modules: 1) face detection; 2) face registration; 3) illumination normalization; 4) face verification; and 5) information fusion. For the complicated face authentication task on devices with limited resources, the emphasis is largely on the reliability and applicability of the system. Both theoretical and practical considerations are taken into account. The final system is able to achieve an equal error rate of 2% under challenging testing protocols. The low hardware and software cost makes the system well suited to a large range of security applications.
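The equal error rate (EER) quoted above is the operating point where the false accept rate (FAR) equals the false reject rate (FRR). A minimal sketch of how it is computed from genuine and impostor score sets, scanning observed scores as candidate thresholds:

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Return the error rate at the threshold where FAR and FRR are
    closest, averaging the two rates at that threshold."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best_gap, eer = np.inf, None
    for t in thresholds:
        far = np.mean(impostor >= t)   # impostors wrongly accepted
        frr = np.mean(genuine < t)     # genuine users wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer
```

A 2% EER therefore means that at the chosen threshold roughly 2% of impostor attempts are accepted and 2% of genuine attempts are rejected.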