4 research outputs found

    Integration of BLCM and FLBP in low-resolution face recognition

    Face recognition from face images has been a fast-growing topic in the biometrics research community, and a sizeable number of face recognition techniques based on texture analysis have been developed in the past few years. These techniques work well on grayscale and colour images, but very few deal with binary and low-resolution images. With the binary image becoming the preferred format for low-resolution face analysis, further studies are needed to provide a complete image-based face recognition system with higher accuracy. To overcome the limitation of existing techniques in extracting distinctive features from low-resolution images, caused by the contrast between face and background, we propose a statistical feature analysis technique to fill this gap. The proposed technique integrates the Binary Level Occurrence Matrix (BLCM) and the Fuzzy Local Binary Pattern (FLBP), named BLCM-FLBP, to extract global and local face features from low-resolution face images. The purpose of BLCM-FLBP is to improve edge sharpness between black and white pixels in the binary image and to extract significant data relating to the features of the face pattern. Experimental results on the Yale and FEI datasets validate the superiority of the proposed technique over other top-performing feature analysis techniques using two classifiers, a Neural Network (NN) and a Random Forest (RF). The proposed technique achieved accuracies of 93.16% (RF) and 95.27% (NN) on the FEI dataset, and 94.54% (RF) and 93.61% (NN) on Yale.B. Hence, the proposed technique outperforms techniques such as the Gray Level Co-Occurrence Matrix (GLCM), Bag of Words (BOW), Fuzzy Local Binary Pattern (FLBP), and Binary Level Occurrence Matrix (BLCM).
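The abstract combines a co-occurrence-matrix descriptor on binary pixels (global) with a local-binary-pattern descriptor (local). The sketch below illustrates that general idea only: the paper's exact BLCM definition is not given, so a simple 2x2 co-occurrence matrix of binary pixel pairs stands in for it, and a plain (non-fuzzy) LBP histogram stands in for FLBP. All function names and parameters here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def binary_cooccurrence(img, dx=1, dy=0):
    """Normalised 2x2 co-occurrence matrix of pixel pairs in a binary image.
    A stand-in for the paper's BLCM; the authors' definition may differ."""
    h, w = img.shape
    m = np.zeros((2, 2), dtype=np.int64)
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()  # joint probabilities of (pixel, neighbour) values

def lbp_histogram(img):
    """Plain 8-neighbour LBP histogram. The fuzzy variant (FLBP) softens the
    hard >= threshold with membership weights; omitted here for brevity."""
    h, w = img.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = img[1:h - 1, 1:w - 1]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= centre).astype(np.uint8) << bit
    hist = np.bincount(codes.ravel(), minlength=256)
    return hist / hist.sum()

# Global (co-occurrence) + local (LBP) features concatenated,
# mirroring the global/local split described for BLCM-FLBP.
img = (np.random.default_rng(0).random((32, 32)) > 0.5).astype(np.uint8)
features = np.concatenate([binary_cooccurrence(img).ravel(), lbp_histogram(img)])
```

The concatenated vector (4 co-occurrence entries + 256 LBP bins) would then be fed to a classifier such as the NN or RF mentioned in the abstract.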

    Character Prototyping in Document Images Using Gabor Filters

    No full text
    In this article we present a particular application of Gabor filtering to machine-printed document image understanding. We assume that text can be seen as texture, with characters being the smallest texture elements, and we verify this hypothesis through a series of experiments over different sets of character images. We first apply a bank of 24 Gabor filters (4 frequencies and 6 orientations) to each set, then extract texture features that are used to classify character images without a priori knowledge using a Bayesian classifier. Results are shown for different characters written in the same font, and for different font types given a character.
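A filter bank of the size described (4 frequencies x 6 orientations = 24 filters) can be sketched as follows. Frequencies, kernel size, and sigma are illustrative assumptions; the article does not specify them. Each filter's response magnitude is summarised by its mean and standard deviation, a common choice for Gabor texture features.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(freq, theta, size=15, sigma=3.0):
    """Complex Gabor kernel: a Gaussian envelope times a complex sinusoid
    at spatial frequency `freq` (cycles/pixel) and orientation `theta`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    return envelope * np.exp(2j * np.pi * freq * xr)

def gabor_features(img, freqs=(0.05, 0.1, 0.2, 0.4), n_orient=6):
    """Mean and std of each filter's magnitude response:
    4 frequencies x 6 orientations -> 24 filters -> 48 features."""
    feats = []
    img = img.astype(float)
    for f in freqs:
        for k in range(n_orient):
            kern = gabor_kernel(f, k * np.pi / n_orient)
            re = convolve(img, kern.real)   # real and imaginary responses
            im = convolve(img, kern.imag)
            mag = np.hypot(re, im)          # magnitude of complex response
            feats += [mag.mean(), mag.std()]
    return np.array(feats)

img = np.random.default_rng(1).random((64, 64))  # placeholder character image
f = gabor_features(img)
```

The resulting 48-dimensional vector per character image could then be classified with a Bayesian classifier as the article describes.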
