578 research outputs found

    Color and Texture Feature Extraction Using Gabor Filter - Local Binary Patterns for Image Segmentation with Fuzzy C-Means

    Image segmentation is fundamental to image analysis and recognition. Segmentation divides an image into several regions of homogeneous pixels, classifying those pixels according to features such as color and texture. Color carries far more information than grayscale or binary (black-and-white) images, since human vision can distinguish thousands of color combinations and intensities. An easy-to-implement segmentation approach is a clustering method such as the Fuzzy C-Means (FCM) algorithm. The features extracted here are color and texture: color is represented in the L*a*b* color space, and texture is extracted with Gabor filters. However, Gabor filters perform poorly when the image contains many micro-textures, which reduces segmentation accuracy. To improve the extraction of micro-texture, Local Binary Patterns (LBP) are used as a supporting method. In experiments, using color features instead of grayscale increased the accuracy rate by 16.54% for the Gabor texture filters and 14.57% for LBP. The LBP texture features also help improve segmentation accuracy, though only slightly: by 2% on grayscale and 0.05% in the L*a*b* color space.
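
    As a rough sketch of this kind of pipeline (not the authors' exact implementation), the snippet below extracts per-pixel L*a*b* colour, a single Gabor response, and an LBP code, then clusters the feature vectors with a plain NumPy Fuzzy C-Means. The demo image, the Gabor frequency, the LBP parameters, and the number of clusters are illustrative assumptions.

```python
# Sketch: colour + texture features per pixel, clustered with Fuzzy C-Means.
import numpy as np
from skimage import color, data
from skimage.filters import gabor
from skimage.feature import local_binary_pattern


def fuzzy_c_means(X, n_clusters, m=2.0, max_iter=100, tol=1e-4, seed=0):
    """Plain NumPy Fuzzy C-Means; X has shape (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    u = rng.random((X.shape[0], n_clusters))
    u /= u.sum(axis=1, keepdims=True)                 # memberships sum to 1 per pixel
    for _ in range(max_iter):
        um = u ** m
        centers = um.T @ X / um.sum(axis=0)[:, None]  # fuzzily weighted cluster centres
        dist = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-10
        new_u = 1.0 / ((dist[:, :, None] / dist[:, None, :]) ** (2.0 / (m - 1))).sum(axis=2)
        if np.linalg.norm(new_u - u) < tol:
            u = new_u
            break
        u = new_u
    return u, centers


# Per-pixel features: L*a*b* colour, one Gabor response, and uniform LBP codes.
rgb = data.astronaut()[::4, ::4]                      # small demo image (assumption)
lab = color.rgb2lab(rgb)                              # L*, a*, b* colour features
gray = color.rgb2gray(rgb)
gabor_real, _ = gabor(gray, frequency=0.3)            # Gabor texture response
lbp = local_binary_pattern(gray, 8, 1, method="uniform")  # LBP sharpens micro-texture

features = np.dstack([lab, gabor_real[..., None], lbp[..., None]])
X = features.reshape(-1, features.shape[-1])
X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)     # normalise each feature

u, _ = fuzzy_c_means(X, n_clusters=3)
segmentation = u.argmax(axis=1).reshape(gray.shape)   # hard labels from fuzzy memberships
```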

    Multi-resolution texture classification based on local image orientation

    The aim of this paper is to quantitatively evaluate the discriminative power of image orientation in the texture classification process. To this end, we evaluate the performance of two texture classification schemes in which the image orientation is extracted using the partial derivatives of the Gaussian function. Since texture descriptors depend on the observation scale, the main emphasis of this study is placed on the implementation of multi-resolution texture analysis schemes. The experimental results were obtained by applying the analysed texture descriptors to standard texture databases.
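
    A hedged sketch of such a scheme is shown below: the orientation field is estimated from Gaussian partial derivatives (scipy.ndimage.gaussian_filter with per-axis derivative orders) at several scales, and each scale contributes a magnitude-weighted orientation histogram to the descriptor. The scale set, the bin count, and the nearest-neighbour classification hint are assumptions, not the paper's settings.

```python
# Sketch: multi-resolution orientation histograms from Gaussian derivatives.
import numpy as np
from scipy.ndimage import gaussian_filter


def orientation_descriptor(image, sigmas=(1.0, 2.0, 4.0), n_bins=16):
    """Concatenate one magnitude-weighted orientation histogram per scale."""
    image = np.asarray(image, dtype=float)
    descriptor = []
    for sigma in sigmas:
        gx = gaussian_filter(image, sigma, order=(0, 1))  # d/dx of smoothed image
        gy = gaussian_filter(image, sigma, order=(1, 0))  # d/dy of smoothed image
        magnitude = np.hypot(gx, gy)
        orientation = np.mod(np.arctan2(gy, gx), np.pi)   # orientation in [0, pi)
        hist, _ = np.histogram(orientation, bins=n_bins,
                               range=(0, np.pi), weights=magnitude)
        descriptor.append(hist / (hist.sum() + 1e-8))      # normalise per scale
    return np.concatenate(descriptor)


# Classification can then be as simple as nearest neighbour on these descriptors:
# label = train_labels[np.argmin([np.linalg.norm(orientation_descriptor(img) - d)
#                                 for d in train_descriptors])]
```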

    Improving face gender classification by adding deliberately misaligned faces to the training data

    A novel method of face gender classifier construction is proposed and evaluated. Previously, researchers have assumed that a computationally expensive face alignment step (in which the face image is transformed so that facial landmarks such as the eyes, nose, and chin are in uniform locations in the image) is required to maximize the accuracy of predictions on new face images. We argue, however, that this step is not necessary, and that machine learning classifiers can be made robust to face misalignments by automatically expanding the training data with examples of faces that have been deliberately misaligned (for example, translated or rotated). To test our hypothesis, we evaluate this automatic training dataset expansion method with two types of image classifier: the first based on weak features such as Local Binary Pattern histograms, and the second based on SIFT keypoints. Using a benchmark face gender classification dataset recently proposed in the literature, we obtain a state-of-the-art accuracy of 92.5%, thus validating our approach.
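
    The sketch below illustrates the training-set expansion idea under stated assumptions: each training face is duplicated with small random rotations and translations, features are uniform LBP histograms, and an SVM stands in for the classifier. The augmentation ranges, feature parameters, and classifier choice are illustrative, not the authors' exact configuration.

```python
# Sketch: expand the training set with deliberately misaligned faces,
# then classify LBP-histogram features with an SVM (illustrative choices).
import numpy as np
from scipy.ndimage import rotate, shift
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC


def lbp_histogram(gray, P=8, R=1):
    """Uniform LBP histogram as a simple 'weak feature' face descriptor."""
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist


def expand_with_misalignments(images, labels, n_copies=4, seed=0):
    """Return the original faces plus deliberately misaligned copies."""
    rng = np.random.default_rng(seed)
    out_imgs, out_labels = list(images), list(labels)
    for img, lab in zip(images, labels):
        for _ in range(n_copies):
            angle = rng.uniform(-10, 10)           # small rotation in degrees (assumption)
            dy, dx = rng.uniform(-4, 4, size=2)    # small translation in pixels (assumption)
            jittered = shift(rotate(img, angle, reshape=False), (dy, dx))
            out_imgs.append(jittered)
            out_labels.append(lab)
    return out_imgs, out_labels


# Usage: train on the expanded set (faces -> LBP histograms -> SVM gender classifier).
# train_imgs, train_labels = expand_with_misalignments(train_imgs, train_labels)
# clf = SVC(kernel="rbf").fit([lbp_histogram(im) for im in train_imgs], train_labels)
```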