
    Scale Selective Extended Local Binary Pattern for Texture Classification

    In this paper, we propose a new texture descriptor, scale selective extended local binary pattern (SSELBP), to characterize texture images with scale variations. We first utilize multi-scale extended local binary patterns (ELBP) with rotation-invariant and uniform mappings to capture robust local micro- and macro-features. Then, we build a scale space using Gaussian filters and calculate the histogram of multi-scale ELBPs for the image at each scale. Finally, we select the maximum values from the corresponding bins of the multi-scale ELBP histograms at different scales as scale-invariant features. A comprehensive evaluation on public texture databases (KTH-TIPS and UMD) shows that the proposed SSELBP achieves accuracy comparable to state-of-the-art texture descriptors on gray-scale-, rotation-, and scale-invariant texture classification while using only one-third of the feature dimensions.
    Comment: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 201
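    The scale-selection step described above can be sketched as follows. This is a minimal illustration under our own assumptions, not the authors' code: a plain 8-neighbour LBP with 256 bins stands in for the rotation-invariant uniform ELBP, and the Gaussian scale set is arbitrary.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur with numpy only (assumption: reflect padding)."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    pad = np.pad(img, radius, mode="reflect")
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, tmp)

def lbp_histogram(img):
    """Normalized histogram of basic 8-neighbour LBP codes (256 bins)."""
    c = img[1:-1, 1:-1]
    codes = np.zeros(c.shape, dtype=np.int32)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= ((n >= c).astype(np.int32) << bit)
    h = np.bincount(codes.ravel(), minlength=256).astype(float)
    return h / h.sum()

def scale_selective_histogram(img, sigmas=(0.5, 1.0, 2.0)):
    """Per-bin maximum over histograms computed at each Gaussian scale."""
    hists = [lbp_histogram(gaussian_blur(img, s)) for s in sigmas]
    return np.maximum.reduce(hists)
```

    The per-bin maximum is what makes the final feature insensitive to which scale a texture element appears at: the dominant response survives regardless of the scale index it came from.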

    Texture classification using invariant ranklet features

    A novel invariant texture classification method is proposed. Invariance to linear/non-linear monotonic gray-scale transformations is achieved by submitting the image under study to the ranklet transform, an image processing technique relying on the analysis of the relative rank of pixels rather than on their gray-scale values. Texture features are then extracted from the ranklet images resulting from applying the ranklet transform to the image at different resolutions and orientations. Invariance to 90°-rotations is achieved by averaging, for each resolution, corresponding vertical, horizontal, and diagonal texture features. Finally, a texture class membership is assigned to the texture feature vector by a support vector machine (SVM) classifier. The proposed method outperforms three recent methods from the literature evaluated on the same Brodatz and Vistex datasets. Invariance to linear/non-linear monotonic gray-scale transformations and 90°-rotations is demonstrated by training the SVM classifier on texture feature vectors formed from the original images, then testing it on texture feature vectors formed from contrast-enhanced, gamma-corrected, histogram-equalized, and 90°-rotated images.
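    The invariance argument can be seen in a much simpler rank transform than the ranklet transform itself (which uses Wilcoxon rank statistics over Haar-like half-windows): ranks are unchanged by any monotonic remapping of gray levels. The sketch below is our simplification, not the paper's transform.

```python
import numpy as np

def rank_transform(img, radius=1):
    """Replace each pixel by the number of window neighbours it strictly
    exceeds (assumption: a 3x3 window when radius=1, borders cropped).
    Because only relative order matters, the output is invariant to any
    monotonic gray-scale transformation of the input."""
    H, W = img.shape
    c = img[radius:H - radius, radius:W - radius]
    out = np.zeros(c.shape, dtype=np.int32)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            n = img[radius + dy:H - radius + dy, radius + dx:W - radius + dx]
            out += (c > n).astype(np.int32)
    return out
```

    Applying a gamma correction or histogram equalization before this transform leaves the output unchanged, which is exactly the property the paper's gray-scale-transformation experiments verify.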

    Image databases, scale and fractal transforms

    In contemporary image databases one finds many images with the same content but perturbed by zooming, scaling, rotation, etc. For the purpose of image recognition in such databases we employ features based on statistics stemming from fractal transforms of gray-scale images. We show how the features derived from these statistics can be made invariant to zooming or rescaling. A feature invariance measure is defined and described. The method is especially suitable for images of textures. We present numerical results that validate the approach.
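    One familiar statistic of the fractal kind is the box-counting dimension, which is (up to discretization effects) stable under rescaling. The sketch below is our illustration of that idea, not the paper's fractal-transform features.

```python
import numpy as np

def box_count_dimension(mask, sizes=(1, 2, 4, 8)):
    """Box-counting dimension estimate: negated slope of log(box count)
    versus log(box size). Assumes a binary foreground mask; rows/columns
    not divisible by a box size are cropped."""
    counts = []
    for s in sizes:
        H, W = mask.shape
        sub = mask[:H - H % s, :W - W % s]
        blocks = sub.reshape(sub.shape[0] // s, s, sub.shape[1] // s, s)
        counts.append(int(blocks.any(axis=(1, 3)).sum()))
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope
```

    A filled region yields a dimension near 2 and a single line near 1, regardless of how large the image is, which is the kind of scale-robust statistic the abstract alludes to.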

    Multiple Kernel-Based Multimedia Fusion for Automated Event Detection from Tweets

    A method for detecting hot events such as wildfires is proposed. It uses both visual and textual information to improve detection. Starting from tweets that contain both text and images, it preprocesses the data to eliminate unwanted items, transforms unstructured data into structured form, and then extracts features. Text features include term frequency-inverse document frequency (TF-IDF). Image features include histogram of oriented gradients, gray-level co-occurrence matrix, color histogram, and scale-invariant feature transform. Next, the features are input to multiple kernel learning (MKL), which automatically fuses both feature types to achieve the best performance. Finally, event detection is performed. The method was tested on the 2014 Brisbane hailstorm and the 2017 California wildfires, and compared with methods that used text only or images only. On the Brisbane hailstorm data, the proposed method achieved the best performance, with a fusion accuracy of 0.93, compared with 0.89 for text only and 0.85 for images only. On the California wildfires data, a similar performance was recorded. These results demonstrate that event detection on Twitter is improved by combining multiple feature types, yielding an accurate and effective event detection method for spreading awareness and organizing responses, and hence better disaster management.
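    The text-feature and fusion steps can be sketched minimally as follows. This is our simplification: a whitespace-token TF-IDF with raw counts and log(N/df), and a fixed convex combination of linear kernels standing in for MKL, where the kernel weights would actually be learned jointly with the classifier.

```python
import numpy as np
from collections import Counter

def tfidf(docs):
    """Simplified TF-IDF matrix over whitespace tokens (raw counts, log N/df)."""
    vocab = sorted({w for d in docs for w in d.split()})
    idx = {w: i for i, w in enumerate(vocab)}
    tf = np.zeros((len(docs), len(vocab)))
    for r, d in enumerate(docs):
        for w, c in Counter(d.split()).items():
            tf[r, idx[w]] = c
    df = (tf > 0).sum(axis=0)
    idf = np.log(len(docs) / df)
    return tf * idf, vocab

def combined_kernel(text_feats, image_feats, betas=(0.5, 0.5)):
    """Convex combination of two linear base kernels. In MKL the betas are
    learned; fixed weights stand in for that here."""
    return betas[0] * (text_feats @ text_feats.T) + betas[1] * (image_feats @ image_feats.T)
```

    The combined Gram matrix would then be handed to a kernel classifier (e.g. an SVM) for the final detection step.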

    Improvement Of Face Recognition Using Principal Component Analysis And Moment Invariant

    Face recognition attracts many researchers and has made significant progress in recent years. Face recognition is a type of biometric, just like fingerprint and iris scans. The technology plays an important role in real-world applications, such as commercial and law-enforcement applications, hence the importance of this kind of research. In this research, we propose a method that integrates Principal Component Analysis (PCA) and moment invariants with face colour in gray scale to recognize face images of various poses. The PCA method is used to analyze the face image because it is well suited to similar-face image analysis, and it is employed to extract global information. A face image is recognized, and its owner identified, when its feature vector matches one of the vectors stored in the database. If the vector is not matched, the original face image is reconsidered with moment-invariant and gray-scale face-colour extraction, and the face is then rematched. In this way, unrecognized faces are reconsidered, and some are recognized accurately, increasing the number of recognized faces and improving the recognition accuracy. We applied our method to the Olivetti Research Laboratory (ORL) database issued by AT&T, which contains images of 40 different subjects with 10 images per subject. Our experiment uses the holdout method to measure recognition accuracy: about 2/3 of the data (280 faces) for training and about 1/3 (120 faces) for testing. The results showed a recognition accuracy of 94% when applying PCA alone, and 96% after reconsidering the unrecognized patterns by handling pose-varied faces and extracting face colour. Our proposed method improves the recognition accuracy with the additional extracted features (PCA + face colour in gray scale), with the total processing time taken into consideration.
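    The PCA stage can be sketched as classic eigenface-style projection plus nearest-neighbour matching. This is a hedged sketch under our own assumptions (row-vector images, SVD-based PCA, Euclidean matching); the paper's fallback to moment invariants and gray-scale face colour is only noted in a comment.

```python
import numpy as np

def pca_fit(X, k):
    """Fit PCA on row-vector images: return the mean and the top-k axes."""
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def pca_project(X, mean, axes):
    """Project (centered) images onto the principal axes."""
    return (X - mean) @ axes.T

def nearest_match(probe, gallery, labels):
    """Nearest-neighbour match in PCA space; returns (label, distance).
    In the paper's pipeline, a match above some rejection threshold would
    trigger re-extraction with moment invariants and face colour, then a
    rematch; that fallback is not implemented here."""
    d = np.linalg.norm(gallery - probe, axis=1)
    i = int(np.argmin(d))
    return labels[i], float(d[i])
```

    A rejection threshold on the returned distance is what would decide which faces get passed to the second, moment-invariant stage.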

    Evaluating color texture descriptors under large variations of controlled lighting conditions

    The recognition of color texture under varying lighting conditions is still an open issue. Several features have been proposed for this purpose, ranging from traditional statistical descriptors to features extracted with neural networks. Still, it is not completely clear under what circumstances one feature performs better than the others. In this paper we report an extensive comparison of old and new texture features, with and without a color normalization step, with a particular focus on how they are affected by small and large variations in the lighting conditions. The evaluation is performed on a new texture database including 68 samples of raw food acquired under 46 conditions that present single and combined variations of light color, direction, and intensity. The database allows systematic investigation of the robustness of texture descriptors across a large range of imaging conditions.
    Comment: Submitted to the Journal of the Optical Society of America
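    One standard choice for the color normalization step compared above is gray-world normalization (an assumption here; the abstract does not name the specific normalization used). It rescales each channel so that all channel means coincide, which discounts a global shift in light color.

```python
import numpy as np

def gray_world(img):
    """Gray-world color normalization (one common choice, assumed here):
    rescale each channel so all channel means become equal.
    Assumes a float H x W x 3 RGB image."""
    means = img.reshape(-1, 3).mean(axis=0)
    return img * (means.mean() / means)
```

    Texture features computed after such a step are compared in the paper against features computed on the raw images, isolating how much of a descriptor's robustness comes from the normalization rather than the descriptor itself.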