606 research outputs found

    Binary Patterns Encoded Convolutional Neural Networks for Texture Recognition and Remote Sensing Scene Classification

    Designing powerful, discriminative texture features that are robust to realistic imaging conditions is a challenging computer vision problem with many applications, including material recognition and analysis of satellite or aerial imagery. In the past, most texture description approaches were based on dense orderless statistical distributions of local features. However, most recent approaches to texture recognition and remote sensing scene classification are based on Convolutional Neural Networks (CNNs). The de facto practice when learning these CNN models is to use RGB patches as input, with training performed on large amounts of labeled data (ImageNet). In this paper, we show that Binary Patterns encoded CNN models, codenamed TEX-Nets, trained using mapped coded images with explicit texture information, provide complementary information to the standard RGB deep models. Additionally, two deep architectures, namely early and late fusion, are investigated to combine the texture and color information. To the best of our knowledge, we are the first to investigate Binary Patterns encoded CNNs and different deep network fusion architectures for texture recognition and remote sensing scene classification. We perform comprehensive experiments on four texture recognition datasets and four remote sensing scene classification benchmarks: UC-Merced with 21 scene categories, WHU-RS19 with 19 scene classes, RSSCN7 with 7 categories, and the recently introduced large-scale aerial image dataset (AID) with 30 aerial scene types. We demonstrate that TEX-Nets provide complementary information to the standard RGB deep model of the same network architecture. Our late fusion TEX-Net architecture consistently improves the overall performance compared to the standard RGB network on both recognition problems. Our final combination outperforms the state of the art without employing fine-tuning or an ensemble of RGB network architectures. (Comment: To appear in ISPRS Journal of Photogrammetry and Remote Sensing)
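    A minimal PyTorch sketch of the late-fusion idea described in the abstract: one CNN branch consumes RGB patches, a second branch consumes texture-coded (e.g. LBP-mapped) images, and their features are concatenated before the classifier. Branch depths, channel counts, input size, and the 21-class setting (borrowed from UC-Merced) are illustrative assumptions, not the published TEX-Net configuration.

```python
# Illustrative late-fusion network (not the published TEX-Net):
# one branch for RGB patches, one for single-channel LBP-coded texture maps,
# features concatenated before classification.
import torch
import torch.nn as nn

def make_branch(in_channels):
    # Small convolutional feature extractor; depths/widths are assumptions.
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class LateFusionNet(nn.Module):
    def __init__(self, num_classes=21):  # e.g. UC-Merced has 21 categories
        super().__init__()
        self.rgb_branch = make_branch(3)   # RGB input
        self.tex_branch = make_branch(1)   # LBP-coded texture input
        self.classifier = nn.Linear(64 + 64, num_classes)

    def forward(self, rgb, tex):
        # Late fusion: concatenate branch features, then classify.
        f = torch.cat([self.rgb_branch(rgb), self.tex_branch(tex)], dim=1)
        return self.classifier(f)

# Usage with dummy tensors (batch of 4, 64x64 patches):
model = LateFusionNet()
logits = model(torch.randn(4, 3, 64, 64), torch.randn(4, 1, 64, 64))
print(logits.shape)  # torch.Size([4, 21])
```

    An early-fusion variant would instead stack the RGB and texture-coded channels into a single input tensor and use one branch; the sketch above follows the late-fusion layout the abstract reports as consistently stronger.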

    Cross-Spectral Full and Partial Face Recognition: Preprocessing, Feature Extraction and Matching

    Cross-spectral face recognition remains a challenge in the area of biometrics. The problem arises from real-world application scenarios such as surveillance at night time or in harsh environments, where traditional face recognition techniques are unsuitable or limited because they rely on imagery obtained in the visible light spectrum. This motivates the study conducted in the dissertation, which focuses on matching infrared facial images against visible light images. The study spans aspects of face recognition from preprocessing to feature extraction and matching.

    We address the problem of cross-spectral face recognition by proposing several new operators and algorithms based on advanced concepts such as composite operators, multi-level data fusion, image quality parity, and levels of measurement. Specifically, we experiment with and fuse several popular individual operators to construct a higher-performing compound operator named GWLH, which exhibits the complementary advantages of the individual operators involved. We also combine a Gaussian function with LBP, generalized LBP, WLD and/or HOG and modify them into multi-lobe operators with smoothed neighborhoods, yielding a new family of operators named Composite Multi-Lobe Descriptors. We further design a novel operator, termed Gabor Multi-Levels of Measurement, based on the theory of levels of measurement, which benefits from taking into account the complementary edge and feature information at different levels of measurement.

    The issue of image quality disparity is also studied in the dissertation due to its common occurrence in cross-spectral face recognition tasks. By bringing the quality of the heterogeneous imagery closer together, we achieve an improvement in recognition performance. We further study cross-spectral recognition using partial faces, since this is also a common problem in practical usage. We begin by matching heterogeneous periocular regions and generalize the topic by considering all three facial regions, defined both in a characteristic way and in a mixture way.

    In the experiments we employ datasets that include all the sub-bands within the infrared spectrum: near-infrared, short-wave infrared, mid-wave infrared, and long-wave infrared. Different standoff distances, varying from short to intermediate and long, are considered as well. Our methods are compared with other popular or state-of-the-art methods and are shown to be advantageous.
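    A brief sketch of the general idea behind compound descriptors such as GWLH: compute several complementary local operators on the same face image and fuse them into one feature vector. Here LBP and HOG histograms are simply concatenated for illustration; the dissertation's actual operators (including WLD and the Gaussian-smoothed multi-lobe variants), parameters, and fusion rule differ.

```python
# Illustrative fusion of complementary local descriptors (LBP + HOG) into a
# single feature vector; the dissertation's GWLH and Composite Multi-Lobe
# Descriptors use different operators and fusion rules.
import numpy as np
from skimage.feature import local_binary_pattern, hog

def fused_descriptor(gray_img, lbp_points=8, lbp_radius=1):
    # Uniform LBP codes mapped to a normalized histogram.
    lbp = local_binary_pattern(gray_img, lbp_points, lbp_radius, method="uniform")
    n_bins = lbp_points + 2  # number of uniform patterns for this (P, R)
    lbp_hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)

    # Gradient-orientation statistics from HOG (cell/block sizes are assumptions).
    hog_vec = hog(gray_img, orientations=9, pixels_per_cell=(16, 16),
                  cells_per_block=(2, 2), feature_vector=True)

    # Simple concatenation as the fusion step.
    return np.concatenate([lbp_hist, hog_vec])

# Usage on a synthetic 128x128 grayscale "face" image:
img = np.random.rand(128, 128)
print(fused_descriptor(img).shape)
```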

    Homogeneous and Heterogeneous Face Recognition: Enhancing, Encoding and Matching for Practical Applications

    Face recognition is the automatic processing of face images with the purpose of recognizing individuals. The recognition task becomes especially challenging in surveillance applications, where images are acquired from a long range in difficult environments. Short Wave Infrared (SWIR) is an emerging imaging modality that is able to produce clear long range images in difficult environments or during night time. Despite the benefits of SWIR technology, matching SWIR images against a gallery of visible images presents a challenge, since the photometric properties of the images in the two spectral bands are highly distinct.

    In this dissertation, we describe a cross-spectral matching method that encodes the magnitude and phase of multi-spectral face images filtered with a bank of Gabor filters. The magnitude of the filtered images is encoded with the Simplified Weber Local Descriptor (SWLD) and Local Binary Pattern (LBP) operators. The phase is encoded with the Generalized Local Binary Pattern (GLBP) operator. The encoded multi-spectral images are mapped into a histogram representation and cross-matched by applying the symmetric Kullback-Leibler distance. Performance of the developed algorithm is demonstrated on the TINDERS database, which contains long range SWIR and color images acquired at distances of 2, 50, and 106 meters.

    Apart from the long acquisition range, other variations and distortions such as pose variation, motion and out-of-focus blur, and uneven illumination may be observed in multi-spectral face images. The recognition performance of a face matcher can be greatly affected by these distortions. It is therefore important to ensure that matching is performed on high quality images: poor quality images have to be either enhanced or discarded. This dissertation addresses the problem of selecting good quality samples.

    The last chapters of the dissertation suggest a number of modifications applied to the cross-spectral matching algorithm for matching low resolution color images in near-real time. We show that the method that encodes the magnitude of the Gabor-filtered images with the SWLD operator guarantees high recognition rates. The modified method (Gabor-SWLD) is adopted in a camera network set-up where cameras acquire several views of the same individual. The designed algorithm and software are fully automated and optimized to perform recognition in near-real time. We evaluate the recognition performance and the processing time of the method on a small dataset collected at WVU.
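    A condensed sketch of the matching pipeline described above: filter an image with a Gabor bank, encode the filter-response magnitude with a local-pattern operator, build a histogram representation, and compare two representations with a symmetric Kullback-Leibler distance. The filter-bank parameters are assumptions, and plain uniform LBP stands in for the dissertation's SWLD/GLBP encoding; only the overall structure follows the abstract.

```python
# Sketch of cross-spectral matching: Gabor filtering, LBP encoding of the
# filter-response magnitude (LBP stands in for the SWLD/GLBP operators),
# histogram representation, and symmetric Kullback-Leibler comparison.
import numpy as np
from scipy.ndimage import convolve
from skimage.feature import local_binary_pattern
from skimage.filters import gabor_kernel

def gabor_lbp_histogram(gray_img, frequencies=(0.1, 0.2), n_thetas=4):
    hists = []
    for f in frequencies:
        for t in range(n_thetas):
            kernel = gabor_kernel(f, theta=t * np.pi / n_thetas)
            # Magnitude of the complex Gabor response.
            real = convolve(gray_img, np.real(kernel), mode="wrap")
            imag = convolve(gray_img, np.imag(kernel), mode="wrap")
            mag = np.sqrt(real ** 2 + imag ** 2)
            # Encode the magnitude with uniform LBP and histogram it.
            lbp = local_binary_pattern(mag, 8, 1, method="uniform")
            h, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
            hists.append(h)
    return np.concatenate(hists)

def symmetric_kl(p, q, eps=1e-10):
    # 0.5 * (KL(p||q) + KL(q||p)) on smoothed, renormalized histograms.
    p = (p + eps) / np.sum(p + eps)
    q = (q + eps) / np.sum(q + eps)
    return 0.5 * (np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

# Usage: score a probe against a gallery image (synthetic data here); a lower
# symmetric KL value indicates more similar texture statistics.
probe, gallery = np.random.rand(96, 96), np.random.rand(96, 96)
print(symmetric_kl(gabor_lbp_histogram(probe), gabor_lbp_histogram(gallery)))
```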